Feb 8 23:28:41.934570 kernel: Linux version 5.15.148-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Thu Feb 8 21:14:17 -00 2024
Feb 8 23:28:41.934617 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=ae7db544026ede4699ee2036449b75950d3fb7929b25a6731d0ad396f1aa37c9
Feb 8 23:28:41.934644 kernel: BIOS-provided physical RAM map:
Feb 8 23:28:41.934662 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Feb 8 23:28:41.934679 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Feb 8 23:28:41.934695 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Feb 8 23:28:41.934715 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdcfff] usable
Feb 8 23:28:41.934732 kernel: BIOS-e820: [mem 0x000000007ffdd000-0x000000007fffffff] reserved
Feb 8 23:28:41.934752 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Feb 8 23:28:41.934768 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Feb 8 23:28:41.934784 kernel: NX (Execute Disable) protection: active
Feb 8 23:28:41.934800 kernel: SMBIOS 2.8 present.
Feb 8 23:28:41.934816 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Feb 8 23:28:41.934833 kernel: Hypervisor detected: KVM
Feb 8 23:28:41.934853 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Feb 8 23:28:41.934875 kernel: kvm-clock: cpu 0, msr 5efaa001, primary cpu clock
Feb 8 23:28:41.934892 kernel: kvm-clock: using sched offset of 5045737317 cycles
Feb 8 23:28:41.934911 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Feb 8 23:28:41.934930 kernel: tsc: Detected 1996.249 MHz processor
Feb 8 23:28:41.934949 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Feb 8 23:28:41.934968 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Feb 8 23:28:41.934986 kernel: last_pfn = 0x7ffdd max_arch_pfn = 0x400000000
Feb 8 23:28:41.935004 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Feb 8 23:28:41.935026 kernel: ACPI: Early table checksum verification disabled
Feb 8 23:28:41.935044 kernel: ACPI: RSDP 0x00000000000F5930 000014 (v00 BOCHS )
Feb 8 23:28:41.935062 kernel: ACPI: RSDT 0x000000007FFE1848 000030 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 8 23:28:41.935081 kernel: ACPI: FACP 0x000000007FFE172C 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 8 23:28:41.935099 kernel: ACPI: DSDT 0x000000007FFE0040 0016EC (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 8 23:28:41.935117 kernel: ACPI: FACS 0x000000007FFE0000 000040
Feb 8 23:28:41.935135 kernel: ACPI: APIC 0x000000007FFE17A0 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 8 23:28:41.935153 kernel: ACPI: WAET 0x000000007FFE1820 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 8 23:28:41.935171 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe172c-0x7ffe179f]
Feb 8 23:28:41.935192 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe172b]
Feb 8 23:28:41.935210 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Feb 8 23:28:41.941304 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17a0-0x7ffe181f]
Feb 8 23:28:41.941327 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe1820-0x7ffe1847]
Feb 8 23:28:41.941346 kernel: No NUMA configuration found
Feb 8 23:28:41.941365 kernel: Faking a node at [mem 0x0000000000000000-0x000000007ffdcfff]
Feb 8 23:28:41.941383 kernel: NODE_DATA(0) allocated [mem 0x7ffd7000-0x7ffdcfff]
Feb 8 23:28:41.941402 kernel: Zone ranges:
Feb 8 23:28:41.941437 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Feb 8 23:28:41.941456 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdcfff]
Feb 8 23:28:41.941475 kernel: Normal empty
Feb 8 23:28:41.941493 kernel: Movable zone start for each node
Feb 8 23:28:41.941512 kernel: Early memory node ranges
Feb 8 23:28:41.941531 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Feb 8 23:28:41.941552 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdcfff]
Feb 8 23:28:41.941571 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdcfff]
Feb 8 23:28:41.941590 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Feb 8 23:28:41.941609 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Feb 8 23:28:41.941627 kernel: On node 0, zone DMA32: 35 pages in unavailable ranges
Feb 8 23:28:41.941646 kernel: ACPI: PM-Timer IO Port: 0x608
Feb 8 23:28:41.941664 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Feb 8 23:28:41.941683 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Feb 8 23:28:41.941702 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Feb 8 23:28:41.941724 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Feb 8 23:28:41.941743 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Feb 8 23:28:41.941762 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Feb 8 23:28:41.941780 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Feb 8 23:28:41.941799 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Feb 8 23:28:41.941818 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Feb 8 23:28:41.941837 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices
Feb 8 23:28:41.941855 kernel: Booting paravirtualized kernel on KVM
Feb 8 23:28:41.941874 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Feb 8 23:28:41.941894 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:2 nr_node_ids:1
Feb 8 23:28:41.941916 kernel: percpu: Embedded 55 pages/cpu s185624 r8192 d31464 u1048576
Feb 8 23:28:41.941935 kernel: pcpu-alloc: s185624 r8192 d31464 u1048576 alloc=1*2097152
Feb 8 23:28:41.941954 kernel: pcpu-alloc: [0] 0 1
Feb 8 23:28:41.941972 kernel: kvm-guest: stealtime: cpu 0, msr 7dc1c0c0
Feb 8 23:28:41.941991 kernel: kvm-guest: PV spinlocks disabled, no host support
Feb 8 23:28:41.942010 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515805
Feb 8 23:28:41.942029 kernel: Policy zone: DMA32
Feb 8 23:28:41.942051 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=ae7db544026ede4699ee2036449b75950d3fb7929b25a6731d0ad396f1aa37c9
Feb 8 23:28:41.942075 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 8 23:28:41.942094 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 8 23:28:41.942114 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Feb 8 23:28:41.942133 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 8 23:28:41.942153 kernel: Memory: 1975340K/2096620K available (12294K kernel code, 2275K rwdata, 13700K rodata, 45496K init, 4048K bss, 121020K reserved, 0K cma-reserved)
Feb 8 23:28:41.942171 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Feb 8 23:28:41.942191 kernel: ftrace: allocating 34475 entries in 135 pages
Feb 8 23:28:41.942209 kernel: ftrace: allocated 135 pages with 4 groups
Feb 8 23:28:41.942318 kernel: rcu: Hierarchical RCU implementation.
Feb 8 23:28:41.942340 kernel: rcu: RCU event tracing is enabled.
Feb 8 23:28:41.942360 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Feb 8 23:28:41.942379 kernel: Rude variant of Tasks RCU enabled.
Feb 8 23:28:41.942398 kernel: Tracing variant of Tasks RCU enabled.
Feb 8 23:28:41.942417 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 8 23:28:41.942436 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Feb 8 23:28:41.942455 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Feb 8 23:28:41.942474 kernel: Console: colour VGA+ 80x25
Feb 8 23:28:41.942497 kernel: printk: console [tty0] enabled
Feb 8 23:28:41.942516 kernel: printk: console [ttyS0] enabled
Feb 8 23:28:41.942534 kernel: ACPI: Core revision 20210730
Feb 8 23:28:41.942553 kernel: APIC: Switch to symmetric I/O mode setup
Feb 8 23:28:41.942572 kernel: x2apic enabled
Feb 8 23:28:41.942591 kernel: Switched APIC routing to physical x2apic.
Feb 8 23:28:41.942610 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Feb 8 23:28:41.942629 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Feb 8 23:28:41.942648 kernel: Calibrating delay loop (skipped) preset value.. 3992.49 BogoMIPS (lpj=1996249)
Feb 8 23:28:41.942667 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Feb 8 23:28:41.942689 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Feb 8 23:28:41.942709 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Feb 8 23:28:41.942728 kernel: Spectre V2 : Mitigation: Retpolines
Feb 8 23:28:41.942747 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Feb 8 23:28:41.942766 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Feb 8 23:28:41.942784 kernel: Speculative Store Bypass: Vulnerable
Feb 8 23:28:41.942803 kernel: x86/fpu: x87 FPU will use FXSAVE
Feb 8 23:28:41.942821 kernel: Freeing SMP alternatives memory: 32K
Feb 8 23:28:41.942840 kernel: pid_max: default: 32768 minimum: 301
Feb 8 23:28:41.942862 kernel: LSM: Security Framework initializing
Feb 8 23:28:41.942881 kernel: SELinux: Initializing.
Feb 8 23:28:41.942900 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Feb 8 23:28:41.942919 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Feb 8 23:28:41.942938 kernel: smpboot: CPU0: AMD Intel Core i7 9xx (Nehalem Class Core i7) (family: 0x6, model: 0x1a, stepping: 0x3)
Feb 8 23:28:41.942957 kernel: Performance Events: AMD PMU driver.
Feb 8 23:28:41.942975 kernel: ... version: 0
Feb 8 23:28:41.942994 kernel: ... bit width: 48
Feb 8 23:28:41.943013 kernel: ... generic registers: 4
Feb 8 23:28:41.943052 kernel: ... value mask: 0000ffffffffffff
Feb 8 23:28:41.943072 kernel: ... max period: 00007fffffffffff
Feb 8 23:28:41.943096 kernel: ... fixed-purpose events: 0
Feb 8 23:28:41.943115 kernel: ... event mask: 000000000000000f
Feb 8 23:28:41.943135 kernel: signal: max sigframe size: 1440
Feb 8 23:28:41.943154 kernel: rcu: Hierarchical SRCU implementation.
Feb 8 23:28:41.943174 kernel: smp: Bringing up secondary CPUs ...
Feb 8 23:28:41.943310 kernel: x86: Booting SMP configuration:
Feb 8 23:28:41.943370 kernel: .... node #0, CPUs: #1
Feb 8 23:28:41.943391 kernel: kvm-clock: cpu 1, msr 5efaa041, secondary cpu clock
Feb 8 23:28:41.943411 kernel: kvm-guest: stealtime: cpu 1, msr 7dd1c0c0
Feb 8 23:28:41.943431 kernel: smp: Brought up 1 node, 2 CPUs
Feb 8 23:28:41.943450 kernel: smpboot: Max logical packages: 2
Feb 8 23:28:41.943470 kernel: smpboot: Total of 2 processors activated (7984.99 BogoMIPS)
Feb 8 23:28:41.943490 kernel: devtmpfs: initialized
Feb 8 23:28:41.943509 kernel: x86/mm: Memory block size: 128MB
Feb 8 23:28:41.943530 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 8 23:28:41.943554 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Feb 8 23:28:41.943574 kernel: pinctrl core: initialized pinctrl subsystem
Feb 8 23:28:41.943594 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 8 23:28:41.943613 kernel: audit: initializing netlink subsys (disabled)
Feb 8 23:28:41.943633 kernel: audit: type=2000 audit(1707434920.808:1): state=initialized audit_enabled=0 res=1
Feb 8 23:28:41.943653 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 8 23:28:41.943672 kernel: thermal_sys: Registered thermal governor 'user_space'
Feb 8 23:28:41.943692 kernel: cpuidle: using governor menu
Feb 8 23:28:41.943711 kernel: ACPI: bus type PCI registered
Feb 8 23:28:41.943734 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 8 23:28:41.943754 kernel: dca service started, version 1.12.1
Feb 8 23:28:41.943773 kernel: PCI: Using configuration type 1 for base access
Feb 8 23:28:41.943793 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Feb 8 23:28:41.943813 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Feb 8 23:28:41.943833 kernel: ACPI: Added _OSI(Module Device)
Feb 8 23:28:41.943853 kernel: ACPI: Added _OSI(Processor Device)
Feb 8 23:28:41.943872 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 8 23:28:41.943891 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 8 23:28:41.943915 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Feb 8 23:28:41.943935 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Feb 8 23:28:41.943955 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Feb 8 23:28:41.943975 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 8 23:28:41.943994 kernel: ACPI: Interpreter enabled
Feb 8 23:28:41.944013 kernel: ACPI: PM: (supports S0 S3 S5)
Feb 8 23:28:41.944033 kernel: ACPI: Using IOAPIC for interrupt routing
Feb 8 23:28:41.944054 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Feb 8 23:28:41.944073 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Feb 8 23:28:41.944096 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Feb 8 23:28:41.944427 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Feb 8 23:28:41.944641 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
Feb 8 23:28:41.944672 kernel: acpiphp: Slot [3] registered
Feb 8 23:28:41.944693 kernel: acpiphp: Slot [4] registered
Feb 8 23:28:41.944713 kernel: acpiphp: Slot [5] registered
Feb 8 23:28:41.944732 kernel: acpiphp: Slot [6] registered
Feb 8 23:28:41.944759 kernel: acpiphp: Slot [7] registered
Feb 8 23:28:41.944778 kernel: acpiphp: Slot [8] registered
Feb 8 23:28:41.944798 kernel: acpiphp: Slot [9] registered
Feb 8 23:28:41.944817 kernel: acpiphp: Slot [10] registered
Feb 8 23:28:41.944837 kernel: acpiphp: Slot [11] registered
Feb 8 23:28:41.944857 kernel: acpiphp: Slot [12] registered
Feb 8 23:28:41.944876 kernel: acpiphp: Slot [13] registered
Feb 8 23:28:41.944896 kernel: acpiphp: Slot [14] registered
Feb 8 23:28:41.944915 kernel: acpiphp: Slot [15] registered
Feb 8 23:28:41.944934 kernel: acpiphp: Slot [16] registered
Feb 8 23:28:41.944957 kernel: acpiphp: Slot [17] registered
Feb 8 23:28:41.944977 kernel: acpiphp: Slot [18] registered
Feb 8 23:28:41.944996 kernel: acpiphp: Slot [19] registered
Feb 8 23:28:41.945016 kernel: acpiphp: Slot [20] registered
Feb 8 23:28:41.945035 kernel: acpiphp: Slot [21] registered
Feb 8 23:28:41.945054 kernel: acpiphp: Slot [22] registered
Feb 8 23:28:41.945074 kernel: acpiphp: Slot [23] registered
Feb 8 23:28:41.945093 kernel: acpiphp: Slot [24] registered
Feb 8 23:28:41.945113 kernel: acpiphp: Slot [25] registered
Feb 8 23:28:41.945137 kernel: acpiphp: Slot [26] registered
Feb 8 23:28:41.945156 kernel: acpiphp: Slot [27] registered
Feb 8 23:28:41.945175 kernel: acpiphp: Slot [28] registered
Feb 8 23:28:41.945194 kernel: acpiphp: Slot [29] registered
Feb 8 23:28:41.945214 kernel: acpiphp: Slot [30] registered
Feb 8 23:28:41.947314 kernel: acpiphp: Slot [31] registered
Feb 8 23:28:41.947355 kernel: PCI host bridge to bus 0000:00
Feb 8 23:28:41.947647 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Feb 8 23:28:41.947841 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Feb 8 23:28:41.948035 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Feb 8 23:28:41.948211 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Feb 8 23:28:41.948445 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window]
Feb 8 23:28:41.948619 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Feb 8 23:28:41.948849 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Feb 8 23:28:41.949074 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Feb 8 23:28:41.949350 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Feb 8 23:28:41.949568 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc120-0xc12f]
Feb 8 23:28:41.949775 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Feb 8 23:28:41.949998 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Feb 8 23:28:41.950210 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Feb 8 23:28:41.950460 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Feb 8 23:28:41.950682 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Feb 8 23:28:41.950901 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Feb 8 23:28:41.951106 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Feb 8 23:28:41.955487 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000
Feb 8 23:28:41.955711 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref]
Feb 8 23:28:41.955921 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref]
Feb 8 23:28:41.956123 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfeb90000-0xfeb90fff]
Feb 8 23:28:41.958449 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfeb80000-0xfeb8ffff pref]
Feb 8 23:28:41.958669 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Feb 8 23:28:41.958891 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Feb 8 23:28:41.959095 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc080-0xc0bf]
Feb 8 23:28:41.959378 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfeb91000-0xfeb91fff]
Feb 8 23:28:41.959588 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref]
Feb 8 23:28:41.959788 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfeb00000-0xfeb7ffff pref]
Feb 8 23:28:41.960013 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
Feb 8 23:28:41.960188 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f]
Feb 8 23:28:41.960373 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfeb92000-0xfeb92fff]
Feb 8 23:28:41.960528 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref]
Feb 8 23:28:41.960729 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00
Feb 8 23:28:41.960883 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc0c0-0xc0ff]
Feb 8 23:28:41.961031 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref]
Feb 8 23:28:41.961240 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00
Feb 8 23:28:41.961406 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc100-0xc11f]
Feb 8 23:28:41.961557 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref]
Feb 8 23:28:41.961579 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Feb 8 23:28:41.961595 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Feb 8 23:28:41.961610 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Feb 8 23:28:41.961625 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Feb 8 23:28:41.961640 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Feb 8 23:28:41.961661 kernel: iommu: Default domain type: Translated
Feb 8 23:28:41.961676 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Feb 8 23:28:41.961827 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Feb 8 23:28:41.961978 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Feb 8 23:28:41.962129 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Feb 8 23:28:41.962151 kernel: vgaarb: loaded
Feb 8 23:28:41.962166 kernel: pps_core: LinuxPPS API ver. 1 registered
Feb 8 23:28:41.962182 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Feb 8 23:28:41.962197 kernel: PTP clock support registered
Feb 8 23:28:41.965250 kernel: PCI: Using ACPI for IRQ routing
Feb 8 23:28:41.965264 kernel: PCI: pci_cache_line_size set to 64 bytes
Feb 8 23:28:41.965273 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Feb 8 23:28:41.965281 kernel: e820: reserve RAM buffer [mem 0x7ffdd000-0x7fffffff]
Feb 8 23:28:41.965289 kernel: clocksource: Switched to clocksource kvm-clock
Feb 8 23:28:41.965297 kernel: VFS: Disk quotas dquot_6.6.0
Feb 8 23:28:41.965306 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 8 23:28:41.965314 kernel: pnp: PnP ACPI init
Feb 8 23:28:41.965409 kernel: pnp 00:03: [dma 2]
Feb 8 23:28:41.965425 kernel: pnp: PnP ACPI: found 5 devices
Feb 8 23:28:41.965434 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Feb 8 23:28:41.965443 kernel: NET: Registered PF_INET protocol family
Feb 8 23:28:41.965451 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 8 23:28:41.965459 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Feb 8 23:28:41.965467 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 8 23:28:41.965475 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Feb 8 23:28:41.965483 kernel: TCP bind hash table entries: 16384 (order: 6, 262144 bytes, linear)
Feb 8 23:28:41.965493 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Feb 8 23:28:41.965501 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Feb 8 23:28:41.965509 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Feb 8 23:28:41.965517 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 8 23:28:41.965525 kernel: NET: Registered PF_XDP protocol family
Feb 8 23:28:41.965600 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Feb 8 23:28:41.965675 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Feb 8 23:28:41.965749 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Feb 8 23:28:41.965819 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Feb 8 23:28:41.965904 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window]
Feb 8 23:28:41.965990 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Feb 8 23:28:41.966074 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Feb 8 23:28:41.966157 kernel: pci 0000:00:01.0: Activating ISA DMA hang workarounds
Feb 8 23:28:41.966168 kernel: PCI: CLS 0 bytes, default 64
Feb 8 23:28:41.966177 kernel: Initialise system trusted keyrings
Feb 8 23:28:41.966185 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Feb 8 23:28:41.966197 kernel: Key type asymmetric registered
Feb 8 23:28:41.966206 kernel: Asymmetric key parser 'x509' registered
Feb 8 23:28:41.966228 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Feb 8 23:28:41.966237 kernel: io scheduler mq-deadline registered
Feb 8 23:28:41.966245 kernel: io scheduler kyber registered
Feb 8 23:28:41.966253 kernel: io scheduler bfq registered
Feb 8 23:28:41.966261 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Feb 8 23:28:41.966270 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Feb 8 23:28:41.966278 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Feb 8 23:28:41.966286 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Feb 8 23:28:41.966296 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Feb 8 23:28:41.966304 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 8 23:28:41.966312 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Feb 8 23:28:41.966320 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Feb 8 23:28:41.966328 kernel: random: crng init done
Feb 8 23:28:41.966336 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Feb 8 23:28:41.966344 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Feb 8 23:28:41.966352 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Feb 8 23:28:41.966446 kernel: rtc_cmos 00:04: RTC can wake from S4
Feb 8 23:28:41.966528 kernel: rtc_cmos 00:04: registered as rtc0
Feb 8 23:28:41.966603 kernel: rtc_cmos 00:04: setting system clock to 2024-02-08T23:28:41 UTC (1707434921)
Feb 8 23:28:41.966677 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Feb 8 23:28:41.966688 kernel: NET: Registered PF_INET6 protocol family
Feb 8 23:28:41.966696 kernel: Segment Routing with IPv6
Feb 8 23:28:41.966704 kernel: In-situ OAM (IOAM) with IPv6
Feb 8 23:28:41.966712 kernel: NET: Registered PF_PACKET protocol family
Feb 8 23:28:41.966720 kernel: Key type dns_resolver registered
Feb 8 23:28:41.966731 kernel: IPI shorthand broadcast: enabled
Feb 8 23:28:41.966739 kernel: sched_clock: Marking stable (726480067, 122867090)->(916867775, -67520618)
Feb 8 23:28:41.966746 kernel: registered taskstats version 1
Feb 8 23:28:41.966755 kernel: Loading compiled-in X.509 certificates
Feb 8 23:28:41.966763 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.148-flatcar: e9d857ae0e8100c174221878afd1046acbb054a6'
Feb 8 23:28:41.966771 kernel: Key type .fscrypt registered
Feb 8 23:28:41.966779 kernel: Key type fscrypt-provisioning registered
Feb 8 23:28:41.966787 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 8 23:28:41.966796 kernel: ima: Allocated hash algorithm: sha1
Feb 8 23:28:41.966804 kernel: ima: No architecture policies found
Feb 8 23:28:41.966812 kernel: Freeing unused kernel image (initmem) memory: 45496K
Feb 8 23:28:41.966820 kernel: Write protecting the kernel read-only data: 28672k
Feb 8 23:28:41.966828 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
Feb 8 23:28:41.966836 kernel: Freeing unused kernel image (rodata/data gap) memory: 636K
Feb 8 23:28:41.966844 kernel: Run /init as init process
Feb 8 23:28:41.966852 kernel: with arguments:
Feb 8 23:28:41.966860 kernel: /init
Feb 8 23:28:41.966871 kernel: with environment:
Feb 8 23:28:41.966879 kernel: HOME=/
Feb 8 23:28:41.966886 kernel: TERM=linux
Feb 8 23:28:41.966894 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 8 23:28:41.966905 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Feb 8 23:28:41.966916 systemd[1]: Detected virtualization kvm.
Feb 8 23:28:41.966925 systemd[1]: Detected architecture x86-64.
Feb 8 23:28:41.966934 systemd[1]: Running in initrd.
Feb 8 23:28:41.966944 systemd[1]: No hostname configured, using default hostname.
Feb 8 23:28:41.966953 systemd[1]: Hostname set to <localhost>.
Feb 8 23:28:41.966962 systemd[1]: Initializing machine ID from VM UUID.
Feb 8 23:28:41.966971 systemd[1]: Queued start job for default target initrd.target.
Feb 8 23:28:41.966979 systemd[1]: Started systemd-ask-password-console.path.
Feb 8 23:28:41.966988 systemd[1]: Reached target cryptsetup.target.
Feb 8 23:28:41.966996 systemd[1]: Reached target paths.target.
Feb 8 23:28:41.967004 systemd[1]: Reached target slices.target.
Feb 8 23:28:41.967014 systemd[1]: Reached target swap.target.
Feb 8 23:28:41.967022 systemd[1]: Reached target timers.target.
Feb 8 23:28:41.967032 systemd[1]: Listening on iscsid.socket.
Feb 8 23:28:41.967040 systemd[1]: Listening on iscsiuio.socket.
Feb 8 23:28:41.967049 systemd[1]: Listening on systemd-journald-audit.socket.
Feb 8 23:28:41.967057 systemd[1]: Listening on systemd-journald-dev-log.socket.
Feb 8 23:28:41.967066 systemd[1]: Listening on systemd-journald.socket.
Feb 8 23:28:41.967075 systemd[1]: Listening on systemd-networkd.socket.
Feb 8 23:28:41.967085 systemd[1]: Listening on systemd-udevd-control.socket.
Feb 8 23:28:41.967094 systemd[1]: Listening on systemd-udevd-kernel.socket.
Feb 8 23:28:41.967103 systemd[1]: Reached target sockets.target.
Feb 8 23:28:41.967112 systemd[1]: Starting kmod-static-nodes.service...
Feb 8 23:28:41.967131 systemd[1]: Finished network-cleanup.service.
Feb 8 23:28:41.967142 systemd[1]: Starting systemd-fsck-usr.service...
Feb 8 23:28:41.967152 systemd[1]: Starting systemd-journald.service...
Feb 8 23:28:41.967161 systemd[1]: Starting systemd-modules-load.service...
Feb 8 23:28:41.967170 systemd[1]: Starting systemd-resolved.service...
Feb 8 23:28:41.967179 systemd[1]: Starting systemd-vconsole-setup.service...
Feb 8 23:28:41.967188 systemd[1]: Finished kmod-static-nodes.service.
Feb 8 23:28:41.967196 systemd[1]: Finished systemd-fsck-usr.service.
Feb 8 23:28:41.967209 systemd-journald[184]: Journal started
Feb 8 23:28:41.969291 systemd-journald[184]: Runtime Journal (/run/log/journal/f64988f81ad645c9a7e9a06b750df4b7) is 4.9M, max 39.5M, 34.5M free.
Feb 8 23:28:41.937776 systemd-modules-load[185]: Inserted module 'overlay'
Feb 8 23:28:42.000143 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 8 23:28:42.000173 kernel: Bridge firewalling registered
Feb 8 23:28:41.982689 systemd-resolved[186]: Positive Trust Anchors:
Feb 8 23:28:42.005295 systemd[1]: Started systemd-journald.service.
Feb 8 23:28:42.005315 kernel: audit: type=1130 audit(1707434922.000:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:28:42.000000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:28:41.982701 systemd-resolved[186]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 8 23:28:42.010777 kernel: audit: type=1130 audit(1707434922.005:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:28:42.005000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:28:41.982736 systemd-resolved[186]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Feb 8 23:28:42.016526 kernel: audit: type=1130 audit(1707434922.011:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:28:42.011000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:28:41.990981 systemd-modules-load[185]: Inserted module 'br_netfilter'
Feb 8 23:28:42.020647 kernel: audit: type=1130 audit(1707434922.016:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:28:42.016000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:28:41.994570 systemd-resolved[186]: Defaulting to hostname 'linux'.
Feb 8 23:28:42.005871 systemd[1]: Started systemd-resolved.service.
Feb 8 23:28:42.011539 systemd[1]: Finished systemd-vconsole-setup.service.
Feb 8 23:28:42.017142 systemd[1]: Reached target nss-lookup.target.
Feb 8 23:28:42.021809 systemd[1]: Starting dracut-cmdline-ask.service...
Feb 8 23:28:42.022947 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Feb 8 23:28:42.030000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:28:42.030205 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Feb 8 23:28:42.036546 kernel: audit: type=1130 audit(1707434922.030:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:28:42.036570 kernel: SCSI subsystem initialized
Feb 8 23:28:42.038775 systemd[1]: Finished dracut-cmdline-ask.service.
Feb 8 23:28:42.039000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:28:42.044245 kernel: audit: type=1130 audit(1707434922.039:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:28:42.044704 systemd[1]: Starting dracut-cmdline.service...
Feb 8 23:28:42.053255 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 8 23:28:42.057128 kernel: device-mapper: uevent: version 1.0.3
Feb 8 23:28:42.057153 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Feb 8 23:28:42.057487 dracut-cmdline[202]: dracut-dracut-053
Feb 8 23:28:42.060286 dracut-cmdline[202]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=ae7db544026ede4699ee2036449b75950d3fb7929b25a6731d0ad396f1aa37c9
Feb 8 23:28:42.062390 systemd-modules-load[185]: Inserted module 'dm_multipath'
Feb 8 23:28:42.064000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:28:42.063773 systemd[1]: Finished systemd-modules-load.service.
Feb 8 23:28:42.070430 kernel: audit: type=1130 audit(1707434922.064:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:28:42.066184 systemd[1]: Starting systemd-sysctl.service...
Feb 8 23:28:42.078272 systemd[1]: Finished systemd-sysctl.service.
Feb 8 23:28:42.078000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:28:42.083306 kernel: audit: type=1130 audit(1707434922.078:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:28:42.137283 kernel: Loading iSCSI transport class v2.0-870.
Feb 8 23:28:42.151316 kernel: iscsi: registered transport (tcp)
Feb 8 23:28:42.179653 kernel: iscsi: registered transport (qla4xxx)
Feb 8 23:28:42.179783 kernel: QLogic iSCSI HBA Driver
Feb 8 23:28:42.240000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:28:42.239451 systemd[1]: Finished dracut-cmdline.service.
Feb 8 23:28:42.246296 kernel: audit: type=1130 audit(1707434922.240:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:28:42.242732 systemd[1]: Starting dracut-pre-udev.service...
Feb 8 23:28:42.311365 kernel: raid6: sse2x4 gen() 12859 MB/s
Feb 8 23:28:42.328324 kernel: raid6: sse2x4 xor() 7202 MB/s
Feb 8 23:28:42.345309 kernel: raid6: sse2x2 gen() 14383 MB/s
Feb 8 23:28:42.362326 kernel: raid6: sse2x2 xor() 8603 MB/s
Feb 8 23:28:42.379312 kernel: raid6: sse2x1 gen() 10973 MB/s
Feb 8 23:28:42.397115 kernel: raid6: sse2x1 xor() 6602 MB/s
Feb 8 23:28:42.397192 kernel: raid6: using algorithm sse2x2 gen() 14383 MB/s
Feb 8 23:28:42.397257 kernel: raid6: .... xor() 8603 MB/s, rmw enabled
Feb 8 23:28:42.397944 kernel: raid6: using ssse3x2 recovery algorithm
Feb 8 23:28:42.413791 kernel: xor: measuring software checksum speed
Feb 8 23:28:42.413850 kernel: prefetch64-sse : 17233 MB/sec
Feb 8 23:28:42.416256 kernel: generic_sse : 15697 MB/sec
Feb 8 23:28:42.416313 kernel: xor: using function: prefetch64-sse (17233 MB/sec)
Feb 8 23:28:42.535284 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no
Feb 8 23:28:42.551010 systemd[1]: Finished dracut-pre-udev.service.
Feb 8 23:28:42.551000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:28:42.551000 audit: BPF prog-id=7 op=LOAD
Feb 8 23:28:42.551000 audit: BPF prog-id=8 op=LOAD
Feb 8 23:28:42.552551 systemd[1]: Starting systemd-udevd.service...
Feb 8 23:28:42.566128 systemd-udevd[385]: Using default interface naming scheme 'v252'.
Feb 8 23:28:42.572000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:28:42.571138 systemd[1]: Started systemd-udevd.service.
Feb 8 23:28:42.574133 systemd[1]: Starting dracut-pre-trigger.service...
Feb 8 23:28:42.600040 dracut-pre-trigger[399]: rd.md=0: removing MD RAID activation
Feb 8 23:28:42.646847 systemd[1]: Finished dracut-pre-trigger.service.
Feb 8 23:28:42.648000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:28:42.650345 systemd[1]: Starting systemd-udev-trigger.service...
Feb 8 23:28:42.691583 systemd[1]: Finished systemd-udev-trigger.service.
Feb 8 23:28:42.693000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:28:42.764250 kernel: virtio_blk virtio2: [vda] 41943040 512-byte logical blocks (21.5 GB/20.0 GiB)
Feb 8 23:28:42.775253 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Feb 8 23:28:42.775286 kernel: GPT:17805311 != 41943039
Feb 8 23:28:42.775297 kernel: GPT:Alternate GPT header not at the end of the disk.
Feb 8 23:28:42.775309 kernel: GPT:17805311 != 41943039
Feb 8 23:28:42.775319 kernel: GPT: Use GNU Parted to correct GPT errors.
Feb 8 23:28:42.775341 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 8 23:28:42.795258 kernel: libata version 3.00 loaded.
Feb 8 23:28:42.801252 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (431)
Feb 8 23:28:42.802253 kernel: ata_piix 0000:00:01.1: version 2.13
Feb 8 23:28:42.805242 kernel: scsi host0: ata_piix
Feb 8 23:28:42.806245 kernel: scsi host1: ata_piix
Feb 8 23:28:42.806411 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc120 irq 14
Feb 8 23:28:42.806426 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc128 irq 15
Feb 8 23:28:42.810091 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
Feb 8 23:28:42.853047 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device.
Feb 8 23:28:42.853589 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
Feb 8 23:28:42.862248 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
Feb 8 23:28:42.866186 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Feb 8 23:28:42.867471 systemd[1]: Starting disk-uuid.service...
Feb 8 23:28:42.878616 disk-uuid[461]: Primary Header is updated.
Feb 8 23:28:42.878616 disk-uuid[461]: Secondary Entries is updated.
Feb 8 23:28:42.878616 disk-uuid[461]: Secondary Header is updated.
Feb 8 23:28:42.887262 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 8 23:28:42.892261 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 8 23:28:43.907284 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 8 23:28:43.907877 disk-uuid[462]: The operation has completed successfully.
Feb 8 23:28:43.988606 systemd[1]: disk-uuid.service: Deactivated successfully.
Feb 8 23:28:43.990598 systemd[1]: Finished disk-uuid.service.
Feb 8 23:28:43.992000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:28:43.992000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:28:44.002627 systemd[1]: Starting verity-setup.service...
Feb 8 23:28:44.031269 kernel: device-mapper: verity: sha256 using implementation "sha256-ssse3"
Feb 8 23:28:44.112785 systemd[1]: Found device dev-mapper-usr.device.
Feb 8 23:28:44.118476 systemd[1]: Mounting sysusr-usr.mount...
Feb 8 23:28:44.123515 systemd[1]: Finished verity-setup.service.
Feb 8 23:28:44.124000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:28:44.274288 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none.
Feb 8 23:28:44.275183 systemd[1]: Mounted sysusr-usr.mount.
Feb 8 23:28:44.275854 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met.
Feb 8 23:28:44.276770 systemd[1]: Starting ignition-setup.service...
Feb 8 23:28:44.277997 systemd[1]: Starting parse-ip-for-networkd.service...
Feb 8 23:28:44.306115 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Feb 8 23:28:44.306161 kernel: BTRFS info (device vda6): using free space tree
Feb 8 23:28:44.306172 kernel: BTRFS info (device vda6): has skinny extents
Feb 8 23:28:44.330696 systemd[1]: mnt-oem.mount: Deactivated successfully.
Feb 8 23:28:44.344981 systemd[1]: Finished ignition-setup.service.
Feb 8 23:28:44.345000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:28:44.346435 systemd[1]: Starting ignition-fetch-offline.service...
Feb 8 23:28:44.380612 systemd[1]: Finished parse-ip-for-networkd.service.
Feb 8 23:28:44.380000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:28:44.381000 audit: BPF prog-id=9 op=LOAD
Feb 8 23:28:44.382643 systemd[1]: Starting systemd-networkd.service...
Feb 8 23:28:44.409082 systemd-networkd[633]: lo: Link UP
Feb 8 23:28:44.409094 systemd-networkd[633]: lo: Gained carrier
Feb 8 23:28:44.415000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:28:44.409823 systemd-networkd[633]: Enumeration completed
Feb 8 23:28:44.410266 systemd-networkd[633]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 8 23:28:44.411699 systemd-networkd[633]: eth0: Link UP
Feb 8 23:28:44.411703 systemd-networkd[633]: eth0: Gained carrier
Feb 8 23:28:44.412030 systemd[1]: Started systemd-networkd.service.
Feb 8 23:28:44.415915 systemd[1]: Reached target network.target.
Feb 8 23:28:44.420838 systemd[1]: Starting iscsiuio.service...
Feb 8 23:28:44.423246 systemd-networkd[633]: eth0: DHCPv4 address 172.24.4.155/24, gateway 172.24.4.1 acquired from 172.24.4.1
Feb 8 23:28:44.435000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:28:44.434918 systemd[1]: Started iscsiuio.service.
Feb 8 23:28:44.437873 systemd[1]: Starting iscsid.service...
Feb 8 23:28:44.441552 iscsid[638]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
Feb 8 23:28:44.441552 iscsid[638]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier].
Feb 8 23:28:44.441552 iscsid[638]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
Feb 8 23:28:44.441552 iscsid[638]: If using hardware iscsi like qla4xxx this message can be ignored.
Feb 8 23:28:44.441552 iscsid[638]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
Feb 8 23:28:44.441552 iscsid[638]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf
Feb 8 23:28:44.443000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:28:44.457000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:28:44.443170 systemd[1]: Started iscsid.service.
Feb 8 23:28:44.445421 systemd[1]: Starting dracut-initqueue.service...
Feb 8 23:28:44.457140 systemd[1]: Finished dracut-initqueue.service.
Feb 8 23:28:44.457674 systemd[1]: Reached target remote-fs-pre.target.
Feb 8 23:28:44.458128 systemd[1]: Reached target remote-cryptsetup.target.
Feb 8 23:28:44.459299 systemd[1]: Reached target remote-fs.target.
Feb 8 23:28:44.460902 systemd[1]: Starting dracut-pre-mount.service...
Feb 8 23:28:44.470000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:28:44.469867 systemd[1]: Finished dracut-pre-mount.service.
Feb 8 23:28:44.716174 ignition[599]: Ignition 2.14.0
Feb 8 23:28:44.716205 ignition[599]: Stage: fetch-offline
Feb 8 23:28:44.716384 ignition[599]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 8 23:28:44.716432 ignition[599]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a
Feb 8 23:28:44.718815 ignition[599]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Feb 8 23:28:44.719031 ignition[599]: parsed url from cmdline: ""
Feb 8 23:28:44.722428 systemd[1]: Finished ignition-fetch-offline.service.
Feb 8 23:28:44.719040 ignition[599]: no config URL provided
Feb 8 23:28:44.724000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:28:44.726122 systemd[1]: Starting ignition-fetch.service...
Feb 8 23:28:44.719054 ignition[599]: reading system config file "/usr/lib/ignition/user.ign"
Feb 8 23:28:44.719075 ignition[599]: no config at "/usr/lib/ignition/user.ign"
Feb 8 23:28:44.719087 ignition[599]: failed to fetch config: resource requires networking
Feb 8 23:28:44.720203 ignition[599]: Ignition finished successfully
Feb 8 23:28:44.745696 ignition[657]: Ignition 2.14.0
Feb 8 23:28:44.745724 ignition[657]: Stage: fetch
Feb 8 23:28:44.745968 ignition[657]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 8 23:28:44.746011 ignition[657]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a
Feb 8 23:28:44.748356 ignition[657]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Feb 8 23:28:44.748578 ignition[657]: parsed url from cmdline: ""
Feb 8 23:28:44.748588 ignition[657]: no config URL provided
Feb 8 23:28:44.748601 ignition[657]: reading system config file "/usr/lib/ignition/user.ign"
Feb 8 23:28:44.748619 ignition[657]: no config at "/usr/lib/ignition/user.ign"
Feb 8 23:28:44.751853 ignition[657]: config drive ("/dev/disk/by-label/config-2") not found. Waiting...
Feb 8 23:28:44.751905 ignition[657]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting...
Feb 8 23:28:44.754730 ignition[657]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1
Feb 8 23:28:44.970103 ignition[657]: GET result: OK
Feb 8 23:28:44.970403 ignition[657]: parsing config with SHA512: 0780bbf91129636743367d969ebac2c5b92a95b7f22913b8afc46602d812f219af86b314b8fef5280391db427e2e40b211563cb10af3bc1cfa28e7589b4fb533
Feb 8 23:28:45.049277 unknown[657]: fetched base config from "system"
Feb 8 23:28:45.050820 unknown[657]: fetched base config from "system"
Feb 8 23:28:45.052155 unknown[657]: fetched user config from "openstack"
Feb 8 23:28:45.055122 ignition[657]: fetch: fetch complete
Feb 8 23:28:45.056341 ignition[657]: fetch: fetch passed
Feb 8 23:28:45.056460 ignition[657]: Ignition finished successfully
Feb 8 23:28:45.061546 systemd[1]: Finished ignition-fetch.service.
Feb 8 23:28:45.062000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:28:45.064526 systemd[1]: Starting ignition-kargs.service...
Feb 8 23:28:45.087663 ignition[663]: Ignition 2.14.0
Feb 8 23:28:45.087689 ignition[663]: Stage: kargs
Feb 8 23:28:45.087938 ignition[663]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 8 23:28:45.087980 ignition[663]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a
Feb 8 23:28:45.090357 ignition[663]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Feb 8 23:28:45.094266 ignition[663]: kargs: kargs passed
Feb 8 23:28:45.104289 systemd[1]: Finished ignition-kargs.service.
Feb 8 23:28:45.094403 ignition[663]: Ignition finished successfully
Feb 8 23:28:45.105000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:28:45.107455 systemd[1]: Starting ignition-disks.service...
Feb 8 23:28:45.117348 ignition[668]: Ignition 2.14.0
Feb 8 23:28:45.117365 ignition[668]: Stage: disks
Feb 8 23:28:45.117490 ignition[668]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 8 23:28:45.117514 ignition[668]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a
Feb 8 23:28:45.118572 ignition[668]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Feb 8 23:28:45.120141 ignition[668]: disks: disks passed
Feb 8 23:28:45.122696 systemd[1]: Finished ignition-disks.service.
Feb 8 23:28:45.123000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:28:45.120196 ignition[668]: Ignition finished successfully
Feb 8 23:28:45.124113 systemd[1]: Reached target initrd-root-device.target.
Feb 8 23:28:45.125269 systemd[1]: Reached target local-fs-pre.target.
Feb 8 23:28:45.126435 systemd[1]: Reached target local-fs.target.
Feb 8 23:28:45.127404 systemd[1]: Reached target sysinit.target.
Feb 8 23:28:45.128349 systemd[1]: Reached target basic.target.
Feb 8 23:28:45.130084 systemd[1]: Starting systemd-fsck-root.service...
Feb 8 23:28:45.192898 systemd-fsck[676]: ROOT: clean, 602/1628000 files, 124051/1617920 blocks
Feb 8 23:28:45.204778 systemd[1]: Finished systemd-fsck-root.service.
Feb 8 23:28:45.205000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:28:45.207757 systemd[1]: Mounting sysroot.mount...
Feb 8 23:28:45.328300 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
Feb 8 23:28:45.329592 systemd[1]: Mounted sysroot.mount.
Feb 8 23:28:45.332558 systemd[1]: Reached target initrd-root-fs.target.
Feb 8 23:28:45.338156 systemd[1]: Mounting sysroot-usr.mount...
Feb 8 23:28:45.342065 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met.
Feb 8 23:28:45.346293 systemd[1]: Starting flatcar-openstack-hostname.service...
Feb 8 23:28:45.349314 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb 8 23:28:45.350456 systemd[1]: Reached target ignition-diskful.target.
Feb 8 23:28:45.356832 systemd[1]: Mounted sysroot-usr.mount.
Feb 8 23:28:45.365061 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Feb 8 23:28:45.368513 systemd[1]: Starting initrd-setup-root.service...
Feb 8 23:28:45.380133 initrd-setup-root[688]: cut: /sysroot/etc/passwd: No such file or directory
Feb 8 23:28:45.389244 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (683)
Feb 8 23:28:45.394756 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Feb 8 23:28:45.394781 kernel: BTRFS info (device vda6): using free space tree
Feb 8 23:28:45.394793 kernel: BTRFS info (device vda6): has skinny extents
Feb 8 23:28:45.404022 initrd-setup-root[712]: cut: /sysroot/etc/group: No such file or directory
Feb 8 23:28:45.417757 initrd-setup-root[722]: cut: /sysroot/etc/shadow: No such file or directory
Feb 8 23:28:45.421939 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Feb 8 23:28:45.430625 initrd-setup-root[730]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 8 23:28:45.517000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:28:45.517012 systemd[1]: Finished initrd-setup-root.service.
Feb 8 23:28:45.521006 systemd[1]: Starting ignition-mount.service...
Feb 8 23:28:45.524574 systemd[1]: Starting sysroot-boot.service...
Feb 8 23:28:45.542205 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully.
Feb 8 23:28:45.542470 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully.
Feb 8 23:28:45.571458 ignition[751]: INFO : Ignition 2.14.0
Feb 8 23:28:45.571458 ignition[751]: INFO : Stage: mount
Feb 8 23:28:45.571458 ignition[751]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 8 23:28:45.571458 ignition[751]: DEBUG : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a
Feb 8 23:28:45.574477 ignition[751]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Feb 8 23:28:45.574477 ignition[751]: INFO : mount: mount passed
Feb 8 23:28:45.574477 ignition[751]: INFO : Ignition finished successfully
Feb 8 23:28:45.576365 coreos-metadata[682]: Feb 08 23:28:45.575 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
Feb 8 23:28:45.580000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:28:45.580012 systemd[1]: Finished ignition-mount.service.
Feb 8 23:28:45.589756 systemd[1]: Finished sysroot-boot.service.
Feb 8 23:28:45.590000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:28:45.592274 coreos-metadata[682]: Feb 08 23:28:45.592 INFO Fetch successful
Feb 8 23:28:45.592274 coreos-metadata[682]: Feb 08 23:28:45.592 INFO wrote hostname ci-3510-3-2-9-f62ee4a992.novalocal to /sysroot/etc/hostname
Feb 8 23:28:45.597831 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully.
Feb 8 23:28:45.597940 systemd[1]: Finished flatcar-openstack-hostname.service.
Feb 8 23:28:45.598000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:28:45.599000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:28:45.600206 systemd[1]: Starting ignition-files.service...
Feb 8 23:28:45.608813 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Feb 8 23:28:45.619251 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (760)
Feb 8 23:28:45.622301 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Feb 8 23:28:45.622321 kernel: BTRFS info (device vda6): using free space tree
Feb 8 23:28:45.622334 kernel: BTRFS info (device vda6): has skinny extents
Feb 8 23:28:45.631254 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Feb 8 23:28:45.642777 ignition[779]: INFO : Ignition 2.14.0 Feb 8 23:28:45.643671 ignition[779]: INFO : Stage: files Feb 8 23:28:45.644319 ignition[779]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 8 23:28:45.645054 ignition[779]: DEBUG : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Feb 8 23:28:45.647355 ignition[779]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Feb 8 23:28:45.650403 ignition[779]: DEBUG : files: compiled without relabeling support, skipping Feb 8 23:28:45.653133 ignition[779]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Feb 8 23:28:45.653869 ignition[779]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Feb 8 23:28:45.659835 ignition[779]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Feb 8 23:28:45.660600 ignition[779]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Feb 8 23:28:45.661335 ignition[779]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Feb 8 23:28:45.661202 unknown[779]: wrote ssh authorized keys file for user: core Feb 8 23:28:45.662727 ignition[779]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Feb 8 23:28:45.662727 ignition[779]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Feb 8 23:28:45.726475 ignition[779]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Feb 8 23:28:46.063539 ignition[779]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Feb 8 23:28:46.063539 ignition[779]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/crictl-v1.27.0-linux-amd64.tar.gz" Feb 8 23:28:46.068083 ignition[779]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.27.0/crictl-v1.27.0-linux-amd64.tar.gz: attempt #1 Feb 8 23:28:46.170929 systemd-networkd[633]: eth0: Gained IPv6LL Feb 8 23:28:46.443775 ignition[779]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Feb 8 23:28:46.927402 ignition[779]: DEBUG : files: createFilesystemsFiles: createFiles: op(4): file matches expected sum of: aa622325bf05520939f9e020d7a28ab48ac23e2fae6f47d5a4e52174c88c1ebc31b464853e4fd65bd8f5331f330a6ca96fd370d247d3eeaed042da4ee2d1219a Feb 8 23:28:46.927402 ignition[779]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/crictl-v1.27.0-linux-amd64.tar.gz" Feb 8 23:28:46.932952 ignition[779]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.3.0.tgz" Feb 8 23:28:46.932952 ignition[779]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/containernetworking/plugins/releases/download/v1.3.0/cni-plugins-linux-amd64-v1.3.0.tgz: attempt #1 Feb 8 23:28:47.274188 ignition[779]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK Feb 8 23:28:48.163434 ignition[779]: DEBUG : files: createFilesystemsFiles: createFiles: op(5): file matches expected sum of: 
5d0324ca8a3c90c680b6e1fddb245a2255582fa15949ba1f3c6bb7323df9d3af754dae98d6e40ac9ccafb2999c932df2c4288d418949a4915d928eb23c090540 Feb 8 23:28:48.165257 ignition[779]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.3.0.tgz" Feb 8 23:28:48.171552 ignition[779]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/etc/docker/daemon.json" Feb 8 23:28:48.171552 ignition[779]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/etc/docker/daemon.json" Feb 8 23:28:48.171552 ignition[779]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/opt/bin/kubectl" Feb 8 23:28:48.171552 ignition[779]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET https://dl.k8s.io/release/v1.27.2/bin/linux/amd64/kubectl: attempt #1 Feb 8 23:28:48.311842 ignition[779]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET result: OK Feb 8 23:28:49.286149 ignition[779]: DEBUG : files: createFilesystemsFiles: createFiles: op(7): file matches expected sum of: 857e67001e74840518413593d90c6e64ad3f00d55fa44ad9a8e2ed6135392c908caff7ec19af18cbe10784b8f83afe687a0bc3bacbc9eee984cdeb9c0749cb83 Feb 8 23:28:49.286149 ignition[779]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/bin/kubectl" Feb 8 23:28:49.291929 ignition[779]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/opt/bin/kubelet" Feb 8 23:28:49.291929 ignition[779]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET https://dl.k8s.io/release/v1.27.2/bin/linux/amd64/kubelet: attempt #1 Feb 8 23:28:49.396489 ignition[779]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET result: OK Feb 8 23:28:51.662434 ignition[779]: DEBUG : files: createFilesystemsFiles: createFiles: op(8): file matches expected sum of: a283da2224d456958b2cb99b4f6faf4457c4ed89e9e95f37d970c637f6a7f64ff4dd4d2bfce538759b2d2090933bece599a285ef8fd132eb383fece9a3941560 Feb 8 23:28:51.662434 ignition[779]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/opt/bin/kubelet" Feb 8 23:28:51.662434 ignition[779]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/opt/bin/kubeadm" Feb 8 23:28:51.662434 ignition[779]: INFO : files: createFilesystemsFiles: createFiles: op(9): GET https://dl.k8s.io/release/v1.27.2/bin/linux/amd64/kubeadm: attempt #1 Feb 8 23:28:51.772107 ignition[779]: INFO : files: createFilesystemsFiles: createFiles: op(9): GET result: OK Feb 8 23:28:52.675214 ignition[779]: DEBUG : files: createFilesystemsFiles: createFiles: op(9): file matches expected sum of: f40216b7d14046931c58072d10c7122934eac5a23c08821371f8b08ac1779443ad11d3458a4c5dcde7cf80fc600a9fefb14b1942aa46a52330248d497ca88836 Feb 8 23:28:52.675214 ignition[779]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/opt/bin/kubeadm" Feb 8 23:28:52.681168 ignition[779]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Feb 8 23:28:52.681168 ignition[779]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Feb 8 23:28:53.022309 ignition[779]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Feb 8 
23:28:53.466001 ignition[779]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Feb 8 23:28:53.466001 ignition[779]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/home/core/install.sh" Feb 8 23:28:53.466001 ignition[779]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/home/core/install.sh" Feb 8 23:28:53.466001 ignition[779]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/home/core/nginx.yaml" Feb 8 23:28:53.475536 ignition[779]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/home/core/nginx.yaml" Feb 8 23:28:53.475536 ignition[779]: INFO : files: createFilesystemsFiles: createFiles: op(d): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 8 23:28:53.475536 ignition[779]: INFO : files: createFilesystemsFiles: createFiles: op(d): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 8 23:28:53.475536 ignition[779]: INFO : files: createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 8 23:28:53.475536 ignition[779]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 8 23:28:53.552914 ignition[779]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/etc/flatcar/update.conf" Feb 8 23:28:53.552914 ignition[779]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/etc/flatcar/update.conf" Feb 8 23:28:53.556941 ignition[779]: INFO : files: op(10): [started] processing unit "coreos-metadata-sshkeys@.service" Feb 8 23:28:53.561637 ignition[779]: INFO : files: op(10): op(11): [started] writing systemd drop-in "20-clct-provider-override.conf" at "/sysroot/etc/systemd/system/coreos-metadata-sshkeys@.service.d/20-clct-provider-override.conf" Feb 8 23:28:53.564510 ignition[779]: INFO : files: op(10): op(11): [finished] writing systemd drop-in "20-clct-provider-override.conf" at "/sysroot/etc/systemd/system/coreos-metadata-sshkeys@.service.d/20-clct-provider-override.conf" Feb 8 23:28:53.564510 ignition[779]: INFO : files: op(10): [finished] processing unit "coreos-metadata-sshkeys@.service" Feb 8 23:28:53.564510 ignition[779]: INFO : files: op(12): [started] processing unit "coreos-metadata.service" Feb 8 23:28:53.564510 ignition[779]: INFO : files: op(12): op(13): [started] writing systemd drop-in "20-clct-provider-override.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/20-clct-provider-override.conf" Feb 8 23:28:53.564510 ignition[779]: INFO : files: op(12): op(13): [finished] writing systemd drop-in "20-clct-provider-override.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/20-clct-provider-override.conf" Feb 8 23:28:53.564510 ignition[779]: INFO : files: op(12): [finished] processing unit "coreos-metadata.service" Feb 8 23:28:53.564510 ignition[779]: INFO : files: op(14): [started] processing unit "prepare-cni-plugins.service" Feb 8 23:28:53.564510 ignition[779]: INFO : files: op(14): op(15): [started] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Feb 8 23:28:53.564510 ignition[779]: INFO : files: op(14): op(15): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Feb 8 23:28:53.564510 
ignition[779]: INFO : files: op(14): [finished] processing unit "prepare-cni-plugins.service" Feb 8 23:28:53.564510 ignition[779]: INFO : files: op(16): [started] processing unit "prepare-critools.service" Feb 8 23:28:53.564510 ignition[779]: INFO : files: op(16): op(17): [started] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Feb 8 23:28:53.564510 ignition[779]: INFO : files: op(16): op(17): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Feb 8 23:28:53.564510 ignition[779]: INFO : files: op(16): [finished] processing unit "prepare-critools.service" Feb 8 23:28:53.564510 ignition[779]: INFO : files: op(18): [started] processing unit "prepare-helm.service" Feb 8 23:28:53.564510 ignition[779]: INFO : files: op(18): op(19): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 8 23:28:53.607653 kernel: kauditd_printk_skb: 27 callbacks suppressed Feb 8 23:28:53.607689 kernel: audit: type=1130 audit(1707434933.579:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:28:53.579000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:28:53.574851 systemd[1]: Finished ignition-files.service. Feb 8 23:28:53.608881 ignition[779]: INFO : files: op(18): op(19): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 8 23:28:53.608881 ignition[779]: INFO : files: op(18): [finished] processing unit "prepare-helm.service" Feb 8 23:28:53.608881 ignition[779]: INFO : files: op(1a): [started] setting preset to enabled for "prepare-helm.service" Feb 8 23:28:53.608881 ignition[779]: INFO : files: op(1a): [finished] setting preset to enabled for "prepare-helm.service" Feb 8 23:28:53.608881 ignition[779]: INFO : files: op(1b): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service " Feb 8 23:28:53.608881 ignition[779]: INFO : files: op(1b): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service " Feb 8 23:28:53.608881 ignition[779]: INFO : files: op(1c): [started] setting preset to enabled for "prepare-cni-plugins.service" Feb 8 23:28:53.608881 ignition[779]: INFO : files: op(1c): [finished] setting preset to enabled for "prepare-cni-plugins.service" Feb 8 23:28:53.608881 ignition[779]: INFO : files: op(1d): [started] setting preset to enabled for "prepare-critools.service" Feb 8 23:28:53.608881 ignition[779]: INFO : files: op(1d): [finished] setting preset to enabled for "prepare-critools.service" Feb 8 23:28:53.608881 ignition[779]: INFO : files: createResultFile: createFiles: op(1e): [started] writing file "/sysroot/etc/.ignition-result.json" Feb 8 23:28:53.608881 ignition[779]: INFO : files: createResultFile: createFiles: op(1e): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 8 23:28:53.608881 ignition[779]: INFO : files: files passed Feb 8 23:28:53.608881 ignition[779]: INFO : Ignition finished successfully Feb 8 23:28:53.584691 systemd[1]: Starting initrd-setup-root-after-ignition.service... Feb 8 23:28:53.589186 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). 
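Each downloaded artifact in the files stage follows the same pattern: a GET with an attempt counter, a sha512 comparison ("file matches expected sum of: ..."), then the write under /sysroot. A compact sketch of that download-and-verify step (the URL and sum below are the kubectl values from the log; the destination path is illustrative):

    import hashlib
    import urllib.request

    def fetch_verified(url, expected_sha512, dest):
        # 'GET ...: attempt #1' / 'GET result: OK'
        data = urllib.request.urlopen(url).read()
        # 'file matches expected sum of: ...'
        digest = hashlib.sha512(data).hexdigest()
        if digest != expected_sha512:
            raise ValueError("checksum mismatch for %s: %s" % (url, digest))
        # '[finished] writing file ...'
        with open(dest, "wb") as f:
            f.write(data)

    fetch_verified(
        "https://dl.k8s.io/release/v1.27.2/bin/linux/amd64/kubectl",
        "857e67001e74840518413593d90c6e64ad3f00d55fa44ad9a8e2ed6135392c908caff7ec19af18cbe10784b8f83afe687a0bc3bacbc9eee984cdeb9c0749cb83",
        "/tmp/kubectl",
    )

Entries with no GET line (install.sh, nginx.yaml, daemon.json, update.conf and the unit drop-ins) log no checksum either, presumably because their contents are embedded directly in the config.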
Feb 8 23:28:53.590492 systemd[1]: Starting ignition-quench.service... Feb 8 23:28:53.638000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:28:53.637930 systemd[1]: ignition-quench.service: Deactivated successfully. Feb 8 23:28:53.653690 kernel: audit: type=1130 audit(1707434933.638:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:28:53.653726 kernel: audit: type=1131 audit(1707434933.638:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:28:53.638000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:28:53.638088 systemd[1]: Finished ignition-quench.service. Feb 8 23:28:53.668907 initrd-setup-root-after-ignition[804]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 8 23:28:53.670277 systemd[1]: Finished initrd-setup-root-after-ignition.service. Feb 8 23:28:53.679170 kernel: audit: type=1130 audit(1707434933.671:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:28:53.671000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:28:53.672461 systemd[1]: Reached target ignition-complete.target. Feb 8 23:28:53.682191 systemd[1]: Starting initrd-parse-etc.service... Feb 8 23:28:53.712157 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 8 23:28:53.712437 systemd[1]: Finished initrd-parse-etc.service. Feb 8 23:28:53.742862 kernel: audit: type=1130 audit(1707434933.725:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:28:53.742915 kernel: audit: type=1131 audit(1707434933.725:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:28:53.725000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:28:53.725000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:28:53.725605 systemd[1]: Reached target initrd-fs.target. Feb 8 23:28:53.743604 systemd[1]: Reached target initrd.target. Feb 8 23:28:53.745547 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Feb 8 23:28:53.746849 systemd[1]: Starting dracut-pre-pivot.service... Feb 8 23:28:53.768416 systemd[1]: Finished dracut-pre-pivot.service. 
Feb 8 23:28:53.769000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:28:53.770951 systemd[1]: Starting initrd-cleanup.service... Feb 8 23:28:53.782940 kernel: audit: type=1130 audit(1707434933.769:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:28:53.792309 systemd[1]: Stopped target nss-lookup.target. Feb 8 23:28:53.793671 systemd[1]: Stopped target remote-cryptsetup.target. Feb 8 23:28:53.795790 systemd[1]: Stopped target timers.target. Feb 8 23:28:53.797677 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 8 23:28:53.808108 kernel: audit: type=1131 audit(1707434933.799:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:28:53.799000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:28:53.797996 systemd[1]: Stopped dracut-pre-pivot.service. Feb 8 23:28:53.799788 systemd[1]: Stopped target initrd.target. Feb 8 23:28:53.809429 systemd[1]: Stopped target basic.target. Feb 8 23:28:53.810804 systemd[1]: Stopped target ignition-complete.target. Feb 8 23:28:53.812065 systemd[1]: Stopped target ignition-diskful.target. Feb 8 23:28:53.813361 systemd[1]: Stopped target initrd-root-device.target. Feb 8 23:28:53.814723 systemd[1]: Stopped target remote-fs.target. Feb 8 23:28:53.817083 systemd[1]: Stopped target remote-fs-pre.target. Feb 8 23:28:53.818401 systemd[1]: Stopped target sysinit.target. Feb 8 23:28:53.819791 systemd[1]: Stopped target local-fs.target. Feb 8 23:28:53.821099 systemd[1]: Stopped target local-fs-pre.target. Feb 8 23:28:53.822402 systemd[1]: Stopped target swap.target. Feb 8 23:28:53.823688 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 8 23:28:53.831808 kernel: audit: type=1131 audit(1707434933.824:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:28:53.824000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:28:53.823931 systemd[1]: Stopped dracut-pre-mount.service. Feb 8 23:28:53.825128 systemd[1]: Stopped target cryptsetup.target. Feb 8 23:28:53.840410 kernel: audit: type=1131 audit(1707434933.833:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:28:53.833000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:28:53.832419 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 8 23:28:53.840000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 8 23:28:53.832596 systemd[1]: Stopped dracut-initqueue.service. Feb 8 23:28:53.842000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:28:53.833783 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 8 23:28:53.833934 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Feb 8 23:28:53.852000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:28:53.853000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:28:53.854905 iscsid[638]: iscsid shutting down. Feb 8 23:28:53.841009 systemd[1]: ignition-files.service: Deactivated successfully. Feb 8 23:28:53.841148 systemd[1]: Stopped ignition-files.service. Feb 8 23:28:53.843492 systemd[1]: Stopping ignition-mount.service... Feb 8 23:28:53.849331 systemd[1]: Stopping iscsid.service... Feb 8 23:28:53.850824 systemd[1]: Stopping sysroot-boot.service... Feb 8 23:28:53.851306 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 8 23:28:53.851650 systemd[1]: Stopped systemd-udev-trigger.service. Feb 8 23:28:53.852495 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 8 23:28:53.852719 systemd[1]: Stopped dracut-pre-trigger.service. Feb 8 23:28:53.855989 systemd[1]: iscsid.service: Deactivated successfully. Feb 8 23:28:53.856135 systemd[1]: Stopped iscsid.service. Feb 8 23:28:53.865000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:28:53.867164 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 8 23:28:53.867975 systemd[1]: Finished initrd-cleanup.service. Feb 8 23:28:53.868000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:28:53.868000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:28:53.872695 ignition[817]: INFO : Ignition 2.14.0 Feb 8 23:28:53.872695 ignition[817]: INFO : Stage: umount Feb 8 23:28:53.872695 ignition[817]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 8 23:28:53.872695 ignition[817]: DEBUG : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Feb 8 23:28:53.874590 systemd[1]: Stopping iscsiuio.service... Feb 8 23:28:53.877000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:28:53.877650 ignition[817]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Feb 8 23:28:53.876042 systemd[1]: iscsiuio.service: Deactivated successfully. Feb 8 23:28:53.876142 systemd[1]: Stopped iscsiuio.service. 
Feb 8 23:28:53.884083 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 8 23:28:53.887056 ignition[817]: INFO : umount: umount passed Feb 8 23:28:53.887056 ignition[817]: INFO : Ignition finished successfully Feb 8 23:28:53.887000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:28:53.886488 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 8 23:28:53.889000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:28:53.886586 systemd[1]: Stopped ignition-mount.service. Feb 8 23:28:53.890000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:28:53.888059 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 8 23:28:53.890000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:28:53.888149 systemd[1]: Stopped sysroot-boot.service. Feb 8 23:28:53.891000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:28:53.889471 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 8 23:28:53.889527 systemd[1]: Stopped ignition-disks.service. Feb 8 23:28:53.893000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:28:53.890342 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 8 23:28:53.890379 systemd[1]: Stopped ignition-kargs.service. Feb 8 23:28:53.891175 systemd[1]: ignition-fetch.service: Deactivated successfully. Feb 8 23:28:53.891234 systemd[1]: Stopped ignition-fetch.service. Feb 8 23:28:53.892167 systemd[1]: Stopped target network.target. Feb 8 23:28:53.893145 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 8 23:28:53.893184 systemd[1]: Stopped ignition-fetch-offline.service. Feb 8 23:28:53.894078 systemd[1]: Stopped target paths.target. Feb 8 23:28:53.894958 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 8 23:28:53.898258 systemd[1]: Stopped systemd-ask-password-console.path. Feb 8 23:28:53.899150 systemd[1]: Stopped target slices.target. Feb 8 23:28:53.900102 systemd[1]: Stopped target sockets.target. Feb 8 23:28:53.901207 systemd[1]: iscsid.socket: Deactivated successfully. Feb 8 23:28:53.903000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:28:53.901265 systemd[1]: Closed iscsid.socket. Feb 8 23:28:53.904000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:28:53.902114 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 8 23:28:53.902144 systemd[1]: Closed iscsiuio.socket. 
Feb 8 23:28:53.902988 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 8 23:28:53.903049 systemd[1]: Stopped ignition-setup.service. Feb 8 23:28:53.903954 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 8 23:28:53.903992 systemd[1]: Stopped initrd-setup-root.service. Feb 8 23:28:53.904952 systemd[1]: Stopping systemd-networkd.service... Feb 8 23:28:53.906135 systemd[1]: Stopping systemd-resolved.service... Feb 8 23:28:53.910431 systemd-networkd[633]: eth0: DHCPv6 lease lost Feb 8 23:28:53.912535 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 8 23:28:53.912641 systemd[1]: Stopped systemd-resolved.service. Feb 8 23:28:53.913000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:28:53.914545 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 8 23:28:53.914644 systemd[1]: Stopped systemd-networkd.service. Feb 8 23:28:53.915000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:28:53.916000 audit: BPF prog-id=6 op=UNLOAD Feb 8 23:28:53.916878 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 8 23:28:53.916924 systemd[1]: Closed systemd-networkd.socket. Feb 8 23:28:53.918000 audit: BPF prog-id=9 op=UNLOAD Feb 8 23:28:53.919000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:28:53.920000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:28:53.918727 systemd[1]: Stopping network-cleanup.service... Feb 8 23:28:53.920000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:28:53.919168 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 8 23:28:53.919254 systemd[1]: Stopped parse-ip-for-networkd.service. Feb 8 23:28:53.919761 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 8 23:28:53.919807 systemd[1]: Stopped systemd-sysctl.service. Feb 8 23:28:53.920415 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 8 23:28:53.920451 systemd[1]: Stopped systemd-modules-load.service. Feb 8 23:28:53.930000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:28:53.921020 systemd[1]: Stopping systemd-udevd.service... Feb 8 23:28:53.929191 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Feb 8 23:28:53.929905 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 8 23:28:53.930056 systemd[1]: Stopped systemd-udevd.service. Feb 8 23:28:53.931705 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 8 23:28:53.937000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 8 23:28:53.931778 systemd[1]: Closed systemd-udevd-control.socket. Feb 8 23:28:53.938000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:28:53.933430 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 8 23:28:53.939000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:28:53.933489 systemd[1]: Closed systemd-udevd-kernel.socket. Feb 8 23:28:53.936486 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 8 23:28:53.936581 systemd[1]: Stopped dracut-pre-udev.service. Feb 8 23:28:53.943000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:28:53.944000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:28:53.945000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:28:53.937419 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 8 23:28:53.937486 systemd[1]: Stopped dracut-cmdline.service. Feb 8 23:28:53.949000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:28:53.938539 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 8 23:28:53.938609 systemd[1]: Stopped dracut-cmdline-ask.service. Feb 8 23:28:53.951000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:28:53.951000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:28:53.940633 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Feb 8 23:28:53.943554 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Feb 8 23:28:53.943619 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. Feb 8 23:28:53.944452 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 8 23:28:53.944496 systemd[1]: Stopped kmod-static-nodes.service. Feb 8 23:28:53.945235 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 8 23:28:53.945274 systemd[1]: Stopped systemd-vconsole-setup.service. Feb 8 23:28:53.946965 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Feb 8 23:28:53.948270 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 8 23:28:53.948920 systemd[1]: Stopped network-cleanup.service. Feb 8 23:28:53.950298 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 8 23:28:53.950385 systemd[1]: Finished initrd-udevadm-cleanup-db.service. 
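From "Stopping ignition-mount.service..." down to initrd-udevadm-cleanup-db, every unit transition above is mirrored by an audit SERVICE_START/SERVICE_STOP record, so the teardown order can be recovered mechanically from a capture like this one. A small helper (the regex approximates the record format shown here, nothing more):

    import re

    # Matches records like:
    # audit[1]: SERVICE_STOP pid=1 uid=0 ... msg='unit=ignition-mount comm="systemd" ...'
    AUDIT_RE = re.compile(r"audit\[\d+\]: (SERVICE_(?:START|STOP)) .*?unit=([\w@.\\-]+)")

    def unit_events(lines):
        for line in lines:
            match = AUDIT_RE.search(line)
            if match:
                yield match.group(1), match.group(2)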
Feb 8 23:28:53.951643 systemd[1]: Reached target initrd-switch-root.target. Feb 8 23:28:53.953880 systemd[1]: Starting initrd-switch-root.service... Feb 8 23:28:53.973983 systemd[1]: Switching root. Feb 8 23:28:53.994197 systemd-journald[184]: Journal stopped Feb 8 23:28:59.643179 systemd-journald[184]: Received SIGTERM from PID 1 (systemd). Feb 8 23:28:59.643452 kernel: SELinux: Class mctp_socket not defined in policy. Feb 8 23:28:59.643494 kernel: SELinux: Class anon_inode not defined in policy. Feb 8 23:28:59.643534 kernel: SELinux: the above unknown classes and permissions will be allowed Feb 8 23:28:59.643565 kernel: SELinux: policy capability network_peer_controls=1 Feb 8 23:28:59.643594 kernel: SELinux: policy capability open_perms=1 Feb 8 23:28:59.643635 kernel: SELinux: policy capability extended_socket_class=1 Feb 8 23:28:59.643665 kernel: SELinux: policy capability always_check_network=0 Feb 8 23:28:59.643694 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 8 23:28:59.643730 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 8 23:28:59.643759 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 8 23:28:59.643787 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 8 23:28:59.643819 systemd[1]: Successfully loaded SELinux policy in 162.258ms. Feb 8 23:28:59.643872 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.526ms. Feb 8 23:28:59.643911 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Feb 8 23:28:59.643944 systemd[1]: Detected virtualization kvm. Feb 8 23:28:59.643975 systemd[1]: Detected architecture x86-64. Feb 8 23:28:59.644013 systemd[1]: Detected first boot. Feb 8 23:28:59.644045 systemd[1]: Hostname set to <ci-3510-3-2-9-f62ee4a992.novalocal>. Feb 8 23:28:59.644079 systemd[1]: Initializing machine ID from VM UUID. Feb 8 23:28:59.644110 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Feb 8 23:28:59.644141 systemd[1]: Populated /etc with preset unit settings. Feb 8 23:28:59.644173 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 8 23:28:59.644299 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 8 23:28:59.644367 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
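The systemd 252 banner above packs the compile-time configuration into +FLAG/-FLAG tokens plus key=value settings, which is convenient to split when comparing builds. A sketch:

    def parse_features(banner):
        # '+PAM +AUDIT ... -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified'
        flags, options = {}, {}
        for token in banner.split():
            if token[0] in "+-":
                flags[token[1:]] = token[0] == "+"
            elif "=" in token:
                key, value = token.split("=", 1)
                options[key] = value
        return flags, options

For this boot, parse_features would report SELinux and audit support compiled in (+SELINUX, +AUDIT) and the unified cgroup hierarchy as the default.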
Feb 8 23:28:59.644383 kernel: kauditd_printk_skb: 49 callbacks suppressed Feb 8 23:28:59.644398 kernel: audit: type=1334 audit(1707434939.112:90): prog-id=12 op=LOAD Feb 8 23:28:59.644409 kernel: audit: type=1334 audit(1707434939.112:91): prog-id=3 op=UNLOAD Feb 8 23:28:59.644421 kernel: audit: type=1334 audit(1707434939.112:92): prog-id=13 op=LOAD Feb 8 23:28:59.644432 kernel: audit: type=1334 audit(1707434939.112:93): prog-id=14 op=LOAD Feb 8 23:28:59.644444 kernel: audit: type=1334 audit(1707434939.112:94): prog-id=4 op=UNLOAD Feb 8 23:28:59.644456 kernel: audit: type=1334 audit(1707434939.112:95): prog-id=5 op=UNLOAD Feb 8 23:28:59.644467 kernel: audit: type=1334 audit(1707434939.115:96): prog-id=15 op=LOAD Feb 8 23:28:59.644478 kernel: audit: type=1334 audit(1707434939.115:97): prog-id=12 op=UNLOAD Feb 8 23:28:59.644488 kernel: audit: type=1334 audit(1707434939.117:98): prog-id=16 op=LOAD Feb 8 23:28:59.644499 kernel: audit: type=1334 audit(1707434939.118:99): prog-id=17 op=LOAD Feb 8 23:28:59.644510 systemd[1]: initrd-switch-root.service: Deactivated successfully. Feb 8 23:28:59.644523 systemd[1]: Stopped initrd-switch-root.service. Feb 8 23:28:59.644535 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Feb 8 23:28:59.644548 systemd[1]: Created slice system-addon\x2dconfig.slice. Feb 8 23:28:59.644562 systemd[1]: Created slice system-addon\x2drun.slice. Feb 8 23:28:59.644575 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice. Feb 8 23:28:59.644587 systemd[1]: Created slice system-getty.slice. Feb 8 23:28:59.644598 systemd[1]: Created slice system-modprobe.slice. Feb 8 23:28:59.644611 systemd[1]: Created slice system-serial\x2dgetty.slice. Feb 8 23:28:59.644623 systemd[1]: Created slice system-system\x2dcloudinit.slice. Feb 8 23:28:59.644637 systemd[1]: Created slice system-systemd\x2dfsck.slice. Feb 8 23:28:59.644650 systemd[1]: Created slice user.slice. Feb 8 23:28:59.644661 systemd[1]: Started systemd-ask-password-console.path. Feb 8 23:28:59.644674 systemd[1]: Started systemd-ask-password-wall.path. Feb 8 23:28:59.644685 systemd[1]: Set up automount boot.automount. Feb 8 23:28:59.644697 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Feb 8 23:28:59.644709 systemd[1]: Stopped target initrd-switch-root.target. Feb 8 23:28:59.644721 systemd[1]: Stopped target initrd-fs.target. Feb 8 23:28:59.644733 systemd[1]: Stopped target initrd-root-fs.target. Feb 8 23:28:59.644747 systemd[1]: Reached target integritysetup.target. Feb 8 23:28:59.644758 systemd[1]: Reached target remote-cryptsetup.target. Feb 8 23:28:59.644770 systemd[1]: Reached target remote-fs.target. Feb 8 23:28:59.644781 systemd[1]: Reached target slices.target. Feb 8 23:28:59.644793 systemd[1]: Reached target swap.target. Feb 8 23:28:59.644805 systemd[1]: Reached target torcx.target. Feb 8 23:28:59.644816 systemd[1]: Reached target veritysetup.target. Feb 8 23:28:59.644828 systemd[1]: Listening on systemd-coredump.socket. Feb 8 23:28:59.644840 systemd[1]: Listening on systemd-initctl.socket. Feb 8 23:28:59.644852 systemd[1]: Listening on systemd-networkd.socket. Feb 8 23:28:59.644866 systemd[1]: Listening on systemd-udevd-control.socket. Feb 8 23:28:59.644877 systemd[1]: Listening on systemd-udevd-kernel.socket. Feb 8 23:28:59.644889 systemd[1]: Listening on systemd-userdbd.socket. Feb 8 23:28:59.644900 systemd[1]: Mounting dev-hugepages.mount... Feb 8 23:28:59.644912 systemd[1]: Mounting dev-mqueue.mount... 
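Slice names such as system-addon\x2dconfig.slice use systemd's unit-name escaping: '-' separates path components, so a literal dash inside a component is hex-escaped as \x2d. A simplified rendition of that escaping (the full rules, e.g. for '/', ':', and leading dots, are in systemd-escape(1)):

    def unit_escape(component):
        # Approximation: keep alphanumerics, '_' and '.',
        # hex-escape everything else.
        keep = set("abcdefghijklmnopqrstuvwxyz"
                   "ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789_.")
        return "".join(c if c in keep else "\\x%02x" % ord(c)
                       for c in component)

    assert unit_escape("addon-config") == r"addon\x2dconfig"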
Feb 8 23:28:59.644923 systemd[1]: Mounting media.mount... Feb 8 23:28:59.644941 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 8 23:28:59.644953 systemd[1]: Mounting sys-kernel-debug.mount... Feb 8 23:28:59.644964 systemd[1]: Mounting sys-kernel-tracing.mount... Feb 8 23:28:59.644977 systemd[1]: Mounting tmp.mount... Feb 8 23:28:59.644989 systemd[1]: Starting flatcar-tmpfiles.service... Feb 8 23:28:59.645001 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Feb 8 23:28:59.645013 systemd[1]: Starting kmod-static-nodes.service... Feb 8 23:28:59.645024 systemd[1]: Starting modprobe@configfs.service... Feb 8 23:28:59.645035 systemd[1]: Starting modprobe@dm_mod.service... Feb 8 23:28:59.645046 systemd[1]: Starting modprobe@drm.service... Feb 8 23:28:59.645058 systemd[1]: Starting modprobe@efi_pstore.service... Feb 8 23:28:59.645069 systemd[1]: Starting modprobe@fuse.service... Feb 8 23:28:59.645082 systemd[1]: Starting modprobe@loop.service... Feb 8 23:28:59.645095 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 8 23:28:59.645106 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Feb 8 23:28:59.645118 systemd[1]: Stopped systemd-fsck-root.service. Feb 8 23:28:59.645129 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Feb 8 23:28:59.645140 systemd[1]: Stopped systemd-fsck-usr.service. Feb 8 23:28:59.645152 systemd[1]: Stopped systemd-journald.service. Feb 8 23:28:59.645164 systemd[1]: Starting systemd-journald.service... Feb 8 23:28:59.645175 systemd[1]: Starting systemd-modules-load.service... Feb 8 23:28:59.645188 systemd[1]: Starting systemd-network-generator.service... Feb 8 23:28:59.645200 systemd[1]: Starting systemd-remount-fs.service... Feb 8 23:28:59.645212 systemd[1]: Starting systemd-udev-trigger.service... Feb 8 23:28:59.645257 systemd[1]: verity-setup.service: Deactivated successfully. Feb 8 23:28:59.645272 systemd[1]: Stopped verity-setup.service. Feb 8 23:28:59.645285 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 8 23:28:59.645297 systemd[1]: Mounted dev-hugepages.mount. Feb 8 23:28:59.645309 systemd[1]: Mounted dev-mqueue.mount. Feb 8 23:28:59.645321 systemd[1]: Mounted media.mount. Feb 8 23:28:59.645335 systemd[1]: Mounted sys-kernel-debug.mount. Feb 8 23:28:59.645346 systemd[1]: Mounted sys-kernel-tracing.mount. Feb 8 23:28:59.645358 systemd[1]: Mounted tmp.mount. Feb 8 23:28:59.645369 systemd[1]: Finished kmod-static-nodes.service. Feb 8 23:28:59.645381 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 8 23:28:59.645393 systemd[1]: Finished modprobe@dm_mod.service. Feb 8 23:28:59.645404 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 8 23:28:59.645416 systemd[1]: Finished modprobe@drm.service. Feb 8 23:28:59.645428 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 8 23:28:59.645441 kernel: loop: module loaded Feb 8 23:28:59.645452 systemd[1]: Finished modprobe@efi_pstore.service. Feb 8 23:28:59.645463 systemd[1]: Finished systemd-network-generator.service. Feb 8 23:28:59.645475 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 8 23:28:59.645486 systemd[1]: Finished modprobe@configfs.service. Feb 8 23:28:59.645499 systemd[1]: modprobe@loop.service: Deactivated successfully. 
Feb 8 23:28:59.645511 systemd[1]: Finished modprobe@loop.service. Feb 8 23:28:59.645523 systemd[1]: Finished systemd-remount-fs.service. Feb 8 23:28:59.645535 systemd[1]: Reached target network-pre.target. Feb 8 23:28:59.645548 systemd[1]: Mounting sys-kernel-config.mount... Feb 8 23:28:59.645560 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 8 23:28:59.645576 systemd-journald[913]: Journal started Feb 8 23:28:59.645634 systemd-journald[913]: Runtime Journal (/run/log/journal/f64988f81ad645c9a7e9a06b750df4b7) is 4.9M, max 39.5M, 34.5M free. Feb 8 23:28:54.414000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Feb 8 23:28:54.502000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 8 23:28:54.502000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 8 23:28:54.502000 audit: BPF prog-id=10 op=LOAD Feb 8 23:28:54.502000 audit: BPF prog-id=10 op=UNLOAD Feb 8 23:28:54.503000 audit: BPF prog-id=11 op=LOAD Feb 8 23:28:54.503000 audit: BPF prog-id=11 op=UNLOAD Feb 8 23:28:54.645000 audit[850]: AVC avc: denied { associate } for pid=850 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Feb 8 23:28:54.645000 audit[850]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c0001178d2 a1=c00002ae40 a2=c000029100 a3=32 items=0 ppid=833 pid=850 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:28:54.645000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Feb 8 23:28:54.647000 audit[850]: AVC avc: denied { associate } for pid=850 comm="torcx-generator" name="bin" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Feb 8 23:28:54.647000 audit[850]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c0001179a9 a2=1ed a3=0 items=2 ppid=833 pid=850 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:28:54.647000 audit: CWD cwd="/" Feb 8 23:28:54.647000 audit: PATH item=0 name=(null) inode=2 dev=00:1a mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:28:54.647000 audit: PATH item=1 name=(null) inode=3 dev=00:1a mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:28:54.647000 audit: PROCTITLE 
proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Feb 8 23:28:59.112000 audit: BPF prog-id=12 op=LOAD Feb 8 23:28:59.112000 audit: BPF prog-id=3 op=UNLOAD Feb 8 23:28:59.112000 audit: BPF prog-id=13 op=LOAD Feb 8 23:28:59.112000 audit: BPF prog-id=14 op=LOAD Feb 8 23:28:59.112000 audit: BPF prog-id=4 op=UNLOAD Feb 8 23:28:59.112000 audit: BPF prog-id=5 op=UNLOAD Feb 8 23:28:59.115000 audit: BPF prog-id=15 op=LOAD Feb 8 23:28:59.115000 audit: BPF prog-id=12 op=UNLOAD Feb 8 23:28:59.117000 audit: BPF prog-id=16 op=LOAD Feb 8 23:28:59.118000 audit: BPF prog-id=17 op=LOAD Feb 8 23:28:59.118000 audit: BPF prog-id=13 op=UNLOAD Feb 8 23:28:59.118000 audit: BPF prog-id=14 op=UNLOAD Feb 8 23:28:59.121000 audit: BPF prog-id=18 op=LOAD Feb 8 23:28:59.121000 audit: BPF prog-id=15 op=UNLOAD Feb 8 23:28:59.121000 audit: BPF prog-id=19 op=LOAD Feb 8 23:28:59.125000 audit: BPF prog-id=20 op=LOAD Feb 8 23:28:59.125000 audit: BPF prog-id=16 op=UNLOAD Feb 8 23:28:59.125000 audit: BPF prog-id=17 op=UNLOAD Feb 8 23:28:59.126000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:28:59.132000 audit: BPF prog-id=18 op=UNLOAD Feb 8 23:28:59.135000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:28:59.135000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:28:59.431000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:28:59.437000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:28:59.441000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:28:59.441000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:28:59.443000 audit: BPF prog-id=21 op=LOAD Feb 8 23:28:59.444000 audit: BPF prog-id=22 op=LOAD Feb 8 23:28:59.445000 audit: BPF prog-id=23 op=LOAD Feb 8 23:28:59.445000 audit: BPF prog-id=19 op=UNLOAD Feb 8 23:28:59.445000 audit: BPF prog-id=20 op=UNLOAD Feb 8 23:28:59.557000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 8 23:28:59.600000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:28:59.604000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:28:59.604000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:28:59.608000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:28:59.608000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:28:59.615000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:28:59.615000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:28:59.617000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:28:59.620000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:28:59.620000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:28:59.624000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:28:59.624000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:28:59.626000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 8 23:28:59.639000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Feb 8 23:28:59.639000 audit[913]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=4 a1=7ffce8751b50 a2=4000 a3=7ffce8751bec items=0 ppid=1 pid=913 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:28:59.639000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Feb 8 23:28:54.641966 /usr/lib/systemd/system-generators/torcx-generator[850]: time="2024-02-08T23:28:54Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 8 23:28:59.110187 systemd[1]: Queued start job for default target multi-user.target. Feb 8 23:28:54.642989 /usr/lib/systemd/system-generators/torcx-generator[850]: time="2024-02-08T23:28:54Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Feb 8 23:28:59.110201 systemd[1]: Unnecessary job was removed for dev-vda6.device. Feb 8 23:28:54.643013 /usr/lib/systemd/system-generators/torcx-generator[850]: time="2024-02-08T23:28:54Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Feb 8 23:28:59.126610 systemd[1]: systemd-journald.service: Deactivated successfully. Feb 8 23:28:54.643049 /usr/lib/systemd/system-generators/torcx-generator[850]: time="2024-02-08T23:28:54Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Feb 8 23:28:54.643061 /usr/lib/systemd/system-generators/torcx-generator[850]: time="2024-02-08T23:28:54Z" level=debug msg="skipped missing lower profile" missing profile=oem Feb 8 23:28:54.643096 /usr/lib/systemd/system-generators/torcx-generator[850]: time="2024-02-08T23:28:54Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Feb 8 23:28:54.643111 /usr/lib/systemd/system-generators/torcx-generator[850]: time="2024-02-08T23:28:54Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Feb 8 23:28:54.643389 /usr/lib/systemd/system-generators/torcx-generator[850]: time="2024-02-08T23:28:54Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Feb 8 23:28:54.643431 /usr/lib/systemd/system-generators/torcx-generator[850]: time="2024-02-08T23:28:54Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Feb 8 23:28:54.643446 /usr/lib/systemd/system-generators/torcx-generator[850]: time="2024-02-08T23:28:54Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Feb 8 23:28:54.644326 /usr/lib/systemd/system-generators/torcx-generator[850]: time="2024-02-08T23:28:54Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Feb 8 23:28:54.644367 /usr/lib/systemd/system-generators/torcx-generator[850]: time="2024-02-08T23:28:54Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Feb 8 23:28:54.644389 
/usr/lib/systemd/system-generators/torcx-generator[850]: time="2024-02-08T23:28:54Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.2: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.2 Feb 8 23:28:54.644407 /usr/lib/systemd/system-generators/torcx-generator[850]: time="2024-02-08T23:28:54Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Feb 8 23:28:54.644427 /usr/lib/systemd/system-generators/torcx-generator[850]: time="2024-02-08T23:28:54Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.2: no such file or directory" path=/var/lib/torcx/store/3510.3.2 Feb 8 23:28:54.644443 /usr/lib/systemd/system-generators/torcx-generator[850]: time="2024-02-08T23:28:54Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Feb 8 23:28:58.547550 /usr/lib/systemd/system-generators/torcx-generator[850]: time="2024-02-08T23:28:58Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 8 23:28:58.548771 /usr/lib/systemd/system-generators/torcx-generator[850]: time="2024-02-08T23:28:58Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 8 23:28:58.549004 /usr/lib/systemd/system-generators/torcx-generator[850]: time="2024-02-08T23:28:58Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 8 23:28:58.549394 /usr/lib/systemd/system-generators/torcx-generator[850]: time="2024-02-08T23:28:58Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 8 23:28:58.549502 /usr/lib/systemd/system-generators/torcx-generator[850]: time="2024-02-08T23:28:58Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Feb 8 23:28:58.549629 /usr/lib/systemd/system-generators/torcx-generator[850]: time="2024-02-08T23:28:58Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Feb 8 23:28:59.680584 systemd[1]: Starting systemd-hwdb-update.service... Feb 8 23:28:59.680641 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 8 23:28:59.682487 systemd[1]: Starting systemd-random-seed.service... Feb 8 23:28:59.687036 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Feb 8 23:28:59.687074 systemd[1]: Started systemd-journald.service. Feb 8 23:28:59.686000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 8 23:28:59.687925 systemd[1]: Mounted sys-kernel-config.mount. Feb 8 23:28:59.690358 systemd[1]: Starting systemd-journal-flush.service... Feb 8 23:28:59.706579 kernel: fuse: init (API version 7.34) Feb 8 23:28:59.704000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:28:59.704000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:28:59.704135 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 8 23:28:59.704314 systemd[1]: Finished modprobe@fuse.service. Feb 8 23:28:59.706005 systemd[1]: Mounting sys-fs-fuse-connections.mount... Feb 8 23:28:59.712624 systemd[1]: Mounted sys-fs-fuse-connections.mount. Feb 8 23:28:59.740740 systemd[1]: Finished systemd-modules-load.service. Feb 8 23:28:59.741000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:28:59.743023 systemd[1]: Starting systemd-sysctl.service... Feb 8 23:28:59.750433 systemd-journald[913]: Time spent on flushing to /var/log/journal/f64988f81ad645c9a7e9a06b750df4b7 is 33.670ms for 1155 entries. Feb 8 23:28:59.750433 systemd-journald[913]: System Journal (/var/log/journal/f64988f81ad645c9a7e9a06b750df4b7) is 8.0M, max 584.8M, 576.8M free. Feb 8 23:28:59.926811 systemd-journald[913]: Received client request to flush runtime journal. Feb 8 23:28:59.778000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:28:59.784000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:28:59.880000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:28:59.893000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:28:59.922000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:28:59.928000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:28:59.777717 systemd[1]: Finished systemd-udev-trigger.service. Feb 8 23:28:59.929811 udevadm[957]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Feb 8 23:28:59.779833 systemd[1]: Starting systemd-udev-settle.service... 
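[Annotation] The torcx-generator records above end by sealing run state to /run/metadata/torcx as TORCX_* key/value pairs (TORCX_LOWER_PROFILES, TORCX_PROFILE_PATH, TORCX_BINDIR, TORCX_UNPACKDIR, ...). A minimal sketch of reading that file back; the path and key names come from the log itself, while the shell-style KEY="value" line format is an assumption based on systemd EnvironmentFile conventions:

```python
# Minimal sketch: read the torcx metadata env file sealed above
# (/run/metadata/torcx). The TORCX_* keys are the ones shown in the
# "system state sealed" record; the KEY="value" format is assumed.
def read_torcx_metadata(path="/run/metadata/torcx"):
    meta = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or "=" not in line:
                continue
            key, _, value = line.partition("=")
            meta[key] = value.strip('"')
    return meta

# e.g. read_torcx_metadata()["TORCX_BINDIR"] -> "/run/torcx/bin"
```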
Feb 8 23:28:59.784103 systemd[1]: Finished flatcar-tmpfiles.service. Feb 8 23:28:59.786196 systemd[1]: Starting systemd-sysusers.service... Feb 8 23:28:59.880362 systemd[1]: Finished systemd-random-seed.service. Feb 8 23:28:59.881008 systemd[1]: Reached target first-boot-complete.target. Feb 8 23:28:59.893516 systemd[1]: Finished systemd-sysctl.service. Feb 8 23:28:59.922565 systemd[1]: Finished systemd-sysusers.service. Feb 8 23:28:59.924567 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Feb 8 23:28:59.928360 systemd[1]: Finished systemd-journal-flush.service. Feb 8 23:28:59.968000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:28:59.967364 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Feb 8 23:29:00.526393 systemd[1]: Finished systemd-hwdb-update.service. Feb 8 23:29:00.527000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:29:00.528000 audit: BPF prog-id=24 op=LOAD Feb 8 23:29:00.529000 audit: BPF prog-id=25 op=LOAD Feb 8 23:29:00.529000 audit: BPF prog-id=7 op=UNLOAD Feb 8 23:29:00.529000 audit: BPF prog-id=8 op=UNLOAD Feb 8 23:29:00.532034 systemd[1]: Starting systemd-udevd.service... Feb 8 23:29:00.571626 systemd-udevd[964]: Using default interface naming scheme 'v252'. Feb 8 23:29:00.626534 systemd[1]: Started systemd-udevd.service. Feb 8 23:29:00.627000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:29:00.633000 audit: BPF prog-id=26 op=LOAD Feb 8 23:29:00.637031 systemd[1]: Starting systemd-networkd.service... Feb 8 23:29:00.656000 audit: BPF prog-id=27 op=LOAD Feb 8 23:29:00.657000 audit: BPF prog-id=28 op=LOAD Feb 8 23:29:00.657000 audit: BPF prog-id=29 op=LOAD Feb 8 23:29:00.659211 systemd[1]: Starting systemd-userdbd.service... Feb 8 23:29:00.707000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:29:00.707143 systemd[1]: Started systemd-userdbd.service. Feb 8 23:29:00.718764 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped. Feb 8 23:29:00.788754 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Feb 8 23:29:00.811000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:29:00.810664 systemd-networkd[976]: lo: Link UP Feb 8 23:29:00.810674 systemd-networkd[976]: lo: Gained carrier Feb 8 23:29:00.811174 systemd-networkd[976]: Enumeration completed Feb 8 23:29:00.811365 systemd[1]: Started systemd-networkd.service. Feb 8 23:29:00.813372 systemd-networkd[976]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Feb 8 23:29:00.815626 systemd-networkd[976]: eth0: Link UP Feb 8 23:29:00.815796 systemd-networkd[976]: eth0: Gained carrier Feb 8 23:29:00.820285 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Feb 8 23:29:00.827243 kernel: ACPI: button: Power Button [PWRF] Feb 8 23:29:00.832375 systemd-networkd[976]: eth0: DHCPv4 address 172.24.4.155/24, gateway 172.24.4.1 acquired from 172.24.4.1 Feb 8 23:29:00.836000 audit[965]: AVC avc: denied { confidentiality } for pid=965 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Feb 8 23:29:00.836000 audit[965]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=55da11ebab60 a1=32194 a2=7f15b99c2bc5 a3=5 items=108 ppid=964 pid=965 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:29:00.836000 audit: CWD cwd="/" Feb 8 23:29:00.836000 audit: PATH item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:29:00.836000 audit: PATH item=1 name=(null) inode=12080 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:29:00.836000 audit: PATH item=2 name=(null) inode=12080 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:29:00.836000 audit: PATH item=3 name=(null) inode=12081 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:29:00.836000 audit: PATH item=4 name=(null) inode=12080 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:29:00.836000 audit: PATH item=5 name=(null) inode=12082 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:29:00.836000 audit: PATH item=6 name=(null) inode=12080 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:29:00.836000 audit: PATH item=7 name=(null) inode=12083 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:29:00.836000 audit: PATH item=8 name=(null) inode=12083 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:29:00.836000 audit: PATH item=9 name=(null) inode=12084 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:29:00.836000 audit: PATH item=10 name=(null) inode=12083 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:29:00.836000 audit: PATH item=11 name=(null) inode=12085 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:29:00.836000 audit: PATH item=12 name=(null) inode=12083 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:29:00.836000 audit: PATH item=13 name=(null) inode=12086 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:29:00.836000 audit: PATH item=14 name=(null) inode=12083 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:29:00.836000 audit: PATH item=15 name=(null) inode=12087 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:29:00.836000 audit: PATH item=16 name=(null) inode=12083 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:29:00.836000 audit: PATH item=17 name=(null) inode=12088 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:29:00.836000 audit: PATH item=18 name=(null) inode=12080 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:29:00.836000 audit: PATH item=19 name=(null) inode=12089 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:29:00.836000 audit: PATH item=20 name=(null) inode=12089 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:29:00.836000 audit: PATH item=21 name=(null) inode=12090 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:29:00.836000 audit: PATH item=22 name=(null) inode=12089 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:29:00.836000 audit: PATH item=23 name=(null) inode=12091 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:29:00.836000 audit: PATH item=24 name=(null) inode=12089 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:29:00.836000 audit: PATH item=25 name=(null) inode=12092 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:29:00.836000 audit: PATH item=26 name=(null) inode=12089 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:29:00.836000 audit: PATH item=27 name=(null) inode=12093 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 
cap_frootid=0 Feb 8 23:29:00.836000 audit: PATH item=28 name=(null) inode=12089 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:29:00.836000 audit: PATH item=29 name=(null) inode=12094 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:29:00.836000 audit: PATH item=30 name=(null) inode=12080 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:29:00.836000 audit: PATH item=31 name=(null) inode=12095 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:29:00.836000 audit: PATH item=32 name=(null) inode=12095 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:29:00.836000 audit: PATH item=33 name=(null) inode=12096 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:29:00.836000 audit: PATH item=34 name=(null) inode=12095 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:29:00.836000 audit: PATH item=35 name=(null) inode=12097 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:29:00.836000 audit: PATH item=36 name=(null) inode=12095 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:29:00.836000 audit: PATH item=37 name=(null) inode=12098 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:29:00.836000 audit: PATH item=38 name=(null) inode=12095 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:29:00.836000 audit: PATH item=39 name=(null) inode=12099 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:29:00.836000 audit: PATH item=40 name=(null) inode=12095 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:29:00.836000 audit: PATH item=41 name=(null) inode=12100 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:29:00.836000 audit: PATH item=42 name=(null) inode=12080 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:29:00.836000 audit: PATH item=43 name=(null) inode=12101 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:29:00.836000 audit: PATH item=44 name=(null) inode=12101 dev=00:0b 
mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:29:00.836000 audit: PATH item=45 name=(null) inode=12102 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:29:00.836000 audit: PATH item=46 name=(null) inode=12101 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:29:00.836000 audit: PATH item=47 name=(null) inode=12103 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:29:00.836000 audit: PATH item=48 name=(null) inode=12101 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:29:00.836000 audit: PATH item=49 name=(null) inode=12104 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:29:00.836000 audit: PATH item=50 name=(null) inode=12101 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:29:00.836000 audit: PATH item=51 name=(null) inode=12105 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:29:00.836000 audit: PATH item=52 name=(null) inode=12101 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:29:00.836000 audit: PATH item=53 name=(null) inode=12106 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:29:00.836000 audit: PATH item=54 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:29:00.836000 audit: PATH item=55 name=(null) inode=12107 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:29:00.836000 audit: PATH item=56 name=(null) inode=12107 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:29:00.836000 audit: PATH item=57 name=(null) inode=12108 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:29:00.836000 audit: PATH item=58 name=(null) inode=12107 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:29:00.836000 audit: PATH item=59 name=(null) inode=12109 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:29:00.836000 audit: PATH item=60 name=(null) inode=12107 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT 
cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:29:00.836000 audit: PATH item=61 name=(null) inode=12110 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:29:00.836000 audit: PATH item=62 name=(null) inode=12110 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:29:00.836000 audit: PATH item=63 name=(null) inode=12111 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:29:00.836000 audit: PATH item=64 name=(null) inode=12110 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:29:00.836000 audit: PATH item=65 name=(null) inode=12112 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:29:00.836000 audit: PATH item=66 name=(null) inode=12110 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:29:00.836000 audit: PATH item=67 name=(null) inode=12113 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:29:00.836000 audit: PATH item=68 name=(null) inode=12110 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:29:00.836000 audit: PATH item=69 name=(null) inode=12114 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:29:00.836000 audit: PATH item=70 name=(null) inode=12110 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:29:00.836000 audit: PATH item=71 name=(null) inode=12115 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:29:00.836000 audit: PATH item=72 name=(null) inode=12107 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:29:00.836000 audit: PATH item=73 name=(null) inode=12116 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:29:00.836000 audit: PATH item=74 name=(null) inode=12116 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:29:00.836000 audit: PATH item=75 name=(null) inode=12117 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:29:00.836000 audit: PATH item=76 name=(null) inode=12116 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:29:00.836000 audit: PATH 
item=77 name=(null) inode=12118 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:29:00.836000 audit: PATH item=78 name=(null) inode=12116 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:29:00.836000 audit: PATH item=79 name=(null) inode=12119 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:29:00.836000 audit: PATH item=80 name=(null) inode=12116 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:29:00.836000 audit: PATH item=81 name=(null) inode=12120 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:29:00.836000 audit: PATH item=82 name=(null) inode=12116 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:29:00.836000 audit: PATH item=83 name=(null) inode=12121 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:29:00.836000 audit: PATH item=84 name=(null) inode=12107 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:29:00.836000 audit: PATH item=85 name=(null) inode=12122 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:29:00.836000 audit: PATH item=86 name=(null) inode=12122 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:29:00.836000 audit: PATH item=87 name=(null) inode=12123 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:29:00.836000 audit: PATH item=88 name=(null) inode=12122 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:29:00.836000 audit: PATH item=89 name=(null) inode=12124 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:29:00.836000 audit: PATH item=90 name=(null) inode=12122 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:29:00.836000 audit: PATH item=91 name=(null) inode=12125 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:29:00.836000 audit: PATH item=92 name=(null) inode=12122 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:29:00.836000 audit: PATH item=93 name=(null) inode=12126 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:29:00.836000 audit: PATH item=94 name=(null) inode=12122 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:29:00.836000 audit: PATH item=95 name=(null) inode=12127 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:29:00.836000 audit: PATH item=96 name=(null) inode=12107 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:29:00.836000 audit: PATH item=97 name=(null) inode=12128 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:29:00.836000 audit: PATH item=98 name=(null) inode=12128 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:29:00.836000 audit: PATH item=99 name=(null) inode=12129 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:29:00.836000 audit: PATH item=100 name=(null) inode=12128 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:29:00.836000 audit: PATH item=101 name=(null) inode=12130 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:29:00.836000 audit: PATH item=102 name=(null) inode=12128 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:29:00.836000 audit: PATH item=103 name=(null) inode=12131 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:29:00.836000 audit: PATH item=104 name=(null) inode=12128 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:29:00.836000 audit: PATH item=105 name=(null) inode=12132 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:29:00.836000 audit: PATH item=106 name=(null) inode=12128 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:29:00.836000 audit: PATH item=107 name=(null) inode=12133 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:29:00.836000 audit: PROCTITLE proctitle="(udev-worker)" Feb 8 23:29:00.849259 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Feb 8 23:29:00.877531 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Feb 8 23:29:00.882250 kernel: mousedev: PS/2 mouse device common for all mice Feb 8 23:29:00.928715 systemd[1]: Finished 
systemd-udev-settle.service. Feb 8 23:29:00.928000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:29:00.930593 systemd[1]: Starting lvm2-activation-early.service... Feb 8 23:29:00.958599 lvm[993]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 8 23:29:00.985722 systemd[1]: Finished lvm2-activation-early.service. Feb 8 23:29:00.986000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:29:00.987161 systemd[1]: Reached target cryptsetup.target. Feb 8 23:29:00.990920 systemd[1]: Starting lvm2-activation.service... Feb 8 23:29:00.995033 lvm[994]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 8 23:29:01.019651 systemd[1]: Finished lvm2-activation.service. Feb 8 23:29:01.020000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:29:01.021170 systemd[1]: Reached target local-fs-pre.target. Feb 8 23:29:01.022437 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 8 23:29:01.022508 systemd[1]: Reached target local-fs.target. Feb 8 23:29:01.023699 systemd[1]: Reached target machines.target. Feb 8 23:29:01.027965 systemd[1]: Starting ldconfig.service... Feb 8 23:29:01.031410 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Feb 8 23:29:01.031519 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 8 23:29:01.033956 systemd[1]: Starting systemd-boot-update.service... Feb 8 23:29:01.037544 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Feb 8 23:29:01.045478 systemd[1]: Starting systemd-machine-id-commit.service... Feb 8 23:29:01.047079 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met. Feb 8 23:29:01.047197 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met. Feb 8 23:29:01.052081 systemd[1]: Starting systemd-tmpfiles-setup.service... Feb 8 23:29:01.076814 systemd[1]: boot.automount: Got automount request for /boot, triggered by 996 (bootctl) Feb 8 23:29:01.079860 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Feb 8 23:29:01.135537 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Feb 8 23:29:01.135000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:29:01.146346 systemd-tmpfiles[999]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Feb 8 23:29:01.161593 systemd-tmpfiles[999]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. 
Feb 8 23:29:01.163327 systemd-tmpfiles[999]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 8 23:29:01.846000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:29:01.843977 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 8 23:29:01.845542 systemd[1]: Finished systemd-machine-id-commit.service. Feb 8 23:29:02.036679 systemd-fsck[1005]: fsck.fat 4.2 (2021-01-31) Feb 8 23:29:02.036679 systemd-fsck[1005]: /dev/vda1: 789 files, 115332/258078 clusters Feb 8 23:29:02.043000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:29:02.042358 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Feb 8 23:29:02.047069 systemd[1]: Mounting boot.mount... Feb 8 23:29:02.072143 systemd[1]: Mounted boot.mount. Feb 8 23:29:02.112487 systemd[1]: Finished systemd-boot-update.service. Feb 8 23:29:02.113000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:29:02.190065 systemd[1]: Finished systemd-tmpfiles-setup.service. Feb 8 23:29:02.190000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:29:02.192425 systemd[1]: Starting audit-rules.service... Feb 8 23:29:02.194141 systemd[1]: Starting clean-ca-certificates.service... Feb 8 23:29:02.197945 systemd[1]: Starting systemd-journal-catalog-update.service... Feb 8 23:29:02.203000 audit: BPF prog-id=30 op=LOAD Feb 8 23:29:02.205790 systemd[1]: Starting systemd-resolved.service... Feb 8 23:29:02.207000 audit: BPF prog-id=31 op=LOAD Feb 8 23:29:02.210722 systemd[1]: Starting systemd-timesyncd.service... Feb 8 23:29:02.213771 systemd[1]: Starting systemd-update-utmp.service... Feb 8 23:29:02.220978 systemd[1]: Finished clean-ca-certificates.service. Feb 8 23:29:02.221000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:29:02.221670 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 8 23:29:02.238000 audit[1014]: SYSTEM_BOOT pid=1014 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Feb 8 23:29:02.240086 systemd[1]: Finished systemd-update-utmp.service. Feb 8 23:29:02.240000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:29:02.281778 systemd[1]: Finished systemd-journal-catalog-update.service. 
Feb 8 23:29:02.282000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:29:02.317091 augenrules[1028]: No rules Feb 8 23:29:02.316000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Feb 8 23:29:02.316000 audit[1028]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffc8ee99770 a2=420 a3=0 items=0 ppid=1008 pid=1028 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:29:02.316000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Feb 8 23:29:02.318172 systemd[1]: Finished audit-rules.service. Feb 8 23:29:02.326814 systemd[1]: Started systemd-timesyncd.service. Feb 8 23:29:02.327575 systemd[1]: Reached target time-set.target. Feb 8 23:29:02.347745 systemd-timesyncd[1012]: Contacted time server 45.128.41.10:123 (0.flatcar.pool.ntp.org). Feb 8 23:29:02.348152 systemd-timesyncd[1012]: Initial clock synchronization to Thu 2024-02-08 23:29:02.337011 UTC. Feb 8 23:29:02.348655 systemd-resolved[1011]: Positive Trust Anchors: Feb 8 23:29:02.348679 systemd-resolved[1011]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 8 23:29:02.348738 systemd-resolved[1011]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Feb 8 23:29:02.369886 systemd-resolved[1011]: Using system hostname 'ci-3510-3-2-9-f62ee4a992.novalocal'. Feb 8 23:29:02.373434 systemd[1]: Started systemd-resolved.service. Feb 8 23:29:02.374106 systemd[1]: Reached target network.target. Feb 8 23:29:02.374637 systemd[1]: Reached target nss-lookup.target. Feb 8 23:29:02.426567 systemd-networkd[976]: eth0: Gained IPv6LL Feb 8 23:29:02.538125 ldconfig[995]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 8 23:29:02.558734 systemd[1]: Finished ldconfig.service. Feb 8 23:29:02.561612 systemd[1]: Starting systemd-update-done.service... Feb 8 23:29:02.573891 systemd[1]: Finished systemd-update-done.service. Feb 8 23:29:02.578533 systemd[1]: Reached target sysinit.target. Feb 8 23:29:02.579140 systemd[1]: Started motdgen.path. Feb 8 23:29:02.579699 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Feb 8 23:29:02.580436 systemd[1]: Started logrotate.timer. Feb 8 23:29:02.580981 systemd[1]: Started mdadm.timer. Feb 8 23:29:02.581471 systemd[1]: Started systemd-tmpfiles-clean.timer. Feb 8 23:29:02.581969 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 8 23:29:02.582004 systemd[1]: Reached target paths.target. Feb 8 23:29:02.582603 systemd[1]: Reached target timers.target. Feb 8 23:29:02.583701 systemd[1]: Listening on dbus.socket. Feb 8 23:29:02.585574 systemd[1]: Starting docker.socket... 
Feb 8 23:29:02.590474 systemd[1]: Listening on sshd.socket. Feb 8 23:29:02.591262 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 8 23:29:02.591825 systemd[1]: Listening on docker.socket. Feb 8 23:29:02.592495 systemd[1]: Reached target sockets.target. Feb 8 23:29:02.593077 systemd[1]: Reached target basic.target. Feb 8 23:29:02.593739 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 8 23:29:02.593858 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 8 23:29:02.595104 systemd[1]: Starting containerd.service... Feb 8 23:29:02.597026 systemd[1]: Starting coreos-metadata-sshkeys@core.service... Feb 8 23:29:02.601012 systemd[1]: Starting dbus.service... Feb 8 23:29:02.603888 systemd[1]: Starting enable-oem-cloudinit.service... Feb 8 23:29:02.610864 systemd[1]: Starting extend-filesystems.service... Feb 8 23:29:02.622789 jq[1042]: false Feb 8 23:29:02.612707 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Feb 8 23:29:02.615192 systemd[1]: Starting motdgen.service... Feb 8 23:29:02.618333 systemd[1]: Starting prepare-cni-plugins.service... Feb 8 23:29:02.623153 systemd[1]: Starting prepare-critools.service... Feb 8 23:29:02.627013 systemd[1]: Starting prepare-helm.service... Feb 8 23:29:02.632550 systemd[1]: Starting ssh-key-proc-cmdline.service... Feb 8 23:29:02.637758 systemd[1]: Starting sshd-keygen.service... Feb 8 23:29:02.651562 systemd[1]: Starting systemd-logind.service... Feb 8 23:29:02.652909 extend-filesystems[1043]: Found vda Feb 8 23:29:02.653042 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 8 23:29:02.653214 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 8 23:29:02.654280 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 8 23:29:02.655838 extend-filesystems[1043]: Found vda1 Feb 8 23:29:02.657103 extend-filesystems[1043]: Found vda2 Feb 8 23:29:02.658175 extend-filesystems[1043]: Found vda3 Feb 8 23:29:02.658533 systemd[1]: Starting update-engine.service... Feb 8 23:29:02.660744 extend-filesystems[1043]: Found usr Feb 8 23:29:02.662115 extend-filesystems[1043]: Found vda4 Feb 8 23:29:02.662115 extend-filesystems[1043]: Found vda6 Feb 8 23:29:02.662115 extend-filesystems[1043]: Found vda7 Feb 8 23:29:02.662115 extend-filesystems[1043]: Found vda9 Feb 8 23:29:02.692886 extend-filesystems[1043]: Checking size of /dev/vda9 Feb 8 23:29:02.663425 systemd[1]: Starting update-ssh-keys-after-ignition.service... Feb 8 23:29:02.698551 jq[1061]: true Feb 8 23:29:02.667008 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 8 23:29:02.667195 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Feb 8 23:29:02.707539 tar[1064]: crictl Feb 8 23:29:02.670641 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 8 23:29:02.670805 systemd[1]: Finished ssh-key-proc-cmdline.service. 
Feb 8 23:29:02.708119 tar[1063]: ./ Feb 8 23:29:02.708119 tar[1063]: ./loopback Feb 8 23:29:02.716674 tar[1065]: linux-amd64/helm Feb 8 23:29:02.717384 systemd[1]: motdgen.service: Deactivated successfully. Feb 8 23:29:02.717778 jq[1074]: true Feb 8 23:29:02.717545 systemd[1]: Finished motdgen.service. Feb 8 23:29:02.743406 extend-filesystems[1043]: Resized partition /dev/vda9 Feb 8 23:29:02.752106 dbus-daemon[1039]: [system] SELinux support is enabled Feb 8 23:29:02.752296 systemd[1]: Started dbus.service. Feb 8 23:29:02.754854 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 8 23:29:02.754887 systemd[1]: Reached target system-config.target. Feb 8 23:29:02.755442 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 8 23:29:02.755464 systemd[1]: Reached target user-config.target. Feb 8 23:29:02.763688 extend-filesystems[1079]: resize2fs 1.46.5 (30-Dec-2021) Feb 8 23:29:02.801264 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 4635643 blocks Feb 8 23:29:02.869892 update_engine[1060]: I0208 23:29:02.868458 1060 main.cc:92] Flatcar Update Engine starting Feb 8 23:29:02.875323 systemd[1]: Started update-engine.service. Feb 8 23:29:02.938834 update_engine[1060]: I0208 23:29:02.875376 1060 update_check_scheduler.cc:74] Next update check in 4m6s Feb 8 23:29:02.878069 systemd[1]: Started locksmithd.service. Feb 8 23:29:02.931570 systemd-logind[1059]: Watching system buttons on /dev/input/event1 (Power Button) Feb 8 23:29:02.931593 systemd-logind[1059]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Feb 8 23:29:02.936184 systemd-logind[1059]: New seat seat0. Feb 8 23:29:02.942606 systemd[1]: Started systemd-logind.service. Feb 8 23:29:02.950296 env[1068]: time="2024-02-08T23:29:02.947208344Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Feb 8 23:29:02.965319 coreos-metadata[1038]: Feb 08 23:29:02.963 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 Feb 8 23:29:02.977248 kernel: EXT4-fs (vda9): resized filesystem to 4635643 Feb 8 23:29:03.106250 env[1068]: time="2024-02-08T23:29:02.992365646Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 8 23:29:03.106400 bash[1098]: Updated "/home/core/.ssh/authorized_keys" Feb 8 23:29:03.109762 extend-filesystems[1079]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Feb 8 23:29:03.109762 extend-filesystems[1079]: old_desc_blocks = 1, new_desc_blocks = 3 Feb 8 23:29:03.109762 extend-filesystems[1079]: The filesystem on /dev/vda9 is now 4635643 (4k) blocks long. Feb 8 23:29:03.107016 systemd[1]: Finished update-ssh-keys-after-ignition.service. Feb 8 23:29:03.127460 env[1068]: time="2024-02-08T23:29:03.094088650Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 8 23:29:03.127460 env[1068]: time="2024-02-08T23:29:03.115950068Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.148-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 8 23:29:03.127460 env[1068]: time="2024-02-08T23:29:03.116025069Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 8 23:29:03.127460 env[1068]: time="2024-02-08T23:29:03.116472451Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 8 23:29:03.127460 env[1068]: time="2024-02-08T23:29:03.116511899Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 8 23:29:03.127460 env[1068]: time="2024-02-08T23:29:03.116554972Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Feb 8 23:29:03.127460 env[1068]: time="2024-02-08T23:29:03.116584227Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 8 23:29:03.127460 env[1068]: time="2024-02-08T23:29:03.116736852Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 8 23:29:03.127460 env[1068]: time="2024-02-08T23:29:03.123354950Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 8 23:29:03.127460 env[1068]: time="2024-02-08T23:29:03.123596513Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 8 23:29:03.127725 tar[1063]: ./bandwidth Feb 8 23:29:03.127761 extend-filesystems[1043]: Resized filesystem in /dev/vda9 Feb 8 23:29:03.111613 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 8 23:29:03.128909 env[1068]: time="2024-02-08T23:29:03.123620953Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 8 23:29:03.128909 env[1068]: time="2024-02-08T23:29:03.123723397Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Feb 8 23:29:03.128909 env[1068]: time="2024-02-08T23:29:03.123740228Z" level=info msg="metadata content store policy set" policy=shared Feb 8 23:29:03.111869 systemd[1]: Finished extend-filesystems.service. Feb 8 23:29:03.140243 env[1068]: time="2024-02-08T23:29:03.138728107Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 8 23:29:03.140243 env[1068]: time="2024-02-08T23:29:03.138784416Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 8 23:29:03.140243 env[1068]: time="2024-02-08T23:29:03.138802167Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 8 23:29:03.140243 env[1068]: time="2024-02-08T23:29:03.138842916Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." 
type=io.containerd.service.v1 Feb 8 23:29:03.140243 env[1068]: time="2024-02-08T23:29:03.138864221Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 8 23:29:03.140243 env[1068]: time="2024-02-08T23:29:03.138882273Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 8 23:29:03.140243 env[1068]: time="2024-02-08T23:29:03.138898803Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 8 23:29:03.140243 env[1068]: time="2024-02-08T23:29:03.138918728Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 8 23:29:03.140243 env[1068]: time="2024-02-08T23:29:03.138937831Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Feb 8 23:29:03.140243 env[1068]: time="2024-02-08T23:29:03.138955703Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 8 23:29:03.140243 env[1068]: time="2024-02-08T23:29:03.138972163Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 8 23:29:03.140243 env[1068]: time="2024-02-08T23:29:03.138992397Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 8 23:29:03.140243 env[1068]: time="2024-02-08T23:29:03.139157918Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 8 23:29:03.140243 env[1068]: time="2024-02-08T23:29:03.139294483Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 8 23:29:03.140720 env[1068]: time="2024-02-08T23:29:03.139626125Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 8 23:29:03.140720 env[1068]: time="2024-02-08T23:29:03.139658043Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 8 23:29:03.140720 env[1068]: time="2024-02-08T23:29:03.139676185Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 8 23:29:03.140720 env[1068]: time="2024-02-08T23:29:03.139727848Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 8 23:29:03.140720 env[1068]: time="2024-02-08T23:29:03.139745060Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 8 23:29:03.140720 env[1068]: time="2024-02-08T23:29:03.139761529Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 8 23:29:03.140720 env[1068]: time="2024-02-08T23:29:03.139777078Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 8 23:29:03.140720 env[1068]: time="2024-02-08T23:29:03.139794909Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 8 23:29:03.140720 env[1068]: time="2024-02-08T23:29:03.139811750Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 8 23:29:03.140720 env[1068]: time="2024-02-08T23:29:03.139826858Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." 
type=io.containerd.grpc.v1 Feb 8 23:29:03.140720 env[1068]: time="2024-02-08T23:29:03.139841406Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 8 23:29:03.140720 env[1068]: time="2024-02-08T23:29:03.139859618Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 8 23:29:03.140720 env[1068]: time="2024-02-08T23:29:03.140021244Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 8 23:29:03.140720 env[1068]: time="2024-02-08T23:29:03.140043521Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 8 23:29:03.140720 env[1068]: time="2024-02-08T23:29:03.140058529Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 8 23:29:03.141099 env[1068]: time="2024-02-08T23:29:03.140074949Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 8 23:29:03.141099 env[1068]: time="2024-02-08T23:29:03.140094873Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Feb 8 23:29:03.141099 env[1068]: time="2024-02-08T23:29:03.140111333Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 8 23:29:03.141099 env[1068]: time="2024-02-08T23:29:03.140134531Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Feb 8 23:29:03.141099 env[1068]: time="2024-02-08T23:29:03.140176522Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Feb 8 23:29:03.142911 systemd[1]: Started containerd.service. 
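The snapshotter skips logged above come down to a filesystem probe: the btrfs plugin refuses to run because its root under /var/lib/containerd sits on ext4. A minimal Go sketch of the same check, assuming Linux and golang.org/x/sys/unix; the magic constant is copied from linux/magic.h rather than taken from containerd's actual helper:

```go
package main

import (
	"fmt"
	"os"

	"golang.org/x/sys/unix"
)

// BTRFS_SUPER_MAGIC from linux/magic.h (illustrative copy).
const btrfsSuperMagic uint32 = 0x9123683e

// isBtrfs reports whether the filesystem backing path is btrfs.
func isBtrfs(path string) (bool, error) {
	var st unix.Statfs_t
	if err := unix.Statfs(path, &st); err != nil {
		return false, err
	}
	return uint32(st.Type) == btrfsSuperMagic, nil
}

func main() {
	ok, err := isBtrfs("/var/lib/containerd/io.containerd.snapshotter.v1.btrfs")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// On the ext4 root in the log above this prints false, so the plugin is skipped.
	fmt.Println("btrfs:", ok)
}
```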
Feb 8 23:29:03.143326 env[1068]: time="2024-02-08T23:29:03.141555653Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 8 23:29:03.143326 env[1068]: time="2024-02-08T23:29:03.141639184Z" level=info msg="Connect containerd service" Feb 8 23:29:03.143326 env[1068]: time="2024-02-08T23:29:03.141685711Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 8 23:29:03.143326 env[1068]: time="2024-02-08T23:29:03.142363391Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 8 23:29:03.143326 env[1068]: time="2024-02-08T23:29:03.142663675Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 8 23:29:03.143326 env[1068]: time="2024-02-08T23:29:03.142710171Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Feb 8 23:29:03.143326 env[1068]: time="2024-02-08T23:29:03.142764507Z" level=info msg="containerd successfully booted in 0.234687s" Feb 8 23:29:03.143326 env[1068]: time="2024-02-08T23:29:03.143102568Z" level=info msg="Start subscribing containerd event" Feb 8 23:29:03.147407 env[1068]: time="2024-02-08T23:29:03.144667373Z" level=info msg="Start recovering state" Feb 8 23:29:03.147407 env[1068]: time="2024-02-08T23:29:03.145388647Z" level=info msg="Start event monitor" Feb 8 23:29:03.147407 env[1068]: time="2024-02-08T23:29:03.145422788Z" level=info msg="Start snapshots syncer" Feb 8 23:29:03.147407 env[1068]: time="2024-02-08T23:29:03.145441862Z" level=info msg="Start cni network conf syncer for default" Feb 8 23:29:03.147407 env[1068]: time="2024-02-08T23:29:03.145451954Z" level=info msg="Start streaming server" Feb 8 23:29:03.180569 coreos-metadata[1038]: Feb 08 23:29:03.180 INFO Fetch successful Feb 8 23:29:03.180757 coreos-metadata[1038]: Feb 08 23:29:03.180 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 Feb 8 23:29:03.190832 coreos-metadata[1038]: Feb 08 23:29:03.190 INFO Fetch successful Feb 8 23:29:03.197519 unknown[1038]: wrote ssh authorized keys file for user: core Feb 8 23:29:03.251107 tar[1063]: ./ptp Feb 8 23:29:03.295258 update-ssh-keys[1106]: Updated "/home/core/.ssh/authorized_keys" Feb 8 23:29:03.296312 systemd[1]: Finished coreos-metadata-sshkeys@core.service. Feb 8 23:29:03.349193 tar[1063]: ./vlan Feb 8 23:29:03.435052 tar[1063]: ./host-device Feb 8 23:29:03.518152 tar[1063]: ./tuning Feb 8 23:29:03.593223 tar[1063]: ./vrf Feb 8 23:29:03.673933 tar[1063]: ./sbr Feb 8 23:29:03.750875 tar[1063]: ./tap Feb 8 23:29:03.834323 tar[1063]: ./dhcp Feb 8 23:29:03.974921 tar[1065]: linux-amd64/LICENSE Feb 8 23:29:03.975399 tar[1065]: linux-amd64/README.md Feb 8 23:29:03.982984 systemd[1]: Finished prepare-helm.service. Feb 8 23:29:04.047172 systemd[1]: Finished prepare-critools.service. Feb 8 23:29:04.055426 tar[1063]: ./static Feb 8 23:29:04.084647 tar[1063]: ./firewall Feb 8 23:29:04.130965 tar[1063]: ./macvlan Feb 8 23:29:04.171315 tar[1063]: ./dummy Feb 8 23:29:04.211459 tar[1063]: ./bridge Feb 8 23:29:04.255923 tar[1063]: ./ipvlan Feb 8 23:29:04.296882 tar[1063]: ./portmap Feb 8 23:29:04.335480 tar[1063]: ./host-local Feb 8 23:29:04.383192 systemd[1]: Finished prepare-cni-plugins.service. Feb 8 23:29:04.390207 locksmithd[1099]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 8 23:29:05.373880 sshd_keygen[1072]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 8 23:29:05.397160 systemd[1]: Finished sshd-keygen.service. Feb 8 23:29:05.399720 systemd[1]: Starting issuegen.service... Feb 8 23:29:05.405881 systemd[1]: issuegen.service: Deactivated successfully. Feb 8 23:29:05.406054 systemd[1]: Finished issuegen.service. Feb 8 23:29:05.408577 systemd[1]: Starting systemd-user-sessions.service... Feb 8 23:29:05.415824 systemd[1]: Finished systemd-user-sessions.service. Feb 8 23:29:05.418555 systemd[1]: Started getty@tty1.service. Feb 8 23:29:05.420693 systemd[1]: Started serial-getty@ttyS0.service. Feb 8 23:29:05.421596 systemd[1]: Reached target getty.target. Feb 8 23:29:05.422387 systemd[1]: Reached target multi-user.target. Feb 8 23:29:05.424490 systemd[1]: Starting systemd-update-utmp-runlevel.service... Feb 8 23:29:05.435411 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. 
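With the daemon booted and serving on /run/containerd/containerd.sock, anything speaking its API can attach. A sketch using the github.com/containerd/containerd Go client; the k8s.io namespace is the one the CRI plugin populates later in this log:

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	// Dial the socket announced in the "serving..." lines above.
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
	v, err := client.Version(ctx)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(v.Version) // e.g. 1.6.16, matching the kubelet's report further down
}
```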
Feb 8 23:29:05.435733 systemd[1]: Finished systemd-update-utmp-runlevel.service. Feb 8 23:29:05.436953 systemd[1]: Startup finished in 938ms (kernel) + 12.441s (initrd) + 11.271s (userspace) = 24.650s. Feb 8 23:29:12.832636 systemd[1]: Created slice system-sshd.slice. Feb 8 23:29:12.834823 systemd[1]: Started sshd@0-172.24.4.155:22-172.24.4.1:32780.service. Feb 8 23:29:13.950140 sshd[1130]: Accepted publickey for core from 172.24.4.1 port 32780 ssh2: RSA SHA256:hSCdy28aHh0WFAXHFi8tWlQhiCOOiQrn91fhtzGNenI Feb 8 23:29:13.954704 sshd[1130]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:29:13.990492 systemd-logind[1059]: New session 1 of user core. Feb 8 23:29:13.994487 systemd[1]: Created slice user-500.slice. Feb 8 23:29:13.997146 systemd[1]: Starting user-runtime-dir@500.service... Feb 8 23:29:14.037752 systemd[1]: Finished user-runtime-dir@500.service. Feb 8 23:29:14.041567 systemd[1]: Starting user@500.service... Feb 8 23:29:14.049839 (systemd)[1133]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:29:14.204502 systemd[1133]: Queued start job for default target default.target. Feb 8 23:29:14.205130 systemd[1133]: Reached target paths.target. Feb 8 23:29:14.205151 systemd[1133]: Reached target sockets.target. Feb 8 23:29:14.205167 systemd[1133]: Reached target timers.target. Feb 8 23:29:14.205191 systemd[1133]: Reached target basic.target. Feb 8 23:29:14.205261 systemd[1133]: Reached target default.target. Feb 8 23:29:14.205290 systemd[1133]: Startup finished in 141ms. Feb 8 23:29:14.205960 systemd[1]: Started user@500.service. Feb 8 23:29:14.207107 systemd[1]: Started session-1.scope. Feb 8 23:29:14.595879 systemd[1]: Started sshd@1-172.24.4.155:22-172.24.4.1:35380.service. Feb 8 23:29:16.171182 sshd[1142]: Accepted publickey for core from 172.24.4.1 port 35380 ssh2: RSA SHA256:hSCdy28aHh0WFAXHFi8tWlQhiCOOiQrn91fhtzGNenI Feb 8 23:29:16.174830 sshd[1142]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:29:16.185836 systemd-logind[1059]: New session 2 of user core. Feb 8 23:29:16.187133 systemd[1]: Started session-2.scope. Feb 8 23:29:16.813271 sshd[1142]: pam_unix(sshd:session): session closed for user core Feb 8 23:29:16.820759 systemd[1]: sshd@1-172.24.4.155:22-172.24.4.1:35380.service: Deactivated successfully. Feb 8 23:29:16.822148 systemd[1]: session-2.scope: Deactivated successfully. Feb 8 23:29:16.823858 systemd-logind[1059]: Session 2 logged out. Waiting for processes to exit. Feb 8 23:29:16.826513 systemd[1]: Started sshd@2-172.24.4.155:22-172.24.4.1:35384.service. Feb 8 23:29:16.829780 systemd-logind[1059]: Removed session 2. Feb 8 23:29:17.924338 sshd[1148]: Accepted publickey for core from 172.24.4.1 port 35384 ssh2: RSA SHA256:hSCdy28aHh0WFAXHFi8tWlQhiCOOiQrn91fhtzGNenI Feb 8 23:29:17.928874 sshd[1148]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:29:17.939551 systemd-logind[1059]: New session 3 of user core. Feb 8 23:29:17.940308 systemd[1]: Started session-3.scope. Feb 8 23:29:18.587103 sshd[1148]: pam_unix(sshd:session): session closed for user core Feb 8 23:29:18.594127 systemd[1]: Started sshd@3-172.24.4.155:22-172.24.4.1:35400.service. Feb 8 23:29:18.598133 systemd[1]: sshd@2-172.24.4.155:22-172.24.4.1:35384.service: Deactivated successfully. Feb 8 23:29:18.599821 systemd[1]: session-3.scope: Deactivated successfully. Feb 8 23:29:18.602648 systemd-logind[1059]: Session 3 logged out. Waiting for processes to exit. 
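The core user's sessions in this stretch authenticate against the key that coreos-metadata fetched and installed above. That fetch is a plain HTTP GET against the link-local metadata service; a stdlib sketch of the same request (without the retry/"Attempt #1" logic the real agent logs):

```go
package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
	"time"
)

func main() {
	c := &http.Client{Timeout: 5 * time.Second}
	// Same endpoint coreos-metadata logged above.
	resp, err := c.Get("http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	key, err := io.ReadAll(resp.Body)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%s", key) // what ends up in /home/core/.ssh/authorized_keys
}
```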
Feb 8 23:29:18.605530 systemd-logind[1059]: Removed session 3. Feb 8 23:29:19.990346 sshd[1153]: Accepted publickey for core from 172.24.4.1 port 35400 ssh2: RSA SHA256:hSCdy28aHh0WFAXHFi8tWlQhiCOOiQrn91fhtzGNenI Feb 8 23:29:19.992936 sshd[1153]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:29:20.003520 systemd-logind[1059]: New session 4 of user core. Feb 8 23:29:20.004514 systemd[1]: Started session-4.scope. Feb 8 23:29:20.632467 sshd[1153]: pam_unix(sshd:session): session closed for user core Feb 8 23:29:20.641286 systemd[1]: Started sshd@4-172.24.4.155:22-172.24.4.1:35404.service. Feb 8 23:29:20.644143 systemd[1]: sshd@3-172.24.4.155:22-172.24.4.1:35400.service: Deactivated successfully. Feb 8 23:29:20.645756 systemd[1]: session-4.scope: Deactivated successfully. Feb 8 23:29:20.649440 systemd-logind[1059]: Session 4 logged out. Waiting for processes to exit. Feb 8 23:29:20.652208 systemd-logind[1059]: Removed session 4. Feb 8 23:29:22.010460 sshd[1159]: Accepted publickey for core from 172.24.4.1 port 35404 ssh2: RSA SHA256:hSCdy28aHh0WFAXHFi8tWlQhiCOOiQrn91fhtzGNenI Feb 8 23:29:22.013447 sshd[1159]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:29:22.027436 systemd-logind[1059]: New session 5 of user core. Feb 8 23:29:22.031539 systemd[1]: Started session-5.scope. Feb 8 23:29:22.456533 sudo[1163]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 8 23:29:22.457807 sudo[1163]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Feb 8 23:29:23.177436 systemd[1]: Starting systemd-networkd-wait-online.service... Feb 8 23:29:23.188672 systemd[1]: Finished systemd-networkd-wait-online.service. Feb 8 23:29:23.190845 systemd[1]: Reached target network-online.target. Feb 8 23:29:23.196547 systemd[1]: Starting docker.service... Feb 8 23:29:23.273997 env[1179]: time="2024-02-08T23:29:23.273931607Z" level=info msg="Starting up" Feb 8 23:29:23.278441 env[1179]: time="2024-02-08T23:29:23.278378175Z" level=info msg="parsed scheme: \"unix\"" module=grpc Feb 8 23:29:23.278514 env[1179]: time="2024-02-08T23:29:23.278437787Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Feb 8 23:29:23.278549 env[1179]: time="2024-02-08T23:29:23.278503278Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Feb 8 23:29:23.278549 env[1179]: time="2024-02-08T23:29:23.278533128Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Feb 8 23:29:23.282491 env[1179]: time="2024-02-08T23:29:23.282461559Z" level=info msg="parsed scheme: \"unix\"" module=grpc Feb 8 23:29:23.282613 env[1179]: time="2024-02-08T23:29:23.282598481Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Feb 8 23:29:23.282695 env[1179]: time="2024-02-08T23:29:23.282676403Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Feb 8 23:29:23.282757 env[1179]: time="2024-02-08T23:29:23.282742394Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Feb 8 23:29:23.295140 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1584418593-merged.mount: Deactivated successfully. Feb 8 23:29:23.449098 env[1179]: time="2024-02-08T23:29:23.448861451Z" level=info msg="Loading containers: start." 
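The paired "parsed scheme: \"unix\"" / "ClientConn switching balancer to \"pick_first\"" lines below are dockerd's gRPC client dialing its bundled containerd over the unix socket named in the log. A hedged sketch of an equivalent dial with google.golang.org/grpc (not docker's actual wiring):

```go
package main

import (
	"context"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// The unix:// target is what produces the "parsed scheme" log line.
	conn, err := grpc.DialContext(ctx,
		"unix:///var/run/docker/libcontainerd/docker-containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()),
		grpc.WithBlock())
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	log.Println("state:", conn.GetState())
}
```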
Feb 8 23:29:23.721364 kernel: Initializing XFRM netlink socket Feb 8 23:29:23.817605 env[1179]: time="2024-02-08T23:29:23.817558200Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Feb 8 23:29:23.934098 systemd-networkd[976]: docker0: Link UP Feb 8 23:29:23.954524 env[1179]: time="2024-02-08T23:29:23.954462717Z" level=info msg="Loading containers: done." Feb 8 23:29:23.969870 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1534435472-merged.mount: Deactivated successfully. Feb 8 23:29:23.978800 env[1179]: time="2024-02-08T23:29:23.978729701Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 8 23:29:23.979039 env[1179]: time="2024-02-08T23:29:23.978998316Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Feb 8 23:29:23.979142 env[1179]: time="2024-02-08T23:29:23.979113682Z" level=info msg="Daemon has completed initialization" Feb 8 23:29:24.019428 systemd[1]: Started docker.service. Feb 8 23:29:24.039481 env[1179]: time="2024-02-08T23:29:24.039017907Z" level=info msg="API listen on /run/docker.sock" Feb 8 23:29:24.079263 systemd[1]: Reloading. Feb 8 23:29:24.224654 /usr/lib/systemd/system-generators/torcx-generator[1317]: time="2024-02-08T23:29:24Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 8 23:29:24.227338 /usr/lib/systemd/system-generators/torcx-generator[1317]: time="2024-02-08T23:29:24Z" level=info msg="torcx already run" Feb 8 23:29:24.304629 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 8 23:29:24.304966 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 8 23:29:24.331038 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 8 23:29:24.413436 systemd[1]: Started kubelet.service. Feb 8 23:29:24.498085 kubelet[1362]: E0208 23:29:24.498027 1362 run.go:74] "command failed" err="failed to load kubelet config file, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory, path: /var/lib/kubelet/config.yaml" Feb 8 23:29:24.500301 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 8 23:29:24.500434 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 8 23:29:25.488982 env[1068]: time="2024-02-08T23:29:25.488766589Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.27.10\"" Feb 8 23:29:26.292447 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount764651540.mount: Deactivated successfully. 
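The kubelet crash loop that begins here is not a code bug: /var/lib/kubelet/config.yaml simply does not exist until cluster bootstrap (e.g. kubeadm) writes it, so systemd keeps restarting the unit until then. A trivial pre-check sketch of the same condition:

```go
package main

import (
	"fmt"
	"os"
)

func main() {
	const path = "/var/lib/kubelet/config.yaml" // path from the error above
	if _, err := os.Stat(path); err != nil {
		// Mirrors the "no such file or directory" failure the kubelet reports.
		fmt.Printf("kubelet not bootstrapped yet: %v\n", err)
		os.Exit(1)
	}
	fmt.Println("kubelet config present; the service should stay up")
}
```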
Feb 8 23:29:29.689316 env[1068]: time="2024-02-08T23:29:29.689071378Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.27.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:29:29.713426 env[1068]: time="2024-02-08T23:29:29.713342681Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7968fc5c824ed95404f421a90882835f250220c0fd799b4fceef340dd5585ed5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:29:29.770263 env[1068]: time="2024-02-08T23:29:29.770040863Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.27.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:29:29.821348 env[1068]: time="2024-02-08T23:29:29.821177491Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:cfcebda74d6e665b68931d3589ee69fde81cd503ff3169888e4502af65579d98,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:29:29.827855 env[1068]: time="2024-02-08T23:29:29.827753069Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.27.10\" returns image reference \"sha256:7968fc5c824ed95404f421a90882835f250220c0fd799b4fceef340dd5585ed5\"" Feb 8 23:29:29.865725 env[1068]: time="2024-02-08T23:29:29.865651673Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.27.10\"" Feb 8 23:29:33.258508 env[1068]: time="2024-02-08T23:29:33.258433418Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.27.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:29:33.261598 env[1068]: time="2024-02-08T23:29:33.261569710Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c8134be729ba23c6e0c3e5dd52c393fc8d3cfc688bcec33540f64bb0137b67e0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:29:33.265288 env[1068]: time="2024-02-08T23:29:33.265254678Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.27.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:29:33.268957 env[1068]: time="2024-02-08T23:29:33.268920272Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:fa168ebca1f6dbfe86ef0a690e007531c1f53569274fc7dc2774fe228b6ce8c2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:29:33.272727 env[1068]: time="2024-02-08T23:29:33.272605451Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.27.10\" returns image reference \"sha256:c8134be729ba23c6e0c3e5dd52c393fc8d3cfc688bcec33540f64bb0137b67e0\"" Feb 8 23:29:33.289873 env[1068]: time="2024-02-08T23:29:33.289837931Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.27.10\"" Feb 8 23:29:34.517651 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 8 23:29:34.518288 systemd[1]: Stopped kubelet.service. Feb 8 23:29:34.522592 systemd[1]: Started kubelet.service. 
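Each PullImage above fans out into the ImageCreate/ImageUpdate events that follow it, ending with the resolved image reference. The same pull can be driven directly through the containerd client; a sketch, again assuming the k8s.io namespace used by the CRI plugin:

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
	img, err := client.Pull(ctx, "registry.k8s.io/kube-apiserver:v1.27.10",
		containerd.WithPullUnpack) // unpack into the overlayfs snapshotter
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(img.Name(), img.Target().Digest) // tag plus manifest digest
}
```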
Feb 8 23:29:34.635099 kubelet[1390]: E0208 23:29:34.635002 1390 run.go:74] "command failed" err="failed to load kubelet config file, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory, path: /var/lib/kubelet/config.yaml" Feb 8 23:29:34.643019 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 8 23:29:34.643389 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 8 23:29:35.862365 env[1068]: time="2024-02-08T23:29:35.862277241Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.27.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:29:35.865147 env[1068]: time="2024-02-08T23:29:35.865091162Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5eed9876e7181341b7015e3486dfd234f8e0d0d7d3d19b1bb971d720cd320975,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:29:35.868139 env[1068]: time="2024-02-08T23:29:35.868103828Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.27.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:29:35.871288 env[1068]: time="2024-02-08T23:29:35.871264871Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:09294de61e63987f181077cbc2f5c82463878af9cd8ecc6110c54150c9ae3143,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:29:35.873412 env[1068]: time="2024-02-08T23:29:35.873332875Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.27.10\" returns image reference \"sha256:5eed9876e7181341b7015e3486dfd234f8e0d0d7d3d19b1bb971d720cd320975\"" Feb 8 23:29:35.886105 env[1068]: time="2024-02-08T23:29:35.886059013Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.27.10\"" Feb 8 23:29:37.787482 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1368536870.mount: Deactivated successfully. 
Feb 8 23:29:38.628587 env[1068]: time="2024-02-08T23:29:38.628471231Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.27.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:29:38.630443 env[1068]: time="2024-02-08T23:29:38.630402750Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:db7b01e105753475c198490cf875df1314fd1a599f67ea1b184586cb399e1cae,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:29:38.633001 env[1068]: time="2024-02-08T23:29:38.632950120Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.27.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:29:38.635460 env[1068]: time="2024-02-08T23:29:38.635393905Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:d084b53c772f62ec38fddb2348a82d4234016daf6cd43fedbf0b3281f3790f88,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:29:38.636006 env[1068]: time="2024-02-08T23:29:38.635970497Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.27.10\" returns image reference \"sha256:db7b01e105753475c198490cf875df1314fd1a599f67ea1b184586cb399e1cae\"" Feb 8 23:29:38.655827 env[1068]: time="2024-02-08T23:29:38.655742308Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Feb 8 23:29:39.566533 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3000518111.mount: Deactivated successfully. Feb 8 23:29:39.674186 env[1068]: time="2024-02-08T23:29:39.674055842Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:29:39.709879 env[1068]: time="2024-02-08T23:29:39.709778600Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:29:39.720031 env[1068]: time="2024-02-08T23:29:39.719972105Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:29:39.729302 env[1068]: time="2024-02-08T23:29:39.729143329Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:29:39.731538 env[1068]: time="2024-02-08T23:29:39.731434368Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Feb 8 23:29:39.760267 env[1068]: time="2024-02-08T23:29:39.760160206Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.7-0\"" Feb 8 23:29:40.972546 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1478213552.mount: Deactivated successfully. Feb 8 23:29:44.769444 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Feb 8 23:29:44.770105 systemd[1]: Stopped kubelet.service. Feb 8 23:29:44.776413 systemd[1]: Started kubelet.service. 
Feb 8 23:29:44.874687 kubelet[1413]: E0208 23:29:44.874625 1413 run.go:74] "command failed" err="failed to load kubelet config file, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory, path: /var/lib/kubelet/config.yaml" Feb 8 23:29:44.876415 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 8 23:29:44.876551 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 8 23:29:47.810966 update_engine[1060]: I0208 23:29:47.810339 1060 update_attempter.cc:509] Updating boot flags... Feb 8 23:29:48.058258 env[1068]: time="2024-02-08T23:29:48.056312173Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.7-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:29:48.060384 env[1068]: time="2024-02-08T23:29:48.060325574Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:29:48.063125 env[1068]: time="2024-02-08T23:29:48.062912492Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.7-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:29:48.067082 env[1068]: time="2024-02-08T23:29:48.067058227Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:29:48.067947 env[1068]: time="2024-02-08T23:29:48.067463021Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.7-0\" returns image reference \"sha256:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681\"" Feb 8 23:29:48.083582 env[1068]: time="2024-02-08T23:29:48.083533467Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\"" Feb 8 23:29:48.726934 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1177953827.mount: Deactivated successfully. 
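Each image in these pulls is recorded twice: by tag (etcd:3.5.7-0) and by content digest (etcd@sha256:51ea...). The digest names are nothing more than the SHA-256 of the referenced bytes, so they can be verified with the standard library; a sketch with placeholder manifest bytes:

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

func main() {
	// Stand-in for the raw image manifest fetched from the registry.
	manifest := []byte(`{"schemaVersion":2}`)
	sum := sha256.Sum256(manifest)
	// Matching this against the @sha256:... reference proves content integrity.
	fmt.Printf("registry.k8s.io/etcd@sha256:%x\n", sum)
}
```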
Feb 8 23:29:50.241036 env[1068]: time="2024-02-08T23:29:50.240896046Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.10.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:29:50.244637 env[1068]: time="2024-02-08T23:29:50.244609471Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:29:50.247900 env[1068]: time="2024-02-08T23:29:50.247808750Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.10.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:29:50.251415 env[1068]: time="2024-02-08T23:29:50.251358343Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:29:50.252374 env[1068]: time="2024-02-08T23:29:50.252313505Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\" returns image reference \"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc\"" Feb 8 23:29:54.351519 systemd[1]: Stopped kubelet.service. Feb 8 23:29:54.383550 systemd[1]: Reloading. Feb 8 23:29:54.527289 /usr/lib/systemd/system-generators/torcx-generator[1539]: time="2024-02-08T23:29:54Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 8 23:29:54.527329 /usr/lib/systemd/system-generators/torcx-generator[1539]: time="2024-02-08T23:29:54Z" level=info msg="torcx already run" Feb 8 23:29:54.698712 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 8 23:29:54.699115 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 8 23:29:54.725883 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 8 23:29:54.833918 systemd[1]: Started kubelet.service. Feb 8 23:29:54.926137 kubelet[1573]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 8 23:29:54.926137 kubelet[1573]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 8 23:29:54.926137 kubelet[1573]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 8 23:29:54.926935 kubelet[1573]: I0208 23:29:54.926168 1573 server.go:199] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 8 23:29:55.462755 kubelet[1573]: I0208 23:29:55.462698 1573 server.go:415] "Kubelet version" kubeletVersion="v1.27.2" Feb 8 23:29:55.462755 kubelet[1573]: I0208 23:29:55.462743 1573 server.go:417] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 8 23:29:55.463091 kubelet[1573]: I0208 23:29:55.463062 1573 server.go:837] "Client rotation is on, will bootstrap in background" Feb 8 23:29:55.470884 kubelet[1573]: I0208 23:29:55.470734 1573 server.go:662] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Feb 8 23:29:55.471037 kubelet[1573]: I0208 23:29:55.470996 1573 container_manager_linux.go:266] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 8 23:29:55.471144 kubelet[1573]: I0208 23:29:55.471099 1573 container_manager_linux.go:271] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] TopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] PodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms TopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]} Feb 8 23:29:55.471312 kubelet[1573]: I0208 23:29:55.471154 1573 topology_manager.go:136] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Feb 8 23:29:55.471312 kubelet[1573]: I0208 23:29:55.471173 1573 container_manager_linux.go:302] "Creating device plugin manager" Feb 8 23:29:55.471425 kubelet[1573]: I0208 23:29:55.471328 1573 state_mem.go:36] "Initialized new in-memory state store" Feb 8 23:29:55.471627 kubelet[1573]: E0208 23:29:55.471609 1573 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.24.4.155:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.24.4.155:6443: connect: connection refused Feb 8 23:29:55.471768 kubelet[1573]: I0208 23:29:55.471752 1573 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 8 23:29:55.474774 kubelet[1573]: I0208 23:29:55.474740 1573 kubelet.go:405] "Attempting to sync node with API server" Feb 8 23:29:55.474774 kubelet[1573]: I0208 23:29:55.474772 1573 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 
8 23:29:55.474949 kubelet[1573]: I0208 23:29:55.474796 1573 kubelet.go:309] "Adding apiserver pod source" Feb 8 23:29:55.474949 kubelet[1573]: I0208 23:29:55.474814 1573 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 8 23:29:55.475761 kubelet[1573]: W0208 23:29:55.475709 1573 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://172.24.4.155:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510-3-2-9-f62ee4a992.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.155:6443: connect: connection refused Feb 8 23:29:55.475836 kubelet[1573]: E0208 23:29:55.475774 1573 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.24.4.155:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510-3-2-9-f62ee4a992.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.155:6443: connect: connection refused Feb 8 23:29:55.475913 kubelet[1573]: I0208 23:29:55.475887 1573 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 8 23:29:55.476251 kubelet[1573]: W0208 23:29:55.476204 1573 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Feb 8 23:29:55.476885 kubelet[1573]: I0208 23:29:55.476860 1573 server.go:1168] "Started kubelet" Feb 8 23:29:55.478860 kubelet[1573]: I0208 23:29:55.478835 1573 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Feb 8 23:29:55.482990 kubelet[1573]: E0208 23:29:55.482837 1573 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510-3-2-9-f62ee4a992.novalocal.17b20714fe77a436", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510-3-2-9-f62ee4a992.novalocal", UID:"ci-3510-3-2-9-f62ee4a992.novalocal", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510-3-2-9-f62ee4a992.novalocal"}, FirstTimestamp:time.Date(2024, time.February, 8, 23, 29, 55, 476833334, time.Local), LastTimestamp:time.Date(2024, time.February, 8, 23, 29, 55, 476833334, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Post "https://172.24.4.155:6443/api/v1/namespaces/default/events": dial tcp 172.24.4.155:6443: connect: connection refused'(may retry after sleeping) Feb 8 23:29:55.487074 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). 
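All of the "connection refused" noise in this stretch is the bootstrap chicken-and-egg: the kubelet is trying to reach the API server at 172.24.4.155:6443, but that API server is one of the static pods the kubelet itself has yet to start. A probe for the same condition:

```go
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Same endpoint the reflectors and the event writer above are dialing.
	conn, err := net.DialTimeout("tcp", "172.24.4.155:6443", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver not up yet:", err) // "connection refused" during bootstrap
		return
	}
	conn.Close()
	fmt.Println("apiserver reachable; the retries above would now succeed")
}
```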
Feb 8 23:29:55.487394 kubelet[1573]: I0208 23:29:55.487369 1573 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 8 23:29:55.489470 kubelet[1573]: I0208 23:29:55.489439 1573 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Feb 8 23:29:55.490095 kubelet[1573]: E0208 23:29:55.490069 1573 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 8 23:29:55.490162 kubelet[1573]: E0208 23:29:55.490105 1573 kubelet.go:1400] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 8 23:29:55.490325 kubelet[1573]: W0208 23:29:55.490272 1573 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://172.24.4.155:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.155:6443: connect: connection refused Feb 8 23:29:55.490398 kubelet[1573]: E0208 23:29:55.490339 1573 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.24.4.155:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.155:6443: connect: connection refused Feb 8 23:29:55.491844 kubelet[1573]: I0208 23:29:55.491819 1573 server.go:461] "Adding debug handlers to kubelet server" Feb 8 23:29:55.492401 kubelet[1573]: I0208 23:29:55.492378 1573 volume_manager.go:284] "Starting Kubelet Volume Manager" Feb 8 23:29:55.495616 kubelet[1573]: E0208 23:29:55.495579 1573 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.155:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510-3-2-9-f62ee4a992.novalocal?timeout=10s\": dial tcp 172.24.4.155:6443: connect: connection refused" interval="200ms" Feb 8 23:29:55.500798 kubelet[1573]: I0208 23:29:55.500767 1573 desired_state_of_world_populator.go:145] "Desired state populator starts to run" Feb 8 23:29:55.520302 kubelet[1573]: I0208 23:29:55.520206 1573 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4 Feb 8 23:29:55.521940 kubelet[1573]: I0208 23:29:55.521924 1573 kubelet_network_linux.go:63] "Initialized iptables rules." 
protocol=IPv6 Feb 8 23:29:55.522074 kubelet[1573]: I0208 23:29:55.522062 1573 status_manager.go:207] "Starting to sync pod status with apiserver" Feb 8 23:29:55.522177 kubelet[1573]: I0208 23:29:55.522166 1573 kubelet.go:2257] "Starting kubelet main sync loop" Feb 8 23:29:55.522348 kubelet[1573]: E0208 23:29:55.522337 1573 kubelet.go:2281] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 8 23:29:55.528576 kubelet[1573]: W0208 23:29:55.528497 1573 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://172.24.4.155:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.155:6443: connect: connection refused Feb 8 23:29:55.528761 kubelet[1573]: E0208 23:29:55.528744 1573 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.24.4.155:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.155:6443: connect: connection refused Feb 8 23:29:55.533863 kubelet[1573]: W0208 23:29:55.533781 1573 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://172.24.4.155:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.155:6443: connect: connection refused Feb 8 23:29:55.533863 kubelet[1573]: E0208 23:29:55.533861 1573 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.24.4.155:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.155:6443: connect: connection refused Feb 8 23:29:55.552554 kubelet[1573]: I0208 23:29:55.552512 1573 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 8 23:29:55.552554 kubelet[1573]: I0208 23:29:55.552538 1573 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 8 23:29:55.552554 kubelet[1573]: I0208 23:29:55.552557 1573 state_mem.go:36] "Initialized new in-memory state store" Feb 8 23:29:55.557530 kubelet[1573]: I0208 23:29:55.557500 1573 policy_none.go:49] "None policy: Start" Feb 8 23:29:55.558313 kubelet[1573]: I0208 23:29:55.558300 1573 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 8 23:29:55.558443 kubelet[1573]: I0208 23:29:55.558431 1573 state_mem.go:35] "Initializing new in-memory state store" Feb 8 23:29:55.568477 systemd[1]: Created slice kubepods.slice. Feb 8 23:29:55.578802 systemd[1]: Created slice kubepods-burstable.slice. Feb 8 23:29:55.588284 systemd[1]: Created slice kubepods-besteffort.slice. 
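The three slices created here (kubepods, burstable, besteffort) are the QoS tiers the kubelet delegates to systemd, consistent with CgroupDriver:systemd in the container-manager dump above. A sketch that checks for them on disk; the path assumes a unified cgroup v2 mount, which may differ on this image:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	base := "/sys/fs/cgroup" // assumes cgroup v2; hybrid layouts nest differently
	for _, s := range []string{
		"kubepods.slice",
		"kubepods.slice/kubepods-burstable.slice",
		"kubepods.slice/kubepods-besteffort.slice",
	} {
		if _, err := os.Stat(filepath.Join(base, s)); err != nil {
			fmt.Println("missing:", s)
			continue
		}
		fmt.Println("present:", s)
	}
}
```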
Feb 8 23:29:55.596849 kubelet[1573]: I0208 23:29:55.596807 1573 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510-3-2-9-f62ee4a992.novalocal" Feb 8 23:29:55.597744 kubelet[1573]: I0208 23:29:55.597700 1573 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 8 23:29:55.598191 kubelet[1573]: I0208 23:29:55.598151 1573 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 8 23:29:55.601589 kubelet[1573]: E0208 23:29:55.601553 1573 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://172.24.4.155:6443/api/v1/nodes\": dial tcp 172.24.4.155:6443: connect: connection refused" node="ci-3510-3-2-9-f62ee4a992.novalocal" Feb 8 23:29:55.601907 kubelet[1573]: E0208 23:29:55.601818 1573 eviction_manager.go:262] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3510-3-2-9-f62ee4a992.novalocal\" not found" Feb 8 23:29:55.623447 kubelet[1573]: I0208 23:29:55.623341 1573 topology_manager.go:212] "Topology Admit Handler" Feb 8 23:29:55.626943 kubelet[1573]: I0208 23:29:55.626884 1573 topology_manager.go:212] "Topology Admit Handler" Feb 8 23:29:55.632296 kubelet[1573]: I0208 23:29:55.631173 1573 topology_manager.go:212] "Topology Admit Handler" Feb 8 23:29:55.641992 systemd[1]: Created slice kubepods-burstable-pod806b98447b30c7972139e42fc5806f3f.slice. Feb 8 23:29:55.656906 systemd[1]: Created slice kubepods-burstable-pode52b88751baad0b57cfd47314c58faa9.slice. Feb 8 23:29:55.663896 systemd[1]: Created slice kubepods-burstable-pod08904e26717a67b7eb5afadc7b854abd.slice. Feb 8 23:29:55.697406 kubelet[1573]: E0208 23:29:55.697333 1573 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.155:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510-3-2-9-f62ee4a992.novalocal?timeout=10s\": dial tcp 172.24.4.155:6443: connect: connection refused" interval="400ms" Feb 8 23:29:55.702315 kubelet[1573]: I0208 23:29:55.702279 1573 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e52b88751baad0b57cfd47314c58faa9-flexvolume-dir\") pod \"kube-controller-manager-ci-3510-3-2-9-f62ee4a992.novalocal\" (UID: \"e52b88751baad0b57cfd47314c58faa9\") " pod="kube-system/kube-controller-manager-ci-3510-3-2-9-f62ee4a992.novalocal" Feb 8 23:29:55.702511 kubelet[1573]: I0208 23:29:55.702393 1573 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e52b88751baad0b57cfd47314c58faa9-k8s-certs\") pod \"kube-controller-manager-ci-3510-3-2-9-f62ee4a992.novalocal\" (UID: \"e52b88751baad0b57cfd47314c58faa9\") " pod="kube-system/kube-controller-manager-ci-3510-3-2-9-f62ee4a992.novalocal" Feb 8 23:29:55.702629 kubelet[1573]: I0208 23:29:55.702546 1573 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e52b88751baad0b57cfd47314c58faa9-kubeconfig\") pod \"kube-controller-manager-ci-3510-3-2-9-f62ee4a992.novalocal\" (UID: \"e52b88751baad0b57cfd47314c58faa9\") " pod="kube-system/kube-controller-manager-ci-3510-3-2-9-f62ee4a992.novalocal" Feb 8 23:29:55.702777 kubelet[1573]: I0208 23:29:55.702699 1573 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" 
(UniqueName: \"kubernetes.io/host-path/e52b88751baad0b57cfd47314c58faa9-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510-3-2-9-f62ee4a992.novalocal\" (UID: \"e52b88751baad0b57cfd47314c58faa9\") " pod="kube-system/kube-controller-manager-ci-3510-3-2-9-f62ee4a992.novalocal" Feb 8 23:29:55.702946 kubelet[1573]: I0208 23:29:55.702914 1573 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/08904e26717a67b7eb5afadc7b854abd-kubeconfig\") pod \"kube-scheduler-ci-3510-3-2-9-f62ee4a992.novalocal\" (UID: \"08904e26717a67b7eb5afadc7b854abd\") " pod="kube-system/kube-scheduler-ci-3510-3-2-9-f62ee4a992.novalocal" Feb 8 23:29:55.703107 kubelet[1573]: I0208 23:29:55.703024 1573 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/806b98447b30c7972139e42fc5806f3f-ca-certs\") pod \"kube-apiserver-ci-3510-3-2-9-f62ee4a992.novalocal\" (UID: \"806b98447b30c7972139e42fc5806f3f\") " pod="kube-system/kube-apiserver-ci-3510-3-2-9-f62ee4a992.novalocal" Feb 8 23:29:55.703207 kubelet[1573]: I0208 23:29:55.703163 1573 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/806b98447b30c7972139e42fc5806f3f-k8s-certs\") pod \"kube-apiserver-ci-3510-3-2-9-f62ee4a992.novalocal\" (UID: \"806b98447b30c7972139e42fc5806f3f\") " pod="kube-system/kube-apiserver-ci-3510-3-2-9-f62ee4a992.novalocal" Feb 8 23:29:55.703423 kubelet[1573]: I0208 23:29:55.703345 1573 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/806b98447b30c7972139e42fc5806f3f-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510-3-2-9-f62ee4a992.novalocal\" (UID: \"806b98447b30c7972139e42fc5806f3f\") " pod="kube-system/kube-apiserver-ci-3510-3-2-9-f62ee4a992.novalocal" Feb 8 23:29:55.703586 kubelet[1573]: I0208 23:29:55.703485 1573 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e52b88751baad0b57cfd47314c58faa9-ca-certs\") pod \"kube-controller-manager-ci-3510-3-2-9-f62ee4a992.novalocal\" (UID: \"e52b88751baad0b57cfd47314c58faa9\") " pod="kube-system/kube-controller-manager-ci-3510-3-2-9-f62ee4a992.novalocal" Feb 8 23:29:55.807396 kubelet[1573]: I0208 23:29:55.807332 1573 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510-3-2-9-f62ee4a992.novalocal" Feb 8 23:29:55.808677 kubelet[1573]: E0208 23:29:55.808648 1573 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://172.24.4.155:6443/api/v1/nodes\": dial tcp 172.24.4.155:6443: connect: connection refused" node="ci-3510-3-2-9-f62ee4a992.novalocal" Feb 8 23:29:55.954020 env[1068]: time="2024-02-08T23:29:55.953808605Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510-3-2-9-f62ee4a992.novalocal,Uid:806b98447b30c7972139e42fc5806f3f,Namespace:kube-system,Attempt:0,}" Feb 8 23:29:55.963255 env[1068]: time="2024-02-08T23:29:55.962549676Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510-3-2-9-f62ee4a992.novalocal,Uid:e52b88751baad0b57cfd47314c58faa9,Namespace:kube-system,Attempt:0,}" Feb 8 23:29:55.970660 env[1068]: time="2024-02-08T23:29:55.970590761Z" level=info msg="RunPodSandbox 
for &PodSandboxMetadata{Name:kube-scheduler-ci-3510-3-2-9-f62ee4a992.novalocal,Uid:08904e26717a67b7eb5afadc7b854abd,Namespace:kube-system,Attempt:0,}" Feb 8 23:29:56.099756 kubelet[1573]: E0208 23:29:56.099549 1573 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.155:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510-3-2-9-f62ee4a992.novalocal?timeout=10s\": dial tcp 172.24.4.155:6443: connect: connection refused" interval="800ms" Feb 8 23:29:56.212517 kubelet[1573]: I0208 23:29:56.212303 1573 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510-3-2-9-f62ee4a992.novalocal" Feb 8 23:29:56.213042 kubelet[1573]: E0208 23:29:56.212992 1573 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://172.24.4.155:6443/api/v1/nodes\": dial tcp 172.24.4.155:6443: connect: connection refused" node="ci-3510-3-2-9-f62ee4a992.novalocal" Feb 8 23:29:56.472268 kubelet[1573]: W0208 23:29:56.471612 1573 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://172.24.4.155:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510-3-2-9-f62ee4a992.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.155:6443: connect: connection refused Feb 8 23:29:56.472268 kubelet[1573]: E0208 23:29:56.471739 1573 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.24.4.155:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510-3-2-9-f62ee4a992.novalocal&limit=500&resourceVersion=0": dial tcp 172.24.4.155:6443: connect: connection refused Feb 8 23:29:56.563610 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2952555978.mount: Deactivated successfully. 
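The three RunPodSandbox calls above originate from the static pod path the kubelet registered earlier (/etc/kubernetes/manifests): one manifest each for the apiserver, controller-manager, and scheduler. Listing that directory shows the source of truth; a stdlib sketch:

```go
package main

import (
	"fmt"
	"log"
	"os"
)

func main() {
	// Static pod path from the "Adding static pod path" line above.
	entries, err := os.ReadDir("/etc/kubernetes/manifests")
	if err != nil {
		log.Fatal(err)
	}
	for _, e := range entries {
		fmt.Println(e.Name()) // e.g. kube-apiserver.yaml, written at bootstrap
	}
}
```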
Feb 8 23:29:56.576906 env[1068]: time="2024-02-08T23:29:56.576785827Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:29:56.584430 env[1068]: time="2024-02-08T23:29:56.584339550Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:29:56.587672 env[1068]: time="2024-02-08T23:29:56.587614030Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:29:56.596883 env[1068]: time="2024-02-08T23:29:56.596790902Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:29:56.603053 env[1068]: time="2024-02-08T23:29:56.602981048Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:29:56.606025 env[1068]: time="2024-02-08T23:29:56.605957716Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:29:56.609387 env[1068]: time="2024-02-08T23:29:56.609311874Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:29:56.611073 env[1068]: time="2024-02-08T23:29:56.611017766Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:29:56.618700 env[1068]: time="2024-02-08T23:29:56.618593881Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:29:56.623710 env[1068]: time="2024-02-08T23:29:56.623656617Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:29:56.626064 kubelet[1573]: W0208 23:29:56.626004 1573 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://172.24.4.155:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.155:6443: connect: connection refused Feb 8 23:29:56.626064 kubelet[1573]: E0208 23:29:56.626055 1573 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.24.4.155:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.24.4.155:6443: connect: connection refused Feb 8 23:29:56.626514 env[1068]: time="2024-02-08T23:29:56.626462478Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:29:56.633485 env[1068]: time="2024-02-08T23:29:56.633383949Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:29:56.701369 env[1068]: time="2024-02-08T23:29:56.701242806Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 8 23:29:56.701369 env[1068]: time="2024-02-08T23:29:56.701351958Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 8 23:29:56.701722 env[1068]: time="2024-02-08T23:29:56.701382455Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 8 23:29:56.701806 env[1068]: time="2024-02-08T23:29:56.701758843Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/1602d62870ef8483bbbb8059287967f74c1e2999aee4a1b486c8175ade9aaf50 pid=1612 runtime=io.containerd.runc.v2 Feb 8 23:29:56.726451 env[1068]: time="2024-02-08T23:29:56.724702927Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 8 23:29:56.726884 env[1068]: time="2024-02-08T23:29:56.724824783Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 8 23:29:56.727362 env[1068]: time="2024-02-08T23:29:56.727097106Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 8 23:29:56.727362 env[1068]: time="2024-02-08T23:29:56.727154281Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 8 23:29:56.727362 env[1068]: time="2024-02-08T23:29:56.727168868Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 8 23:29:56.727362 env[1068]: time="2024-02-08T23:29:56.727028237Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 8 23:29:56.727982 env[1068]: time="2024-02-08T23:29:56.727484724Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/95a2f2e910101a30c8fb0afb265414512341d329b2966a40bc4ddc7f577b00a1 pid=1636 runtime=io.containerd.runc.v2 Feb 8 23:29:56.728130 env[1068]: time="2024-02-08T23:29:56.728031447Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/09b9451af8a3f2bf33d16ad38c0e557b88d922d29d82c332db5c2fcf4451f9ac pid=1637 runtime=io.containerd.runc.v2 Feb 8 23:29:56.735504 kubelet[1573]: E0208 23:29:56.735335 1573 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510-3-2-9-f62ee4a992.novalocal.17b20714fe77a436", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510-3-2-9-f62ee4a992.novalocal", UID:"ci-3510-3-2-9-f62ee4a992.novalocal", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510-3-2-9-f62ee4a992.novalocal"}, FirstTimestamp:time.Date(2024, time.February, 8, 23, 29, 55, 476833334, time.Local), LastTimestamp:time.Date(2024, time.February, 8, 23, 29, 55, 476833334, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Post "https://172.24.4.155:6443/api/v1/namespaces/default/events": dial tcp 172.24.4.155:6443: connect: connection refused'(may retry after sleeping) Feb 8 23:29:56.762137 systemd[1]: Started cri-containerd-1602d62870ef8483bbbb8059287967f74c1e2999aee4a1b486c8175ade9aaf50.scope. Feb 8 23:29:56.778169 systemd[1]: Started cri-containerd-09b9451af8a3f2bf33d16ad38c0e557b88d922d29d82c332db5c2fcf4451f9ac.scope. Feb 8 23:29:56.785819 systemd[1]: Started cri-containerd-95a2f2e910101a30c8fb0afb265414512341d329b2966a40bc4ddc7f577b00a1.scope. 
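Each "starting signal loop" entry above is a runc v2 shim starting up; its path= field names the shim's working directory under containerd's task root. A small sketch, assuming root on the node, that lists those shim directories with the Go standard library:

package main

import (
	"fmt"
	"os"
)

func main() {
	// Directory taken from the path= fields in the shim log entries.
	const taskDir = "/run/containerd/io.containerd.runtime.v2.task/k8s.io"
	entries, err := os.ReadDir(taskDir)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	for _, e := range entries {
		// Each subdirectory is named after a sandbox or container ID,
		// e.g. 1602d62870ef8483... for the kube-apiserver sandbox above.
		fmt.Println(e.Name())
	}
}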
Feb 8 23:29:56.850762 env[1068]: time="2024-02-08T23:29:56.850683331Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510-3-2-9-f62ee4a992.novalocal,Uid:806b98447b30c7972139e42fc5806f3f,Namespace:kube-system,Attempt:0,} returns sandbox id \"1602d62870ef8483bbbb8059287967f74c1e2999aee4a1b486c8175ade9aaf50\"" Feb 8 23:29:56.856638 env[1068]: time="2024-02-08T23:29:56.856596271Z" level=info msg="CreateContainer within sandbox \"1602d62870ef8483bbbb8059287967f74c1e2999aee4a1b486c8175ade9aaf50\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 8 23:29:56.860679 kubelet[1573]: W0208 23:29:56.860593 1573 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://172.24.4.155:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.155:6443: connect: connection refused Feb 8 23:29:56.860679 kubelet[1573]: E0208 23:29:56.860646 1573 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.24.4.155:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.24.4.155:6443: connect: connection refused Feb 8 23:29:56.873308 env[1068]: time="2024-02-08T23:29:56.873230939Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510-3-2-9-f62ee4a992.novalocal,Uid:08904e26717a67b7eb5afadc7b854abd,Namespace:kube-system,Attempt:0,} returns sandbox id \"95a2f2e910101a30c8fb0afb265414512341d329b2966a40bc4ddc7f577b00a1\"" Feb 8 23:29:56.876364 env[1068]: time="2024-02-08T23:29:56.876323483Z" level=info msg="CreateContainer within sandbox \"95a2f2e910101a30c8fb0afb265414512341d329b2966a40bc4ddc7f577b00a1\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 8 23:29:56.884375 env[1068]: time="2024-02-08T23:29:56.884318584Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510-3-2-9-f62ee4a992.novalocal,Uid:e52b88751baad0b57cfd47314c58faa9,Namespace:kube-system,Attempt:0,} returns sandbox id \"09b9451af8a3f2bf33d16ad38c0e557b88d922d29d82c332db5c2fcf4451f9ac\"" Feb 8 23:29:56.888289 env[1068]: time="2024-02-08T23:29:56.888241657Z" level=info msg="CreateContainer within sandbox \"09b9451af8a3f2bf33d16ad38c0e557b88d922d29d82c332db5c2fcf4451f9ac\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 8 23:29:56.901143 kubelet[1573]: E0208 23:29:56.901017 1573 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://172.24.4.155:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510-3-2-9-f62ee4a992.novalocal?timeout=10s\": dial tcp 172.24.4.155:6443: connect: connection refused" interval="1.6s" Feb 8 23:29:57.023290 kubelet[1573]: I0208 23:29:57.023138 1573 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510-3-2-9-f62ee4a992.novalocal" Feb 8 23:29:57.024367 kubelet[1573]: E0208 23:29:57.024340 1573 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://172.24.4.155:6443/api/v1/nodes\": dial tcp 172.24.4.155:6443: connect: connection refused" node="ci-3510-3-2-9-f62ee4a992.novalocal" Feb 8 23:29:57.034714 env[1068]: time="2024-02-08T23:29:57.034583512Z" level=info msg="CreateContainer within sandbox \"95a2f2e910101a30c8fb0afb265414512341d329b2966a40bc4ddc7f577b00a1\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id 
\"9e2f95308fa2ce630d9a56f08b7e938a264a45152975051b49358eb6623aa906\"" Feb 8 23:29:57.036736 env[1068]: time="2024-02-08T23:29:57.036577991Z" level=info msg="StartContainer for \"9e2f95308fa2ce630d9a56f08b7e938a264a45152975051b49358eb6623aa906\"" Feb 8 23:29:57.052013 env[1068]: time="2024-02-08T23:29:57.051873107Z" level=info msg="CreateContainer within sandbox \"09b9451af8a3f2bf33d16ad38c0e557b88d922d29d82c332db5c2fcf4451f9ac\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"1a230875404ff331ba3eed310bfbf3207c28f15b6f4802d77f97203672815758\"" Feb 8 23:29:57.053808 env[1068]: time="2024-02-08T23:29:57.053637911Z" level=info msg="CreateContainer within sandbox \"1602d62870ef8483bbbb8059287967f74c1e2999aee4a1b486c8175ade9aaf50\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"f93f2f3b97c15910939eda15c3cf1d41107854c99a49574677031731e9900996\"" Feb 8 23:29:57.054561 env[1068]: time="2024-02-08T23:29:57.053894336Z" level=info msg="StartContainer for \"1a230875404ff331ba3eed310bfbf3207c28f15b6f4802d77f97203672815758\"" Feb 8 23:29:57.055638 env[1068]: time="2024-02-08T23:29:57.055540911Z" level=info msg="StartContainer for \"f93f2f3b97c15910939eda15c3cf1d41107854c99a49574677031731e9900996\"" Feb 8 23:29:57.092908 systemd[1]: Started cri-containerd-9e2f95308fa2ce630d9a56f08b7e938a264a45152975051b49358eb6623aa906.scope. Feb 8 23:29:57.099387 kubelet[1573]: W0208 23:29:57.098923 1573 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://172.24.4.155:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.155:6443: connect: connection refused Feb 8 23:29:57.099387 kubelet[1573]: E0208 23:29:57.098993 1573 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.24.4.155:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.24.4.155:6443: connect: connection refused Feb 8 23:29:57.130385 systemd[1]: Started cri-containerd-f93f2f3b97c15910939eda15c3cf1d41107854c99a49574677031731e9900996.scope. Feb 8 23:29:57.142332 systemd[1]: Started cri-containerd-1a230875404ff331ba3eed310bfbf3207c28f15b6f4802d77f97203672815758.scope. 
Feb 8 23:29:57.200134 env[1068]: time="2024-02-08T23:29:57.200042485Z" level=info msg="StartContainer for \"9e2f95308fa2ce630d9a56f08b7e938a264a45152975051b49358eb6623aa906\" returns successfully" Feb 8 23:29:57.223135 env[1068]: time="2024-02-08T23:29:57.223054336Z" level=info msg="StartContainer for \"1a230875404ff331ba3eed310bfbf3207c28f15b6f4802d77f97203672815758\" returns successfully" Feb 8 23:29:57.232486 env[1068]: time="2024-02-08T23:29:57.232415552Z" level=info msg="StartContainer for \"f93f2f3b97c15910939eda15c3cf1d41107854c99a49574677031731e9900996\" returns successfully" Feb 8 23:29:57.643528 kubelet[1573]: E0208 23:29:57.643484 1573 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.24.4.155:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.24.4.155:6443: connect: connection refused Feb 8 23:29:58.626527 kubelet[1573]: I0208 23:29:58.626491 1573 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510-3-2-9-f62ee4a992.novalocal" Feb 8 23:30:00.262923 kubelet[1573]: E0208 23:30:00.262885 1573 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-3510-3-2-9-f62ee4a992.novalocal\" not found" node="ci-3510-3-2-9-f62ee4a992.novalocal" Feb 8 23:30:00.325083 kubelet[1573]: I0208 23:30:00.325018 1573 kubelet_node_status.go:73] "Successfully registered node" node="ci-3510-3-2-9-f62ee4a992.novalocal" Feb 8 23:30:00.486014 kubelet[1573]: I0208 23:30:00.485969 1573 apiserver.go:52] "Watching apiserver" Feb 8 23:30:00.501328 kubelet[1573]: I0208 23:30:00.501283 1573 desired_state_of_world_populator.go:153] "Finished populating initial desired state of world" Feb 8 23:30:00.532108 kubelet[1573]: I0208 23:30:00.532042 1573 reconciler.go:41] "Reconciler: start to sync state" Feb 8 23:30:03.661825 kubelet[1573]: W0208 23:30:03.661778 1573 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 8 23:30:03.665842 systemd[1]: Reloading. Feb 8 23:30:03.782487 /usr/lib/systemd/system-generators/torcx-generator[1861]: time="2024-02-08T23:30:03Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 8 23:30:03.782874 /usr/lib/systemd/system-generators/torcx-generator[1861]: time="2024-02-08T23:30:03Z" level=info msg="torcx already run" Feb 8 23:30:03.897466 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 8 23:30:03.897650 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 8 23:30:03.929534 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 8 23:30:04.102180 systemd[1]: Stopping kubelet.service... Feb 8 23:30:04.116775 systemd[1]: kubelet.service: Deactivated successfully. Feb 8 23:30:04.117114 systemd[1]: Stopped kubelet.service. 
Feb 8 23:30:04.117174 systemd[1]: kubelet.service: Consumed 1.130s CPU time. Feb 8 23:30:04.119777 systemd[1]: Started kubelet.service. Feb 8 23:30:04.227209 sudo[1918]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Feb 8 23:30:04.227866 sudo[1918]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Feb 8 23:30:04.244447 kubelet[1908]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 8 23:30:04.244447 kubelet[1908]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 8 23:30:04.244447 kubelet[1908]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 8 23:30:04.244871 kubelet[1908]: I0208 23:30:04.244495 1908 server.go:199] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 8 23:30:04.252107 kubelet[1908]: I0208 23:30:04.252077 1908 server.go:415] "Kubelet version" kubeletVersion="v1.27.2" Feb 8 23:30:04.252211 kubelet[1908]: I0208 23:30:04.252200 1908 server.go:417] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 8 23:30:04.252633 kubelet[1908]: I0208 23:30:04.252615 1908 server.go:837] "Client rotation is on, will bootstrap in background" Feb 8 23:30:04.254886 kubelet[1908]: I0208 23:30:04.254873 1908 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 8 23:30:04.259646 kubelet[1908]: I0208 23:30:04.259607 1908 server.go:662] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 8 23:30:04.260035 kubelet[1908]: I0208 23:30:04.260023 1908 container_manager_linux.go:266] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 8 23:30:04.260139 kubelet[1908]: I0208 23:30:04.260101 1908 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 8 23:30:04.260321 kubelet[1908]: I0208 23:30:04.260304 1908 container_manager_linux.go:271] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:} {Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] TopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] PodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms TopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]} Feb 8 23:30:04.260470 kubelet[1908]: I0208 23:30:04.260458 1908 topology_manager.go:136] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Feb 8 23:30:04.260542 kubelet[1908]: I0208 23:30:04.260533 1908 container_manager_linux.go:302] "Creating device plugin manager" Feb 8 23:30:04.260866 kubelet[1908]: I0208 23:30:04.260807 1908 state_mem.go:36] "Initialized new in-memory state store" Feb 8 23:30:04.267084 kubelet[1908]: I0208 23:30:04.267062 1908 kubelet.go:405] "Attempting to sync node with API server" Feb 8 23:30:04.267410 kubelet[1908]: I0208 23:30:04.267397 1908 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 8 23:30:04.267518 kubelet[1908]: I0208 23:30:04.267507 1908 kubelet.go:309] "Adding apiserver pod source" Feb 8 23:30:04.267602 kubelet[1908]: I0208 23:30:04.267592 1908 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 8 23:30:04.281974 kubelet[1908]: I0208 23:30:04.281948 1908 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 8 23:30:04.290836 kubelet[1908]: I0208 23:30:04.290814 1908 server.go:1168] "Started kubelet" Feb 8 23:30:04.292554 kubelet[1908]: I0208 23:30:04.292528 1908 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Feb 8 23:30:04.293407 kubelet[1908]: I0208 23:30:04.293391 1908 server.go:461] "Adding debug handlers to kubelet server" Feb 8 23:30:04.294860 kubelet[1908]: I0208 23:30:04.294844 1908 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Feb 8 23:30:04.299736 kubelet[1908]: I0208 23:30:04.296070 1908 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 8 23:30:04.303666 kubelet[1908]: I0208 23:30:04.303590 1908 volume_manager.go:284] 
"Starting Kubelet Volume Manager" Feb 8 23:30:04.304697 kubelet[1908]: I0208 23:30:04.304680 1908 desired_state_of_world_populator.go:145] "Desired state populator starts to run" Feb 8 23:30:04.305871 kubelet[1908]: E0208 23:30:04.305839 1908 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 8 23:30:04.305993 kubelet[1908]: E0208 23:30:04.305983 1908 kubelet.go:1400] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 8 23:30:04.423857 kubelet[1908]: I0208 23:30:04.423832 1908 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510-3-2-9-f62ee4a992.novalocal" Feb 8 23:30:04.426070 kubelet[1908]: I0208 23:30:04.426036 1908 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4 Feb 8 23:30:04.428851 kubelet[1908]: I0208 23:30:04.428835 1908 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv6 Feb 8 23:30:04.429069 kubelet[1908]: I0208 23:30:04.429060 1908 status_manager.go:207] "Starting to sync pod status with apiserver" Feb 8 23:30:04.429194 kubelet[1908]: I0208 23:30:04.429183 1908 kubelet.go:2257] "Starting kubelet main sync loop" Feb 8 23:30:04.429488 kubelet[1908]: E0208 23:30:04.429477 1908 kubelet.go:2281] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 8 23:30:04.451082 kubelet[1908]: I0208 23:30:04.451055 1908 kubelet_node_status.go:108] "Node was previously registered" node="ci-3510-3-2-9-f62ee4a992.novalocal" Feb 8 23:30:04.451905 kubelet[1908]: I0208 23:30:04.451891 1908 kubelet_node_status.go:73] "Successfully registered node" node="ci-3510-3-2-9-f62ee4a992.novalocal" Feb 8 23:30:04.530839 kubelet[1908]: E0208 23:30:04.530805 1908 kubelet.go:2281] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 8 23:30:04.553463 kubelet[1908]: I0208 23:30:04.553438 1908 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 8 23:30:04.553649 kubelet[1908]: I0208 23:30:04.553639 1908 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 8 23:30:04.553789 kubelet[1908]: I0208 23:30:04.553778 1908 state_mem.go:36] "Initialized new in-memory state store" Feb 8 23:30:04.555527 kubelet[1908]: I0208 23:30:04.555514 1908 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 8 23:30:04.555630 kubelet[1908]: I0208 23:30:04.555621 1908 state_mem.go:96] "Updated CPUSet assignments" assignments=map[] Feb 8 23:30:04.555691 kubelet[1908]: I0208 23:30:04.555682 1908 policy_none.go:49] "None policy: Start" Feb 8 23:30:04.556411 kubelet[1908]: I0208 23:30:04.556398 1908 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 8 23:30:04.556511 kubelet[1908]: I0208 23:30:04.556501 1908 state_mem.go:35] "Initializing new in-memory state store" Feb 8 23:30:04.557096 kubelet[1908]: I0208 23:30:04.557083 1908 state_mem.go:75] "Updated machine memory state" Feb 8 23:30:04.561805 kubelet[1908]: I0208 23:30:04.561785 1908 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 8 23:30:04.563268 kubelet[1908]: I0208 23:30:04.563244 1908 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 8 23:30:04.732017 kubelet[1908]: I0208 23:30:04.731965 1908 topology_manager.go:212] 
"Topology Admit Handler" Feb 8 23:30:04.732334 kubelet[1908]: I0208 23:30:04.732318 1908 topology_manager.go:212] "Topology Admit Handler" Feb 8 23:30:04.732473 kubelet[1908]: I0208 23:30:04.732457 1908 topology_manager.go:212] "Topology Admit Handler" Feb 8 23:30:04.747064 kubelet[1908]: W0208 23:30:04.747029 1908 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 8 23:30:04.747492 kubelet[1908]: W0208 23:30:04.747474 1908 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 8 23:30:04.757239 kubelet[1908]: W0208 23:30:04.757181 1908 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 8 23:30:04.759089 kubelet[1908]: E0208 23:30:04.759063 1908 kubelet.go:1856] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3510-3-2-9-f62ee4a992.novalocal\" already exists" pod="kube-system/kube-apiserver-ci-3510-3-2-9-f62ee4a992.novalocal" Feb 8 23:30:04.813952 kubelet[1908]: I0208 23:30:04.813764 1908 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/08904e26717a67b7eb5afadc7b854abd-kubeconfig\") pod \"kube-scheduler-ci-3510-3-2-9-f62ee4a992.novalocal\" (UID: \"08904e26717a67b7eb5afadc7b854abd\") " pod="kube-system/kube-scheduler-ci-3510-3-2-9-f62ee4a992.novalocal" Feb 8 23:30:04.814576 kubelet[1908]: I0208 23:30:04.814549 1908 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/806b98447b30c7972139e42fc5806f3f-ca-certs\") pod \"kube-apiserver-ci-3510-3-2-9-f62ee4a992.novalocal\" (UID: \"806b98447b30c7972139e42fc5806f3f\") " pod="kube-system/kube-apiserver-ci-3510-3-2-9-f62ee4a992.novalocal" Feb 8 23:30:04.814962 kubelet[1908]: I0208 23:30:04.814929 1908 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/806b98447b30c7972139e42fc5806f3f-k8s-certs\") pod \"kube-apiserver-ci-3510-3-2-9-f62ee4a992.novalocal\" (UID: \"806b98447b30c7972139e42fc5806f3f\") " pod="kube-system/kube-apiserver-ci-3510-3-2-9-f62ee4a992.novalocal" Feb 8 23:30:04.815336 kubelet[1908]: I0208 23:30:04.815308 1908 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/806b98447b30c7972139e42fc5806f3f-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510-3-2-9-f62ee4a992.novalocal\" (UID: \"806b98447b30c7972139e42fc5806f3f\") " pod="kube-system/kube-apiserver-ci-3510-3-2-9-f62ee4a992.novalocal" Feb 8 23:30:04.816275 kubelet[1908]: I0208 23:30:04.816245 1908 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e52b88751baad0b57cfd47314c58faa9-k8s-certs\") pod \"kube-controller-manager-ci-3510-3-2-9-f62ee4a992.novalocal\" (UID: \"e52b88751baad0b57cfd47314c58faa9\") " pod="kube-system/kube-controller-manager-ci-3510-3-2-9-f62ee4a992.novalocal" Feb 8 23:30:04.817741 kubelet[1908]: I0208 23:30:04.816523 1908 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e52b88751baad0b57cfd47314c58faa9-ca-certs\") pod \"kube-controller-manager-ci-3510-3-2-9-f62ee4a992.novalocal\" (UID: \"e52b88751baad0b57cfd47314c58faa9\") " pod="kube-system/kube-controller-manager-ci-3510-3-2-9-f62ee4a992.novalocal" Feb 8 23:30:04.818161 kubelet[1908]: I0208 23:30:04.818129 1908 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e52b88751baad0b57cfd47314c58faa9-flexvolume-dir\") pod \"kube-controller-manager-ci-3510-3-2-9-f62ee4a992.novalocal\" (UID: \"e52b88751baad0b57cfd47314c58faa9\") " pod="kube-system/kube-controller-manager-ci-3510-3-2-9-f62ee4a992.novalocal" Feb 8 23:30:04.818578 kubelet[1908]: I0208 23:30:04.818454 1908 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e52b88751baad0b57cfd47314c58faa9-kubeconfig\") pod \"kube-controller-manager-ci-3510-3-2-9-f62ee4a992.novalocal\" (UID: \"e52b88751baad0b57cfd47314c58faa9\") " pod="kube-system/kube-controller-manager-ci-3510-3-2-9-f62ee4a992.novalocal" Feb 8 23:30:04.818913 kubelet[1908]: I0208 23:30:04.818821 1908 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e52b88751baad0b57cfd47314c58faa9-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510-3-2-9-f62ee4a992.novalocal\" (UID: \"e52b88751baad0b57cfd47314c58faa9\") " pod="kube-system/kube-controller-manager-ci-3510-3-2-9-f62ee4a992.novalocal" Feb 8 23:30:05.017946 sudo[1918]: pam_unix(sudo:session): session closed for user root Feb 8 23:30:05.269961 kubelet[1908]: I0208 23:30:05.269914 1908 apiserver.go:52] "Watching apiserver" Feb 8 23:30:05.405818 kubelet[1908]: I0208 23:30:05.405777 1908 desired_state_of_world_populator.go:153] "Finished populating initial desired state of world" Feb 8 23:30:05.419982 kubelet[1908]: I0208 23:30:05.419911 1908 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3510-3-2-9-f62ee4a992.novalocal" podStartSLOduration=2.417883469 podCreationTimestamp="2024-02-08 23:30:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-08 23:30:05.415740556 +0000 UTC m=+1.285574681" watchObservedRunningTime="2024-02-08 23:30:05.417883469 +0000 UTC m=+1.287717574" Feb 8 23:30:05.423054 kubelet[1908]: I0208 23:30:05.423003 1908 reconciler.go:41] "Reconciler: start to sync state" Feb 8 23:30:05.436128 kubelet[1908]: I0208 23:30:05.436077 1908 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3510-3-2-9-f62ee4a992.novalocal" podStartSLOduration=1.436010587 podCreationTimestamp="2024-02-08 23:30:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-08 23:30:05.433573085 +0000 UTC m=+1.303407190" watchObservedRunningTime="2024-02-08 23:30:05.436010587 +0000 UTC m=+1.305844692" Feb 8 23:30:05.458260 kubelet[1908]: I0208 23:30:05.458204 1908 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3510-3-2-9-f62ee4a992.novalocal" podStartSLOduration=1.458139321 podCreationTimestamp="2024-02-08 23:30:04 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-08 23:30:05.445947313 +0000 UTC m=+1.315781428" watchObservedRunningTime="2024-02-08 23:30:05.458139321 +0000 UTC m=+1.327973416" Feb 8 23:30:07.093614 sudo[1163]: pam_unix(sudo:session): session closed for user root Feb 8 23:30:07.302277 sshd[1159]: pam_unix(sshd:session): session closed for user core Feb 8 23:30:07.307533 systemd[1]: sshd@4-172.24.4.155:22-172.24.4.1:35404.service: Deactivated successfully. Feb 8 23:30:07.309152 systemd[1]: session-5.scope: Deactivated successfully. Feb 8 23:30:07.309577 systemd[1]: session-5.scope: Consumed 6.471s CPU time. Feb 8 23:30:07.310791 systemd-logind[1059]: Session 5 logged out. Waiting for processes to exit. Feb 8 23:30:07.314034 systemd-logind[1059]: Removed session 5. Feb 8 23:30:16.006352 kubelet[1908]: I0208 23:30:16.006293 1908 kuberuntime_manager.go:1460] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 8 23:30:16.007343 env[1068]: time="2024-02-08T23:30:16.007169078Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Feb 8 23:30:16.008091 kubelet[1908]: I0208 23:30:16.008065 1908 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 8 23:30:16.736674 kubelet[1908]: I0208 23:30:16.736637 1908 topology_manager.go:212] "Topology Admit Handler" Feb 8 23:30:16.748445 kubelet[1908]: W0208 23:30:16.748416 1908 reflector.go:533] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-3510-3-2-9-f62ee4a992.novalocal" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510-3-2-9-f62ee4a992.novalocal' and this object Feb 8 23:30:16.748707 kubelet[1908]: E0208 23:30:16.748695 1908 reflector.go:148] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-3510-3-2-9-f62ee4a992.novalocal" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510-3-2-9-f62ee4a992.novalocal' and this object Feb 8 23:30:16.749288 systemd[1]: Created slice kubepods-besteffort-pod72839b10_6d02_417a_be1b_b3c5ab949f08.slice. Feb 8 23:30:16.751379 kubelet[1908]: W0208 23:30:16.751362 1908 reflector.go:533] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:ci-3510-3-2-9-f62ee4a992.novalocal" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510-3-2-9-f62ee4a992.novalocal' and this object Feb 8 23:30:16.751502 kubelet[1908]: E0208 23:30:16.751479 1908 reflector.go:148] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:ci-3510-3-2-9-f62ee4a992.novalocal" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510-3-2-9-f62ee4a992.novalocal' and this object Feb 8 23:30:16.776441 kubelet[1908]: I0208 23:30:16.776407 1908 topology_manager.go:212] "Topology Admit Handler" Feb 8 23:30:16.783412 systemd[1]: Created slice kubepods-burstable-poddd847ee7_2204_46a5_b620_dc3df38a981b.slice. 
Feb 8 23:30:16.808317 kubelet[1908]: I0208 23:30:16.808269 1908 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/dd847ee7-2204-46a5-b620-dc3df38a981b-cni-path\") pod \"cilium-psj8j\" (UID: \"dd847ee7-2204-46a5-b620-dc3df38a981b\") " pod="kube-system/cilium-psj8j" Feb 8 23:30:16.808569 kubelet[1908]: I0208 23:30:16.808340 1908 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/dd847ee7-2204-46a5-b620-dc3df38a981b-cilium-cgroup\") pod \"cilium-psj8j\" (UID: \"dd847ee7-2204-46a5-b620-dc3df38a981b\") " pod="kube-system/cilium-psj8j" Feb 8 23:30:16.808569 kubelet[1908]: I0208 23:30:16.808376 1908 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b78ts\" (UniqueName: \"kubernetes.io/projected/72839b10-6d02-417a-be1b-b3c5ab949f08-kube-api-access-b78ts\") pod \"kube-proxy-z7j44\" (UID: \"72839b10-6d02-417a-be1b-b3c5ab949f08\") " pod="kube-system/kube-proxy-z7j44" Feb 8 23:30:16.808569 kubelet[1908]: I0208 23:30:16.808421 1908 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/72839b10-6d02-417a-be1b-b3c5ab949f08-lib-modules\") pod \"kube-proxy-z7j44\" (UID: \"72839b10-6d02-417a-be1b-b3c5ab949f08\") " pod="kube-system/kube-proxy-z7j44" Feb 8 23:30:16.808569 kubelet[1908]: I0208 23:30:16.808450 1908 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/dd847ee7-2204-46a5-b620-dc3df38a981b-hubble-tls\") pod \"cilium-psj8j\" (UID: \"dd847ee7-2204-46a5-b620-dc3df38a981b\") " pod="kube-system/cilium-psj8j" Feb 8 23:30:16.808569 kubelet[1908]: I0208 23:30:16.808495 1908 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/72839b10-6d02-417a-be1b-b3c5ab949f08-kube-proxy\") pod \"kube-proxy-z7j44\" (UID: \"72839b10-6d02-417a-be1b-b3c5ab949f08\") " pod="kube-system/kube-proxy-z7j44" Feb 8 23:30:16.808743 kubelet[1908]: I0208 23:30:16.808523 1908 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lnmpb\" (UniqueName: \"kubernetes.io/projected/dd847ee7-2204-46a5-b620-dc3df38a981b-kube-api-access-lnmpb\") pod \"cilium-psj8j\" (UID: \"dd847ee7-2204-46a5-b620-dc3df38a981b\") " pod="kube-system/cilium-psj8j" Feb 8 23:30:16.808743 kubelet[1908]: I0208 23:30:16.808564 1908 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/dd847ee7-2204-46a5-b620-dc3df38a981b-bpf-maps\") pod \"cilium-psj8j\" (UID: \"dd847ee7-2204-46a5-b620-dc3df38a981b\") " pod="kube-system/cilium-psj8j" Feb 8 23:30:16.808743 kubelet[1908]: I0208 23:30:16.808595 1908 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dd847ee7-2204-46a5-b620-dc3df38a981b-lib-modules\") pod \"cilium-psj8j\" (UID: \"dd847ee7-2204-46a5-b620-dc3df38a981b\") " pod="kube-system/cilium-psj8j" Feb 8 23:30:16.808743 kubelet[1908]: I0208 23:30:16.808623 1908 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" 
(UniqueName: \"kubernetes.io/secret/dd847ee7-2204-46a5-b620-dc3df38a981b-clustermesh-secrets\") pod \"cilium-psj8j\" (UID: \"dd847ee7-2204-46a5-b620-dc3df38a981b\") " pod="kube-system/cilium-psj8j" Feb 8 23:30:16.808743 kubelet[1908]: I0208 23:30:16.808678 1908 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/dd847ee7-2204-46a5-b620-dc3df38a981b-etc-cni-netd\") pod \"cilium-psj8j\" (UID: \"dd847ee7-2204-46a5-b620-dc3df38a981b\") " pod="kube-system/cilium-psj8j" Feb 8 23:30:16.808743 kubelet[1908]: I0208 23:30:16.808706 1908 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/dd847ee7-2204-46a5-b620-dc3df38a981b-cilium-config-path\") pod \"cilium-psj8j\" (UID: \"dd847ee7-2204-46a5-b620-dc3df38a981b\") " pod="kube-system/cilium-psj8j" Feb 8 23:30:16.808978 kubelet[1908]: I0208 23:30:16.808750 1908 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/dd847ee7-2204-46a5-b620-dc3df38a981b-host-proc-sys-kernel\") pod \"cilium-psj8j\" (UID: \"dd847ee7-2204-46a5-b620-dc3df38a981b\") " pod="kube-system/cilium-psj8j" Feb 8 23:30:16.808978 kubelet[1908]: I0208 23:30:16.808776 1908 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/dd847ee7-2204-46a5-b620-dc3df38a981b-cilium-run\") pod \"cilium-psj8j\" (UID: \"dd847ee7-2204-46a5-b620-dc3df38a981b\") " pod="kube-system/cilium-psj8j" Feb 8 23:30:16.808978 kubelet[1908]: I0208 23:30:16.808904 1908 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dd847ee7-2204-46a5-b620-dc3df38a981b-xtables-lock\") pod \"cilium-psj8j\" (UID: \"dd847ee7-2204-46a5-b620-dc3df38a981b\") " pod="kube-system/cilium-psj8j" Feb 8 23:30:16.808978 kubelet[1908]: I0208 23:30:16.808961 1908 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/72839b10-6d02-417a-be1b-b3c5ab949f08-xtables-lock\") pod \"kube-proxy-z7j44\" (UID: \"72839b10-6d02-417a-be1b-b3c5ab949f08\") " pod="kube-system/kube-proxy-z7j44" Feb 8 23:30:16.809119 kubelet[1908]: I0208 23:30:16.809089 1908 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/dd847ee7-2204-46a5-b620-dc3df38a981b-hostproc\") pod \"cilium-psj8j\" (UID: \"dd847ee7-2204-46a5-b620-dc3df38a981b\") " pod="kube-system/cilium-psj8j" Feb 8 23:30:16.809254 kubelet[1908]: I0208 23:30:16.809127 1908 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/dd847ee7-2204-46a5-b620-dc3df38a981b-host-proc-sys-net\") pod \"cilium-psj8j\" (UID: \"dd847ee7-2204-46a5-b620-dc3df38a981b\") " pod="kube-system/cilium-psj8j" Feb 8 23:30:16.947836 kubelet[1908]: I0208 23:30:16.946801 1908 topology_manager.go:212] "Topology Admit Handler" Feb 8 23:30:16.953993 systemd[1]: Created slice kubepods-besteffort-podbf45b0f7_7c3a_4e1e_b509_ef5ba4bb83a3.slice. 
Feb 8 23:30:17.011167 kubelet[1908]: I0208 23:30:17.011031 1908 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ndxq5\" (UniqueName: \"kubernetes.io/projected/bf45b0f7-7c3a-4e1e-b509-ef5ba4bb83a3-kube-api-access-ndxq5\") pod \"cilium-operator-574c4bb98d-cdz7s\" (UID: \"bf45b0f7-7c3a-4e1e-b509-ef5ba4bb83a3\") " pod="kube-system/cilium-operator-574c4bb98d-cdz7s" Feb 8 23:30:17.011167 kubelet[1908]: I0208 23:30:17.011135 1908 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bf45b0f7-7c3a-4e1e-b509-ef5ba4bb83a3-cilium-config-path\") pod \"cilium-operator-574c4bb98d-cdz7s\" (UID: \"bf45b0f7-7c3a-4e1e-b509-ef5ba4bb83a3\") " pod="kube-system/cilium-operator-574c4bb98d-cdz7s" Feb 8 23:30:17.691192 env[1068]: time="2024-02-08T23:30:17.690389041Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-psj8j,Uid:dd847ee7-2204-46a5-b620-dc3df38a981b,Namespace:kube-system,Attempt:0,}" Feb 8 23:30:17.718671 env[1068]: time="2024-02-08T23:30:17.718303916Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 8 23:30:17.718671 env[1068]: time="2024-02-08T23:30:17.718393127Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 8 23:30:17.718671 env[1068]: time="2024-02-08T23:30:17.718424586Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 8 23:30:17.719335 env[1068]: time="2024-02-08T23:30:17.719133264Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/01e5782510358385ad4d7702b9d9469947dba0d161e1d7e7558f8eb7d3cede07 pid=1992 runtime=io.containerd.runc.v2 Feb 8 23:30:17.745462 systemd[1]: Started cri-containerd-01e5782510358385ad4d7702b9d9469947dba0d161e1d7e7558f8eb7d3cede07.scope. Feb 8 23:30:17.810419 env[1068]: time="2024-02-08T23:30:17.810364947Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-psj8j,Uid:dd847ee7-2204-46a5-b620-dc3df38a981b,Namespace:kube-system,Attempt:0,} returns sandbox id \"01e5782510358385ad4d7702b9d9469947dba0d161e1d7e7558f8eb7d3cede07\"" Feb 8 23:30:17.814672 env[1068]: time="2024-02-08T23:30:17.812680551Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Feb 8 23:30:17.859007 env[1068]: time="2024-02-08T23:30:17.858937064Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-574c4bb98d-cdz7s,Uid:bf45b0f7-7c3a-4e1e-b509-ef5ba4bb83a3,Namespace:kube-system,Attempt:0,}" Feb 8 23:30:17.897356 env[1068]: time="2024-02-08T23:30:17.897091602Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 8 23:30:17.897356 env[1068]: time="2024-02-08T23:30:17.897193491Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 8 23:30:17.898104 env[1068]: time="2024-02-08T23:30:17.897730762Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 8 23:30:17.898622 env[1068]: time="2024-02-08T23:30:17.898457759Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/de50f18b8bdbe4f38f8136cd2cd161351b74eb6a1943aaaabda88633c8f3f7fa pid=2035 runtime=io.containerd.runc.v2 Feb 8 23:30:17.912068 kubelet[1908]: E0208 23:30:17.912018 1908 configmap.go:199] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition Feb 8 23:30:17.921353 kubelet[1908]: E0208 23:30:17.912117 1908 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/72839b10-6d02-417a-be1b-b3c5ab949f08-kube-proxy podName:72839b10-6d02-417a-be1b-b3c5ab949f08 nodeName:}" failed. No retries permitted until 2024-02-08 23:30:18.412091581 +0000 UTC m=+14.281925676 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/72839b10-6d02-417a-be1b-b3c5ab949f08-kube-proxy") pod "kube-proxy-z7j44" (UID: "72839b10-6d02-417a-be1b-b3c5ab949f08") : failed to sync configmap cache: timed out waiting for the condition Feb 8 23:30:17.948270 systemd[1]: Started cri-containerd-de50f18b8bdbe4f38f8136cd2cd161351b74eb6a1943aaaabda88633c8f3f7fa.scope. Feb 8 23:30:17.950101 systemd[1]: run-containerd-runc-k8s.io-de50f18b8bdbe4f38f8136cd2cd161351b74eb6a1943aaaabda88633c8f3f7fa-runc.Bnj0jp.mount: Deactivated successfully. Feb 8 23:30:18.012916 env[1068]: time="2024-02-08T23:30:18.012852213Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-574c4bb98d-cdz7s,Uid:bf45b0f7-7c3a-4e1e-b509-ef5ba4bb83a3,Namespace:kube-system,Attempt:0,} returns sandbox id \"de50f18b8bdbe4f38f8136cd2cd161351b74eb6a1943aaaabda88633c8f3f7fa\"" Feb 8 23:30:18.561408 env[1068]: time="2024-02-08T23:30:18.561317313Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-z7j44,Uid:72839b10-6d02-417a-be1b-b3c5ab949f08,Namespace:kube-system,Attempt:0,}" Feb 8 23:30:18.604254 env[1068]: time="2024-02-08T23:30:18.603949410Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 8 23:30:18.604531 env[1068]: time="2024-02-08T23:30:18.604273159Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 8 23:30:18.604531 env[1068]: time="2024-02-08T23:30:18.604352344Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 8 23:30:18.605070 env[1068]: time="2024-02-08T23:30:18.604908819Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ce3d3e604970235622b7f5eeb0635cb95d18dbe58379e7e64138ea398c6f66fc pid=2077 runtime=io.containerd.runc.v2 Feb 8 23:30:18.643021 systemd[1]: Started cri-containerd-ce3d3e604970235622b7f5eeb0635cb95d18dbe58379e7e64138ea398c6f66fc.scope. 
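The MountVolume.SetUp failure above is parked for half a second ("durationBeforeRetry 500ms") before the next attempt; Kubernetes' nested pending operations grow that delay exponentially on repeated failures. A sketch of the doubling, where only the 500ms initial value comes from the log and the doubling factor and cap are assumptions:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Initial delay from the "durationBeforeRetry 500ms" entry above;
	// the factor of 2 and the 2m ceiling are assumed, not logged.
	backoff, maxDelay := 500*time.Millisecond, 2*time.Minute
	for i := 0; i < 10; i++ {
		fmt.Printf("retry %d deferred by %s\n", i+1, backoff)
		backoff *= 2
		if backoff > maxDelay {
			backoff = maxDelay
		}
	}
}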
Feb 8 23:30:18.695514 env[1068]: time="2024-02-08T23:30:18.695434569Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-z7j44,Uid:72839b10-6d02-417a-be1b-b3c5ab949f08,Namespace:kube-system,Attempt:0,} returns sandbox id \"ce3d3e604970235622b7f5eeb0635cb95d18dbe58379e7e64138ea398c6f66fc\"" Feb 8 23:30:18.702978 env[1068]: time="2024-02-08T23:30:18.702911910Z" level=info msg="CreateContainer within sandbox \"ce3d3e604970235622b7f5eeb0635cb95d18dbe58379e7e64138ea398c6f66fc\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 8 23:30:18.736650 env[1068]: time="2024-02-08T23:30:18.736479068Z" level=info msg="CreateContainer within sandbox \"ce3d3e604970235622b7f5eeb0635cb95d18dbe58379e7e64138ea398c6f66fc\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"fb6485f85d22c13962e34b7423b67c07052439479a1656b059662a7cf7e438ae\"" Feb 8 23:30:18.740714 env[1068]: time="2024-02-08T23:30:18.740492080Z" level=info msg="StartContainer for \"fb6485f85d22c13962e34b7423b67c07052439479a1656b059662a7cf7e438ae\"" Feb 8 23:30:18.776185 systemd[1]: Started cri-containerd-fb6485f85d22c13962e34b7423b67c07052439479a1656b059662a7cf7e438ae.scope. Feb 8 23:30:18.836953 env[1068]: time="2024-02-08T23:30:18.835882321Z" level=info msg="StartContainer for \"fb6485f85d22c13962e34b7423b67c07052439479a1656b059662a7cf7e438ae\" returns successfully" Feb 8 23:30:19.588046 kubelet[1908]: I0208 23:30:19.587622 1908 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-z7j44" podStartSLOduration=3.587549541 podCreationTimestamp="2024-02-08 23:30:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-08 23:30:19.577349115 +0000 UTC m=+15.447183250" watchObservedRunningTime="2024-02-08 23:30:19.587549541 +0000 UTC m=+15.457383646" Feb 8 23:30:25.130963 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2427253353.mount: Deactivated successfully. 
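Mount units such as var-lib-containerd-tmpmounts-containerd\x2dmount2427253353.mount above use systemd's unit-name escaping, where "-" stands for "/" and "\x2d" for a literal dash. A partial decoder sketch; systemd-escape(1) handles more cases, such as other \xNN codes:

package main

import (
	"fmt"
	"strings"
)

// unescapeMountUnit reverses systemd's unit-name escaping for the
// common cases seen in this log. Partial sketch only.
func unescapeMountUnit(unit string) string {
	name := strings.TrimSuffix(unit, ".mount")
	name = strings.ReplaceAll(name, `\x2d`, "\x00") // protect literal dashes
	name = strings.ReplaceAll(name, "-", "/")
	name = strings.ReplaceAll(name, "\x00", "-")
	return "/" + name
}

func main() {
	fmt.Println(unescapeMountUnit(`var-lib-containerd-tmpmounts-containerd\x2dmount2427253353.mount`))
	// /var/lib/containerd/tmpmounts/containerd-mount2427253353
}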
Feb 8 23:30:29.633083 env[1068]: time="2024-02-08T23:30:29.632938196Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:30:29.640406 env[1068]: time="2024-02-08T23:30:29.640331546Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:30:29.646731 env[1068]: time="2024-02-08T23:30:29.646319709Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:30:29.650358 env[1068]: time="2024-02-08T23:30:29.648471346Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Feb 8 23:30:29.651899 env[1068]: time="2024-02-08T23:30:29.651838468Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Feb 8 23:30:29.665369 env[1068]: time="2024-02-08T23:30:29.665287301Z" level=info msg="CreateContainer within sandbox \"01e5782510358385ad4d7702b9d9469947dba0d161e1d7e7558f8eb7d3cede07\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 8 23:30:29.696733 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3615683375.mount: Deactivated successfully. Feb 8 23:30:29.706249 env[1068]: time="2024-02-08T23:30:29.706152395Z" level=info msg="CreateContainer within sandbox \"01e5782510358385ad4d7702b9d9469947dba0d161e1d7e7558f8eb7d3cede07\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"bfb9b7ac1b1a9d9c5b5a136c6dfb72588a2415c870050019c5454e451046fce7\"" Feb 8 23:30:29.709764 env[1068]: time="2024-02-08T23:30:29.707423751Z" level=info msg="StartContainer for \"bfb9b7ac1b1a9d9c5b5a136c6dfb72588a2415c870050019c5454e451046fce7\"" Feb 8 23:30:29.737669 systemd[1]: Started cri-containerd-bfb9b7ac1b1a9d9c5b5a136c6dfb72588a2415c870050019c5454e451046fce7.scope. Feb 8 23:30:29.787931 env[1068]: time="2024-02-08T23:30:29.787851592Z" level=info msg="StartContainer for \"bfb9b7ac1b1a9d9c5b5a136c6dfb72588a2415c870050019c5454e451046fce7\" returns successfully" Feb 8 23:30:29.795329 systemd[1]: cri-containerd-bfb9b7ac1b1a9d9c5b5a136c6dfb72588a2415c870050019c5454e451046fce7.scope: Deactivated successfully. 
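The pull above resolves a tag-plus-digest reference (quay.io/cilium/cilium:v1.12.5@sha256:06ce2b...) to a local image ID. A rough splitter for that reference shape; a real parser such as the distribution/reference library also handles registry ports and digest-only references:

package main

import (
	"fmt"
	"strings"
)

// splitPinnedRef splits "repo:tag@sha256:..." into its parts.
// Illustrative only; it would misparse a registry host with a port.
func splitPinnedRef(ref string) (repo, tag, digest string) {
	rest := ref
	if i := strings.Index(rest, "@"); i >= 0 {
		digest = rest[i+1:]
		rest = rest[:i]
	}
	if i := strings.LastIndex(rest, ":"); i >= 0 {
		repo, tag = rest[:i], rest[i+1:]
	} else {
		repo = rest
	}
	return repo, tag, digest
}

func main() {
	// Reference copied from the PullImage entry above.
	repo, tag, digest := splitPinnedRef(
		"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5")
	fmt.Println(repo, tag, digest)
}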
Feb 8 23:30:30.104510 env[1068]: time="2024-02-08T23:30:30.104197068Z" level=info msg="shim disconnected" id=bfb9b7ac1b1a9d9c5b5a136c6dfb72588a2415c870050019c5454e451046fce7 Feb 8 23:30:30.104940 env[1068]: time="2024-02-08T23:30:30.104512129Z" level=warning msg="cleaning up after shim disconnected" id=bfb9b7ac1b1a9d9c5b5a136c6dfb72588a2415c870050019c5454e451046fce7 namespace=k8s.io Feb 8 23:30:30.104940 env[1068]: time="2024-02-08T23:30:30.104556983Z" level=info msg="cleaning up dead shim" Feb 8 23:30:30.132354 env[1068]: time="2024-02-08T23:30:30.132264481Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:30:30Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2309 runtime=io.containerd.runc.v2\n" Feb 8 23:30:30.599117 env[1068]: time="2024-02-08T23:30:30.596275164Z" level=info msg="CreateContainer within sandbox \"01e5782510358385ad4d7702b9d9469947dba0d161e1d7e7558f8eb7d3cede07\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 8 23:30:30.693499 systemd[1]: run-containerd-runc-k8s.io-bfb9b7ac1b1a9d9c5b5a136c6dfb72588a2415c870050019c5454e451046fce7-runc.LXdiiC.mount: Deactivated successfully. Feb 8 23:30:30.693759 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bfb9b7ac1b1a9d9c5b5a136c6dfb72588a2415c870050019c5454e451046fce7-rootfs.mount: Deactivated successfully. Feb 8 23:30:31.138876 env[1068]: time="2024-02-08T23:30:31.138206641Z" level=info msg="CreateContainer within sandbox \"01e5782510358385ad4d7702b9d9469947dba0d161e1d7e7558f8eb7d3cede07\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"170541a3e33dfb2e91db64ba4e5e33c9c05d2862701ec5586afd340300d7c4ff\"" Feb 8 23:30:31.145100 env[1068]: time="2024-02-08T23:30:31.145029887Z" level=info msg="StartContainer for \"170541a3e33dfb2e91db64ba4e5e33c9c05d2862701ec5586afd340300d7c4ff\"" Feb 8 23:30:31.207402 systemd[1]: Started cri-containerd-170541a3e33dfb2e91db64ba4e5e33c9c05d2862701ec5586afd340300d7c4ff.scope. Feb 8 23:30:31.258288 env[1068]: time="2024-02-08T23:30:31.258064968Z" level=info msg="StartContainer for \"170541a3e33dfb2e91db64ba4e5e33c9c05d2862701ec5586afd340300d7c4ff\" returns successfully" Feb 8 23:30:31.264861 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 8 23:30:31.265316 systemd[1]: Stopped systemd-sysctl.service. Feb 8 23:30:31.265855 systemd[1]: Stopping systemd-sysctl.service... Feb 8 23:30:31.269308 systemd[1]: Starting systemd-sysctl.service... Feb 8 23:30:31.276140 systemd[1]: cri-containerd-170541a3e33dfb2e91db64ba4e5e33c9c05d2862701ec5586afd340300d7c4ff.scope: Deactivated successfully. Feb 8 23:30:31.305775 systemd[1]: Finished systemd-sysctl.service. 
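Cilium's apply-sysctl-overwrites init container (started above) adjusts kernel parameters under /proc/sys (the exact keys vary by version), and the stop/start of systemd-sysctl.service in the log suggests the host re-applying its own configured values afterwards. Reading one such parameter from Go:

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	// net.ipv4.ip_forward is a representative key, not one named in
	// the log; sysctls map 1:1 onto files under /proc/sys.
	b, err := os.ReadFile("/proc/sys/net/ipv4/ip_forward")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("net.ipv4.ip_forward =", strings.TrimSpace(string(b)))
}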
Feb 8 23:30:31.318283 env[1068]: time="2024-02-08T23:30:31.318173520Z" level=info msg="shim disconnected" id=170541a3e33dfb2e91db64ba4e5e33c9c05d2862701ec5586afd340300d7c4ff Feb 8 23:30:31.318283 env[1068]: time="2024-02-08T23:30:31.318250968Z" level=warning msg="cleaning up after shim disconnected" id=170541a3e33dfb2e91db64ba4e5e33c9c05d2862701ec5586afd340300d7c4ff namespace=k8s.io Feb 8 23:30:31.318283 env[1068]: time="2024-02-08T23:30:31.318267635Z" level=info msg="cleaning up dead shim" Feb 8 23:30:31.329319 env[1068]: time="2024-02-08T23:30:31.329263607Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:30:31Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2375 runtime=io.containerd.runc.v2\n" Feb 8 23:30:31.611659 env[1068]: time="2024-02-08T23:30:31.611575663Z" level=info msg="CreateContainer within sandbox \"01e5782510358385ad4d7702b9d9469947dba0d161e1d7e7558f8eb7d3cede07\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 8 23:30:31.650588 env[1068]: time="2024-02-08T23:30:31.650495013Z" level=info msg="CreateContainer within sandbox \"01e5782510358385ad4d7702b9d9469947dba0d161e1d7e7558f8eb7d3cede07\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"c90c6c8f39af9beba1f4da2b7b9f9a5e5bb1d9cf49c303b0f51566df3b1d4f46\"" Feb 8 23:30:31.655794 env[1068]: time="2024-02-08T23:30:31.655329288Z" level=info msg="StartContainer for \"c90c6c8f39af9beba1f4da2b7b9f9a5e5bb1d9cf49c303b0f51566df3b1d4f46\"" Feb 8 23:30:31.692161 systemd[1]: run-containerd-runc-k8s.io-170541a3e33dfb2e91db64ba4e5e33c9c05d2862701ec5586afd340300d7c4ff-runc.4PtgsP.mount: Deactivated successfully. Feb 8 23:30:31.692541 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-170541a3e33dfb2e91db64ba4e5e33c9c05d2862701ec5586afd340300d7c4ff-rootfs.mount: Deactivated successfully. Feb 8 23:30:31.707066 systemd[1]: Started cri-containerd-c90c6c8f39af9beba1f4da2b7b9f9a5e5bb1d9cf49c303b0f51566df3b1d4f46.scope. Feb 8 23:30:31.726956 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1583091683.mount: Deactivated successfully. Feb 8 23:30:31.756857 systemd[1]: cri-containerd-c90c6c8f39af9beba1f4da2b7b9f9a5e5bb1d9cf49c303b0f51566df3b1d4f46.scope: Deactivated successfully. Feb 8 23:30:31.771266 env[1068]: time="2024-02-08T23:30:31.771175355Z" level=info msg="StartContainer for \"c90c6c8f39af9beba1f4da2b7b9f9a5e5bb1d9cf49c303b0f51566df3b1d4f46\" returns successfully" Feb 8 23:30:31.797192 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c90c6c8f39af9beba1f4da2b7b9f9a5e5bb1d9cf49c303b0f51566df3b1d4f46-rootfs.mount: Deactivated successfully. 
Feb 8 23:30:31.813801 env[1068]: time="2024-02-08T23:30:31.813753668Z" level=info msg="shim disconnected" id=c90c6c8f39af9beba1f4da2b7b9f9a5e5bb1d9cf49c303b0f51566df3b1d4f46 Feb 8 23:30:31.814023 env[1068]: time="2024-02-08T23:30:31.814001550Z" level=warning msg="cleaning up after shim disconnected" id=c90c6c8f39af9beba1f4da2b7b9f9a5e5bb1d9cf49c303b0f51566df3b1d4f46 namespace=k8s.io Feb 8 23:30:31.814096 env[1068]: time="2024-02-08T23:30:31.814082023Z" level=info msg="cleaning up dead shim" Feb 8 23:30:31.824145 env[1068]: time="2024-02-08T23:30:31.824070973Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:30:31Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2431 runtime=io.containerd.runc.v2\n" Feb 8 23:30:32.613545 env[1068]: time="2024-02-08T23:30:32.613476188Z" level=info msg="CreateContainer within sandbox \"01e5782510358385ad4d7702b9d9469947dba0d161e1d7e7558f8eb7d3cede07\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 8 23:30:32.652041 env[1068]: time="2024-02-08T23:30:32.651970367Z" level=info msg="CreateContainer within sandbox \"01e5782510358385ad4d7702b9d9469947dba0d161e1d7e7558f8eb7d3cede07\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"8f1a5f984dded3e07349eca8b92945dee4bd12d35876d369f26f82e25a73c178\"" Feb 8 23:30:32.653264 env[1068]: time="2024-02-08T23:30:32.653231938Z" level=info msg="StartContainer for \"8f1a5f984dded3e07349eca8b92945dee4bd12d35876d369f26f82e25a73c178\"" Feb 8 23:30:32.871948 systemd[1]: Started cri-containerd-8f1a5f984dded3e07349eca8b92945dee4bd12d35876d369f26f82e25a73c178.scope. Feb 8 23:30:32.925979 systemd[1]: cri-containerd-8f1a5f984dded3e07349eca8b92945dee4bd12d35876d369f26f82e25a73c178.scope: Deactivated successfully. Feb 8 23:30:32.934445 env[1068]: time="2024-02-08T23:30:32.934396033Z" level=info msg="StartContainer for \"8f1a5f984dded3e07349eca8b92945dee4bd12d35876d369f26f82e25a73c178\" returns successfully" Feb 8 23:30:32.935021 env[1068]: time="2024-02-08T23:30:32.930012174Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddd847ee7_2204_46a5_b620_dc3df38a981b.slice/cri-containerd-8f1a5f984dded3e07349eca8b92945dee4bd12d35876d369f26f82e25a73c178.scope/memory.events\": no such file or directory" Feb 8 23:30:32.963788 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8f1a5f984dded3e07349eca8b92945dee4bd12d35876d369f26f82e25a73c178-rootfs.mount: Deactivated successfully. 
Feb 8 23:30:33.101081 env[1068]: time="2024-02-08T23:30:33.100947904Z" level=info msg="shim disconnected" id=8f1a5f984dded3e07349eca8b92945dee4bd12d35876d369f26f82e25a73c178 Feb 8 23:30:33.101081 env[1068]: time="2024-02-08T23:30:33.101072492Z" level=warning msg="cleaning up after shim disconnected" id=8f1a5f984dded3e07349eca8b92945dee4bd12d35876d369f26f82e25a73c178 namespace=k8s.io Feb 8 23:30:33.101759 env[1068]: time="2024-02-08T23:30:33.101098185Z" level=info msg="cleaning up dead shim" Feb 8 23:30:33.130160 env[1068]: time="2024-02-08T23:30:33.129548606Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:30:33Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2489 runtime=io.containerd.runc.v2\n" Feb 8 23:30:33.300934 env[1068]: time="2024-02-08T23:30:33.300857026Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:30:33.303560 env[1068]: time="2024-02-08T23:30:33.303523560Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:30:33.306269 env[1068]: time="2024-02-08T23:30:33.306204498Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:30:33.306580 env[1068]: time="2024-02-08T23:30:33.306537163Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Feb 8 23:30:33.313010 env[1068]: time="2024-02-08T23:30:33.312963136Z" level=info msg="CreateContainer within sandbox \"de50f18b8bdbe4f38f8136cd2cd161351b74eb6a1943aaaabda88633c8f3f7fa\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Feb 8 23:30:33.338110 env[1068]: time="2024-02-08T23:30:33.338038391Z" level=info msg="CreateContainer within sandbox \"de50f18b8bdbe4f38f8136cd2cd161351b74eb6a1943aaaabda88633c8f3f7fa\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"fa653a0ef9603b5c4f5a2893280ca5307dcf351d2a220a4efdb1811bf47f1855\"" Feb 8 23:30:33.339072 env[1068]: time="2024-02-08T23:30:33.339041136Z" level=info msg="StartContainer for \"fa653a0ef9603b5c4f5a2893280ca5307dcf351d2a220a4efdb1811bf47f1855\"" Feb 8 23:30:33.364162 systemd[1]: Started cri-containerd-fa653a0ef9603b5c4f5a2893280ca5307dcf351d2a220a4efdb1811bf47f1855.scope. 
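The warning above ("failed to add inotify watch for .../memory.events: no such file or directory") happens because clean-cilium-state exits before the runtime can attach a watch to the scope's cgroup. Under cgroup v2, memory.events is a plain key/value file; a hedged sketch of reading one, with a hypothetical path modeled on the slice layout in the log:

```go
package main

import (
	"fmt"
	"log"
	"os"
	"strconv"
	"strings"
)

// readMemoryEvents parses a cgroup v2 memory.events file, whose format is
// one "key value" pair per line (low, high, max, oom, oom_kill).
func readMemoryEvents(cgroupPath string) (map[string]uint64, error) {
	data, err := os.ReadFile(cgroupPath + "/memory.events")
	if err != nil {
		return nil, err // e.g. ENOENT once the scope's cgroup is gone
	}
	events := make(map[string]uint64)
	for _, line := range strings.Split(strings.TrimSpace(string(data)), "\n") {
		fields := strings.Fields(line)
		if len(fields) != 2 {
			continue
		}
		n, err := strconv.ParseUint(fields[1], 10, 64)
		if err != nil {
			return nil, err
		}
		events[fields[0]] = n
	}
	return events, nil
}

func main() {
	// Hypothetical path, modeled on the kubepods slice hierarchy in the log.
	path := "/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice"
	ev, err := readMemoryEvents(path)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("oom_kill events:", ev["oom_kill"])
}
```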
Feb 8 23:30:33.416914 env[1068]: time="2024-02-08T23:30:33.416444283Z" level=info msg="StartContainer for \"fa653a0ef9603b5c4f5a2893280ca5307dcf351d2a220a4efdb1811bf47f1855\" returns successfully" Feb 8 23:30:33.613604 env[1068]: time="2024-02-08T23:30:33.613555481Z" level=info msg="CreateContainer within sandbox \"01e5782510358385ad4d7702b9d9469947dba0d161e1d7e7558f8eb7d3cede07\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 8 23:30:33.638824 kubelet[1908]: I0208 23:30:33.638208 1908 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-574c4bb98d-cdz7s" podStartSLOduration=2.345154012 podCreationTimestamp="2024-02-08 23:30:16 +0000 UTC" firstStartedPulling="2024-02-08 23:30:18.014399307 +0000 UTC m=+13.884233412" lastFinishedPulling="2024-02-08 23:30:33.307395938 +0000 UTC m=+29.177230033" observedRunningTime="2024-02-08 23:30:33.637616771 +0000 UTC m=+29.507450866" watchObservedRunningTime="2024-02-08 23:30:33.638150633 +0000 UTC m=+29.507984738" Feb 8 23:30:33.646351 env[1068]: time="2024-02-08T23:30:33.646299815Z" level=info msg="CreateContainer within sandbox \"01e5782510358385ad4d7702b9d9469947dba0d161e1d7e7558f8eb7d3cede07\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"e67e410872b1a23a14a736c4a58e071c285bcca47322990c35327cf26fa42969\"" Feb 8 23:30:33.647441 env[1068]: time="2024-02-08T23:30:33.647413034Z" level=info msg="StartContainer for \"e67e410872b1a23a14a736c4a58e071c285bcca47322990c35327cf26fa42969\"" Feb 8 23:30:33.674078 systemd[1]: Started cri-containerd-e67e410872b1a23a14a736c4a58e071c285bcca47322990c35327cf26fa42969.scope. Feb 8 23:30:33.766571 env[1068]: time="2024-02-08T23:30:33.766503974Z" level=info msg="StartContainer for \"e67e410872b1a23a14a736c4a58e071c285bcca47322990c35327cf26fa42969\" returns successfully" Feb 8 23:30:33.789138 systemd[1]: run-containerd-runc-k8s.io-e67e410872b1a23a14a736c4a58e071c285bcca47322990c35327cf26fa42969-runc.YwqYCN.mount: Deactivated successfully. Feb 8 23:30:34.041066 kubelet[1908]: I0208 23:30:34.040481 1908 kubelet_node_status.go:493] "Fast updating node status as it just became ready" Feb 8 23:30:34.127679 kubelet[1908]: I0208 23:30:34.127624 1908 topology_manager.go:212] "Topology Admit Handler" Feb 8 23:30:34.129085 kubelet[1908]: I0208 23:30:34.128948 1908 topology_manager.go:212] "Topology Admit Handler" Feb 8 23:30:34.136716 systemd[1]: Created slice kubepods-burstable-pod52ca02af_6bad_4417_8fdc_96d45fcee9b4.slice. Feb 8 23:30:34.150403 systemd[1]: Created slice kubepods-burstable-podf1e96ad9_9773_43ab_93c0_b72edce77044.slice. 
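The podStartSLOduration=2.345154012 reported for cilium-operator appears to be creation-to-running time with the image pull window subtracted — a reading of kubelet's pod_startup_latency_tracker, not something the log states. Reproducing the arithmetic from the timestamps in that line, as a sketch:

```go
package main

import (
	"fmt"
	"time"
)

// mustParse uses Go's default time.Time formatting, which matches the
// timestamps kubelet prints in the latency-tracker log line.
func mustParse(s string) time.Time {
	t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	// Timestamps copied from the cilium-operator startup-latency entry above.
	created := mustParse("2024-02-08 23:30:16 +0000 UTC")
	firstPull := mustParse("2024-02-08 23:30:18.014399307 +0000 UTC")
	lastPull := mustParse("2024-02-08 23:30:33.307395938 +0000 UTC")
	running := mustParse("2024-02-08 23:30:33.637616771 +0000 UTC")

	pull := lastPull.Sub(firstPull)
	slo := running.Sub(created) - pull
	fmt.Printf("image pull: %v, start SLO duration: %v\n", pull, slo)
}
```

This lands within about a millisecond of the logged 2.345154012s; the residue is consistent with podCreationTimestamp being truncated to whole seconds in the log line.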
Feb 8 23:30:34.157554 kubelet[1908]: I0208 23:30:34.157529 1908 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wcpdv\" (UniqueName: \"kubernetes.io/projected/52ca02af-6bad-4417-8fdc-96d45fcee9b4-kube-api-access-wcpdv\") pod \"coredns-5d78c9869d-9d4xh\" (UID: \"52ca02af-6bad-4417-8fdc-96d45fcee9b4\") " pod="kube-system/coredns-5d78c9869d-9d4xh" Feb 8 23:30:34.157770 kubelet[1908]: I0208 23:30:34.157758 1908 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/52ca02af-6bad-4417-8fdc-96d45fcee9b4-config-volume\") pod \"coredns-5d78c9869d-9d4xh\" (UID: \"52ca02af-6bad-4417-8fdc-96d45fcee9b4\") " pod="kube-system/coredns-5d78c9869d-9d4xh" Feb 8 23:30:34.260049 kubelet[1908]: I0208 23:30:34.259950 1908 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f1e96ad9-9773-43ab-93c0-b72edce77044-config-volume\") pod \"coredns-5d78c9869d-mbd26\" (UID: \"f1e96ad9-9773-43ab-93c0-b72edce77044\") " pod="kube-system/coredns-5d78c9869d-mbd26" Feb 8 23:30:34.260544 kubelet[1908]: I0208 23:30:34.260530 1908 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n5z9s\" (UniqueName: \"kubernetes.io/projected/f1e96ad9-9773-43ab-93c0-b72edce77044-kube-api-access-n5z9s\") pod \"coredns-5d78c9869d-mbd26\" (UID: \"f1e96ad9-9773-43ab-93c0-b72edce77044\") " pod="kube-system/coredns-5d78c9869d-mbd26" Feb 8 23:30:34.447206 env[1068]: time="2024-02-08T23:30:34.446605622Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5d78c9869d-9d4xh,Uid:52ca02af-6bad-4417-8fdc-96d45fcee9b4,Namespace:kube-system,Attempt:0,}" Feb 8 23:30:34.759534 env[1068]: time="2024-02-08T23:30:34.759353691Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5d78c9869d-mbd26,Uid:f1e96ad9-9773-43ab-93c0-b72edce77044,Namespace:kube-system,Attempt:0,}" Feb 8 23:30:36.973303 systemd-networkd[976]: cilium_host: Link UP Feb 8 23:30:36.973912 systemd-networkd[976]: cilium_net: Link UP Feb 8 23:30:36.978183 systemd-networkd[976]: cilium_net: Gained carrier Feb 8 23:30:36.981832 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready Feb 8 23:30:36.981911 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Feb 8 23:30:36.982031 systemd-networkd[976]: cilium_host: Gained carrier Feb 8 23:30:37.033988 systemd-networkd[976]: cilium_net: Gained IPv6LL Feb 8 23:30:37.101987 systemd-networkd[976]: cilium_vxlan: Link UP Feb 8 23:30:37.101999 systemd-networkd[976]: cilium_vxlan: Gained carrier Feb 8 23:30:37.282672 systemd-networkd[976]: cilium_host: Gained IPv6LL Feb 8 23:30:38.490649 systemd-networkd[976]: cilium_vxlan: Gained IPv6LL Feb 8 23:30:38.659274 kernel: NET: Registered PF_ALG protocol family Feb 8 23:30:39.619032 systemd-networkd[976]: lxc_health: Link UP Feb 8 23:30:39.631249 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Feb 8 23:30:39.631367 systemd-networkd[976]: lxc_health: Gained carrier Feb 8 23:30:39.717595 kubelet[1908]: I0208 23:30:39.717548 1908 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-psj8j" podStartSLOduration=11.878226114 podCreationTimestamp="2024-02-08 23:30:16 +0000 UTC" firstStartedPulling="2024-02-08 23:30:17.81190097 +0000 UTC m=+13.681735065" lastFinishedPulling="2024-02-08 
23:30:29.6511489 +0000 UTC m=+25.520983065" observedRunningTime="2024-02-08 23:30:34.773476872 +0000 UTC m=+30.643310977" watchObservedRunningTime="2024-02-08 23:30:39.717474114 +0000 UTC m=+35.587308209" Feb 8 23:30:39.822392 systemd-networkd[976]: lxcfb9fd5e2ce26: Link UP Feb 8 23:30:39.831269 kernel: eth0: renamed from tmpf0798 Feb 8 23:30:39.836503 systemd-networkd[976]: lxcfb9fd5e2ce26: Gained carrier Feb 8 23:30:39.839925 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcfb9fd5e2ce26: link becomes ready Feb 8 23:30:40.046184 systemd-networkd[976]: lxc1cda6e63e63c: Link UP Feb 8 23:30:40.052275 kernel: eth0: renamed from tmp419af Feb 8 23:30:40.062713 systemd-networkd[976]: lxc1cda6e63e63c: Gained carrier Feb 8 23:30:40.063333 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc1cda6e63e63c: link becomes ready Feb 8 23:30:40.678981 kubelet[1908]: I0208 23:30:40.678949 1908 prober_manager.go:287] "Failed to trigger a manual run" probe="Readiness" Feb 8 23:30:41.242473 systemd-networkd[976]: lxc_health: Gained IPv6LL Feb 8 23:30:41.242823 systemd-networkd[976]: lxcfb9fd5e2ce26: Gained IPv6LL Feb 8 23:30:42.039577 systemd-networkd[976]: lxc1cda6e63e63c: Gained IPv6LL Feb 8 23:30:44.622654 env[1068]: time="2024-02-08T23:30:44.622531468Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 8 23:30:44.622654 env[1068]: time="2024-02-08T23:30:44.622587584Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 8 23:30:44.622654 env[1068]: time="2024-02-08T23:30:44.622601268Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 8 23:30:44.623380 env[1068]: time="2024-02-08T23:30:44.623327535Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/419af383f3160f78a3c299576d40d6eb3433abf7a799968cb08d652c839365e7 pid=3071 runtime=io.containerd.runc.v2 Feb 8 23:30:44.653968 systemd[1]: run-containerd-runc-k8s.io-419af383f3160f78a3c299576d40d6eb3433abf7a799968cb08d652c839365e7-runc.C3vqC7.mount: Deactivated successfully. Feb 8 23:30:44.662520 systemd[1]: Started cri-containerd-419af383f3160f78a3c299576d40d6eb3433abf7a799968cb08d652c839365e7.scope. Feb 8 23:30:44.716207 env[1068]: time="2024-02-08T23:30:44.716089092Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 8 23:30:44.716777 env[1068]: time="2024-02-08T23:30:44.716138607Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 8 23:30:44.716777 env[1068]: time="2024-02-08T23:30:44.716171133Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 8 23:30:44.717374 env[1068]: time="2024-02-08T23:30:44.717247932Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f0798ad16aef9b6a8bf425f53f00075cc28882f93e7b6f999d8086fe9642544b pid=3105 runtime=io.containerd.runc.v2 Feb 8 23:30:44.765587 systemd[1]: Started cri-containerd-f0798ad16aef9b6a8bf425f53f00075cc28882f93e7b6f999d8086fe9642544b.scope. 
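The cilium_net/cilium_host/cilium_vxlan devices and the per-pod lxc* links above are created by Cilium's datapath; systemd-networkd only reports their carrier transitions. A sketch of listing the same links and their operational state with the vishvananda/netlink package (an assumed tooling choice; it must run on the node itself):

```go
package main

import (
	"fmt"
	"log"
	"strings"

	"github.com/vishvananda/netlink"
)

func main() {
	links, err := netlink.LinkList()
	if err != nil {
		log.Fatal(err)
	}
	for _, l := range links {
		attrs := l.Attrs()
		// Cilium's datapath shows up as cilium_* devices plus one lxc* veth
		// per pod, e.g. lxcfb9fd5e2ce26 and lxc1cda6e63e63c in the log.
		if strings.HasPrefix(attrs.Name, "cilium_") || strings.HasPrefix(attrs.Name, "lxc") {
			fmt.Printf("%-20s type=%-8s state=%s\n", attrs.Name, l.Type(), attrs.OperState)
		}
	}
}
```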
Feb 8 23:30:44.774903 env[1068]: time="2024-02-08T23:30:44.774857790Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5d78c9869d-9d4xh,Uid:52ca02af-6bad-4417-8fdc-96d45fcee9b4,Namespace:kube-system,Attempt:0,} returns sandbox id \"419af383f3160f78a3c299576d40d6eb3433abf7a799968cb08d652c839365e7\"" Feb 8 23:30:44.782950 env[1068]: time="2024-02-08T23:30:44.782887440Z" level=info msg="CreateContainer within sandbox \"419af383f3160f78a3c299576d40d6eb3433abf7a799968cb08d652c839365e7\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 8 23:30:44.850449 env[1068]: time="2024-02-08T23:30:44.850385540Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5d78c9869d-mbd26,Uid:f1e96ad9-9773-43ab-93c0-b72edce77044,Namespace:kube-system,Attempt:0,} returns sandbox id \"f0798ad16aef9b6a8bf425f53f00075cc28882f93e7b6f999d8086fe9642544b\"" Feb 8 23:30:44.856030 env[1068]: time="2024-02-08T23:30:44.855956667Z" level=info msg="CreateContainer within sandbox \"f0798ad16aef9b6a8bf425f53f00075cc28882f93e7b6f999d8086fe9642544b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 8 23:30:44.871400 env[1068]: time="2024-02-08T23:30:44.871358027Z" level=info msg="CreateContainer within sandbox \"419af383f3160f78a3c299576d40d6eb3433abf7a799968cb08d652c839365e7\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"12cc6081554d711eb22be1f4ba3d95934a7110383964f642a4d5e56491c16350\"" Feb 8 23:30:44.872332 env[1068]: time="2024-02-08T23:30:44.872309332Z" level=info msg="StartContainer for \"12cc6081554d711eb22be1f4ba3d95934a7110383964f642a4d5e56491c16350\"" Feb 8 23:30:44.888300 env[1068]: time="2024-02-08T23:30:44.888132496Z" level=info msg="CreateContainer within sandbox \"f0798ad16aef9b6a8bf425f53f00075cc28882f93e7b6f999d8086fe9642544b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"661529919afe746449138e182e589b48144ae65da4a5e7ed9557457ddf266096\"" Feb 8 23:30:44.890529 env[1068]: time="2024-02-08T23:30:44.890493240Z" level=info msg="StartContainer for \"661529919afe746449138e182e589b48144ae65da4a5e7ed9557457ddf266096\"" Feb 8 23:30:44.919061 systemd[1]: Started cri-containerd-12cc6081554d711eb22be1f4ba3d95934a7110383964f642a4d5e56491c16350.scope. Feb 8 23:30:44.941487 systemd[1]: Started cri-containerd-661529919afe746449138e182e589b48144ae65da4a5e7ed9557457ddf266096.scope. 
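The RunPodSandbox/CreateContainer/StartContainer messages above are containerd's CRI plugin servicing the kubelet. A hedged sketch of querying the same CRI endpoint directly over gRPC (socket path taken from this node; that the build serves the v1 CRI API is an assumption):

```go
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// The kubelet talks CRI to containerd over this unix socket.
	conn, err := grpc.DialContext(ctx, "unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	rt := runtimeapi.NewRuntimeServiceClient(conn)

	// List pod sandboxes — the objects RunPodSandbox creates in the log above.
	pods, err := rt.ListPodSandbox(ctx, &runtimeapi.ListPodSandboxRequest{})
	if err != nil {
		log.Fatal(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s/%s sandbox=%s state=%s\n",
			p.Metadata.Namespace, p.Metadata.Name, p.Id[:12], p.State)
	}
}
```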
Feb 8 23:30:44.998492 env[1068]: time="2024-02-08T23:30:44.998427754Z" level=info msg="StartContainer for \"12cc6081554d711eb22be1f4ba3d95934a7110383964f642a4d5e56491c16350\" returns successfully" Feb 8 23:30:45.014480 env[1068]: time="2024-02-08T23:30:45.014426015Z" level=info msg="StartContainer for \"661529919afe746449138e182e589b48144ae65da4a5e7ed9557457ddf266096\" returns successfully" Feb 8 23:30:45.693900 kubelet[1908]: I0208 23:30:45.693779 1908 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5d78c9869d-mbd26" podStartSLOduration=29.693665868 podCreationTimestamp="2024-02-08 23:30:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-08 23:30:45.691199033 +0000 UTC m=+41.561033178" watchObservedRunningTime="2024-02-08 23:30:45.693665868 +0000 UTC m=+41.563500013" Feb 8 23:30:45.719700 kubelet[1908]: I0208 23:30:45.719625 1908 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5d78c9869d-9d4xh" podStartSLOduration=29.719579803 podCreationTimestamp="2024-02-08 23:30:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-08 23:30:45.71944998 +0000 UTC m=+41.589284145" watchObservedRunningTime="2024-02-08 23:30:45.719579803 +0000 UTC m=+41.589413898" Feb 8 23:30:55.863131 systemd[1]: Started sshd@5-172.24.4.155:22-172.24.4.1:59230.service. Feb 8 23:30:57.074963 sshd[3228]: Accepted publickey for core from 172.24.4.1 port 59230 ssh2: RSA SHA256:hSCdy28aHh0WFAXHFi8tWlQhiCOOiQrn91fhtzGNenI Feb 8 23:30:57.080489 sshd[3228]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:30:57.092348 systemd-logind[1059]: New session 6 of user core. Feb 8 23:30:57.096360 systemd[1]: Started session-6.scope. Feb 8 23:30:57.928208 sshd[3228]: pam_unix(sshd:session): session closed for user core Feb 8 23:30:57.934786 systemd[1]: sshd@5-172.24.4.155:22-172.24.4.1:59230.service: Deactivated successfully. Feb 8 23:30:57.936995 systemd[1]: session-6.scope: Deactivated successfully. Feb 8 23:30:57.939601 systemd-logind[1059]: Session 6 logged out. Waiting for processes to exit. Feb 8 23:30:57.943210 systemd-logind[1059]: Removed session 6. Feb 8 23:31:02.933306 systemd[1]: Started sshd@6-172.24.4.155:22-172.24.4.1:59246.service. Feb 8 23:31:04.150253 sshd[3240]: Accepted publickey for core from 172.24.4.1 port 59246 ssh2: RSA SHA256:hSCdy28aHh0WFAXHFi8tWlQhiCOOiQrn91fhtzGNenI Feb 8 23:31:04.153825 sshd[3240]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:31:04.164470 systemd-logind[1059]: New session 7 of user core. Feb 8 23:31:04.165483 systemd[1]: Started session-7.scope. Feb 8 23:31:04.922516 sshd[3240]: pam_unix(sshd:session): session closed for user core Feb 8 23:31:04.927665 systemd[1]: sshd@6-172.24.4.155:22-172.24.4.1:59246.service: Deactivated successfully. Feb 8 23:31:04.929479 systemd[1]: session-7.scope: Deactivated successfully. Feb 8 23:31:04.930896 systemd-logind[1059]: Session 7 logged out. Waiting for processes to exit. Feb 8 23:31:04.933306 systemd-logind[1059]: Removed session 7. Feb 8 23:31:09.934653 systemd[1]: Started sshd@7-172.24.4.155:22-172.24.4.1:44006.service. 
Feb 8 23:31:11.100819 sshd[3255]: Accepted publickey for core from 172.24.4.1 port 44006 ssh2: RSA SHA256:hSCdy28aHh0WFAXHFi8tWlQhiCOOiQrn91fhtzGNenI Feb 8 23:31:11.103487 sshd[3255]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:31:11.120382 systemd-logind[1059]: New session 8 of user core. Feb 8 23:31:11.121877 systemd[1]: Started session-8.scope. Feb 8 23:31:12.027980 sshd[3255]: pam_unix(sshd:session): session closed for user core Feb 8 23:31:12.034029 systemd[1]: sshd@7-172.24.4.155:22-172.24.4.1:44006.service: Deactivated successfully. Feb 8 23:31:12.035881 systemd[1]: session-8.scope: Deactivated successfully. Feb 8 23:31:12.037277 systemd-logind[1059]: Session 8 logged out. Waiting for processes to exit. Feb 8 23:31:12.040053 systemd-logind[1059]: Removed session 8. Feb 8 23:31:17.038134 systemd[1]: Started sshd@8-172.24.4.155:22-172.24.4.1:60346.service. Feb 8 23:31:18.420157 sshd[3269]: Accepted publickey for core from 172.24.4.1 port 60346 ssh2: RSA SHA256:hSCdy28aHh0WFAXHFi8tWlQhiCOOiQrn91fhtzGNenI Feb 8 23:31:18.423809 sshd[3269]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:31:18.440802 systemd-logind[1059]: New session 9 of user core. Feb 8 23:31:18.440854 systemd[1]: Started session-9.scope. Feb 8 23:31:19.218259 sshd[3269]: pam_unix(sshd:session): session closed for user core Feb 8 23:31:19.229742 systemd[1]: Started sshd@9-172.24.4.155:22-172.24.4.1:60360.service. Feb 8 23:31:19.231117 systemd[1]: sshd@8-172.24.4.155:22-172.24.4.1:60346.service: Deactivated successfully. Feb 8 23:31:19.233342 systemd[1]: session-9.scope: Deactivated successfully. Feb 8 23:31:19.236282 systemd-logind[1059]: Session 9 logged out. Waiting for processes to exit. Feb 8 23:31:19.239131 systemd-logind[1059]: Removed session 9. Feb 8 23:31:20.687300 sshd[3283]: Accepted publickey for core from 172.24.4.1 port 60360 ssh2: RSA SHA256:hSCdy28aHh0WFAXHFi8tWlQhiCOOiQrn91fhtzGNenI Feb 8 23:31:20.693350 sshd[3283]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:31:20.706418 systemd-logind[1059]: New session 10 of user core. Feb 8 23:31:20.710545 systemd[1]: Started session-10.scope. Feb 8 23:31:22.579596 sshd[3283]: pam_unix(sshd:session): session closed for user core Feb 8 23:31:22.585488 systemd[1]: Started sshd@10-172.24.4.155:22-172.24.4.1:60376.service. Feb 8 23:31:22.598696 systemd[1]: sshd@9-172.24.4.155:22-172.24.4.1:60360.service: Deactivated successfully. Feb 8 23:31:22.600545 systemd[1]: session-10.scope: Deactivated successfully. Feb 8 23:31:22.605506 systemd-logind[1059]: Session 10 logged out. Waiting for processes to exit. Feb 8 23:31:22.607718 systemd-logind[1059]: Removed session 10. Feb 8 23:31:23.875782 sshd[3295]: Accepted publickey for core from 172.24.4.1 port 60376 ssh2: RSA SHA256:hSCdy28aHh0WFAXHFi8tWlQhiCOOiQrn91fhtzGNenI Feb 8 23:31:23.878452 sshd[3295]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:31:23.888347 systemd-logind[1059]: New session 11 of user core. Feb 8 23:31:23.890195 systemd[1]: Started session-11.scope. Feb 8 23:31:24.543859 sshd[3295]: pam_unix(sshd:session): session closed for user core Feb 8 23:31:24.549031 systemd[1]: sshd@10-172.24.4.155:22-172.24.4.1:60376.service: Deactivated successfully. Feb 8 23:31:24.550763 systemd[1]: session-11.scope: Deactivated successfully. Feb 8 23:31:24.552126 systemd-logind[1059]: Session 11 logged out. Waiting for processes to exit. 
Feb 8 23:31:24.553960 systemd-logind[1059]: Removed session 11. Feb 8 23:31:29.552965 systemd[1]: Started sshd@11-172.24.4.155:22-172.24.4.1:60004.service. Feb 8 23:31:30.816571 sshd[3308]: Accepted publickey for core from 172.24.4.1 port 60004 ssh2: RSA SHA256:hSCdy28aHh0WFAXHFi8tWlQhiCOOiQrn91fhtzGNenI Feb 8 23:31:30.819943 sshd[3308]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:31:30.835276 systemd-logind[1059]: New session 12 of user core. Feb 8 23:31:30.837191 systemd[1]: Started session-12.scope. Feb 8 23:31:31.729778 sshd[3308]: pam_unix(sshd:session): session closed for user core Feb 8 23:31:31.735556 systemd[1]: sshd@11-172.24.4.155:22-172.24.4.1:60004.service: Deactivated successfully. Feb 8 23:31:31.737333 systemd[1]: session-12.scope: Deactivated successfully. Feb 8 23:31:31.738875 systemd-logind[1059]: Session 12 logged out. Waiting for processes to exit. Feb 8 23:31:31.740938 systemd-logind[1059]: Removed session 12. Feb 8 23:31:36.740574 systemd[1]: Started sshd@12-172.24.4.155:22-172.24.4.1:52016.service. Feb 8 23:31:37.861668 sshd[3321]: Accepted publickey for core from 172.24.4.1 port 52016 ssh2: RSA SHA256:hSCdy28aHh0WFAXHFi8tWlQhiCOOiQrn91fhtzGNenI Feb 8 23:31:37.864601 sshd[3321]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:31:37.878028 systemd-logind[1059]: New session 13 of user core. Feb 8 23:31:37.879102 systemd[1]: Started session-13.scope. Feb 8 23:31:38.640944 sshd[3321]: pam_unix(sshd:session): session closed for user core Feb 8 23:31:38.645515 systemd[1]: sshd@12-172.24.4.155:22-172.24.4.1:52016.service: Deactivated successfully. Feb 8 23:31:38.647272 systemd[1]: session-13.scope: Deactivated successfully. Feb 8 23:31:38.648766 systemd-logind[1059]: Session 13 logged out. Waiting for processes to exit. Feb 8 23:31:38.651961 systemd-logind[1059]: Removed session 13. Feb 8 23:31:43.645716 systemd[1]: Started sshd@13-172.24.4.155:22-172.24.4.1:52018.service. Feb 8 23:31:44.968093 sshd[3333]: Accepted publickey for core from 172.24.4.1 port 52018 ssh2: RSA SHA256:hSCdy28aHh0WFAXHFi8tWlQhiCOOiQrn91fhtzGNenI Feb 8 23:31:44.973208 sshd[3333]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:31:44.986816 systemd[1]: Started session-14.scope. Feb 8 23:31:44.987736 systemd-logind[1059]: New session 14 of user core. Feb 8 23:31:45.714383 sshd[3333]: pam_unix(sshd:session): session closed for user core Feb 8 23:31:45.722900 systemd[1]: Started sshd@14-172.24.4.155:22-172.24.4.1:53734.service. Feb 8 23:31:45.724183 systemd[1]: sshd@13-172.24.4.155:22-172.24.4.1:52018.service: Deactivated successfully. Feb 8 23:31:45.726075 systemd[1]: session-14.scope: Deactivated successfully. Feb 8 23:31:45.733901 systemd-logind[1059]: Session 14 logged out. Waiting for processes to exit. Feb 8 23:31:45.741732 systemd-logind[1059]: Removed session 14. Feb 8 23:31:46.925515 sshd[3346]: Accepted publickey for core from 172.24.4.1 port 53734 ssh2: RSA SHA256:hSCdy28aHh0WFAXHFi8tWlQhiCOOiQrn91fhtzGNenI Feb 8 23:31:46.929200 sshd[3346]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:31:46.940301 systemd-logind[1059]: New session 15 of user core. Feb 8 23:31:46.941257 systemd[1]: Started session-15.scope. Feb 8 23:31:48.354955 sshd[3346]: pam_unix(sshd:session): session closed for user core Feb 8 23:31:48.363371 systemd[1]: sshd@14-172.24.4.155:22-172.24.4.1:53734.service: Deactivated successfully. 
Feb 8 23:31:48.365552 systemd[1]: session-15.scope: Deactivated successfully. Feb 8 23:31:48.368309 systemd-logind[1059]: Session 15 logged out. Waiting for processes to exit. Feb 8 23:31:48.372888 systemd[1]: Started sshd@15-172.24.4.155:22-172.24.4.1:53746.service. Feb 8 23:31:48.378735 systemd-logind[1059]: Removed session 15. Feb 8 23:31:49.752865 sshd[3356]: Accepted publickey for core from 172.24.4.1 port 53746 ssh2: RSA SHA256:hSCdy28aHh0WFAXHFi8tWlQhiCOOiQrn91fhtzGNenI Feb 8 23:31:49.755117 sshd[3356]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:31:49.767999 systemd-logind[1059]: New session 16 of user core. Feb 8 23:31:49.769526 systemd[1]: Started session-16.scope. Feb 8 23:31:51.945934 sshd[3356]: pam_unix(sshd:session): session closed for user core Feb 8 23:31:51.962748 systemd[1]: Started sshd@16-172.24.4.155:22-172.24.4.1:53752.service. Feb 8 23:31:51.968345 systemd[1]: sshd@15-172.24.4.155:22-172.24.4.1:53746.service: Deactivated successfully. Feb 8 23:31:51.970077 systemd[1]: session-16.scope: Deactivated successfully. Feb 8 23:31:51.972097 systemd-logind[1059]: Session 16 logged out. Waiting for processes to exit. Feb 8 23:31:51.975329 systemd-logind[1059]: Removed session 16. Feb 8 23:31:53.039993 sshd[3374]: Accepted publickey for core from 172.24.4.1 port 53752 ssh2: RSA SHA256:hSCdy28aHh0WFAXHFi8tWlQhiCOOiQrn91fhtzGNenI Feb 8 23:31:53.042382 sshd[3374]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:31:53.052273 systemd-logind[1059]: New session 17 of user core. Feb 8 23:31:53.053572 systemd[1]: Started session-17.scope. Feb 8 23:31:54.763161 sshd[3374]: pam_unix(sshd:session): session closed for user core Feb 8 23:31:54.772856 systemd[1]: Started sshd@17-172.24.4.155:22-172.24.4.1:57956.service. Feb 8 23:31:54.776154 systemd[1]: sshd@16-172.24.4.155:22-172.24.4.1:53752.service: Deactivated successfully. Feb 8 23:31:54.778063 systemd[1]: session-17.scope: Deactivated successfully. Feb 8 23:31:54.782744 systemd-logind[1059]: Session 17 logged out. Waiting for processes to exit. Feb 8 23:31:54.786009 systemd-logind[1059]: Removed session 17. Feb 8 23:31:56.521377 sshd[3383]: Accepted publickey for core from 172.24.4.1 port 57956 ssh2: RSA SHA256:hSCdy28aHh0WFAXHFi8tWlQhiCOOiQrn91fhtzGNenI Feb 8 23:31:56.523997 sshd[3383]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:31:56.544496 systemd-logind[1059]: New session 18 of user core. Feb 8 23:31:56.545564 systemd[1]: Started session-18.scope. Feb 8 23:31:57.399093 sshd[3383]: pam_unix(sshd:session): session closed for user core Feb 8 23:31:57.405768 systemd[1]: sshd@17-172.24.4.155:22-172.24.4.1:57956.service: Deactivated successfully. Feb 8 23:31:57.407514 systemd[1]: session-18.scope: Deactivated successfully. Feb 8 23:31:57.409036 systemd-logind[1059]: Session 18 logged out. Waiting for processes to exit. Feb 8 23:31:57.411528 systemd-logind[1059]: Removed session 18. Feb 8 23:32:02.412925 systemd[1]: Started sshd@18-172.24.4.155:22-172.24.4.1:57964.service. Feb 8 23:32:03.728308 sshd[3399]: Accepted publickey for core from 172.24.4.1 port 57964 ssh2: RSA SHA256:hSCdy28aHh0WFAXHFi8tWlQhiCOOiQrn91fhtzGNenI Feb 8 23:32:03.732366 sshd[3399]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:32:03.743347 systemd-logind[1059]: New session 19 of user core. Feb 8 23:32:03.745645 systemd[1]: Started session-19.scope. 
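Each "Accepted publickey ... session opened ... session closed" triple above is the server-side trace of one key-authenticated SSH session against 172.24.4.155:22. The client side, sketched with golang.org/x/crypto/ssh (key path hypothetical; host-key pinning elided, which you should not do in real use):

```go
package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Hypothetical key path; the server logs this key's SHA256 fingerprint
	// ("Accepted publickey ... RSA SHA256:...") when it is accepted.
	keyBytes, err := os.ReadFile("/home/core/.ssh/id_rsa")
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		log.Fatal(err)
	}

	cfg := &ssh.ClientConfig{
		User: "core",
		Auth: []ssh.AuthMethod{ssh.PublicKeys(signer)},
		// Sketch only: pin the host key instead of ignoring it in practice.
		HostKeyCallback: ssh.InsecureIgnoreHostKey(),
	}

	client, err := ssh.Dial("tcp", "172.24.4.155:22", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close() // produces "session closed for user core" on the server

	session, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer session.Close()

	out, err := session.Output("hostname")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%s", out)
}
```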
Feb 8 23:32:04.439652 sshd[3399]: pam_unix(sshd:session): session closed for user core Feb 8 23:32:04.443863 systemd[1]: sshd@18-172.24.4.155:22-172.24.4.1:57964.service: Deactivated successfully. Feb 8 23:32:04.445520 systemd[1]: session-19.scope: Deactivated successfully. Feb 8 23:32:04.446453 systemd-logind[1059]: Session 19 logged out. Waiting for processes to exit. Feb 8 23:32:04.449485 systemd-logind[1059]: Removed session 19. Feb 8 23:32:09.445583 systemd[1]: Started sshd@19-172.24.4.155:22-172.24.4.1:34900.service. Feb 8 23:32:10.966935 sshd[3413]: Accepted publickey for core from 172.24.4.1 port 34900 ssh2: RSA SHA256:hSCdy28aHh0WFAXHFi8tWlQhiCOOiQrn91fhtzGNenI Feb 8 23:32:10.969793 sshd[3413]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:32:10.982063 systemd-logind[1059]: New session 20 of user core. Feb 8 23:32:10.983175 systemd[1]: Started session-20.scope. Feb 8 23:32:11.707827 sshd[3413]: pam_unix(sshd:session): session closed for user core Feb 8 23:32:11.714509 systemd[1]: sshd@19-172.24.4.155:22-172.24.4.1:34900.service: Deactivated successfully. Feb 8 23:32:11.716433 systemd[1]: session-20.scope: Deactivated successfully. Feb 8 23:32:11.717911 systemd-logind[1059]: Session 20 logged out. Waiting for processes to exit. Feb 8 23:32:11.721323 systemd-logind[1059]: Removed session 20. Feb 8 23:32:16.721455 systemd[1]: Started sshd@20-172.24.4.155:22-172.24.4.1:41238.service. Feb 8 23:32:18.230770 sshd[3425]: Accepted publickey for core from 172.24.4.1 port 41238 ssh2: RSA SHA256:hSCdy28aHh0WFAXHFi8tWlQhiCOOiQrn91fhtzGNenI Feb 8 23:32:18.233618 sshd[3425]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:32:18.244758 systemd-logind[1059]: New session 21 of user core. Feb 8 23:32:18.245704 systemd[1]: Started session-21.scope. Feb 8 23:32:18.970635 sshd[3425]: pam_unix(sshd:session): session closed for user core Feb 8 23:32:18.978810 systemd[1]: Started sshd@21-172.24.4.155:22-172.24.4.1:41248.service. Feb 8 23:32:18.980142 systemd[1]: sshd@20-172.24.4.155:22-172.24.4.1:41238.service: Deactivated successfully. Feb 8 23:32:18.983483 systemd[1]: session-21.scope: Deactivated successfully. Feb 8 23:32:18.986407 systemd-logind[1059]: Session 21 logged out. Waiting for processes to exit. Feb 8 23:32:18.989952 systemd-logind[1059]: Removed session 21. Feb 8 23:32:20.367254 sshd[3436]: Accepted publickey for core from 172.24.4.1 port 41248 ssh2: RSA SHA256:hSCdy28aHh0WFAXHFi8tWlQhiCOOiQrn91fhtzGNenI Feb 8 23:32:20.370386 sshd[3436]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:32:20.384417 systemd-logind[1059]: New session 22 of user core. Feb 8 23:32:20.384505 systemd[1]: Started session-22.scope. Feb 8 23:32:22.908355 systemd[1]: run-containerd-runc-k8s.io-e67e410872b1a23a14a736c4a58e071c285bcca47322990c35327cf26fa42969-runc.9bloZh.mount: Deactivated successfully. Feb 8 23:32:22.935748 env[1068]: time="2024-02-08T23:32:22.935688057Z" level=info msg="StopContainer for \"fa653a0ef9603b5c4f5a2893280ca5307dcf351d2a220a4efdb1811bf47f1855\" with timeout 30 (s)" Feb 8 23:32:22.936784 env[1068]: time="2024-02-08T23:32:22.936433349Z" level=info msg="Stop container \"fa653a0ef9603b5c4f5a2893280ca5307dcf351d2a220a4efdb1811bf47f1855\" with signal terminated" Feb 8 23:32:22.953547 systemd[1]: cri-containerd-fa653a0ef9603b5c4f5a2893280ca5307dcf351d2a220a4efdb1811bf47f1855.scope: Deactivated successfully. 
Feb 8 23:32:22.962791 env[1068]: time="2024-02-08T23:32:22.962661468Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 8 23:32:22.971839 env[1068]: time="2024-02-08T23:32:22.971800032Z" level=info msg="StopContainer for \"e67e410872b1a23a14a736c4a58e071c285bcca47322990c35327cf26fa42969\" with timeout 1 (s)" Feb 8 23:32:22.973256 env[1068]: time="2024-02-08T23:32:22.973196714Z" level=info msg="Stop container \"e67e410872b1a23a14a736c4a58e071c285bcca47322990c35327cf26fa42969\" with signal terminated" Feb 8 23:32:22.978667 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fa653a0ef9603b5c4f5a2893280ca5307dcf351d2a220a4efdb1811bf47f1855-rootfs.mount: Deactivated successfully. Feb 8 23:32:22.993432 systemd-networkd[976]: lxc_health: Link DOWN Feb 8 23:32:22.993924 systemd-networkd[976]: lxc_health: Lost carrier Feb 8 23:32:22.996773 env[1068]: time="2024-02-08T23:32:22.996713570Z" level=info msg="shim disconnected" id=fa653a0ef9603b5c4f5a2893280ca5307dcf351d2a220a4efdb1811bf47f1855 Feb 8 23:32:22.996989 env[1068]: time="2024-02-08T23:32:22.996968778Z" level=warning msg="cleaning up after shim disconnected" id=fa653a0ef9603b5c4f5a2893280ca5307dcf351d2a220a4efdb1811bf47f1855 namespace=k8s.io Feb 8 23:32:22.997061 env[1068]: time="2024-02-08T23:32:22.997047401Z" level=info msg="cleaning up dead shim" Feb 8 23:32:23.012246 env[1068]: time="2024-02-08T23:32:23.010648254Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:32:23Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3490 runtime=io.containerd.runc.v2\n" Feb 8 23:32:23.014739 env[1068]: time="2024-02-08T23:32:23.014708868Z" level=info msg="StopContainer for \"fa653a0ef9603b5c4f5a2893280ca5307dcf351d2a220a4efdb1811bf47f1855\" returns successfully" Feb 8 23:32:23.026643 env[1068]: time="2024-02-08T23:32:23.026591469Z" level=info msg="StopPodSandbox for \"de50f18b8bdbe4f38f8136cd2cd161351b74eb6a1943aaaabda88633c8f3f7fa\"" Feb 8 23:32:23.026907 env[1068]: time="2024-02-08T23:32:23.026885542Z" level=info msg="Container to stop \"fa653a0ef9603b5c4f5a2893280ca5307dcf351d2a220a4efdb1811bf47f1855\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 8 23:32:23.029181 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-de50f18b8bdbe4f38f8136cd2cd161351b74eb6a1943aaaabda88633c8f3f7fa-shm.mount: Deactivated successfully. Feb 8 23:32:23.035480 systemd[1]: cri-containerd-e67e410872b1a23a14a736c4a58e071c285bcca47322990c35327cf26fa42969.scope: Deactivated successfully. Feb 8 23:32:23.035782 systemd[1]: cri-containerd-e67e410872b1a23a14a736c4a58e071c285bcca47322990c35327cf26fa42969.scope: Consumed 9.303s CPU time. Feb 8 23:32:23.045518 systemd[1]: cri-containerd-de50f18b8bdbe4f38f8136cd2cd161351b74eb6a1943aaaabda88633c8f3f7fa.scope: Deactivated successfully. 
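"StopContainer ... with timeout 30 (s)" followed by "Stop container ... with signal terminated" is the usual SIGTERM-then-SIGKILL escalation. A sketch of the same semantics against the containerd task API, using the cilium-operator container ID from the log (the helper is illustrative, not kubelet's or the CRI plugin's actual code):

```go
package main

import (
	"context"
	"log"
	"syscall"
	"time"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

// stopTask mirrors CRI StopContainer semantics: SIGTERM, wait up to timeout,
// then escalate to SIGKILL if the task has not exited.
func stopTask(ctx context.Context, task containerd.Task, timeout time.Duration) error {
	exitCh, err := task.Wait(ctx)
	if err != nil {
		return err
	}
	if err := task.Kill(ctx, syscall.SIGTERM); err != nil {
		return err
	}
	select {
	case status := <-exitCh:
		return status.Error()
	case <-time.After(timeout):
		if err := task.Kill(ctx, syscall.SIGKILL); err != nil {
			return err
		}
		status := <-exitCh
		return status.Error()
	}
}

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
	// Container ID copied from the StopContainer entry above.
	c, err := client.LoadContainer(ctx, "fa653a0ef9603b5c4f5a2893280ca5307dcf351d2a220a4efdb1811bf47f1855")
	if err != nil {
		log.Fatal(err)
	}
	task, err := c.Task(ctx, nil)
	if err != nil {
		log.Fatal(err)
	}
	if err := stopTask(ctx, task, 30*time.Second); err != nil {
		log.Fatal(err)
	}
}
```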
Feb 8 23:32:23.080587 env[1068]: time="2024-02-08T23:32:23.080498387Z" level=info msg="shim disconnected" id=e67e410872b1a23a14a736c4a58e071c285bcca47322990c35327cf26fa42969 Feb 8 23:32:23.080861 env[1068]: time="2024-02-08T23:32:23.080840141Z" level=warning msg="cleaning up after shim disconnected" id=e67e410872b1a23a14a736c4a58e071c285bcca47322990c35327cf26fa42969 namespace=k8s.io Feb 8 23:32:23.080947 env[1068]: time="2024-02-08T23:32:23.080930448Z" level=info msg="cleaning up dead shim" Feb 8 23:32:23.085462 env[1068]: time="2024-02-08T23:32:23.085393926Z" level=info msg="shim disconnected" id=de50f18b8bdbe4f38f8136cd2cd161351b74eb6a1943aaaabda88633c8f3f7fa Feb 8 23:32:23.085570 env[1068]: time="2024-02-08T23:32:23.085462119Z" level=warning msg="cleaning up after shim disconnected" id=de50f18b8bdbe4f38f8136cd2cd161351b74eb6a1943aaaabda88633c8f3f7fa namespace=k8s.io Feb 8 23:32:23.085570 env[1068]: time="2024-02-08T23:32:23.085475305Z" level=info msg="cleaning up dead shim" Feb 8 23:32:23.096338 env[1068]: time="2024-02-08T23:32:23.096278103Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:32:23Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3538 runtime=io.containerd.runc.v2\n" Feb 8 23:32:23.098274 env[1068]: time="2024-02-08T23:32:23.098189044Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:32:23Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3546 runtime=io.containerd.runc.v2\n" Feb 8 23:32:23.098790 env[1068]: time="2024-02-08T23:32:23.098757490Z" level=info msg="TearDown network for sandbox \"de50f18b8bdbe4f38f8136cd2cd161351b74eb6a1943aaaabda88633c8f3f7fa\" successfully" Feb 8 23:32:23.098790 env[1068]: time="2024-02-08T23:32:23.098788660Z" level=info msg="StopPodSandbox for \"de50f18b8bdbe4f38f8136cd2cd161351b74eb6a1943aaaabda88633c8f3f7fa\" returns successfully" Feb 8 23:32:23.100506 env[1068]: time="2024-02-08T23:32:23.100470424Z" level=info msg="StopContainer for \"e67e410872b1a23a14a736c4a58e071c285bcca47322990c35327cf26fa42969\" returns successfully" Feb 8 23:32:23.101632 env[1068]: time="2024-02-08T23:32:23.101608690Z" level=info msg="StopPodSandbox for \"01e5782510358385ad4d7702b9d9469947dba0d161e1d7e7558f8eb7d3cede07\"" Feb 8 23:32:23.101793 env[1068]: time="2024-02-08T23:32:23.101753312Z" level=info msg="Container to stop \"170541a3e33dfb2e91db64ba4e5e33c9c05d2862701ec5586afd340300d7c4ff\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 8 23:32:23.101884 env[1068]: time="2024-02-08T23:32:23.101862714Z" level=info msg="Container to stop \"bfb9b7ac1b1a9d9c5b5a136c6dfb72588a2415c870050019c5454e451046fce7\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 8 23:32:23.101979 env[1068]: time="2024-02-08T23:32:23.101959944Z" level=info msg="Container to stop \"c90c6c8f39af9beba1f4da2b7b9f9a5e5bb1d9cf49c303b0f51566df3b1d4f46\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 8 23:32:23.102660 env[1068]: time="2024-02-08T23:32:23.102529673Z" level=info msg="Container to stop \"8f1a5f984dded3e07349eca8b92945dee4bd12d35876d369f26f82e25a73c178\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 8 23:32:23.102761 env[1068]: time="2024-02-08T23:32:23.102740413Z" level=info msg="Container to stop \"e67e410872b1a23a14a736c4a58e071c285bcca47322990c35327cf26fa42969\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 8 23:32:23.118842 systemd[1]: 
cri-containerd-01e5782510358385ad4d7702b9d9469947dba0d161e1d7e7558f8eb7d3cede07.scope: Deactivated successfully. Feb 8 23:32:23.150800 kubelet[1908]: I0208 23:32:23.150743 1908 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ndxq5\" (UniqueName: \"kubernetes.io/projected/bf45b0f7-7c3a-4e1e-b509-ef5ba4bb83a3-kube-api-access-ndxq5\") pod \"bf45b0f7-7c3a-4e1e-b509-ef5ba4bb83a3\" (UID: \"bf45b0f7-7c3a-4e1e-b509-ef5ba4bb83a3\") " Feb 8 23:32:23.151630 kubelet[1908]: I0208 23:32:23.150834 1908 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bf45b0f7-7c3a-4e1e-b509-ef5ba4bb83a3-cilium-config-path\") pod \"bf45b0f7-7c3a-4e1e-b509-ef5ba4bb83a3\" (UID: \"bf45b0f7-7c3a-4e1e-b509-ef5ba4bb83a3\") " Feb 8 23:32:23.163664 kubelet[1908]: W0208 23:32:23.158308 1908 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/bf45b0f7-7c3a-4e1e-b509-ef5ba4bb83a3/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Feb 8 23:32:23.178187 kubelet[1908]: I0208 23:32:23.174303 1908 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf45b0f7-7c3a-4e1e-b509-ef5ba4bb83a3-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "bf45b0f7-7c3a-4e1e-b509-ef5ba4bb83a3" (UID: "bf45b0f7-7c3a-4e1e-b509-ef5ba4bb83a3"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 8 23:32:23.182350 env[1068]: time="2024-02-08T23:32:23.182285449Z" level=info msg="shim disconnected" id=01e5782510358385ad4d7702b9d9469947dba0d161e1d7e7558f8eb7d3cede07 Feb 8 23:32:23.182542 env[1068]: time="2024-02-08T23:32:23.182520216Z" level=warning msg="cleaning up after shim disconnected" id=01e5782510358385ad4d7702b9d9469947dba0d161e1d7e7558f8eb7d3cede07 namespace=k8s.io Feb 8 23:32:23.182633 env[1068]: time="2024-02-08T23:32:23.182618126Z" level=info msg="cleaning up dead shim" Feb 8 23:32:23.186342 kubelet[1908]: I0208 23:32:23.186297 1908 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf45b0f7-7c3a-4e1e-b509-ef5ba4bb83a3-kube-api-access-ndxq5" (OuterVolumeSpecName: "kube-api-access-ndxq5") pod "bf45b0f7-7c3a-4e1e-b509-ef5ba4bb83a3" (UID: "bf45b0f7-7c3a-4e1e-b509-ef5ba4bb83a3"). InnerVolumeSpecName "kube-api-access-ndxq5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 8 23:32:23.196595 env[1068]: time="2024-02-08T23:32:23.196530298Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:32:23Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3585 runtime=io.containerd.runc.v2\n" Feb 8 23:32:23.196995 env[1068]: time="2024-02-08T23:32:23.196946158Z" level=info msg="TearDown network for sandbox \"01e5782510358385ad4d7702b9d9469947dba0d161e1d7e7558f8eb7d3cede07\" successfully" Feb 8 23:32:23.196995 env[1068]: time="2024-02-08T23:32:23.196979923Z" level=info msg="StopPodSandbox for \"01e5782510358385ad4d7702b9d9469947dba0d161e1d7e7558f8eb7d3cede07\" returns successfully" Feb 8 23:32:23.251641 kubelet[1908]: I0208 23:32:23.251581 1908 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/dd847ee7-2204-46a5-b620-dc3df38a981b-etc-cni-netd\") pod \"dd847ee7-2204-46a5-b620-dc3df38a981b\" (UID: \"dd847ee7-2204-46a5-b620-dc3df38a981b\") " Feb 8 23:32:23.251838 kubelet[1908]: I0208 23:32:23.251678 1908 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/dd847ee7-2204-46a5-b620-dc3df38a981b-cilium-config-path\") pod \"dd847ee7-2204-46a5-b620-dc3df38a981b\" (UID: \"dd847ee7-2204-46a5-b620-dc3df38a981b\") " Feb 8 23:32:23.251838 kubelet[1908]: I0208 23:32:23.251726 1908 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dd847ee7-2204-46a5-b620-dc3df38a981b-xtables-lock\") pod \"dd847ee7-2204-46a5-b620-dc3df38a981b\" (UID: \"dd847ee7-2204-46a5-b620-dc3df38a981b\") " Feb 8 23:32:23.251838 kubelet[1908]: I0208 23:32:23.251770 1908 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dd847ee7-2204-46a5-b620-dc3df38a981b-lib-modules\") pod \"dd847ee7-2204-46a5-b620-dc3df38a981b\" (UID: \"dd847ee7-2204-46a5-b620-dc3df38a981b\") " Feb 8 23:32:23.251838 kubelet[1908]: I0208 23:32:23.251814 1908 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/dd847ee7-2204-46a5-b620-dc3df38a981b-hostproc\") pod \"dd847ee7-2204-46a5-b620-dc3df38a981b\" (UID: \"dd847ee7-2204-46a5-b620-dc3df38a981b\") " Feb 8 23:32:23.251970 kubelet[1908]: I0208 23:32:23.251861 1908 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/dd847ee7-2204-46a5-b620-dc3df38a981b-cilium-cgroup\") pod \"dd847ee7-2204-46a5-b620-dc3df38a981b\" (UID: \"dd847ee7-2204-46a5-b620-dc3df38a981b\") " Feb 8 23:32:23.251970 kubelet[1908]: I0208 23:32:23.251909 1908 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/dd847ee7-2204-46a5-b620-dc3df38a981b-host-proc-sys-kernel\") pod \"dd847ee7-2204-46a5-b620-dc3df38a981b\" (UID: \"dd847ee7-2204-46a5-b620-dc3df38a981b\") " Feb 8 23:32:23.252029 kubelet[1908]: I0208 23:32:23.251955 1908 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/dd847ee7-2204-46a5-b620-dc3df38a981b-cni-path\") pod \"dd847ee7-2204-46a5-b620-dc3df38a981b\" (UID: \"dd847ee7-2204-46a5-b620-dc3df38a981b\") " Feb 8 23:32:23.252075 kubelet[1908]: I0208 23:32:23.252036 1908 
reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/dd847ee7-2204-46a5-b620-dc3df38a981b-hubble-tls\") pod \"dd847ee7-2204-46a5-b620-dc3df38a981b\" (UID: \"dd847ee7-2204-46a5-b620-dc3df38a981b\") " Feb 8 23:32:23.252128 kubelet[1908]: I0208 23:32:23.252089 1908 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/dd847ee7-2204-46a5-b620-dc3df38a981b-host-proc-sys-net\") pod \"dd847ee7-2204-46a5-b620-dc3df38a981b\" (UID: \"dd847ee7-2204-46a5-b620-dc3df38a981b\") " Feb 8 23:32:23.252185 kubelet[1908]: I0208 23:32:23.252149 1908 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lnmpb\" (UniqueName: \"kubernetes.io/projected/dd847ee7-2204-46a5-b620-dc3df38a981b-kube-api-access-lnmpb\") pod \"dd847ee7-2204-46a5-b620-dc3df38a981b\" (UID: \"dd847ee7-2204-46a5-b620-dc3df38a981b\") " Feb 8 23:32:23.252290 kubelet[1908]: I0208 23:32:23.252198 1908 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/dd847ee7-2204-46a5-b620-dc3df38a981b-cilium-run\") pod \"dd847ee7-2204-46a5-b620-dc3df38a981b\" (UID: \"dd847ee7-2204-46a5-b620-dc3df38a981b\") " Feb 8 23:32:23.252418 kubelet[1908]: I0208 23:32:23.252348 1908 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dd847ee7-2204-46a5-b620-dc3df38a981b-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "dd847ee7-2204-46a5-b620-dc3df38a981b" (UID: "dd847ee7-2204-46a5-b620-dc3df38a981b"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:32:23.252524 kubelet[1908]: I0208 23:32:23.252432 1908 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dd847ee7-2204-46a5-b620-dc3df38a981b-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "dd847ee7-2204-46a5-b620-dc3df38a981b" (UID: "dd847ee7-2204-46a5-b620-dc3df38a981b"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:32:23.252812 kubelet[1908]: W0208 23:32:23.252746 1908 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/dd847ee7-2204-46a5-b620-dc3df38a981b/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Feb 8 23:32:23.253460 kubelet[1908]: I0208 23:32:23.253053 1908 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/dd847ee7-2204-46a5-b620-dc3df38a981b-bpf-maps\") pod \"dd847ee7-2204-46a5-b620-dc3df38a981b\" (UID: \"dd847ee7-2204-46a5-b620-dc3df38a981b\") " Feb 8 23:32:23.253460 kubelet[1908]: I0208 23:32:23.253179 1908 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/dd847ee7-2204-46a5-b620-dc3df38a981b-clustermesh-secrets\") pod \"dd847ee7-2204-46a5-b620-dc3df38a981b\" (UID: \"dd847ee7-2204-46a5-b620-dc3df38a981b\") " Feb 8 23:32:23.257267 kubelet[1908]: I0208 23:32:23.254187 1908 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/dd847ee7-2204-46a5-b620-dc3df38a981b-etc-cni-netd\") on node \"ci-3510-3-2-9-f62ee4a992.novalocal\" DevicePath \"\"" Feb 8 23:32:23.257267 kubelet[1908]: I0208 23:32:23.254270 1908 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-ndxq5\" (UniqueName: \"kubernetes.io/projected/bf45b0f7-7c3a-4e1e-b509-ef5ba4bb83a3-kube-api-access-ndxq5\") on node \"ci-3510-3-2-9-f62ee4a992.novalocal\" DevicePath \"\"" Feb 8 23:32:23.257267 kubelet[1908]: I0208 23:32:23.254301 1908 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/dd847ee7-2204-46a5-b620-dc3df38a981b-cilium-cgroup\") on node \"ci-3510-3-2-9-f62ee4a992.novalocal\" DevicePath \"\"" Feb 8 23:32:23.257267 kubelet[1908]: I0208 23:32:23.254326 1908 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bf45b0f7-7c3a-4e1e-b509-ef5ba4bb83a3-cilium-config-path\") on node \"ci-3510-3-2-9-f62ee4a992.novalocal\" DevicePath \"\"" Feb 8 23:32:23.257853 kubelet[1908]: I0208 23:32:23.257800 1908 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dd847ee7-2204-46a5-b620-dc3df38a981b-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "dd847ee7-2204-46a5-b620-dc3df38a981b" (UID: "dd847ee7-2204-46a5-b620-dc3df38a981b"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:32:23.258029 kubelet[1908]: I0208 23:32:23.257997 1908 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dd847ee7-2204-46a5-b620-dc3df38a981b-cni-path" (OuterVolumeSpecName: "cni-path") pod "dd847ee7-2204-46a5-b620-dc3df38a981b" (UID: "dd847ee7-2204-46a5-b620-dc3df38a981b"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:32:23.259668 kubelet[1908]: I0208 23:32:23.259612 1908 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dd847ee7-2204-46a5-b620-dc3df38a981b-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "dd847ee7-2204-46a5-b620-dc3df38a981b" (UID: "dd847ee7-2204-46a5-b620-dc3df38a981b"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 8 23:32:23.259787 kubelet[1908]: I0208 23:32:23.259696 1908 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dd847ee7-2204-46a5-b620-dc3df38a981b-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "dd847ee7-2204-46a5-b620-dc3df38a981b" (UID: "dd847ee7-2204-46a5-b620-dc3df38a981b"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:32:23.259856 kubelet[1908]: I0208 23:32:23.259790 1908 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dd847ee7-2204-46a5-b620-dc3df38a981b-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "dd847ee7-2204-46a5-b620-dc3df38a981b" (UID: "dd847ee7-2204-46a5-b620-dc3df38a981b"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:32:23.259856 kubelet[1908]: I0208 23:32:23.259837 1908 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dd847ee7-2204-46a5-b620-dc3df38a981b-hostproc" (OuterVolumeSpecName: "hostproc") pod "dd847ee7-2204-46a5-b620-dc3df38a981b" (UID: "dd847ee7-2204-46a5-b620-dc3df38a981b"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:32:23.260786 kubelet[1908]: I0208 23:32:23.260732 1908 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dd847ee7-2204-46a5-b620-dc3df38a981b-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "dd847ee7-2204-46a5-b620-dc3df38a981b" (UID: "dd847ee7-2204-46a5-b620-dc3df38a981b"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:32:23.260961 kubelet[1908]: I0208 23:32:23.260932 1908 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dd847ee7-2204-46a5-b620-dc3df38a981b-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "dd847ee7-2204-46a5-b620-dc3df38a981b" (UID: "dd847ee7-2204-46a5-b620-dc3df38a981b"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:32:23.261118 kubelet[1908]: I0208 23:32:23.261091 1908 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dd847ee7-2204-46a5-b620-dc3df38a981b-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "dd847ee7-2204-46a5-b620-dc3df38a981b" (UID: "dd847ee7-2204-46a5-b620-dc3df38a981b"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:32:23.261935 kubelet[1908]: I0208 23:32:23.261902 1908 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd847ee7-2204-46a5-b620-dc3df38a981b-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "dd847ee7-2204-46a5-b620-dc3df38a981b" (UID: "dd847ee7-2204-46a5-b620-dc3df38a981b"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 8 23:32:23.265809 kubelet[1908]: I0208 23:32:23.265771 1908 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd847ee7-2204-46a5-b620-dc3df38a981b-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "dd847ee7-2204-46a5-b620-dc3df38a981b" (UID: "dd847ee7-2204-46a5-b620-dc3df38a981b"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 8 23:32:23.267861 kubelet[1908]: I0208 23:32:23.267780 1908 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd847ee7-2204-46a5-b620-dc3df38a981b-kube-api-access-lnmpb" (OuterVolumeSpecName: "kube-api-access-lnmpb") pod "dd847ee7-2204-46a5-b620-dc3df38a981b" (UID: "dd847ee7-2204-46a5-b620-dc3df38a981b"). InnerVolumeSpecName "kube-api-access-lnmpb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 8 23:32:23.355522 kubelet[1908]: I0208 23:32:23.355461 1908 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/dd847ee7-2204-46a5-b620-dc3df38a981b-cni-path\") on node \"ci-3510-3-2-9-f62ee4a992.novalocal\" DevicePath \"\"" Feb 8 23:32:23.355895 kubelet[1908]: I0208 23:32:23.355869 1908 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/dd847ee7-2204-46a5-b620-dc3df38a981b-hubble-tls\") on node \"ci-3510-3-2-9-f62ee4a992.novalocal\" DevicePath \"\"" Feb 8 23:32:23.356123 kubelet[1908]: I0208 23:32:23.356097 1908 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/dd847ee7-2204-46a5-b620-dc3df38a981b-host-proc-sys-net\") on node \"ci-3510-3-2-9-f62ee4a992.novalocal\" DevicePath \"\"" Feb 8 23:32:23.356355 kubelet[1908]: I0208 23:32:23.356326 1908 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-lnmpb\" (UniqueName: \"kubernetes.io/projected/dd847ee7-2204-46a5-b620-dc3df38a981b-kube-api-access-lnmpb\") on node \"ci-3510-3-2-9-f62ee4a992.novalocal\" DevicePath \"\"" Feb 8 23:32:23.356543 kubelet[1908]: I0208 23:32:23.356520 1908 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/dd847ee7-2204-46a5-b620-dc3df38a981b-clustermesh-secrets\") on node \"ci-3510-3-2-9-f62ee4a992.novalocal\" DevicePath \"\"" Feb 8 23:32:23.356715 kubelet[1908]: I0208 23:32:23.356692 1908 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/dd847ee7-2204-46a5-b620-dc3df38a981b-cilium-run\") on node \"ci-3510-3-2-9-f62ee4a992.novalocal\" DevicePath \"\"" Feb 8 23:32:23.356880 kubelet[1908]: I0208 23:32:23.356860 1908 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/dd847ee7-2204-46a5-b620-dc3df38a981b-bpf-maps\") on node \"ci-3510-3-2-9-f62ee4a992.novalocal\" DevicePath \"\"" Feb 8 23:32:23.357082 kubelet[1908]: I0208 23:32:23.357057 1908 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/dd847ee7-2204-46a5-b620-dc3df38a981b-cilium-config-path\") on node \"ci-3510-3-2-9-f62ee4a992.novalocal\" DevicePath \"\"" Feb 8 23:32:23.357293 kubelet[1908]: I0208 23:32:23.357267 1908 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/dd847ee7-2204-46a5-b620-dc3df38a981b-host-proc-sys-kernel\") on node \"ci-3510-3-2-9-f62ee4a992.novalocal\" DevicePath \"\"" Feb 8 23:32:23.357477 kubelet[1908]: I0208 23:32:23.357453 1908 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dd847ee7-2204-46a5-b620-dc3df38a981b-xtables-lock\") on node \"ci-3510-3-2-9-f62ee4a992.novalocal\" DevicePath \"\"" Feb 8 23:32:23.357681 kubelet[1908]: I0208 23:32:23.357656 1908 
reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dd847ee7-2204-46a5-b620-dc3df38a981b-lib-modules\") on node \"ci-3510-3-2-9-f62ee4a992.novalocal\" DevicePath \"\"" Feb 8 23:32:23.357853 kubelet[1908]: I0208 23:32:23.357832 1908 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/dd847ee7-2204-46a5-b620-dc3df38a981b-hostproc\") on node \"ci-3510-3-2-9-f62ee4a992.novalocal\" DevicePath \"\"" Feb 8 23:32:23.902736 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e67e410872b1a23a14a736c4a58e071c285bcca47322990c35327cf26fa42969-rootfs.mount: Deactivated successfully. Feb 8 23:32:23.903471 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-de50f18b8bdbe4f38f8136cd2cd161351b74eb6a1943aaaabda88633c8f3f7fa-rootfs.mount: Deactivated successfully. Feb 8 23:32:23.903885 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-01e5782510358385ad4d7702b9d9469947dba0d161e1d7e7558f8eb7d3cede07-rootfs.mount: Deactivated successfully. Feb 8 23:32:23.904075 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-01e5782510358385ad4d7702b9d9469947dba0d161e1d7e7558f8eb7d3cede07-shm.mount: Deactivated successfully. Feb 8 23:32:23.904732 systemd[1]: var-lib-kubelet-pods-dd847ee7\x2d2204\x2d46a5\x2db620\x2ddc3df38a981b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dlnmpb.mount: Deactivated successfully. Feb 8 23:32:23.905109 systemd[1]: var-lib-kubelet-pods-bf45b0f7\x2d7c3a\x2d4e1e\x2db509\x2def5ba4bb83a3-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dndxq5.mount: Deactivated successfully. Feb 8 23:32:23.905590 systemd[1]: var-lib-kubelet-pods-dd847ee7\x2d2204\x2d46a5\x2db620\x2ddc3df38a981b-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 8 23:32:23.906138 systemd[1]: var-lib-kubelet-pods-dd847ee7\x2d2204\x2d46a5\x2db620\x2ddc3df38a981b-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 8 23:32:24.060579 kubelet[1908]: I0208 23:32:24.060529 1908 scope.go:115] "RemoveContainer" containerID="fa653a0ef9603b5c4f5a2893280ca5307dcf351d2a220a4efdb1811bf47f1855" Feb 8 23:32:24.070646 env[1068]: time="2024-02-08T23:32:24.070525647Z" level=info msg="RemoveContainer for \"fa653a0ef9603b5c4f5a2893280ca5307dcf351d2a220a4efdb1811bf47f1855\"" Feb 8 23:32:24.078511 systemd[1]: Removed slice kubepods-besteffort-podbf45b0f7_7c3a_4e1e_b509_ef5ba4bb83a3.slice. Feb 8 23:32:24.112848 env[1068]: time="2024-02-08T23:32:24.112681672Z" level=info msg="RemoveContainer for \"fa653a0ef9603b5c4f5a2893280ca5307dcf351d2a220a4efdb1811bf47f1855\" returns successfully" Feb 8 23:32:24.116788 kubelet[1908]: I0208 23:32:24.116746 1908 scope.go:115] "RemoveContainer" containerID="e67e410872b1a23a14a736c4a58e071c285bcca47322990c35327cf26fa42969" Feb 8 23:32:24.122622 env[1068]: time="2024-02-08T23:32:24.122531480Z" level=info msg="RemoveContainer for \"e67e410872b1a23a14a736c4a58e071c285bcca47322990c35327cf26fa42969\"" Feb 8 23:32:24.123949 systemd[1]: Removed slice kubepods-burstable-poddd847ee7_2204_46a5_b620_dc3df38a981b.slice. Feb 8 23:32:24.124158 systemd[1]: kubepods-burstable-poddd847ee7_2204_46a5_b620_dc3df38a981b.slice: Consumed 9.435s CPU time. 
Feb 8 23:32:24.126527 env[1068]: time="2024-02-08T23:32:24.126485621Z" level=info msg="RemoveContainer for \"e67e410872b1a23a14a736c4a58e071c285bcca47322990c35327cf26fa42969\" returns successfully" Feb 8 23:32:24.128804 kubelet[1908]: I0208 23:32:24.128786 1908 scope.go:115] "RemoveContainer" containerID="8f1a5f984dded3e07349eca8b92945dee4bd12d35876d369f26f82e25a73c178" Feb 8 23:32:24.130716 env[1068]: time="2024-02-08T23:32:24.130673958Z" level=info msg="RemoveContainer for \"8f1a5f984dded3e07349eca8b92945dee4bd12d35876d369f26f82e25a73c178\"" Feb 8 23:32:24.139852 env[1068]: time="2024-02-08T23:32:24.139783386Z" level=info msg="RemoveContainer for \"8f1a5f984dded3e07349eca8b92945dee4bd12d35876d369f26f82e25a73c178\" returns successfully" Feb 8 23:32:24.140125 kubelet[1908]: I0208 23:32:24.140104 1908 scope.go:115] "RemoveContainer" containerID="c90c6c8f39af9beba1f4da2b7b9f9a5e5bb1d9cf49c303b0f51566df3b1d4f46" Feb 8 23:32:24.141946 env[1068]: time="2024-02-08T23:32:24.141811721Z" level=info msg="RemoveContainer for \"c90c6c8f39af9beba1f4da2b7b9f9a5e5bb1d9cf49c303b0f51566df3b1d4f46\"" Feb 8 23:32:24.146040 env[1068]: time="2024-02-08T23:32:24.145997633Z" level=info msg="RemoveContainer for \"c90c6c8f39af9beba1f4da2b7b9f9a5e5bb1d9cf49c303b0f51566df3b1d4f46\" returns successfully" Feb 8 23:32:24.146434 kubelet[1908]: I0208 23:32:24.146413 1908 scope.go:115] "RemoveContainer" containerID="170541a3e33dfb2e91db64ba4e5e33c9c05d2862701ec5586afd340300d7c4ff" Feb 8 23:32:24.147968 env[1068]: time="2024-02-08T23:32:24.147942686Z" level=info msg="RemoveContainer for \"170541a3e33dfb2e91db64ba4e5e33c9c05d2862701ec5586afd340300d7c4ff\"" Feb 8 23:32:24.151295 env[1068]: time="2024-02-08T23:32:24.151270318Z" level=info msg="RemoveContainer for \"170541a3e33dfb2e91db64ba4e5e33c9c05d2862701ec5586afd340300d7c4ff\" returns successfully" Feb 8 23:32:24.151571 kubelet[1908]: I0208 23:32:24.151558 1908 scope.go:115] "RemoveContainer" containerID="bfb9b7ac1b1a9d9c5b5a136c6dfb72588a2415c870050019c5454e451046fce7" Feb 8 23:32:24.153421 env[1068]: time="2024-02-08T23:32:24.153342108Z" level=info msg="RemoveContainer for \"bfb9b7ac1b1a9d9c5b5a136c6dfb72588a2415c870050019c5454e451046fce7\"" Feb 8 23:32:24.156947 env[1068]: time="2024-02-08T23:32:24.156919627Z" level=info msg="RemoveContainer for \"bfb9b7ac1b1a9d9c5b5a136c6dfb72588a2415c870050019c5454e451046fce7\" returns successfully" Feb 8 23:32:24.157236 kubelet[1908]: I0208 23:32:24.157207 1908 scope.go:115] "RemoveContainer" containerID="e67e410872b1a23a14a736c4a58e071c285bcca47322990c35327cf26fa42969" Feb 8 23:32:24.157647 env[1068]: time="2024-02-08T23:32:24.157537398Z" level=error msg="ContainerStatus for \"e67e410872b1a23a14a736c4a58e071c285bcca47322990c35327cf26fa42969\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e67e410872b1a23a14a736c4a58e071c285bcca47322990c35327cf26fa42969\": not found" Feb 8 23:32:24.159440 kubelet[1908]: E0208 23:32:24.159414 1908 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e67e410872b1a23a14a736c4a58e071c285bcca47322990c35327cf26fa42969\": not found" containerID="e67e410872b1a23a14a736c4a58e071c285bcca47322990c35327cf26fa42969" Feb 8 23:32:24.160736 kubelet[1908]: I0208 23:32:24.160716 1908 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:e67e410872b1a23a14a736c4a58e071c285bcca47322990c35327cf26fa42969} err="failed to get container 
status \"e67e410872b1a23a14a736c4a58e071c285bcca47322990c35327cf26fa42969\": rpc error: code = NotFound desc = an error occurred when try to find container \"e67e410872b1a23a14a736c4a58e071c285bcca47322990c35327cf26fa42969\": not found" Feb 8 23:32:24.160806 kubelet[1908]: I0208 23:32:24.160747 1908 scope.go:115] "RemoveContainer" containerID="8f1a5f984dded3e07349eca8b92945dee4bd12d35876d369f26f82e25a73c178" Feb 8 23:32:24.161081 env[1068]: time="2024-02-08T23:32:24.160998250Z" level=error msg="ContainerStatus for \"8f1a5f984dded3e07349eca8b92945dee4bd12d35876d369f26f82e25a73c178\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8f1a5f984dded3e07349eca8b92945dee4bd12d35876d369f26f82e25a73c178\": not found" Feb 8 23:32:24.161341 kubelet[1908]: E0208 23:32:24.161300 1908 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8f1a5f984dded3e07349eca8b92945dee4bd12d35876d369f26f82e25a73c178\": not found" containerID="8f1a5f984dded3e07349eca8b92945dee4bd12d35876d369f26f82e25a73c178" Feb 8 23:32:24.161487 kubelet[1908]: I0208 23:32:24.161476 1908 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:8f1a5f984dded3e07349eca8b92945dee4bd12d35876d369f26f82e25a73c178} err="failed to get container status \"8f1a5f984dded3e07349eca8b92945dee4bd12d35876d369f26f82e25a73c178\": rpc error: code = NotFound desc = an error occurred when try to find container \"8f1a5f984dded3e07349eca8b92945dee4bd12d35876d369f26f82e25a73c178\": not found" Feb 8 23:32:24.161581 kubelet[1908]: I0208 23:32:24.161570 1908 scope.go:115] "RemoveContainer" containerID="c90c6c8f39af9beba1f4da2b7b9f9a5e5bb1d9cf49c303b0f51566df3b1d4f46" Feb 8 23:32:24.161884 env[1068]: time="2024-02-08T23:32:24.161837201Z" level=error msg="ContainerStatus for \"c90c6c8f39af9beba1f4da2b7b9f9a5e5bb1d9cf49c303b0f51566df3b1d4f46\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c90c6c8f39af9beba1f4da2b7b9f9a5e5bb1d9cf49c303b0f51566df3b1d4f46\": not found" Feb 8 23:32:24.162052 kubelet[1908]: E0208 23:32:24.162028 1908 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c90c6c8f39af9beba1f4da2b7b9f9a5e5bb1d9cf49c303b0f51566df3b1d4f46\": not found" containerID="c90c6c8f39af9beba1f4da2b7b9f9a5e5bb1d9cf49c303b0f51566df3b1d4f46" Feb 8 23:32:24.162145 kubelet[1908]: I0208 23:32:24.162134 1908 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:c90c6c8f39af9beba1f4da2b7b9f9a5e5bb1d9cf49c303b0f51566df3b1d4f46} err="failed to get container status \"c90c6c8f39af9beba1f4da2b7b9f9a5e5bb1d9cf49c303b0f51566df3b1d4f46\": rpc error: code = NotFound desc = an error occurred when try to find container \"c90c6c8f39af9beba1f4da2b7b9f9a5e5bb1d9cf49c303b0f51566df3b1d4f46\": not found" Feb 8 23:32:24.162239 kubelet[1908]: I0208 23:32:24.162228 1908 scope.go:115] "RemoveContainer" containerID="170541a3e33dfb2e91db64ba4e5e33c9c05d2862701ec5586afd340300d7c4ff" Feb 8 23:32:24.162517 env[1068]: time="2024-02-08T23:32:24.162473690Z" level=error msg="ContainerStatus for \"170541a3e33dfb2e91db64ba4e5e33c9c05d2862701ec5586afd340300d7c4ff\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"170541a3e33dfb2e91db64ba4e5e33c9c05d2862701ec5586afd340300d7c4ff\": not found" 
Feb 8 23:32:24.162673 kubelet[1908]: E0208 23:32:24.162651 1908 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"170541a3e33dfb2e91db64ba4e5e33c9c05d2862701ec5586afd340300d7c4ff\": not found" containerID="170541a3e33dfb2e91db64ba4e5e33c9c05d2862701ec5586afd340300d7c4ff" Feb 8 23:32:24.162772 kubelet[1908]: I0208 23:32:24.162761 1908 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:170541a3e33dfb2e91db64ba4e5e33c9c05d2862701ec5586afd340300d7c4ff} err="failed to get container status \"170541a3e33dfb2e91db64ba4e5e33c9c05d2862701ec5586afd340300d7c4ff\": rpc error: code = NotFound desc = an error occurred when try to find container \"170541a3e33dfb2e91db64ba4e5e33c9c05d2862701ec5586afd340300d7c4ff\": not found" Feb 8 23:32:24.162866 kubelet[1908]: I0208 23:32:24.162856 1908 scope.go:115] "RemoveContainer" containerID="bfb9b7ac1b1a9d9c5b5a136c6dfb72588a2415c870050019c5454e451046fce7" Feb 8 23:32:24.163143 env[1068]: time="2024-02-08T23:32:24.163099246Z" level=error msg="ContainerStatus for \"bfb9b7ac1b1a9d9c5b5a136c6dfb72588a2415c870050019c5454e451046fce7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"bfb9b7ac1b1a9d9c5b5a136c6dfb72588a2415c870050019c5454e451046fce7\": not found" Feb 8 23:32:24.163345 kubelet[1908]: E0208 23:32:24.163322 1908 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"bfb9b7ac1b1a9d9c5b5a136c6dfb72588a2415c870050019c5454e451046fce7\": not found" containerID="bfb9b7ac1b1a9d9c5b5a136c6dfb72588a2415c870050019c5454e451046fce7" Feb 8 23:32:24.163463 kubelet[1908]: I0208 23:32:24.163454 1908 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:bfb9b7ac1b1a9d9c5b5a136c6dfb72588a2415c870050019c5454e451046fce7} err="failed to get container status \"bfb9b7ac1b1a9d9c5b5a136c6dfb72588a2415c870050019c5454e451046fce7\": rpc error: code = NotFound desc = an error occurred when try to find container \"bfb9b7ac1b1a9d9c5b5a136c6dfb72588a2415c870050019c5454e451046fce7\": not found" Feb 8 23:32:24.439677 kubelet[1908]: I0208 23:32:24.439448 1908 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=bf45b0f7-7c3a-4e1e-b509-ef5ba4bb83a3 path="/var/lib/kubelet/pods/bf45b0f7-7c3a-4e1e-b509-ef5ba4bb83a3/volumes" Feb 8 23:32:24.442090 kubelet[1908]: I0208 23:32:24.442043 1908 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=dd847ee7-2204-46a5-b620-dc3df38a981b path="/var/lib/kubelet/pods/dd847ee7-2204-46a5-b620-dc3df38a981b/volumes" Feb 8 23:32:24.612209 kubelet[1908]: E0208 23:32:24.612130 1908 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 8 23:32:24.999213 sshd[3436]: pam_unix(sshd:session): session closed for user core Feb 8 23:32:25.006058 systemd[1]: Started sshd@22-172.24.4.155:22-172.24.4.1:44826.service. Feb 8 23:32:25.009198 systemd[1]: sshd@21-172.24.4.155:22-172.24.4.1:41248.service: Deactivated successfully. Feb 8 23:32:25.012779 systemd[1]: session-22.scope: Deactivated successfully. Feb 8 23:32:25.013688 systemd[1]: session-22.scope: Consumed 1.282s CPU time. Feb 8 23:32:25.016014 systemd-logind[1059]: Session 22 logged out. Waiting for processes to exit. 
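
The ContainerStatus/DeleteContainer "not found" errors above are the benign race after a successful RemoveContainer: kubelet re-queries container IDs that the runtime has already deleted, and the CRI answers with gRPC NotFound, which is then treated as the desired end state. A hedged Go sketch of that pattern; the removeIfPresent helper is illustrative, not kubelet's actual code:

    package main

    import (
    	"fmt"

    	"google.golang.org/grpc/codes"
    	"google.golang.org/grpc/status"
    )

    // removeIfPresent treats a gRPC NotFound from the runtime as success:
    // the container is already gone, which is what the caller wanted.
    func removeIfPresent(remove func(id string) error, id string) error {
    	if err := remove(id); err != nil {
    		if status.Code(err) == codes.NotFound {
    			return nil // already removed; matches the "not found" replies above
    		}
    		return fmt.Errorf("remove %s: %w", id, err)
    	}
    	return nil
    }

    func main() {
    	notFound := status.Error(codes.NotFound,
    		`an error occurred when try to find container "e67e4108...": not found`)
    	fmt.Println(removeIfPresent(func(string) error { return notFound }, "e67e4108...")) // <nil>
    }
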
Feb 8 23:32:25.024631 systemd-logind[1059]: Removed session 22. Feb 8 23:32:26.262134 sshd[3603]: Accepted publickey for core from 172.24.4.1 port 44826 ssh2: RSA SHA256:hSCdy28aHh0WFAXHFi8tWlQhiCOOiQrn91fhtzGNenI Feb 8 23:32:26.265078 sshd[3603]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:32:26.279151 systemd[1]: Started session-23.scope. Feb 8 23:32:26.281073 systemd-logind[1059]: New session 23 of user core. Feb 8 23:32:28.186080 kubelet[1908]: I0208 23:32:28.186034 1908 topology_manager.go:212] "Topology Admit Handler" Feb 8 23:32:28.187502 kubelet[1908]: E0208 23:32:28.187462 1908 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="dd847ee7-2204-46a5-b620-dc3df38a981b" containerName="mount-bpf-fs" Feb 8 23:32:28.187502 kubelet[1908]: E0208 23:32:28.187502 1908 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="dd847ee7-2204-46a5-b620-dc3df38a981b" containerName="clean-cilium-state" Feb 8 23:32:28.187625 kubelet[1908]: E0208 23:32:28.187514 1908 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="dd847ee7-2204-46a5-b620-dc3df38a981b" containerName="cilium-agent" Feb 8 23:32:28.187625 kubelet[1908]: E0208 23:32:28.187525 1908 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="dd847ee7-2204-46a5-b620-dc3df38a981b" containerName="mount-cgroup" Feb 8 23:32:28.187625 kubelet[1908]: E0208 23:32:28.187534 1908 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="dd847ee7-2204-46a5-b620-dc3df38a981b" containerName="apply-sysctl-overwrites" Feb 8 23:32:28.187625 kubelet[1908]: E0208 23:32:28.187543 1908 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="bf45b0f7-7c3a-4e1e-b509-ef5ba4bb83a3" containerName="cilium-operator" Feb 8 23:32:28.187625 kubelet[1908]: I0208 23:32:28.187580 1908 memory_manager.go:346] "RemoveStaleState removing state" podUID="dd847ee7-2204-46a5-b620-dc3df38a981b" containerName="cilium-agent" Feb 8 23:32:28.187625 kubelet[1908]: I0208 23:32:28.187593 1908 memory_manager.go:346] "RemoveStaleState removing state" podUID="bf45b0f7-7c3a-4e1e-b509-ef5ba4bb83a3" containerName="cilium-operator" Feb 8 23:32:28.209116 systemd[1]: Created slice kubepods-burstable-pod40d6d52f_5115_47ea_a934_817897fab282.slice. 
Feb 8 23:32:28.298449 kubelet[1908]: I0208 23:32:28.298382 1908 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/40d6d52f-5115-47ea-a934-817897fab282-etc-cni-netd\") pod \"cilium-jkghc\" (UID: \"40d6d52f-5115-47ea-a934-817897fab282\") " pod="kube-system/cilium-jkghc" Feb 8 23:32:28.299711 kubelet[1908]: I0208 23:32:28.299686 1908 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/40d6d52f-5115-47ea-a934-817897fab282-host-proc-sys-kernel\") pod \"cilium-jkghc\" (UID: \"40d6d52f-5115-47ea-a934-817897fab282\") " pod="kube-system/cilium-jkghc" Feb 8 23:32:28.299961 kubelet[1908]: I0208 23:32:28.299940 1908 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/40d6d52f-5115-47ea-a934-817897fab282-cilium-cgroup\") pod \"cilium-jkghc\" (UID: \"40d6d52f-5115-47ea-a934-817897fab282\") " pod="kube-system/cilium-jkghc" Feb 8 23:32:28.300165 kubelet[1908]: I0208 23:32:28.300146 1908 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/40d6d52f-5115-47ea-a934-817897fab282-cilium-ipsec-secrets\") pod \"cilium-jkghc\" (UID: \"40d6d52f-5115-47ea-a934-817897fab282\") " pod="kube-system/cilium-jkghc" Feb 8 23:32:28.300399 kubelet[1908]: I0208 23:32:28.300376 1908 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/40d6d52f-5115-47ea-a934-817897fab282-cilium-run\") pod \"cilium-jkghc\" (UID: \"40d6d52f-5115-47ea-a934-817897fab282\") " pod="kube-system/cilium-jkghc" Feb 8 23:32:28.300627 kubelet[1908]: I0208 23:32:28.300605 1908 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/40d6d52f-5115-47ea-a934-817897fab282-lib-modules\") pod \"cilium-jkghc\" (UID: \"40d6d52f-5115-47ea-a934-817897fab282\") " pod="kube-system/cilium-jkghc" Feb 8 23:32:28.300823 kubelet[1908]: I0208 23:32:28.300802 1908 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/40d6d52f-5115-47ea-a934-817897fab282-hostproc\") pod \"cilium-jkghc\" (UID: \"40d6d52f-5115-47ea-a934-817897fab282\") " pod="kube-system/cilium-jkghc" Feb 8 23:32:28.301066 kubelet[1908]: I0208 23:32:28.301019 1908 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/40d6d52f-5115-47ea-a934-817897fab282-cni-path\") pod \"cilium-jkghc\" (UID: \"40d6d52f-5115-47ea-a934-817897fab282\") " pod="kube-system/cilium-jkghc" Feb 8 23:32:28.301159 kubelet[1908]: I0208 23:32:28.301111 1908 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/40d6d52f-5115-47ea-a934-817897fab282-host-proc-sys-net\") pod \"cilium-jkghc\" (UID: \"40d6d52f-5115-47ea-a934-817897fab282\") " pod="kube-system/cilium-jkghc" Feb 8 23:32:28.301252 kubelet[1908]: I0208 23:32:28.301170 1908 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: 
\"kubernetes.io/projected/40d6d52f-5115-47ea-a934-817897fab282-hubble-tls\") pod \"cilium-jkghc\" (UID: \"40d6d52f-5115-47ea-a934-817897fab282\") " pod="kube-system/cilium-jkghc" Feb 8 23:32:28.301322 kubelet[1908]: I0208 23:32:28.301258 1908 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/40d6d52f-5115-47ea-a934-817897fab282-bpf-maps\") pod \"cilium-jkghc\" (UID: \"40d6d52f-5115-47ea-a934-817897fab282\") " pod="kube-system/cilium-jkghc" Feb 8 23:32:28.301379 kubelet[1908]: I0208 23:32:28.301322 1908 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m84cx\" (UniqueName: \"kubernetes.io/projected/40d6d52f-5115-47ea-a934-817897fab282-kube-api-access-m84cx\") pod \"cilium-jkghc\" (UID: \"40d6d52f-5115-47ea-a934-817897fab282\") " pod="kube-system/cilium-jkghc" Feb 8 23:32:28.301433 kubelet[1908]: I0208 23:32:28.301377 1908 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/40d6d52f-5115-47ea-a934-817897fab282-xtables-lock\") pod \"cilium-jkghc\" (UID: \"40d6d52f-5115-47ea-a934-817897fab282\") " pod="kube-system/cilium-jkghc" Feb 8 23:32:28.301486 kubelet[1908]: I0208 23:32:28.301441 1908 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/40d6d52f-5115-47ea-a934-817897fab282-cilium-config-path\") pod \"cilium-jkghc\" (UID: \"40d6d52f-5115-47ea-a934-817897fab282\") " pod="kube-system/cilium-jkghc" Feb 8 23:32:28.301534 kubelet[1908]: I0208 23:32:28.301497 1908 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/40d6d52f-5115-47ea-a934-817897fab282-clustermesh-secrets\") pod \"cilium-jkghc\" (UID: \"40d6d52f-5115-47ea-a934-817897fab282\") " pod="kube-system/cilium-jkghc" Feb 8 23:32:28.329685 sshd[3603]: pam_unix(sshd:session): session closed for user core Feb 8 23:32:28.338809 systemd-logind[1059]: Session 23 logged out. Waiting for processes to exit. Feb 8 23:32:28.338881 systemd[1]: sshd@22-172.24.4.155:22-172.24.4.1:44826.service: Deactivated successfully. Feb 8 23:32:28.340885 systemd[1]: session-23.scope: Deactivated successfully. Feb 8 23:32:28.341684 systemd[1]: session-23.scope: Consumed 1.398s CPU time. Feb 8 23:32:28.346904 systemd[1]: Started sshd@23-172.24.4.155:22-172.24.4.1:44840.service. Feb 8 23:32:28.353794 systemd-logind[1059]: Removed session 23. Feb 8 23:32:28.394290 kubelet[1908]: I0208 23:32:28.394252 1908 setters.go:548] "Node became not ready" node="ci-3510-3-2-9-f62ee4a992.novalocal" condition={Type:Ready Status:False LastHeartbeatTime:2024-02-08 23:32:28.394124167 +0000 UTC m=+144.263958292 LastTransitionTime:2024-02-08 23:32:28.394124167 +0000 UTC m=+144.263958292 Reason:KubeletNotReady Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized} Feb 8 23:32:28.513360 env[1068]: time="2024-02-08T23:32:28.513042117Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-jkghc,Uid:40d6d52f-5115-47ea-a934-817897fab282,Namespace:kube-system,Attempt:0,}" Feb 8 23:32:28.551778 env[1068]: time="2024-02-08T23:32:28.551638631Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 8 23:32:28.552421 env[1068]: time="2024-02-08T23:32:28.552188517Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 8 23:32:28.552421 env[1068]: time="2024-02-08T23:32:28.552284032Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 8 23:32:28.552879 env[1068]: time="2024-02-08T23:32:28.552795895Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/30475538fa50cb3e39a33208ae217e13caf6e53b28fa3de11568c5f427cb13a2 pid=3628 runtime=io.containerd.runc.v2 Feb 8 23:32:28.585772 systemd[1]: Started cri-containerd-30475538fa50cb3e39a33208ae217e13caf6e53b28fa3de11568c5f427cb13a2.scope. Feb 8 23:32:28.625614 env[1068]: time="2024-02-08T23:32:28.625561621Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-jkghc,Uid:40d6d52f-5115-47ea-a934-817897fab282,Namespace:kube-system,Attempt:0,} returns sandbox id \"30475538fa50cb3e39a33208ae217e13caf6e53b28fa3de11568c5f427cb13a2\"" Feb 8 23:32:28.630201 env[1068]: time="2024-02-08T23:32:28.630160477Z" level=info msg="CreateContainer within sandbox \"30475538fa50cb3e39a33208ae217e13caf6e53b28fa3de11568c5f427cb13a2\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 8 23:32:28.651630 env[1068]: time="2024-02-08T23:32:28.651572403Z" level=info msg="CreateContainer within sandbox \"30475538fa50cb3e39a33208ae217e13caf6e53b28fa3de11568c5f427cb13a2\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"4b1bf584ad1debfb020f066d63531a260272fc547641e6468d18799941c90c6b\"" Feb 8 23:32:28.653192 env[1068]: time="2024-02-08T23:32:28.653108270Z" level=info msg="StartContainer for \"4b1bf584ad1debfb020f066d63531a260272fc547641e6468d18799941c90c6b\"" Feb 8 23:32:28.679288 systemd[1]: Started cri-containerd-4b1bf584ad1debfb020f066d63531a260272fc547641e6468d18799941c90c6b.scope. Feb 8 23:32:28.699585 systemd[1]: cri-containerd-4b1bf584ad1debfb020f066d63531a260272fc547641e6468d18799941c90c6b.scope: Deactivated successfully. 
Feb 8 23:32:28.720841 env[1068]: time="2024-02-08T23:32:28.720756203Z" level=info msg="shim disconnected" id=4b1bf584ad1debfb020f066d63531a260272fc547641e6468d18799941c90c6b Feb 8 23:32:28.721171 env[1068]: time="2024-02-08T23:32:28.721144596Z" level=warning msg="cleaning up after shim disconnected" id=4b1bf584ad1debfb020f066d63531a260272fc547641e6468d18799941c90c6b namespace=k8s.io Feb 8 23:32:28.721307 env[1068]: time="2024-02-08T23:32:28.721286922Z" level=info msg="cleaning up dead shim" Feb 8 23:32:28.732501 env[1068]: time="2024-02-08T23:32:28.732400029Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:32:28Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3686 runtime=io.containerd.runc.v2\ntime=\"2024-02-08T23:32:28Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/4b1bf584ad1debfb020f066d63531a260272fc547641e6468d18799941c90c6b/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Feb 8 23:32:28.733352 env[1068]: time="2024-02-08T23:32:28.733143791Z" level=error msg="copy shim log" error="read /proc/self/fd/31: file already closed" Feb 8 23:32:28.735870 env[1068]: time="2024-02-08T23:32:28.735795833Z" level=error msg="Failed to pipe stderr of container \"4b1bf584ad1debfb020f066d63531a260272fc547641e6468d18799941c90c6b\"" error="reading from a closed fifo" Feb 8 23:32:28.735870 env[1068]: time="2024-02-08T23:32:28.733548857Z" level=error msg="Failed to pipe stdout of container \"4b1bf584ad1debfb020f066d63531a260272fc547641e6468d18799941c90c6b\"" error="reading from a closed fifo" Feb 8 23:32:28.739382 env[1068]: time="2024-02-08T23:32:28.739329635Z" level=error msg="StartContainer for \"4b1bf584ad1debfb020f066d63531a260272fc547641e6468d18799941c90c6b\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Feb 8 23:32:28.739903 kubelet[1908]: E0208 23:32:28.739753 1908 remote_runtime.go:326] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="4b1bf584ad1debfb020f066d63531a260272fc547641e6468d18799941c90c6b" Feb 8 23:32:28.741384 kubelet[1908]: E0208 23:32:28.741186 1908 kuberuntime_manager.go:1212] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Feb 8 23:32:28.741384 kubelet[1908]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Feb 8 23:32:28.741384 kubelet[1908]: rm /hostbin/cilium-mount Feb 8 23:32:28.741573 kubelet[1908]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-m84cx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},} start failed in pod cilium-jkghc_kube-system(40d6d52f-5115-47ea-a934-817897fab282): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Feb 8 23:32:28.741573 kubelet[1908]: E0208 23:32:28.741328 1908 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-jkghc" podUID=40d6d52f-5115-47ea-a934-817897fab282 Feb 8 23:32:29.144953 env[1068]: time="2024-02-08T23:32:29.144841999Z" level=info msg="CreateContainer within sandbox \"30475538fa50cb3e39a33208ae217e13caf6e53b28fa3de11568c5f427cb13a2\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:1,}" Feb 8 23:32:29.167984 env[1068]: time="2024-02-08T23:32:29.167876076Z" level=info msg="CreateContainer within sandbox \"30475538fa50cb3e39a33208ae217e13caf6e53b28fa3de11568c5f427cb13a2\" for &ContainerMetadata{Name:mount-cgroup,Attempt:1,} returns container id \"d81402e39ebf118b84f09a58ef7ac68ed40cc774b3e9e370e48a637306cd3429\"" Feb 8 23:32:29.173680 env[1068]: time="2024-02-08T23:32:29.173609875Z" level=info msg="StartContainer for \"d81402e39ebf118b84f09a58ef7ac68ed40cc774b3e9e370e48a637306cd3429\"" Feb 8 23:32:29.221609 systemd[1]: Started cri-containerd-d81402e39ebf118b84f09a58ef7ac68ed40cc774b3e9e370e48a637306cd3429.scope. Feb 8 23:32:29.240042 systemd[1]: cri-containerd-d81402e39ebf118b84f09a58ef7ac68ed40cc774b3e9e370e48a637306cd3429.scope: Deactivated successfully. 
Feb 8 23:32:29.255971 env[1068]: time="2024-02-08T23:32:29.255908852Z" level=info msg="shim disconnected" id=d81402e39ebf118b84f09a58ef7ac68ed40cc774b3e9e370e48a637306cd3429 Feb 8 23:32:29.255971 env[1068]: time="2024-02-08T23:32:29.255969400Z" level=warning msg="cleaning up after shim disconnected" id=d81402e39ebf118b84f09a58ef7ac68ed40cc774b3e9e370e48a637306cd3429 namespace=k8s.io Feb 8 23:32:29.255971 env[1068]: time="2024-02-08T23:32:29.255981654Z" level=info msg="cleaning up dead shim" Feb 8 23:32:29.265883 env[1068]: time="2024-02-08T23:32:29.265823361Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:32:29Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3725 runtime=io.containerd.runc.v2\ntime=\"2024-02-08T23:32:29Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/d81402e39ebf118b84f09a58ef7ac68ed40cc774b3e9e370e48a637306cd3429/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Feb 8 23:32:29.266155 env[1068]: time="2024-02-08T23:32:29.266079788Z" level=error msg="copy shim log" error="read /proc/self/fd/31: file already closed" Feb 8 23:32:29.266489 env[1068]: time="2024-02-08T23:32:29.266435587Z" level=error msg="Failed to pipe stdout of container \"d81402e39ebf118b84f09a58ef7ac68ed40cc774b3e9e370e48a637306cd3429\"" error="reading from a closed fifo" Feb 8 23:32:29.266670 env[1068]: time="2024-02-08T23:32:29.266631837Z" level=error msg="Failed to pipe stderr of container \"d81402e39ebf118b84f09a58ef7ac68ed40cc774b3e9e370e48a637306cd3429\"" error="reading from a closed fifo" Feb 8 23:32:29.270292 env[1068]: time="2024-02-08T23:32:29.270210961Z" level=error msg="StartContainer for \"d81402e39ebf118b84f09a58ef7ac68ed40cc774b3e9e370e48a637306cd3429\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Feb 8 23:32:29.271255 kubelet[1908]: E0208 23:32:29.270578 1908 remote_runtime.go:326] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="d81402e39ebf118b84f09a58ef7ac68ed40cc774b3e9e370e48a637306cd3429" Feb 8 23:32:29.271255 kubelet[1908]: E0208 23:32:29.270695 1908 kuberuntime_manager.go:1212] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Feb 8 23:32:29.271255 kubelet[1908]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Feb 8 23:32:29.271255 kubelet[1908]: rm /hostbin/cilium-mount Feb 8 23:32:29.271255 kubelet[1908]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-m84cx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},} start failed in pod cilium-jkghc_kube-system(40d6d52f-5115-47ea-a934-817897fab282): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Feb 8 23:32:29.271255 kubelet[1908]: E0208 23:32:29.270743 1908 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-jkghc" podUID=40d6d52f-5115-47ea-a934-817897fab282 Feb 8 23:32:29.614770 kubelet[1908]: E0208 23:32:29.614665 1908 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 8 23:32:29.664321 sshd[3614]: Accepted publickey for core from 172.24.4.1 port 44840 ssh2: RSA SHA256:hSCdy28aHh0WFAXHFi8tWlQhiCOOiQrn91fhtzGNenI Feb 8 23:32:29.667004 sshd[3614]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:32:29.676759 systemd-logind[1059]: New session 24 of user core. Feb 8 23:32:29.680344 systemd[1]: Started session-24.scope. 
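
Both StartContainer attempts above die in runc's init with "write /proc/self/attr/keycreate: invalid argument". The init container spec carries SELinuxOptions Type spc_t, so before exec the runtime writes that label to the process's keycreate attribute; on a host whose kernel has no matching SELinux policy loaded, the kernel rejects the write with EINVAL, the shim exits, and the empty stdout/stderr FIFOs produce the "reading from a closed fifo" follow-ups. A minimal Go sketch of the failing step, assuming this is the runc/go-selinux code path at fault:

    package main

    import (
    	"fmt"
    	"os"
    )

    // setKeyCreateLabel approximates what the runtime's init does when the
    // OCI spec carries an SELinux process label: write the label (here
    // "system_u:system_r:spc_t:s0") to /proc/self/attr/keycreate so kernel
    // keyrings created afterwards inherit it. Without a policy that knows
    // the label, the write fails with EINVAL -- the "invalid argument"
    // in the StartContainer errors above.
    func setKeyCreateLabel(label string) error {
    	return os.WriteFile("/proc/self/attr/keycreate", []byte(label), 0o666)
    }

    func main() {
    	if err := setKeyCreateLabel("system_u:system_r:spc_t:s0"); err != nil {
    		fmt.Println("StartContainer would fail here:", err)
    	}
    }
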
Feb 8 23:32:30.137523 kubelet[1908]: I0208 23:32:30.137481 1908 scope.go:115] "RemoveContainer" containerID="4b1bf584ad1debfb020f066d63531a260272fc547641e6468d18799941c90c6b" Feb 8 23:32:30.137994 kubelet[1908]: I0208 23:32:30.137962 1908 scope.go:115] "RemoveContainer" containerID="4b1bf584ad1debfb020f066d63531a260272fc547641e6468d18799941c90c6b" Feb 8 23:32:30.147010 env[1068]: time="2024-02-08T23:32:30.146746959Z" level=info msg="RemoveContainer for \"4b1bf584ad1debfb020f066d63531a260272fc547641e6468d18799941c90c6b\"" Feb 8 23:32:30.155287 env[1068]: time="2024-02-08T23:32:30.152776232Z" level=info msg="RemoveContainer for \"4b1bf584ad1debfb020f066d63531a260272fc547641e6468d18799941c90c6b\"" Feb 8 23:32:30.158863 env[1068]: time="2024-02-08T23:32:30.158681694Z" level=error msg="RemoveContainer for \"4b1bf584ad1debfb020f066d63531a260272fc547641e6468d18799941c90c6b\" failed" error="rpc error: code = NotFound desc = get container info: container \"4b1bf584ad1debfb020f066d63531a260272fc547641e6468d18799941c90c6b\" in namespace \"k8s.io\": not found" Feb 8 23:32:30.159643 env[1068]: time="2024-02-08T23:32:30.159612416Z" level=info msg="RemoveContainer for \"4b1bf584ad1debfb020f066d63531a260272fc547641e6468d18799941c90c6b\" returns successfully" Feb 8 23:32:30.160015 kubelet[1908]: E0208 23:32:30.159992 1908 remote_runtime.go:368] "RemoveContainer from runtime service failed" err="rpc error: code = NotFound desc = get container info: container \"4b1bf584ad1debfb020f066d63531a260272fc547641e6468d18799941c90c6b\" in namespace \"k8s.io\": not found" containerID="4b1bf584ad1debfb020f066d63531a260272fc547641e6468d18799941c90c6b" Feb 8 23:32:30.160158 kubelet[1908]: E0208 23:32:30.160145 1908 kuberuntime_container.go:817] failed to remove pod init container "mount-cgroup": rpc error: code = NotFound desc = get container info: container "4b1bf584ad1debfb020f066d63531a260272fc547641e6468d18799941c90c6b" in namespace "k8s.io": not found; Skipping pod "cilium-jkghc_kube-system(40d6d52f-5115-47ea-a934-817897fab282)" Feb 8 23:32:30.160667 kubelet[1908]: E0208 23:32:30.160651 1908 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 10s restarting failed container=mount-cgroup pod=cilium-jkghc_kube-system(40d6d52f-5115-47ea-a934-817897fab282)\"" pod="kube-system/cilium-jkghc" podUID=40d6d52f-5115-47ea-a934-817897fab282 Feb 8 23:32:30.443874 sshd[3614]: pam_unix(sshd:session): session closed for user core Feb 8 23:32:30.450398 systemd[1]: Started sshd@24-172.24.4.155:22-172.24.4.1:44856.service. Feb 8 23:32:30.455939 systemd[1]: sshd@23-172.24.4.155:22-172.24.4.1:44840.service: Deactivated successfully. Feb 8 23:32:30.457569 systemd[1]: session-24.scope: Deactivated successfully. Feb 8 23:32:30.460726 systemd-logind[1059]: Session 24 logged out. Waiting for processes to exit. Feb 8 23:32:30.463487 systemd-logind[1059]: Removed session 24. 
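
The pod_workers message above ("back-off 10s restarting failed container") is kubelet's crash-loop backoff, which as far as I know starts at 10s and doubles per consecutive failed restart up to a 5m cap, resetting after a period of stability. A small illustrative Go sketch of that schedule; the constants mirror kubelet defaults as I understand them, not values read from this log:

    package main

    import (
    	"fmt"
    	"time"
    )

    // crashLoopDelay returns a kubelet-style restart delay after n
    // consecutive failures: 10s, doubling per failure, capped at 5m.
    func crashLoopDelay(n int) time.Duration {
    	d := 10 * time.Second
    	for i := 1; i < n && d < 5*time.Minute; i++ {
    		d *= 2
    	}
    	if d > 5*time.Minute {
    		d = 5 * time.Minute
    	}
    	return d
    }

    func main() {
    	for n := 1; n <= 6; n++ {
    		fmt.Printf("failure %d -> back-off %s\n", n, crashLoopDelay(n))
    	}
    	// failure 1 -> back-off 10s ... failure 6 -> back-off 5m0s
    }
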
Feb 8 23:32:31.147782 env[1068]: time="2024-02-08T23:32:31.147710886Z" level=info msg="StopPodSandbox for \"30475538fa50cb3e39a33208ae217e13caf6e53b28fa3de11568c5f427cb13a2\"" Feb 8 23:32:31.151661 env[1068]: time="2024-02-08T23:32:31.151534964Z" level=info msg="Container to stop \"d81402e39ebf118b84f09a58ef7ac68ed40cc774b3e9e370e48a637306cd3429\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 8 23:32:31.155651 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-30475538fa50cb3e39a33208ae217e13caf6e53b28fa3de11568c5f427cb13a2-shm.mount: Deactivated successfully. Feb 8 23:32:31.179487 systemd[1]: cri-containerd-30475538fa50cb3e39a33208ae217e13caf6e53b28fa3de11568c5f427cb13a2.scope: Deactivated successfully. Feb 8 23:32:31.236570 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-30475538fa50cb3e39a33208ae217e13caf6e53b28fa3de11568c5f427cb13a2-rootfs.mount: Deactivated successfully. Feb 8 23:32:31.250113 env[1068]: time="2024-02-08T23:32:31.250051447Z" level=info msg="shim disconnected" id=30475538fa50cb3e39a33208ae217e13caf6e53b28fa3de11568c5f427cb13a2 Feb 8 23:32:31.250113 env[1068]: time="2024-02-08T23:32:31.250110101Z" level=warning msg="cleaning up after shim disconnected" id=30475538fa50cb3e39a33208ae217e13caf6e53b28fa3de11568c5f427cb13a2 namespace=k8s.io Feb 8 23:32:31.250346 env[1068]: time="2024-02-08T23:32:31.250121574Z" level=info msg="cleaning up dead shim" Feb 8 23:32:31.258206 env[1068]: time="2024-02-08T23:32:31.258159213Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:32:31Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3769 runtime=io.containerd.runc.v2\n" Feb 8 23:32:31.258517 env[1068]: time="2024-02-08T23:32:31.258481977Z" level=info msg="TearDown network for sandbox \"30475538fa50cb3e39a33208ae217e13caf6e53b28fa3de11568c5f427cb13a2\" successfully" Feb 8 23:32:31.258559 env[1068]: time="2024-02-08T23:32:31.258512957Z" level=info msg="StopPodSandbox for \"30475538fa50cb3e39a33208ae217e13caf6e53b28fa3de11568c5f427cb13a2\" returns successfully" Feb 8 23:32:31.330665 kubelet[1908]: I0208 23:32:31.328542 1908 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/40d6d52f-5115-47ea-a934-817897fab282-host-proc-sys-net\") pod \"40d6d52f-5115-47ea-a934-817897fab282\" (UID: \"40d6d52f-5115-47ea-a934-817897fab282\") " Feb 8 23:32:31.330665 kubelet[1908]: I0208 23:32:31.328657 1908 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/40d6d52f-5115-47ea-a934-817897fab282-clustermesh-secrets\") pod \"40d6d52f-5115-47ea-a934-817897fab282\" (UID: \"40d6d52f-5115-47ea-a934-817897fab282\") " Feb 8 23:32:31.330665 kubelet[1908]: I0208 23:32:31.328715 1908 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/40d6d52f-5115-47ea-a934-817897fab282-etc-cni-netd\") pod \"40d6d52f-5115-47ea-a934-817897fab282\" (UID: \"40d6d52f-5115-47ea-a934-817897fab282\") " Feb 8 23:32:31.330665 kubelet[1908]: I0208 23:32:31.328774 1908 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/40d6d52f-5115-47ea-a934-817897fab282-cilium-ipsec-secrets\") pod \"40d6d52f-5115-47ea-a934-817897fab282\" (UID: \"40d6d52f-5115-47ea-a934-817897fab282\") " Feb 8 23:32:31.330665 kubelet[1908]: I0208 23:32:31.328847 
1908 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/40d6d52f-5115-47ea-a934-817897fab282-host-proc-sys-kernel\") pod \"40d6d52f-5115-47ea-a934-817897fab282\" (UID: \"40d6d52f-5115-47ea-a934-817897fab282\") " Feb 8 23:32:31.330665 kubelet[1908]: I0208 23:32:31.328901 1908 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/40d6d52f-5115-47ea-a934-817897fab282-bpf-maps\") pod \"40d6d52f-5115-47ea-a934-817897fab282\" (UID: \"40d6d52f-5115-47ea-a934-817897fab282\") " Feb 8 23:32:31.330665 kubelet[1908]: I0208 23:32:31.328984 1908 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/40d6d52f-5115-47ea-a934-817897fab282-cilium-cgroup\") pod \"40d6d52f-5115-47ea-a934-817897fab282\" (UID: \"40d6d52f-5115-47ea-a934-817897fab282\") " Feb 8 23:32:31.330665 kubelet[1908]: I0208 23:32:31.329037 1908 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/40d6d52f-5115-47ea-a934-817897fab282-cilium-run\") pod \"40d6d52f-5115-47ea-a934-817897fab282\" (UID: \"40d6d52f-5115-47ea-a934-817897fab282\") " Feb 8 23:32:31.330665 kubelet[1908]: I0208 23:32:31.329088 1908 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/40d6d52f-5115-47ea-a934-817897fab282-cni-path\") pod \"40d6d52f-5115-47ea-a934-817897fab282\" (UID: \"40d6d52f-5115-47ea-a934-817897fab282\") " Feb 8 23:32:31.330665 kubelet[1908]: I0208 23:32:31.329144 1908 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/40d6d52f-5115-47ea-a934-817897fab282-hubble-tls\") pod \"40d6d52f-5115-47ea-a934-817897fab282\" (UID: \"40d6d52f-5115-47ea-a934-817897fab282\") " Feb 8 23:32:31.330665 kubelet[1908]: I0208 23:32:31.329195 1908 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/40d6d52f-5115-47ea-a934-817897fab282-xtables-lock\") pod \"40d6d52f-5115-47ea-a934-817897fab282\" (UID: \"40d6d52f-5115-47ea-a934-817897fab282\") " Feb 8 23:32:31.330665 kubelet[1908]: I0208 23:32:31.329506 1908 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/40d6d52f-5115-47ea-a934-817897fab282-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "40d6d52f-5115-47ea-a934-817897fab282" (UID: "40d6d52f-5115-47ea-a934-817897fab282"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:32:31.330665 kubelet[1908]: I0208 23:32:31.329574 1908 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/40d6d52f-5115-47ea-a934-817897fab282-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "40d6d52f-5115-47ea-a934-817897fab282" (UID: "40d6d52f-5115-47ea-a934-817897fab282"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:32:31.337348 kubelet[1908]: I0208 23:32:31.334500 1908 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/40d6d52f-5115-47ea-a934-817897fab282-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "40d6d52f-5115-47ea-a934-817897fab282" (UID: "40d6d52f-5115-47ea-a934-817897fab282"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:32:31.338441 kubelet[1908]: I0208 23:32:31.338360 1908 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/40d6d52f-5115-47ea-a934-817897fab282-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "40d6d52f-5115-47ea-a934-817897fab282" (UID: "40d6d52f-5115-47ea-a934-817897fab282"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:32:31.338788 kubelet[1908]: I0208 23:32:31.338737 1908 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/40d6d52f-5115-47ea-a934-817897fab282-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "40d6d52f-5115-47ea-a934-817897fab282" (UID: "40d6d52f-5115-47ea-a934-817897fab282"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:32:31.339142 kubelet[1908]: I0208 23:32:31.339072 1908 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/40d6d52f-5115-47ea-a934-817897fab282-cni-path" (OuterVolumeSpecName: "cni-path") pod "40d6d52f-5115-47ea-a934-817897fab282" (UID: "40d6d52f-5115-47ea-a934-817897fab282"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:32:31.339986 kubelet[1908]: I0208 23:32:31.339925 1908 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/40d6d52f-5115-47ea-a934-817897fab282-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "40d6d52f-5115-47ea-a934-817897fab282" (UID: "40d6d52f-5115-47ea-a934-817897fab282"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:32:31.341483 kubelet[1908]: I0208 23:32:31.340641 1908 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/40d6d52f-5115-47ea-a934-817897fab282-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "40d6d52f-5115-47ea-a934-817897fab282" (UID: "40d6d52f-5115-47ea-a934-817897fab282"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:32:31.341736 kubelet[1908]: I0208 23:32:31.340917 1908 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/40d6d52f-5115-47ea-a934-817897fab282-cilium-config-path\") pod \"40d6d52f-5115-47ea-a934-817897fab282\" (UID: \"40d6d52f-5115-47ea-a934-817897fab282\") " Feb 8 23:32:31.342211 kubelet[1908]: I0208 23:32:31.342184 1908 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/40d6d52f-5115-47ea-a934-817897fab282-lib-modules\") pod \"40d6d52f-5115-47ea-a934-817897fab282\" (UID: \"40d6d52f-5115-47ea-a934-817897fab282\") " Feb 8 23:32:31.342667 kubelet[1908]: I0208 23:32:31.342640 1908 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/40d6d52f-5115-47ea-a934-817897fab282-hostproc\") pod \"40d6d52f-5115-47ea-a934-817897fab282\" (UID: \"40d6d52f-5115-47ea-a934-817897fab282\") " Feb 8 23:32:31.342988 kubelet[1908]: I0208 23:32:31.342962 1908 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m84cx\" (UniqueName: \"kubernetes.io/projected/40d6d52f-5115-47ea-a934-817897fab282-kube-api-access-m84cx\") pod \"40d6d52f-5115-47ea-a934-817897fab282\" (UID: \"40d6d52f-5115-47ea-a934-817897fab282\") " Feb 8 23:32:31.343431 kubelet[1908]: I0208 23:32:31.343404 1908 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/40d6d52f-5115-47ea-a934-817897fab282-etc-cni-netd\") on node \"ci-3510-3-2-9-f62ee4a992.novalocal\" DevicePath \"\"" Feb 8 23:32:31.343743 kubelet[1908]: I0208 23:32:31.343679 1908 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/40d6d52f-5115-47ea-a934-817897fab282-host-proc-sys-kernel\") on node \"ci-3510-3-2-9-f62ee4a992.novalocal\" DevicePath \"\"" Feb 8 23:32:31.344016 kubelet[1908]: I0208 23:32:31.343961 1908 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/40d6d52f-5115-47ea-a934-817897fab282-bpf-maps\") on node \"ci-3510-3-2-9-f62ee4a992.novalocal\" DevicePath \"\"" Feb 8 23:32:31.344324 kubelet[1908]: I0208 23:32:31.344214 1908 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/40d6d52f-5115-47ea-a934-817897fab282-cilium-cgroup\") on node \"ci-3510-3-2-9-f62ee4a992.novalocal\" DevicePath \"\"" Feb 8 23:32:31.344656 kubelet[1908]: I0208 23:32:31.344586 1908 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/40d6d52f-5115-47ea-a934-817897fab282-cilium-run\") on node \"ci-3510-3-2-9-f62ee4a992.novalocal\" DevicePath \"\"" Feb 8 23:32:31.351342 kubelet[1908]: W0208 23:32:31.341197 1908 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/40d6d52f-5115-47ea-a934-817897fab282/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Feb 8 23:32:31.351342 kubelet[1908]: I0208 23:32:31.350654 1908 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/40d6d52f-5115-47ea-a934-817897fab282-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "40d6d52f-5115-47ea-a934-817897fab282" (UID: "40d6d52f-5115-47ea-a934-817897fab282"). 
InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:32:31.351342 kubelet[1908]: I0208 23:32:31.350704 1908 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/40d6d52f-5115-47ea-a934-817897fab282-hostproc" (OuterVolumeSpecName: "hostproc") pod "40d6d52f-5115-47ea-a934-817897fab282" (UID: "40d6d52f-5115-47ea-a934-817897fab282"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:32:31.353349 kubelet[1908]: I0208 23:32:31.353317 1908 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/40d6d52f-5115-47ea-a934-817897fab282-cni-path\") on node \"ci-3510-3-2-9-f62ee4a992.novalocal\" DevicePath \"\"" Feb 8 23:32:31.353590 kubelet[1908]: I0208 23:32:31.353537 1908 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/40d6d52f-5115-47ea-a934-817897fab282-xtables-lock\") on node \"ci-3510-3-2-9-f62ee4a992.novalocal\" DevicePath \"\"" Feb 8 23:32:31.353835 kubelet[1908]: I0208 23:32:31.353785 1908 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/40d6d52f-5115-47ea-a934-817897fab282-host-proc-sys-net\") on node \"ci-3510-3-2-9-f62ee4a992.novalocal\" DevicePath \"\"" Feb 8 23:32:31.359885 systemd[1]: var-lib-kubelet-pods-40d6d52f\x2d5115\x2d47ea\x2da934\x2d817897fab282-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 8 23:32:31.363087 kubelet[1908]: I0208 23:32:31.362986 1908 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/40d6d52f-5115-47ea-a934-817897fab282-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "40d6d52f-5115-47ea-a934-817897fab282" (UID: "40d6d52f-5115-47ea-a934-817897fab282"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 8 23:32:31.363332 kubelet[1908]: I0208 23:32:31.363205 1908 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/40d6d52f-5115-47ea-a934-817897fab282-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "40d6d52f-5115-47ea-a934-817897fab282" (UID: "40d6d52f-5115-47ea-a934-817897fab282"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 8 23:32:31.377910 kubelet[1908]: I0208 23:32:31.377823 1908 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/40d6d52f-5115-47ea-a934-817897fab282-kube-api-access-m84cx" (OuterVolumeSpecName: "kube-api-access-m84cx") pod "40d6d52f-5115-47ea-a934-817897fab282" (UID: "40d6d52f-5115-47ea-a934-817897fab282"). InnerVolumeSpecName "kube-api-access-m84cx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 8 23:32:31.379733 systemd[1]: var-lib-kubelet-pods-40d6d52f\x2d5115\x2d47ea\x2da934\x2d817897fab282-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dm84cx.mount: Deactivated successfully. Feb 8 23:32:31.380000 systemd[1]: var-lib-kubelet-pods-40d6d52f\x2d5115\x2d47ea\x2da934\x2d817897fab282-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 8 23:32:31.393092 systemd[1]: var-lib-kubelet-pods-40d6d52f\x2d5115\x2d47ea\x2da934\x2d817897fab282-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. 
Feb 8 23:32:31.396570 kubelet[1908]: I0208 23:32:31.396524 1908 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40d6d52f-5115-47ea-a934-817897fab282-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "40d6d52f-5115-47ea-a934-817897fab282" (UID: "40d6d52f-5115-47ea-a934-817897fab282"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 8 23:32:31.397128 kubelet[1908]: I0208 23:32:31.397108 1908 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40d6d52f-5115-47ea-a934-817897fab282-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "40d6d52f-5115-47ea-a934-817897fab282" (UID: "40d6d52f-5115-47ea-a934-817897fab282"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 8 23:32:31.454790 kubelet[1908]: I0208 23:32:31.454586 1908 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/40d6d52f-5115-47ea-a934-817897fab282-hostproc\") on node \"ci-3510-3-2-9-f62ee4a992.novalocal\" DevicePath \"\"" Feb 8 23:32:31.455149 kubelet[1908]: I0208 23:32:31.455096 1908 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-m84cx\" (UniqueName: \"kubernetes.io/projected/40d6d52f-5115-47ea-a934-817897fab282-kube-api-access-m84cx\") on node \"ci-3510-3-2-9-f62ee4a992.novalocal\" DevicePath \"\"" Feb 8 23:32:31.455482 kubelet[1908]: I0208 23:32:31.455409 1908 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/40d6d52f-5115-47ea-a934-817897fab282-lib-modules\") on node \"ci-3510-3-2-9-f62ee4a992.novalocal\" DevicePath \"\"" Feb 8 23:32:31.455787 kubelet[1908]: I0208 23:32:31.455735 1908 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/40d6d52f-5115-47ea-a934-817897fab282-clustermesh-secrets\") on node \"ci-3510-3-2-9-f62ee4a992.novalocal\" DevicePath \"\"" Feb 8 23:32:31.456076 kubelet[1908]: I0208 23:32:31.456045 1908 reconciler_common.go:300] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/40d6d52f-5115-47ea-a934-817897fab282-cilium-ipsec-secrets\") on node \"ci-3510-3-2-9-f62ee4a992.novalocal\" DevicePath \"\"" Feb 8 23:32:31.456385 kubelet[1908]: I0208 23:32:31.456324 1908 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/40d6d52f-5115-47ea-a934-817897fab282-hubble-tls\") on node \"ci-3510-3-2-9-f62ee4a992.novalocal\" DevicePath \"\"" Feb 8 23:32:31.456697 kubelet[1908]: I0208 23:32:31.456639 1908 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/40d6d52f-5115-47ea-a934-817897fab282-cilium-config-path\") on node \"ci-3510-3-2-9-f62ee4a992.novalocal\" DevicePath \"\"" Feb 8 23:32:31.792051 sshd[3746]: Accepted publickey for core from 172.24.4.1 port 44856 ssh2: RSA SHA256:hSCdy28aHh0WFAXHFi8tWlQhiCOOiQrn91fhtzGNenI Feb 8 23:32:31.794962 sshd[3746]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:32:31.805911 systemd-logind[1059]: New session 25 of user core. Feb 8 23:32:31.806929 systemd[1]: Started session-25.scope. 
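[Editor's note] The reconciler_common.go lines above trace kubelet's teardown pipeline for the removed pod 40d6d52f-5115-47ea-a934-817897fab282: "UnmountVolume started" when the reconciler finds a mounted volume missing from the desired state, "UnmountVolume.TearDown succeeded" from the operation generator, and finally "Volume detached". A minimal sketch of that loop, using simplified stand-in types rather than kubelet's real volumemanager caches:

    package main

    import "fmt"

    // Hypothetical, simplified stand-ins for kubelet's desired/actual
    // state-of-world caches (the real types live in pkg/kubelet/volumemanager).
    type mountedVolume struct{ podUID, volumeName string }

    type world struct {
        desired map[string]bool // volumes that should stay mounted
        actual  []mountedVolume // volumes currently mounted
    }

    // reconcile tears down anything the desired state no longer references,
    // mirroring the UnmountVolume started / TearDown succeeded / Volume
    // detached progression in the log above.
    func (w *world) reconcile(unmount func(mountedVolume) error) {
        for _, v := range w.actual {
            if w.desired[v.volumeName] {
                continue // still wanted, leave it mounted
            }
            fmt.Printf("UnmountVolume started for volume %q pod %q\n", v.volumeName, v.podUID)
            if err := unmount(v); err != nil {
                fmt.Printf("UnmountVolume failed, will retry: %v\n", err)
                continue
            }
            fmt.Printf("Volume detached for volume %q\n", v.volumeName)
        }
    }

    func main() {
        w := &world{
            desired: map[string]bool{}, // pod deleted: nothing is desired
            actual: []mountedVolume{
                {podUID: "40d6d52f-5115-47ea-a934-817897fab282", volumeName: "lib-modules"},
                {podUID: "40d6d52f-5115-47ea-a934-817897fab282", volumeName: "hostproc"},
            },
        }
        w.reconcile(func(mountedVolume) error { return nil }) // TearDown always succeeds here
    }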
Feb 8 23:32:31.839585 kubelet[1908]: W0208 23:32:31.839326 1908 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod40d6d52f_5115_47ea_a934_817897fab282.slice/cri-containerd-4b1bf584ad1debfb020f066d63531a260272fc547641e6468d18799941c90c6b.scope WatchSource:0}: container "4b1bf584ad1debfb020f066d63531a260272fc547641e6468d18799941c90c6b" in namespace "k8s.io": not found Feb 8 23:32:32.151333 kubelet[1908]: I0208 23:32:32.150584 1908 scope.go:115] "RemoveContainer" containerID="d81402e39ebf118b84f09a58ef7ac68ed40cc774b3e9e370e48a637306cd3429" Feb 8 23:32:32.158104 env[1068]: time="2024-02-08T23:32:32.158011132Z" level=info msg="RemoveContainer for \"d81402e39ebf118b84f09a58ef7ac68ed40cc774b3e9e370e48a637306cd3429\"" Feb 8 23:32:32.162083 systemd[1]: Removed slice kubepods-burstable-pod40d6d52f_5115_47ea_a934_817897fab282.slice. Feb 8 23:32:32.217185 env[1068]: time="2024-02-08T23:32:32.216918427Z" level=info msg="RemoveContainer for \"d81402e39ebf118b84f09a58ef7ac68ed40cc774b3e9e370e48a637306cd3429\" returns successfully" Feb 8 23:32:32.284999 kubelet[1908]: I0208 23:32:32.284966 1908 topology_manager.go:212] "Topology Admit Handler" Feb 8 23:32:32.285270 kubelet[1908]: E0208 23:32:32.285256 1908 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="40d6d52f-5115-47ea-a934-817897fab282" containerName="mount-cgroup" Feb 8 23:32:32.285386 kubelet[1908]: I0208 23:32:32.285369 1908 memory_manager.go:346] "RemoveStaleState removing state" podUID="40d6d52f-5115-47ea-a934-817897fab282" containerName="mount-cgroup" Feb 8 23:32:32.285499 kubelet[1908]: E0208 23:32:32.285487 1908 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="40d6d52f-5115-47ea-a934-817897fab282" containerName="mount-cgroup" Feb 8 23:32:32.285616 kubelet[1908]: I0208 23:32:32.285604 1908 memory_manager.go:346] "RemoveStaleState removing state" podUID="40d6d52f-5115-47ea-a934-817897fab282" containerName="mount-cgroup" Feb 8 23:32:32.291149 systemd[1]: Created slice kubepods-burstable-pod9888b744_f80a_4690_b68d_c26fff3e6865.slice. 
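[Editor's note] The paired cpu_manager / memory_manager messages show admission of the replacement pod first purging resource-manager state still keyed to the deleted pod's mount-cgroup container. A sketch of that stale-state sweep, assuming a bare in-memory assignment map instead of kubelet's checkpointed stores:

    package main

    import "fmt"

    // removeStaleState drops any per-container assignment whose pod is no
    // longer active, the cleanup the RemoveStaleState log lines report.
    func removeStaleState(assignments map[string]map[string]string, activePods map[string]bool) {
        for podUID, containers := range assignments {
            if activePods[podUID] {
                continue
            }
            for name := range containers {
                fmt.Printf("RemoveStaleState: removing container pod=%s container=%s\n", podUID, name)
            }
            delete(assignments, podUID)
        }
    }

    func main() {
        assignments := map[string]map[string]string{
            "40d6d52f-5115-47ea-a934-817897fab282": {"mount-cgroup": "cpuset:0-1"}, // leftover state
        }
        active := map[string]bool{"9888b744-f80a-4690-b68d-c26fff3e6865": true} // the new pod
        removeStaleState(assignments, active)
    }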
Feb 8 23:32:32.369413 kubelet[1908]: I0208 23:32:32.369288 1908 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9888b744-f80a-4690-b68d-c26fff3e6865-cni-path\") pod \"cilium-bmqdq\" (UID: \"9888b744-f80a-4690-b68d-c26fff3e6865\") " pod="kube-system/cilium-bmqdq" Feb 8 23:32:32.369897 kubelet[1908]: I0208 23:32:32.369885 1908 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9888b744-f80a-4690-b68d-c26fff3e6865-lib-modules\") pod \"cilium-bmqdq\" (UID: \"9888b744-f80a-4690-b68d-c26fff3e6865\") " pod="kube-system/cilium-bmqdq" Feb 8 23:32:32.370051 kubelet[1908]: I0208 23:32:32.370026 1908 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9888b744-f80a-4690-b68d-c26fff3e6865-cilium-run\") pod \"cilium-bmqdq\" (UID: \"9888b744-f80a-4690-b68d-c26fff3e6865\") " pod="kube-system/cilium-bmqdq" Feb 8 23:32:32.370250 kubelet[1908]: I0208 23:32:32.370180 1908 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9888b744-f80a-4690-b68d-c26fff3e6865-cilium-cgroup\") pod \"cilium-bmqdq\" (UID: \"9888b744-f80a-4690-b68d-c26fff3e6865\") " pod="kube-system/cilium-bmqdq" Feb 8 23:32:32.370374 kubelet[1908]: I0208 23:32:32.370363 1908 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9888b744-f80a-4690-b68d-c26fff3e6865-xtables-lock\") pod \"cilium-bmqdq\" (UID: \"9888b744-f80a-4690-b68d-c26fff3e6865\") " pod="kube-system/cilium-bmqdq" Feb 8 23:32:32.370508 kubelet[1908]: I0208 23:32:32.370496 1908 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9888b744-f80a-4690-b68d-c26fff3e6865-cilium-config-path\") pod \"cilium-bmqdq\" (UID: \"9888b744-f80a-4690-b68d-c26fff3e6865\") " pod="kube-system/cilium-bmqdq" Feb 8 23:32:32.370635 kubelet[1908]: I0208 23:32:32.370625 1908 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9888b744-f80a-4690-b68d-c26fff3e6865-host-proc-sys-net\") pod \"cilium-bmqdq\" (UID: \"9888b744-f80a-4690-b68d-c26fff3e6865\") " pod="kube-system/cilium-bmqdq" Feb 8 23:32:32.370791 kubelet[1908]: I0208 23:32:32.370769 1908 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9888b744-f80a-4690-b68d-c26fff3e6865-bpf-maps\") pod \"cilium-bmqdq\" (UID: \"9888b744-f80a-4690-b68d-c26fff3e6865\") " pod="kube-system/cilium-bmqdq" Feb 8 23:32:32.370923 kubelet[1908]: I0208 23:32:32.370913 1908 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9888b744-f80a-4690-b68d-c26fff3e6865-etc-cni-netd\") pod \"cilium-bmqdq\" (UID: \"9888b744-f80a-4690-b68d-c26fff3e6865\") " pod="kube-system/cilium-bmqdq" Feb 8 23:32:32.371070 kubelet[1908]: I0208 23:32:32.371046 1908 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: 
\"kubernetes.io/secret/9888b744-f80a-4690-b68d-c26fff3e6865-clustermesh-secrets\") pod \"cilium-bmqdq\" (UID: \"9888b744-f80a-4690-b68d-c26fff3e6865\") " pod="kube-system/cilium-bmqdq" Feb 8 23:32:32.371194 kubelet[1908]: I0208 23:32:32.371183 1908 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/9888b744-f80a-4690-b68d-c26fff3e6865-cilium-ipsec-secrets\") pod \"cilium-bmqdq\" (UID: \"9888b744-f80a-4690-b68d-c26fff3e6865\") " pod="kube-system/cilium-bmqdq" Feb 8 23:32:32.371339 kubelet[1908]: I0208 23:32:32.371329 1908 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qppwd\" (UniqueName: \"kubernetes.io/projected/9888b744-f80a-4690-b68d-c26fff3e6865-kube-api-access-qppwd\") pod \"cilium-bmqdq\" (UID: \"9888b744-f80a-4690-b68d-c26fff3e6865\") " pod="kube-system/cilium-bmqdq" Feb 8 23:32:32.371469 kubelet[1908]: I0208 23:32:32.371459 1908 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9888b744-f80a-4690-b68d-c26fff3e6865-host-proc-sys-kernel\") pod \"cilium-bmqdq\" (UID: \"9888b744-f80a-4690-b68d-c26fff3e6865\") " pod="kube-system/cilium-bmqdq" Feb 8 23:32:32.371606 kubelet[1908]: I0208 23:32:32.371595 1908 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9888b744-f80a-4690-b68d-c26fff3e6865-hostproc\") pod \"cilium-bmqdq\" (UID: \"9888b744-f80a-4690-b68d-c26fff3e6865\") " pod="kube-system/cilium-bmqdq" Feb 8 23:32:32.371737 kubelet[1908]: I0208 23:32:32.371727 1908 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9888b744-f80a-4690-b68d-c26fff3e6865-hubble-tls\") pod \"cilium-bmqdq\" (UID: \"9888b744-f80a-4690-b68d-c26fff3e6865\") " pod="kube-system/cilium-bmqdq" Feb 8 23:32:32.433901 kubelet[1908]: I0208 23:32:32.433811 1908 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=40d6d52f-5115-47ea-a934-817897fab282 path="/var/lib/kubelet/pods/40d6d52f-5115-47ea-a934-817897fab282/volumes" Feb 8 23:32:32.595438 env[1068]: time="2024-02-08T23:32:32.594663679Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-bmqdq,Uid:9888b744-f80a-4690-b68d-c26fff3e6865,Namespace:kube-system,Attempt:0,}" Feb 8 23:32:32.619914 env[1068]: time="2024-02-08T23:32:32.619599151Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 8 23:32:32.619914 env[1068]: time="2024-02-08T23:32:32.619664007Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 8 23:32:32.619914 env[1068]: time="2024-02-08T23:32:32.619688374Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 8 23:32:32.620510 env[1068]: time="2024-02-08T23:32:32.620366354Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b9be50992a01bbb4a5f9473e2517bed4d080e3fc2749f353a4f8424ee35903f4 pid=3806 runtime=io.containerd.runc.v2 Feb 8 23:32:32.640162 systemd[1]: Started cri-containerd-b9be50992a01bbb4a5f9473e2517bed4d080e3fc2749f353a4f8424ee35903f4.scope. 
Feb 8 23:32:32.694574 env[1068]: time="2024-02-08T23:32:32.694321920Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-bmqdq,Uid:9888b744-f80a-4690-b68d-c26fff3e6865,Namespace:kube-system,Attempt:0,} returns sandbox id \"b9be50992a01bbb4a5f9473e2517bed4d080e3fc2749f353a4f8424ee35903f4\"" Feb 8 23:32:32.701056 env[1068]: time="2024-02-08T23:32:32.700000898Z" level=info msg="CreateContainer within sandbox \"b9be50992a01bbb4a5f9473e2517bed4d080e3fc2749f353a4f8424ee35903f4\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 8 23:32:32.725402 env[1068]: time="2024-02-08T23:32:32.725300907Z" level=info msg="CreateContainer within sandbox \"b9be50992a01bbb4a5f9473e2517bed4d080e3fc2749f353a4f8424ee35903f4\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"0309a672575301fc470e98795557dea4dd42100c1edbd7fcc5f08f7d44fe2a3b\"" Feb 8 23:32:32.726093 env[1068]: time="2024-02-08T23:32:32.726017703Z" level=info msg="StartContainer for \"0309a672575301fc470e98795557dea4dd42100c1edbd7fcc5f08f7d44fe2a3b\"" Feb 8 23:32:32.756737 systemd[1]: Started cri-containerd-0309a672575301fc470e98795557dea4dd42100c1edbd7fcc5f08f7d44fe2a3b.scope. Feb 8 23:32:32.806308 env[1068]: time="2024-02-08T23:32:32.806255402Z" level=info msg="StartContainer for \"0309a672575301fc470e98795557dea4dd42100c1edbd7fcc5f08f7d44fe2a3b\" returns successfully" Feb 8 23:32:32.831418 systemd[1]: cri-containerd-0309a672575301fc470e98795557dea4dd42100c1edbd7fcc5f08f7d44fe2a3b.scope: Deactivated successfully. Feb 8 23:32:32.864379 env[1068]: time="2024-02-08T23:32:32.864301774Z" level=info msg="shim disconnected" id=0309a672575301fc470e98795557dea4dd42100c1edbd7fcc5f08f7d44fe2a3b Feb 8 23:32:32.864379 env[1068]: time="2024-02-08T23:32:32.864369104Z" level=warning msg="cleaning up after shim disconnected" id=0309a672575301fc470e98795557dea4dd42100c1edbd7fcc5f08f7d44fe2a3b namespace=k8s.io Feb 8 23:32:32.864379 env[1068]: time="2024-02-08T23:32:32.864381789Z" level=info msg="cleaning up dead shim" Feb 8 23:32:32.872426 env[1068]: time="2024-02-08T23:32:32.872387474Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:32:32Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3888 runtime=io.containerd.runc.v2\n" Feb 8 23:32:33.172332 env[1068]: time="2024-02-08T23:32:33.170632197Z" level=info msg="CreateContainer within sandbox \"b9be50992a01bbb4a5f9473e2517bed4d080e3fc2749f353a4f8424ee35903f4\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 8 23:32:33.209570 env[1068]: time="2024-02-08T23:32:33.209416759Z" level=info msg="CreateContainer within sandbox \"b9be50992a01bbb4a5f9473e2517bed4d080e3fc2749f353a4f8424ee35903f4\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"93cc94db48a78bdc384e36a5a8a9f2fae1e6324060fbfcf711e1fc1dd898933c\"" Feb 8 23:32:33.211715 env[1068]: time="2024-02-08T23:32:33.211659211Z" level=info msg="StartContainer for \"93cc94db48a78bdc384e36a5a8a9f2fae1e6324060fbfcf711e1fc1dd898933c\"" Feb 8 23:32:33.244628 systemd[1]: Started cri-containerd-93cc94db48a78bdc384e36a5a8a9f2fae1e6324060fbfcf711e1fc1dd898933c.scope. Feb 8 23:32:33.298021 env[1068]: time="2024-02-08T23:32:33.297926878Z" level=info msg="StartContainer for \"93cc94db48a78bdc384e36a5a8a9f2fae1e6324060fbfcf711e1fc1dd898933c\" returns successfully" Feb 8 23:32:33.324966 systemd[1]: cri-containerd-93cc94db48a78bdc384e36a5a8a9f2fae1e6324060fbfcf711e1fc1dd898933c.scope: Deactivated successfully. 
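[Editor's note] The mount-cgroup records show the standard lifecycle of a Cilium init container: CreateContainer within the sandbox, StartContainer, then the transient scope deactivating once the process exits and containerd reaping the dead shim. The CRI pair behind the first two messages, sketched with assumed image and command values (the log names neither):

    package crisketch

    import (
        "context"

        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    // createAndStart mirrors the CreateContainer-then-StartContainer pair in
    // the log; a hedged sketch, not kubelet's actual code path.
    func createAndStart(ctx context.Context, client runtimeapi.RuntimeServiceClient,
        sandboxID string, sandboxCfg *runtimeapi.PodSandboxConfig) (string, error) {

        created, err := client.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
            PodSandboxId: sandboxID,
            Config: &runtimeapi.ContainerConfig{
                Metadata: &runtimeapi.ContainerMetadata{Name: "mount-cgroup", Attempt: 0},
                // Image and command are assumptions; the log does not record them.
                Image:   &runtimeapi.ImageSpec{Image: "quay.io/cilium/cilium:<tag>"},
                Command: []string{"sh", "-c", "..."},
            },
            SandboxConfig: sandboxCfg,
        })
        if err != nil {
            return "", err
        }
        _, err = client.StartContainer(ctx, &runtimeapi.StartContainerRequest{
            ContainerId: created.ContainerId,
        })
        return created.ContainerId, err
    }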
Feb 8 23:32:33.367538 env[1068]: time="2024-02-08T23:32:33.366255722Z" level=info msg="shim disconnected" id=93cc94db48a78bdc384e36a5a8a9f2fae1e6324060fbfcf711e1fc1dd898933c Feb 8 23:32:33.367538 env[1068]: time="2024-02-08T23:32:33.366327240Z" level=warning msg="cleaning up after shim disconnected" id=93cc94db48a78bdc384e36a5a8a9f2fae1e6324060fbfcf711e1fc1dd898933c namespace=k8s.io Feb 8 23:32:33.367538 env[1068]: time="2024-02-08T23:32:33.366345034Z" level=info msg="cleaning up dead shim" Feb 8 23:32:33.378359 env[1068]: time="2024-02-08T23:32:33.378310528Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:32:33Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3951 runtime=io.containerd.runc.v2\n" Feb 8 23:32:34.177770 env[1068]: time="2024-02-08T23:32:34.177693918Z" level=info msg="CreateContainer within sandbox \"b9be50992a01bbb4a5f9473e2517bed4d080e3fc2749f353a4f8424ee35903f4\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 8 23:32:34.215977 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount594735626.mount: Deactivated successfully. Feb 8 23:32:34.234793 env[1068]: time="2024-02-08T23:32:34.234675915Z" level=info msg="CreateContainer within sandbox \"b9be50992a01bbb4a5f9473e2517bed4d080e3fc2749f353a4f8424ee35903f4\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"82a6298701a8da6987b5525afd90fc348edc0eeac5b44ce606c510371203a0e7\"" Feb 8 23:32:34.236793 env[1068]: time="2024-02-08T23:32:34.236730682Z" level=info msg="StartContainer for \"82a6298701a8da6987b5525afd90fc348edc0eeac5b44ce606c510371203a0e7\"" Feb 8 23:32:34.270396 systemd[1]: Started cri-containerd-82a6298701a8da6987b5525afd90fc348edc0eeac5b44ce606c510371203a0e7.scope. Feb 8 23:32:34.316920 env[1068]: time="2024-02-08T23:32:34.316831046Z" level=info msg="StartContainer for \"82a6298701a8da6987b5525afd90fc348edc0eeac5b44ce606c510371203a0e7\" returns successfully" Feb 8 23:32:34.321694 systemd[1]: cri-containerd-82a6298701a8da6987b5525afd90fc348edc0eeac5b44ce606c510371203a0e7.scope: Deactivated successfully. Feb 8 23:32:34.359696 env[1068]: time="2024-02-08T23:32:34.359632279Z" level=info msg="shim disconnected" id=82a6298701a8da6987b5525afd90fc348edc0eeac5b44ce606c510371203a0e7 Feb 8 23:32:34.359958 env[1068]: time="2024-02-08T23:32:34.359938140Z" level=warning msg="cleaning up after shim disconnected" id=82a6298701a8da6987b5525afd90fc348edc0eeac5b44ce606c510371203a0e7 namespace=k8s.io Feb 8 23:32:34.360056 env[1068]: time="2024-02-08T23:32:34.360040887Z" level=info msg="cleaning up dead shim" Feb 8 23:32:34.374320 env[1068]: time="2024-02-08T23:32:34.374278982Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:32:34Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4010 runtime=io.containerd.runc.v2\n" Feb 8 23:32:34.481830 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-82a6298701a8da6987b5525afd90fc348edc0eeac5b44ce606c510371203a0e7-rootfs.mount: Deactivated successfully. 
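[Editor's note] Each "Deactivated successfully" / "shim disconnected" pair marks one init container running to completion before the next in the chain (apply-sysctl-overwrites here, mount-bpf-fs below) is created; kubelet proceeds only after it has observed the exit. A polling sketch of that observation; the real kubelet reacts to runtime events rather than polling:

    package crisketch

    import (
        "context"
        "time"

        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    // waitExited polls ContainerStatus until the container leaves the running
    // state, returning its exit code. Illustrates the sequencing only.
    func waitExited(ctx context.Context, client runtimeapi.RuntimeServiceClient, id string) (int32, error) {
        for {
            resp, err := client.ContainerStatus(ctx, &runtimeapi.ContainerStatusRequest{ContainerId: id})
            if err != nil {
                return 0, err
            }
            if resp.Status.State == runtimeapi.ContainerState_CONTAINER_EXITED {
                return resp.Status.ExitCode, nil
            }
            select {
            case <-ctx.Done():
                return 0, ctx.Err()
            case <-time.After(500 * time.Millisecond):
            }
        }
    }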
Feb 8 23:32:34.616688 kubelet[1908]: E0208 23:32:34.616589 1908 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 8 23:32:34.951968 kubelet[1908]: W0208 23:32:34.951863 1908 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod40d6d52f_5115_47ea_a934_817897fab282.slice/cri-containerd-d81402e39ebf118b84f09a58ef7ac68ed40cc774b3e9e370e48a637306cd3429.scope WatchSource:0}: container "d81402e39ebf118b84f09a58ef7ac68ed40cc774b3e9e370e48a637306cd3429" in namespace "k8s.io": not found Feb 8 23:32:35.193277 env[1068]: time="2024-02-08T23:32:35.191699820Z" level=info msg="CreateContainer within sandbox \"b9be50992a01bbb4a5f9473e2517bed4d080e3fc2749f353a4f8424ee35903f4\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 8 23:32:35.288562 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1869419473.mount: Deactivated successfully. Feb 8 23:32:35.305602 env[1068]: time="2024-02-08T23:32:35.305489676Z" level=info msg="CreateContainer within sandbox \"b9be50992a01bbb4a5f9473e2517bed4d080e3fc2749f353a4f8424ee35903f4\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"715b6896e75e6788bdf56c9e003638e838b63073317f533a5502a9a83280ef22\"" Feb 8 23:32:35.307280 env[1068]: time="2024-02-08T23:32:35.307170448Z" level=info msg="StartContainer for \"715b6896e75e6788bdf56c9e003638e838b63073317f533a5502a9a83280ef22\"" Feb 8 23:32:35.367637 systemd[1]: Started cri-containerd-715b6896e75e6788bdf56c9e003638e838b63073317f533a5502a9a83280ef22.scope. Feb 8 23:32:35.411643 systemd[1]: cri-containerd-715b6896e75e6788bdf56c9e003638e838b63073317f533a5502a9a83280ef22.scope: Deactivated successfully. Feb 8 23:32:35.414643 env[1068]: time="2024-02-08T23:32:35.414557642Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9888b744_f80a_4690_b68d_c26fff3e6865.slice/cri-containerd-715b6896e75e6788bdf56c9e003638e838b63073317f533a5502a9a83280ef22.scope/memory.events\": no such file or directory" Feb 8 23:32:35.424630 env[1068]: time="2024-02-08T23:32:35.424531682Z" level=info msg="StartContainer for \"715b6896e75e6788bdf56c9e003638e838b63073317f533a5502a9a83280ef22\" returns successfully" Feb 8 23:32:35.462472 env[1068]: time="2024-02-08T23:32:35.462427545Z" level=info msg="shim disconnected" id=715b6896e75e6788bdf56c9e003638e838b63073317f533a5502a9a83280ef22 Feb 8 23:32:35.462734 env[1068]: time="2024-02-08T23:32:35.462715239Z" level=warning msg="cleaning up after shim disconnected" id=715b6896e75e6788bdf56c9e003638e838b63073317f533a5502a9a83280ef22 namespace=k8s.io Feb 8 23:32:35.462807 env[1068]: time="2024-02-08T23:32:35.462792128Z" level=info msg="cleaning up dead shim" Feb 8 23:32:35.471049 env[1068]: time="2024-02-08T23:32:35.471006714Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:32:35Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4063 runtime=io.containerd.runc.v2\n" Feb 8 23:32:35.481971 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-715b6896e75e6788bdf56c9e003638e838b63073317f533a5502a9a83280ef22-rootfs.mount: Deactivated successfully. 
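[Editor's note] The kubelet.go:2760 error repeats until the cilium-agent container started below installs a CNI configuration; kubelet derives the condition from the NetworkReady entry in the runtime's CRI Status response. A hedged sketch of that check (the tie to this exact log line is an inference):

    package crisketch

    import (
        "context"

        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    // networkReady asks the runtime for its status and returns the
    // "NetworkReady" condition, the signal behind the log line above.
    func networkReady(ctx context.Context, client runtimeapi.RuntimeServiceClient) (bool, error) {
        resp, err := client.Status(ctx, &runtimeapi.StatusRequest{})
        if err != nil {
            return false, err
        }
        for _, cond := range resp.Status.Conditions {
            if cond.Type == "NetworkReady" {
                return cond.Status, nil
            }
        }
        return false, nil
    }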
Feb 8 23:32:36.200834 env[1068]: time="2024-02-08T23:32:36.200771436Z" level=info msg="CreateContainer within sandbox \"b9be50992a01bbb4a5f9473e2517bed4d080e3fc2749f353a4f8424ee35903f4\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 8 23:32:36.244793 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2959698506.mount: Deactivated successfully. Feb 8 23:32:36.265580 env[1068]: time="2024-02-08T23:32:36.265425992Z" level=info msg="CreateContainer within sandbox \"b9be50992a01bbb4a5f9473e2517bed4d080e3fc2749f353a4f8424ee35903f4\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"277d571f6eabd5d1c456f34fd35bc7a4839145ff8bf47d00b356317548855aba\"" Feb 8 23:32:36.267023 env[1068]: time="2024-02-08T23:32:36.266880566Z" level=info msg="StartContainer for \"277d571f6eabd5d1c456f34fd35bc7a4839145ff8bf47d00b356317548855aba\"" Feb 8 23:32:36.319363 systemd[1]: Started cri-containerd-277d571f6eabd5d1c456f34fd35bc7a4839145ff8bf47d00b356317548855aba.scope. Feb 8 23:32:36.372644 env[1068]: time="2024-02-08T23:32:36.372592218Z" level=info msg="StartContainer for \"277d571f6eabd5d1c456f34fd35bc7a4839145ff8bf47d00b356317548855aba\" returns successfully" Feb 8 23:32:37.504630 kernel: cryptd: max_cpu_qlen set to 1000 Feb 8 23:32:37.556260 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm_base(ctr(aes-generic),ghash-generic)))) Feb 8 23:32:38.068527 kubelet[1908]: W0208 23:32:38.068364 1908 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9888b744_f80a_4690_b68d_c26fff3e6865.slice/cri-containerd-0309a672575301fc470e98795557dea4dd42100c1edbd7fcc5f08f7d44fe2a3b.scope WatchSource:0}: task 0309a672575301fc470e98795557dea4dd42100c1edbd7fcc5f08f7d44fe2a3b not found: not found Feb 8 23:32:38.661934 systemd[1]: run-containerd-runc-k8s.io-277d571f6eabd5d1c456f34fd35bc7a4839145ff8bf47d00b356317548855aba-runc.JRKQZY.mount: Deactivated successfully. Feb 8 23:32:40.593505 systemd-networkd[976]: lxc_health: Link UP Feb 8 23:32:40.618571 systemd-networkd[976]: lxc_health: Gained carrier Feb 8 23:32:40.619253 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Feb 8 23:32:40.641298 kubelet[1908]: I0208 23:32:40.641265 1908 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-bmqdq" podStartSLOduration=8.640335955 podCreationTimestamp="2024-02-08 23:32:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-08 23:32:37.245613769 +0000 UTC m=+153.115447884" watchObservedRunningTime="2024-02-08 23:32:40.640335955 +0000 UTC m=+156.510170060" Feb 8 23:32:40.960469 systemd[1]: run-containerd-runc-k8s.io-277d571f6eabd5d1c456f34fd35bc7a4839145ff8bf47d00b356317548855aba-runc.46ZHZP.mount: Deactivated successfully. 
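[Editor's note] The pod_startup_latency_tracker line below is plain arithmetic: with no image pulls (zero-value firstStartedPulling/lastFinishedPulling), podStartSLOduration is observedRunningTime minus podCreationTimestamp, i.e. 23:32:40.640335955 - 23:32:32 = 8.640335955 s. Reproduced in Go:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Layout matching Go's default time.Time formatting used in the log.
        const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
        created, _ := time.Parse(layout, "2024-02-08 23:32:32 +0000 UTC")
        running, _ := time.Parse(layout, "2024-02-08 23:32:40.640335955 +0000 UTC")
        fmt.Println(running.Sub(created).Seconds()) // 8.640335955
    }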
Feb 8 23:32:41.180094 kubelet[1908]: W0208 23:32:41.180050 1908 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9888b744_f80a_4690_b68d_c26fff3e6865.slice/cri-containerd-93cc94db48a78bdc384e36a5a8a9f2fae1e6324060fbfcf711e1fc1dd898933c.scope WatchSource:0}: task 93cc94db48a78bdc384e36a5a8a9f2fae1e6324060fbfcf711e1fc1dd898933c not found: not found Feb 8 23:32:41.882426 systemd-networkd[976]: lxc_health: Gained IPv6LL Feb 8 23:32:43.199718 systemd[1]: run-containerd-runc-k8s.io-277d571f6eabd5d1c456f34fd35bc7a4839145ff8bf47d00b356317548855aba-runc.gUc1WZ.mount: Deactivated successfully. Feb 8 23:32:44.290121 kubelet[1908]: W0208 23:32:44.289993 1908 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9888b744_f80a_4690_b68d_c26fff3e6865.slice/cri-containerd-82a6298701a8da6987b5525afd90fc348edc0eeac5b44ce606c510371203a0e7.scope WatchSource:0}: task 82a6298701a8da6987b5525afd90fc348edc0eeac5b44ce606c510371203a0e7 not found: not found Feb 8 23:32:45.430533 systemd[1]: run-containerd-runc-k8s.io-277d571f6eabd5d1c456f34fd35bc7a4839145ff8bf47d00b356317548855aba-runc.hdCtHw.mount: Deactivated successfully. Feb 8 23:32:47.400634 kubelet[1908]: W0208 23:32:47.400495 1908 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9888b744_f80a_4690_b68d_c26fff3e6865.slice/cri-containerd-715b6896e75e6788bdf56c9e003638e838b63073317f533a5502a9a83280ef22.scope WatchSource:0}: task 715b6896e75e6788bdf56c9e003638e838b63073317f533a5502a9a83280ef22 not found: not found Feb 8 23:32:47.676268 systemd[1]: run-containerd-runc-k8s.io-277d571f6eabd5d1c456f34fd35bc7a4839145ff8bf47d00b356317548855aba-runc.phK5Pl.mount: Deactivated successfully. Feb 8 23:32:47.956784 sshd[3746]: pam_unix(sshd:session): session closed for user core Feb 8 23:32:47.963349 systemd-logind[1059]: Session 25 logged out. Waiting for processes to exit. Feb 8 23:32:47.963759 systemd[1]: sshd@24-172.24.4.155:22-172.24.4.1:44856.service: Deactivated successfully. Feb 8 23:32:47.965406 systemd[1]: session-25.scope: Deactivated successfully. Feb 8 23:32:47.968588 systemd-logind[1059]: Removed session 25.
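[Editor's note] The recurring manager.go:1159 warnings above, like the earlier memory.events inotify failure at 23:32:35, come from cAdvisor racing to watch cri-containerd-*.scope cgroup directories that short-lived containers have already vacated; the "not found" outcome is benign. A sketch of the race as seen from inotify, with a hypothetical scope path:

    package main

    import (
        "errors"
        "fmt"

        "golang.org/x/sys/unix"
    )

    func main() {
        fd, err := unix.InotifyInit1(0)
        if err != nil {
            panic(err)
        }
        defer unix.Close(fd)

        // Hypothetical short-lived scope directory like those in the warnings.
        path := "/sys/fs/cgroup/kubepods.slice/.../cri-containerd-<id>.scope"
        _, err = unix.InotifyAddWatch(fd, path, unix.IN_CREATE|unix.IN_DELETE)
        if errors.Is(err, unix.ENOENT) {
            // The container exited and its cgroup was removed between the
            // event announcing it and this watch: ignore, as cAdvisor does.
            fmt.Println("scope vanished before watch could be added")
        } else if err != nil {
            panic(err)
        }
    }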