Feb 12 20:28:42.135819 kernel: Linux version 5.15.148-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Mon Feb 12 18:05:31 -00 2024
Feb 12 20:28:42.135865 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=f2beb0668e3dab90bbcf0ace3803b7ee02142bfb86913ef12ef6d2ee81a411a4
Feb 12 20:28:42.135882 kernel: BIOS-provided physical RAM map:
Feb 12 20:28:42.135893 kernel: BIOS-e820: [mem 0x0000000000000000-0x0000000000000fff] reserved
Feb 12 20:28:42.135959 kernel: BIOS-e820: [mem 0x0000000000001000-0x0000000000054fff] usable
Feb 12 20:28:42.135974 kernel: BIOS-e820: [mem 0x0000000000055000-0x000000000005ffff] reserved
Feb 12 20:28:42.135994 kernel: BIOS-e820: [mem 0x0000000000060000-0x0000000000097fff] usable
Feb 12 20:28:42.136009 kernel: BIOS-e820: [mem 0x0000000000098000-0x000000000009ffff] reserved
Feb 12 20:28:42.136022 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bf8ecfff] usable
Feb 12 20:28:42.136037 kernel: BIOS-e820: [mem 0x00000000bf8ed000-0x00000000bfb6cfff] reserved
Feb 12 20:28:42.136050 kernel: BIOS-e820: [mem 0x00000000bfb6d000-0x00000000bfb7efff] ACPI data
Feb 12 20:28:42.136064 kernel: BIOS-e820: [mem 0x00000000bfb7f000-0x00000000bfbfefff] ACPI NVS
Feb 12 20:28:42.136078 kernel: BIOS-e820: [mem 0x00000000bfbff000-0x00000000bffdffff] usable
Feb 12 20:28:42.136093 kernel: BIOS-e820: [mem 0x00000000bffe0000-0x00000000bfffffff] reserved
Feb 12 20:28:42.136114 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000021fffffff] usable
Feb 12 20:28:42.136130 kernel: NX (Execute Disable) protection: active
Feb 12 20:28:42.136145 kernel: efi: EFI v2.70 by EDK II
Feb 12 20:28:42.136161 kernel: efi: TPMFinalLog=0xbfbf7000 ACPI=0xbfb7e000 ACPI 2.0=0xbfb7e014 SMBIOS=0xbf9ca000 MEMATTR=0xbe379198 RNG=0xbfb73018 TPMEventLog=0xbe2bd018
Feb 12 20:28:42.136175 kernel: random: crng init done
Feb 12 20:28:42.136190 kernel: SMBIOS 2.4 present.
Feb 12 20:28:42.136205 kernel: DMI: Google Google Compute Engine/Google Compute Engine, BIOS Google 11/17/2023
Feb 12 20:28:42.136219 kernel: Hypervisor detected: KVM
Feb 12 20:28:42.136238 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Feb 12 20:28:42.136253 kernel: kvm-clock: cpu 0, msr 25faa001, primary cpu clock
Feb 12 20:28:42.136268 kernel: kvm-clock: using sched offset of 13608149014 cycles
Feb 12 20:28:42.136284 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Feb 12 20:28:42.136300 kernel: tsc: Detected 2299.998 MHz processor
Feb 12 20:28:42.136315 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Feb 12 20:28:42.136331 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Feb 12 20:28:42.136346 kernel: last_pfn = 0x220000 max_arch_pfn = 0x400000000
Feb 12 20:28:42.136361 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Feb 12 20:28:42.136376 kernel: last_pfn = 0xbffe0 max_arch_pfn = 0x400000000
Feb 12 20:28:42.136395 kernel: Using GB pages for direct mapping
Feb 12 20:28:42.136410 kernel: Secure boot disabled
Feb 12 20:28:42.136425 kernel: ACPI: Early table checksum verification disabled
Feb 12 20:28:42.136440 kernel: ACPI: RSDP 0x00000000BFB7E014 000024 (v02 Google)
Feb 12 20:28:42.136456 kernel: ACPI: XSDT 0x00000000BFB7D0E8 00005C (v01 Google GOOGFACP 00000001 01000013)
Feb 12 20:28:42.136472 kernel: ACPI: FACP 0x00000000BFB78000 0000F4 (v02 Google GOOGFACP 00000001 GOOG 00000001)
Feb 12 20:28:42.136487 kernel: ACPI: DSDT 0x00000000BFB79000 001A64 (v01 Google GOOGDSDT 00000001 GOOG 00000001)
Feb 12 20:28:42.136503 kernel: ACPI: FACS 0x00000000BFBF2000 000040
Feb 12 20:28:42.136529 kernel: ACPI: SSDT 0x00000000BFB7C000 000316 (v02 GOOGLE Tpm2Tabl 00001000 INTL 20211217)
Feb 12 20:28:42.136545 kernel: ACPI: TPM2 0x00000000BFB7B000 000034 (v04 GOOGLE 00000001 GOOG 00000001)
Feb 12 20:28:42.136562 kernel: ACPI: SRAT 0x00000000BFB77000 0000C8 (v03 Google GOOGSRAT 00000001 GOOG 00000001)
Feb 12 20:28:42.136578 kernel: ACPI: APIC 0x00000000BFB76000 000076 (v05 Google GOOGAPIC 00000001 GOOG 00000001)
Feb 12 20:28:42.136594 kernel: ACPI: SSDT 0x00000000BFB75000 000980 (v01 Google GOOGSSDT 00000001 GOOG 00000001)
Feb 12 20:28:42.136611 kernel: ACPI: WAET 0x00000000BFB74000 000028 (v01 Google GOOGWAET 00000001 GOOG 00000001)
Feb 12 20:28:42.136631 kernel: ACPI: Reserving FACP table memory at [mem 0xbfb78000-0xbfb780f3]
Feb 12 20:28:42.136648 kernel: ACPI: Reserving DSDT table memory at [mem 0xbfb79000-0xbfb7aa63]
Feb 12 20:28:42.136664 kernel: ACPI: Reserving FACS table memory at [mem 0xbfbf2000-0xbfbf203f]
Feb 12 20:28:42.136681 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb7c000-0xbfb7c315]
Feb 12 20:28:42.136697 kernel: ACPI: Reserving TPM2 table memory at [mem 0xbfb7b000-0xbfb7b033]
Feb 12 20:28:42.136713 kernel: ACPI: Reserving SRAT table memory at [mem 0xbfb77000-0xbfb770c7]
Feb 12 20:28:42.136730 kernel: ACPI: Reserving APIC table memory at [mem 0xbfb76000-0xbfb76075]
Feb 12 20:28:42.136747 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb75000-0xbfb7597f]
Feb 12 20:28:42.136763 kernel: ACPI: Reserving WAET table memory at [mem 0xbfb74000-0xbfb74027]
Feb 12 20:28:42.136784 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Feb 12 20:28:42.136800 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Feb 12 20:28:42.136817 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Feb 12 20:28:42.136833 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0xbfffffff]
Feb 12 20:28:42.136850 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x21fffffff]
Feb 12 20:28:42.136868 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0xbfffffff] -> [mem 0x00000000-0xbfffffff]
Feb 12 20:28:42.136885 kernel: NUMA: Node 0 [mem 0x00000000-0xbfffffff] + [mem 0x100000000-0x21fffffff] -> [mem 0x00000000-0x21fffffff]
Feb 12 20:28:42.136902 kernel: NODE_DATA(0) allocated [mem 0x21fff8000-0x21fffdfff]
Feb 12 20:28:42.136937 kernel: Zone ranges:
Feb 12 20:28:42.136958 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Feb 12 20:28:42.136975 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Feb 12 20:28:42.136992 kernel: Normal [mem 0x0000000100000000-0x000000021fffffff]
Feb 12 20:28:42.137008 kernel: Movable zone start for each node
Feb 12 20:28:42.137025 kernel: Early memory node ranges
Feb 12 20:28:42.137041 kernel: node 0: [mem 0x0000000000001000-0x0000000000054fff]
Feb 12 20:28:42.137058 kernel: node 0: [mem 0x0000000000060000-0x0000000000097fff]
Feb 12 20:28:42.137074 kernel: node 0: [mem 0x0000000000100000-0x00000000bf8ecfff]
Feb 12 20:28:42.137091 kernel: node 0: [mem 0x00000000bfbff000-0x00000000bffdffff]
Feb 12 20:28:42.137111 kernel: node 0: [mem 0x0000000100000000-0x000000021fffffff]
Feb 12 20:28:42.137127 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000021fffffff]
Feb 12 20:28:42.137144 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Feb 12 20:28:42.137161 kernel: On node 0, zone DMA: 11 pages in unavailable ranges
Feb 12 20:28:42.137177 kernel: On node 0, zone DMA: 104 pages in unavailable ranges
Feb 12 20:28:42.137194 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Feb 12 20:28:42.137209 kernel: On node 0, zone Normal: 32 pages in unavailable ranges
Feb 12 20:28:42.137226 kernel: ACPI: PM-Timer IO Port: 0xb008
Feb 12 20:28:42.137242 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Feb 12 20:28:42.137262 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Feb 12 20:28:42.137279 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Feb 12 20:28:42.137295 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Feb 12 20:28:42.137311 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Feb 12 20:28:42.137328 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Feb 12 20:28:42.137345 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Feb 12 20:28:42.137361 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Feb 12 20:28:42.137378 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices
Feb 12 20:28:42.137394 kernel: Booting paravirtualized kernel on KVM
Feb 12 20:28:42.137414 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Feb 12 20:28:42.137431 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:2 nr_node_ids:1
Feb 12 20:28:42.137448 kernel: percpu: Embedded 55 pages/cpu s185624 r8192 d31464 u1048576
Feb 12 20:28:42.137465 kernel: pcpu-alloc: s185624 r8192 d31464 u1048576 alloc=1*2097152
Feb 12 20:28:42.137481 kernel: pcpu-alloc: [0] 0 1
Feb 12 20:28:42.137497 kernel: kvm-guest: PV spinlocks enabled
Feb 12 20:28:42.137514 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Feb 12 20:28:42.137530 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1931256
Feb 12 20:28:42.137546 kernel: Policy zone: Normal
Feb 12 20:28:42.137569 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=f2beb0668e3dab90bbcf0ace3803b7ee02142bfb86913ef12ef6d2ee81a411a4
Feb 12 20:28:42.137587 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 12 20:28:42.137603 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Feb 12 20:28:42.137618 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 12 20:28:42.137634 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 12 20:28:42.137651 kernel: Memory: 7536508K/7860584K available (12294K kernel code, 2275K rwdata, 13700K rodata, 45496K init, 4048K bss, 323816K reserved, 0K cma-reserved)
Feb 12 20:28:42.137669 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Feb 12 20:28:42.137685 kernel: Kernel/User page tables isolation: enabled
Feb 12 20:28:42.137706 kernel: ftrace: allocating 34475 entries in 135 pages
Feb 12 20:28:42.137722 kernel: ftrace: allocated 135 pages with 4 groups
Feb 12 20:28:42.137739 kernel: rcu: Hierarchical RCU implementation.
Feb 12 20:28:42.137758 kernel: rcu: RCU event tracing is enabled.
Feb 12 20:28:42.137774 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Feb 12 20:28:42.137792 kernel: Rude variant of Tasks RCU enabled.
Feb 12 20:28:42.137808 kernel: Tracing variant of Tasks RCU enabled.
Feb 12 20:28:42.137825 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 12 20:28:42.137842 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Feb 12 20:28:42.137863 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Feb 12 20:28:42.137892 kernel: Console: colour dummy device 80x25
Feb 12 20:28:42.138168 kernel: printk: console [ttyS0] enabled
Feb 12 20:28:42.138198 kernel: ACPI: Core revision 20210730
Feb 12 20:28:42.138216 kernel: APIC: Switch to symmetric I/O mode setup
Feb 12 20:28:42.138233 kernel: x2apic enabled
Feb 12 20:28:42.138250 kernel: Switched APIC routing to physical x2apic.
Feb 12 20:28:42.138267 kernel: ..TIMER: vector=0x30 apic1=0 pin1=0 apic2=-1 pin2=-1
Feb 12 20:28:42.138284 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns
Feb 12 20:28:42.138302 kernel: Calibrating delay loop (skipped) preset value.. 4599.99 BogoMIPS (lpj=2299998)
Feb 12 20:28:42.138324 kernel: Last level iTLB entries: 4KB 1024, 2MB 1024, 4MB 1024
Feb 12 20:28:42.138478 kernel: Last level dTLB entries: 4KB 1024, 2MB 1024, 4MB 1024, 1GB 4
Feb 12 20:28:42.138496 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Feb 12 20:28:42.138513 kernel: Spectre V2 : Mitigation: IBRS
Feb 12 20:28:42.138531 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Feb 12 20:28:42.138662 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Feb 12 20:28:42.138686 kernel: RETBleed: Mitigation: IBRS
Feb 12 20:28:42.138704 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Feb 12 20:28:42.138722 kernel: Spectre V2 : User space: Mitigation: STIBP via seccomp and prctl
Feb 12 20:28:42.138740 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp
Feb 12 20:28:42.138874 kernel: MDS: Mitigation: Clear CPU buffers
Feb 12 20:28:42.138893 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Feb 12 20:28:42.138964 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Feb 12 20:28:42.139055 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Feb 12 20:28:42.139071 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Feb 12 20:28:42.139092 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Feb 12 20:28:42.139108 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Feb 12 20:28:42.139125 kernel: Freeing SMP alternatives memory: 32K
Feb 12 20:28:42.139141 kernel: pid_max: default: 32768 minimum: 301
Feb 12 20:28:42.139157 kernel: LSM: Security Framework initializing
Feb 12 20:28:42.139173 kernel: SELinux: Initializing.
Feb 12 20:28:42.139189 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Feb 12 20:28:42.139206 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Feb 12 20:28:42.139223 kernel: smpboot: CPU0: Intel(R) Xeon(R) CPU @ 2.30GHz (family: 0x6, model: 0x3f, stepping: 0x0)
Feb 12 20:28:42.139243 kernel: Performance Events: unsupported p6 CPU model 63 no PMU driver, software events only.
Feb 12 20:28:42.139260 kernel: signal: max sigframe size: 1776
Feb 12 20:28:42.139276 kernel: rcu: Hierarchical SRCU implementation.
Feb 12 20:28:42.139293 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Feb 12 20:28:42.139310 kernel: smp: Bringing up secondary CPUs ...
Feb 12 20:28:42.139326 kernel: x86: Booting SMP configuration:
Feb 12 20:28:42.139343 kernel: .... node #0, CPUs: #1
Feb 12 20:28:42.139360 kernel: kvm-clock: cpu 1, msr 25faa041, secondary cpu clock
Feb 12 20:28:42.139378 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Feb 12 20:28:42.139401 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Feb 12 20:28:42.139417 kernel: smp: Brought up 1 node, 2 CPUs
Feb 12 20:28:42.139433 kernel: smpboot: Max logical packages: 1
Feb 12 20:28:42.139450 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS)
Feb 12 20:28:42.139467 kernel: devtmpfs: initialized
Feb 12 20:28:42.139484 kernel: x86/mm: Memory block size: 128MB
Feb 12 20:28:42.139501 kernel: ACPI: PM: Registering ACPI NVS region [mem 0xbfb7f000-0xbfbfefff] (524288 bytes)
Feb 12 20:28:42.139519 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 12 20:28:42.139535 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Feb 12 20:28:42.139556 kernel: pinctrl core: initialized pinctrl subsystem
Feb 12 20:28:42.139572 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 12 20:28:42.139589 kernel: audit: initializing netlink subsys (disabled)
Feb 12 20:28:42.139606 kernel: audit: type=2000 audit(1707769720.870:1): state=initialized audit_enabled=0 res=1
Feb 12 20:28:42.139623 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 12 20:28:42.139641 kernel: thermal_sys: Registered thermal governor 'user_space'
Feb 12 20:28:42.139658 kernel: cpuidle: using governor menu
Feb 12 20:28:42.139675 kernel: ACPI: bus type PCI registered
Feb 12 20:28:42.139692 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 12 20:28:42.139713 kernel: dca service started, version 1.12.1
Feb 12 20:28:42.139729 kernel: PCI: Using configuration type 1 for base access
Feb 12 20:28:42.139746 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Feb 12 20:28:42.139763 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Feb 12 20:28:42.139780 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Feb 12 20:28:42.139797 kernel: ACPI: Added _OSI(Module Device)
Feb 12 20:28:42.139812 kernel: ACPI: Added _OSI(Processor Device)
Feb 12 20:28:42.139829 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 12 20:28:42.139845 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 12 20:28:42.139866 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Feb 12 20:28:42.139882 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Feb 12 20:28:42.139899 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Feb 12 20:28:42.139936 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Feb 12 20:28:42.144107 kernel: ACPI: Interpreter enabled
Feb 12 20:28:42.144146 kernel: ACPI: PM: (supports S0 S3 S5)
Feb 12 20:28:42.144163 kernel: ACPI: Using IOAPIC for interrupt routing
Feb 12 20:28:42.144181 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Feb 12 20:28:42.144198 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F
Feb 12 20:28:42.144222 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Feb 12 20:28:42.144468 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Feb 12 20:28:42.144638 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
Feb 12 20:28:42.144661 kernel: PCI host bridge to bus 0000:00
Feb 12 20:28:42.144819 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Feb 12 20:28:42.144994 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Feb 12 20:28:42.145161 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Feb 12 20:28:42.145305 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfefff window]
Feb 12 20:28:42.145444 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Feb 12 20:28:42.145617 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Feb 12 20:28:42.145788 kernel: pci 0000:00:01.0: [8086:7110] type 00 class 0x060100
Feb 12 20:28:42.146215 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Feb 12 20:28:42.146637 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Feb 12 20:28:42.147004 kernel: pci 0000:00:03.0: [1af4:1004] type 00 class 0x000000
Feb 12 20:28:42.147257 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc040-0xc07f]
Feb 12 20:28:42.147435 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc0001000-0xc000107f]
Feb 12 20:28:42.147642 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Feb 12 20:28:42.147807 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc03f]
Feb 12 20:28:42.154292 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc0000000-0xc000007f]
Feb 12 20:28:42.154592 kernel: pci 0000:00:05.0: [1af4:1005] type 00 class 0x00ff00
Feb 12 20:28:42.154762 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc080-0xc09f]
Feb 12 20:28:42.154957 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xc0002000-0xc000203f]
Feb 12 20:28:42.154979 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Feb 12 20:28:42.154995 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Feb 12 20:28:42.155010 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Feb 12 20:28:42.155025 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Feb 12 20:28:42.155039 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Feb 12 20:28:42.155062 kernel: iommu: Default domain type: Translated
Feb 12 20:28:42.155079 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Feb 12 20:28:42.155098 kernel: vgaarb: loaded
Feb 12 20:28:42.155115 kernel: pps_core: LinuxPPS API ver. 1 registered
Feb 12 20:28:42.155133 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Feb 12 20:28:42.155149 kernel: PTP clock support registered
Feb 12 20:28:42.155166 kernel: Registered efivars operations
Feb 12 20:28:42.155183 kernel: PCI: Using ACPI for IRQ routing
Feb 12 20:28:42.155201 kernel: PCI: pci_cache_line_size set to 64 bytes
Feb 12 20:28:42.155222 kernel: e820: reserve RAM buffer [mem 0x00055000-0x0005ffff]
Feb 12 20:28:42.155238 kernel: e820: reserve RAM buffer [mem 0x00098000-0x0009ffff]
Feb 12 20:28:42.155252 kernel: e820: reserve RAM buffer [mem 0xbf8ed000-0xbfffffff]
Feb 12 20:28:42.155267 kernel: e820: reserve RAM buffer [mem 0xbffe0000-0xbfffffff]
Feb 12 20:28:42.155284 kernel: clocksource: Switched to clocksource kvm-clock
Feb 12 20:28:42.155301 kernel: VFS: Disk quotas dquot_6.6.0
Feb 12 20:28:42.155319 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 12 20:28:42.155337 kernel: pnp: PnP ACPI init
Feb 12 20:28:42.155353 kernel: pnp: PnP ACPI: found 7 devices
Feb 12 20:28:42.155371 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Feb 12 20:28:42.155385 kernel: NET: Registered PF_INET protocol family
Feb 12 20:28:42.155399 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Feb 12 20:28:42.155414 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Feb 12 20:28:42.155430 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 12 20:28:42.155447 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 12 20:28:42.155462 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Feb 12 20:28:42.155477 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Feb 12 20:28:42.155493 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Feb 12 20:28:42.155513 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Feb 12 20:28:42.155529 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 12 20:28:42.155550 kernel: NET: Registered PF_XDP protocol family
Feb 12 20:28:42.155735 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Feb 12 20:28:42.155878 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Feb 12 20:28:42.156041 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Feb 12 20:28:42.156178 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfefff window]
Feb 12 20:28:42.156338 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Feb 12 20:28:42.156365 kernel: PCI: CLS 0 bytes, default 64
Feb 12 20:28:42.156382 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Feb 12 20:28:42.156400 kernel: software IO TLB: mapped [mem 0x00000000b7ff7000-0x00000000bbff7000] (64MB)
Feb 12 20:28:42.156417 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Feb 12 20:28:42.156434 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns
Feb 12 20:28:42.156451 kernel: clocksource: Switched to clocksource tsc
Feb 12 20:28:42.156469 kernel: Initialise system trusted keyrings
Feb 12 20:28:42.156485 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Feb 12 20:28:42.156506 kernel: Key type asymmetric registered
Feb 12 20:28:42.156522 kernel: Asymmetric key parser 'x509' registered
Feb 12 20:28:42.156539 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Feb 12 20:28:42.156556 kernel: io scheduler mq-deadline registered
Feb 12 20:28:42.156573 kernel: io scheduler kyber registered
Feb 12 20:28:42.156590 kernel: io scheduler bfq registered
Feb 12 20:28:42.156607 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Feb 12 20:28:42.156625 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Feb 12 20:28:42.156781 kernel: virtio-pci 0000:00:03.0: virtio_pci: leaving for legacy driver
Feb 12 20:28:42.156806 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10
Feb 12 20:28:42.156981 kernel: virtio-pci 0000:00:04.0: virtio_pci: leaving for legacy driver
Feb 12 20:28:42.157002 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Feb 12 20:28:42.157156 kernel: virtio-pci 0000:00:05.0: virtio_pci: leaving for legacy driver
Feb 12 20:28:42.157178 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 12 20:28:42.157195 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Feb 12 20:28:42.157212 kernel: 00:04: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Feb 12 20:28:42.157229 kernel: 00:05: ttyS2 at I/O 0x3e8 (irq = 6, base_baud = 115200) is a 16550A
Feb 12 20:28:42.157247 kernel: 00:06: ttyS3 at I/O 0x2e8 (irq = 7, base_baud = 115200) is a 16550A
Feb 12 20:28:42.157560 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x9009, rev-id 0)
Feb 12 20:28:42.157604 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Feb 12 20:28:42.157623 kernel: i8042: Warning: Keylock active
Feb 12 20:28:42.157640 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Feb 12 20:28:42.157658 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Feb 12 20:28:42.157902 kernel: rtc_cmos 00:00: RTC can wake from S4
Feb 12 20:28:42.158085 kernel: rtc_cmos 00:00: registered as rtc0
Feb 12 20:28:42.158259 kernel: rtc_cmos 00:00: setting system clock to 2024-02-12T20:28:41 UTC (1707769721)
Feb 12 20:28:42.158411 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Feb 12 20:28:42.158434 kernel: intel_pstate: CPU model not supported
Feb 12 20:28:42.158451 kernel: pstore: Registered efi as persistent store backend
Feb 12 20:28:42.158475 kernel: NET: Registered PF_INET6 protocol family
Feb 12 20:28:42.158492 kernel: Segment Routing with IPv6
Feb 12 20:28:42.158510 kernel: In-situ OAM (IOAM) with IPv6
Feb 12 20:28:42.158527 kernel: NET: Registered PF_PACKET protocol family
Feb 12 20:28:42.158545 kernel: Key type dns_resolver registered
Feb 12 20:28:42.158568 kernel: IPI shorthand broadcast: enabled
Feb 12 20:28:42.158586 kernel: sched_clock: Marking stable (761266159, 181501489)->(1026766999, -83999351)
Feb 12 20:28:42.158604 kernel: registered taskstats version 1
Feb 12 20:28:42.158622 kernel: Loading compiled-in X.509 certificates
Feb 12 20:28:42.158640 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Feb 12 20:28:42.158658 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.148-flatcar: 253e5c5c936b12e2ff2626e7f3214deb753330c8'
Feb 12 20:28:42.158676 kernel: Key type .fscrypt registered
Feb 12 20:28:42.158693 kernel: Key type fscrypt-provisioning registered
Feb 12 20:28:42.158711 kernel: pstore: Using crash dump compression: deflate
Feb 12 20:28:42.158731 kernel: ima: Allocated hash algorithm: sha1
Feb 12 20:28:42.158749 kernel: ima: No architecture policies found
Feb 12 20:28:42.158766 kernel: Freeing unused kernel image (initmem) memory: 45496K
Feb 12 20:28:42.158784 kernel: Write protecting the kernel read-only data: 28672k
Feb 12 20:28:42.158801 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
Feb 12 20:28:42.158825 kernel: Freeing unused kernel image (rodata/data gap) memory: 636K
Feb 12 20:28:42.158842 kernel: Run /init as init process
Feb 12 20:28:42.158856 kernel: with arguments:
Feb 12 20:28:42.158876 kernel: /init
Feb 12 20:28:42.158892 kernel: with environment:
Feb 12 20:28:42.158908 kernel: HOME=/
Feb 12 20:28:42.161580 kernel: TERM=linux
Feb 12 20:28:42.161599 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 12 20:28:42.161621 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Feb 12 20:28:42.161642 systemd[1]: Detected virtualization kvm.
Feb 12 20:28:42.161790 systemd[1]: Detected architecture x86-64.
Feb 12 20:28:42.161821 systemd[1]: Running in initrd.
Feb 12 20:28:42.161838 systemd[1]: No hostname configured, using default hostname.
Feb 12 20:28:42.161854 systemd[1]: Hostname set to .
Feb 12 20:28:42.161872 systemd[1]: Initializing machine ID from VM UUID.
Feb 12 20:28:42.161888 systemd[1]: Queued start job for default target initrd.target.
Feb 12 20:28:42.162070 systemd[1]: Started systemd-ask-password-console.path.
Feb 12 20:28:42.162088 systemd[1]: Reached target cryptsetup.target.
Feb 12 20:28:42.162234 systemd[1]: Reached target paths.target.
Feb 12 20:28:42.162259 systemd[1]: Reached target slices.target.
Feb 12 20:28:42.162275 systemd[1]: Reached target swap.target.
Feb 12 20:28:42.162292 systemd[1]: Reached target timers.target.
Feb 12 20:28:42.162310 systemd[1]: Listening on iscsid.socket.
Feb 12 20:28:42.162451 systemd[1]: Listening on iscsiuio.socket.
Feb 12 20:28:42.162472 systemd[1]: Listening on systemd-journald-audit.socket.
Feb 12 20:28:42.162490 systemd[1]: Listening on systemd-journald-dev-log.socket.
Feb 12 20:28:42.162511 systemd[1]: Listening on systemd-journald.socket.
Feb 12 20:28:42.162532 systemd[1]: Listening on systemd-networkd.socket.
Feb 12 20:28:42.175693 systemd[1]: Listening on systemd-udevd-control.socket.
Feb 12 20:28:42.175756 systemd[1]: Listening on systemd-udevd-kernel.socket.
Feb 12 20:28:42.175775 systemd[1]: Reached target sockets.target.
Feb 12 20:28:42.175793 systemd[1]: Starting kmod-static-nodes.service...
Feb 12 20:28:42.175817 systemd[1]: Finished network-cleanup.service.
Feb 12 20:28:42.175837 systemd[1]: Starting systemd-fsck-usr.service...
Feb 12 20:28:42.175855 systemd[1]: Starting systemd-journald.service...
Feb 12 20:28:42.175881 systemd[1]: Starting systemd-modules-load.service...
Feb 12 20:28:42.175900 systemd[1]: Starting systemd-resolved.service...
Feb 12 20:28:42.175931 systemd[1]: Starting systemd-vconsole-setup.service...
Feb 12 20:28:42.175969 systemd[1]: Finished kmod-static-nodes.service.
Feb 12 20:28:42.175993 kernel: audit: type=1130 audit(1707769722.138:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:28:42.176013 systemd[1]: Finished systemd-fsck-usr.service.
Feb 12 20:28:42.176033 kernel: audit: type=1130 audit(1707769722.146:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:28:42.176055 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Feb 12 20:28:42.176074 systemd[1]: Finished systemd-vconsole-setup.service.
Feb 12 20:28:42.176092 kernel: audit: type=1130 audit(1707769722.165:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:28:42.176110 systemd[1]: Starting dracut-cmdline-ask.service...
Feb 12 20:28:42.176135 systemd-journald[190]: Journal started
Feb 12 20:28:42.176236 systemd-journald[190]: Runtime Journal (/run/log/journal/8278613bd14fef4f2fabbe5f55962520) is 8.0M, max 148.8M, 140.8M free.
Feb 12 20:28:42.138000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:28:42.146000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:28:42.165000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:28:42.166961 systemd-modules-load[191]: Inserted module 'overlay'
Feb 12 20:28:42.187326 systemd[1]: Started systemd-journald.service.
Feb 12 20:28:42.187387 kernel: audit: type=1130 audit(1707769722.178:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:28:42.178000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:28:42.186000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:28:42.188408 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Feb 12 20:28:42.193106 kernel: audit: type=1130 audit(1707769722.186:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:28:42.206481 systemd[1]: Finished dracut-cmdline-ask.service.
Feb 12 20:28:42.217223 kernel: audit: type=1130 audit(1707769722.209:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:28:42.209000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:28:42.217017 systemd[1]: Starting dracut-cmdline.service...
Feb 12 20:28:42.227784 systemd-resolved[192]: Positive Trust Anchors:
Feb 12 20:28:42.228165 systemd-resolved[192]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 12 20:28:42.228228 systemd-resolved[192]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Feb 12 20:28:42.249064 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 12 20:28:42.238867 systemd-resolved[192]: Defaulting to hostname 'linux'.
Feb 12 20:28:42.253761 dracut-cmdline[205]: dracut-dracut-053
Feb 12 20:28:42.253761 dracut-cmdline[205]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=f2beb0668e3dab90bbcf0ace3803b7ee02142bfb86913ef12ef6d2ee81a411a4
Feb 12 20:28:42.268052 kernel: Bridge firewalling registered
Feb 12 20:28:42.241221 systemd[1]: Started systemd-resolved.service.
Feb 12 20:28:42.254341 systemd-modules-load[191]: Inserted module 'br_netfilter' Feb 12 20:28:42.281000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:28:42.283139 systemd[1]: Reached target nss-lookup.target. Feb 12 20:28:42.292074 kernel: audit: type=1130 audit(1707769722.281:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:28:42.293935 kernel: SCSI subsystem initialized Feb 12 20:28:42.312294 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Feb 12 20:28:42.312374 kernel: device-mapper: uevent: version 1.0.3 Feb 12 20:28:42.313278 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Feb 12 20:28:42.319098 systemd-modules-load[191]: Inserted module 'dm_multipath' Feb 12 20:28:42.320750 systemd[1]: Finished systemd-modules-load.service. Feb 12 20:28:42.330000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:28:42.333390 systemd[1]: Starting systemd-sysctl.service... Feb 12 20:28:42.345632 kernel: audit: type=1130 audit(1707769722.330:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:28:42.350033 systemd[1]: Finished systemd-sysctl.service. Feb 12 20:28:42.363067 kernel: Loading iSCSI transport class v2.0-870. 
Feb 12 20:28:42.363106 kernel: audit: type=1130 audit(1707769722.352:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:28:42.352000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:28:42.367945 kernel: iscsi: registered transport (tcp) Feb 12 20:28:42.393219 kernel: iscsi: registered transport (qla4xxx) Feb 12 20:28:42.393312 kernel: QLogic iSCSI HBA Driver Feb 12 20:28:42.438037 systemd[1]: Finished dracut-cmdline.service. Feb 12 20:28:42.436000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:28:42.439412 systemd[1]: Starting dracut-pre-udev.service... Feb 12 20:28:42.501971 kernel: raid6: avx2x4 gen() 17942 MB/s Feb 12 20:28:42.522996 kernel: raid6: avx2x4 xor() 6714 MB/s Feb 12 20:28:42.543960 kernel: raid6: avx2x2 gen() 17302 MB/s Feb 12 20:28:42.564963 kernel: raid6: avx2x2 xor() 18300 MB/s Feb 12 20:28:42.585956 kernel: raid6: avx2x1 gen() 13620 MB/s Feb 12 20:28:42.607966 kernel: raid6: avx2x1 xor() 15821 MB/s Feb 12 20:28:42.628949 kernel: raid6: sse2x4 gen() 11020 MB/s Feb 12 20:28:42.649948 kernel: raid6: sse2x4 xor() 6655 MB/s Feb 12 20:28:42.670945 kernel: raid6: sse2x2 gen() 11828 MB/s Feb 12 20:28:42.691953 kernel: raid6: sse2x2 xor() 7243 MB/s Feb 12 20:28:42.712959 kernel: raid6: sse2x1 gen() 10397 MB/s Feb 12 20:28:42.739253 kernel: raid6: sse2x1 xor() 5155 MB/s Feb 12 20:28:42.739397 kernel: raid6: using algorithm avx2x4 gen() 17942 MB/s Feb 12 20:28:42.739423 kernel: raid6: .... 
xor() 6714 MB/s, rmw enabled Feb 12 20:28:42.744795 kernel: raid6: using avx2x2 recovery algorithm Feb 12 20:28:42.769957 kernel: xor: automatically using best checksumming function avx Feb 12 20:28:42.883961 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Feb 12 20:28:42.895321 systemd[1]: Finished dracut-pre-udev.service. Feb 12 20:28:42.893000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:28:42.894000 audit: BPF prog-id=7 op=LOAD Feb 12 20:28:42.894000 audit: BPF prog-id=8 op=LOAD Feb 12 20:28:42.896699 systemd[1]: Starting systemd-udevd.service... Feb 12 20:28:42.913752 systemd-udevd[389]: Using default interface naming scheme 'v252'. Feb 12 20:28:42.933196 systemd[1]: Started systemd-udevd.service. Feb 12 20:28:42.940000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:28:42.943357 systemd[1]: Starting dracut-pre-trigger.service... Feb 12 20:28:42.958093 dracut-pre-trigger[402]: rd.md=0: removing MD RAID activation Feb 12 20:28:43.000452 systemd[1]: Finished dracut-pre-trigger.service. Feb 12 20:28:42.998000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:28:43.001588 systemd[1]: Starting systemd-udev-trigger.service... Feb 12 20:28:43.067818 systemd[1]: Finished systemd-udev-trigger.service. Feb 12 20:28:43.074000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 20:28:43.153092 kernel: cryptd: max_cpu_qlen set to 1000 Feb 12 20:28:43.182938 kernel: scsi host0: Virtio SCSI HBA Feb 12 20:28:43.220948 kernel: AVX2 version of gcm_enc/dec engaged. Feb 12 20:28:43.221034 kernel: AES CTR mode by8 optimization enabled Feb 12 20:28:43.259974 kernel: scsi 0:0:1:0: Direct-Access Google PersistentDisk 1 PQ: 0 ANSI: 6 Feb 12 20:28:43.334960 kernel: sd 0:0:1:0: [sda] 25165824 512-byte logical blocks: (12.9 GB/12.0 GiB) Feb 12 20:28:43.335295 kernel: sd 0:0:1:0: [sda] 4096-byte physical blocks Feb 12 20:28:43.335491 kernel: sd 0:0:1:0: [sda] Write Protect is off Feb 12 20:28:43.340000 kernel: sd 0:0:1:0: [sda] Mode Sense: 1f 00 00 08 Feb 12 20:28:43.340232 kernel: sd 0:0:1:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Feb 12 20:28:43.367944 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Feb 12 20:28:43.368057 kernel: GPT:17805311 != 25165823 Feb 12 20:28:43.368083 kernel: GPT:Alternate GPT header not at the end of the disk. Feb 12 20:28:43.374056 kernel: GPT:17805311 != 25165823 Feb 12 20:28:43.377778 kernel: GPT: Use GNU Parted to correct GPT errors. Feb 12 20:28:43.383036 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 12 20:28:43.395552 kernel: sd 0:0:1:0: [sda] Attached SCSI disk Feb 12 20:28:43.450940 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by (udev-worker) (433) Feb 12 20:28:43.451220 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Feb 12 20:28:43.473520 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Feb 12 20:28:43.484498 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Feb 12 20:28:43.492380 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Feb 12 20:28:43.536868 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Feb 12 20:28:43.538166 systemd[1]: Starting disk-uuid.service... 
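[Editor's note] The GPT warnings above ("Primary header thinks Alt. header is not at the end of the disk", 17805311 != 25165823) are typical after a persistent disk has been grown: the backup GPT header still sits at the old end-of-disk sector. The log suggests GNU Parted; a sketch of the equivalent repair with `sgdisk` (assuming the gptfdisk package is installed and the disk really is `/dev/sda`) would be:

```shell
# Report GPT problems; this prints the same "backup header not at end of
# disk" complaint seen in the kernel log.
sgdisk --verify /dev/sda

# Relocate the backup header and backup partition table to the actual end
# of the (resized) disk, then write the table back.
sgdisk -e /dev/sda
```

On Flatcar this is normally unnecessary: the OS repairs and re-randomizes the GPT itself, which is what the `disk-uuid.service` entries later in this log are doing.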
Feb 12 20:28:43.560532 disk-uuid[509]: Primary Header is updated. Feb 12 20:28:43.560532 disk-uuid[509]: Secondary Entries is updated. Feb 12 20:28:43.560532 disk-uuid[509]: Secondary Header is updated. Feb 12 20:28:43.594060 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 12 20:28:43.599946 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 12 20:28:43.626949 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 12 20:28:44.618852 disk-uuid[510]: The operation has completed successfully. Feb 12 20:28:44.629064 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 12 20:28:44.686295 systemd[1]: disk-uuid.service: Deactivated successfully. Feb 12 20:28:44.684000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:28:44.684000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:28:44.686427 systemd[1]: Finished disk-uuid.service. Feb 12 20:28:44.697608 systemd[1]: Starting verity-setup.service... Feb 12 20:28:44.731037 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Feb 12 20:28:44.814168 systemd[1]: Found device dev-mapper-usr.device. Feb 12 20:28:44.828419 systemd[1]: Finished verity-setup.service. Feb 12 20:28:44.836000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:28:44.839345 systemd[1]: Mounting sysusr-usr.mount... Feb 12 20:28:44.943956 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Feb 12 20:28:44.944179 systemd[1]: Mounted sysusr-usr.mount. 
Feb 12 20:28:44.944560 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Feb 12 20:28:45.012084 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Feb 12 20:28:45.012132 kernel: BTRFS info (device sda6): using free space tree Feb 12 20:28:45.012156 kernel: BTRFS info (device sda6): has skinny extents Feb 12 20:28:45.012177 kernel: BTRFS info (device sda6): enabling ssd optimizations Feb 12 20:28:44.945511 systemd[1]: Starting ignition-setup.service... Feb 12 20:28:44.972575 systemd[1]: Starting parse-ip-for-networkd.service... Feb 12 20:28:45.018028 systemd[1]: mnt-oem.mount: Deactivated successfully. Feb 12 20:28:45.036638 systemd[1]: Finished ignition-setup.service. Feb 12 20:28:45.046000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:28:45.049223 systemd[1]: Starting ignition-fetch-offline.service... Feb 12 20:28:45.096297 systemd[1]: Finished parse-ip-for-networkd.service. Feb 12 20:28:45.094000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:28:45.095000 audit: BPF prog-id=9 op=LOAD Feb 12 20:28:45.098732 systemd[1]: Starting systemd-networkd.service... Feb 12 20:28:45.134123 systemd-networkd[684]: lo: Link UP Feb 12 20:28:45.134139 systemd-networkd[684]: lo: Gained carrier Feb 12 20:28:45.135234 systemd-networkd[684]: Enumeration completed Feb 12 20:28:45.135616 systemd-networkd[684]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 12 20:28:45.164000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 20:28:45.135829 systemd[1]: Started systemd-networkd.service. Feb 12 20:28:45.137691 systemd-networkd[684]: eth0: Link UP Feb 12 20:28:45.137699 systemd-networkd[684]: eth0: Gained carrier Feb 12 20:28:45.222000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:28:45.148073 systemd-networkd[684]: eth0: DHCPv4 address 10.128.0.46/32, gateway 10.128.0.1 acquired from 169.254.169.254 Feb 12 20:28:45.254124 iscsid[693]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Feb 12 20:28:45.254124 iscsid[693]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log Feb 12 20:28:45.254124 iscsid[693]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Feb 12 20:28:45.254124 iscsid[693]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Feb 12 20:28:45.254124 iscsid[693]: If using hardware iscsi like qla4xxx this message can be ignored. Feb 12 20:28:45.254124 iscsid[693]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Feb 12 20:28:45.254124 iscsid[693]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Feb 12 20:28:45.259000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:28:45.334000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=?
res=success' Feb 12 20:28:45.166277 systemd[1]: Reached target network.target. Feb 12 20:28:45.375685 ignition[638]: Ignition 2.14.0 Feb 12 20:28:45.191766 systemd[1]: Starting iscsiuio.service... Feb 12 20:28:45.375702 ignition[638]: Stage: fetch-offline Feb 12 20:28:45.216308 systemd[1]: Started iscsiuio.service. Feb 12 20:28:45.440000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:28:45.375792 ignition[638]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 12 20:28:45.456000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:28:45.226722 systemd[1]: Starting iscsid.service... Feb 12 20:28:45.375845 ignition[638]: parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6 Feb 12 20:28:45.241406 systemd[1]: Started iscsid.service. Feb 12 20:28:45.412304 ignition[638]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Feb 12 20:28:45.262714 systemd[1]: Starting dracut-initqueue.service... Feb 12 20:28:45.412659 ignition[638]: parsed url from cmdline: "" Feb 12 20:28:45.328524 systemd[1]: Finished dracut-initqueue.service. Feb 12 20:28:45.412668 ignition[638]: no config URL provided Feb 12 20:28:45.336515 systemd[1]: Reached target remote-fs-pre.target. Feb 12 20:28:45.412679 ignition[638]: reading system config file "/usr/lib/ignition/user.ign" Feb 12 20:28:45.372098 systemd[1]: Reached target remote-cryptsetup.target. Feb 12 20:28:45.542000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 20:28:45.412695 ignition[638]: no config at "/usr/lib/ignition/user.ign" Feb 12 20:28:45.390085 systemd[1]: Reached target remote-fs.target. Feb 12 20:28:45.412709 ignition[638]: failed to fetch config: resource requires networking Feb 12 20:28:45.400875 systemd[1]: Starting dracut-pre-mount.service... Feb 12 20:28:45.413983 ignition[638]: Ignition finished successfully Feb 12 20:28:45.603000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:28:45.426543 systemd[1]: Finished ignition-fetch-offline.service. Feb 12 20:28:45.471514 ignition[708]: Ignition 2.14.0 Feb 12 20:28:45.442440 systemd[1]: Finished dracut-pre-mount.service. Feb 12 20:28:45.641000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:28:45.471526 ignition[708]: Stage: fetch Feb 12 20:28:45.459478 systemd[1]: Starting ignition-fetch.service... Feb 12 20:28:45.471679 ignition[708]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 12 20:28:45.536888 unknown[708]: fetched base config from "system" Feb 12 20:28:45.471712 ignition[708]: parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6 Feb 12 20:28:45.536898 unknown[708]: fetched base config from "system" Feb 12 20:28:45.478760 ignition[708]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Feb 12 20:28:45.536906 unknown[708]: fetched user config from "gcp" Feb 12 20:28:45.478988 ignition[708]: parsed url from cmdline: "" Feb 12 20:28:45.540519 systemd[1]: Finished ignition-fetch.service. 
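[Editor's note] The iscsid warnings earlier in this log come from a missing `/etc/iscsi/initiatorname.iscsi`, which is harmless here since no software iSCSI targets are used. If they were, a minimal file of the form the daemon asks for would silence the warning; the IQN below is purely illustrative:

```shell
# Create a minimal initiator-name file; the IQN value is a made-up example
# following the iqn.yyyy-mm.<reversed domain name>[:identifier] format.
mkdir -p /etc/iscsi
cat > /etc/iscsi/initiatorname.iscsi <<'EOF'
InitiatorName=iqn.2024-02.com.example:node1
EOF
```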
Feb 12 20:28:45.478993 ignition[708]: no config URL provided Feb 12 20:28:45.545455 systemd[1]: Starting ignition-kargs.service... Feb 12 20:28:45.479001 ignition[708]: reading system config file "/usr/lib/ignition/user.ign" Feb 12 20:28:45.581507 systemd[1]: Finished ignition-kargs.service. Feb 12 20:28:45.479012 ignition[708]: no config at "/usr/lib/ignition/user.ign" Feb 12 20:28:45.606476 systemd[1]: Starting ignition-disks.service... Feb 12 20:28:45.479048 ignition[708]: GET http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data: attempt #1 Feb 12 20:28:45.629645 systemd[1]: Finished ignition-disks.service. Feb 12 20:28:45.487679 ignition[708]: GET result: OK Feb 12 20:28:45.643335 systemd[1]: Reached target initrd-root-device.target. Feb 12 20:28:45.487801 ignition[708]: parsing config with SHA512: 28af6c32c176995067d1269cda67ead38eadce2d692f1111d5a181219da4b4f347e8b82de560a37636c57d225b2ee6357316f6b6fe82e8a25191eb31b3e25c8f Feb 12 20:28:45.660098 systemd[1]: Reached target local-fs-pre.target. Feb 12 20:28:45.537829 ignition[708]: fetch: fetch complete Feb 12 20:28:45.673116 systemd[1]: Reached target local-fs.target. Feb 12 20:28:45.537839 ignition[708]: fetch: fetch passed Feb 12 20:28:45.687136 systemd[1]: Reached target sysinit.target. Feb 12 20:28:45.537898 ignition[708]: Ignition finished successfully Feb 12 20:28:45.703117 systemd[1]: Reached target basic.target. Feb 12 20:28:45.558960 ignition[714]: Ignition 2.14.0 Feb 12 20:28:45.717377 systemd[1]: Starting systemd-fsck-root.service... 
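[Editor's note] The fetch stage above pulls the `user-data` attribute from the GCE metadata server at 169.254.169.254. The same request can be reproduced by hand from inside the instance; Google's metadata server requires the `Metadata-Flavor` header and rejects requests without it:

```shell
# Fetch instance user-data the same way Ignition's gcp provider does.
# The Metadata-Flavor header is mandatory; omitting it yields a 403.
curl -s -H "Metadata-Flavor: Google" \
  "http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data"
```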
Feb 12 20:28:45.558969 ignition[714]: Stage: kargs Feb 12 20:28:45.559102 ignition[714]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 12 20:28:45.559137 ignition[714]: parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6 Feb 12 20:28:45.567016 ignition[714]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Feb 12 20:28:45.570850 ignition[714]: kargs: kargs passed Feb 12 20:28:45.570943 ignition[714]: Ignition finished successfully Feb 12 20:28:45.619100 ignition[720]: Ignition 2.14.0 Feb 12 20:28:45.619108 ignition[720]: Stage: disks Feb 12 20:28:45.619246 ignition[720]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 12 20:28:45.619271 ignition[720]: parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6 Feb 12 20:28:45.626602 ignition[720]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Feb 12 20:28:45.628378 ignition[720]: disks: disks passed Feb 12 20:28:45.628444 ignition[720]: Ignition finished successfully Feb 12 20:28:45.766276 systemd-fsck[728]: ROOT: clean, 602/1628000 files, 124050/1617920 blocks Feb 12 20:28:45.933970 systemd[1]: Finished systemd-fsck-root.service. Feb 12 20:28:45.932000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:28:45.935293 systemd[1]: Mounting sysroot.mount... Feb 12 20:28:45.973106 kernel: EXT4-fs (sda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Feb 12 20:28:45.966382 systemd[1]: Mounted sysroot.mount. Feb 12 20:28:45.980404 systemd[1]: Reached target initrd-root-fs.target. Feb 12 20:28:46.001733 systemd[1]: Mounting sysroot-usr.mount... 
Feb 12 20:28:46.016682 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Feb 12 20:28:46.016784 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Feb 12 20:28:46.016835 systemd[1]: Reached target ignition-diskful.target. Feb 12 20:28:46.038507 systemd[1]: Mounted sysroot-usr.mount. Feb 12 20:28:46.066900 systemd[1]: Mounting sysroot-usr-share-oem.mount... Feb 12 20:28:46.095123 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (734) Feb 12 20:28:46.114183 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Feb 12 20:28:46.114273 kernel: BTRFS info (device sda6): using free space tree Feb 12 20:28:46.114296 kernel: BTRFS info (device sda6): has skinny extents Feb 12 20:28:46.125182 systemd[1]: Starting initrd-setup-root.service... Feb 12 20:28:46.134638 initrd-setup-root[755]: cut: /sysroot/etc/passwd: No such file or directory Feb 12 20:28:46.162088 kernel: BTRFS info (device sda6): enabling ssd optimizations Feb 12 20:28:46.158881 systemd[1]: Mounted sysroot-usr-share-oem.mount. Feb 12 20:28:46.170148 initrd-setup-root[765]: cut: /sysroot/etc/group: No such file or directory Feb 12 20:28:46.188074 initrd-setup-root[773]: cut: /sysroot/etc/shadow: No such file or directory Feb 12 20:28:46.199080 initrd-setup-root[781]: cut: /sysroot/etc/gshadow: No such file or directory Feb 12 20:28:46.255250 systemd[1]: Finished initrd-setup-root.service. Feb 12 20:28:46.295263 kernel: kauditd_printk_skb: 23 callbacks suppressed Feb 12 20:28:46.295298 kernel: audit: type=1130 audit(1707769726.253:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 20:28:46.253000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:28:46.256656 systemd[1]: Starting ignition-mount.service... Feb 12 20:28:46.303264 systemd[1]: Starting sysroot-boot.service... Feb 12 20:28:46.317465 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. Feb 12 20:28:46.317584 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. Feb 12 20:28:46.341057 ignition[799]: INFO : Ignition 2.14.0 Feb 12 20:28:46.341057 ignition[799]: INFO : Stage: mount Feb 12 20:28:46.341057 ignition[799]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 12 20:28:46.341057 ignition[799]: DEBUG : parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6 Feb 12 20:28:46.442152 kernel: audit: type=1130 audit(1707769726.361:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:28:46.442200 kernel: audit: type=1130 audit(1707769726.394:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:28:46.361000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:28:46.394000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 20:28:46.442420 ignition[799]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Feb 12 20:28:46.442420 ignition[799]: INFO : mount: mount passed Feb 12 20:28:46.442420 ignition[799]: INFO : Ignition finished successfully Feb 12 20:28:46.512506 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (809) Feb 12 20:28:46.512544 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Feb 12 20:28:46.512560 kernel: BTRFS info (device sda6): using free space tree Feb 12 20:28:46.512583 kernel: BTRFS info (device sda6): has skinny extents Feb 12 20:28:46.512597 kernel: BTRFS info (device sda6): enabling ssd optimizations Feb 12 20:28:46.341192 systemd-networkd[684]: eth0: Gained IPv6LL Feb 12 20:28:46.352275 systemd[1]: Finished sysroot-boot.service. Feb 12 20:28:46.363426 systemd[1]: Finished ignition-mount.service. Feb 12 20:28:46.397926 systemd[1]: Starting ignition-files.service... Feb 12 20:28:46.551070 ignition[828]: INFO : Ignition 2.14.0 Feb 12 20:28:46.551070 ignition[828]: INFO : Stage: files Feb 12 20:28:46.551070 ignition[828]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 12 20:28:46.551070 ignition[828]: DEBUG : parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6 Feb 12 20:28:46.453421 systemd[1]: Mounting sysroot-usr-share-oem.mount... 
Feb 12 20:28:46.607110 ignition[828]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Feb 12 20:28:46.607110 ignition[828]: DEBUG : files: compiled without relabeling support, skipping Feb 12 20:28:46.607110 ignition[828]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Feb 12 20:28:46.607110 ignition[828]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Feb 12 20:28:46.607110 ignition[828]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Feb 12 20:28:46.607110 ignition[828]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Feb 12 20:28:46.607110 ignition[828]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Feb 12 20:28:46.607110 ignition[828]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.3.0.tgz" Feb 12 20:28:46.607110 ignition[828]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://github.com/containernetworking/plugins/releases/download/v1.3.0/cni-plugins-linux-amd64-v1.3.0.tgz: attempt #1 Feb 12 20:28:46.517082 systemd[1]: Mounted sysroot-usr-share-oem.mount. 
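[Editor's note] The "file matches expected sum of:" lines that follow are Ignition verifying each downloaded artifact against a SHA-512 digest from the config. A minimal sketch of that check with coreutils (the payload here is illustrative, not taken from the log):

```shell
# Recreate Ignition's checksum verification for a fetched artifact.
printf 'example artifact contents' > /tmp/artifact

# Record the digest, as an Ignition config's verification.hash field would.
expected=$(sha512sum /tmp/artifact | awk '{print $1}')

# SHA-512 hex digests are always 128 characters.
[ "${#expected}" -eq 128 ] && echo "digest length ok"

# Verify the file against the recorded digest (note: two spaces between
# digest and filename in sha512sum's check format).
echo "${expected}  /tmp/artifact" | sha512sum -c -
```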
Feb 12 20:28:46.572679 unknown[828]: wrote ssh authorized keys file for user: core Feb 12 20:28:46.896967 ignition[828]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Feb 12 20:28:47.163894 ignition[828]: DEBUG : files: createFilesystemsFiles: createFiles: op(3): file matches expected sum of: 5d0324ca8a3c90c680b6e1fddb245a2255582fa15949ba1f3c6bb7323df9d3af754dae98d6e40ac9ccafb2999c932df2c4288d418949a4915d928eb23c090540 Feb 12 20:28:47.188123 ignition[828]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.3.0.tgz" Feb 12 20:28:47.188123 ignition[828]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Feb 12 20:28:47.188123 ignition[828]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Feb 12 20:28:47.257681 ignition[828]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Feb 12 20:28:47.370393 ignition[828]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Feb 12 20:28:47.398112 kernel: BTRFS info: devid 1 device path /dev/sda6 changed to /dev/disk/by-label/OEM scanned by ignition (832) Feb 12 20:28:47.398152 ignition[828]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/etc/hosts" Feb 12 20:28:47.398152 ignition[828]: INFO : files: createFilesystemsFiles: createFiles: op(5): oem config not found in "/usr/share/oem", looking on oem partition Feb 12 20:28:47.398152 ignition[828]: INFO : files: createFilesystemsFiles: createFiles: op(5): op(6): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3166389326" Feb 12 20:28:47.398152 ignition[828]: CRITICAL : files: createFilesystemsFiles: createFiles: op(5): op(6): [failed] mounting "/dev/disk/by-label/OEM" at 
"/mnt/oem3166389326": device or resource busy Feb 12 20:28:47.398152 ignition[828]: ERROR : files: createFilesystemsFiles: createFiles: op(5): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem3166389326", trying btrfs: device or resource busy Feb 12 20:28:47.398152 ignition[828]: INFO : files: createFilesystemsFiles: createFiles: op(5): op(7): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3166389326" Feb 12 20:28:47.503115 ignition[828]: INFO : files: createFilesystemsFiles: createFiles: op(5): op(7): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3166389326" Feb 12 20:28:47.503115 ignition[828]: INFO : files: createFilesystemsFiles: createFiles: op(5): op(8): [started] unmounting "/mnt/oem3166389326" Feb 12 20:28:47.503115 ignition[828]: INFO : files: createFilesystemsFiles: createFiles: op(5): op(8): [finished] unmounting "/mnt/oem3166389326" Feb 12 20:28:47.503115 ignition[828]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/etc/hosts" Feb 12 20:28:47.503115 ignition[828]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/opt/crictl-v1.27.0-linux-amd64.tar.gz" Feb 12 20:28:47.503115 ignition[828]: INFO : files: createFilesystemsFiles: createFiles: op(9): GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.27.0/crictl-v1.27.0-linux-amd64.tar.gz: attempt #1 Feb 12 20:28:47.406825 systemd[1]: mnt-oem3166389326.mount: Deactivated successfully. 
Feb 12 20:28:47.611115 ignition[828]: INFO : files: createFilesystemsFiles: createFiles: op(9): GET result: OK Feb 12 20:28:47.704111 ignition[828]: DEBUG : files: createFilesystemsFiles: createFiles: op(9): file matches expected sum of: aa622325bf05520939f9e020d7a28ab48ac23e2fae6f47d5a4e52174c88c1ebc31b464853e4fd65bd8f5331f330a6ca96fd370d247d3eeaed042da4ee2d1219a Feb 12 20:28:47.728177 ignition[828]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/opt/crictl-v1.27.0-linux-amd64.tar.gz" Feb 12 20:28:47.728177 ignition[828]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/bin/kubectl" Feb 12 20:28:47.728177 ignition[828]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://dl.k8s.io/release/v1.27.2/bin/linux/amd64/kubectl: attempt #1 Feb 12 20:28:47.808471 ignition[828]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Feb 12 20:28:48.121924 ignition[828]: DEBUG : files: createFilesystemsFiles: createFiles: op(a): file matches expected sum of: 857e67001e74840518413593d90c6e64ad3f00d55fa44ad9a8e2ed6135392c908caff7ec19af18cbe10784b8f83afe687a0bc3bacbc9eee984cdeb9c0749cb83 Feb 12 20:28:48.146104 ignition[828]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/bin/kubectl" Feb 12 20:28:48.146104 ignition[828]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/etc/profile.d/google-cloud-sdk.sh" Feb 12 20:28:48.146104 ignition[828]: INFO : files: createFilesystemsFiles: createFiles: op(b): oem config not found in "/usr/share/oem", looking on oem partition Feb 12 20:28:48.146104 ignition[828]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(c): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem86552641" Feb 12 20:28:48.146104 ignition[828]: CRITICAL : files: createFilesystemsFiles: createFiles: op(b): op(c): [failed] mounting 
"/dev/disk/by-label/OEM" at "/mnt/oem86552641": device or resource busy Feb 12 20:28:48.146104 ignition[828]: ERROR : files: createFilesystemsFiles: createFiles: op(b): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem86552641", trying btrfs: device or resource busy Feb 12 20:28:48.146104 ignition[828]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(d): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem86552641" Feb 12 20:28:48.146104 ignition[828]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(d): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem86552641" Feb 12 20:28:48.146104 ignition[828]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(e): [started] unmounting "/mnt/oem86552641" Feb 12 20:28:48.146104 ignition[828]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(e): [finished] unmounting "/mnt/oem86552641" Feb 12 20:28:48.146104 ignition[828]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/etc/profile.d/google-cloud-sdk.sh" Feb 12 20:28:48.146104 ignition[828]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/opt/bin/kubeadm" Feb 12 20:28:48.146104 ignition[828]: INFO : files: createFilesystemsFiles: createFiles: op(f): GET https://dl.k8s.io/release/v1.27.2/bin/linux/amd64/kubeadm: attempt #1 Feb 12 20:28:48.136490 systemd[1]: mnt-oem86552641.mount: Deactivated successfully. 
Feb 12 20:28:48.374134 ignition[828]: INFO : files: createFilesystemsFiles: createFiles: op(f): GET result: OK Feb 12 20:28:48.427756 ignition[828]: DEBUG : files: createFilesystemsFiles: createFiles: op(f): file matches expected sum of: f40216b7d14046931c58072d10c7122934eac5a23c08821371f8b08ac1779443ad11d3458a4c5dcde7cf80fc600a9fefb14b1942aa46a52330248d497ca88836 Feb 12 20:28:48.452121 ignition[828]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/opt/bin/kubeadm" Feb 12 20:28:48.452121 ignition[828]: INFO : files: createFilesystemsFiles: createFiles: op(10): [started] writing file "/sysroot/opt/bin/kubelet" Feb 12 20:28:48.452121 ignition[828]: INFO : files: createFilesystemsFiles: createFiles: op(10): GET https://dl.k8s.io/release/v1.27.2/bin/linux/amd64/kubelet: attempt #1 Feb 12 20:28:48.452121 ignition[828]: INFO : files: createFilesystemsFiles: createFiles: op(10): GET result: OK Feb 12 20:28:49.118637 ignition[828]: DEBUG : files: createFilesystemsFiles: createFiles: op(10): file matches expected sum of: a283da2224d456958b2cb99b4f6faf4457c4ed89e9e95f37d970c637f6a7f64ff4dd4d2bfce538759b2d2090933bece599a285ef8fd132eb383fece9a3941560 Feb 12 20:28:49.146088 ignition[828]: INFO : files: createFilesystemsFiles: createFiles: op(10): [finished] writing file "/sysroot/opt/bin/kubelet" Feb 12 20:28:49.146088 ignition[828]: INFO : files: createFilesystemsFiles: createFiles: op(11): [started] writing file "/sysroot/etc/docker/daemon.json" Feb 12 20:28:49.146088 ignition[828]: INFO : files: createFilesystemsFiles: createFiles: op(11): [finished] writing file "/sysroot/etc/docker/daemon.json" Feb 12 20:28:49.146088 ignition[828]: INFO : files: createFilesystemsFiles: createFiles: op(12): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Feb 12 20:28:49.146088 ignition[828]: INFO : files: createFilesystemsFiles: createFiles: op(12): GET 
https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Feb 12 20:28:49.276629 ignition[828]: INFO : files: createFilesystemsFiles: createFiles: op(12): GET result: OK Feb 12 20:28:49.379631 ignition[828]: INFO : files: createFilesystemsFiles: createFiles: op(12): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Feb 12 20:28:49.379631 ignition[828]: INFO : files: createFilesystemsFiles: createFiles: op(13): [started] writing file "/sysroot/home/core/install.sh" Feb 12 20:28:49.412066 ignition[828]: INFO : files: createFilesystemsFiles: createFiles: op(13): [finished] writing file "/sysroot/home/core/install.sh" Feb 12 20:28:49.412066 ignition[828]: INFO : files: createFilesystemsFiles: createFiles: op(14): [started] writing file "/sysroot/home/core/nginx.yaml" Feb 12 20:28:49.412066 ignition[828]: INFO : files: createFilesystemsFiles: createFiles: op(14): [finished] writing file "/sysroot/home/core/nginx.yaml" Feb 12 20:28:49.412066 ignition[828]: INFO : files: createFilesystemsFiles: createFiles: op(15): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 12 20:28:49.412066 ignition[828]: INFO : files: createFilesystemsFiles: createFiles: op(15): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 12 20:28:49.412066 ignition[828]: INFO : files: createFilesystemsFiles: createFiles: op(16): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 12 20:28:49.412066 ignition[828]: INFO : files: createFilesystemsFiles: createFiles: op(16): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 12 20:28:49.412066 ignition[828]: INFO : files: createFilesystemsFiles: createFiles: op(17): [started] writing file "/sysroot/etc/flatcar/update.conf" Feb 12 20:28:49.412066 ignition[828]: INFO : files: createFilesystemsFiles: createFiles: op(17): [finished] writing file "/sysroot/etc/flatcar/update.conf" Feb 12 20:28:49.412066 ignition[828]: INFO : files: createFilesystemsFiles: createFiles: 
op(18): [started] writing file "/sysroot/etc/systemd/system/oem-gce.service" Feb 12 20:28:49.412066 ignition[828]: INFO : files: createFilesystemsFiles: createFiles: op(18): oem config not found in "/usr/share/oem", looking on oem partition Feb 12 20:28:49.412066 ignition[828]: INFO : files: createFilesystemsFiles: createFiles: op(18): op(19): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem41585562" Feb 12 20:28:49.412066 ignition[828]: CRITICAL : files: createFilesystemsFiles: createFiles: op(18): op(19): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem41585562": device or resource busy Feb 12 20:28:49.412066 ignition[828]: ERROR : files: createFilesystemsFiles: createFiles: op(18): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem41585562", trying btrfs: device or resource busy Feb 12 20:28:49.412066 ignition[828]: INFO : files: createFilesystemsFiles: createFiles: op(18): op(1a): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem41585562" Feb 12 20:28:49.896123 kernel: audit: type=1130 audit(1707769729.444:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:28:49.896179 kernel: audit: type=1130 audit(1707769729.542:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:28:49.896214 kernel: audit: type=1130 audit(1707769729.581:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:28:49.896237 kernel: audit: type=1131 audit(1707769729.581:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 20:28:49.896294 kernel: audit: type=1130 audit(1707769729.688:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:28:49.896317 kernel: audit: type=1131 audit(1707769729.688:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:28:49.896331 kernel: audit: type=1130 audit(1707769729.856:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:28:49.444000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:28:49.542000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:28:49.581000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:28:49.581000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:28:49.688000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 20:28:49.688000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:28:49.856000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:28:49.402606 systemd[1]: mnt-oem41585562.mount: Deactivated successfully. Feb 12 20:28:49.913073 ignition[828]: INFO : files: createFilesystemsFiles: createFiles: op(18): op(1a): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem41585562" Feb 12 20:28:49.913073 ignition[828]: INFO : files: createFilesystemsFiles: createFiles: op(18): op(1b): [started] unmounting "/mnt/oem41585562" Feb 12 20:28:49.913073 ignition[828]: INFO : files: createFilesystemsFiles: createFiles: op(18): op(1b): [finished] unmounting "/mnt/oem41585562" Feb 12 20:28:49.913073 ignition[828]: INFO : files: createFilesystemsFiles: createFiles: op(18): [finished] writing file "/sysroot/etc/systemd/system/oem-gce.service" Feb 12 20:28:49.913073 ignition[828]: INFO : files: createFilesystemsFiles: createFiles: op(1c): [started] writing file "/sysroot/etc/systemd/system/oem-gce-enable-oslogin.service" Feb 12 20:28:49.913073 ignition[828]: INFO : files: createFilesystemsFiles: createFiles: op(1c): oem config not found in "/usr/share/oem", looking on oem partition Feb 12 20:28:49.913073 ignition[828]: INFO : files: createFilesystemsFiles: createFiles: op(1c): op(1d): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1558258341" Feb 12 20:28:49.913073 ignition[828]: CRITICAL : files: createFilesystemsFiles: createFiles: op(1c): op(1d): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1558258341": device or resource busy Feb 12 20:28:49.913073 ignition[828]: ERROR : files: createFilesystemsFiles: createFiles: op(1c): failed to mount ext4 
device "/dev/disk/by-label/OEM" at "/mnt/oem1558258341", trying btrfs: device or resource busy Feb 12 20:28:49.913073 ignition[828]: INFO : files: createFilesystemsFiles: createFiles: op(1c): op(1e): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1558258341" Feb 12 20:28:49.913073 ignition[828]: INFO : files: createFilesystemsFiles: createFiles: op(1c): op(1e): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1558258341" Feb 12 20:28:49.913073 ignition[828]: INFO : files: createFilesystemsFiles: createFiles: op(1c): op(1f): [started] unmounting "/mnt/oem1558258341" Feb 12 20:28:49.913073 ignition[828]: INFO : files: createFilesystemsFiles: createFiles: op(1c): op(1f): [finished] unmounting "/mnt/oem1558258341" Feb 12 20:28:49.913073 ignition[828]: INFO : files: createFilesystemsFiles: createFiles: op(1c): [finished] writing file "/sysroot/etc/systemd/system/oem-gce-enable-oslogin.service" Feb 12 20:28:49.913073 ignition[828]: INFO : files: op(20): [started] processing unit "coreos-metadata-sshkeys@.service" Feb 12 20:28:49.993000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:28:49.430176 systemd[1]: mnt-oem1558258341.mount: Deactivated successfully. 
Feb 12 20:28:50.271222 ignition[828]: INFO : files: op(20): [finished] processing unit "coreos-metadata-sshkeys@.service" Feb 12 20:28:50.271222 ignition[828]: INFO : files: op(21): [started] processing unit "oem-gce.service" Feb 12 20:28:50.271222 ignition[828]: INFO : files: op(21): [finished] processing unit "oem-gce.service" Feb 12 20:28:50.271222 ignition[828]: INFO : files: op(22): [started] processing unit "oem-gce-enable-oslogin.service" Feb 12 20:28:50.271222 ignition[828]: INFO : files: op(22): [finished] processing unit "oem-gce-enable-oslogin.service" Feb 12 20:28:50.271222 ignition[828]: INFO : files: op(23): [started] processing unit "prepare-cni-plugins.service" Feb 12 20:28:50.271222 ignition[828]: INFO : files: op(23): op(24): [started] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Feb 12 20:28:50.271222 ignition[828]: INFO : files: op(23): op(24): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Feb 12 20:28:50.271222 ignition[828]: INFO : files: op(23): [finished] processing unit "prepare-cni-plugins.service" Feb 12 20:28:50.271222 ignition[828]: INFO : files: op(25): [started] processing unit "prepare-critools.service" Feb 12 20:28:50.271222 ignition[828]: INFO : files: op(25): op(26): [started] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Feb 12 20:28:50.271222 ignition[828]: INFO : files: op(25): op(26): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Feb 12 20:28:50.271222 ignition[828]: INFO : files: op(25): [finished] processing unit "prepare-critools.service" Feb 12 20:28:50.271222 ignition[828]: INFO : files: op(27): [started] processing unit "prepare-helm.service" Feb 12 20:28:50.271222 ignition[828]: INFO : files: op(27): op(28): [started] writing unit "prepare-helm.service" at 
"/sysroot/etc/systemd/system/prepare-helm.service" Feb 12 20:28:50.271222 ignition[828]: INFO : files: op(27): op(28): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 12 20:28:50.271222 ignition[828]: INFO : files: op(27): [finished] processing unit "prepare-helm.service" Feb 12 20:28:50.271222 ignition[828]: INFO : files: op(29): [started] setting preset to enabled for "prepare-helm.service" Feb 12 20:28:50.271222 ignition[828]: INFO : files: op(29): [finished] setting preset to enabled for "prepare-helm.service" Feb 12 20:28:50.271222 ignition[828]: INFO : files: op(2a): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service " Feb 12 20:28:50.297000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:28:50.336000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:28:50.349000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:28:50.369000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:28:50.469000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:28:50.523000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Feb 12 20:28:50.541000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:28:50.564000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:28:50.607000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:28:50.628000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:28:49.445637 systemd[1]: Finished ignition-files.service. Feb 12 20:28:50.661000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:28:50.670332 iscsid[693]: iscsid shutting down. Feb 12 20:28:50.677000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 20:28:50.686290 ignition[828]: INFO : files: op(2a): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service " Feb 12 20:28:50.686290 ignition[828]: INFO : files: op(2b): [started] setting preset to enabled for "oem-gce.service" Feb 12 20:28:50.686290 ignition[828]: INFO : files: op(2b): [finished] setting preset to enabled for "oem-gce.service" Feb 12 20:28:50.686290 ignition[828]: INFO : files: op(2c): [started] setting preset to enabled for "oem-gce-enable-oslogin.service" Feb 12 20:28:50.686290 ignition[828]: INFO : files: op(2c): [finished] setting preset to enabled for "oem-gce-enable-oslogin.service" Feb 12 20:28:50.686290 ignition[828]: INFO : files: op(2d): [started] setting preset to enabled for "prepare-cni-plugins.service" Feb 12 20:28:50.686290 ignition[828]: INFO : files: op(2d): [finished] setting preset to enabled for "prepare-cni-plugins.service" Feb 12 20:28:50.686290 ignition[828]: INFO : files: op(2e): [started] setting preset to enabled for "prepare-critools.service" Feb 12 20:28:50.686290 ignition[828]: INFO : files: op(2e): [finished] setting preset to enabled for "prepare-critools.service" Feb 12 20:28:50.686290 ignition[828]: INFO : files: createResultFile: createFiles: op(2f): [started] writing file "/sysroot/etc/.ignition-result.json" Feb 12 20:28:50.686290 ignition[828]: INFO : files: createResultFile: createFiles: op(2f): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 12 20:28:50.686290 ignition[828]: INFO : files: files passed Feb 12 20:28:50.686290 ignition[828]: INFO : Ignition finished successfully Feb 12 20:28:50.693000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 20:28:50.716000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:28:50.833000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:28:50.855000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:28:50.895000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:28:50.919000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:28:50.919000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:28:50.967444 initrd-setup-root-after-ignition[851]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 12 20:28:49.457035 systemd[1]: Starting initrd-setup-root-after-ignition.service... Feb 12 20:28:49.488323 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Feb 12 20:28:51.011000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:28:49.489465 systemd[1]: Starting ignition-quench.service... 
Feb 12 20:28:51.036000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:28:51.037000 audit: BPF prog-id=6 op=UNLOAD Feb 12 20:28:49.518559 systemd[1]: Finished initrd-setup-root-after-ignition.service. Feb 12 20:28:49.544554 systemd[1]: ignition-quench.service: Deactivated successfully. Feb 12 20:28:49.544706 systemd[1]: Finished ignition-quench.service. Feb 12 20:28:51.085000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:28:49.583530 systemd[1]: Reached target ignition-complete.target. Feb 12 20:28:51.100000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:28:51.110297 ignition[866]: INFO : Ignition 2.14.0 Feb 12 20:28:51.110297 ignition[866]: INFO : Stage: umount Feb 12 20:28:51.110297 ignition[866]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 12 20:28:51.110297 ignition[866]: DEBUG : parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6 Feb 12 20:28:51.110297 ignition[866]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Feb 12 20:28:51.110297 ignition[866]: INFO : umount: umount passed Feb 12 20:28:51.110297 ignition[866]: INFO : Ignition finished successfully Feb 12 20:28:51.116000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 20:28:51.153000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:28:49.646388 systemd[1]: Starting initrd-parse-etc.service... Feb 12 20:28:51.218000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:28:49.688759 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 12 20:28:51.235000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:28:49.688886 systemd[1]: Finished initrd-parse-etc.service. Feb 12 20:28:51.252000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:28:49.690446 systemd[1]: Reached target initrd-fs.target. Feb 12 20:28:49.777275 systemd[1]: Reached target initrd.target. Feb 12 20:28:49.796450 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Feb 12 20:28:51.329284 kernel: kauditd_printk_skb: 30 callbacks suppressed Feb 12 20:28:51.329318 kernel: audit: type=1131 audit(1707769731.290:74): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:28:51.290000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:28:49.797731 systemd[1]: Starting dracut-pre-pivot.service... 
Feb 12 20:28:51.367147 kernel: audit: type=1131 audit(1707769731.336:75): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:28:51.336000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:28:49.829491 systemd[1]: Finished dracut-pre-pivot.service. Feb 12 20:28:51.425158 kernel: audit: type=1130 audit(1707769731.374:76): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:28:51.425204 kernel: audit: type=1131 audit(1707769731.374:77): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:28:51.374000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:28:51.374000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:28:49.859812 systemd[1]: Starting initrd-cleanup.service... Feb 12 20:28:49.913634 systemd[1]: Stopped target nss-lookup.target. Feb 12 20:28:49.930426 systemd[1]: Stopped target remote-cryptsetup.target. Feb 12 20:28:49.951530 systemd[1]: Stopped target timers.target. Feb 12 20:28:49.972448 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 12 20:28:49.972637 systemd[1]: Stopped dracut-pre-pivot.service. Feb 12 20:28:49.995624 systemd[1]: Stopped target initrd.target. 
Feb 12 20:28:51.498124 systemd-journald[190]: Received SIGTERM from PID 1 (systemd). Feb 12 20:28:50.019478 systemd[1]: Stopped target basic.target. Feb 12 20:28:50.042514 systemd[1]: Stopped target ignition-complete.target. Feb 12 20:28:50.092462 systemd[1]: Stopped target ignition-diskful.target. Feb 12 20:28:50.113494 systemd[1]: Stopped target initrd-root-device.target. Feb 12 20:28:50.145372 systemd[1]: Stopped target remote-fs.target. Feb 12 20:28:50.170273 systemd[1]: Stopped target remote-fs-pre.target. Feb 12 20:28:50.193390 systemd[1]: Stopped target sysinit.target. Feb 12 20:28:50.216326 systemd[1]: Stopped target local-fs.target. Feb 12 20:28:50.242294 systemd[1]: Stopped target local-fs-pre.target. Feb 12 20:28:50.263259 systemd[1]: Stopped target swap.target. Feb 12 20:28:50.278262 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 12 20:28:50.278486 systemd[1]: Stopped dracut-pre-mount.service. Feb 12 20:28:50.299477 systemd[1]: Stopped target cryptsetup.target. Feb 12 20:28:50.318245 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 12 20:28:50.318465 systemd[1]: Stopped dracut-initqueue.service. Feb 12 20:28:50.338462 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 12 20:28:50.338656 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Feb 12 20:28:50.351594 systemd[1]: ignition-files.service: Deactivated successfully. Feb 12 20:28:50.351806 systemd[1]: Stopped ignition-files.service. Feb 12 20:28:50.373062 systemd[1]: Stopping ignition-mount.service... Feb 12 20:28:50.415468 systemd[1]: Stopping iscsid.service... Feb 12 20:28:50.451094 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 12 20:28:50.451381 systemd[1]: Stopped kmod-static-nodes.service. Feb 12 20:28:50.472748 systemd[1]: Stopping sysroot-boot.service... Feb 12 20:28:50.513154 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. 
Feb 12 20:28:50.513447 systemd[1]: Stopped systemd-udev-trigger.service.
Feb 12 20:28:50.525570 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Feb 12 20:28:50.525747 systemd[1]: Stopped dracut-pre-trigger.service.
Feb 12 20:28:50.547815 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Feb 12 20:28:50.548678 systemd[1]: iscsid.service: Deactivated successfully.
Feb 12 20:28:50.548792 systemd[1]: Stopped iscsid.service.
Feb 12 20:28:50.567056 systemd[1]: ignition-mount.service: Deactivated successfully.
Feb 12 20:28:50.567165 systemd[1]: Stopped ignition-mount.service.
Feb 12 20:28:50.609925 systemd[1]: sysroot-boot.service: Deactivated successfully.
Feb 12 20:28:50.610060 systemd[1]: Stopped sysroot-boot.service.
Feb 12 20:28:50.630867 systemd[1]: ignition-disks.service: Deactivated successfully.
Feb 12 20:28:50.631078 systemd[1]: Stopped ignition-disks.service.
Feb 12 20:28:50.663227 systemd[1]: ignition-kargs.service: Deactivated successfully.
Feb 12 20:28:50.663354 systemd[1]: Stopped ignition-kargs.service.
Feb 12 20:28:50.679222 systemd[1]: ignition-fetch.service: Deactivated successfully.
Feb 12 20:28:50.679304 systemd[1]: Stopped ignition-fetch.service.
Feb 12 20:28:50.695212 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Feb 12 20:28:50.695300 systemd[1]: Stopped ignition-fetch-offline.service.
Feb 12 20:28:50.718230 systemd[1]: Stopped target paths.target.
Feb 12 20:28:50.739252 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Feb 12 20:28:50.743106 systemd[1]: Stopped systemd-ask-password-console.path.
Feb 12 20:28:50.761145 systemd[1]: Stopped target slices.target.
Feb 12 20:28:50.761317 systemd[1]: Stopped target sockets.target.
Feb 12 20:28:50.791330 systemd[1]: iscsid.socket: Deactivated successfully.
Feb 12 20:28:50.791418 systemd[1]: Closed iscsid.socket.
Feb 12 20:28:50.813161 systemd[1]: ignition-setup.service: Deactivated successfully.
Feb 12 20:28:50.813261 systemd[1]: Stopped ignition-setup.service.
Feb 12 20:28:50.835242 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Feb 12 20:28:50.835325 systemd[1]: Stopped initrd-setup-root.service.
Feb 12 20:28:50.857375 systemd[1]: Stopping iscsiuio.service...
Feb 12 20:28:50.877528 systemd[1]: iscsiuio.service: Deactivated successfully.
Feb 12 20:28:50.877647 systemd[1]: Stopped iscsiuio.service.
Feb 12 20:28:50.897656 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Feb 12 20:28:50.897781 systemd[1]: Finished initrd-cleanup.service.
Feb 12 20:28:50.922214 systemd[1]: Stopped target network.target.
Feb 12 20:28:50.944139 systemd[1]: iscsiuio.socket: Deactivated successfully.
Feb 12 20:28:50.944215 systemd[1]: Closed iscsiuio.socket.
Feb 12 20:28:50.958381 systemd[1]: Stopping systemd-networkd.service...
Feb 12 20:28:50.962001 systemd-networkd[684]: eth0: DHCPv6 lease lost
Feb 12 20:28:51.506000 audit: BPF prog-id=9 op=UNLOAD
Feb 12 20:28:50.974388 systemd[1]: Stopping systemd-resolved.service...
Feb 12 20:28:50.995590 systemd[1]: systemd-resolved.service: Deactivated successfully.
Feb 12 20:28:50.995719 systemd[1]: Stopped systemd-resolved.service.
Feb 12 20:28:51.014110 systemd[1]: systemd-networkd.service: Deactivated successfully.
Feb 12 20:28:51.014264 systemd[1]: Stopped systemd-networkd.service.
Feb 12 20:28:51.039177 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Feb 12 20:28:51.039248 systemd[1]: Closed systemd-networkd.socket.
Feb 12 20:28:51.054223 systemd[1]: Stopping network-cleanup.service...
Feb 12 20:28:51.069108 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Feb 12 20:28:51.069359 systemd[1]: Stopped parse-ip-for-networkd.service.
Feb 12 20:28:51.087413 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 12 20:28:51.087494 systemd[1]: Stopped systemd-sysctl.service.
Feb 12 20:28:51.102368 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Feb 12 20:28:51.102433 systemd[1]: Stopped systemd-modules-load.service.
Feb 12 20:28:51.118438 systemd[1]: Stopping systemd-udevd.service...
Feb 12 20:28:51.127114 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Feb 12 20:28:51.128222 systemd[1]: systemd-udevd.service: Deactivated successfully.
Feb 12 20:28:51.128397 systemd[1]: Stopped systemd-udevd.service.
Feb 12 20:28:51.156715 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Feb 12 20:28:51.156816 systemd[1]: Closed systemd-udevd-control.socket.
Feb 12 20:28:51.175351 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Feb 12 20:28:51.175404 systemd[1]: Closed systemd-udevd-kernel.socket.
Feb 12 20:28:51.192332 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Feb 12 20:28:51.192403 systemd[1]: Stopped dracut-pre-udev.service.
Feb 12 20:28:51.220260 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Feb 12 20:28:51.220338 systemd[1]: Stopped dracut-cmdline.service.
Feb 12 20:28:51.237217 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 12 20:28:51.237287 systemd[1]: Stopped dracut-cmdline-ask.service.
Feb 12 20:28:51.255494 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
Feb 12 20:28:51.276102 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 12 20:28:51.276248 systemd[1]: Stopped systemd-vconsole-setup.service.
Feb 12 20:28:51.320517 systemd[1]: network-cleanup.service: Deactivated successfully.
Feb 12 20:28:51.320666 systemd[1]: Stopped network-cleanup.service.
Feb 12 20:28:51.338552 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Feb 12 20:28:51.338699 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
Feb 12 20:28:51.376588 systemd[1]: Reached target initrd-switch-root.target.
Feb 12 20:28:51.434224 systemd[1]: Starting initrd-switch-root.service...
Feb 12 20:28:51.462374 systemd[1]: Switching root.
Feb 12 20:28:51.510401 systemd-journald[190]: Journal stopped
Feb 12 20:28:56.285819 kernel: audit: type=1334 audit(1707769731.506:78): prog-id=9 op=UNLOAD
Feb 12 20:28:56.288614 kernel: SELinux: Class mctp_socket not defined in policy.
Feb 12 20:28:56.288660 kernel: SELinux: Class anon_inode not defined in policy.
Feb 12 20:28:56.288685 kernel: SELinux: the above unknown classes and permissions will be allowed
Feb 12 20:28:56.288713 kernel: SELinux: policy capability network_peer_controls=1
Feb 12 20:28:56.288739 kernel: SELinux: policy capability open_perms=1
Feb 12 20:28:56.288762 kernel: SELinux: policy capability extended_socket_class=1
Feb 12 20:28:56.288783 kernel: SELinux: policy capability always_check_network=0
Feb 12 20:28:56.288814 kernel: SELinux: policy capability cgroup_seclabel=1
Feb 12 20:28:56.288840 kernel: SELinux: policy capability nnp_nosuid_transition=1
Feb 12 20:28:56.288862 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Feb 12 20:28:56.288888 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Feb 12 20:28:56.288927 kernel: audit: type=1403 audit(1707769731.846:79): auid=4294967295 ses=4294967295 lsm=selinux res=1
Feb 12 20:28:56.288953 systemd[1]: Successfully loaded SELinux policy in 130.450ms.
Feb 12 20:28:56.288995 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 11.645ms.
Feb 12 20:28:56.289021 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Feb 12 20:28:56.289045 systemd[1]: Detected virtualization kvm.
Feb 12 20:28:56.289072 systemd[1]: Detected architecture x86-64.
Feb 12 20:28:56.289107 systemd[1]: Detected first boot.
Feb 12 20:28:56.289130 systemd[1]: Initializing machine ID from VM UUID.
Feb 12 20:28:56.289154 kernel: audit: type=1400 audit(1707769732.025:80): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Feb 12 20:28:56.289176 kernel: audit: type=1400 audit(1707769732.025:81): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Feb 12 20:28:56.289198 kernel: audit: type=1334 audit(1707769732.045:82): prog-id=10 op=LOAD
Feb 12 20:28:56.289220 kernel: audit: type=1334 audit(1707769732.045:83): prog-id=10 op=UNLOAD
Feb 12 20:28:56.289245 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
Feb 12 20:28:56.289267 systemd[1]: Populated /etc with preset unit settings.
Feb 12 20:28:56.289291 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Feb 12 20:28:56.289320 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 12 20:28:56.289345 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 12 20:28:56.289371 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Feb 12 20:28:56.289400 systemd[1]: Stopped initrd-switch-root.service.
Feb 12 20:28:56.289424 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Feb 12 20:28:56.289452 systemd[1]: Created slice system-addon\x2dconfig.slice.
Feb 12 20:28:56.289475 systemd[1]: Created slice system-addon\x2drun.slice.
Feb 12 20:28:56.289498 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice.
Feb 12 20:28:56.289521 systemd[1]: Created slice system-getty.slice.
Feb 12 20:28:56.289543 systemd[1]: Created slice system-modprobe.slice.
Feb 12 20:28:56.289568 systemd[1]: Created slice system-serial\x2dgetty.slice.
Feb 12 20:28:56.289592 systemd[1]: Created slice system-system\x2dcloudinit.slice.
Feb 12 20:28:56.289615 systemd[1]: Created slice system-systemd\x2dfsck.slice.
Feb 12 20:28:56.289642 systemd[1]: Created slice user.slice.
Feb 12 20:28:56.289669 systemd[1]: Started systemd-ask-password-console.path.
Feb 12 20:28:56.289692 systemd[1]: Started systemd-ask-password-wall.path.
Feb 12 20:28:56.289716 systemd[1]: Set up automount boot.automount.
Feb 12 20:28:56.289739 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
Feb 12 20:28:56.289763 systemd[1]: Stopped target initrd-switch-root.target.
Feb 12 20:28:56.289786 systemd[1]: Stopped target initrd-fs.target.
Feb 12 20:28:56.289809 systemd[1]: Stopped target initrd-root-fs.target.
Feb 12 20:28:56.289837 systemd[1]: Reached target integritysetup.target.
Feb 12 20:28:56.289860 systemd[1]: Reached target remote-cryptsetup.target.
Feb 12 20:28:56.289886 systemd[1]: Reached target remote-fs.target.
Feb 12 20:28:56.289920 systemd[1]: Reached target slices.target.
Feb 12 20:28:56.289943 systemd[1]: Reached target swap.target.
Feb 12 20:28:56.289966 systemd[1]: Reached target torcx.target.
Feb 12 20:28:56.289989 systemd[1]: Reached target veritysetup.target.
Feb 12 20:28:56.290013 systemd[1]: Listening on systemd-coredump.socket.
Feb 12 20:28:56.290035 systemd[1]: Listening on systemd-initctl.socket.
Feb 12 20:28:56.290060 systemd[1]: Listening on systemd-networkd.socket.
Feb 12 20:28:56.290087 systemd[1]: Listening on systemd-udevd-control.socket.
Feb 12 20:28:56.290111 systemd[1]: Listening on systemd-udevd-kernel.socket.
Feb 12 20:28:56.290134 systemd[1]: Listening on systemd-userdbd.socket.
Feb 12 20:28:56.290157 systemd[1]: Mounting dev-hugepages.mount...
Feb 12 20:28:56.290180 systemd[1]: Mounting dev-mqueue.mount...
Feb 12 20:28:56.290203 systemd[1]: Mounting media.mount...
Feb 12 20:28:56.290227 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 12 20:28:56.290250 systemd[1]: Mounting sys-kernel-debug.mount...
Feb 12 20:28:56.290273 systemd[1]: Mounting sys-kernel-tracing.mount...
Feb 12 20:28:56.290300 systemd[1]: Mounting tmp.mount...
Feb 12 20:28:56.290322 systemd[1]: Starting flatcar-tmpfiles.service...
Feb 12 20:28:56.290345 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Feb 12 20:28:56.290369 systemd[1]: Starting kmod-static-nodes.service...
Feb 12 20:28:56.290399 systemd[1]: Starting modprobe@configfs.service...
Feb 12 20:28:56.290422 systemd[1]: Starting modprobe@dm_mod.service...
Feb 12 20:28:56.290445 systemd[1]: Starting modprobe@drm.service...
Feb 12 20:28:56.290468 systemd[1]: Starting modprobe@efi_pstore.service...
Feb 12 20:28:56.290491 systemd[1]: Starting modprobe@fuse.service...
Feb 12 20:28:56.290518 systemd[1]: Starting modprobe@loop.service...
Feb 12 20:28:56.290540 kernel: fuse: init (API version 7.34)
Feb 12 20:28:56.290562 kernel: loop: module loaded
Feb 12 20:28:56.290586 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Feb 12 20:28:56.290608 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Feb 12 20:28:56.290632 systemd[1]: Stopped systemd-fsck-root.service.
Feb 12 20:28:56.290657 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Feb 12 20:28:56.290693 systemd[1]: Stopped systemd-fsck-usr.service.
Feb 12 20:28:56.290719 systemd[1]: Stopped systemd-journald.service.
Feb 12 20:28:56.290743 systemd[1]: Starting systemd-journald.service...
Feb 12 20:28:56.290766 systemd[1]: Starting systemd-modules-load.service...
Feb 12 20:28:56.290789 systemd[1]: Starting systemd-network-generator.service...
Feb 12 20:28:56.290813 systemd[1]: Starting systemd-remount-fs.service...
Feb 12 20:28:56.290843 systemd-journald[989]: Journal started
Feb 12 20:28:56.290951 systemd-journald[989]: Runtime Journal (/run/log/journal/8278613bd14fef4f2fabbe5f55962520) is 8.0M, max 148.8M, 140.8M free.
Feb 12 20:28:51.846000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1
Feb 12 20:28:52.025000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Feb 12 20:28:52.025000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Feb 12 20:28:52.045000 audit: BPF prog-id=10 op=LOAD
Feb 12 20:28:52.045000 audit: BPF prog-id=10 op=UNLOAD
Feb 12 20:28:52.067000 audit: BPF prog-id=11 op=LOAD
Feb 12 20:28:52.067000 audit: BPF prog-id=11 op=UNLOAD
Feb 12 20:28:52.230000 audit[899]: AVC avc: denied { associate } for pid=899 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023"
Feb 12 20:28:52.230000 audit[899]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c0001058dc a1=c00002ae40 a2=c000029b00 a3=32 items=0 ppid=882 pid=899 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 12 20:28:52.230000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Feb 12 20:28:52.243000 audit[899]: AVC avc: denied { associate } for pid=899 comm="torcx-generator" name="lib" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1
Feb 12 20:28:52.243000 audit[899]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c0001059b5 a2=1ed a3=0 items=2 ppid=882 pid=899 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 12 20:28:52.243000 audit: CWD cwd="/"
Feb 12 20:28:52.243000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:28:52.243000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:28:52.243000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Feb 12 20:28:55.444000 audit: BPF prog-id=12 op=LOAD
Feb 12 20:28:55.444000 audit: BPF prog-id=3 op=UNLOAD
Feb 12 20:28:55.444000 audit: BPF prog-id=13 op=LOAD
Feb 12 20:28:55.444000 audit: BPF prog-id=14 op=LOAD
Feb 12 20:28:55.444000 audit: BPF prog-id=4 op=UNLOAD
Feb 12 20:28:55.444000 audit: BPF prog-id=5 op=UNLOAD
Feb 12 20:28:55.445000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:28:55.459000 audit: BPF prog-id=12 op=UNLOAD
Feb 12 20:28:55.468000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:28:55.468000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:28:56.199000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:28:56.221000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:28:56.236000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:28:56.236000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:28:56.237000 audit: BPF prog-id=15 op=LOAD
Feb 12 20:28:56.237000 audit: BPF prog-id=16 op=LOAD
Feb 12 20:28:56.237000 audit: BPF prog-id=17 op=LOAD
Feb 12 20:28:56.237000 audit: BPF prog-id=13 op=UNLOAD
Feb 12 20:28:56.237000 audit: BPF prog-id=14 op=UNLOAD
Feb 12 20:28:56.280000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Feb 12 20:28:56.280000 audit[989]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7ffd15c8b490 a2=4000 a3=7ffd15c8b52c items=0 ppid=1 pid=989 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 12 20:28:56.280000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
Feb 12 20:28:52.227054 /usr/lib/systemd/system-generators/torcx-generator[899]: time="2024-02-12T20:28:52Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]"
Feb 12 20:28:55.443197 systemd[1]: Queued start job for default target multi-user.target.
Feb 12 20:28:52.228291 /usr/lib/systemd/system-generators/torcx-generator[899]: time="2024-02-12T20:28:52Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Feb 12 20:28:55.447448 systemd[1]: systemd-journald.service: Deactivated successfully.
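The audit PROCTITLE records above carry the generator's command line as a hex-encoded byte string, with NUL bytes separating the arguments (the kernel truncates the field, which is why the last path in the log above ends mid-word). A minimal sketch of how such a payload can be decoded, using only the Python standard library; the function name `decode_proctitle` is illustrative, not part of any audit tooling:

```python
# Decode an audit PROCTITLE hex payload into its argument list.
# The kernel hex-encodes the raw process title and separates the
# original argv entries with NUL (0x00) bytes; the field may be
# truncated, so the final argument can be cut off mid-string.

def decode_proctitle(hex_payload: str) -> list[str]:
    """Split a hex-encoded proctitle into NUL-separated arguments."""
    raw = bytes.fromhex(hex_payload)
    return [part.decode("utf-8", errors="replace") for part in raw.split(b"\x00")]

if __name__ == "__main__":
    # First two arguments of the torcx-generator PROCTITLE from the log above.
    payload = (
        "2F7573722F6C69622F73797374656D642F73797374656D2D67656E65"
        "7261746F72732F746F7263782D67656E657261746F72"
        "00"
        "2F72756E2F73797374656D642F67656E657261746F72"
    )
    print(decode_proctitle(payload))
    # ['/usr/lib/systemd/system-generators/torcx-generator', '/run/systemd/generator']
```

The same decoding explains why the two PROCTITLE records in the log are identical: both were emitted for syscalls made by the same torcx-generator invocation.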
Feb 12 20:28:52.228333 /usr/lib/systemd/system-generators/torcx-generator[899]: time="2024-02-12T20:28:52Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Feb 12 20:28:52.228403 /usr/lib/systemd/system-generators/torcx-generator[899]: time="2024-02-12T20:28:52Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12"
Feb 12 20:28:52.228426 /usr/lib/systemd/system-generators/torcx-generator[899]: time="2024-02-12T20:28:52Z" level=debug msg="skipped missing lower profile" missing profile=oem
Feb 12 20:28:52.228489 /usr/lib/systemd/system-generators/torcx-generator[899]: time="2024-02-12T20:28:52Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory"
Feb 12 20:28:52.228517 /usr/lib/systemd/system-generators/torcx-generator[899]: time="2024-02-12T20:28:52Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)=
Feb 12 20:28:52.228858 /usr/lib/systemd/system-generators/torcx-generator[899]: time="2024-02-12T20:28:52Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack
Feb 12 20:28:52.228967 /usr/lib/systemd/system-generators/torcx-generator[899]: time="2024-02-12T20:28:52Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Feb 12 20:28:52.228998 /usr/lib/systemd/system-generators/torcx-generator[899]: time="2024-02-12T20:28:52Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Feb 12 20:28:52.230487 /usr/lib/systemd/system-generators/torcx-generator[899]: time="2024-02-12T20:28:52Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10
Feb 12 20:28:52.230560 /usr/lib/systemd/system-generators/torcx-generator[899]: time="2024-02-12T20:28:52Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl
Feb 12 20:28:52.230599 /usr/lib/systemd/system-generators/torcx-generator[899]: time="2024-02-12T20:28:52Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.2: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.2
Feb 12 20:28:52.230632 /usr/lib/systemd/system-generators/torcx-generator[899]: time="2024-02-12T20:28:52Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store
Feb 12 20:28:52.230670 /usr/lib/systemd/system-generators/torcx-generator[899]: time="2024-02-12T20:28:52Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.2: no such file or directory" path=/var/lib/torcx/store/3510.3.2
Feb 12 20:28:52.230700 /usr/lib/systemd/system-generators/torcx-generator[899]: time="2024-02-12T20:28:52Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store
Feb 12 20:28:54.797237 /usr/lib/systemd/system-generators/torcx-generator[899]: time="2024-02-12T20:28:54Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Feb 12 20:28:54.797534 /usr/lib/systemd/system-generators/torcx-generator[899]: time="2024-02-12T20:28:54Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Feb 12 20:28:54.797681 /usr/lib/systemd/system-generators/torcx-generator[899]: time="2024-02-12T20:28:54Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Feb 12 20:28:54.797923 /usr/lib/systemd/system-generators/torcx-generator[899]: time="2024-02-12T20:28:54Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Feb 12 20:28:54.797982 /usr/lib/systemd/system-generators/torcx-generator[899]: time="2024-02-12T20:28:54Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile=
Feb 12 20:28:54.798051 /usr/lib/systemd/system-generators/torcx-generator[899]: time="2024-02-12T20:28:54Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx
Feb 12 20:28:56.310964 systemd[1]: Starting systemd-udev-trigger.service...
Feb 12 20:28:56.330460 systemd[1]: verity-setup.service: Deactivated successfully.
Feb 12 20:28:56.330579 systemd[1]: Stopped verity-setup.service.
Feb 12 20:28:56.336000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:28:56.355791 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 12 20:28:56.355908 kernel: kauditd_printk_skb: 33 callbacks suppressed
Feb 12 20:28:56.355973 kernel: audit: type=1131 audit(1707769736.336:108): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:28:56.387962 systemd[1]: Started systemd-journald.service.
Feb 12 20:28:56.395000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:28:56.415424 systemd[1]: Mounted dev-hugepages.mount.
Feb 12 20:28:56.419935 kernel: audit: type=1130 audit(1707769736.395:109): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:28:56.427312 systemd[1]: Mounted dev-mqueue.mount.
Feb 12 20:28:56.434303 systemd[1]: Mounted media.mount.
Feb 12 20:28:56.441259 systemd[1]: Mounted sys-kernel-debug.mount.
Feb 12 20:28:56.450314 systemd[1]: Mounted sys-kernel-tracing.mount.
Feb 12 20:28:56.459281 systemd[1]: Mounted tmp.mount.
Feb 12 20:28:56.466465 systemd[1]: Finished flatcar-tmpfiles.service.
Feb 12 20:28:56.473000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:28:56.475665 systemd[1]: Finished kmod-static-nodes.service.
Feb 12 20:28:56.497994 kernel: audit: type=1130 audit(1707769736.473:110): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:28:56.504000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:28:56.506601 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Feb 12 20:28:56.506892 systemd[1]: Finished modprobe@configfs.service.
Feb 12 20:28:56.528996 kernel: audit: type=1130 audit(1707769736.504:111): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:28:56.535000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:28:56.537625 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 12 20:28:56.537838 systemd[1]: Finished modprobe@dm_mod.service.
Feb 12 20:28:56.581997 kernel: audit: type=1130 audit(1707769736.535:112): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:28:56.582235 kernel: audit: type=1131 audit(1707769736.535:113): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:28:56.535000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:28:56.589000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:28:56.591661 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 12 20:28:56.591862 systemd[1]: Finished modprobe@drm.service.
Feb 12 20:28:56.637187 kernel: audit: type=1130 audit(1707769736.589:114): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:28:56.637310 kernel: audit: type=1131 audit(1707769736.589:115): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:28:56.589000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:28:56.644000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:28:56.646659 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 12 20:28:56.647086 systemd[1]: Finished modprobe@efi_pstore.service.
Feb 12 20:28:56.690347 kernel: audit: type=1130 audit(1707769736.644:116): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:28:56.690469 kernel: audit: type=1131 audit(1707769736.644:117): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:28:56.644000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:28:56.697000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:28:56.697000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:28:56.699560 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Feb 12 20:28:56.699777 systemd[1]: Finished modprobe@fuse.service.
Feb 12 20:28:56.706000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:28:56.706000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:28:56.708543 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 12 20:28:56.708760 systemd[1]: Finished modprobe@loop.service.
Feb 12 20:28:56.715000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:28:56.715000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:28:56.717583 systemd[1]: Finished systemd-modules-load.service.
Feb 12 20:28:56.724000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:28:56.726559 systemd[1]: Finished systemd-network-generator.service. Feb 12 20:28:56.733000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:28:56.735551 systemd[1]: Finished systemd-remount-fs.service. Feb 12 20:28:56.742000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:28:56.744532 systemd[1]: Finished systemd-udev-trigger.service. Feb 12 20:28:56.751000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:28:56.753779 systemd[1]: Reached target network-pre.target. Feb 12 20:28:56.763578 systemd[1]: Mounting sys-fs-fuse-connections.mount... Feb 12 20:28:56.773515 systemd[1]: Mounting sys-kernel-config.mount... Feb 12 20:28:56.781064 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 12 20:28:56.783975 systemd[1]: Starting systemd-hwdb-update.service... Feb 12 20:28:56.792785 systemd[1]: Starting systemd-journal-flush.service... Feb 12 20:28:56.799818 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 12 20:28:56.802168 systemd[1]: Starting systemd-random-seed.service... 
Feb 12 20:28:56.809093 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Feb 12 20:28:56.811791 systemd-journald[989]: Time spent on flushing to /var/log/journal/8278613bd14fef4f2fabbe5f55962520 is 62.452ms for 1193 entries.
Feb 12 20:28:56.811791 systemd-journald[989]: System Journal (/var/log/journal/8278613bd14fef4f2fabbe5f55962520) is 8.0M, max 584.8M, 576.8M free.
Feb 12 20:28:56.903116 systemd-journald[989]: Received client request to flush runtime journal.
Feb 12 20:28:56.873000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:28:56.883000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:28:56.810856 systemd[1]: Starting systemd-sysctl.service...
Feb 12 20:28:56.827997 systemd[1]: Starting systemd-sysusers.service...
Feb 12 20:28:56.838064 systemd[1]: Starting systemd-udev-settle.service...
Feb 12 20:28:56.906359 udevadm[1003]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Feb 12 20:28:56.848874 systemd[1]: Mounted sys-fs-fuse-connections.mount.
Feb 12 20:28:56.857266 systemd[1]: Mounted sys-kernel-config.mount.
Feb 12 20:28:56.866572 systemd[1]: Finished systemd-random-seed.service.
Feb 12 20:28:56.875580 systemd[1]: Finished systemd-sysctl.service.
Feb 12 20:28:56.888875 systemd[1]: Reached target first-boot-complete.target.
Feb 12 20:28:56.904787 systemd[1]: Finished systemd-journal-flush.service.
Feb 12 20:28:56.911000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:28:56.915582 systemd[1]: Finished systemd-sysusers.service.
Feb 12 20:28:56.922000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:28:57.511833 systemd[1]: Finished systemd-hwdb-update.service.
Feb 12 20:28:57.518000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:28:57.519000 audit: BPF prog-id=18 op=LOAD
Feb 12 20:28:57.519000 audit: BPF prog-id=19 op=LOAD
Feb 12 20:28:57.519000 audit: BPF prog-id=7 op=UNLOAD
Feb 12 20:28:57.519000 audit: BPF prog-id=8 op=UNLOAD
Feb 12 20:28:57.522061 systemd[1]: Starting systemd-udevd.service...
Feb 12 20:28:57.545956 systemd-udevd[1006]: Using default interface naming scheme 'v252'.
Feb 12 20:28:57.600468 systemd[1]: Started systemd-udevd.service.
Feb 12 20:28:57.607000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:28:57.610000 audit: BPF prog-id=20 op=LOAD
Feb 12 20:28:57.613129 systemd[1]: Starting systemd-networkd.service...
Feb 12 20:28:57.625000 audit: BPF prog-id=21 op=LOAD
Feb 12 20:28:57.625000 audit: BPF prog-id=22 op=LOAD
Feb 12 20:28:57.625000 audit: BPF prog-id=23 op=LOAD
Feb 12 20:28:57.628534 systemd[1]: Starting systemd-userdbd.service...
Feb 12 20:28:57.671676 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped.
Feb 12 20:28:57.699590 systemd[1]: Started systemd-userdbd.service.
Feb 12 20:28:57.706000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:28:57.802737 kernel: BTRFS info: devid 1 device path /dev/disk/by-label/OEM changed to /dev/sda6 scanned by (udev-worker) (1010)
Feb 12 20:28:57.860998 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Feb 12 20:28:57.863826 systemd-networkd[1019]: lo: Link UP
Feb 12 20:28:57.863840 systemd-networkd[1019]: lo: Gained carrier
Feb 12 20:28:57.864598 systemd-networkd[1019]: Enumeration completed
Feb 12 20:28:57.864855 systemd[1]: Started systemd-networkd.service.
Feb 12 20:28:57.865019 systemd-networkd[1019]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 12 20:28:57.868211 systemd-networkd[1019]: eth0: Link UP
Feb 12 20:28:57.868224 systemd-networkd[1019]: eth0: Gained carrier
Feb 12 20:28:57.871000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:28:57.883143 systemd-networkd[1019]: eth0: DHCPv4 address 10.128.0.46/32, gateway 10.128.0.1 acquired from 169.254.169.254
Feb 12 20:28:57.887186 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Feb 12 20:28:57.919944 kernel: ACPI: button: Power Button [PWRF] Feb 12 20:28:57.906000 audit[1011]: AVC avc: denied { confidentiality } for pid=1011 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Feb 12 20:28:57.906000 audit[1011]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=55e1c5a88430 a1=32194 a2=7f671ff3abc5 a3=5 items=108 ppid=1006 pid=1011 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:28:57.906000 audit: CWD cwd="/" Feb 12 20:28:57.906000 audit: PATH item=0 name=(null) inode=1041 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:28:57.906000 audit: PATH item=1 name=(null) inode=13639 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:28:57.906000 audit: PATH item=2 name=(null) inode=13639 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:28:57.906000 audit: PATH item=3 name=(null) inode=13640 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:28:57.906000 audit: PATH item=4 name=(null) inode=13639 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:28:57.906000 audit: PATH item=5 name=(null) inode=13641 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 
cap_frootid=0 Feb 12 20:28:57.906000 audit: PATH item=6 name=(null) inode=13639 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:28:57.906000 audit: PATH item=7 name=(null) inode=13642 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:28:57.906000 audit: PATH item=8 name=(null) inode=13642 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:28:57.906000 audit: PATH item=9 name=(null) inode=13643 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:28:57.906000 audit: PATH item=10 name=(null) inode=13642 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:28:57.906000 audit: PATH item=11 name=(null) inode=13644 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:28:57.906000 audit: PATH item=12 name=(null) inode=13642 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:28:57.906000 audit: PATH item=13 name=(null) inode=13645 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:28:57.906000 audit: PATH item=14 name=(null) inode=13642 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:28:57.906000 audit: 
PATH item=15 name=(null) inode=13646 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:28:57.906000 audit: PATH item=16 name=(null) inode=13642 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:28:57.906000 audit: PATH item=17 name=(null) inode=13647 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:28:57.906000 audit: PATH item=18 name=(null) inode=13639 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:28:57.906000 audit: PATH item=19 name=(null) inode=13648 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:28:57.906000 audit: PATH item=20 name=(null) inode=13648 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:28:57.906000 audit: PATH item=21 name=(null) inode=13649 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:28:57.906000 audit: PATH item=22 name=(null) inode=13648 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:28:57.906000 audit: PATH item=23 name=(null) inode=13650 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:28:57.906000 audit: PATH item=24 name=(null) inode=13648 
dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:28:57.906000 audit: PATH item=25 name=(null) inode=13651 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:28:57.906000 audit: PATH item=26 name=(null) inode=13648 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:28:57.906000 audit: PATH item=27 name=(null) inode=13652 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:28:57.906000 audit: PATH item=28 name=(null) inode=13648 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:28:57.906000 audit: PATH item=29 name=(null) inode=13653 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:28:57.906000 audit: PATH item=30 name=(null) inode=13639 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:28:57.906000 audit: PATH item=31 name=(null) inode=13654 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:28:57.906000 audit: PATH item=32 name=(null) inode=13654 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:28:57.906000 audit: PATH item=33 name=(null) inode=13655 dev=00:0b mode=0100640 ouid=0 ogid=0 
rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:28:57.906000 audit: PATH item=34 name=(null) inode=13654 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:28:57.906000 audit: PATH item=35 name=(null) inode=13656 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:28:57.906000 audit: PATH item=36 name=(null) inode=13654 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:28:57.906000 audit: PATH item=37 name=(null) inode=13657 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:28:57.906000 audit: PATH item=38 name=(null) inode=13654 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:28:57.906000 audit: PATH item=39 name=(null) inode=13658 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:28:57.906000 audit: PATH item=40 name=(null) inode=13654 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:28:57.906000 audit: PATH item=41 name=(null) inode=13659 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:28:57.906000 audit: PATH item=42 name=(null) inode=13639 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:28:57.906000 audit: PATH item=43 name=(null) inode=13660 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:28:57.906000 audit: PATH item=44 name=(null) inode=13660 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:28:57.906000 audit: PATH item=45 name=(null) inode=13661 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:28:57.906000 audit: PATH item=46 name=(null) inode=13660 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:28:57.906000 audit: PATH item=47 name=(null) inode=13662 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:28:57.906000 audit: PATH item=48 name=(null) inode=13660 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:28:57.906000 audit: PATH item=49 name=(null) inode=13663 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:28:57.906000 audit: PATH item=50 name=(null) inode=13660 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:28:57.906000 audit: PATH item=51 name=(null) inode=13664 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:28:57.906000 audit: PATH item=52 name=(null) inode=13660 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:28:57.906000 audit: PATH item=53 name=(null) inode=13665 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:28:57.906000 audit: PATH item=54 name=(null) inode=1041 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:28:57.906000 audit: PATH item=55 name=(null) inode=13666 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:28:57.906000 audit: PATH item=56 name=(null) inode=13666 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:28:57.906000 audit: PATH item=57 name=(null) inode=13667 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:28:57.906000 audit: PATH item=58 name=(null) inode=13666 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:28:57.906000 audit: PATH item=59 name=(null) inode=13668 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:28:57.906000 audit: PATH item=60 name=(null) inode=13666 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 
cap_fver=0 cap_frootid=0 Feb 12 20:28:57.906000 audit: PATH item=61 name=(null) inode=13669 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:28:57.906000 audit: PATH item=62 name=(null) inode=13669 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:28:57.906000 audit: PATH item=63 name=(null) inode=13670 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:28:57.906000 audit: PATH item=64 name=(null) inode=13669 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:28:57.906000 audit: PATH item=65 name=(null) inode=13671 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:28:57.906000 audit: PATH item=66 name=(null) inode=13669 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:28:57.906000 audit: PATH item=67 name=(null) inode=13672 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:28:57.906000 audit: PATH item=68 name=(null) inode=13669 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:28:57.906000 audit: PATH item=69 name=(null) inode=13673 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 
20:28:57.906000 audit: PATH item=70 name=(null) inode=13669 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:28:57.906000 audit: PATH item=71 name=(null) inode=13674 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:28:57.906000 audit: PATH item=72 name=(null) inode=13666 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:28:57.906000 audit: PATH item=73 name=(null) inode=13675 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:28:57.906000 audit: PATH item=74 name=(null) inode=13675 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:28:57.906000 audit: PATH item=75 name=(null) inode=13676 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:28:57.906000 audit: PATH item=76 name=(null) inode=13675 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:28:57.906000 audit: PATH item=77 name=(null) inode=13677 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:28:57.906000 audit: PATH item=78 name=(null) inode=13675 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:28:57.906000 audit: PATH item=79 
name=(null) inode=13678 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:28:57.906000 audit: PATH item=80 name=(null) inode=13675 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:28:57.906000 audit: PATH item=81 name=(null) inode=13679 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:28:57.906000 audit: PATH item=82 name=(null) inode=13675 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:28:57.906000 audit: PATH item=83 name=(null) inode=13680 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:28:57.906000 audit: PATH item=84 name=(null) inode=13666 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:28:57.906000 audit: PATH item=85 name=(null) inode=13681 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:28:57.906000 audit: PATH item=86 name=(null) inode=13681 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:28:57.906000 audit: PATH item=87 name=(null) inode=13682 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:28:57.906000 audit: PATH item=88 name=(null) inode=13681 dev=00:0b 
mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:28:57.906000 audit: PATH item=89 name=(null) inode=13683 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:28:57.906000 audit: PATH item=90 name=(null) inode=13681 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:28:57.906000 audit: PATH item=91 name=(null) inode=13684 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:28:57.906000 audit: PATH item=92 name=(null) inode=13681 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:28:57.906000 audit: PATH item=93 name=(null) inode=13685 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:28:57.906000 audit: PATH item=94 name=(null) inode=13681 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:28:57.906000 audit: PATH item=95 name=(null) inode=13686 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:28:57.906000 audit: PATH item=96 name=(null) inode=13666 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:28:57.906000 audit: PATH item=97 name=(null) inode=13687 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:28:57.906000 audit: PATH item=98 name=(null) inode=13687 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:28:57.906000 audit: PATH item=99 name=(null) inode=13688 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:28:57.906000 audit: PATH item=100 name=(null) inode=13687 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:28:57.906000 audit: PATH item=101 name=(null) inode=13689 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:28:57.906000 audit: PATH item=102 name=(null) inode=13687 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:28:57.906000 audit: PATH item=103 name=(null) inode=13690 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:28:57.906000 audit: PATH item=104 name=(null) inode=13687 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:28:57.906000 audit: PATH item=105 name=(null) inode=13691 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:28:57.906000 audit: PATH item=106 name=(null) inode=13687 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:28:57.906000 audit: PATH item=107 name=(null) inode=13692 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:28:57.906000 audit: PROCTITLE proctitle="(udev-worker)" Feb 12 20:28:57.942019 kernel: EDAC MC: Ver: 3.0.0 Feb 12 20:28:57.953937 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input3 Feb 12 20:28:57.974950 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr Feb 12 20:28:57.984956 kernel: ACPI: button: Sleep Button [SLPF] Feb 12 20:28:57.998968 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Feb 12 20:28:58.018980 kernel: mousedev: PS/2 mouse device common for all mice Feb 12 20:28:58.040501 systemd[1]: Finished systemd-udev-settle.service. Feb 12 20:28:58.047000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:28:58.050927 systemd[1]: Starting lvm2-activation-early.service... Feb 12 20:28:58.088753 lvm[1043]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 12 20:28:58.124698 systemd[1]: Finished lvm2-activation-early.service. Feb 12 20:28:58.131000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:28:58.133310 systemd[1]: Reached target cryptsetup.target. Feb 12 20:28:58.144191 systemd[1]: Starting lvm2-activation.service... Feb 12 20:28:58.151395 lvm[1044]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. 
Feb 12 20:28:58.183204 systemd[1]: Finished lvm2-activation.service. Feb 12 20:28:58.190000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:28:58.192434 systemd[1]: Reached target local-fs-pre.target. Feb 12 20:28:58.202205 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 12 20:28:58.202272 systemd[1]: Reached target local-fs.target. Feb 12 20:28:58.211145 systemd[1]: Reached target machines.target. Feb 12 20:28:58.221943 systemd[1]: Starting ldconfig.service... Feb 12 20:28:58.230660 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Feb 12 20:28:58.231346 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 12 20:28:58.233676 systemd[1]: Starting systemd-boot-update.service... Feb 12 20:28:58.244077 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Feb 12 20:28:58.256374 systemd[1]: Starting systemd-machine-id-commit.service... Feb 12 20:28:58.266821 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met. Feb 12 20:28:58.266944 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met. Feb 12 20:28:58.269140 systemd[1]: Starting systemd-tmpfiles-setup.service... Feb 12 20:28:58.270123 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1046 (bootctl) Feb 12 20:28:58.272846 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Feb 12 20:28:58.284483 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. 
Feb 12 20:28:58.293000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:28:58.315771 systemd-tmpfiles[1050]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Feb 12 20:28:58.326547 systemd-tmpfiles[1050]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 12 20:28:58.342281 systemd-tmpfiles[1050]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 12 20:28:58.450801 systemd-fsck[1055]: fsck.fat 4.2 (2021-01-31) Feb 12 20:28:58.450801 systemd-fsck[1055]: /dev/sda1: 789 files, 115339/258078 clusters Feb 12 20:28:58.453137 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Feb 12 20:28:58.463000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:28:58.466504 systemd[1]: Mounting boot.mount... Feb 12 20:28:58.508246 systemd[1]: Mounted boot.mount. Feb 12 20:28:58.565061 systemd[1]: Finished systemd-boot-update.service. Feb 12 20:28:58.573000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:28:58.818275 systemd[1]: Finished systemd-tmpfiles-setup.service. Feb 12 20:28:58.825000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 20:28:58.829182 systemd[1]: Starting audit-rules.service... Feb 12 20:28:58.837708 systemd[1]: Starting clean-ca-certificates.service... Feb 12 20:28:58.849106 systemd[1]: Starting oem-gce-enable-oslogin.service... Feb 12 20:28:58.860163 systemd[1]: Starting systemd-journal-catalog-update.service... Feb 12 20:28:58.868000 audit: BPF prog-id=24 op=LOAD Feb 12 20:28:58.872852 systemd[1]: Starting systemd-resolved.service... Feb 12 20:28:58.879000 audit: BPF prog-id=25 op=LOAD Feb 12 20:28:58.883698 systemd[1]: Starting systemd-timesyncd.service... Feb 12 20:28:58.893181 systemd[1]: Starting systemd-update-utmp.service... Feb 12 20:28:58.904441 systemd[1]: Finished clean-ca-certificates.service. Feb 12 20:28:58.912000 audit[1079]: SYSTEM_BOOT pid=1079 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Feb 12 20:28:58.913000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:28:58.918697 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 12 20:28:58.921997 systemd[1]: oem-gce-enable-oslogin.service: Deactivated successfully. Feb 12 20:28:58.922293 systemd[1]: Finished oem-gce-enable-oslogin.service. Feb 12 20:28:58.929000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=oem-gce-enable-oslogin comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 20:28:58.929000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=oem-gce-enable-oslogin comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:28:58.932128 systemd[1]: Finished systemd-update-utmp.service. Feb 12 20:28:58.939000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:28:58.954490 systemd[1]: Finished systemd-journal-catalog-update.service. Feb 12 20:28:58.963000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:28:59.054669 systemd[1]: Started systemd-timesyncd.service. Feb 12 20:28:59.057355 augenrules[1090]: No rules Feb 12 20:28:59.056815 systemd-timesyncd[1076]: Contacted time server 169.254.169.254:123 (169.254.169.254). Feb 12 20:28:59.058149 systemd-timesyncd[1076]: Initial clock synchronization to Mon 2024-02-12 20:28:59.140879 UTC. Feb 12 20:28:59.054000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Feb 12 20:28:59.054000 audit[1090]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffe320a3a80 a2=420 a3=0 items=0 ppid=1060 pid=1090 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:28:59.054000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Feb 12 20:28:59.059422 systemd-resolved[1074]: Positive Trust Anchors: Feb 12 20:28:59.059937 systemd-resolved[1074]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 12 20:28:59.060086 systemd-resolved[1074]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Feb 12 20:28:59.064702 systemd[1]: Finished audit-rules.service. Feb 12 20:28:59.073257 systemd[1]: Reached target time-set.target. Feb 12 20:28:59.096450 systemd-resolved[1074]: Defaulting to hostname 'linux'. Feb 12 20:28:59.100192 systemd[1]: Started systemd-resolved.service. Feb 12 20:28:59.109202 systemd[1]: Reached target network.target. Feb 12 20:28:59.118102 systemd[1]: Reached target nss-lookup.target. Feb 12 20:28:59.141085 systemd-networkd[1019]: eth0: Gained IPv6LL Feb 12 20:28:59.240081 ldconfig[1045]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 12 20:28:59.463327 systemd[1]: Finished ldconfig.service. Feb 12 20:28:59.473239 systemd[1]: Starting systemd-update-done.service... Feb 12 20:28:59.483214 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 12 20:28:59.484250 systemd[1]: Finished systemd-machine-id-commit.service. Feb 12 20:28:59.493530 systemd[1]: Finished systemd-update-done.service. Feb 12 20:28:59.502343 systemd[1]: Reached target sysinit.target. Feb 12 20:28:59.511245 systemd[1]: Started motdgen.path. Feb 12 20:28:59.518207 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Feb 12 20:28:59.528377 systemd[1]: Started logrotate.timer. Feb 12 20:28:59.536340 systemd[1]: Started mdadm.timer. Feb 12 20:28:59.543330 systemd[1]: Started systemd-tmpfiles-clean.timer. 
Feb 12 20:28:59.552120 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 12 20:28:59.552188 systemd[1]: Reached target paths.target. Feb 12 20:28:59.559086 systemd[1]: Reached target timers.target. Feb 12 20:28:59.567070 systemd[1]: Listening on dbus.socket. Feb 12 20:28:59.575521 systemd[1]: Starting docker.socket... Feb 12 20:28:59.588767 systemd[1]: Listening on sshd.socket. Feb 12 20:28:59.596246 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 12 20:28:59.597053 systemd[1]: Listening on docker.socket. Feb 12 20:28:59.604313 systemd[1]: Reached target sockets.target. Feb 12 20:28:59.613103 systemd[1]: Reached target basic.target. Feb 12 20:28:59.620167 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 12 20:28:59.620215 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 12 20:28:59.621876 systemd[1]: Starting containerd.service... Feb 12 20:28:59.630653 systemd[1]: Starting coreos-metadata-sshkeys@core.service... Feb 12 20:28:59.641007 systemd[1]: Starting dbus.service... Feb 12 20:28:59.649083 systemd[1]: Starting enable-oem-cloudinit.service... Feb 12 20:28:59.661264 systemd[1]: Starting extend-filesystems.service... Feb 12 20:28:59.664387 jq[1102]: false Feb 12 20:28:59.668093 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Feb 12 20:28:59.670020 systemd[1]: Starting motdgen.service... Feb 12 20:28:59.679158 systemd[1]: Starting oem-gce.service... Feb 12 20:28:59.688821 systemd[1]: Starting prepare-cni-plugins.service... Feb 12 20:28:59.697119 systemd[1]: Starting prepare-critools.service... 
Feb 12 20:28:59.704928 systemd[1]: Starting prepare-helm.service... Feb 12 20:28:59.714793 systemd[1]: Starting ssh-key-proc-cmdline.service... Feb 12 20:28:59.724239 systemd[1]: Starting sshd-keygen.service... Feb 12 20:28:59.726157 extend-filesystems[1103]: Found sda Feb 12 20:28:59.742179 extend-filesystems[1103]: Found sda1 Feb 12 20:28:59.742179 extend-filesystems[1103]: Found sda2 Feb 12 20:28:59.742179 extend-filesystems[1103]: Found sda3 Feb 12 20:28:59.742179 extend-filesystems[1103]: Found usr Feb 12 20:28:59.742179 extend-filesystems[1103]: Found sda4 Feb 12 20:28:59.742179 extend-filesystems[1103]: Found sda6 Feb 12 20:28:59.742179 extend-filesystems[1103]: Found sda7 Feb 12 20:28:59.742179 extend-filesystems[1103]: Found sda9 Feb 12 20:28:59.742179 extend-filesystems[1103]: Checking size of /dev/sda9 Feb 12 20:28:59.736368 systemd[1]: Starting systemd-logind.service... Feb 12 20:28:59.749786 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 12 20:28:59.749902 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionSecurity=!tpm2). Feb 12 20:28:59.837304 jq[1129]: true Feb 12 20:28:59.750730 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 12 20:28:59.753191 systemd[1]: Starting update-engine.service... Feb 12 20:28:59.770710 systemd[1]: Starting update-ssh-keys-after-ignition.service... Feb 12 20:28:59.777049 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 12 20:28:59.777442 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Feb 12 20:28:59.839345 tar[1132]: ./ Feb 12 20:28:59.839345 tar[1132]: ./loopback Feb 12 20:28:59.778020 systemd[1]: motdgen.service: Deactivated successfully. 
Feb 12 20:28:59.778252 systemd[1]: Finished motdgen.service. Feb 12 20:28:59.841331 mkfs.ext4[1139]: mke2fs 1.46.5 (30-Dec-2021) Feb 12 20:28:59.841331 mkfs.ext4[1139]: Discarding device blocks: done Feb 12 20:28:59.841331 mkfs.ext4[1139]: Creating filesystem with 262144 4k blocks and 65536 inodes Feb 12 20:28:59.841331 mkfs.ext4[1139]: Filesystem UUID: 3bfba20d-4304-47a8-afb7-cf96fcf99b7e Feb 12 20:28:59.841331 mkfs.ext4[1139]: Superblock backups stored on blocks: Feb 12 20:28:59.841331 mkfs.ext4[1139]: 32768, 98304, 163840, 229376 Feb 12 20:28:59.841331 mkfs.ext4[1139]: Allocating group tables: done Feb 12 20:28:59.841331 mkfs.ext4[1139]: Writing inode tables: done Feb 12 20:28:59.841331 mkfs.ext4[1139]: Creating journal (8192 blocks): done Feb 12 20:28:59.796959 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 12 20:28:59.797474 systemd[1]: Finished ssh-key-proc-cmdline.service. Feb 12 20:28:59.845257 mkfs.ext4[1139]: Writing superblocks and filesystem accounting information: done Feb 12 20:28:59.846652 dbus-daemon[1101]: [system] SELinux support is enabled Feb 12 20:28:59.847679 systemd[1]: Started dbus.service. 
Feb 12 20:28:59.852197 extend-filesystems[1103]: Resized partition /dev/sda9 Feb 12 20:28:59.894781 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 2538491 blocks Feb 12 20:28:59.860147 dbus-daemon[1101]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1019 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Feb 12 20:28:59.858770 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 12 20:28:59.895403 jq[1137]: true Feb 12 20:28:59.895555 extend-filesystems[1145]: resize2fs 1.46.5 (30-Dec-2021) Feb 12 20:28:59.887047 dbus-daemon[1101]: [system] Successfully activated service 'org.freedesktop.systemd1' Feb 12 20:28:59.858821 systemd[1]: Reached target system-config.target. Feb 12 20:28:59.865430 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 12 20:28:59.865465 systemd[1]: Reached target user-config.target. Feb 12 20:28:59.894588 systemd[1]: Starting systemd-hostnamed.service... Feb 12 20:28:59.923248 umount[1153]: umount: /var/lib/flatcar-oem-gce.img: not mounted. Feb 12 20:28:59.931501 tar[1133]: crictl Feb 12 20:28:59.939053 kernel: EXT4-fs (sda9): resized filesystem to 2538491 Feb 12 20:28:59.964949 kernel: loop0: detected capacity change from 0 to 2097152 Feb 12 20:28:59.969041 extend-filesystems[1145]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Feb 12 20:28:59.969041 extend-filesystems[1145]: old_desc_blocks = 1, new_desc_blocks = 2 Feb 12 20:28:59.969041 extend-filesystems[1145]: The filesystem on /dev/sda9 is now 2538491 (4k) blocks long. Feb 12 20:29:00.072166 kernel: EXT4-fs (loop0): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. 
Feb 12 20:29:00.072310 update_engine[1127]: I0212 20:28:59.981313 1127 main.cc:92] Flatcar Update Engine starting Feb 12 20:29:00.072310 update_engine[1127]: I0212 20:29:00.006791 1127 update_check_scheduler.cc:74] Next update check in 3m59s Feb 12 20:29:00.072763 tar[1134]: linux-amd64/helm Feb 12 20:29:00.073102 bash[1170]: Updated "/home/core/.ssh/authorized_keys" Feb 12 20:28:59.970710 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 12 20:29:00.073463 extend-filesystems[1103]: Resized filesystem in /dev/sda9 Feb 12 20:28:59.971029 systemd[1]: Finished extend-filesystems.service. Feb 12 20:29:00.006537 systemd[1]: Started update-engine.service. Feb 12 20:29:00.034118 systemd[1]: Started locksmithd.service. Feb 12 20:29:00.055289 systemd[1]: Finished update-ssh-keys-after-ignition.service. Feb 12 20:29:00.207308 tar[1132]: ./bandwidth Feb 12 20:29:00.209470 env[1138]: time="2024-02-12T20:29:00.209403395Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Feb 12 20:29:00.212698 systemd-logind[1122]: Watching system buttons on /dev/input/event1 (Power Button) Feb 12 20:29:00.219308 systemd-logind[1122]: Watching system buttons on /dev/input/event2 (Sleep Button) Feb 12 20:29:00.219603 systemd-logind[1122]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Feb 12 20:29:00.224667 systemd-logind[1122]: New seat seat0. Feb 12 20:29:00.240038 systemd[1]: Started systemd-logind.service. Feb 12 20:29:00.270157 tar[1132]: ./ptp Feb 12 20:29:00.365907 env[1138]: time="2024-02-12T20:29:00.365772422Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 12 20:29:00.368242 env[1138]: time="2024-02-12T20:29:00.368191205Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
type=io.containerd.snapshotter.v1 Feb 12 20:29:00.375841 env[1138]: time="2024-02-12T20:29:00.375769988Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.148-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 12 20:29:00.375841 env[1138]: time="2024-02-12T20:29:00.375835564Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 12 20:29:00.376249 env[1138]: time="2024-02-12T20:29:00.376209003Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 12 20:29:00.376338 env[1138]: time="2024-02-12T20:29:00.376250504Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 12 20:29:00.376338 env[1138]: time="2024-02-12T20:29:00.376274786Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Feb 12 20:29:00.376338 env[1138]: time="2024-02-12T20:29:00.376293628Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 12 20:29:00.376477 env[1138]: time="2024-02-12T20:29:00.376426016Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 12 20:29:00.376819 env[1138]: time="2024-02-12T20:29:00.376783578Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 12 20:29:00.388500 env[1138]: time="2024-02-12T20:29:00.388427857Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 12 20:29:00.388500 env[1138]: time="2024-02-12T20:29:00.388493781Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 12 20:29:00.388752 env[1138]: time="2024-02-12T20:29:00.388640925Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Feb 12 20:29:00.388752 env[1138]: time="2024-02-12T20:29:00.388663216Z" level=info msg="metadata content store policy set" policy=shared Feb 12 20:29:00.396438 env[1138]: time="2024-02-12T20:29:00.396384358Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 12 20:29:00.396654 env[1138]: time="2024-02-12T20:29:00.396628066Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 12 20:29:00.396773 env[1138]: time="2024-02-12T20:29:00.396751204Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 12 20:29:00.396916 env[1138]: time="2024-02-12T20:29:00.396894055Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 12 20:29:00.397046 env[1138]: time="2024-02-12T20:29:00.397023972Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 12 20:29:00.397164 env[1138]: time="2024-02-12T20:29:00.397145507Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 12 20:29:00.397271 env[1138]: time="2024-02-12T20:29:00.397251561Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." 
type=io.containerd.service.v1 Feb 12 20:29:00.397373 env[1138]: time="2024-02-12T20:29:00.397353534Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 12 20:29:00.397467 env[1138]: time="2024-02-12T20:29:00.397448318Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Feb 12 20:29:00.397561 env[1138]: time="2024-02-12T20:29:00.397542179Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 12 20:29:00.397663 env[1138]: time="2024-02-12T20:29:00.397644458Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 12 20:29:00.397770 env[1138]: time="2024-02-12T20:29:00.397742530Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 12 20:29:00.398063 env[1138]: time="2024-02-12T20:29:00.398035847Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 12 20:29:00.398308 env[1138]: time="2024-02-12T20:29:00.398286134Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 12 20:29:00.398839 env[1138]: time="2024-02-12T20:29:00.398808878Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 12 20:29:00.399021 env[1138]: time="2024-02-12T20:29:00.398996115Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 12 20:29:00.399132 env[1138]: time="2024-02-12T20:29:00.399112111Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 12 20:29:00.399300 env[1138]: time="2024-02-12T20:29:00.399277944Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." 
type=io.containerd.grpc.v1 Feb 12 20:29:00.399412 env[1138]: time="2024-02-12T20:29:00.399391665Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 12 20:29:00.399573 env[1138]: time="2024-02-12T20:29:00.399553853Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 12 20:29:00.399677 env[1138]: time="2024-02-12T20:29:00.399658022Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 12 20:29:00.399786 env[1138]: time="2024-02-12T20:29:00.399756984Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 12 20:29:00.399881 env[1138]: time="2024-02-12T20:29:00.399863801Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 12 20:29:00.400006 env[1138]: time="2024-02-12T20:29:00.399986334Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 12 20:29:00.400107 env[1138]: time="2024-02-12T20:29:00.400089842Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 12 20:29:00.400217 env[1138]: time="2024-02-12T20:29:00.400199698Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 12 20:29:00.400531 env[1138]: time="2024-02-12T20:29:00.400505313Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 12 20:29:00.400667 env[1138]: time="2024-02-12T20:29:00.400646207Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 12 20:29:00.400766 env[1138]: time="2024-02-12T20:29:00.400747622Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." 
type=io.containerd.grpc.v1 Feb 12 20:29:00.400871 env[1138]: time="2024-02-12T20:29:00.400852289Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 12 20:29:00.401008 env[1138]: time="2024-02-12T20:29:00.400984304Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Feb 12 20:29:00.401999 env[1138]: time="2024-02-12T20:29:00.401966382Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 12 20:29:00.402147 env[1138]: time="2024-02-12T20:29:00.402122039Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Feb 12 20:29:00.405051 env[1138]: time="2024-02-12T20:29:00.405017695Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Feb 12 20:29:00.407359 env[1138]: time="2024-02-12T20:29:00.407260863Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin 
NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 12 20:29:00.411182 env[1138]: time="2024-02-12T20:29:00.411149694Z" level=info msg="Connect containerd service" Feb 12 20:29:00.411388 env[1138]: time="2024-02-12T20:29:00.411363281Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 12 20:29:00.427380 env[1138]: time="2024-02-12T20:29:00.427310141Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 12 20:29:00.427834 env[1138]: time="2024-02-12T20:29:00.427766122Z" level=info msg="Start subscribing containerd event" Feb 12 20:29:00.428060 env[1138]: time="2024-02-12T20:29:00.428035522Z" level=info msg="Start recovering state" Feb 12 20:29:00.428271 env[1138]: 
time="2024-02-12T20:29:00.428254235Z" level=info msg="Start event monitor" Feb 12 20:29:00.428412 env[1138]: time="2024-02-12T20:29:00.428388315Z" level=info msg="Start snapshots syncer" Feb 12 20:29:00.428555 env[1138]: time="2024-02-12T20:29:00.428535246Z" level=info msg="Start cni network conf syncer for default" Feb 12 20:29:00.428670 env[1138]: time="2024-02-12T20:29:00.428651976Z" level=info msg="Start streaming server" Feb 12 20:29:00.429487 env[1138]: time="2024-02-12T20:29:00.429455755Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 12 20:29:00.430504 env[1138]: time="2024-02-12T20:29:00.430463921Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 12 20:29:00.430783 env[1138]: time="2024-02-12T20:29:00.430760777Z" level=info msg="containerd successfully booted in 0.264940s" Feb 12 20:29:00.430868 systemd[1]: Started containerd.service. Feb 12 20:29:00.440697 coreos-metadata[1100]: Feb 12 20:29:00.440 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/sshKeys: Attempt #1 Feb 12 20:29:00.444034 dbus-daemon[1101]: [system] Successfully activated service 'org.freedesktop.hostname1' Feb 12 20:29:00.444267 systemd[1]: Started systemd-hostnamed.service. 
Feb 12 20:29:00.444672 dbus-daemon[1101]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1154 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Feb 12 20:29:00.450974 coreos-metadata[1100]: Feb 12 20:29:00.450 INFO Fetch failed with 404: resource not found Feb 12 20:29:00.451314 coreos-metadata[1100]: Feb 12 20:29:00.451 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/ssh-keys: Attempt #1 Feb 12 20:29:00.452634 coreos-metadata[1100]: Feb 12 20:29:00.452 INFO Fetch successful Feb 12 20:29:00.452894 coreos-metadata[1100]: Feb 12 20:29:00.452 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/block-project-ssh-keys: Attempt #1 Feb 12 20:29:00.458406 systemd[1]: Starting polkit.service... Feb 12 20:29:00.461627 coreos-metadata[1100]: Feb 12 20:29:00.461 INFO Fetch failed with 404: resource not found Feb 12 20:29:00.462086 coreos-metadata[1100]: Feb 12 20:29:00.461 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/sshKeys: Attempt #1 Feb 12 20:29:00.469260 coreos-metadata[1100]: Feb 12 20:29:00.469 INFO Fetch failed with 404: resource not found Feb 12 20:29:00.469564 coreos-metadata[1100]: Feb 12 20:29:00.469 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/ssh-keys: Attempt #1 Feb 12 20:29:00.472667 coreos-metadata[1100]: Feb 12 20:29:00.472 INFO Fetch successful Feb 12 20:29:00.478905 unknown[1100]: wrote ssh authorized keys file for user: core Feb 12 20:29:00.524866 update-ssh-keys[1182]: Updated "/home/core/.ssh/authorized_keys" Feb 12 20:29:00.525859 systemd[1]: Finished coreos-metadata-sshkeys@core.service. 
Feb 12 20:29:00.562474 tar[1132]: ./vlan Feb 12 20:29:00.565120 polkitd[1181]: Started polkitd version 121 Feb 12 20:29:00.590248 polkitd[1181]: Loading rules from directory /etc/polkit-1/rules.d Feb 12 20:29:00.590558 polkitd[1181]: Loading rules from directory /usr/share/polkit-1/rules.d Feb 12 20:29:00.594172 polkitd[1181]: Finished loading, compiling and executing 2 rules Feb 12 20:29:00.595078 dbus-daemon[1101]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Feb 12 20:29:00.595567 polkitd[1181]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Feb 12 20:29:00.595327 systemd[1]: Started polkit.service. Feb 12 20:29:00.633636 systemd-hostnamed[1154]: Hostname set to (transient) Feb 12 20:29:00.637244 systemd-resolved[1074]: System hostname changed to 'ci-3510-3-2-f80330683fdd88fa6e06.c.flatcar-212911.internal'. Feb 12 20:29:00.738231 tar[1132]: ./host-device Feb 12 20:29:00.912501 tar[1132]: ./tuning Feb 12 20:29:01.075116 tar[1132]: ./vrf Feb 12 20:29:01.228783 tar[1132]: ./sbr Feb 12 20:29:01.381356 tar[1132]: ./tap Feb 12 20:29:01.564749 tar[1132]: ./dhcp Feb 12 20:29:01.836482 tar[1134]: linux-amd64/LICENSE Feb 12 20:29:01.837100 tar[1134]: linux-amd64/README.md Feb 12 20:29:01.857151 systemd[1]: Finished prepare-helm.service. Feb 12 20:29:01.940863 tar[1132]: ./static Feb 12 20:29:01.954432 systemd[1]: Finished prepare-critools.service. Feb 12 20:29:01.991285 tar[1132]: ./firewall Feb 12 20:29:02.058215 tar[1132]: ./macvlan Feb 12 20:29:02.137147 tar[1132]: ./dummy Feb 12 20:29:02.212158 tar[1132]: ./bridge Feb 12 20:29:02.305465 tar[1132]: ./ipvlan Feb 12 20:29:02.374968 tar[1132]: ./portmap Feb 12 20:29:02.431312 tar[1132]: ./host-local Feb 12 20:29:02.515459 systemd[1]: Finished prepare-cni-plugins.service. Feb 12 20:29:04.501351 sshd_keygen[1128]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 12 20:29:04.555493 systemd[1]: Finished sshd-keygen.service. 
Feb 12 20:29:04.559317 locksmithd[1173]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 12 20:29:04.566540 systemd[1]: Starting issuegen.service... Feb 12 20:29:04.578539 systemd[1]: issuegen.service: Deactivated successfully. Feb 12 20:29:04.578812 systemd[1]: Finished issuegen.service. Feb 12 20:29:04.588832 systemd[1]: Starting systemd-user-sessions.service... Feb 12 20:29:04.602815 systemd[1]: Finished systemd-user-sessions.service. Feb 12 20:29:04.613599 systemd[1]: Started getty@tty1.service. Feb 12 20:29:04.623469 systemd[1]: Started serial-getty@ttyS0.service. Feb 12 20:29:04.632526 systemd[1]: Reached target getty.target. Feb 12 20:29:06.323336 systemd[1]: var-lib-flatcar\x2doem\x2dgce.mount: Deactivated successfully. Feb 12 20:29:08.355963 kernel: loop0: detected capacity change from 0 to 2097152 Feb 12 20:29:08.391000 systemd-nspawn[1213]: Spawning container oem-gce on /var/lib/flatcar-oem-gce.img. Feb 12 20:29:08.391000 systemd-nspawn[1213]: Press ^] three times within 1s to kill container. Feb 12 20:29:08.407964 kernel: EXT4-fs (loop0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Feb 12 20:29:08.417786 systemd[1]: Created slice system-sshd.slice. Feb 12 20:29:08.428340 systemd[1]: Started sshd@0-10.128.0.46:22-147.75.109.163:57088.service. Feb 12 20:29:08.509687 systemd[1]: Started oem-gce.service. Feb 12 20:29:08.517593 systemd[1]: Reached target multi-user.target. Feb 12 20:29:08.528232 systemd[1]: Starting systemd-update-utmp-runlevel.service... Feb 12 20:29:08.541421 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Feb 12 20:29:08.541660 systemd[1]: Finished systemd-update-utmp-runlevel.service. Feb 12 20:29:08.552269 systemd[1]: Startup finished in 1.064s (kernel) + 9.884s (initrd) + 16.841s (userspace) = 27.790s. 
Feb 12 20:29:08.619788 systemd-nspawn[1213]: + '[' -e /etc/default/instance_configs.cfg.template ']' Feb 12 20:29:08.619788 systemd-nspawn[1213]: + echo -e '[InstanceSetup]\nset_host_keys = false' Feb 12 20:29:08.620074 systemd-nspawn[1213]: + /usr/bin/google_instance_setup Feb 12 20:29:08.754578 sshd[1218]: Accepted publickey for core from 147.75.109.163 port 57088 ssh2: RSA SHA256:xlSJPj37rpshD+I6cqqeKxL8SH/zhZoYeHdGs1pWqxc Feb 12 20:29:08.759405 sshd[1218]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 20:29:08.778832 systemd[1]: Created slice user-500.slice. Feb 12 20:29:08.780519 systemd[1]: Starting user-runtime-dir@500.service... Feb 12 20:29:08.788058 systemd-logind[1122]: New session 1 of user core. Feb 12 20:29:08.798113 systemd[1]: Finished user-runtime-dir@500.service. Feb 12 20:29:08.800968 systemd[1]: Starting user@500.service... Feb 12 20:29:08.818405 (systemd)[1224]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 12 20:29:08.955471 systemd[1224]: Queued start job for default target default.target. Feb 12 20:29:08.957274 systemd[1224]: Reached target paths.target. Feb 12 20:29:08.957321 systemd[1224]: Reached target sockets.target. Feb 12 20:29:08.957344 systemd[1224]: Reached target timers.target. Feb 12 20:29:08.957365 systemd[1224]: Reached target basic.target. Feb 12 20:29:08.957524 systemd[1]: Started user@500.service. Feb 12 20:29:08.959058 systemd[1]: Started session-1.scope. Feb 12 20:29:08.963508 systemd[1224]: Reached target default.target. Feb 12 20:29:08.963844 systemd[1224]: Startup finished in 134ms. Feb 12 20:29:09.189244 systemd[1]: Started sshd@1-10.128.0.46:22-147.75.109.163:57090.service. Feb 12 20:29:09.450602 instance-setup[1222]: INFO Running google_set_multiqueue. Feb 12 20:29:09.467861 instance-setup[1222]: INFO Set channels for eth0 to 2. Feb 12 20:29:09.471854 instance-setup[1222]: INFO Setting /proc/irq/31/smp_affinity_list to 0 for device virtio1. 
Feb 12 20:29:09.473405 instance-setup[1222]: INFO /proc/irq/31/smp_affinity_list: real affinity 0 Feb 12 20:29:09.473968 instance-setup[1222]: INFO Setting /proc/irq/32/smp_affinity_list to 0 for device virtio1. Feb 12 20:29:09.476136 instance-setup[1222]: INFO /proc/irq/32/smp_affinity_list: real affinity 0 Feb 12 20:29:09.476556 instance-setup[1222]: INFO Setting /proc/irq/33/smp_affinity_list to 1 for device virtio1. Feb 12 20:29:09.477736 instance-setup[1222]: INFO /proc/irq/33/smp_affinity_list: real affinity 1 Feb 12 20:29:09.478181 instance-setup[1222]: INFO Setting /proc/irq/34/smp_affinity_list to 1 for device virtio1. Feb 12 20:29:09.479754 instance-setup[1222]: INFO /proc/irq/34/smp_affinity_list: real affinity 1 Feb 12 20:29:09.490298 sshd[1233]: Accepted publickey for core from 147.75.109.163 port 57090 ssh2: RSA SHA256:xlSJPj37rpshD+I6cqqeKxL8SH/zhZoYeHdGs1pWqxc Feb 12 20:29:09.491310 sshd[1233]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 20:29:09.497256 instance-setup[1222]: INFO Queue 0 XPS=1 for /sys/class/net/eth0/queues/tx-0/xps_cpus Feb 12 20:29:09.497673 instance-setup[1222]: INFO Queue 1 XPS=2 for /sys/class/net/eth0/queues/tx-1/xps_cpus Feb 12 20:29:09.499902 systemd[1]: Started session-2.scope. Feb 12 20:29:09.502355 systemd-logind[1122]: New session 2 of user core. Feb 12 20:29:09.549554 systemd-nspawn[1213]: + /usr/bin/google_metadata_script_runner --script-type startup Feb 12 20:29:09.715795 sshd[1233]: pam_unix(sshd:session): session closed for user core Feb 12 20:29:09.721464 systemd[1]: sshd@1-10.128.0.46:22-147.75.109.163:57090.service: Deactivated successfully. Feb 12 20:29:09.722663 systemd[1]: session-2.scope: Deactivated successfully. Feb 12 20:29:09.725254 systemd-logind[1122]: Session 2 logged out. Waiting for processes to exit. Feb 12 20:29:09.726866 systemd-logind[1122]: Removed session 2. Feb 12 20:29:09.760645 systemd[1]: Started sshd@2-10.128.0.46:22-147.75.109.163:57102.service. 
Feb 12 20:29:09.914354 startup-script[1266]: INFO Starting startup scripts. Feb 12 20:29:09.927731 startup-script[1266]: INFO No startup scripts found in metadata. Feb 12 20:29:09.927927 startup-script[1266]: INFO Finished running startup scripts. Feb 12 20:29:09.968776 systemd-nspawn[1213]: + trap 'stopping=1 ; kill "${daemon_pids[@]}" || :' SIGTERM Feb 12 20:29:09.968776 systemd-nspawn[1213]: + daemon_pids=() Feb 12 20:29:09.969076 systemd-nspawn[1213]: + for d in accounts clock_skew network Feb 12 20:29:09.969076 systemd-nspawn[1213]: + daemon_pids+=($!) Feb 12 20:29:09.969076 systemd-nspawn[1213]: + for d in accounts clock_skew network Feb 12 20:29:09.969076 systemd-nspawn[1213]: + daemon_pids+=($!) Feb 12 20:29:09.969076 systemd-nspawn[1213]: + for d in accounts clock_skew network Feb 12 20:29:09.969076 systemd-nspawn[1213]: + daemon_pids+=($!) Feb 12 20:29:09.969392 systemd-nspawn[1213]: + NOTIFY_SOCKET=/run/systemd/notify Feb 12 20:29:09.969392 systemd-nspawn[1213]: + /usr/bin/systemd-notify --ready Feb 12 20:29:09.970151 systemd-nspawn[1213]: + /usr/bin/google_clock_skew_daemon Feb 12 20:29:09.970151 systemd-nspawn[1213]: + /usr/bin/google_network_daemon Feb 12 20:29:09.970404 systemd-nspawn[1213]: + /usr/bin/google_accounts_daemon Feb 12 20:29:10.041258 systemd-nspawn[1213]: + wait -n 36 37 38 Feb 12 20:29:10.071344 sshd[1270]: Accepted publickey for core from 147.75.109.163 port 57102 ssh2: RSA SHA256:xlSJPj37rpshD+I6cqqeKxL8SH/zhZoYeHdGs1pWqxc Feb 12 20:29:10.073049 sshd[1270]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 20:29:10.081964 systemd[1]: Started session-3.scope. Feb 12 20:29:10.083359 systemd-logind[1122]: New session 3 of user core. Feb 12 20:29:10.285020 sshd[1270]: pam_unix(sshd:session): session closed for user core Feb 12 20:29:10.290360 systemd[1]: sshd@2-10.128.0.46:22-147.75.109.163:57102.service: Deactivated successfully. Feb 12 20:29:10.291619 systemd[1]: session-3.scope: Deactivated successfully. 
Feb 12 20:29:10.294409 systemd-logind[1122]: Session 3 logged out. Waiting for processes to exit. Feb 12 20:29:10.296090 systemd-logind[1122]: Removed session 3. Feb 12 20:29:10.328884 systemd[1]: Started sshd@3-10.128.0.46:22-147.75.109.163:57108.service. Feb 12 20:29:10.637663 sshd[1282]: Accepted publickey for core from 147.75.109.163 port 57108 ssh2: RSA SHA256:xlSJPj37rpshD+I6cqqeKxL8SH/zhZoYeHdGs1pWqxc Feb 12 20:29:10.638649 sshd[1282]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 20:29:10.646943 systemd[1]: Started session-4.scope. Feb 12 20:29:10.649202 systemd-logind[1122]: New session 4 of user core. Feb 12 20:29:10.653300 google-networking[1276]: INFO Starting Google Networking daemon. Feb 12 20:29:10.801149 google-clock-skew[1275]: INFO Starting Google Clock Skew daemon. Feb 12 20:29:10.817822 google-clock-skew[1275]: INFO Clock drift token has changed: 0. Feb 12 20:29:10.824124 systemd-nspawn[1213]: hwclock: Cannot access the Hardware Clock via any known method. Feb 12 20:29:10.824836 systemd-nspawn[1213]: hwclock: Use the --verbose option to see the details of our search for an access method. Feb 12 20:29:10.825882 google-clock-skew[1275]: WARNING Failed to sync system time with hardware clock. Feb 12 20:29:10.857442 sshd[1282]: pam_unix(sshd:session): session closed for user core Feb 12 20:29:10.859487 groupadd[1294]: group added to /etc/group: name=google-sudoers, GID=1000 Feb 12 20:29:10.861891 systemd[1]: sshd@3-10.128.0.46:22-147.75.109.163:57108.service: Deactivated successfully. Feb 12 20:29:10.862833 systemd[1]: session-4.scope: Deactivated successfully. Feb 12 20:29:10.864621 systemd-logind[1122]: Session 4 logged out. Waiting for processes to exit. Feb 12 20:29:10.866324 systemd-logind[1122]: Removed session 4. 
Feb 12 20:29:10.868288 groupadd[1294]: group added to /etc/gshadow: name=google-sudoers Feb 12 20:29:10.874450 groupadd[1294]: new group: name=google-sudoers, GID=1000 Feb 12 20:29:10.892667 google-accounts[1274]: INFO Starting Google Accounts daemon. Feb 12 20:29:10.903831 systemd[1]: Started sshd@4-10.128.0.46:22-147.75.109.163:57116.service. Feb 12 20:29:10.926258 google-accounts[1274]: WARNING OS Login not installed. Feb 12 20:29:10.927645 google-accounts[1274]: INFO Creating a new user account for 0. Feb 12 20:29:10.934038 systemd-nspawn[1213]: useradd: invalid user name '0': use --badname to ignore Feb 12 20:29:10.934843 google-accounts[1274]: WARNING Could not create user 0. Command '['useradd', '-m', '-s', '/bin/bash', '-p', '*', '0']' returned non-zero exit status 3.. Feb 12 20:29:11.194494 sshd[1303]: Accepted publickey for core from 147.75.109.163 port 57116 ssh2: RSA SHA256:xlSJPj37rpshD+I6cqqeKxL8SH/zhZoYeHdGs1pWqxc Feb 12 20:29:11.196000 sshd[1303]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 20:29:11.202666 systemd[1]: Started session-5.scope. Feb 12 20:29:11.203503 systemd-logind[1122]: New session 5 of user core. Feb 12 20:29:11.395766 sudo[1309]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 12 20:29:11.396204 sudo[1309]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Feb 12 20:29:12.312038 systemd[1]: Starting systemd-networkd-wait-online.service... Feb 12 20:29:12.320999 systemd[1]: Finished systemd-networkd-wait-online.service. Feb 12 20:29:12.321600 systemd[1]: Reached target network-online.target. Feb 12 20:29:12.323783 systemd[1]: Starting docker.service... 
Feb 12 20:29:12.377758 env[1325]: time="2024-02-12T20:29:12.377666100Z" level=info msg="Starting up" Feb 12 20:29:12.379391 env[1325]: time="2024-02-12T20:29:12.379333949Z" level=info msg="parsed scheme: \"unix\"" module=grpc Feb 12 20:29:12.379391 env[1325]: time="2024-02-12T20:29:12.379361618Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Feb 12 20:29:12.379575 env[1325]: time="2024-02-12T20:29:12.379406874Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Feb 12 20:29:12.379575 env[1325]: time="2024-02-12T20:29:12.379424563Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Feb 12 20:29:12.382305 env[1325]: time="2024-02-12T20:29:12.382244864Z" level=info msg="parsed scheme: \"unix\"" module=grpc Feb 12 20:29:12.382305 env[1325]: time="2024-02-12T20:29:12.382280432Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Feb 12 20:29:12.382305 env[1325]: time="2024-02-12T20:29:12.382303335Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Feb 12 20:29:12.382538 env[1325]: time="2024-02-12T20:29:12.382317273Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Feb 12 20:29:12.391055 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport4012745813-merged.mount: Deactivated successfully. Feb 12 20:29:12.432424 env[1325]: time="2024-02-12T20:29:12.432374945Z" level=info msg="Loading containers: start." Feb 12 20:29:12.600943 kernel: Initializing XFRM netlink socket Feb 12 20:29:12.647951 env[1325]: time="2024-02-12T20:29:12.647891607Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. 
Daemon option --bip can be used to set a preferred IP address" Feb 12 20:29:12.744020 systemd-networkd[1019]: docker0: Link UP Feb 12 20:29:12.763662 env[1325]: time="2024-02-12T20:29:12.763604785Z" level=info msg="Loading containers: done." Feb 12 20:29:12.781714 env[1325]: time="2024-02-12T20:29:12.781617159Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 12 20:29:12.782043 env[1325]: time="2024-02-12T20:29:12.781909528Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Feb 12 20:29:12.782198 env[1325]: time="2024-02-12T20:29:12.782101350Z" level=info msg="Daemon has completed initialization" Feb 12 20:29:12.806492 systemd[1]: Started docker.service. Feb 12 20:29:12.819254 env[1325]: time="2024-02-12T20:29:12.818961431Z" level=info msg="API listen on /run/docker.sock" Feb 12 20:29:12.844970 systemd[1]: Reloading. Feb 12 20:29:12.970787 /usr/lib/systemd/system-generators/torcx-generator[1466]: time="2024-02-12T20:29:12Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 12 20:29:12.970836 /usr/lib/systemd/system-generators/torcx-generator[1466]: time="2024-02-12T20:29:12Z" level=info msg="torcx already run" Feb 12 20:29:13.058999 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 12 20:29:13.059029 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Feb 12 20:29:13.082655 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 12 20:29:13.227698 systemd[1]: Started kubelet.service. Feb 12 20:29:13.327147 kubelet[1506]: E0212 20:29:13.327061 1506 run.go:74] "command failed" err="failed to load kubelet config file, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory, path: /var/lib/kubelet/config.yaml" Feb 12 20:29:13.329813 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 12 20:29:13.329996 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 12 20:29:13.940526 env[1138]: time="2024-02-12T20:29:13.940450908Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.27.10\"" Feb 12 20:29:14.443183 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1834276648.mount: Deactivated successfully. 
Feb 12 20:29:16.426759 env[1138]: time="2024-02-12T20:29:16.426688831Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.27.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:29:16.430338 env[1138]: time="2024-02-12T20:29:16.430287311Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7968fc5c824ed95404f421a90882835f250220c0fd799b4fceef340dd5585ed5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:29:16.433176 env[1138]: time="2024-02-12T20:29:16.433101030Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.27.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:29:16.436122 env[1138]: time="2024-02-12T20:29:16.436062057Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:cfcebda74d6e665b68931d3589ee69fde81cd503ff3169888e4502af65579d98,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:29:16.437588 env[1138]: time="2024-02-12T20:29:16.437538629Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.27.10\" returns image reference \"sha256:7968fc5c824ed95404f421a90882835f250220c0fd799b4fceef340dd5585ed5\"" Feb 12 20:29:16.453170 env[1138]: time="2024-02-12T20:29:16.453103245Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.27.10\"" Feb 12 20:29:18.334441 env[1138]: time="2024-02-12T20:29:18.334360376Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.27.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:29:18.338096 env[1138]: time="2024-02-12T20:29:18.338041559Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c8134be729ba23c6e0c3e5dd52c393fc8d3cfc688bcec33540f64bb0137b67e0,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Feb 12 20:29:18.341262 env[1138]: time="2024-02-12T20:29:18.341192503Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.27.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:29:18.344246 env[1138]: time="2024-02-12T20:29:18.344195547Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:fa168ebca1f6dbfe86ef0a690e007531c1f53569274fc7dc2774fe228b6ce8c2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:29:18.345568 env[1138]: time="2024-02-12T20:29:18.345518302Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.27.10\" returns image reference \"sha256:c8134be729ba23c6e0c3e5dd52c393fc8d3cfc688bcec33540f64bb0137b67e0\"" Feb 12 20:29:18.361612 env[1138]: time="2024-02-12T20:29:18.361557984Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.27.10\"" Feb 12 20:29:19.792562 env[1138]: time="2024-02-12T20:29:19.792484140Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.27.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:29:19.795932 env[1138]: time="2024-02-12T20:29:19.795861997Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5eed9876e7181341b7015e3486dfd234f8e0d0d7d3d19b1bb971d720cd320975,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:29:19.798855 env[1138]: time="2024-02-12T20:29:19.798793115Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.27.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:29:19.802035 env[1138]: time="2024-02-12T20:29:19.801990864Z" level=info msg="ImageCreate event 
&ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:09294de61e63987f181077cbc2f5c82463878af9cd8ecc6110c54150c9ae3143,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:29:19.803503 env[1138]: time="2024-02-12T20:29:19.803438369Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.27.10\" returns image reference \"sha256:5eed9876e7181341b7015e3486dfd234f8e0d0d7d3d19b1bb971d720cd320975\"" Feb 12 20:29:19.818701 env[1138]: time="2024-02-12T20:29:19.818649076Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.27.10\"" Feb 12 20:29:20.784349 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2410005368.mount: Deactivated successfully. Feb 12 20:29:21.462261 env[1138]: time="2024-02-12T20:29:21.462187041Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.27.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:29:21.465432 env[1138]: time="2024-02-12T20:29:21.465380328Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:db7b01e105753475c198490cf875df1314fd1a599f67ea1b184586cb399e1cae,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:29:21.467991 env[1138]: time="2024-02-12T20:29:21.467941551Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.27.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:29:21.470477 env[1138]: time="2024-02-12T20:29:21.470431520Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:d084b53c772f62ec38fddb2348a82d4234016daf6cd43fedbf0b3281f3790f88,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:29:21.471170 env[1138]: time="2024-02-12T20:29:21.471127470Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.27.10\" returns image reference 
\"sha256:db7b01e105753475c198490cf875df1314fd1a599f67ea1b184586cb399e1cae\"" Feb 12 20:29:21.486579 env[1138]: time="2024-02-12T20:29:21.486518028Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Feb 12 20:29:21.835506 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4290209152.mount: Deactivated successfully. Feb 12 20:29:21.847544 env[1138]: time="2024-02-12T20:29:21.847469347Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:29:21.850891 env[1138]: time="2024-02-12T20:29:21.850836033Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:29:21.853650 env[1138]: time="2024-02-12T20:29:21.853601204Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:29:21.856477 env[1138]: time="2024-02-12T20:29:21.856425934Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:29:21.857318 env[1138]: time="2024-02-12T20:29:21.857264641Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Feb 12 20:29:21.871850 env[1138]: time="2024-02-12T20:29:21.871798192Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.7-0\"" Feb 12 20:29:22.626488 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1216470239.mount: Deactivated successfully. Feb 12 20:29:23.555279 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. 
Feb 12 20:29:23.555593 systemd[1]: Stopped kubelet.service. Feb 12 20:29:23.558072 systemd[1]: Started kubelet.service. Feb 12 20:29:23.660419 kubelet[1550]: E0212 20:29:23.660351 1550 run.go:74] "command failed" err="failed to load kubelet config file, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory, path: /var/lib/kubelet/config.yaml" Feb 12 20:29:23.665514 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 12 20:29:23.665866 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 12 20:29:26.965470 env[1138]: time="2024-02-12T20:29:26.965417209Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.7-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:29:26.969060 env[1138]: time="2024-02-12T20:29:26.969002109Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:29:26.971759 env[1138]: time="2024-02-12T20:29:26.971710115Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.7-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:29:26.974529 env[1138]: time="2024-02-12T20:29:26.974480279Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:29:26.975504 env[1138]: time="2024-02-12T20:29:26.975450176Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.7-0\" returns image reference \"sha256:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681\"" Feb 12 20:29:26.990113 
env[1138]: time="2024-02-12T20:29:26.990062447Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\"" Feb 12 20:29:27.329830 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3195598382.mount: Deactivated successfully. Feb 12 20:29:28.123695 env[1138]: time="2024-02-12T20:29:28.123627704Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.10.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:29:28.127506 env[1138]: time="2024-02-12T20:29:28.127445386Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:29:28.130389 env[1138]: time="2024-02-12T20:29:28.130331578Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.10.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:29:28.133109 env[1138]: time="2024-02-12T20:29:28.133054972Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:29:28.133845 env[1138]: time="2024-02-12T20:29:28.133802141Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\" returns image reference \"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc\"" Feb 12 20:29:30.645602 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Feb 12 20:29:32.727135 systemd[1]: Stopped kubelet.service. Feb 12 20:29:32.748801 systemd[1]: Reloading. 
Feb 12 20:29:32.844336 /usr/lib/systemd/system-generators/torcx-generator[1644]: time="2024-02-12T20:29:32Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 12 20:29:32.844388 /usr/lib/systemd/system-generators/torcx-generator[1644]: time="2024-02-12T20:29:32Z" level=info msg="torcx already run" Feb 12 20:29:32.957278 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 12 20:29:32.957305 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 12 20:29:32.980821 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 12 20:29:33.108273 systemd[1]: Started kubelet.service. Feb 12 20:29:33.181881 kubelet[1688]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 12 20:29:33.181881 kubelet[1688]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 12 20:29:33.181881 kubelet[1688]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 12 20:29:33.182506 kubelet[1688]: I0212 20:29:33.181985 1688 server.go:199] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 12 20:29:33.711363 kubelet[1688]: I0212 20:29:33.711327 1688 server.go:415] "Kubelet version" kubeletVersion="v1.27.2" Feb 12 20:29:33.711576 kubelet[1688]: I0212 20:29:33.711560 1688 server.go:417] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 12 20:29:33.711881 kubelet[1688]: I0212 20:29:33.711867 1688 server.go:837] "Client rotation is on, will bootstrap in background" Feb 12 20:29:33.717533 kubelet[1688]: E0212 20:29:33.717497 1688 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.128.0.46:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.128.0.46:6443: connect: connection refused Feb 12 20:29:33.717813 kubelet[1688]: I0212 20:29:33.717790 1688 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 12 20:29:33.805389 kubelet[1688]: I0212 20:29:33.805333 1688 server.go:662] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 12 20:29:33.805809 kubelet[1688]: I0212 20:29:33.805777 1688 container_manager_linux.go:266] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 12 20:29:33.805939 kubelet[1688]: I0212 20:29:33.805897 1688 container_manager_linux.go:271] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:} {Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] TopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] PodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms TopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]} Feb 12 20:29:33.805939 kubelet[1688]: I0212 20:29:33.805936 1688 topology_manager.go:136] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Feb 12 20:29:33.806249 kubelet[1688]: I0212 20:29:33.805958 1688 container_manager_linux.go:302] "Creating device plugin manager" Feb 12 20:29:33.806249 kubelet[1688]: I0212 20:29:33.806127 1688 state_mem.go:36] "Initialized new in-memory state store" Feb 12 
20:29:33.810445 kubelet[1688]: I0212 20:29:33.810397 1688 kubelet.go:405] "Attempting to sync node with API server" Feb 12 20:29:33.810445 kubelet[1688]: I0212 20:29:33.810434 1688 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 12 20:29:33.810685 kubelet[1688]: I0212 20:29:33.810460 1688 kubelet.go:309] "Adding apiserver pod source" Feb 12 20:29:33.810685 kubelet[1688]: I0212 20:29:33.810483 1688 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 12 20:29:33.811850 kubelet[1688]: W0212 20:29:33.811526 1688 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.128.0.46:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510-3-2-f80330683fdd88fa6e06.c.flatcar-212911.internal&limit=500&resourceVersion=0": dial tcp 10.128.0.46:6443: connect: connection refused Feb 12 20:29:33.811850 kubelet[1688]: E0212 20:29:33.811617 1688 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.128.0.46:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510-3-2-f80330683fdd88fa6e06.c.flatcar-212911.internal&limit=500&resourceVersion=0": dial tcp 10.128.0.46:6443: connect: connection refused Feb 12 20:29:33.812345 kubelet[1688]: W0212 20:29:33.812290 1688 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.128.0.46:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.128.0.46:6443: connect: connection refused Feb 12 20:29:33.812471 kubelet[1688]: E0212 20:29:33.812355 1688 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.128.0.46:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.128.0.46:6443: connect: connection refused Feb 12 20:29:33.812547 kubelet[1688]: I0212 20:29:33.812473 1688 kuberuntime_manager.go:257] "Container 
runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 12 20:29:33.816099 kubelet[1688]: W0212 20:29:33.816065 1688 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Feb 12 20:29:33.816831 kubelet[1688]: I0212 20:29:33.816799 1688 server.go:1168] "Started kubelet" Feb 12 20:29:33.829454 kubelet[1688]: E0212 20:29:33.829414 1688 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 12 20:29:33.829724 kubelet[1688]: E0212 20:29:33.829707 1688 kubelet.go:1400] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 12 20:29:33.833613 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). Feb 12 20:29:33.833801 kubelet[1688]: E0212 20:29:33.830390 1688 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510-3-2-f80330683fdd88fa6e06.c.flatcar-212911.internal.17b3378fa67f5c96", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510-3-2-f80330683fdd88fa6e06.c.flatcar-212911.internal", UID:"ci-3510-3-2-f80330683fdd88fa6e06.c.flatcar-212911.internal", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", 
Host:"ci-3510-3-2-f80330683fdd88fa6e06.c.flatcar-212911.internal"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 29, 33, 816765590, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 20, 29, 33, 816765590, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Post "https://10.128.0.46:6443/api/v1/namespaces/default/events": dial tcp 10.128.0.46:6443: connect: connection refused'(may retry after sleeping) Feb 12 20:29:33.835094 kubelet[1688]: I0212 20:29:33.834970 1688 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Feb 12 20:29:33.835563 kubelet[1688]: I0212 20:29:33.835529 1688 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 12 20:29:33.836162 kubelet[1688]: I0212 20:29:33.836127 1688 server.go:461] "Adding debug handlers to kubelet server" Feb 12 20:29:33.838051 kubelet[1688]: I0212 20:29:33.838027 1688 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Feb 12 20:29:33.843170 kubelet[1688]: E0212 20:29:33.842339 1688 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510-3-2-f80330683fdd88fa6e06.c.flatcar-212911.internal\" not found" Feb 12 20:29:33.843170 kubelet[1688]: I0212 20:29:33.842382 1688 volume_manager.go:284] "Starting Kubelet Volume Manager" Feb 12 20:29:33.843170 kubelet[1688]: I0212 20:29:33.842538 1688 desired_state_of_world_populator.go:145] "Desired state populator starts to run" Feb 12 20:29:33.843170 kubelet[1688]: W0212 20:29:33.843073 1688 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.128.0.46:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.46:6443: connect: connection refused Feb 12 20:29:33.843170 kubelet[1688]: E0212 20:29:33.843151 1688 
reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.128.0.46:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.46:6443: connect: connection refused Feb 12 20:29:33.844477 kubelet[1688]: E0212 20:29:33.844401 1688 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.46:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510-3-2-f80330683fdd88fa6e06.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.46:6443: connect: connection refused" interval="200ms" Feb 12 20:29:33.885719 kubelet[1688]: I0212 20:29:33.885629 1688 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 12 20:29:33.885719 kubelet[1688]: I0212 20:29:33.885659 1688 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 12 20:29:33.885719 kubelet[1688]: I0212 20:29:33.885698 1688 state_mem.go:36] "Initialized new in-memory state store" Feb 12 20:29:33.887405 kubelet[1688]: I0212 20:29:33.887377 1688 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4 Feb 12 20:29:33.889088 kubelet[1688]: I0212 20:29:33.889068 1688 kubelet_network_linux.go:63] "Initialized iptables rules." 
protocol=IPv6 Feb 12 20:29:33.889233 kubelet[1688]: I0212 20:29:33.889221 1688 status_manager.go:207] "Starting to sync pod status with apiserver" Feb 12 20:29:33.889312 kubelet[1688]: I0212 20:29:33.889300 1688 kubelet.go:2257] "Starting kubelet main sync loop" Feb 12 20:29:33.889426 kubelet[1688]: E0212 20:29:33.889416 1688 kubelet.go:2281] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 12 20:29:33.898320 kubelet[1688]: W0212 20:29:33.898275 1688 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.128.0.46:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.46:6443: connect: connection refused Feb 12 20:29:33.898510 kubelet[1688]: E0212 20:29:33.898497 1688 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.128.0.46:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.46:6443: connect: connection refused Feb 12 20:29:33.938536 kubelet[1688]: E0212 20:29:33.938389 1688 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510-3-2-f80330683fdd88fa6e06.c.flatcar-212911.internal.17b3378fa67f5c96", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510-3-2-f80330683fdd88fa6e06.c.flatcar-212911.internal", 
UID:"ci-3510-3-2-f80330683fdd88fa6e06.c.flatcar-212911.internal", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510-3-2-f80330683fdd88fa6e06.c.flatcar-212911.internal"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 29, 33, 816765590, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 20, 29, 33, 816765590, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Post "https://10.128.0.46:6443/api/v1/namespaces/default/events": dial tcp 10.128.0.46:6443: connect: connection refused'(may retry after sleeping) Feb 12 20:29:34.017032 kubelet[1688]: I0212 20:29:33.949680 1688 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510-3-2-f80330683fdd88fa6e06.c.flatcar-212911.internal" Feb 12 20:29:34.017032 kubelet[1688]: E0212 20:29:33.950121 1688 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.128.0.46:6443/api/v1/nodes\": dial tcp 10.128.0.46:6443: connect: connection refused" node="ci-3510-3-2-f80330683fdd88fa6e06.c.flatcar-212911.internal" Feb 12 20:29:34.017032 kubelet[1688]: E0212 20:29:33.990359 1688 kubelet.go:2281] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 12 20:29:34.020975 kubelet[1688]: I0212 20:29:34.020939 1688 policy_none.go:49] "None policy: Start" Feb 12 20:29:34.022350 kubelet[1688]: I0212 20:29:34.022314 1688 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 12 20:29:34.022544 kubelet[1688]: I0212 20:29:34.022527 1688 state_mem.go:35] "Initializing new in-memory state store" Feb 12 20:29:34.045096 kubelet[1688]: E0212 20:29:34.045053 1688 controller.go:146] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.128.0.46:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510-3-2-f80330683fdd88fa6e06.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.46:6443: connect: connection refused" interval="400ms" Feb 12 20:29:34.158682 kubelet[1688]: I0212 20:29:34.158638 1688 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510-3-2-f80330683fdd88fa6e06.c.flatcar-212911.internal" Feb 12 20:29:34.159389 kubelet[1688]: E0212 20:29:34.159317 1688 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.128.0.46:6443/api/v1/nodes\": dial tcp 10.128.0.46:6443: connect: connection refused" node="ci-3510-3-2-f80330683fdd88fa6e06.c.flatcar-212911.internal" Feb 12 20:29:34.191607 kubelet[1688]: E0212 20:29:34.191539 1688 kubelet.go:2281] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 12 20:29:34.236387 systemd[1]: Created slice kubepods.slice. Feb 12 20:29:34.243513 systemd[1]: Created slice kubepods-burstable.slice. Feb 12 20:29:34.247876 systemd[1]: Created slice kubepods-besteffort.slice. 
Feb 12 20:29:34.254874 kubelet[1688]: I0212 20:29:34.254824 1688 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 12 20:29:34.255958 kubelet[1688]: I0212 20:29:34.255890 1688 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 12 20:29:34.258393 kubelet[1688]: E0212 20:29:34.258369 1688 eviction_manager.go:262] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3510-3-2-f80330683fdd88fa6e06.c.flatcar-212911.internal\" not found" Feb 12 20:29:34.445774 kubelet[1688]: E0212 20:29:34.445726 1688 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.46:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510-3-2-f80330683fdd88fa6e06.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.46:6443: connect: connection refused" interval="800ms" Feb 12 20:29:34.564132 kubelet[1688]: I0212 20:29:34.564080 1688 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510-3-2-f80330683fdd88fa6e06.c.flatcar-212911.internal" Feb 12 20:29:34.564537 kubelet[1688]: E0212 20:29:34.564483 1688 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.128.0.46:6443/api/v1/nodes\": dial tcp 10.128.0.46:6443: connect: connection refused" node="ci-3510-3-2-f80330683fdd88fa6e06.c.flatcar-212911.internal" Feb 12 20:29:34.592796 kubelet[1688]: I0212 20:29:34.592736 1688 topology_manager.go:212] "Topology Admit Handler" Feb 12 20:29:34.600000 kubelet[1688]: I0212 20:29:34.599964 1688 topology_manager.go:212] "Topology Admit Handler" Feb 12 20:29:34.605771 kubelet[1688]: I0212 20:29:34.605742 1688 topology_manager.go:212] "Topology Admit Handler" Feb 12 20:29:34.612102 systemd[1]: Created slice kubepods-burstable-pod5aac4fbd9a30777abd19852f04d9daac.slice. Feb 12 20:29:34.633760 systemd[1]: Created slice kubepods-burstable-pod02cfa539b057cd2ec2c2b9117009061e.slice. 
Feb 12 20:29:34.646696 systemd[1]: Created slice kubepods-burstable-poda8ba3886f2c8002b952edc1aec9d2b49.slice. Feb 12 20:29:34.648868 kubelet[1688]: I0212 20:29:34.648647 1688 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/02cfa539b057cd2ec2c2b9117009061e-k8s-certs\") pod \"kube-controller-manager-ci-3510-3-2-f80330683fdd88fa6e06.c.flatcar-212911.internal\" (UID: \"02cfa539b057cd2ec2c2b9117009061e\") " pod="kube-system/kube-controller-manager-ci-3510-3-2-f80330683fdd88fa6e06.c.flatcar-212911.internal" Feb 12 20:29:34.648868 kubelet[1688]: I0212 20:29:34.648805 1688 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/02cfa539b057cd2ec2c2b9117009061e-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510-3-2-f80330683fdd88fa6e06.c.flatcar-212911.internal\" (UID: \"02cfa539b057cd2ec2c2b9117009061e\") " pod="kube-system/kube-controller-manager-ci-3510-3-2-f80330683fdd88fa6e06.c.flatcar-212911.internal" Feb 12 20:29:34.650131 kubelet[1688]: I0212 20:29:34.649562 1688 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5aac4fbd9a30777abd19852f04d9daac-ca-certs\") pod \"kube-apiserver-ci-3510-3-2-f80330683fdd88fa6e06.c.flatcar-212911.internal\" (UID: \"5aac4fbd9a30777abd19852f04d9daac\") " pod="kube-system/kube-apiserver-ci-3510-3-2-f80330683fdd88fa6e06.c.flatcar-212911.internal" Feb 12 20:29:34.650131 kubelet[1688]: I0212 20:29:34.649628 1688 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5aac4fbd9a30777abd19852f04d9daac-k8s-certs\") pod \"kube-apiserver-ci-3510-3-2-f80330683fdd88fa6e06.c.flatcar-212911.internal\" (UID: \"5aac4fbd9a30777abd19852f04d9daac\") 
" pod="kube-system/kube-apiserver-ci-3510-3-2-f80330683fdd88fa6e06.c.flatcar-212911.internal" Feb 12 20:29:34.650131 kubelet[1688]: I0212 20:29:34.649672 1688 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/02cfa539b057cd2ec2c2b9117009061e-flexvolume-dir\") pod \"kube-controller-manager-ci-3510-3-2-f80330683fdd88fa6e06.c.flatcar-212911.internal\" (UID: \"02cfa539b057cd2ec2c2b9117009061e\") " pod="kube-system/kube-controller-manager-ci-3510-3-2-f80330683fdd88fa6e06.c.flatcar-212911.internal" Feb 12 20:29:34.650131 kubelet[1688]: I0212 20:29:34.649719 1688 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a8ba3886f2c8002b952edc1aec9d2b49-kubeconfig\") pod \"kube-scheduler-ci-3510-3-2-f80330683fdd88fa6e06.c.flatcar-212911.internal\" (UID: \"a8ba3886f2c8002b952edc1aec9d2b49\") " pod="kube-system/kube-scheduler-ci-3510-3-2-f80330683fdd88fa6e06.c.flatcar-212911.internal" Feb 12 20:29:34.650485 kubelet[1688]: I0212 20:29:34.649758 1688 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5aac4fbd9a30777abd19852f04d9daac-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510-3-2-f80330683fdd88fa6e06.c.flatcar-212911.internal\" (UID: \"5aac4fbd9a30777abd19852f04d9daac\") " pod="kube-system/kube-apiserver-ci-3510-3-2-f80330683fdd88fa6e06.c.flatcar-212911.internal" Feb 12 20:29:34.650485 kubelet[1688]: I0212 20:29:34.649794 1688 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/02cfa539b057cd2ec2c2b9117009061e-ca-certs\") pod \"kube-controller-manager-ci-3510-3-2-f80330683fdd88fa6e06.c.flatcar-212911.internal\" (UID: \"02cfa539b057cd2ec2c2b9117009061e\") " 
pod="kube-system/kube-controller-manager-ci-3510-3-2-f80330683fdd88fa6e06.c.flatcar-212911.internal" Feb 12 20:29:34.650485 kubelet[1688]: I0212 20:29:34.649846 1688 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/02cfa539b057cd2ec2c2b9117009061e-kubeconfig\") pod \"kube-controller-manager-ci-3510-3-2-f80330683fdd88fa6e06.c.flatcar-212911.internal\" (UID: \"02cfa539b057cd2ec2c2b9117009061e\") " pod="kube-system/kube-controller-manager-ci-3510-3-2-f80330683fdd88fa6e06.c.flatcar-212911.internal" Feb 12 20:29:34.651692 kubelet[1688]: W0212 20:29:34.651659 1688 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.128.0.46:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.46:6443: connect: connection refused Feb 12 20:29:34.651888 kubelet[1688]: E0212 20:29:34.651864 1688 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.128.0.46:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.46:6443: connect: connection refused Feb 12 20:29:34.723693 kubelet[1688]: W0212 20:29:34.722492 1688 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.128.0.46:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.46:6443: connect: connection refused Feb 12 20:29:34.723951 kubelet[1688]: E0212 20:29:34.723929 1688 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.128.0.46:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.46:6443: connect: connection refused Feb 12 20:29:34.931128 env[1138]: time="2024-02-12T20:29:34.931048067Z" 
level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510-3-2-f80330683fdd88fa6e06.c.flatcar-212911.internal,Uid:5aac4fbd9a30777abd19852f04d9daac,Namespace:kube-system,Attempt:0,}" Feb 12 20:29:34.938858 env[1138]: time="2024-02-12T20:29:34.938779261Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510-3-2-f80330683fdd88fa6e06.c.flatcar-212911.internal,Uid:02cfa539b057cd2ec2c2b9117009061e,Namespace:kube-system,Attempt:0,}" Feb 12 20:29:34.952343 env[1138]: time="2024-02-12T20:29:34.952283213Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510-3-2-f80330683fdd88fa6e06.c.flatcar-212911.internal,Uid:a8ba3886f2c8002b952edc1aec9d2b49,Namespace:kube-system,Attempt:0,}" Feb 12 20:29:35.107661 kubelet[1688]: W0212 20:29:35.107578 1688 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.128.0.46:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.128.0.46:6443: connect: connection refused Feb 12 20:29:35.107661 kubelet[1688]: E0212 20:29:35.107659 1688 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.128.0.46:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.128.0.46:6443: connect: connection refused Feb 12 20:29:35.247244 kubelet[1688]: E0212 20:29:35.247201 1688 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.46:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510-3-2-f80330683fdd88fa6e06.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.46:6443: connect: connection refused" interval="1.6s" Feb 12 20:29:35.316453 kubelet[1688]: W0212 20:29:35.316358 1688 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get 
"https://10.128.0.46:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510-3-2-f80330683fdd88fa6e06.c.flatcar-212911.internal&limit=500&resourceVersion=0": dial tcp 10.128.0.46:6443: connect: connection refused Feb 12 20:29:35.316664 kubelet[1688]: E0212 20:29:35.316486 1688 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.128.0.46:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510-3-2-f80330683fdd88fa6e06.c.flatcar-212911.internal&limit=500&resourceVersion=0": dial tcp 10.128.0.46:6443: connect: connection refused Feb 12 20:29:35.370382 kubelet[1688]: I0212 20:29:35.370248 1688 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510-3-2-f80330683fdd88fa6e06.c.flatcar-212911.internal" Feb 12 20:29:35.371182 kubelet[1688]: E0212 20:29:35.371150 1688 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.128.0.46:6443/api/v1/nodes\": dial tcp 10.128.0.46:6443: connect: connection refused" node="ci-3510-3-2-f80330683fdd88fa6e06.c.flatcar-212911.internal" Feb 12 20:29:35.883244 kubelet[1688]: E0212 20:29:35.883185 1688 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.128.0.46:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.128.0.46:6443: connect: connection refused Feb 12 20:29:36.057389 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount871444157.mount: Deactivated successfully. 
Feb 12 20:29:36.070834 env[1138]: time="2024-02-12T20:29:36.070769873Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:29:36.072543 env[1138]: time="2024-02-12T20:29:36.072485282Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:29:36.078637 env[1138]: time="2024-02-12T20:29:36.078551343Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:29:36.080812 env[1138]: time="2024-02-12T20:29:36.080730056Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:29:36.082117 env[1138]: time="2024-02-12T20:29:36.082061351Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:29:36.085996 env[1138]: time="2024-02-12T20:29:36.085907417Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:29:36.087458 env[1138]: time="2024-02-12T20:29:36.087395133Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:29:36.089725 env[1138]: time="2024-02-12T20:29:36.089659599Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Feb 12 20:29:36.090946 env[1138]: time="2024-02-12T20:29:36.090879636Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:29:36.092260 env[1138]: time="2024-02-12T20:29:36.092219986Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:29:36.094620 env[1138]: time="2024-02-12T20:29:36.094561199Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:29:36.096141 env[1138]: time="2024-02-12T20:29:36.096098730Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:29:36.179955 env[1138]: time="2024-02-12T20:29:36.178544920Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 20:29:36.179955 env[1138]: time="2024-02-12T20:29:36.178624066Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 20:29:36.179955 env[1138]: time="2024-02-12T20:29:36.178644533Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 20:29:36.179955 env[1138]: time="2024-02-12T20:29:36.172165366Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 20:29:36.179955 env[1138]: time="2024-02-12T20:29:36.172233784Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 20:29:36.179955 env[1138]: time="2024-02-12T20:29:36.172254701Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 20:29:36.179955 env[1138]: time="2024-02-12T20:29:36.172468470Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d9c174f2e7ac09c26369b2dde8c3e4f4f35e4fe078abba7174dabf64b9eaa3eb pid=1725 runtime=io.containerd.runc.v2 Feb 12 20:29:36.183496 env[1138]: time="2024-02-12T20:29:36.178837200Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ef4027d0490996c13635088f3bb5a25e401da8616088dbbbd82d1fc6c4d9f4dc pid=1736 runtime=io.containerd.runc.v2 Feb 12 20:29:36.211517 systemd[1]: Started cri-containerd-d9c174f2e7ac09c26369b2dde8c3e4f4f35e4fe078abba7174dabf64b9eaa3eb.scope. Feb 12 20:29:36.242538 systemd[1]: Started cri-containerd-ef4027d0490996c13635088f3bb5a25e401da8616088dbbbd82d1fc6c4d9f4dc.scope. Feb 12 20:29:36.243172 env[1138]: time="2024-02-12T20:29:36.243071303Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 20:29:36.243590 env[1138]: time="2024-02-12T20:29:36.243545475Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 20:29:36.244140 env[1138]: time="2024-02-12T20:29:36.243758391Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 20:29:36.246620 env[1138]: time="2024-02-12T20:29:36.246555891Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/399c87d04ec5e93a122d1f88688cf82f8d08ef43e0d998455d16c934114431d7 pid=1777 runtime=io.containerd.runc.v2 Feb 12 20:29:36.275319 systemd[1]: Started cri-containerd-399c87d04ec5e93a122d1f88688cf82f8d08ef43e0d998455d16c934114431d7.scope. Feb 12 20:29:36.345284 env[1138]: time="2024-02-12T20:29:36.345218546Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510-3-2-f80330683fdd88fa6e06.c.flatcar-212911.internal,Uid:5aac4fbd9a30777abd19852f04d9daac,Namespace:kube-system,Attempt:0,} returns sandbox id \"d9c174f2e7ac09c26369b2dde8c3e4f4f35e4fe078abba7174dabf64b9eaa3eb\"" Feb 12 20:29:36.349836 kubelet[1688]: E0212 20:29:36.349776 1688 kubelet_pods.go:414] "Hostname for pod was too long, truncated it" podName="kube-apiserver-ci-3510-3-2-f80330683fdd88fa6e06.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-apiserver-ci-3510-3-2-f80330683fdd88fa6e06.c.flatcar-21291" Feb 12 20:29:36.353435 env[1138]: time="2024-02-12T20:29:36.353379096Z" level=info msg="CreateContainer within sandbox \"d9c174f2e7ac09c26369b2dde8c3e4f4f35e4fe078abba7174dabf64b9eaa3eb\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 12 20:29:36.377699 env[1138]: time="2024-02-12T20:29:36.377628190Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510-3-2-f80330683fdd88fa6e06.c.flatcar-212911.internal,Uid:02cfa539b057cd2ec2c2b9117009061e,Namespace:kube-system,Attempt:0,} returns sandbox id \"ef4027d0490996c13635088f3bb5a25e401da8616088dbbbd82d1fc6c4d9f4dc\"" Feb 12 20:29:36.381450 kubelet[1688]: E0212 20:29:36.379768 1688 kubelet_pods.go:414] "Hostname for pod was too long, truncated it" 
podName="kube-controller-manager-ci-3510-3-2-f80330683fdd88fa6e06.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-controller-manager-ci-3510-3-2-f80330683fdd88fa6e06.c.flat" Feb 12 20:29:36.383395 env[1138]: time="2024-02-12T20:29:36.383348060Z" level=info msg="CreateContainer within sandbox \"ef4027d0490996c13635088f3bb5a25e401da8616088dbbbd82d1fc6c4d9f4dc\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 12 20:29:36.394659 env[1138]: time="2024-02-12T20:29:36.394600940Z" level=info msg="CreateContainer within sandbox \"d9c174f2e7ac09c26369b2dde8c3e4f4f35e4fe078abba7174dabf64b9eaa3eb\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"50e8831c706903e069906846be6776b4c54d4485ad74b4a0f0eab01418709cd1\"" Feb 12 20:29:36.395799 env[1138]: time="2024-02-12T20:29:36.395761084Z" level=info msg="StartContainer for \"50e8831c706903e069906846be6776b4c54d4485ad74b4a0f0eab01418709cd1\"" Feb 12 20:29:36.405263 env[1138]: time="2024-02-12T20:29:36.405202024Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510-3-2-f80330683fdd88fa6e06.c.flatcar-212911.internal,Uid:a8ba3886f2c8002b952edc1aec9d2b49,Namespace:kube-system,Attempt:0,} returns sandbox id \"399c87d04ec5e93a122d1f88688cf82f8d08ef43e0d998455d16c934114431d7\"" Feb 12 20:29:36.408068 kubelet[1688]: E0212 20:29:36.407591 1688 kubelet_pods.go:414] "Hostname for pod was too long, truncated it" podName="kube-scheduler-ci-3510-3-2-f80330683fdd88fa6e06.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-scheduler-ci-3510-3-2-f80330683fdd88fa6e06.c.flatcar-21291" Feb 12 20:29:36.409522 env[1138]: time="2024-02-12T20:29:36.409473912Z" level=info msg="CreateContainer within sandbox \"399c87d04ec5e93a122d1f88688cf82f8d08ef43e0d998455d16c934114431d7\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 12 20:29:36.418761 env[1138]: time="2024-02-12T20:29:36.418701837Z" level=info 
msg="CreateContainer within sandbox \"ef4027d0490996c13635088f3bb5a25e401da8616088dbbbd82d1fc6c4d9f4dc\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"4bcb545ff360fc9ff627435116beea6dc6fd6d24b4066826b873414e6f56f27e\"" Feb 12 20:29:36.420376 env[1138]: time="2024-02-12T20:29:36.420333591Z" level=info msg="StartContainer for \"4bcb545ff360fc9ff627435116beea6dc6fd6d24b4066826b873414e6f56f27e\"" Feb 12 20:29:36.431862 systemd[1]: Started cri-containerd-50e8831c706903e069906846be6776b4c54d4485ad74b4a0f0eab01418709cd1.scope. Feb 12 20:29:36.466537 env[1138]: time="2024-02-12T20:29:36.466477157Z" level=info msg="CreateContainer within sandbox \"399c87d04ec5e93a122d1f88688cf82f8d08ef43e0d998455d16c934114431d7\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"7ba83ebb654a7b72e3ee7b13b781db0f678e334522badb62afc7d7fc0e734242\"" Feb 12 20:29:36.472996 env[1138]: time="2024-02-12T20:29:36.472776931Z" level=info msg="StartContainer for \"7ba83ebb654a7b72e3ee7b13b781db0f678e334522badb62afc7d7fc0e734242\"" Feb 12 20:29:36.500142 systemd[1]: Started cri-containerd-7ba83ebb654a7b72e3ee7b13b781db0f678e334522badb62afc7d7fc0e734242.scope. Feb 12 20:29:36.512249 systemd[1]: Started cri-containerd-4bcb545ff360fc9ff627435116beea6dc6fd6d24b4066826b873414e6f56f27e.scope. 
Feb 12 20:29:36.561403 env[1138]: time="2024-02-12T20:29:36.561345507Z" level=info msg="StartContainer for \"50e8831c706903e069906846be6776b4c54d4485ad74b4a0f0eab01418709cd1\" returns successfully" Feb 12 20:29:36.612953 env[1138]: time="2024-02-12T20:29:36.612878330Z" level=info msg="StartContainer for \"4bcb545ff360fc9ff627435116beea6dc6fd6d24b4066826b873414e6f56f27e\" returns successfully" Feb 12 20:29:36.714257 env[1138]: time="2024-02-12T20:29:36.714124047Z" level=info msg="StartContainer for \"7ba83ebb654a7b72e3ee7b13b781db0f678e334522badb62afc7d7fc0e734242\" returns successfully" Feb 12 20:29:36.976559 kubelet[1688]: I0212 20:29:36.976429 1688 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510-3-2-f80330683fdd88fa6e06.c.flatcar-212911.internal" Feb 12 20:29:41.241074 kubelet[1688]: I0212 20:29:41.241021 1688 kubelet_node_status.go:73] "Successfully registered node" node="ci-3510-3-2-f80330683fdd88fa6e06.c.flatcar-212911.internal" Feb 12 20:29:41.813948 kubelet[1688]: I0212 20:29:41.813871 1688 apiserver.go:52] "Watching apiserver" Feb 12 20:29:41.843279 kubelet[1688]: I0212 20:29:41.843208 1688 desired_state_of_world_populator.go:153] "Finished populating initial desired state of world" Feb 12 20:29:41.902506 kubelet[1688]: I0212 20:29:41.902436 1688 reconciler.go:41] "Reconciler: start to sync state" Feb 12 20:29:43.744192 systemd[1]: Reloading. 
Feb 12 20:29:43.907981 /usr/lib/systemd/system-generators/torcx-generator[1982]: time="2024-02-12T20:29:43Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 12 20:29:43.908050 /usr/lib/systemd/system-generators/torcx-generator[1982]: time="2024-02-12T20:29:43Z" level=info msg="torcx already run" Feb 12 20:29:44.044115 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 12 20:29:44.044143 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 12 20:29:44.083023 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 12 20:29:44.251947 systemd[1]: Stopping kubelet.service... Feb 12 20:29:44.252579 kubelet[1688]: I0212 20:29:44.252406 1688 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 12 20:29:44.271805 systemd[1]: kubelet.service: Deactivated successfully. Feb 12 20:29:44.272109 systemd[1]: Stopped kubelet.service. Feb 12 20:29:44.272199 systemd[1]: kubelet.service: Consumed 1.086s CPU time. Feb 12 20:29:44.274780 systemd[1]: Started kubelet.service. Feb 12 20:29:44.383136 kubelet[2023]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 12 20:29:44.383136 kubelet[2023]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 12 20:29:44.383136 kubelet[2023]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 12 20:29:44.383710 kubelet[2023]: I0212 20:29:44.383230 2023 server.go:199] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 12 20:29:44.406959 kubelet[2023]: I0212 20:29:44.403078 2023 server.go:415] "Kubelet version" kubeletVersion="v1.27.2" Feb 12 20:29:44.406959 kubelet[2023]: I0212 20:29:44.403118 2023 server.go:417] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 12 20:29:44.406959 kubelet[2023]: I0212 20:29:44.403454 2023 server.go:837] "Client rotation is on, will bootstrap in background" Feb 12 20:29:44.406959 kubelet[2023]: I0212 20:29:44.406531 2023 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 12 20:29:44.409968 kubelet[2023]: I0212 20:29:44.408626 2023 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 12 20:29:44.410615 sudo[2033]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Feb 12 20:29:44.411048 sudo[2033]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Feb 12 20:29:44.417388 kubelet[2023]: I0212 20:29:44.417243 2023 server.go:662] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 12 20:29:44.417963 kubelet[2023]: I0212 20:29:44.417575 2023 container_manager_linux.go:266] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 12 20:29:44.417963 kubelet[2023]: I0212 20:29:44.417697 2023 container_manager_linux.go:271] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:} {Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] TopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] PodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms TopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]} Feb 12 20:29:44.417963 kubelet[2023]: I0212 20:29:44.417718 2023 topology_manager.go:136] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Feb 12 20:29:44.417963 kubelet[2023]: I0212 20:29:44.417735 2023 container_manager_linux.go:302] "Creating device plugin manager" Feb 12 20:29:44.417963 kubelet[2023]: I0212 20:29:44.417785 2023 state_mem.go:36] "Initialized new in-memory state store" Feb 12 
20:29:44.422781 kubelet[2023]: I0212 20:29:44.422749 2023 kubelet.go:405] "Attempting to sync node with API server" Feb 12 20:29:44.422781 kubelet[2023]: I0212 20:29:44.422786 2023 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 12 20:29:44.423028 kubelet[2023]: I0212 20:29:44.422817 2023 kubelet.go:309] "Adding apiserver pod source" Feb 12 20:29:44.423028 kubelet[2023]: I0212 20:29:44.422840 2023 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 12 20:29:44.432600 kubelet[2023]: I0212 20:29:44.432561 2023 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 12 20:29:44.469953 kubelet[2023]: I0212 20:29:44.468448 2023 server.go:1168] "Started kubelet" Feb 12 20:29:44.477239 kubelet[2023]: I0212 20:29:44.476788 2023 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Feb 12 20:29:44.478360 kubelet[2023]: I0212 20:29:44.478318 2023 server.go:461] "Adding debug handlers to kubelet server" Feb 12 20:29:44.479337 kubelet[2023]: I0212 20:29:44.479307 2023 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Feb 12 20:29:44.483622 kubelet[2023]: E0212 20:29:44.483587 2023 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 12 20:29:44.483767 kubelet[2023]: E0212 20:29:44.483638 2023 kubelet.go:1400] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 12 20:29:44.498311 kubelet[2023]: I0212 20:29:44.498269 2023 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 12 20:29:44.519777 kubelet[2023]: I0212 20:29:44.519739 2023 volume_manager.go:284] "Starting Kubelet Volume Manager" Feb 12 20:29:44.519982 kubelet[2023]: I0212 20:29:44.519923 2023 desired_state_of_world_populator.go:145] "Desired state populator starts to run" Feb 12 20:29:44.568122 kubelet[2023]: I0212 20:29:44.568083 2023 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4 Feb 12 20:29:44.576336 kubelet[2023]: I0212 20:29:44.576301 2023 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv6 Feb 12 20:29:44.576336 kubelet[2023]: I0212 20:29:44.576333 2023 status_manager.go:207] "Starting to sync pod status with apiserver" Feb 12 20:29:44.589754 kubelet[2023]: I0212 20:29:44.589713 2023 kubelet.go:2257] "Starting kubelet main sync loop" Feb 12 20:29:44.589973 kubelet[2023]: E0212 20:29:44.589831 2023 kubelet.go:2281] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 12 20:29:44.638753 kubelet[2023]: I0212 20:29:44.638649 2023 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510-3-2-f80330683fdd88fa6e06.c.flatcar-212911.internal" Feb 12 20:29:44.688570 kubelet[2023]: I0212 20:29:44.688521 2023 kubelet_node_status.go:108] "Node was previously registered" node="ci-3510-3-2-f80330683fdd88fa6e06.c.flatcar-212911.internal" Feb 12 20:29:44.688770 kubelet[2023]: I0212 20:29:44.688625 2023 kubelet_node_status.go:73] "Successfully registered node" node="ci-3510-3-2-f80330683fdd88fa6e06.c.flatcar-212911.internal" Feb 12 20:29:44.690146 kubelet[2023]: E0212 20:29:44.690114 2023 kubelet.go:2281] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 12 
20:29:44.727534 kubelet[2023]: I0212 20:29:44.727494 2023 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 12 20:29:44.727534 kubelet[2023]: I0212 20:29:44.727535 2023 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 12 20:29:44.727800 kubelet[2023]: I0212 20:29:44.727560 2023 state_mem.go:36] "Initialized new in-memory state store" Feb 12 20:29:44.727800 kubelet[2023]: I0212 20:29:44.727760 2023 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 12 20:29:44.727800 kubelet[2023]: I0212 20:29:44.727782 2023 state_mem.go:96] "Updated CPUSet assignments" assignments=map[] Feb 12 20:29:44.727800 kubelet[2023]: I0212 20:29:44.727795 2023 policy_none.go:49] "None policy: Start" Feb 12 20:29:44.728869 kubelet[2023]: I0212 20:29:44.728839 2023 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 12 20:29:44.729007 kubelet[2023]: I0212 20:29:44.728878 2023 state_mem.go:35] "Initializing new in-memory state store" Feb 12 20:29:44.729117 kubelet[2023]: I0212 20:29:44.729094 2023 state_mem.go:75] "Updated machine memory state" Feb 12 20:29:44.739852 kubelet[2023]: I0212 20:29:44.739825 2023 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 12 20:29:44.740341 kubelet[2023]: I0212 20:29:44.740319 2023 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 12 20:29:44.765974 update_engine[1127]: I0212 20:29:44.762598 1127 update_attempter.cc:509] Updating boot flags... 
Feb 12 20:29:44.891817 kubelet[2023]: I0212 20:29:44.890319 2023 topology_manager.go:212] "Topology Admit Handler" Feb 12 20:29:44.891817 kubelet[2023]: I0212 20:29:44.890496 2023 topology_manager.go:212] "Topology Admit Handler" Feb 12 20:29:44.891817 kubelet[2023]: I0212 20:29:44.890578 2023 topology_manager.go:212] "Topology Admit Handler" Feb 12 20:29:44.918870 kubelet[2023]: W0212 20:29:44.917645 2023 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Feb 12 20:29:44.918870 kubelet[2023]: W0212 20:29:44.918463 2023 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Feb 12 20:29:44.918870 kubelet[2023]: W0212 20:29:44.918507 2023 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Feb 12 20:29:44.943805 kubelet[2023]: I0212 20:29:44.942858 2023 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/02cfa539b057cd2ec2c2b9117009061e-flexvolume-dir\") pod \"kube-controller-manager-ci-3510-3-2-f80330683fdd88fa6e06.c.flatcar-212911.internal\" (UID: \"02cfa539b057cd2ec2c2b9117009061e\") " pod="kube-system/kube-controller-manager-ci-3510-3-2-f80330683fdd88fa6e06.c.flatcar-212911.internal" Feb 12 20:29:44.943805 kubelet[2023]: I0212 20:29:44.942949 2023 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a8ba3886f2c8002b952edc1aec9d2b49-kubeconfig\") pod \"kube-scheduler-ci-3510-3-2-f80330683fdd88fa6e06.c.flatcar-212911.internal\" (UID: 
\"a8ba3886f2c8002b952edc1aec9d2b49\") " pod="kube-system/kube-scheduler-ci-3510-3-2-f80330683fdd88fa6e06.c.flatcar-212911.internal" Feb 12 20:29:44.943805 kubelet[2023]: I0212 20:29:44.942994 2023 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5aac4fbd9a30777abd19852f04d9daac-ca-certs\") pod \"kube-apiserver-ci-3510-3-2-f80330683fdd88fa6e06.c.flatcar-212911.internal\" (UID: \"5aac4fbd9a30777abd19852f04d9daac\") " pod="kube-system/kube-apiserver-ci-3510-3-2-f80330683fdd88fa6e06.c.flatcar-212911.internal" Feb 12 20:29:44.943805 kubelet[2023]: I0212 20:29:44.943454 2023 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/02cfa539b057cd2ec2c2b9117009061e-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510-3-2-f80330683fdd88fa6e06.c.flatcar-212911.internal\" (UID: \"02cfa539b057cd2ec2c2b9117009061e\") " pod="kube-system/kube-controller-manager-ci-3510-3-2-f80330683fdd88fa6e06.c.flatcar-212911.internal" Feb 12 20:29:44.944193 kubelet[2023]: I0212 20:29:44.943508 2023 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5aac4fbd9a30777abd19852f04d9daac-k8s-certs\") pod \"kube-apiserver-ci-3510-3-2-f80330683fdd88fa6e06.c.flatcar-212911.internal\" (UID: \"5aac4fbd9a30777abd19852f04d9daac\") " pod="kube-system/kube-apiserver-ci-3510-3-2-f80330683fdd88fa6e06.c.flatcar-212911.internal" Feb 12 20:29:44.944193 kubelet[2023]: I0212 20:29:44.943545 2023 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5aac4fbd9a30777abd19852f04d9daac-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510-3-2-f80330683fdd88fa6e06.c.flatcar-212911.internal\" (UID: 
\"5aac4fbd9a30777abd19852f04d9daac\") " pod="kube-system/kube-apiserver-ci-3510-3-2-f80330683fdd88fa6e06.c.flatcar-212911.internal" Feb 12 20:29:44.944193 kubelet[2023]: I0212 20:29:44.943580 2023 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/02cfa539b057cd2ec2c2b9117009061e-ca-certs\") pod \"kube-controller-manager-ci-3510-3-2-f80330683fdd88fa6e06.c.flatcar-212911.internal\" (UID: \"02cfa539b057cd2ec2c2b9117009061e\") " pod="kube-system/kube-controller-manager-ci-3510-3-2-f80330683fdd88fa6e06.c.flatcar-212911.internal" Feb 12 20:29:44.944193 kubelet[2023]: I0212 20:29:44.943634 2023 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/02cfa539b057cd2ec2c2b9117009061e-k8s-certs\") pod \"kube-controller-manager-ci-3510-3-2-f80330683fdd88fa6e06.c.flatcar-212911.internal\" (UID: \"02cfa539b057cd2ec2c2b9117009061e\") " pod="kube-system/kube-controller-manager-ci-3510-3-2-f80330683fdd88fa6e06.c.flatcar-212911.internal" Feb 12 20:29:44.944412 kubelet[2023]: I0212 20:29:44.943702 2023 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/02cfa539b057cd2ec2c2b9117009061e-kubeconfig\") pod \"kube-controller-manager-ci-3510-3-2-f80330683fdd88fa6e06.c.flatcar-212911.internal\" (UID: \"02cfa539b057cd2ec2c2b9117009061e\") " pod="kube-system/kube-controller-manager-ci-3510-3-2-f80330683fdd88fa6e06.c.flatcar-212911.internal" Feb 12 20:29:45.439519 kubelet[2023]: I0212 20:29:45.439472 2023 apiserver.go:52] "Watching apiserver" Feb 12 20:29:45.462046 sudo[2033]: pam_unix(sudo:session): session closed for user root Feb 12 20:29:45.520957 kubelet[2023]: I0212 20:29:45.520892 2023 desired_state_of_world_populator.go:153] "Finished populating initial desired state of world" Feb 12 20:29:45.548251 
kubelet[2023]: I0212 20:29:45.548199 2023 reconciler.go:41] "Reconciler: start to sync state" Feb 12 20:29:45.669926 kubelet[2023]: W0212 20:29:45.669871 2023 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Feb 12 20:29:45.670138 kubelet[2023]: E0212 20:29:45.669975 2023 kubelet.go:1856] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3510-3-2-f80330683fdd88fa6e06.c.flatcar-212911.internal\" already exists" pod="kube-system/kube-apiserver-ci-3510-3-2-f80330683fdd88fa6e06.c.flatcar-212911.internal" Feb 12 20:29:45.707504 kubelet[2023]: I0212 20:29:45.707358 2023 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3510-3-2-f80330683fdd88fa6e06.c.flatcar-212911.internal" podStartSLOduration=1.707274242 podCreationTimestamp="2024-02-12 20:29:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 20:29:45.704845336 +0000 UTC m=+1.424563699" watchObservedRunningTime="2024-02-12 20:29:45.707274242 +0000 UTC m=+1.426992602" Feb 12 20:29:45.707753 kubelet[2023]: I0212 20:29:45.707544 2023 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3510-3-2-f80330683fdd88fa6e06.c.flatcar-212911.internal" podStartSLOduration=1.707496496 podCreationTimestamp="2024-02-12 20:29:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 20:29:45.693955439 +0000 UTC m=+1.413673804" watchObservedRunningTime="2024-02-12 20:29:45.707496496 +0000 UTC m=+1.427214858" Feb 12 20:29:45.738716 kubelet[2023]: I0212 20:29:45.738665 2023 pod_startup_latency_tracker.go:102] "Observed pod startup duration" 
pod="kube-system/kube-controller-manager-ci-3510-3-2-f80330683fdd88fa6e06.c.flatcar-212911.internal" podStartSLOduration=1.738598538 podCreationTimestamp="2024-02-12 20:29:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 20:29:45.719349433 +0000 UTC m=+1.439067790" watchObservedRunningTime="2024-02-12 20:29:45.738598538 +0000 UTC m=+1.458316900" Feb 12 20:29:46.916324 sudo[1309]: pam_unix(sudo:session): session closed for user root Feb 12 20:29:46.959049 sshd[1303]: pam_unix(sshd:session): session closed for user core Feb 12 20:29:46.963643 systemd[1]: sshd@4-10.128.0.46:22-147.75.109.163:57116.service: Deactivated successfully. Feb 12 20:29:46.964826 systemd[1]: session-5.scope: Deactivated successfully. Feb 12 20:29:46.965115 systemd[1]: session-5.scope: Consumed 7.041s CPU time. Feb 12 20:29:46.965982 systemd-logind[1122]: Session 5 logged out. Waiting for processes to exit. Feb 12 20:29:46.967283 systemd-logind[1122]: Removed session 5. Feb 12 20:29:57.709089 kubelet[2023]: I0212 20:29:57.709049 2023 kuberuntime_manager.go:1460] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 12 20:29:57.710106 env[1138]: time="2024-02-12T20:29:57.710043340Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Feb 12 20:29:57.710584 kubelet[2023]: I0212 20:29:57.710357 2023 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 12 20:29:58.305610 kubelet[2023]: I0212 20:29:58.305568 2023 topology_manager.go:212] "Topology Admit Handler" Feb 12 20:29:58.314528 systemd[1]: Created slice kubepods-besteffort-pod6d547cb5_e9c5_4fb8_820a_3ae94a1e3c8a.slice. 
Feb 12 20:29:58.321805 kubelet[2023]: W0212 20:29:58.321766 2023 reflector.go:533] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-3510-3-2-f80330683fdd88fa6e06.c.flatcar-212911.internal" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510-3-2-f80330683fdd88fa6e06.c.flatcar-212911.internal' and this object
Feb 12 20:29:58.322116 kubelet[2023]: E0212 20:29:58.322090 2023 reflector.go:148] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-3510-3-2-f80330683fdd88fa6e06.c.flatcar-212911.internal" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510-3-2-f80330683fdd88fa6e06.c.flatcar-212911.internal' and this object
Feb 12 20:29:58.322345 kubelet[2023]: W0212 20:29:58.322323 2023 reflector.go:533] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:ci-3510-3-2-f80330683fdd88fa6e06.c.flatcar-212911.internal" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510-3-2-f80330683fdd88fa6e06.c.flatcar-212911.internal' and this object
Feb 12 20:29:58.322514 kubelet[2023]: E0212 20:29:58.322495 2023 reflector.go:148] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:ci-3510-3-2-f80330683fdd88fa6e06.c.flatcar-212911.internal" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510-3-2-f80330683fdd88fa6e06.c.flatcar-212911.internal' and this object
Feb 12 20:29:58.330377 kubelet[2023]: I0212 20:29:58.330337 2023 topology_manager.go:212] "Topology Admit Handler"
Feb 12 20:29:58.338301 systemd[1]: Created slice kubepods-burstable-pode60b583c_38a7_4213_a7ee_a6208be24fe2.slice.
Feb 12 20:29:58.426127 kubelet[2023]: I0212 20:29:58.426074 2023 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e60b583c-38a7-4213-a7ee-a6208be24fe2-lib-modules\") pod \"cilium-fx8cs\" (UID: \"e60b583c-38a7-4213-a7ee-a6208be24fe2\") " pod="kube-system/cilium-fx8cs"
Feb 12 20:29:58.426361 kubelet[2023]: I0212 20:29:58.426143 2023 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e60b583c-38a7-4213-a7ee-a6208be24fe2-cilium-config-path\") pod \"cilium-fx8cs\" (UID: \"e60b583c-38a7-4213-a7ee-a6208be24fe2\") " pod="kube-system/cilium-fx8cs"
Feb 12 20:29:58.426361 kubelet[2023]: I0212 20:29:58.426177 2023 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e60b583c-38a7-4213-a7ee-a6208be24fe2-cilium-run\") pod \"cilium-fx8cs\" (UID: \"e60b583c-38a7-4213-a7ee-a6208be24fe2\") " pod="kube-system/cilium-fx8cs"
Feb 12 20:29:58.426361 kubelet[2023]: I0212 20:29:58.426206 2023 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e60b583c-38a7-4213-a7ee-a6208be24fe2-bpf-maps\") pod \"cilium-fx8cs\" (UID: \"e60b583c-38a7-4213-a7ee-a6208be24fe2\") " pod="kube-system/cilium-fx8cs"
Feb 12 20:29:58.426361 kubelet[2023]: I0212 20:29:58.426238 2023 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e60b583c-38a7-4213-a7ee-a6208be24fe2-hubble-tls\") pod \"cilium-fx8cs\" (UID: \"e60b583c-38a7-4213-a7ee-a6208be24fe2\") " pod="kube-system/cilium-fx8cs"
Feb 12 20:29:58.426361 kubelet[2023]: I0212 20:29:58.426272 2023 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c7xfn\" (UniqueName: \"kubernetes.io/projected/6d547cb5-e9c5-4fb8-820a-3ae94a1e3c8a-kube-api-access-c7xfn\") pod \"kube-proxy-kpm5j\" (UID: \"6d547cb5-e9c5-4fb8-820a-3ae94a1e3c8a\") " pod="kube-system/kube-proxy-kpm5j"
Feb 12 20:29:58.426361 kubelet[2023]: I0212 20:29:58.426302 2023 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6d547cb5-e9c5-4fb8-820a-3ae94a1e3c8a-lib-modules\") pod \"kube-proxy-kpm5j\" (UID: \"6d547cb5-e9c5-4fb8-820a-3ae94a1e3c8a\") " pod="kube-system/kube-proxy-kpm5j"
Feb 12 20:29:58.426715 kubelet[2023]: I0212 20:29:58.426336 2023 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e60b583c-38a7-4213-a7ee-a6208be24fe2-hostproc\") pod \"cilium-fx8cs\" (UID: \"e60b583c-38a7-4213-a7ee-a6208be24fe2\") " pod="kube-system/cilium-fx8cs"
Feb 12 20:29:58.426715 kubelet[2023]: I0212 20:29:58.426368 2023 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e60b583c-38a7-4213-a7ee-a6208be24fe2-cni-path\") pod \"cilium-fx8cs\" (UID: \"e60b583c-38a7-4213-a7ee-a6208be24fe2\") " pod="kube-system/cilium-fx8cs"
Feb 12 20:29:58.426715 kubelet[2023]: I0212 20:29:58.426400 2023 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e60b583c-38a7-4213-a7ee-a6208be24fe2-xtables-lock\") pod \"cilium-fx8cs\" (UID: \"e60b583c-38a7-4213-a7ee-a6208be24fe2\") " pod="kube-system/cilium-fx8cs"
Feb 12 20:29:58.426715 kubelet[2023]: I0212 20:29:58.426440 2023 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pkm8s\" (UniqueName: \"kubernetes.io/projected/e60b583c-38a7-4213-a7ee-a6208be24fe2-kube-api-access-pkm8s\") pod \"cilium-fx8cs\" (UID: \"e60b583c-38a7-4213-a7ee-a6208be24fe2\") " pod="kube-system/cilium-fx8cs"
Feb 12 20:29:58.426715 kubelet[2023]: I0212 20:29:58.426478 2023 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6d547cb5-e9c5-4fb8-820a-3ae94a1e3c8a-xtables-lock\") pod \"kube-proxy-kpm5j\" (UID: \"6d547cb5-e9c5-4fb8-820a-3ae94a1e3c8a\") " pod="kube-system/kube-proxy-kpm5j"
Feb 12 20:29:58.426715 kubelet[2023]: I0212 20:29:58.426523 2023 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e60b583c-38a7-4213-a7ee-a6208be24fe2-etc-cni-netd\") pod \"cilium-fx8cs\" (UID: \"e60b583c-38a7-4213-a7ee-a6208be24fe2\") " pod="kube-system/cilium-fx8cs"
Feb 12 20:29:58.427098 kubelet[2023]: I0212 20:29:58.426558 2023 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e60b583c-38a7-4213-a7ee-a6208be24fe2-cilium-cgroup\") pod \"cilium-fx8cs\" (UID: \"e60b583c-38a7-4213-a7ee-a6208be24fe2\") " pod="kube-system/cilium-fx8cs"
Feb 12 20:29:58.427098 kubelet[2023]: I0212 20:29:58.426594 2023 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/6d547cb5-e9c5-4fb8-820a-3ae94a1e3c8a-kube-proxy\") pod \"kube-proxy-kpm5j\" (UID: \"6d547cb5-e9c5-4fb8-820a-3ae94a1e3c8a\") " pod="kube-system/kube-proxy-kpm5j"
Feb 12 20:29:58.427098 kubelet[2023]: I0212 20:29:58.426630 2023 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e60b583c-38a7-4213-a7ee-a6208be24fe2-clustermesh-secrets\") pod \"cilium-fx8cs\" (UID: \"e60b583c-38a7-4213-a7ee-a6208be24fe2\") " pod="kube-system/cilium-fx8cs"
Feb 12 20:29:58.427098 kubelet[2023]: I0212 20:29:58.426670 2023 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e60b583c-38a7-4213-a7ee-a6208be24fe2-host-proc-sys-net\") pod \"cilium-fx8cs\" (UID: \"e60b583c-38a7-4213-a7ee-a6208be24fe2\") " pod="kube-system/cilium-fx8cs"
Feb 12 20:29:58.427098 kubelet[2023]: I0212 20:29:58.426710 2023 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e60b583c-38a7-4213-a7ee-a6208be24fe2-host-proc-sys-kernel\") pod \"cilium-fx8cs\" (UID: \"e60b583c-38a7-4213-a7ee-a6208be24fe2\") " pod="kube-system/cilium-fx8cs"
Feb 12 20:29:58.646299 kubelet[2023]: I0212 20:29:58.646159 2023 topology_manager.go:212] "Topology Admit Handler"
Feb 12 20:29:58.656993 systemd[1]: Created slice kubepods-besteffort-pod57935086_1e22_408a_8244_b62b48f4fd0b.slice.
Feb 12 20:29:58.729862 kubelet[2023]: I0212 20:29:58.729809 2023 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mcs9s\" (UniqueName: \"kubernetes.io/projected/57935086-1e22-408a-8244-b62b48f4fd0b-kube-api-access-mcs9s\") pod \"cilium-operator-574c4bb98d-gwqxn\" (UID: \"57935086-1e22-408a-8244-b62b48f4fd0b\") " pod="kube-system/cilium-operator-574c4bb98d-gwqxn"
Feb 12 20:29:58.730596 kubelet[2023]: I0212 20:29:58.730570 2023 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/57935086-1e22-408a-8244-b62b48f4fd0b-cilium-config-path\") pod \"cilium-operator-574c4bb98d-gwqxn\" (UID: \"57935086-1e22-408a-8244-b62b48f4fd0b\") " pod="kube-system/cilium-operator-574c4bb98d-gwqxn"
Feb 12 20:29:59.564101 kubelet[2023]: E0212 20:29:59.564043 2023 projected.go:292] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Feb 12 20:29:59.564101 kubelet[2023]: E0212 20:29:59.564091 2023 projected.go:198] Error preparing data for projected volume kube-api-access-c7xfn for pod kube-system/kube-proxy-kpm5j: failed to sync configmap cache: timed out waiting for the condition
Feb 12 20:29:59.564421 kubelet[2023]: E0212 20:29:59.564195 2023 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6d547cb5-e9c5-4fb8-820a-3ae94a1e3c8a-kube-api-access-c7xfn podName:6d547cb5-e9c5-4fb8-820a-3ae94a1e3c8a nodeName:}" failed. No retries permitted until 2024-02-12 20:30:00.064165263 +0000 UTC m=+15.783883617 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-c7xfn" (UniqueName: "kubernetes.io/projected/6d547cb5-e9c5-4fb8-820a-3ae94a1e3c8a-kube-api-access-c7xfn") pod "kube-proxy-kpm5j" (UID: "6d547cb5-e9c5-4fb8-820a-3ae94a1e3c8a") : failed to sync configmap cache: timed out waiting for the condition
Feb 12 20:29:59.567192 kubelet[2023]: E0212 20:29:59.567156 2023 projected.go:292] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Feb 12 20:29:59.567426 kubelet[2023]: E0212 20:29:59.567405 2023 projected.go:198] Error preparing data for projected volume kube-api-access-pkm8s for pod kube-system/cilium-fx8cs: failed to sync configmap cache: timed out waiting for the condition
Feb 12 20:29:59.567619 kubelet[2023]: E0212 20:29:59.567596 2023 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e60b583c-38a7-4213-a7ee-a6208be24fe2-kube-api-access-pkm8s podName:e60b583c-38a7-4213-a7ee-a6208be24fe2 nodeName:}" failed. No retries permitted until 2024-02-12 20:30:00.067572244 +0000 UTC m=+15.787290601 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-pkm8s" (UniqueName: "kubernetes.io/projected/e60b583c-38a7-4213-a7ee-a6208be24fe2-kube-api-access-pkm8s") pod "cilium-fx8cs" (UID: "e60b583c-38a7-4213-a7ee-a6208be24fe2") : failed to sync configmap cache: timed out waiting for the condition
Feb 12 20:29:59.865674 env[1138]: time="2024-02-12T20:29:59.865535179Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-574c4bb98d-gwqxn,Uid:57935086-1e22-408a-8244-b62b48f4fd0b,Namespace:kube-system,Attempt:0,}"
Feb 12 20:29:59.897351 env[1138]: time="2024-02-12T20:29:59.897248045Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 12 20:29:59.897351 env[1138]: time="2024-02-12T20:29:59.897312499Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 12 20:29:59.897692 env[1138]: time="2024-02-12T20:29:59.897329456Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 12 20:29:59.898134 env[1138]: time="2024-02-12T20:29:59.898061729Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f171ea4759cd630588faea7e66e45509d2c2daab6e054df129bf29b3bca672ae pid=2123 runtime=io.containerd.runc.v2
Feb 12 20:29:59.924641 systemd[1]: Started cri-containerd-f171ea4759cd630588faea7e66e45509d2c2daab6e054df129bf29b3bca672ae.scope.
Feb 12 20:29:59.983455 env[1138]: time="2024-02-12T20:29:59.983381176Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-574c4bb98d-gwqxn,Uid:57935086-1e22-408a-8244-b62b48f4fd0b,Namespace:kube-system,Attempt:0,} returns sandbox id \"f171ea4759cd630588faea7e66e45509d2c2daab6e054df129bf29b3bca672ae\""
Feb 12 20:29:59.989347 kubelet[2023]: E0212 20:29:59.989303 2023 gcpcredential.go:74] while reading 'google-dockercfg-url' metadata: http status code: 404 while fetching url http://metadata.google.internal./computeMetadata/v1/instance/attributes/google-dockercfg-url
Feb 12 20:29:59.989962 env[1138]: time="2024-02-12T20:29:59.989833648Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Feb 12 20:30:00.426418 env[1138]: time="2024-02-12T20:30:00.426176837Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-kpm5j,Uid:6d547cb5-e9c5-4fb8-820a-3ae94a1e3c8a,Namespace:kube-system,Attempt:0,}"
Feb 12 20:30:00.448596 env[1138]: time="2024-02-12T20:30:00.448528961Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-fx8cs,Uid:e60b583c-38a7-4213-a7ee-a6208be24fe2,Namespace:kube-system,Attempt:0,}"
Feb 12 20:30:00.460379 env[1138]: time="2024-02-12T20:30:00.460270025Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 12 20:30:00.460379 env[1138]: time="2024-02-12T20:30:00.460339188Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 12 20:30:00.460873 env[1138]: time="2024-02-12T20:30:00.460357993Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 12 20:30:00.461379 env[1138]: time="2024-02-12T20:30:00.461315394Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0d88be7a6dbaa2287d03b1d69dc9c157046dd9e236bba17e365d872523342bd9 pid=2168 runtime=io.containerd.runc.v2
Feb 12 20:30:00.487999 systemd[1]: Started cri-containerd-0d88be7a6dbaa2287d03b1d69dc9c157046dd9e236bba17e365d872523342bd9.scope.
Feb 12 20:30:00.515087 env[1138]: time="2024-02-12T20:30:00.514955258Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 12 20:30:00.515280 env[1138]: time="2024-02-12T20:30:00.515090010Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 12 20:30:00.515280 env[1138]: time="2024-02-12T20:30:00.515132599Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 12 20:30:00.515559 env[1138]: time="2024-02-12T20:30:00.515499949Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/eef8006556d668eda3a7cc874c94df6b5c6859960b4afd28f81b3e322bb17a88 pid=2201 runtime=io.containerd.runc.v2
Feb 12 20:30:00.548899 env[1138]: time="2024-02-12T20:30:00.548829962Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-kpm5j,Uid:6d547cb5-e9c5-4fb8-820a-3ae94a1e3c8a,Namespace:kube-system,Attempt:0,} returns sandbox id \"0d88be7a6dbaa2287d03b1d69dc9c157046dd9e236bba17e365d872523342bd9\""
Feb 12 20:30:00.558158 systemd[1]: Started cri-containerd-eef8006556d668eda3a7cc874c94df6b5c6859960b4afd28f81b3e322bb17a88.scope.
Feb 12 20:30:00.558932 env[1138]: time="2024-02-12T20:30:00.558670872Z" level=info msg="CreateContainer within sandbox \"0d88be7a6dbaa2287d03b1d69dc9c157046dd9e236bba17e365d872523342bd9\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Feb 12 20:30:00.628451 env[1138]: time="2024-02-12T20:30:00.628372172Z" level=info msg="CreateContainer within sandbox \"0d88be7a6dbaa2287d03b1d69dc9c157046dd9e236bba17e365d872523342bd9\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"72808cbc00e14fc4da2814a89b38ae2148ffcd1897dba43a52d5b945f3dddbcf\""
Feb 12 20:30:00.632576 env[1138]: time="2024-02-12T20:30:00.630192693Z" level=info msg="StartContainer for \"72808cbc00e14fc4da2814a89b38ae2148ffcd1897dba43a52d5b945f3dddbcf\""
Feb 12 20:30:00.660073 env[1138]: time="2024-02-12T20:30:00.660011368Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-fx8cs,Uid:e60b583c-38a7-4213-a7ee-a6208be24fe2,Namespace:kube-system,Attempt:0,} returns sandbox id \"eef8006556d668eda3a7cc874c94df6b5c6859960b4afd28f81b3e322bb17a88\""
Feb 12 20:30:00.690051 systemd[1]: Started cri-containerd-72808cbc00e14fc4da2814a89b38ae2148ffcd1897dba43a52d5b945f3dddbcf.scope.
Feb 12 20:30:00.750836 env[1138]: time="2024-02-12T20:30:00.750696902Z" level=info msg="StartContainer for \"72808cbc00e14fc4da2814a89b38ae2148ffcd1897dba43a52d5b945f3dddbcf\" returns successfully"
Feb 12 20:30:00.978215 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1012092206.mount: Deactivated successfully.
Feb 12 20:30:02.042658 env[1138]: time="2024-02-12T20:30:02.042560661Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 20:30:02.047355 env[1138]: time="2024-02-12T20:30:02.047237560Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 20:30:02.051323 env[1138]: time="2024-02-12T20:30:02.051262395Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 20:30:02.052193 env[1138]: time="2024-02-12T20:30:02.052140011Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Feb 12 20:30:02.056945 env[1138]: time="2024-02-12T20:30:02.056234662Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Feb 12 20:30:02.059282 env[1138]: time="2024-02-12T20:30:02.059209446Z" level=info msg="CreateContainer within sandbox \"f171ea4759cd630588faea7e66e45509d2c2daab6e054df129bf29b3bca672ae\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Feb 12 20:30:02.090288 env[1138]: time="2024-02-12T20:30:02.090226079Z" level=info msg="CreateContainer within sandbox \"f171ea4759cd630588faea7e66e45509d2c2daab6e054df129bf29b3bca672ae\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"186b74f17c3294bdb87a0165bee366170bc9768aaafc1fcc69450addc72ed3f9\""
Feb 12 20:30:02.094001 env[1138]: time="2024-02-12T20:30:02.093936299Z" level=info msg="StartContainer for \"186b74f17c3294bdb87a0165bee366170bc9768aaafc1fcc69450addc72ed3f9\""
Feb 12 20:30:02.126666 systemd[1]: Started cri-containerd-186b74f17c3294bdb87a0165bee366170bc9768aaafc1fcc69450addc72ed3f9.scope.
Feb 12 20:30:02.188357 env[1138]: time="2024-02-12T20:30:02.188266166Z" level=info msg="StartContainer for \"186b74f17c3294bdb87a0165bee366170bc9768aaafc1fcc69450addc72ed3f9\" returns successfully"
Feb 12 20:30:02.755269 kubelet[2023]: I0212 20:30:02.754566 2023 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-kpm5j" podStartSLOduration=4.754498051 podCreationTimestamp="2024-02-12 20:29:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 20:30:01.719032522 +0000 UTC m=+17.438750883" watchObservedRunningTime="2024-02-12 20:30:02.754498051 +0000 UTC m=+18.474216412"
Feb 12 20:30:04.596610 kubelet[2023]: I0212 20:30:04.596561 2023 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-574c4bb98d-gwqxn" podStartSLOduration=4.529033383 podCreationTimestamp="2024-02-12 20:29:58 +0000 UTC" firstStartedPulling="2024-02-12 20:29:59.985271445 +0000 UTC m=+15.704989795" lastFinishedPulling="2024-02-12 20:30:02.052745286 +0000 UTC m=+17.772463639" observedRunningTime="2024-02-12 20:30:02.755207721 +0000 UTC m=+18.474926088" watchObservedRunningTime="2024-02-12 20:30:04.596507227 +0000 UTC m=+20.316225589"
Feb 12 20:30:08.568125 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1765113205.mount: Deactivated successfully.
Feb 12 20:30:12.047774 env[1138]: time="2024-02-12T20:30:12.047689825Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 20:30:12.051044 env[1138]: time="2024-02-12T20:30:12.050992130Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 20:30:12.053871 env[1138]: time="2024-02-12T20:30:12.053801460Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 20:30:12.054950 env[1138]: time="2024-02-12T20:30:12.054881522Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
Feb 12 20:30:12.062967 env[1138]: time="2024-02-12T20:30:12.062888480Z" level=info msg="CreateContainer within sandbox \"eef8006556d668eda3a7cc874c94df6b5c6859960b4afd28f81b3e322bb17a88\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Feb 12 20:30:12.087204 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount617790928.mount: Deactivated successfully.
Feb 12 20:30:12.098026 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4003558979.mount: Deactivated successfully.
Feb 12 20:30:12.103206 env[1138]: time="2024-02-12T20:30:12.103139638Z" level=info msg="CreateContainer within sandbox \"eef8006556d668eda3a7cc874c94df6b5c6859960b4afd28f81b3e322bb17a88\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"f1b5bad6078baf3ded3f2447e6049da49b6e59aa315b5d1c82c294d9645a455c\""
Feb 12 20:30:12.106309 env[1138]: time="2024-02-12T20:30:12.106259035Z" level=info msg="StartContainer for \"f1b5bad6078baf3ded3f2447e6049da49b6e59aa315b5d1c82c294d9645a455c\""
Feb 12 20:30:12.141686 systemd[1]: Started cri-containerd-f1b5bad6078baf3ded3f2447e6049da49b6e59aa315b5d1c82c294d9645a455c.scope.
Feb 12 20:30:12.192101 env[1138]: time="2024-02-12T20:30:12.192034626Z" level=info msg="StartContainer for \"f1b5bad6078baf3ded3f2447e6049da49b6e59aa315b5d1c82c294d9645a455c\" returns successfully"
Feb 12 20:30:12.207801 systemd[1]: cri-containerd-f1b5bad6078baf3ded3f2447e6049da49b6e59aa315b5d1c82c294d9645a455c.scope: Deactivated successfully.
Feb 12 20:30:13.081567 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f1b5bad6078baf3ded3f2447e6049da49b6e59aa315b5d1c82c294d9645a455c-rootfs.mount: Deactivated successfully.
Feb 12 20:30:14.287816 env[1138]: time="2024-02-12T20:30:14.287742883Z" level=info msg="shim disconnected" id=f1b5bad6078baf3ded3f2447e6049da49b6e59aa315b5d1c82c294d9645a455c
Feb 12 20:30:14.287816 env[1138]: time="2024-02-12T20:30:14.287814328Z" level=warning msg="cleaning up after shim disconnected" id=f1b5bad6078baf3ded3f2447e6049da49b6e59aa315b5d1c82c294d9645a455c namespace=k8s.io
Feb 12 20:30:14.287816 env[1138]: time="2024-02-12T20:30:14.287828806Z" level=info msg="cleaning up dead shim"
Feb 12 20:30:14.299942 env[1138]: time="2024-02-12T20:30:14.299867510Z" level=warning msg="cleanup warnings time=\"2024-02-12T20:30:14Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2478 runtime=io.containerd.runc.v2\n"
Feb 12 20:30:14.750710 env[1138]: time="2024-02-12T20:30:14.750621304Z" level=info msg="CreateContainer within sandbox \"eef8006556d668eda3a7cc874c94df6b5c6859960b4afd28f81b3e322bb17a88\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Feb 12 20:30:14.772263 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4212434624.mount: Deactivated successfully.
Feb 12 20:30:14.786021 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2147354715.mount: Deactivated successfully.
Feb 12 20:30:14.800124 env[1138]: time="2024-02-12T20:30:14.800030454Z" level=info msg="CreateContainer within sandbox \"eef8006556d668eda3a7cc874c94df6b5c6859960b4afd28f81b3e322bb17a88\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"acf7c3515a6ccdff3d58d4fb6c3116905ea77dfcfe7b40067f77102eeb2e97b8\""
Feb 12 20:30:14.800946 env[1138]: time="2024-02-12T20:30:14.800888706Z" level=info msg="StartContainer for \"acf7c3515a6ccdff3d58d4fb6c3116905ea77dfcfe7b40067f77102eeb2e97b8\""
Feb 12 20:30:14.831825 systemd[1]: Started cri-containerd-acf7c3515a6ccdff3d58d4fb6c3116905ea77dfcfe7b40067f77102eeb2e97b8.scope.
Feb 12 20:30:14.880952 env[1138]: time="2024-02-12T20:30:14.878651659Z" level=info msg="StartContainer for \"acf7c3515a6ccdff3d58d4fb6c3116905ea77dfcfe7b40067f77102eeb2e97b8\" returns successfully"
Feb 12 20:30:14.899617 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 12 20:30:14.900150 systemd[1]: Stopped systemd-sysctl.service.
Feb 12 20:30:14.900399 systemd[1]: Stopping systemd-sysctl.service...
Feb 12 20:30:14.907976 systemd[1]: Starting systemd-sysctl.service...
Feb 12 20:30:14.912897 systemd[1]: cri-containerd-acf7c3515a6ccdff3d58d4fb6c3116905ea77dfcfe7b40067f77102eeb2e97b8.scope: Deactivated successfully.
Feb 12 20:30:14.925870 systemd[1]: Finished systemd-sysctl.service.
Feb 12 20:30:14.970787 env[1138]: time="2024-02-12T20:30:14.970675849Z" level=info msg="shim disconnected" id=acf7c3515a6ccdff3d58d4fb6c3116905ea77dfcfe7b40067f77102eeb2e97b8
Feb 12 20:30:14.971139 env[1138]: time="2024-02-12T20:30:14.970787731Z" level=warning msg="cleaning up after shim disconnected" id=acf7c3515a6ccdff3d58d4fb6c3116905ea77dfcfe7b40067f77102eeb2e97b8 namespace=k8s.io
Feb 12 20:30:14.971139 env[1138]: time="2024-02-12T20:30:14.970808524Z" level=info msg="cleaning up dead shim"
Feb 12 20:30:14.990702 env[1138]: time="2024-02-12T20:30:14.990630091Z" level=warning msg="cleanup warnings time=\"2024-02-12T20:30:14Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2545 runtime=io.containerd.runc.v2\n"
Feb 12 20:30:15.761945 env[1138]: time="2024-02-12T20:30:15.758377026Z" level=info msg="CreateContainer within sandbox \"eef8006556d668eda3a7cc874c94df6b5c6859960b4afd28f81b3e322bb17a88\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Feb 12 20:30:15.770062 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-acf7c3515a6ccdff3d58d4fb6c3116905ea77dfcfe7b40067f77102eeb2e97b8-rootfs.mount: Deactivated successfully.
Feb 12 20:30:15.800461 env[1138]: time="2024-02-12T20:30:15.800396121Z" level=info msg="CreateContainer within sandbox \"eef8006556d668eda3a7cc874c94df6b5c6859960b4afd28f81b3e322bb17a88\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"cd40bbec19b7991f3c0bcbe4fa89f5ec3b0aba51b14f11c7701d2880f81596aa\""
Feb 12 20:30:15.801343 env[1138]: time="2024-02-12T20:30:15.801291901Z" level=info msg="StartContainer for \"cd40bbec19b7991f3c0bcbe4fa89f5ec3b0aba51b14f11c7701d2880f81596aa\""
Feb 12 20:30:15.842876 systemd[1]: Started cri-containerd-cd40bbec19b7991f3c0bcbe4fa89f5ec3b0aba51b14f11c7701d2880f81596aa.scope.
Feb 12 20:30:15.892019 env[1138]: time="2024-02-12T20:30:15.891960125Z" level=info msg="StartContainer for \"cd40bbec19b7991f3c0bcbe4fa89f5ec3b0aba51b14f11c7701d2880f81596aa\" returns successfully"
Feb 12 20:30:15.895958 systemd[1]: cri-containerd-cd40bbec19b7991f3c0bcbe4fa89f5ec3b0aba51b14f11c7701d2880f81596aa.scope: Deactivated successfully.
Feb 12 20:30:15.932003 env[1138]: time="2024-02-12T20:30:15.931867601Z" level=info msg="shim disconnected" id=cd40bbec19b7991f3c0bcbe4fa89f5ec3b0aba51b14f11c7701d2880f81596aa
Feb 12 20:30:15.932003 env[1138]: time="2024-02-12T20:30:15.931987862Z" level=warning msg="cleaning up after shim disconnected" id=cd40bbec19b7991f3c0bcbe4fa89f5ec3b0aba51b14f11c7701d2880f81596aa namespace=k8s.io
Feb 12 20:30:15.932003 env[1138]: time="2024-02-12T20:30:15.932007014Z" level=info msg="cleaning up dead shim"
Feb 12 20:30:15.944178 env[1138]: time="2024-02-12T20:30:15.944096638Z" level=warning msg="cleanup warnings time=\"2024-02-12T20:30:15Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2607 runtime=io.containerd.runc.v2\n"
Feb 12 20:30:16.760489 env[1138]: time="2024-02-12T20:30:16.760437140Z" level=info msg="CreateContainer within sandbox \"eef8006556d668eda3a7cc874c94df6b5c6859960b4afd28f81b3e322bb17a88\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Feb 12 20:30:16.769900 systemd[1]: run-containerd-runc-k8s.io-cd40bbec19b7991f3c0bcbe4fa89f5ec3b0aba51b14f11c7701d2880f81596aa-runc.GkiFJx.mount: Deactivated successfully.
Feb 12 20:30:16.770070 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cd40bbec19b7991f3c0bcbe4fa89f5ec3b0aba51b14f11c7701d2880f81596aa-rootfs.mount: Deactivated successfully.
Feb 12 20:30:16.791663 env[1138]: time="2024-02-12T20:30:16.791568739Z" level=info msg="CreateContainer within sandbox \"eef8006556d668eda3a7cc874c94df6b5c6859960b4afd28f81b3e322bb17a88\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"2ba81bad0f6ea360e7683719fd69321cfd7e14604b4e4d5709131784e2faee3e\""
Feb 12 20:30:16.793408 env[1138]: time="2024-02-12T20:30:16.793344034Z" level=info msg="StartContainer for \"2ba81bad0f6ea360e7683719fd69321cfd7e14604b4e4d5709131784e2faee3e\""
Feb 12 20:30:16.841369 systemd[1]: Started cri-containerd-2ba81bad0f6ea360e7683719fd69321cfd7e14604b4e4d5709131784e2faee3e.scope.
Feb 12 20:30:16.886537 systemd[1]: cri-containerd-2ba81bad0f6ea360e7683719fd69321cfd7e14604b4e4d5709131784e2faee3e.scope: Deactivated successfully.
Feb 12 20:30:16.888655 env[1138]: time="2024-02-12T20:30:16.888199178Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode60b583c_38a7_4213_a7ee_a6208be24fe2.slice/cri-containerd-2ba81bad0f6ea360e7683719fd69321cfd7e14604b4e4d5709131784e2faee3e.scope/memory.events\": no such file or directory"
Feb 12 20:30:16.892324 env[1138]: time="2024-02-12T20:30:16.892266952Z" level=info msg="StartContainer for \"2ba81bad0f6ea360e7683719fd69321cfd7e14604b4e4d5709131784e2faee3e\" returns successfully"
Feb 12 20:30:16.927693 env[1138]: time="2024-02-12T20:30:16.927611057Z" level=info msg="shim disconnected" id=2ba81bad0f6ea360e7683719fd69321cfd7e14604b4e4d5709131784e2faee3e
Feb 12 20:30:16.927693 env[1138]: time="2024-02-12T20:30:16.927677677Z" level=warning msg="cleaning up after shim disconnected" id=2ba81bad0f6ea360e7683719fd69321cfd7e14604b4e4d5709131784e2faee3e namespace=k8s.io
Feb 12 20:30:16.927693 env[1138]: time="2024-02-12T20:30:16.927693606Z" level=info msg="cleaning up dead shim"
Feb 12 20:30:16.952477 env[1138]: time="2024-02-12T20:30:16.952401124Z" level=warning msg="cleanup warnings time=\"2024-02-12T20:30:16Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2663 runtime=io.containerd.runc.v2\n"
Feb 12 20:30:17.771378 env[1138]: time="2024-02-12T20:30:17.767978241Z" level=info msg="CreateContainer within sandbox \"eef8006556d668eda3a7cc874c94df6b5c6859960b4afd28f81b3e322bb17a88\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Feb 12 20:30:17.769927 systemd[1]: run-containerd-runc-k8s.io-2ba81bad0f6ea360e7683719fd69321cfd7e14604b4e4d5709131784e2faee3e-runc.bC3HGw.mount: Deactivated successfully.
Feb 12 20:30:17.770108 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2ba81bad0f6ea360e7683719fd69321cfd7e14604b4e4d5709131784e2faee3e-rootfs.mount: Deactivated successfully.
Feb 12 20:30:17.807416 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount371771191.mount: Deactivated successfully.
Feb 12 20:30:17.816970 env[1138]: time="2024-02-12T20:30:17.816857005Z" level=info msg="CreateContainer within sandbox \"eef8006556d668eda3a7cc874c94df6b5c6859960b4afd28f81b3e322bb17a88\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"27580b92bd75dd6afe3ba7a113249b92cf0f22d866abb6d6c80878078ec2e2fe\""
Feb 12 20:30:17.819690 env[1138]: time="2024-02-12T20:30:17.817861719Z" level=info msg="StartContainer for \"27580b92bd75dd6afe3ba7a113249b92cf0f22d866abb6d6c80878078ec2e2fe\""
Feb 12 20:30:17.850311 systemd[1]: Started cri-containerd-27580b92bd75dd6afe3ba7a113249b92cf0f22d866abb6d6c80878078ec2e2fe.scope.
Feb 12 20:30:17.907723 env[1138]: time="2024-02-12T20:30:17.907617647Z" level=info msg="StartContainer for \"27580b92bd75dd6afe3ba7a113249b92cf0f22d866abb6d6c80878078ec2e2fe\" returns successfully"
Feb 12 20:30:18.093208 kubelet[2023]: I0212 20:30:18.093147 2023 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
Feb 12 20:30:18.123237 kubelet[2023]: I0212 20:30:18.123186 2023 topology_manager.go:212] "Topology Admit Handler"
Feb 12 20:30:18.130384 kubelet[2023]: I0212 20:30:18.130342 2023 topology_manager.go:212] "Topology Admit Handler"
Feb 12 20:30:18.132157 systemd[1]: Created slice kubepods-burstable-pod9d0140ca_bec5_4a40_aec4_516336896947.slice.
Feb 12 20:30:18.153128 systemd[1]: Created slice kubepods-burstable-pod151e2c1c_2d28_44af_825b_4a6d7dd2d2b2.slice.
Feb 12 20:30:18.195769 kubelet[2023]: I0212 20:30:18.195723 2023 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/151e2c1c-2d28-44af-825b-4a6d7dd2d2b2-config-volume\") pod \"coredns-5d78c9869d-qjzrn\" (UID: \"151e2c1c-2d28-44af-825b-4a6d7dd2d2b2\") " pod="kube-system/coredns-5d78c9869d-qjzrn"
Feb 12 20:30:18.196073 kubelet[2023]: I0212 20:30:18.195847 2023 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9d0140ca-bec5-4a40-aec4-516336896947-config-volume\") pod \"coredns-5d78c9869d-pp5np\" (UID: \"9d0140ca-bec5-4a40-aec4-516336896947\") " pod="kube-system/coredns-5d78c9869d-pp5np"
Feb 12 20:30:18.196073 kubelet[2023]: I0212 20:30:18.195963 2023 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-89mq4\" (UniqueName: \"kubernetes.io/projected/9d0140ca-bec5-4a40-aec4-516336896947-kube-api-access-89mq4\") pod \"coredns-5d78c9869d-pp5np\" (UID: \"9d0140ca-bec5-4a40-aec4-516336896947\") " pod="kube-system/coredns-5d78c9869d-pp5np"
Feb 12 20:30:18.196073 kubelet[2023]: I0212 20:30:18.196009 2023 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hzcb9\" (UniqueName: \"kubernetes.io/projected/151e2c1c-2d28-44af-825b-4a6d7dd2d2b2-kube-api-access-hzcb9\") pod \"coredns-5d78c9869d-qjzrn\" (UID: \"151e2c1c-2d28-44af-825b-4a6d7dd2d2b2\") " pod="kube-system/coredns-5d78c9869d-qjzrn"
Feb 12 20:30:18.451064 env[1138]: time="2024-02-12T20:30:18.450895359Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5d78c9869d-pp5np,Uid:9d0140ca-bec5-4a40-aec4-516336896947,Namespace:kube-system,Attempt:0,}"
Feb 12 20:30:18.462057 env[1138]: time="2024-02-12T20:30:18.462000820Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5d78c9869d-qjzrn,Uid:151e2c1c-2d28-44af-825b-4a6d7dd2d2b2,Namespace:kube-system,Attempt:0,}"
Feb 12 20:30:20.194770 systemd-networkd[1019]: cilium_host: Link UP
Feb 12 20:30:20.198481 systemd-networkd[1019]: cilium_net: Link UP
Feb 12 20:30:20.206982 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready
Feb 12 20:30:20.207291 systemd-networkd[1019]: cilium_net: Gained carrier
Feb 12 20:30:20.215033 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready
Feb 12 20:30:20.217368 systemd-networkd[1019]: cilium_host: Gained carrier
Feb 12 20:30:20.248140 systemd-networkd[1019]: cilium_host: Gained IPv6LL
Feb 12 20:30:20.361694 systemd-networkd[1019]: cilium_vxlan: Link UP
Feb 12 20:30:20.361711 systemd-networkd[1019]: cilium_vxlan: Gained carrier
Feb 12 20:30:20.644946 kernel: NET: Registered PF_ALG protocol family
Feb 12 20:30:20.997127 systemd-networkd[1019]: cilium_net: Gained IPv6LL
Feb 12 20:30:21.493432 systemd-networkd[1019]: lxc_health: Link UP
Feb 12 20:30:21.514880 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Feb 12 20:30:21.514347 systemd-networkd[1019]: lxc_health: Gained carrier
Feb 12 20:30:22.014247 systemd-networkd[1019]: lxc435f792ca1d1: Link UP
Feb 12 20:30:22.024974 kernel: eth0: renamed from tmpf6a65
Feb 12 20:30:22.053941 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc435f792ca1d1: link becomes ready
Feb 12 20:30:22.054676 systemd-networkd[1019]: lxc435f792ca1d1: Gained carrier
Feb 12 20:30:22.057712 systemd-networkd[1019]: lxceb4e451b28e0: Link UP
Feb 12 20:30:22.067949 kernel: eth0: renamed from tmpdc22a
Feb 12 20:30:22.086973 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxceb4e451b28e0: link becomes ready
Feb 12 20:30:22.088552 systemd-networkd[1019]: lxceb4e451b28e0: Gained carrier
Feb 12 20:30:22.088846 systemd-networkd[1019]: cilium_vxlan: Gained IPv6LL
Feb 12 20:30:22.484583 kubelet[2023]: I0212 20:30:22.484546 2023 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-fx8cs" podStartSLOduration=13.091455202 podCreationTimestamp="2024-02-12 20:29:58 +0000 UTC" firstStartedPulling="2024-02-12 20:30:00.662453492 +0000 UTC m=+16.382171832" lastFinishedPulling="2024-02-12 20:30:12.055464303 +0000 UTC m=+27.775182645" observedRunningTime="2024-02-12 20:30:18.805375345 +0000 UTC m=+34.525093706" watchObservedRunningTime="2024-02-12 20:30:22.484466015 +0000 UTC m=+38.204184587"
Feb 12 20:30:22.597594 systemd-networkd[1019]: lxc_health: Gained IPv6LL
Feb 12 20:30:23.173816 systemd-networkd[1019]: lxceb4e451b28e0: Gained IPv6LL
Feb 12 20:30:23.366683 systemd-networkd[1019]: lxc435f792ca1d1: Gained IPv6LL
Feb 12 20:30:27.242753 env[1138]: time="2024-02-12T20:30:27.241444985Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 12 20:30:27.242753 env[1138]: time="2024-02-12T20:30:27.241555809Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 12 20:30:27.242753 env[1138]: time="2024-02-12T20:30:27.241599166Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 12 20:30:27.242753 env[1138]: time="2024-02-12T20:30:27.241861124Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f6a659984047a7110d7275eb3aa3f2a01ca27318984f869c29fa901a1552297e pid=3210 runtime=io.containerd.runc.v2
Feb 12 20:30:27.260685 env[1138]: time="2024-02-12T20:30:27.260476254Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 12 20:30:27.260685 env[1138]: time="2024-02-12T20:30:27.260537752Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 12 20:30:27.260685 env[1138]: time="2024-02-12T20:30:27.260557169Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 12 20:30:27.281949 env[1138]: time="2024-02-12T20:30:27.263503224Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/dc22a90d852af464b963ab88b6128b028f163c40582ef8eb335abeb4d607c57e pid=3220 runtime=io.containerd.runc.v2
Feb 12 20:30:27.288560 systemd[1]: run-containerd-runc-k8s.io-f6a659984047a7110d7275eb3aa3f2a01ca27318984f869c29fa901a1552297e-runc.fsWsbP.mount: Deactivated successfully.
Feb 12 20:30:27.317208 systemd[1]: Started cri-containerd-dc22a90d852af464b963ab88b6128b028f163c40582ef8eb335abeb4d607c57e.scope.
Feb 12 20:30:27.319675 systemd[1]: Started cri-containerd-f6a659984047a7110d7275eb3aa3f2a01ca27318984f869c29fa901a1552297e.scope.
Feb 12 20:30:27.399678 env[1138]: time="2024-02-12T20:30:27.399622911Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5d78c9869d-pp5np,Uid:9d0140ca-bec5-4a40-aec4-516336896947,Namespace:kube-system,Attempt:0,} returns sandbox id \"f6a659984047a7110d7275eb3aa3f2a01ca27318984f869c29fa901a1552297e\""
Feb 12 20:30:27.404157 env[1138]: time="2024-02-12T20:30:27.404107680Z" level=info msg="CreateContainer within sandbox \"f6a659984047a7110d7275eb3aa3f2a01ca27318984f869c29fa901a1552297e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Feb 12 20:30:27.423831 env[1138]: time="2024-02-12T20:30:27.423767378Z" level=info msg="CreateContainer within sandbox \"f6a659984047a7110d7275eb3aa3f2a01ca27318984f869c29fa901a1552297e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"38245d9b21903cf4c772c60eb1811866a4da5ca4f6e8cc6e9f30c8f39221c6e3\""
Feb 12 20:30:27.425893 env[1138]: time="2024-02-12T20:30:27.425070580Z" level=info msg="StartContainer for \"38245d9b21903cf4c772c60eb1811866a4da5ca4f6e8cc6e9f30c8f39221c6e3\""
Feb 12 20:30:27.460180 systemd[1]: Started cri-containerd-38245d9b21903cf4c772c60eb1811866a4da5ca4f6e8cc6e9f30c8f39221c6e3.scope.
Feb 12 20:30:27.490729 env[1138]: time="2024-02-12T20:30:27.490664671Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5d78c9869d-qjzrn,Uid:151e2c1c-2d28-44af-825b-4a6d7dd2d2b2,Namespace:kube-system,Attempt:0,} returns sandbox id \"dc22a90d852af464b963ab88b6128b028f163c40582ef8eb335abeb4d607c57e\""
Feb 12 20:30:27.498723 env[1138]: time="2024-02-12T20:30:27.498538197Z" level=info msg="CreateContainer within sandbox \"dc22a90d852af464b963ab88b6128b028f163c40582ef8eb335abeb4d607c57e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Feb 12 20:30:27.541747 env[1138]: time="2024-02-12T20:30:27.541673433Z" level=info msg="StartContainer for \"38245d9b21903cf4c772c60eb1811866a4da5ca4f6e8cc6e9f30c8f39221c6e3\" returns successfully"
Feb 12 20:30:27.548115 env[1138]: time="2024-02-12T20:30:27.548037104Z" level=info msg="CreateContainer within sandbox \"dc22a90d852af464b963ab88b6128b028f163c40582ef8eb335abeb4d607c57e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"69a45386a0530366e0530d3ed0c0f17a1c54fd387b6f73cc9ca7a424867a942f\""
Feb 12 20:30:27.549795 env[1138]: time="2024-02-12T20:30:27.549750581Z" level=info msg="StartContainer for \"69a45386a0530366e0530d3ed0c0f17a1c54fd387b6f73cc9ca7a424867a942f\""
Feb 12 20:30:27.587694 systemd[1]: Started cri-containerd-69a45386a0530366e0530d3ed0c0f17a1c54fd387b6f73cc9ca7a424867a942f.scope.
Feb 12 20:30:27.660541 env[1138]: time="2024-02-12T20:30:27.660475834Z" level=info msg="StartContainer for \"69a45386a0530366e0530d3ed0c0f17a1c54fd387b6f73cc9ca7a424867a942f\" returns successfully"
Feb 12 20:30:27.850090 kubelet[2023]: I0212 20:30:27.850040 2023 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5d78c9869d-pp5np" podStartSLOduration=29.849972196 podCreationTimestamp="2024-02-12 20:29:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 20:30:27.84861738 +0000 UTC m=+43.568335741" watchObservedRunningTime="2024-02-12 20:30:27.849972196 +0000 UTC m=+43.569690557"
Feb 12 20:30:27.889805 kubelet[2023]: I0212 20:30:27.889756 2023 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5d78c9869d-qjzrn" podStartSLOduration=29.889702034 podCreationTimestamp="2024-02-12 20:29:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 20:30:27.888990051 +0000 UTC m=+43.608708412" watchObservedRunningTime="2024-02-12 20:30:27.889702034 +0000 UTC m=+43.609420396"
Feb 12 20:30:28.253579 systemd[1]: run-containerd-runc-k8s.io-dc22a90d852af464b963ab88b6128b028f163c40582ef8eb335abeb4d607c57e-runc.JKC9me.mount: Deactivated successfully.
Feb 12 20:30:41.412843 systemd[1]: Started sshd@5-10.128.0.46:22-147.75.109.163:45748.service.
Feb 12 20:30:41.702337 sshd[3368]: Accepted publickey for core from 147.75.109.163 port 45748 ssh2: RSA SHA256:xlSJPj37rpshD+I6cqqeKxL8SH/zhZoYeHdGs1pWqxc
Feb 12 20:30:41.704419 sshd[3368]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 20:30:41.711047 systemd-logind[1122]: New session 6 of user core.
Feb 12 20:30:41.712318 systemd[1]: Started session-6.scope.
Feb 12 20:30:42.006120 sshd[3368]: pam_unix(sshd:session): session closed for user core
Feb 12 20:30:42.011112 systemd[1]: sshd@5-10.128.0.46:22-147.75.109.163:45748.service: Deactivated successfully.
Feb 12 20:30:42.012288 systemd[1]: session-6.scope: Deactivated successfully.
Feb 12 20:30:42.013302 systemd-logind[1122]: Session 6 logged out. Waiting for processes to exit.
Feb 12 20:30:42.015040 systemd-logind[1122]: Removed session 6.
Feb 12 20:30:47.053237 systemd[1]: Started sshd@6-10.128.0.46:22-147.75.109.163:35936.service.
Feb 12 20:30:47.340431 sshd[3384]: Accepted publickey for core from 147.75.109.163 port 35936 ssh2: RSA SHA256:xlSJPj37rpshD+I6cqqeKxL8SH/zhZoYeHdGs1pWqxc
Feb 12 20:30:47.342554 sshd[3384]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 20:30:47.349016 systemd-logind[1122]: New session 7 of user core.
Feb 12 20:30:47.349395 systemd[1]: Started session-7.scope.
Feb 12 20:30:47.628122 sshd[3384]: pam_unix(sshd:session): session closed for user core
Feb 12 20:30:47.633039 systemd[1]: sshd@6-10.128.0.46:22-147.75.109.163:35936.service: Deactivated successfully.
Feb 12 20:30:47.634261 systemd[1]: session-7.scope: Deactivated successfully.
Feb 12 20:30:47.635292 systemd-logind[1122]: Session 7 logged out. Waiting for processes to exit.
Feb 12 20:30:47.636895 systemd-logind[1122]: Removed session 7.
Feb 12 20:30:52.675782 systemd[1]: Started sshd@7-10.128.0.46:22-147.75.109.163:35950.service.
Feb 12 20:30:52.968670 sshd[3397]: Accepted publickey for core from 147.75.109.163 port 35950 ssh2: RSA SHA256:xlSJPj37rpshD+I6cqqeKxL8SH/zhZoYeHdGs1pWqxc
Feb 12 20:30:52.970731 sshd[3397]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 20:30:52.977864 systemd[1]: Started session-8.scope.
Feb 12 20:30:52.979329 systemd-logind[1122]: New session 8 of user core.
Feb 12 20:30:53.259569 sshd[3397]: pam_unix(sshd:session): session closed for user core
Feb 12 20:30:53.264177 systemd[1]: sshd@7-10.128.0.46:22-147.75.109.163:35950.service: Deactivated successfully.
Feb 12 20:30:53.265374 systemd[1]: session-8.scope: Deactivated successfully.
Feb 12 20:30:53.266362 systemd-logind[1122]: Session 8 logged out. Waiting for processes to exit.
Feb 12 20:30:53.267623 systemd-logind[1122]: Removed session 8.
Feb 12 20:30:58.309300 systemd[1]: Started sshd@8-10.128.0.46:22-147.75.109.163:44364.service.
Feb 12 20:30:58.609379 sshd[3410]: Accepted publickey for core from 147.75.109.163 port 44364 ssh2: RSA SHA256:xlSJPj37rpshD+I6cqqeKxL8SH/zhZoYeHdGs1pWqxc
Feb 12 20:30:58.611075 sshd[3410]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 20:30:58.618060 systemd-logind[1122]: New session 9 of user core.
Feb 12 20:30:58.619361 systemd[1]: Started session-9.scope.
Feb 12 20:30:58.908369 sshd[3410]: pam_unix(sshd:session): session closed for user core
Feb 12 20:30:58.913781 systemd-logind[1122]: Session 9 logged out. Waiting for processes to exit.
Feb 12 20:30:58.914087 systemd[1]: sshd@8-10.128.0.46:22-147.75.109.163:44364.service: Deactivated successfully.
Feb 12 20:30:58.915255 systemd[1]: session-9.scope: Deactivated successfully.
Feb 12 20:30:58.916485 systemd-logind[1122]: Removed session 9.
Feb 12 20:31:03.955618 systemd[1]: Started sshd@9-10.128.0.46:22-147.75.109.163:44376.service.
Feb 12 20:31:04.247675 sshd[3425]: Accepted publickey for core from 147.75.109.163 port 44376 ssh2: RSA SHA256:xlSJPj37rpshD+I6cqqeKxL8SH/zhZoYeHdGs1pWqxc
Feb 12 20:31:04.249813 sshd[3425]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 20:31:04.257957 systemd[1]: Started session-10.scope.
Feb 12 20:31:04.258616 systemd-logind[1122]: New session 10 of user core.
Feb 12 20:31:04.538636 sshd[3425]: pam_unix(sshd:session): session closed for user core
Feb 12 20:31:04.544715 systemd-logind[1122]: Session 10 logged out. Waiting for processes to exit.
Feb 12 20:31:04.545187 systemd[1]: sshd@9-10.128.0.46:22-147.75.109.163:44376.service: Deactivated successfully.
Feb 12 20:31:04.546540 systemd[1]: session-10.scope: Deactivated successfully.
Feb 12 20:31:04.547854 systemd-logind[1122]: Removed session 10.
Feb 12 20:31:04.587011 systemd[1]: Started sshd@10-10.128.0.46:22-147.75.109.163:60920.service.
Feb 12 20:31:04.881865 sshd[3437]: Accepted publickey for core from 147.75.109.163 port 60920 ssh2: RSA SHA256:xlSJPj37rpshD+I6cqqeKxL8SH/zhZoYeHdGs1pWqxc
Feb 12 20:31:04.884155 sshd[3437]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 20:31:04.891871 systemd[1]: Started session-11.scope.
Feb 12 20:31:04.893283 systemd-logind[1122]: New session 11 of user core.
Feb 12 20:31:05.993547 sshd[3437]: pam_unix(sshd:session): session closed for user core
Feb 12 20:31:05.998638 systemd[1]: sshd@10-10.128.0.46:22-147.75.109.163:60920.service: Deactivated successfully.
Feb 12 20:31:05.999720 systemd[1]: session-11.scope: Deactivated successfully.
Feb 12 20:31:06.000020 systemd-logind[1122]: Session 11 logged out. Waiting for processes to exit.
Feb 12 20:31:06.001840 systemd-logind[1122]: Removed session 11.
Feb 12 20:31:06.041544 systemd[1]: Started sshd@11-10.128.0.46:22-147.75.109.163:60928.service.
Feb 12 20:31:06.331134 sshd[3448]: Accepted publickey for core from 147.75.109.163 port 60928 ssh2: RSA SHA256:xlSJPj37rpshD+I6cqqeKxL8SH/zhZoYeHdGs1pWqxc
Feb 12 20:31:06.333212 sshd[3448]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 20:31:06.340507 systemd-logind[1122]: New session 12 of user core.
Feb 12 20:31:06.340642 systemd[1]: Started session-12.scope.
Feb 12 20:31:06.622460 sshd[3448]: pam_unix(sshd:session): session closed for user core
Feb 12 20:31:06.627023 systemd[1]: sshd@11-10.128.0.46:22-147.75.109.163:60928.service: Deactivated successfully.
Feb 12 20:31:06.628256 systemd[1]: session-12.scope: Deactivated successfully.
Feb 12 20:31:06.629317 systemd-logind[1122]: Session 12 logged out. Waiting for processes to exit.
Feb 12 20:31:06.630655 systemd-logind[1122]: Removed session 12.
Feb 12 20:31:11.670766 systemd[1]: Started sshd@12-10.128.0.46:22-147.75.109.163:60932.service.
Feb 12 20:31:11.960611 sshd[3460]: Accepted publickey for core from 147.75.109.163 port 60932 ssh2: RSA SHA256:xlSJPj37rpshD+I6cqqeKxL8SH/zhZoYeHdGs1pWqxc
Feb 12 20:31:11.962840 sshd[3460]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 20:31:11.969972 systemd[1]: Started session-13.scope.
Feb 12 20:31:11.971025 systemd-logind[1122]: New session 13 of user core.
Feb 12 20:31:12.252389 sshd[3460]: pam_unix(sshd:session): session closed for user core
Feb 12 20:31:12.257441 systemd[1]: sshd@12-10.128.0.46:22-147.75.109.163:60932.service: Deactivated successfully.
Feb 12 20:31:12.258604 systemd[1]: session-13.scope: Deactivated successfully.
Feb 12 20:31:12.259980 systemd-logind[1122]: Session 13 logged out. Waiting for processes to exit.
Feb 12 20:31:12.261259 systemd-logind[1122]: Removed session 13.
Feb 12 20:31:17.301583 systemd[1]: Started sshd@13-10.128.0.46:22-147.75.109.163:38544.service.
Feb 12 20:31:17.594002 sshd[3472]: Accepted publickey for core from 147.75.109.163 port 38544 ssh2: RSA SHA256:xlSJPj37rpshD+I6cqqeKxL8SH/zhZoYeHdGs1pWqxc
Feb 12 20:31:17.595968 sshd[3472]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 20:31:17.603186 systemd[1]: Started session-14.scope.
Feb 12 20:31:17.604080 systemd-logind[1122]: New session 14 of user core.
Feb 12 20:31:17.889246 sshd[3472]: pam_unix(sshd:session): session closed for user core
Feb 12 20:31:17.894399 systemd[1]: sshd@13-10.128.0.46:22-147.75.109.163:38544.service: Deactivated successfully.
Feb 12 20:31:17.895691 systemd[1]: session-14.scope: Deactivated successfully.
Feb 12 20:31:17.896712 systemd-logind[1122]: Session 14 logged out. Waiting for processes to exit.
Feb 12 20:31:17.898156 systemd-logind[1122]: Removed session 14.
Feb 12 20:31:17.935287 systemd[1]: Started sshd@14-10.128.0.46:22-147.75.109.163:38554.service.
Feb 12 20:31:18.225353 sshd[3484]: Accepted publickey for core from 147.75.109.163 port 38554 ssh2: RSA SHA256:xlSJPj37rpshD+I6cqqeKxL8SH/zhZoYeHdGs1pWqxc
Feb 12 20:31:18.226789 sshd[3484]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 20:31:18.233990 systemd[1]: Started session-15.scope.
Feb 12 20:31:18.235120 systemd-logind[1122]: New session 15 of user core.
Feb 12 20:31:18.591699 sshd[3484]: pam_unix(sshd:session): session closed for user core
Feb 12 20:31:18.598530 systemd[1]: sshd@14-10.128.0.46:22-147.75.109.163:38554.service: Deactivated successfully.
Feb 12 20:31:18.599712 systemd[1]: session-15.scope: Deactivated successfully.
Feb 12 20:31:18.600609 systemd-logind[1122]: Session 15 logged out. Waiting for processes to exit.
Feb 12 20:31:18.602091 systemd-logind[1122]: Removed session 15.
Feb 12 20:31:18.638423 systemd[1]: Started sshd@15-10.128.0.46:22-147.75.109.163:38566.service.
Feb 12 20:31:18.929864 sshd[3494]: Accepted publickey for core from 147.75.109.163 port 38566 ssh2: RSA SHA256:xlSJPj37rpshD+I6cqqeKxL8SH/zhZoYeHdGs1pWqxc
Feb 12 20:31:18.931655 sshd[3494]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 20:31:18.939029 systemd[1]: Started session-16.scope.
Feb 12 20:31:18.940168 systemd-logind[1122]: New session 16 of user core.
Feb 12 20:31:20.038846 sshd[3494]: pam_unix(sshd:session): session closed for user core
Feb 12 20:31:20.044102 systemd-logind[1122]: Session 16 logged out. Waiting for processes to exit.
Feb 12 20:31:20.046386 systemd[1]: sshd@15-10.128.0.46:22-147.75.109.163:38566.service: Deactivated successfully.
Feb 12 20:31:20.047590 systemd[1]: session-16.scope: Deactivated successfully.
Feb 12 20:31:20.049770 systemd-logind[1122]: Removed session 16.
Feb 12 20:31:20.083785 systemd[1]: Started sshd@16-10.128.0.46:22-147.75.109.163:38572.service.
Feb 12 20:31:20.375347 sshd[3513]: Accepted publickey for core from 147.75.109.163 port 38572 ssh2: RSA SHA256:xlSJPj37rpshD+I6cqqeKxL8SH/zhZoYeHdGs1pWqxc
Feb 12 20:31:20.377354 sshd[3513]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 20:31:20.384847 systemd[1]: Started session-17.scope.
Feb 12 20:31:20.385779 systemd-logind[1122]: New session 17 of user core.
Feb 12 20:31:20.958005 sshd[3513]: pam_unix(sshd:session): session closed for user core
Feb 12 20:31:20.963025 systemd-logind[1122]: Session 17 logged out. Waiting for processes to exit.
Feb 12 20:31:20.964183 systemd[1]: sshd@16-10.128.0.46:22-147.75.109.163:38572.service: Deactivated successfully.
Feb 12 20:31:20.965439 systemd[1]: session-17.scope: Deactivated successfully.
Feb 12 20:31:20.966502 systemd-logind[1122]: Removed session 17.
Feb 12 20:31:21.004658 systemd[1]: Started sshd@17-10.128.0.46:22-147.75.109.163:38576.service.
Feb 12 20:31:21.297639 sshd[3523]: Accepted publickey for core from 147.75.109.163 port 38576 ssh2: RSA SHA256:xlSJPj37rpshD+I6cqqeKxL8SH/zhZoYeHdGs1pWqxc
Feb 12 20:31:21.299868 sshd[3523]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 20:31:21.306228 systemd-logind[1122]: New session 18 of user core.
Feb 12 20:31:21.306995 systemd[1]: Started session-18.scope.
Feb 12 20:31:21.586326 sshd[3523]: pam_unix(sshd:session): session closed for user core
Feb 12 20:31:21.590761 systemd[1]: sshd@17-10.128.0.46:22-147.75.109.163:38576.service: Deactivated successfully.
Feb 12 20:31:21.591808 systemd[1]: session-18.scope: Deactivated successfully.
Feb 12 20:31:21.593355 systemd-logind[1122]: Session 18 logged out. Waiting for processes to exit.
Feb 12 20:31:21.594982 systemd-logind[1122]: Removed session 18.
Feb 12 20:31:26.634074 systemd[1]: Started sshd@18-10.128.0.46:22-147.75.109.163:48926.service.
Feb 12 20:31:26.927089 sshd[3534]: Accepted publickey for core from 147.75.109.163 port 48926 ssh2: RSA SHA256:xlSJPj37rpshD+I6cqqeKxL8SH/zhZoYeHdGs1pWqxc
Feb 12 20:31:26.929847 sshd[3534]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 20:31:26.937456 systemd[1]: Started session-19.scope.
Feb 12 20:31:26.938701 systemd-logind[1122]: New session 19 of user core.
Feb 12 20:31:27.221727 sshd[3534]: pam_unix(sshd:session): session closed for user core
Feb 12 20:31:27.227065 systemd[1]: sshd@18-10.128.0.46:22-147.75.109.163:48926.service: Deactivated successfully.
Feb 12 20:31:27.228528 systemd[1]: session-19.scope: Deactivated successfully.
Feb 12 20:31:27.229894 systemd-logind[1122]: Session 19 logged out. Waiting for processes to exit.
Feb 12 20:31:27.231355 systemd-logind[1122]: Removed session 19.
Feb 12 20:31:32.268513 systemd[1]: Started sshd@19-10.128.0.46:22-147.75.109.163:48928.service.
Feb 12 20:31:32.569979 sshd[3551]: Accepted publickey for core from 147.75.109.163 port 48928 ssh2: RSA SHA256:xlSJPj37rpshD+I6cqqeKxL8SH/zhZoYeHdGs1pWqxc
Feb 12 20:31:32.570938 sshd[3551]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 20:31:32.579328 systemd[1]: Started session-20.scope.
Feb 12 20:31:32.580372 systemd-logind[1122]: New session 20 of user core.
Feb 12 20:31:32.862592 sshd[3551]: pam_unix(sshd:session): session closed for user core
Feb 12 20:31:32.867316 systemd[1]: sshd@19-10.128.0.46:22-147.75.109.163:48928.service: Deactivated successfully.
Feb 12 20:31:32.868471 systemd[1]: session-20.scope: Deactivated successfully.
Feb 12 20:31:32.869832 systemd-logind[1122]: Session 20 logged out. Waiting for processes to exit.
Feb 12 20:31:32.871488 systemd-logind[1122]: Removed session 20.
Feb 12 20:31:37.909794 systemd[1]: Started sshd@20-10.128.0.46:22-147.75.109.163:42614.service.
Feb 12 20:31:38.200213 sshd[3563]: Accepted publickey for core from 147.75.109.163 port 42614 ssh2: RSA SHA256:xlSJPj37rpshD+I6cqqeKxL8SH/zhZoYeHdGs1pWqxc
Feb 12 20:31:38.202316 sshd[3563]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 20:31:38.209321 systemd[1]: Started session-21.scope.
Feb 12 20:31:38.209976 systemd-logind[1122]: New session 21 of user core.
Feb 12 20:31:38.536790 sshd[3563]: pam_unix(sshd:session): session closed for user core
Feb 12 20:31:38.551167 systemd[1]: sshd@20-10.128.0.46:22-147.75.109.163:42614.service: Deactivated successfully.
Feb 12 20:31:38.553630 systemd[1]: session-21.scope: Deactivated successfully.
Feb 12 20:31:38.554849 systemd-logind[1122]: Session 21 logged out. Waiting for processes to exit.
Feb 12 20:31:38.557890 systemd-logind[1122]: Removed session 21.
Feb 12 20:31:38.574565 systemd[1]: Started sshd@21-10.128.0.46:22-147.75.109.163:42630.service.
Feb 12 20:31:38.869681 sshd[3575]: Accepted publickey for core from 147.75.109.163 port 42630 ssh2: RSA SHA256:xlSJPj37rpshD+I6cqqeKxL8SH/zhZoYeHdGs1pWqxc
Feb 12 20:31:38.871705 sshd[3575]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 20:31:38.878820 systemd[1]: Started session-22.scope.
Feb 12 20:31:38.880455 systemd-logind[1122]: New session 22 of user core.
Feb 12 20:31:40.999546 env[1138]: time="2024-02-12T20:31:40.998814833Z" level=info msg="StopContainer for \"186b74f17c3294bdb87a0165bee366170bc9768aaafc1fcc69450addc72ed3f9\" with timeout 30 (s)"
Feb 12 20:31:40.999546 env[1138]: time="2024-02-12T20:31:40.999367832Z" level=info msg="Stop container \"186b74f17c3294bdb87a0165bee366170bc9768aaafc1fcc69450addc72ed3f9\" with signal terminated"
Feb 12 20:31:41.028525 systemd[1]: cri-containerd-186b74f17c3294bdb87a0165bee366170bc9768aaafc1fcc69450addc72ed3f9.scope: Deactivated successfully.
Feb 12 20:31:41.049330 systemd[1]: run-containerd-runc-k8s.io-27580b92bd75dd6afe3ba7a113249b92cf0f22d866abb6d6c80878078ec2e2fe-runc.7ghdqg.mount: Deactivated successfully.
Feb 12 20:31:41.066284 env[1138]: time="2024-02-12T20:31:41.066201600Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Feb 12 20:31:41.076983 env[1138]: time="2024-02-12T20:31:41.076874626Z" level=info msg="StopContainer for \"27580b92bd75dd6afe3ba7a113249b92cf0f22d866abb6d6c80878078ec2e2fe\" with timeout 1 (s)"
Feb 12 20:31:41.077667 env[1138]: time="2024-02-12T20:31:41.077619864Z" level=info msg="Stop container \"27580b92bd75dd6afe3ba7a113249b92cf0f22d866abb6d6c80878078ec2e2fe\" with signal terminated"
Feb 12 20:31:41.092188 systemd-networkd[1019]: lxc_health: Link DOWN
Feb 12 20:31:41.092202 systemd-networkd[1019]: lxc_health: Lost carrier
Feb 12 20:31:41.099114 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-186b74f17c3294bdb87a0165bee366170bc9768aaafc1fcc69450addc72ed3f9-rootfs.mount: Deactivated successfully.
Feb 12 20:31:41.117647 systemd[1]: cri-containerd-27580b92bd75dd6afe3ba7a113249b92cf0f22d866abb6d6c80878078ec2e2fe.scope: Deactivated successfully.
Feb 12 20:31:41.118222 systemd[1]: cri-containerd-27580b92bd75dd6afe3ba7a113249b92cf0f22d866abb6d6c80878078ec2e2fe.scope: Consumed 9.509s CPU time.
Feb 12 20:31:41.146472 env[1138]: time="2024-02-12T20:31:41.146402364Z" level=info msg="shim disconnected" id=186b74f17c3294bdb87a0165bee366170bc9768aaafc1fcc69450addc72ed3f9
Feb 12 20:31:41.146472 env[1138]: time="2024-02-12T20:31:41.146473085Z" level=warning msg="cleaning up after shim disconnected" id=186b74f17c3294bdb87a0165bee366170bc9768aaafc1fcc69450addc72ed3f9 namespace=k8s.io
Feb 12 20:31:41.146846 env[1138]: time="2024-02-12T20:31:41.146489468Z" level=info msg="cleaning up dead shim"
Feb 12 20:31:41.163892 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-27580b92bd75dd6afe3ba7a113249b92cf0f22d866abb6d6c80878078ec2e2fe-rootfs.mount: Deactivated successfully.
Feb 12 20:31:41.172407 env[1138]: time="2024-02-12T20:31:41.172340638Z" level=info msg="shim disconnected" id=27580b92bd75dd6afe3ba7a113249b92cf0f22d866abb6d6c80878078ec2e2fe
Feb 12 20:31:41.172695 env[1138]: time="2024-02-12T20:31:41.172664874Z" level=warning msg="cleaning up after shim disconnected" id=27580b92bd75dd6afe3ba7a113249b92cf0f22d866abb6d6c80878078ec2e2fe namespace=k8s.io
Feb 12 20:31:41.172850 env[1138]: time="2024-02-12T20:31:41.172826942Z" level=info msg="cleaning up dead shim"
Feb 12 20:31:41.182846 env[1138]: time="2024-02-12T20:31:41.182790571Z" level=warning msg="cleanup warnings time=\"2024-02-12T20:31:41Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3645 runtime=io.containerd.runc.v2\n"
Feb 12 20:31:41.184976 env[1138]: time="2024-02-12T20:31:41.184875427Z" level=warning msg="cleanup warnings time=\"2024-02-12T20:31:41Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3653 runtime=io.containerd.runc.v2\n"
Feb 12 20:31:41.186242 env[1138]: time="2024-02-12T20:31:41.186186153Z" level=info msg="StopContainer for \"186b74f17c3294bdb87a0165bee366170bc9768aaafc1fcc69450addc72ed3f9\" returns successfully"
Feb 12 20:31:41.187281 env[1138]: time="2024-02-12T20:31:41.187163571Z" level=info msg="StopPodSandbox for \"f171ea4759cd630588faea7e66e45509d2c2daab6e054df129bf29b3bca672ae\""
Feb 12 20:31:41.187281 env[1138]: time="2024-02-12T20:31:41.187269552Z" level=info msg="Container to stop \"186b74f17c3294bdb87a0165bee366170bc9768aaafc1fcc69450addc72ed3f9\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 12 20:31:41.194192 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f171ea4759cd630588faea7e66e45509d2c2daab6e054df129bf29b3bca672ae-shm.mount: Deactivated successfully.
Feb 12 20:31:41.195981 env[1138]: time="2024-02-12T20:31:41.195882876Z" level=info msg="StopContainer for \"27580b92bd75dd6afe3ba7a113249b92cf0f22d866abb6d6c80878078ec2e2fe\" returns successfully"
Feb 12 20:31:41.196971 env[1138]: time="2024-02-12T20:31:41.196870545Z" level=info msg="StopPodSandbox for \"eef8006556d668eda3a7cc874c94df6b5c6859960b4afd28f81b3e322bb17a88\""
Feb 12 20:31:41.197156 env[1138]: time="2024-02-12T20:31:41.196989103Z" level=info msg="Container to stop \"f1b5bad6078baf3ded3f2447e6049da49b6e59aa315b5d1c82c294d9645a455c\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 12 20:31:41.197156 env[1138]: time="2024-02-12T20:31:41.197015870Z" level=info msg="Container to stop \"acf7c3515a6ccdff3d58d4fb6c3116905ea77dfcfe7b40067f77102eeb2e97b8\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 12 20:31:41.197156 env[1138]: time="2024-02-12T20:31:41.197034394Z" level=info msg="Container to stop \"cd40bbec19b7991f3c0bcbe4fa89f5ec3b0aba51b14f11c7701d2880f81596aa\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 12 20:31:41.197156 env[1138]: time="2024-02-12T20:31:41.197054840Z" level=info msg="Container to stop \"2ba81bad0f6ea360e7683719fd69321cfd7e14604b4e4d5709131784e2faee3e\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 12 20:31:41.197156 env[1138]: time="2024-02-12T20:31:41.197072809Z" level=info msg="Container to stop \"27580b92bd75dd6afe3ba7a113249b92cf0f22d866abb6d6c80878078ec2e2fe\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 12 20:31:41.202616 systemd[1]: cri-containerd-f171ea4759cd630588faea7e66e45509d2c2daab6e054df129bf29b3bca672ae.scope: Deactivated successfully.
Feb 12 20:31:41.212107 systemd[1]: cri-containerd-eef8006556d668eda3a7cc874c94df6b5c6859960b4afd28f81b3e322bb17a88.scope: Deactivated successfully.
Feb 12 20:31:41.254413 env[1138]: time="2024-02-12T20:31:41.252607976Z" level=info msg="shim disconnected" id=f171ea4759cd630588faea7e66e45509d2c2daab6e054df129bf29b3bca672ae
Feb 12 20:31:41.254413 env[1138]: time="2024-02-12T20:31:41.252665324Z" level=warning msg="cleaning up after shim disconnected" id=f171ea4759cd630588faea7e66e45509d2c2daab6e054df129bf29b3bca672ae namespace=k8s.io
Feb 12 20:31:41.254413 env[1138]: time="2024-02-12T20:31:41.252680556Z" level=info msg="cleaning up dead shim"
Feb 12 20:31:41.254413 env[1138]: time="2024-02-12T20:31:41.253482649Z" level=info msg="shim disconnected" id=eef8006556d668eda3a7cc874c94df6b5c6859960b4afd28f81b3e322bb17a88
Feb 12 20:31:41.254413 env[1138]: time="2024-02-12T20:31:41.253530831Z" level=warning msg="cleaning up after shim disconnected" id=eef8006556d668eda3a7cc874c94df6b5c6859960b4afd28f81b3e322bb17a88 namespace=k8s.io
Feb 12 20:31:41.254413 env[1138]: time="2024-02-12T20:31:41.253546271Z" level=info msg="cleaning up dead shim"
Feb 12 20:31:41.272048 env[1138]: time="2024-02-12T20:31:41.271987909Z" level=warning msg="cleanup warnings time=\"2024-02-12T20:31:41Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3709 runtime=io.containerd.runc.v2\n"
Feb 12 20:31:41.272725 env[1138]: time="2024-02-12T20:31:41.272677907Z" level=info msg="TearDown network for sandbox \"f171ea4759cd630588faea7e66e45509d2c2daab6e054df129bf29b3bca672ae\" successfully"
Feb 12 20:31:41.272725 env[1138]: time="2024-02-12T20:31:41.272721174Z" level=info msg="StopPodSandbox for \"f171ea4759cd630588faea7e66e45509d2c2daab6e054df129bf29b3bca672ae\" returns successfully"
Feb 12 20:31:41.277302 env[1138]: time="2024-02-12T20:31:41.277259152Z" level=warning msg="cleanup warnings time=\"2024-02-12T20:31:41Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3710 runtime=io.containerd.runc.v2\n"
Feb 12 20:31:41.278337 env[1138]: time="2024-02-12T20:31:41.278292092Z" level=info msg="TearDown network for sandbox \"eef8006556d668eda3a7cc874c94df6b5c6859960b4afd28f81b3e322bb17a88\" successfully"
Feb 12 20:31:41.278553 env[1138]: time="2024-02-12T20:31:41.278522699Z" level=info msg="StopPodSandbox for \"eef8006556d668eda3a7cc874c94df6b5c6859960b4afd28f81b3e322bb17a88\" returns successfully"
Feb 12 20:31:41.343655 kubelet[2023]: I0212 20:31:41.343586 2023 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e60b583c-38a7-4213-a7ee-a6208be24fe2-cilium-run\") pod \"e60b583c-38a7-4213-a7ee-a6208be24fe2\" (UID: \"e60b583c-38a7-4213-a7ee-a6208be24fe2\") "
Feb 12 20:31:41.343655 kubelet[2023]: I0212 20:31:41.343649 2023 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e60b583c-38a7-4213-a7ee-a6208be24fe2-hostproc\") pod \"e60b583c-38a7-4213-a7ee-a6208be24fe2\" (UID: \"e60b583c-38a7-4213-a7ee-a6208be24fe2\") "
Feb 12 20:31:41.344365 kubelet[2023]: I0212 20:31:41.343689 2023 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pkm8s\" (UniqueName: \"kubernetes.io/projected/e60b583c-38a7-4213-a7ee-a6208be24fe2-kube-api-access-pkm8s\") pod \"e60b583c-38a7-4213-a7ee-a6208be24fe2\" (UID: \"e60b583c-38a7-4213-a7ee-a6208be24fe2\") "
Feb 12 20:31:41.344365 kubelet[2023]: I0212 20:31:41.343720 2023 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume
\"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/57935086-1e22-408a-8244-b62b48f4fd0b-cilium-config-path\") pod \"57935086-1e22-408a-8244-b62b48f4fd0b\" (UID: \"57935086-1e22-408a-8244-b62b48f4fd0b\") " Feb 12 20:31:41.344365 kubelet[2023]: I0212 20:31:41.343747 2023 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e60b583c-38a7-4213-a7ee-a6208be24fe2-etc-cni-netd\") pod \"e60b583c-38a7-4213-a7ee-a6208be24fe2\" (UID: \"e60b583c-38a7-4213-a7ee-a6208be24fe2\") " Feb 12 20:31:41.344365 kubelet[2023]: I0212 20:31:41.343782 2023 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e60b583c-38a7-4213-a7ee-a6208be24fe2-cilium-cgroup\") pod \"e60b583c-38a7-4213-a7ee-a6208be24fe2\" (UID: \"e60b583c-38a7-4213-a7ee-a6208be24fe2\") " Feb 12 20:31:41.344365 kubelet[2023]: I0212 20:31:41.343815 2023 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e60b583c-38a7-4213-a7ee-a6208be24fe2-lib-modules\") pod \"e60b583c-38a7-4213-a7ee-a6208be24fe2\" (UID: \"e60b583c-38a7-4213-a7ee-a6208be24fe2\") " Feb 12 20:31:41.344365 kubelet[2023]: I0212 20:31:41.343852 2023 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e60b583c-38a7-4213-a7ee-a6208be24fe2-hubble-tls\") pod \"e60b583c-38a7-4213-a7ee-a6208be24fe2\" (UID: \"e60b583c-38a7-4213-a7ee-a6208be24fe2\") " Feb 12 20:31:41.344701 kubelet[2023]: I0212 20:31:41.343883 2023 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e60b583c-38a7-4213-a7ee-a6208be24fe2-xtables-lock\") pod \"e60b583c-38a7-4213-a7ee-a6208be24fe2\" (UID: \"e60b583c-38a7-4213-a7ee-a6208be24fe2\") " Feb 12 20:31:41.344701 kubelet[2023]: I0212 
20:31:41.343934 2023 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e60b583c-38a7-4213-a7ee-a6208be24fe2-host-proc-sys-kernel\") pod \"e60b583c-38a7-4213-a7ee-a6208be24fe2\" (UID: \"e60b583c-38a7-4213-a7ee-a6208be24fe2\") " Feb 12 20:31:41.344701 kubelet[2023]: I0212 20:31:41.343974 2023 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e60b583c-38a7-4213-a7ee-a6208be24fe2-cilium-config-path\") pod \"e60b583c-38a7-4213-a7ee-a6208be24fe2\" (UID: \"e60b583c-38a7-4213-a7ee-a6208be24fe2\") " Feb 12 20:31:41.344701 kubelet[2023]: I0212 20:31:41.344006 2023 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e60b583c-38a7-4213-a7ee-a6208be24fe2-bpf-maps\") pod \"e60b583c-38a7-4213-a7ee-a6208be24fe2\" (UID: \"e60b583c-38a7-4213-a7ee-a6208be24fe2\") " Feb 12 20:31:41.344701 kubelet[2023]: I0212 20:31:41.344034 2023 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e60b583c-38a7-4213-a7ee-a6208be24fe2-cni-path\") pod \"e60b583c-38a7-4213-a7ee-a6208be24fe2\" (UID: \"e60b583c-38a7-4213-a7ee-a6208be24fe2\") " Feb 12 20:31:41.344701 kubelet[2023]: I0212 20:31:41.344070 2023 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e60b583c-38a7-4213-a7ee-a6208be24fe2-host-proc-sys-net\") pod \"e60b583c-38a7-4213-a7ee-a6208be24fe2\" (UID: \"e60b583c-38a7-4213-a7ee-a6208be24fe2\") " Feb 12 20:31:41.345255 kubelet[2023]: I0212 20:31:41.344119 2023 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e60b583c-38a7-4213-a7ee-a6208be24fe2-clustermesh-secrets\") pod 
\"e60b583c-38a7-4213-a7ee-a6208be24fe2\" (UID: \"e60b583c-38a7-4213-a7ee-a6208be24fe2\") " Feb 12 20:31:41.345255 kubelet[2023]: I0212 20:31:41.344156 2023 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mcs9s\" (UniqueName: \"kubernetes.io/projected/57935086-1e22-408a-8244-b62b48f4fd0b-kube-api-access-mcs9s\") pod \"57935086-1e22-408a-8244-b62b48f4fd0b\" (UID: \"57935086-1e22-408a-8244-b62b48f4fd0b\") " Feb 12 20:31:41.346012 kubelet[2023]: I0212 20:31:41.345968 2023 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e60b583c-38a7-4213-a7ee-a6208be24fe2-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "e60b583c-38a7-4213-a7ee-a6208be24fe2" (UID: "e60b583c-38a7-4213-a7ee-a6208be24fe2"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 20:31:41.346165 kubelet[2023]: I0212 20:31:41.346045 2023 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e60b583c-38a7-4213-a7ee-a6208be24fe2-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "e60b583c-38a7-4213-a7ee-a6208be24fe2" (UID: "e60b583c-38a7-4213-a7ee-a6208be24fe2"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 20:31:41.346309 kubelet[2023]: W0212 20:31:41.346261 2023 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/e60b583c-38a7-4213-a7ee-a6208be24fe2/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Feb 12 20:31:41.346741 kubelet[2023]: I0212 20:31:41.346692 2023 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e60b583c-38a7-4213-a7ee-a6208be24fe2-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "e60b583c-38a7-4213-a7ee-a6208be24fe2" (UID: "e60b583c-38a7-4213-a7ee-a6208be24fe2"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 20:31:41.346938 kubelet[2023]: I0212 20:31:41.346898 2023 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e60b583c-38a7-4213-a7ee-a6208be24fe2-hostproc" (OuterVolumeSpecName: "hostproc") pod "e60b583c-38a7-4213-a7ee-a6208be24fe2" (UID: "e60b583c-38a7-4213-a7ee-a6208be24fe2"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 20:31:41.349803 kubelet[2023]: I0212 20:31:41.349761 2023 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e60b583c-38a7-4213-a7ee-a6208be24fe2-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "e60b583c-38a7-4213-a7ee-a6208be24fe2" (UID: "e60b583c-38a7-4213-a7ee-a6208be24fe2"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 12 20:31:41.349965 kubelet[2023]: I0212 20:31:41.349839 2023 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e60b583c-38a7-4213-a7ee-a6208be24fe2-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "e60b583c-38a7-4213-a7ee-a6208be24fe2" (UID: "e60b583c-38a7-4213-a7ee-a6208be24fe2"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 20:31:41.349965 kubelet[2023]: I0212 20:31:41.349873 2023 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e60b583c-38a7-4213-a7ee-a6208be24fe2-cni-path" (OuterVolumeSpecName: "cni-path") pod "e60b583c-38a7-4213-a7ee-a6208be24fe2" (UID: "e60b583c-38a7-4213-a7ee-a6208be24fe2"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 20:31:41.349965 kubelet[2023]: I0212 20:31:41.349903 2023 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e60b583c-38a7-4213-a7ee-a6208be24fe2-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "e60b583c-38a7-4213-a7ee-a6208be24fe2" (UID: "e60b583c-38a7-4213-a7ee-a6208be24fe2"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 20:31:41.350486 kubelet[2023]: I0212 20:31:41.350416 2023 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e60b583c-38a7-4213-a7ee-a6208be24fe2-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "e60b583c-38a7-4213-a7ee-a6208be24fe2" (UID: "e60b583c-38a7-4213-a7ee-a6208be24fe2"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 20:31:41.350891 kubelet[2023]: W0212 20:31:41.350847 2023 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/57935086-1e22-408a-8244-b62b48f4fd0b/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Feb 12 20:31:41.353786 kubelet[2023]: I0212 20:31:41.353745 2023 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57935086-1e22-408a-8244-b62b48f4fd0b-kube-api-access-mcs9s" (OuterVolumeSpecName: "kube-api-access-mcs9s") pod "57935086-1e22-408a-8244-b62b48f4fd0b" (UID: "57935086-1e22-408a-8244-b62b48f4fd0b"). InnerVolumeSpecName "kube-api-access-mcs9s". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 12 20:31:41.356117 kubelet[2023]: I0212 20:31:41.356071 2023 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/57935086-1e22-408a-8244-b62b48f4fd0b-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "57935086-1e22-408a-8244-b62b48f4fd0b" (UID: "57935086-1e22-408a-8244-b62b48f4fd0b"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 12 20:31:41.356262 kubelet[2023]: I0212 20:31:41.356152 2023 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e60b583c-38a7-4213-a7ee-a6208be24fe2-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "e60b583c-38a7-4213-a7ee-a6208be24fe2" (UID: "e60b583c-38a7-4213-a7ee-a6208be24fe2"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 20:31:41.356262 kubelet[2023]: I0212 20:31:41.356189 2023 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e60b583c-38a7-4213-a7ee-a6208be24fe2-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "e60b583c-38a7-4213-a7ee-a6208be24fe2" (UID: "e60b583c-38a7-4213-a7ee-a6208be24fe2"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 20:31:41.358473 kubelet[2023]: I0212 20:31:41.358434 2023 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e60b583c-38a7-4213-a7ee-a6208be24fe2-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "e60b583c-38a7-4213-a7ee-a6208be24fe2" (UID: "e60b583c-38a7-4213-a7ee-a6208be24fe2"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 12 20:31:41.360483 kubelet[2023]: I0212 20:31:41.360448 2023 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e60b583c-38a7-4213-a7ee-a6208be24fe2-kube-api-access-pkm8s" (OuterVolumeSpecName: "kube-api-access-pkm8s") pod "e60b583c-38a7-4213-a7ee-a6208be24fe2" (UID: "e60b583c-38a7-4213-a7ee-a6208be24fe2"). InnerVolumeSpecName "kube-api-access-pkm8s". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 12 20:31:41.363363 kubelet[2023]: I0212 20:31:41.363325 2023 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e60b583c-38a7-4213-a7ee-a6208be24fe2-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "e60b583c-38a7-4213-a7ee-a6208be24fe2" (UID: "e60b583c-38a7-4213-a7ee-a6208be24fe2"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 12 20:31:41.445143 kubelet[2023]: I0212 20:31:41.445086 2023 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e60b583c-38a7-4213-a7ee-a6208be24fe2-host-proc-sys-kernel\") on node \"ci-3510-3-2-f80330683fdd88fa6e06.c.flatcar-212911.internal\" DevicePath \"\"" Feb 12 20:31:41.445143 kubelet[2023]: I0212 20:31:41.445141 2023 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e60b583c-38a7-4213-a7ee-a6208be24fe2-cilium-config-path\") on node \"ci-3510-3-2-f80330683fdd88fa6e06.c.flatcar-212911.internal\" DevicePath \"\"" Feb 12 20:31:41.445143 kubelet[2023]: I0212 20:31:41.445165 2023 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e60b583c-38a7-4213-a7ee-a6208be24fe2-bpf-maps\") on node \"ci-3510-3-2-f80330683fdd88fa6e06.c.flatcar-212911.internal\" DevicePath \"\"" Feb 12 20:31:41.445535 kubelet[2023]: I0212 20:31:41.445182 2023 
reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e60b583c-38a7-4213-a7ee-a6208be24fe2-cni-path\") on node \"ci-3510-3-2-f80330683fdd88fa6e06.c.flatcar-212911.internal\" DevicePath \"\"" Feb 12 20:31:41.445535 kubelet[2023]: I0212 20:31:41.445206 2023 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e60b583c-38a7-4213-a7ee-a6208be24fe2-host-proc-sys-net\") on node \"ci-3510-3-2-f80330683fdd88fa6e06.c.flatcar-212911.internal\" DevicePath \"\"" Feb 12 20:31:41.445535 kubelet[2023]: I0212 20:31:41.445227 2023 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e60b583c-38a7-4213-a7ee-a6208be24fe2-clustermesh-secrets\") on node \"ci-3510-3-2-f80330683fdd88fa6e06.c.flatcar-212911.internal\" DevicePath \"\"" Feb 12 20:31:41.445535 kubelet[2023]: I0212 20:31:41.445246 2023 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-mcs9s\" (UniqueName: \"kubernetes.io/projected/57935086-1e22-408a-8244-b62b48f4fd0b-kube-api-access-mcs9s\") on node \"ci-3510-3-2-f80330683fdd88fa6e06.c.flatcar-212911.internal\" DevicePath \"\"" Feb 12 20:31:41.445535 kubelet[2023]: I0212 20:31:41.445265 2023 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e60b583c-38a7-4213-a7ee-a6208be24fe2-cilium-run\") on node \"ci-3510-3-2-f80330683fdd88fa6e06.c.flatcar-212911.internal\" DevicePath \"\"" Feb 12 20:31:41.445535 kubelet[2023]: I0212 20:31:41.445281 2023 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e60b583c-38a7-4213-a7ee-a6208be24fe2-hostproc\") on node \"ci-3510-3-2-f80330683fdd88fa6e06.c.flatcar-212911.internal\" DevicePath \"\"" Feb 12 20:31:41.445535 kubelet[2023]: I0212 20:31:41.445298 2023 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-pkm8s\" 
(UniqueName: \"kubernetes.io/projected/e60b583c-38a7-4213-a7ee-a6208be24fe2-kube-api-access-pkm8s\") on node \"ci-3510-3-2-f80330683fdd88fa6e06.c.flatcar-212911.internal\" DevicePath \"\"" Feb 12 20:31:41.445775 kubelet[2023]: I0212 20:31:41.445316 2023 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/57935086-1e22-408a-8244-b62b48f4fd0b-cilium-config-path\") on node \"ci-3510-3-2-f80330683fdd88fa6e06.c.flatcar-212911.internal\" DevicePath \"\"" Feb 12 20:31:41.445775 kubelet[2023]: I0212 20:31:41.445334 2023 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e60b583c-38a7-4213-a7ee-a6208be24fe2-etc-cni-netd\") on node \"ci-3510-3-2-f80330683fdd88fa6e06.c.flatcar-212911.internal\" DevicePath \"\"" Feb 12 20:31:41.445775 kubelet[2023]: I0212 20:31:41.445352 2023 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e60b583c-38a7-4213-a7ee-a6208be24fe2-cilium-cgroup\") on node \"ci-3510-3-2-f80330683fdd88fa6e06.c.flatcar-212911.internal\" DevicePath \"\"" Feb 12 20:31:41.445775 kubelet[2023]: I0212 20:31:41.445370 2023 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e60b583c-38a7-4213-a7ee-a6208be24fe2-lib-modules\") on node \"ci-3510-3-2-f80330683fdd88fa6e06.c.flatcar-212911.internal\" DevicePath \"\"" Feb 12 20:31:41.445775 kubelet[2023]: I0212 20:31:41.445388 2023 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e60b583c-38a7-4213-a7ee-a6208be24fe2-hubble-tls\") on node \"ci-3510-3-2-f80330683fdd88fa6e06.c.flatcar-212911.internal\" DevicePath \"\"" Feb 12 20:31:41.445775 kubelet[2023]: I0212 20:31:41.445405 2023 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/e60b583c-38a7-4213-a7ee-a6208be24fe2-xtables-lock\") on node \"ci-3510-3-2-f80330683fdd88fa6e06.c.flatcar-212911.internal\" DevicePath \"\"" Feb 12 20:31:42.013569 kubelet[2023]: I0212 20:31:42.013527 2023 scope.go:115] "RemoveContainer" containerID="186b74f17c3294bdb87a0165bee366170bc9768aaafc1fcc69450addc72ed3f9" Feb 12 20:31:42.016897 env[1138]: time="2024-02-12T20:31:42.016393626Z" level=info msg="RemoveContainer for \"186b74f17c3294bdb87a0165bee366170bc9768aaafc1fcc69450addc72ed3f9\"" Feb 12 20:31:42.025871 systemd[1]: Removed slice kubepods-besteffort-pod57935086_1e22_408a_8244_b62b48f4fd0b.slice. Feb 12 20:31:42.030937 env[1138]: time="2024-02-12T20:31:42.030830978Z" level=info msg="RemoveContainer for \"186b74f17c3294bdb87a0165bee366170bc9768aaafc1fcc69450addc72ed3f9\" returns successfully" Feb 12 20:31:42.037544 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-eef8006556d668eda3a7cc874c94df6b5c6859960b4afd28f81b3e322bb17a88-rootfs.mount: Deactivated successfully. Feb 12 20:31:42.037724 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-eef8006556d668eda3a7cc874c94df6b5c6859960b4afd28f81b3e322bb17a88-shm.mount: Deactivated successfully. Feb 12 20:31:42.037846 systemd[1]: var-lib-kubelet-pods-e60b583c\x2d38a7\x2d4213\x2da7ee\x2da6208be24fe2-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dpkm8s.mount: Deactivated successfully. Feb 12 20:31:42.037973 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f171ea4759cd630588faea7e66e45509d2c2daab6e054df129bf29b3bca672ae-rootfs.mount: Deactivated successfully. Feb 12 20:31:42.038111 systemd[1]: var-lib-kubelet-pods-57935086\x2d1e22\x2d408a\x2d8244\x2db62b48f4fd0b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dmcs9s.mount: Deactivated successfully. 
Feb 12 20:31:42.038224 systemd[1]: var-lib-kubelet-pods-e60b583c\x2d38a7\x2d4213\x2da7ee\x2da6208be24fe2-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 12 20:31:42.038317 systemd[1]: var-lib-kubelet-pods-e60b583c\x2d38a7\x2d4213\x2da7ee\x2da6208be24fe2-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 12 20:31:42.040234 kubelet[2023]: I0212 20:31:42.040191 2023 scope.go:115] "RemoveContainer" containerID="186b74f17c3294bdb87a0165bee366170bc9768aaafc1fcc69450addc72ed3f9" Feb 12 20:31:42.042490 env[1138]: time="2024-02-12T20:31:42.042345584Z" level=error msg="ContainerStatus for \"186b74f17c3294bdb87a0165bee366170bc9768aaafc1fcc69450addc72ed3f9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"186b74f17c3294bdb87a0165bee366170bc9768aaafc1fcc69450addc72ed3f9\": not found" Feb 12 20:31:42.044028 systemd[1]: Removed slice kubepods-burstable-pode60b583c_38a7_4213_a7ee_a6208be24fe2.slice. Feb 12 20:31:42.044199 systemd[1]: kubepods-burstable-pode60b583c_38a7_4213_a7ee_a6208be24fe2.slice: Consumed 9.677s CPU time. 
Feb 12 20:31:42.046680 kubelet[2023]: E0212 20:31:42.045324 2023 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"186b74f17c3294bdb87a0165bee366170bc9768aaafc1fcc69450addc72ed3f9\": not found" containerID="186b74f17c3294bdb87a0165bee366170bc9768aaafc1fcc69450addc72ed3f9" Feb 12 20:31:42.046680 kubelet[2023]: I0212 20:31:42.045410 2023 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:186b74f17c3294bdb87a0165bee366170bc9768aaafc1fcc69450addc72ed3f9} err="failed to get container status \"186b74f17c3294bdb87a0165bee366170bc9768aaafc1fcc69450addc72ed3f9\": rpc error: code = NotFound desc = an error occurred when try to find container \"186b74f17c3294bdb87a0165bee366170bc9768aaafc1fcc69450addc72ed3f9\": not found" Feb 12 20:31:42.046680 kubelet[2023]: I0212 20:31:42.045433 2023 scope.go:115] "RemoveContainer" containerID="27580b92bd75dd6afe3ba7a113249b92cf0f22d866abb6d6c80878078ec2e2fe" Feb 12 20:31:42.048446 env[1138]: time="2024-02-12T20:31:42.047582212Z" level=info msg="RemoveContainer for \"27580b92bd75dd6afe3ba7a113249b92cf0f22d866abb6d6c80878078ec2e2fe\"" Feb 12 20:31:42.055908 env[1138]: time="2024-02-12T20:31:42.055830055Z" level=info msg="RemoveContainer for \"27580b92bd75dd6afe3ba7a113249b92cf0f22d866abb6d6c80878078ec2e2fe\" returns successfully" Feb 12 20:31:42.058165 kubelet[2023]: I0212 20:31:42.058100 2023 scope.go:115] "RemoveContainer" containerID="2ba81bad0f6ea360e7683719fd69321cfd7e14604b4e4d5709131784e2faee3e" Feb 12 20:31:42.060987 env[1138]: time="2024-02-12T20:31:42.060945774Z" level=info msg="RemoveContainer for \"2ba81bad0f6ea360e7683719fd69321cfd7e14604b4e4d5709131784e2faee3e\"" Feb 12 20:31:42.070188 env[1138]: time="2024-02-12T20:31:42.070117722Z" level=info msg="RemoveContainer for \"2ba81bad0f6ea360e7683719fd69321cfd7e14604b4e4d5709131784e2faee3e\" returns successfully" Feb 12 20:31:42.070802 
kubelet[2023]: I0212 20:31:42.070769 2023 scope.go:115] "RemoveContainer" containerID="cd40bbec19b7991f3c0bcbe4fa89f5ec3b0aba51b14f11c7701d2880f81596aa" Feb 12 20:31:42.072465 env[1138]: time="2024-02-12T20:31:42.072415626Z" level=info msg="RemoveContainer for \"cd40bbec19b7991f3c0bcbe4fa89f5ec3b0aba51b14f11c7701d2880f81596aa\"" Feb 12 20:31:42.077850 env[1138]: time="2024-02-12T20:31:42.077780590Z" level=info msg="RemoveContainer for \"cd40bbec19b7991f3c0bcbe4fa89f5ec3b0aba51b14f11c7701d2880f81596aa\" returns successfully" Feb 12 20:31:42.078090 kubelet[2023]: I0212 20:31:42.078062 2023 scope.go:115] "RemoveContainer" containerID="acf7c3515a6ccdff3d58d4fb6c3116905ea77dfcfe7b40067f77102eeb2e97b8" Feb 12 20:31:42.079900 env[1138]: time="2024-02-12T20:31:42.079849336Z" level=info msg="RemoveContainer for \"acf7c3515a6ccdff3d58d4fb6c3116905ea77dfcfe7b40067f77102eeb2e97b8\"" Feb 12 20:31:42.085052 env[1138]: time="2024-02-12T20:31:42.084939445Z" level=info msg="RemoveContainer for \"acf7c3515a6ccdff3d58d4fb6c3116905ea77dfcfe7b40067f77102eeb2e97b8\" returns successfully" Feb 12 20:31:42.085293 kubelet[2023]: I0212 20:31:42.085260 2023 scope.go:115] "RemoveContainer" containerID="f1b5bad6078baf3ded3f2447e6049da49b6e59aa315b5d1c82c294d9645a455c" Feb 12 20:31:42.086845 env[1138]: time="2024-02-12T20:31:42.086788999Z" level=info msg="RemoveContainer for \"f1b5bad6078baf3ded3f2447e6049da49b6e59aa315b5d1c82c294d9645a455c\"" Feb 12 20:31:42.091656 env[1138]: time="2024-02-12T20:31:42.091593077Z" level=info msg="RemoveContainer for \"f1b5bad6078baf3ded3f2447e6049da49b6e59aa315b5d1c82c294d9645a455c\" returns successfully" Feb 12 20:31:42.091937 kubelet[2023]: I0212 20:31:42.091876 2023 scope.go:115] "RemoveContainer" containerID="27580b92bd75dd6afe3ba7a113249b92cf0f22d866abb6d6c80878078ec2e2fe" Feb 12 20:31:42.092421 env[1138]: time="2024-02-12T20:31:42.092324436Z" level=error msg="ContainerStatus for \"27580b92bd75dd6afe3ba7a113249b92cf0f22d866abb6d6c80878078ec2e2fe\" failed" 
error="rpc error: code = NotFound desc = an error occurred when try to find container \"27580b92bd75dd6afe3ba7a113249b92cf0f22d866abb6d6c80878078ec2e2fe\": not found" Feb 12 20:31:42.093423 kubelet[2023]: E0212 20:31:42.093368 2023 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"27580b92bd75dd6afe3ba7a113249b92cf0f22d866abb6d6c80878078ec2e2fe\": not found" containerID="27580b92bd75dd6afe3ba7a113249b92cf0f22d866abb6d6c80878078ec2e2fe" Feb 12 20:31:42.093560 kubelet[2023]: I0212 20:31:42.093430 2023 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:27580b92bd75dd6afe3ba7a113249b92cf0f22d866abb6d6c80878078ec2e2fe} err="failed to get container status \"27580b92bd75dd6afe3ba7a113249b92cf0f22d866abb6d6c80878078ec2e2fe\": rpc error: code = NotFound desc = an error occurred when try to find container \"27580b92bd75dd6afe3ba7a113249b92cf0f22d866abb6d6c80878078ec2e2fe\": not found" Feb 12 20:31:42.093560 kubelet[2023]: I0212 20:31:42.093453 2023 scope.go:115] "RemoveContainer" containerID="2ba81bad0f6ea360e7683719fd69321cfd7e14604b4e4d5709131784e2faee3e" Feb 12 20:31:42.093792 env[1138]: time="2024-02-12T20:31:42.093711250Z" level=error msg="ContainerStatus for \"2ba81bad0f6ea360e7683719fd69321cfd7e14604b4e4d5709131784e2faee3e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2ba81bad0f6ea360e7683719fd69321cfd7e14604b4e4d5709131784e2faee3e\": not found" Feb 12 20:31:42.094095 kubelet[2023]: E0212 20:31:42.094073 2023 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2ba81bad0f6ea360e7683719fd69321cfd7e14604b4e4d5709131784e2faee3e\": not found" containerID="2ba81bad0f6ea360e7683719fd69321cfd7e14604b4e4d5709131784e2faee3e" Feb 12 20:31:42.094371 kubelet[2023]: I0212 
20:31:42.094117 2023 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:2ba81bad0f6ea360e7683719fd69321cfd7e14604b4e4d5709131784e2faee3e} err="failed to get container status \"2ba81bad0f6ea360e7683719fd69321cfd7e14604b4e4d5709131784e2faee3e\": rpc error: code = NotFound desc = an error occurred when try to find container \"2ba81bad0f6ea360e7683719fd69321cfd7e14604b4e4d5709131784e2faee3e\": not found" Feb 12 20:31:42.094371 kubelet[2023]: I0212 20:31:42.094136 2023 scope.go:115] "RemoveContainer" containerID="cd40bbec19b7991f3c0bcbe4fa89f5ec3b0aba51b14f11c7701d2880f81596aa" Feb 12 20:31:42.094623 env[1138]: time="2024-02-12T20:31:42.094406369Z" level=error msg="ContainerStatus for \"cd40bbec19b7991f3c0bcbe4fa89f5ec3b0aba51b14f11c7701d2880f81596aa\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"cd40bbec19b7991f3c0bcbe4fa89f5ec3b0aba51b14f11c7701d2880f81596aa\": not found" Feb 12 20:31:42.094791 kubelet[2023]: E0212 20:31:42.094768 2023 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"cd40bbec19b7991f3c0bcbe4fa89f5ec3b0aba51b14f11c7701d2880f81596aa\": not found" containerID="cd40bbec19b7991f3c0bcbe4fa89f5ec3b0aba51b14f11c7701d2880f81596aa" Feb 12 20:31:42.094888 kubelet[2023]: I0212 20:31:42.094818 2023 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:cd40bbec19b7991f3c0bcbe4fa89f5ec3b0aba51b14f11c7701d2880f81596aa} err="failed to get container status \"cd40bbec19b7991f3c0bcbe4fa89f5ec3b0aba51b14f11c7701d2880f81596aa\": rpc error: code = NotFound desc = an error occurred when try to find container \"cd40bbec19b7991f3c0bcbe4fa89f5ec3b0aba51b14f11c7701d2880f81596aa\": not found" Feb 12 20:31:42.094888 kubelet[2023]: I0212 20:31:42.094835 2023 scope.go:115] "RemoveContainer" 
containerID="acf7c3515a6ccdff3d58d4fb6c3116905ea77dfcfe7b40067f77102eeb2e97b8" Feb 12 20:31:42.095154 env[1138]: time="2024-02-12T20:31:42.095071897Z" level=error msg="ContainerStatus for \"acf7c3515a6ccdff3d58d4fb6c3116905ea77dfcfe7b40067f77102eeb2e97b8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"acf7c3515a6ccdff3d58d4fb6c3116905ea77dfcfe7b40067f77102eeb2e97b8\": not found" Feb 12 20:31:42.095357 kubelet[2023]: E0212 20:31:42.095303 2023 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"acf7c3515a6ccdff3d58d4fb6c3116905ea77dfcfe7b40067f77102eeb2e97b8\": not found" containerID="acf7c3515a6ccdff3d58d4fb6c3116905ea77dfcfe7b40067f77102eeb2e97b8" Feb 12 20:31:42.095357 kubelet[2023]: I0212 20:31:42.095358 2023 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:acf7c3515a6ccdff3d58d4fb6c3116905ea77dfcfe7b40067f77102eeb2e97b8} err="failed to get container status \"acf7c3515a6ccdff3d58d4fb6c3116905ea77dfcfe7b40067f77102eeb2e97b8\": rpc error: code = NotFound desc = an error occurred when try to find container \"acf7c3515a6ccdff3d58d4fb6c3116905ea77dfcfe7b40067f77102eeb2e97b8\": not found" Feb 12 20:31:42.095535 kubelet[2023]: I0212 20:31:42.095375 2023 scope.go:115] "RemoveContainer" containerID="f1b5bad6078baf3ded3f2447e6049da49b6e59aa315b5d1c82c294d9645a455c" Feb 12 20:31:42.095675 env[1138]: time="2024-02-12T20:31:42.095605196Z" level=error msg="ContainerStatus for \"f1b5bad6078baf3ded3f2447e6049da49b6e59aa315b5d1c82c294d9645a455c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f1b5bad6078baf3ded3f2447e6049da49b6e59aa315b5d1c82c294d9645a455c\": not found" Feb 12 20:31:42.095981 kubelet[2023]: E0212 20:31:42.095823 2023 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound 
desc = an error occurred when try to find container \"f1b5bad6078baf3ded3f2447e6049da49b6e59aa315b5d1c82c294d9645a455c\": not found" containerID="f1b5bad6078baf3ded3f2447e6049da49b6e59aa315b5d1c82c294d9645a455c" Feb 12 20:31:42.096095 kubelet[2023]: I0212 20:31:42.096002 2023 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:f1b5bad6078baf3ded3f2447e6049da49b6e59aa315b5d1c82c294d9645a455c} err="failed to get container status \"f1b5bad6078baf3ded3f2447e6049da49b6e59aa315b5d1c82c294d9645a455c\": rpc error: code = NotFound desc = an error occurred when try to find container \"f1b5bad6078baf3ded3f2447e6049da49b6e59aa315b5d1c82c294d9645a455c\": not found" Feb 12 20:31:42.593476 kubelet[2023]: I0212 20:31:42.593405 2023 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=57935086-1e22-408a-8244-b62b48f4fd0b path="/var/lib/kubelet/pods/57935086-1e22-408a-8244-b62b48f4fd0b/volumes" Feb 12 20:31:42.594270 kubelet[2023]: I0212 20:31:42.594225 2023 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=e60b583c-38a7-4213-a7ee-a6208be24fe2 path="/var/lib/kubelet/pods/e60b583c-38a7-4213-a7ee-a6208be24fe2/volumes" Feb 12 20:31:42.984589 sshd[3575]: pam_unix(sshd:session): session closed for user core Feb 12 20:31:42.989106 systemd[1]: sshd@21-10.128.0.46:22-147.75.109.163:42630.service: Deactivated successfully. Feb 12 20:31:42.990186 systemd[1]: session-22.scope: Deactivated successfully. Feb 12 20:31:42.990434 systemd[1]: session-22.scope: Consumed 1.368s CPU time. Feb 12 20:31:42.991243 systemd-logind[1122]: Session 22 logged out. Waiting for processes to exit. Feb 12 20:31:42.992519 systemd-logind[1122]: Removed session 22. Feb 12 20:31:43.033410 systemd[1]: Started sshd@22-10.128.0.46:22-147.75.109.163:42644.service. 
Feb 12 20:31:43.323203 sshd[3742]: Accepted publickey for core from 147.75.109.163 port 42644 ssh2: RSA SHA256:xlSJPj37rpshD+I6cqqeKxL8SH/zhZoYeHdGs1pWqxc Feb 12 20:31:43.325173 sshd[3742]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 20:31:43.332037 systemd[1]: Started session-23.scope. Feb 12 20:31:43.332624 systemd-logind[1122]: New session 23 of user core. Feb 12 20:31:44.066325 sshd[3742]: pam_unix(sshd:session): session closed for user core Feb 12 20:31:44.071904 systemd-logind[1122]: Session 23 logged out. Waiting for processes to exit. Feb 12 20:31:44.075177 systemd[1]: sshd@22-10.128.0.46:22-147.75.109.163:42644.service: Deactivated successfully. Feb 12 20:31:44.076427 systemd[1]: session-23.scope: Deactivated successfully. Feb 12 20:31:44.080560 systemd-logind[1122]: Removed session 23. Feb 12 20:31:44.117172 systemd[1]: Started sshd@23-10.128.0.46:22-147.75.109.163:42660.service. Feb 12 20:31:44.155415 kubelet[2023]: I0212 20:31:44.155350 2023 topology_manager.go:212] "Topology Admit Handler" Feb 12 20:31:44.156013 kubelet[2023]: E0212 20:31:44.155442 2023 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e60b583c-38a7-4213-a7ee-a6208be24fe2" containerName="mount-cgroup" Feb 12 20:31:44.156013 kubelet[2023]: E0212 20:31:44.155458 2023 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e60b583c-38a7-4213-a7ee-a6208be24fe2" containerName="apply-sysctl-overwrites" Feb 12 20:31:44.156013 kubelet[2023]: E0212 20:31:44.155469 2023 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e60b583c-38a7-4213-a7ee-a6208be24fe2" containerName="mount-bpf-fs" Feb 12 20:31:44.156013 kubelet[2023]: E0212 20:31:44.155480 2023 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="57935086-1e22-408a-8244-b62b48f4fd0b" containerName="cilium-operator" Feb 12 20:31:44.156013 kubelet[2023]: E0212 20:31:44.155493 2023 cpu_manager.go:395] "RemoveStaleState: removing container" 
podUID="e60b583c-38a7-4213-a7ee-a6208be24fe2" containerName="clean-cilium-state" Feb 12 20:31:44.156013 kubelet[2023]: E0212 20:31:44.155504 2023 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e60b583c-38a7-4213-a7ee-a6208be24fe2" containerName="cilium-agent" Feb 12 20:31:44.156013 kubelet[2023]: I0212 20:31:44.155568 2023 memory_manager.go:346] "RemoveStaleState removing state" podUID="e60b583c-38a7-4213-a7ee-a6208be24fe2" containerName="cilium-agent" Feb 12 20:31:44.156013 kubelet[2023]: I0212 20:31:44.155582 2023 memory_manager.go:346] "RemoveStaleState removing state" podUID="57935086-1e22-408a-8244-b62b48f4fd0b" containerName="cilium-operator" Feb 12 20:31:44.165078 systemd[1]: Created slice kubepods-burstable-pod11c7c08d_5440_4e3d_bc4c_867997861322.slice. Feb 12 20:31:44.174219 kubelet[2023]: W0212 20:31:44.174175 2023 reflector.go:533] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ci-3510-3-2-f80330683fdd88fa6e06.c.flatcar-212911.internal" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510-3-2-f80330683fdd88fa6e06.c.flatcar-212911.internal' and this object Feb 12 20:31:44.174219 kubelet[2023]: E0212 20:31:44.174232 2023 reflector.go:148] object-"kube-system"/"hubble-server-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ci-3510-3-2-f80330683fdd88fa6e06.c.flatcar-212911.internal" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510-3-2-f80330683fdd88fa6e06.c.flatcar-212911.internal' and this object Feb 12 20:31:44.174502 kubelet[2023]: W0212 20:31:44.174289 2023 reflector.go:533] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User 
"system:node:ci-3510-3-2-f80330683fdd88fa6e06.c.flatcar-212911.internal" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510-3-2-f80330683fdd88fa6e06.c.flatcar-212911.internal' and this object Feb 12 20:31:44.174502 kubelet[2023]: E0212 20:31:44.174304 2023 reflector.go:148] object-"kube-system"/"cilium-clustermesh": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ci-3510-3-2-f80330683fdd88fa6e06.c.flatcar-212911.internal" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510-3-2-f80330683fdd88fa6e06.c.flatcar-212911.internal' and this object Feb 12 20:31:44.176426 kubelet[2023]: W0212 20:31:44.176365 2023 reflector.go:533] object-"kube-system"/"cilium-ipsec-keys": failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:ci-3510-3-2-f80330683fdd88fa6e06.c.flatcar-212911.internal" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510-3-2-f80330683fdd88fa6e06.c.flatcar-212911.internal' and this object Feb 12 20:31:44.176426 kubelet[2023]: E0212 20:31:44.176431 2023 reflector.go:148] object-"kube-system"/"cilium-ipsec-keys": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:ci-3510-3-2-f80330683fdd88fa6e06.c.flatcar-212911.internal" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510-3-2-f80330683fdd88fa6e06.c.flatcar-212911.internal' and this object Feb 12 20:31:44.191325 kubelet[2023]: W0212 20:31:44.191276 2023 reflector.go:533] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ci-3510-3-2-f80330683fdd88fa6e06.c.flatcar-212911.internal" cannot list 
resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510-3-2-f80330683fdd88fa6e06.c.flatcar-212911.internal' and this object Feb 12 20:31:44.191325 kubelet[2023]: E0212 20:31:44.191328 2023 reflector.go:148] object-"kube-system"/"cilium-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ci-3510-3-2-f80330683fdd88fa6e06.c.flatcar-212911.internal" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510-3-2-f80330683fdd88fa6e06.c.flatcar-212911.internal' and this object Feb 12 20:31:44.266734 kubelet[2023]: I0212 20:31:44.266674 2023 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/11c7c08d-5440-4e3d-bc4c-867997861322-cilium-cgroup\") pod \"cilium-stlq6\" (UID: \"11c7c08d-5440-4e3d-bc4c-867997861322\") " pod="kube-system/cilium-stlq6" Feb 12 20:31:44.266734 kubelet[2023]: I0212 20:31:44.266744 2023 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/11c7c08d-5440-4e3d-bc4c-867997861322-cni-path\") pod \"cilium-stlq6\" (UID: \"11c7c08d-5440-4e3d-bc4c-867997861322\") " pod="kube-system/cilium-stlq6" Feb 12 20:31:44.267044 kubelet[2023]: I0212 20:31:44.266779 2023 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/11c7c08d-5440-4e3d-bc4c-867997861322-cilium-config-path\") pod \"cilium-stlq6\" (UID: \"11c7c08d-5440-4e3d-bc4c-867997861322\") " pod="kube-system/cilium-stlq6" Feb 12 20:31:44.267044 kubelet[2023]: I0212 20:31:44.266837 2023 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: 
\"kubernetes.io/host-path/11c7c08d-5440-4e3d-bc4c-867997861322-host-proc-sys-kernel\") pod \"cilium-stlq6\" (UID: \"11c7c08d-5440-4e3d-bc4c-867997861322\") " pod="kube-system/cilium-stlq6" Feb 12 20:31:44.267044 kubelet[2023]: I0212 20:31:44.266871 2023 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/11c7c08d-5440-4e3d-bc4c-867997861322-host-proc-sys-net\") pod \"cilium-stlq6\" (UID: \"11c7c08d-5440-4e3d-bc4c-867997861322\") " pod="kube-system/cilium-stlq6" Feb 12 20:31:44.267044 kubelet[2023]: I0212 20:31:44.266900 2023 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/11c7c08d-5440-4e3d-bc4c-867997861322-cilium-run\") pod \"cilium-stlq6\" (UID: \"11c7c08d-5440-4e3d-bc4c-867997861322\") " pod="kube-system/cilium-stlq6" Feb 12 20:31:44.267044 kubelet[2023]: I0212 20:31:44.266944 2023 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/11c7c08d-5440-4e3d-bc4c-867997861322-bpf-maps\") pod \"cilium-stlq6\" (UID: \"11c7c08d-5440-4e3d-bc4c-867997861322\") " pod="kube-system/cilium-stlq6" Feb 12 20:31:44.267044 kubelet[2023]: I0212 20:31:44.266976 2023 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/11c7c08d-5440-4e3d-bc4c-867997861322-hostproc\") pod \"cilium-stlq6\" (UID: \"11c7c08d-5440-4e3d-bc4c-867997861322\") " pod="kube-system/cilium-stlq6" Feb 12 20:31:44.267401 kubelet[2023]: I0212 20:31:44.267008 2023 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/11c7c08d-5440-4e3d-bc4c-867997861322-etc-cni-netd\") pod \"cilium-stlq6\" (UID: 
\"11c7c08d-5440-4e3d-bc4c-867997861322\") " pod="kube-system/cilium-stlq6" Feb 12 20:31:44.267401 kubelet[2023]: I0212 20:31:44.267041 2023 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/11c7c08d-5440-4e3d-bc4c-867997861322-lib-modules\") pod \"cilium-stlq6\" (UID: \"11c7c08d-5440-4e3d-bc4c-867997861322\") " pod="kube-system/cilium-stlq6" Feb 12 20:31:44.267401 kubelet[2023]: I0212 20:31:44.267077 2023 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/11c7c08d-5440-4e3d-bc4c-867997861322-xtables-lock\") pod \"cilium-stlq6\" (UID: \"11c7c08d-5440-4e3d-bc4c-867997861322\") " pod="kube-system/cilium-stlq6" Feb 12 20:31:44.267401 kubelet[2023]: I0212 20:31:44.267127 2023 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/11c7c08d-5440-4e3d-bc4c-867997861322-clustermesh-secrets\") pod \"cilium-stlq6\" (UID: \"11c7c08d-5440-4e3d-bc4c-867997861322\") " pod="kube-system/cilium-stlq6" Feb 12 20:31:44.267401 kubelet[2023]: I0212 20:31:44.267163 2023 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/11c7c08d-5440-4e3d-bc4c-867997861322-cilium-ipsec-secrets\") pod \"cilium-stlq6\" (UID: \"11c7c08d-5440-4e3d-bc4c-867997861322\") " pod="kube-system/cilium-stlq6" Feb 12 20:31:44.267401 kubelet[2023]: I0212 20:31:44.267200 2023 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/11c7c08d-5440-4e3d-bc4c-867997861322-hubble-tls\") pod \"cilium-stlq6\" (UID: \"11c7c08d-5440-4e3d-bc4c-867997861322\") " pod="kube-system/cilium-stlq6" Feb 12 20:31:44.267734 kubelet[2023]: I0212 
20:31:44.267243 2023 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rz7bc\" (UniqueName: \"kubernetes.io/projected/11c7c08d-5440-4e3d-bc4c-867997861322-kube-api-access-rz7bc\") pod \"cilium-stlq6\" (UID: \"11c7c08d-5440-4e3d-bc4c-867997861322\") " pod="kube-system/cilium-stlq6" Feb 12 20:31:44.419105 sshd[3752]: Accepted publickey for core from 147.75.109.163 port 42660 ssh2: RSA SHA256:xlSJPj37rpshD+I6cqqeKxL8SH/zhZoYeHdGs1pWqxc Feb 12 20:31:44.421646 sshd[3752]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 20:31:44.429785 systemd[1]: Started session-24.scope. Feb 12 20:31:44.430389 systemd-logind[1122]: New session 24 of user core. Feb 12 20:31:44.526110 env[1138]: time="2024-02-12T20:31:44.525813508Z" level=info msg="StopPodSandbox for \"eef8006556d668eda3a7cc874c94df6b5c6859960b4afd28f81b3e322bb17a88\"" Feb 12 20:31:44.526110 env[1138]: time="2024-02-12T20:31:44.525969404Z" level=info msg="TearDown network for sandbox \"eef8006556d668eda3a7cc874c94df6b5c6859960b4afd28f81b3e322bb17a88\" successfully" Feb 12 20:31:44.526110 env[1138]: time="2024-02-12T20:31:44.526022914Z" level=info msg="StopPodSandbox for \"eef8006556d668eda3a7cc874c94df6b5c6859960b4afd28f81b3e322bb17a88\" returns successfully" Feb 12 20:31:44.527641 env[1138]: time="2024-02-12T20:31:44.527599074Z" level=info msg="RemovePodSandbox for \"eef8006556d668eda3a7cc874c94df6b5c6859960b4afd28f81b3e322bb17a88\"" Feb 12 20:31:44.527799 env[1138]: time="2024-02-12T20:31:44.527649183Z" level=info msg="Forcibly stopping sandbox \"eef8006556d668eda3a7cc874c94df6b5c6859960b4afd28f81b3e322bb17a88\"" Feb 12 20:31:44.527799 env[1138]: time="2024-02-12T20:31:44.527761976Z" level=info msg="TearDown network for sandbox \"eef8006556d668eda3a7cc874c94df6b5c6859960b4afd28f81b3e322bb17a88\" successfully" Feb 12 20:31:44.535429 env[1138]: time="2024-02-12T20:31:44.535172446Z" level=info msg="RemovePodSandbox 
\"eef8006556d668eda3a7cc874c94df6b5c6859960b4afd28f81b3e322bb17a88\" returns successfully" Feb 12 20:31:44.538164 env[1138]: time="2024-02-12T20:31:44.538114141Z" level=info msg="StopPodSandbox for \"f171ea4759cd630588faea7e66e45509d2c2daab6e054df129bf29b3bca672ae\"" Feb 12 20:31:44.538339 env[1138]: time="2024-02-12T20:31:44.538255045Z" level=info msg="TearDown network for sandbox \"f171ea4759cd630588faea7e66e45509d2c2daab6e054df129bf29b3bca672ae\" successfully" Feb 12 20:31:44.538339 env[1138]: time="2024-02-12T20:31:44.538307590Z" level=info msg="StopPodSandbox for \"f171ea4759cd630588faea7e66e45509d2c2daab6e054df129bf29b3bca672ae\" returns successfully" Feb 12 20:31:44.538783 env[1138]: time="2024-02-12T20:31:44.538724192Z" level=info msg="RemovePodSandbox for \"f171ea4759cd630588faea7e66e45509d2c2daab6e054df129bf29b3bca672ae\"" Feb 12 20:31:44.538951 env[1138]: time="2024-02-12T20:31:44.538771094Z" level=info msg="Forcibly stopping sandbox \"f171ea4759cd630588faea7e66e45509d2c2daab6e054df129bf29b3bca672ae\"" Feb 12 20:31:44.538951 env[1138]: time="2024-02-12T20:31:44.538875184Z" level=info msg="TearDown network for sandbox \"f171ea4759cd630588faea7e66e45509d2c2daab6e054df129bf29b3bca672ae\" successfully" Feb 12 20:31:44.545580 env[1138]: time="2024-02-12T20:31:44.545502995Z" level=info msg="RemovePodSandbox \"f171ea4759cd630588faea7e66e45509d2c2daab6e054df129bf29b3bca672ae\" returns successfully" Feb 12 20:31:44.730115 sshd[3752]: pam_unix(sshd:session): session closed for user core Feb 12 20:31:44.734707 systemd[1]: sshd@23-10.128.0.46:22-147.75.109.163:42660.service: Deactivated successfully. Feb 12 20:31:44.735597 systemd[1]: session-24.scope: Deactivated successfully. Feb 12 20:31:44.737291 systemd-logind[1122]: Session 24 logged out. Waiting for processes to exit. Feb 12 20:31:44.738505 systemd-logind[1122]: Removed session 24. 
Feb 12 20:31:44.770762 kubelet[2023]: E0212 20:31:44.770726 2023 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 12 20:31:44.777607 systemd[1]: Started sshd@24-10.128.0.46:22-147.75.109.163:49196.service. Feb 12 20:31:45.073542 sshd[3768]: Accepted publickey for core from 147.75.109.163 port 49196 ssh2: RSA SHA256:xlSJPj37rpshD+I6cqqeKxL8SH/zhZoYeHdGs1pWqxc Feb 12 20:31:45.075966 sshd[3768]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 20:31:45.086454 systemd[1]: Started session-25.scope. Feb 12 20:31:45.087826 systemd-logind[1122]: New session 25 of user core. Feb 12 20:31:45.369384 kubelet[2023]: E0212 20:31:45.369237 2023 configmap.go:199] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition Feb 12 20:31:45.370115 kubelet[2023]: E0212 20:31:45.370088 2023 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/11c7c08d-5440-4e3d-bc4c-867997861322-cilium-config-path podName:11c7c08d-5440-4e3d-bc4c-867997861322 nodeName:}" failed. No retries permitted until 2024-02-12 20:31:45.870055397 +0000 UTC m=+121.589773758 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/11c7c08d-5440-4e3d-bc4c-867997861322-cilium-config-path") pod "cilium-stlq6" (UID: "11c7c08d-5440-4e3d-bc4c-867997861322") : failed to sync configmap cache: timed out waiting for the condition Feb 12 20:31:45.370416 kubelet[2023]: E0212 20:31:45.370396 2023 secret.go:194] Couldn't get secret kube-system/cilium-clustermesh: failed to sync secret cache: timed out waiting for the condition Feb 12 20:31:45.370594 kubelet[2023]: E0212 20:31:45.370578 2023 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/11c7c08d-5440-4e3d-bc4c-867997861322-clustermesh-secrets podName:11c7c08d-5440-4e3d-bc4c-867997861322 nodeName:}" failed. No retries permitted until 2024-02-12 20:31:45.870554866 +0000 UTC m=+121.590273226 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "clustermesh-secrets" (UniqueName: "kubernetes.io/secret/11c7c08d-5440-4e3d-bc4c-867997861322-clustermesh-secrets") pod "cilium-stlq6" (UID: "11c7c08d-5440-4e3d-bc4c-867997861322") : failed to sync secret cache: timed out waiting for the condition Feb 12 20:31:45.371247 kubelet[2023]: E0212 20:31:45.370979 2023 projected.go:267] Couldn't get secret kube-system/hubble-server-certs: failed to sync secret cache: timed out waiting for the condition Feb 12 20:31:45.371530 kubelet[2023]: E0212 20:31:45.371495 2023 projected.go:198] Error preparing data for projected volume hubble-tls for pod kube-system/cilium-stlq6: failed to sync secret cache: timed out waiting for the condition Feb 12 20:31:45.371755 kubelet[2023]: E0212 20:31:45.371733 2023 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/11c7c08d-5440-4e3d-bc4c-867997861322-hubble-tls podName:11c7c08d-5440-4e3d-bc4c-867997861322 nodeName:}" failed. No retries permitted until 2024-02-12 20:31:45.871709839 +0000 UTC m=+121.591428198 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "hubble-tls" (UniqueName: "kubernetes.io/projected/11c7c08d-5440-4e3d-bc4c-867997861322-hubble-tls") pod "cilium-stlq6" (UID: "11c7c08d-5440-4e3d-bc4c-867997861322") : failed to sync secret cache: timed out waiting for the condition Feb 12 20:31:45.372662 kubelet[2023]: E0212 20:31:45.372604 2023 secret.go:194] Couldn't get secret kube-system/cilium-ipsec-keys: failed to sync secret cache: timed out waiting for the condition Feb 12 20:31:45.373049 kubelet[2023]: E0212 20:31:45.373032 2023 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/11c7c08d-5440-4e3d-bc4c-867997861322-cilium-ipsec-secrets podName:11c7c08d-5440-4e3d-bc4c-867997861322 nodeName:}" failed. No retries permitted until 2024-02-12 20:31:45.87297219 +0000 UTC m=+121.592690540 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-ipsec-secrets" (UniqueName: "kubernetes.io/secret/11c7c08d-5440-4e3d-bc4c-867997861322-cilium-ipsec-secrets") pod "cilium-stlq6" (UID: "11c7c08d-5440-4e3d-bc4c-867997861322") : failed to sync secret cache: timed out waiting for the condition Feb 12 20:31:45.977827 env[1138]: time="2024-02-12T20:31:45.977763454Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-stlq6,Uid:11c7c08d-5440-4e3d-bc4c-867997861322,Namespace:kube-system,Attempt:0,}" Feb 12 20:31:46.009117 env[1138]: time="2024-02-12T20:31:46.008995451Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 20:31:46.009117 env[1138]: time="2024-02-12T20:31:46.009065466Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 20:31:46.009117 env[1138]: time="2024-02-12T20:31:46.009082403Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 20:31:46.009691 env[1138]: time="2024-02-12T20:31:46.009624338Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/843d8ad62de5b77887340d0f5096f5839ea49364922aef905b22b8a20cc2a064 pid=3789 runtime=io.containerd.runc.v2 Feb 12 20:31:46.034445 systemd[1]: Started cri-containerd-843d8ad62de5b77887340d0f5096f5839ea49364922aef905b22b8a20cc2a064.scope. Feb 12 20:31:46.077982 env[1138]: time="2024-02-12T20:31:46.077542017Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-stlq6,Uid:11c7c08d-5440-4e3d-bc4c-867997861322,Namespace:kube-system,Attempt:0,} returns sandbox id \"843d8ad62de5b77887340d0f5096f5839ea49364922aef905b22b8a20cc2a064\"" Feb 12 20:31:46.084551 env[1138]: time="2024-02-12T20:31:46.084479002Z" level=info msg="CreateContainer within sandbox \"843d8ad62de5b77887340d0f5096f5839ea49364922aef905b22b8a20cc2a064\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 12 20:31:46.103281 env[1138]: time="2024-02-12T20:31:46.103198873Z" level=info msg="CreateContainer within sandbox \"843d8ad62de5b77887340d0f5096f5839ea49364922aef905b22b8a20cc2a064\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"3fba79e863888df2b81e1c9487fa2d4f7068cef2192e1df8ad50010dd4d42a32\"" Feb 12 20:31:46.105963 env[1138]: time="2024-02-12T20:31:46.104189036Z" level=info msg="StartContainer for \"3fba79e863888df2b81e1c9487fa2d4f7068cef2192e1df8ad50010dd4d42a32\"" Feb 12 20:31:46.129260 systemd[1]: Started cri-containerd-3fba79e863888df2b81e1c9487fa2d4f7068cef2192e1df8ad50010dd4d42a32.scope. Feb 12 20:31:46.147947 systemd[1]: cri-containerd-3fba79e863888df2b81e1c9487fa2d4f7068cef2192e1df8ad50010dd4d42a32.scope: Deactivated successfully. 
Feb 12 20:31:46.176986 env[1138]: time="2024-02-12T20:31:46.176888733Z" level=info msg="shim disconnected" id=3fba79e863888df2b81e1c9487fa2d4f7068cef2192e1df8ad50010dd4d42a32 Feb 12 20:31:46.176986 env[1138]: time="2024-02-12T20:31:46.176988346Z" level=warning msg="cleaning up after shim disconnected" id=3fba79e863888df2b81e1c9487fa2d4f7068cef2192e1df8ad50010dd4d42a32 namespace=k8s.io Feb 12 20:31:46.177402 env[1138]: time="2024-02-12T20:31:46.177002212Z" level=info msg="cleaning up dead shim" Feb 12 20:31:46.194383 env[1138]: time="2024-02-12T20:31:46.194307950Z" level=warning msg="cleanup warnings time=\"2024-02-12T20:31:46Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3846 runtime=io.containerd.runc.v2\ntime=\"2024-02-12T20:31:46Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/3fba79e863888df2b81e1c9487fa2d4f7068cef2192e1df8ad50010dd4d42a32/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Feb 12 20:31:46.195073 env[1138]: time="2024-02-12T20:31:46.194880500Z" level=error msg="copy shim log" error="read /proc/self/fd/32: file already closed" Feb 12 20:31:46.197055 env[1138]: time="2024-02-12T20:31:46.196989515Z" level=error msg="Failed to pipe stdout of container \"3fba79e863888df2b81e1c9487fa2d4f7068cef2192e1df8ad50010dd4d42a32\"" error="reading from a closed fifo" Feb 12 20:31:46.197318 env[1138]: time="2024-02-12T20:31:46.197032266Z" level=error msg="Failed to pipe stderr of container \"3fba79e863888df2b81e1c9487fa2d4f7068cef2192e1df8ad50010dd4d42a32\"" error="reading from a closed fifo" Feb 12 20:31:46.200465 env[1138]: time="2024-02-12T20:31:46.200384411Z" level=error msg="StartContainer for \"3fba79e863888df2b81e1c9487fa2d4f7068cef2192e1df8ad50010dd4d42a32\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: 
write /proc/self/attr/keycreate: invalid argument: unknown" Feb 12 20:31:46.201070 kubelet[2023]: E0212 20:31:46.201039 2023 remote_runtime.go:326] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="3fba79e863888df2b81e1c9487fa2d4f7068cef2192e1df8ad50010dd4d42a32" Feb 12 20:31:46.201241 kubelet[2023]: E0212 20:31:46.201225 2023 kuberuntime_manager.go:1212] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Feb 12 20:31:46.201241 kubelet[2023]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Feb 12 20:31:46.201241 kubelet[2023]: rm /hostbin/cilium-mount Feb 12 20:31:46.201407 kubelet[2023]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-rz7bc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT 
SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},} start failed in pod cilium-stlq6_kube-system(11c7c08d-5440-4e3d-bc4c-867997861322): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Feb 12 20:31:46.201407 kubelet[2023]: E0212 20:31:46.201298 2023 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-stlq6" podUID=11c7c08d-5440-4e3d-bc4c-867997861322 Feb 12 20:31:47.051283 env[1138]: time="2024-02-12T20:31:47.051207884Z" level=info msg="StopPodSandbox for \"843d8ad62de5b77887340d0f5096f5839ea49364922aef905b22b8a20cc2a064\"" Feb 12 20:31:47.051983 env[1138]: time="2024-02-12T20:31:47.051902184Z" level=info msg="Container to stop \"3fba79e863888df2b81e1c9487fa2d4f7068cef2192e1df8ad50010dd4d42a32\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 12 20:31:47.054965 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-843d8ad62de5b77887340d0f5096f5839ea49364922aef905b22b8a20cc2a064-shm.mount: Deactivated successfully. 
Feb 12 20:31:47.067565 systemd[1]: cri-containerd-843d8ad62de5b77887340d0f5096f5839ea49364922aef905b22b8a20cc2a064.scope: Deactivated successfully. Feb 12 20:31:47.107141 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-843d8ad62de5b77887340d0f5096f5839ea49364922aef905b22b8a20cc2a064-rootfs.mount: Deactivated successfully. Feb 12 20:31:47.116501 env[1138]: time="2024-02-12T20:31:47.116435633Z" level=info msg="shim disconnected" id=843d8ad62de5b77887340d0f5096f5839ea49364922aef905b22b8a20cc2a064 Feb 12 20:31:47.116501 env[1138]: time="2024-02-12T20:31:47.116506552Z" level=warning msg="cleaning up after shim disconnected" id=843d8ad62de5b77887340d0f5096f5839ea49364922aef905b22b8a20cc2a064 namespace=k8s.io Feb 12 20:31:47.116832 env[1138]: time="2024-02-12T20:31:47.116520389Z" level=info msg="cleaning up dead shim" Feb 12 20:31:47.128142 env[1138]: time="2024-02-12T20:31:47.128078101Z" level=warning msg="cleanup warnings time=\"2024-02-12T20:31:47Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3876 runtime=io.containerd.runc.v2\n" Feb 12 20:31:47.128574 env[1138]: time="2024-02-12T20:31:47.128533382Z" level=info msg="TearDown network for sandbox \"843d8ad62de5b77887340d0f5096f5839ea49364922aef905b22b8a20cc2a064\" successfully" Feb 12 20:31:47.128720 env[1138]: time="2024-02-12T20:31:47.128573575Z" level=info msg="StopPodSandbox for \"843d8ad62de5b77887340d0f5096f5839ea49364922aef905b22b8a20cc2a064\" returns successfully" Feb 12 20:31:47.187781 kubelet[2023]: I0212 20:31:47.187710 2023 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/11c7c08d-5440-4e3d-bc4c-867997861322-cni-path\") pod \"11c7c08d-5440-4e3d-bc4c-867997861322\" (UID: \"11c7c08d-5440-4e3d-bc4c-867997861322\") " Feb 12 20:31:47.188633 kubelet[2023]: I0212 20:31:47.187798 2023 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: 
\"kubernetes.io/secret/11c7c08d-5440-4e3d-bc4c-867997861322-clustermesh-secrets\") pod \"11c7c08d-5440-4e3d-bc4c-867997861322\" (UID: \"11c7c08d-5440-4e3d-bc4c-867997861322\") " Feb 12 20:31:47.188633 kubelet[2023]: I0212 20:31:47.187851 2023 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/11c7c08d-5440-4e3d-bc4c-867997861322-hubble-tls\") pod \"11c7c08d-5440-4e3d-bc4c-867997861322\" (UID: \"11c7c08d-5440-4e3d-bc4c-867997861322\") " Feb 12 20:31:47.188633 kubelet[2023]: I0212 20:31:47.187883 2023 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/11c7c08d-5440-4e3d-bc4c-867997861322-hostproc\") pod \"11c7c08d-5440-4e3d-bc4c-867997861322\" (UID: \"11c7c08d-5440-4e3d-bc4c-867997861322\") " Feb 12 20:31:47.188633 kubelet[2023]: I0212 20:31:47.187933 2023 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/11c7c08d-5440-4e3d-bc4c-867997861322-cilium-cgroup\") pod \"11c7c08d-5440-4e3d-bc4c-867997861322\" (UID: \"11c7c08d-5440-4e3d-bc4c-867997861322\") " Feb 12 20:31:47.188633 kubelet[2023]: I0212 20:31:47.187967 2023 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/11c7c08d-5440-4e3d-bc4c-867997861322-host-proc-sys-net\") pod \"11c7c08d-5440-4e3d-bc4c-867997861322\" (UID: \"11c7c08d-5440-4e3d-bc4c-867997861322\") " Feb 12 20:31:47.188633 kubelet[2023]: I0212 20:31:47.188079 2023 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/11c7c08d-5440-4e3d-bc4c-867997861322-etc-cni-netd\") pod \"11c7c08d-5440-4e3d-bc4c-867997861322\" (UID: \"11c7c08d-5440-4e3d-bc4c-867997861322\") " Feb 12 20:31:47.188633 kubelet[2023]: I0212 20:31:47.188183 2023 
reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/11c7c08d-5440-4e3d-bc4c-867997861322-host-proc-sys-kernel\") pod \"11c7c08d-5440-4e3d-bc4c-867997861322\" (UID: \"11c7c08d-5440-4e3d-bc4c-867997861322\") " Feb 12 20:31:47.188633 kubelet[2023]: I0212 20:31:47.188224 2023 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/11c7c08d-5440-4e3d-bc4c-867997861322-cilium-run\") pod \"11c7c08d-5440-4e3d-bc4c-867997861322\" (UID: \"11c7c08d-5440-4e3d-bc4c-867997861322\") " Feb 12 20:31:47.188633 kubelet[2023]: I0212 20:31:47.188270 2023 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rz7bc\" (UniqueName: \"kubernetes.io/projected/11c7c08d-5440-4e3d-bc4c-867997861322-kube-api-access-rz7bc\") pod \"11c7c08d-5440-4e3d-bc4c-867997861322\" (UID: \"11c7c08d-5440-4e3d-bc4c-867997861322\") " Feb 12 20:31:47.188633 kubelet[2023]: I0212 20:31:47.188300 2023 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/11c7c08d-5440-4e3d-bc4c-867997861322-bpf-maps\") pod \"11c7c08d-5440-4e3d-bc4c-867997861322\" (UID: \"11c7c08d-5440-4e3d-bc4c-867997861322\") " Feb 12 20:31:47.188633 kubelet[2023]: I0212 20:31:47.188331 2023 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/11c7c08d-5440-4e3d-bc4c-867997861322-lib-modules\") pod \"11c7c08d-5440-4e3d-bc4c-867997861322\" (UID: \"11c7c08d-5440-4e3d-bc4c-867997861322\") " Feb 12 20:31:47.188633 kubelet[2023]: I0212 20:31:47.188369 2023 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/11c7c08d-5440-4e3d-bc4c-867997861322-cilium-ipsec-secrets\") pod \"11c7c08d-5440-4e3d-bc4c-867997861322\" 
(UID: \"11c7c08d-5440-4e3d-bc4c-867997861322\") " Feb 12 20:31:47.188633 kubelet[2023]: I0212 20:31:47.188405 2023 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/11c7c08d-5440-4e3d-bc4c-867997861322-cilium-config-path\") pod \"11c7c08d-5440-4e3d-bc4c-867997861322\" (UID: \"11c7c08d-5440-4e3d-bc4c-867997861322\") " Feb 12 20:31:47.188633 kubelet[2023]: I0212 20:31:47.188439 2023 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/11c7c08d-5440-4e3d-bc4c-867997861322-xtables-lock\") pod \"11c7c08d-5440-4e3d-bc4c-867997861322\" (UID: \"11c7c08d-5440-4e3d-bc4c-867997861322\") " Feb 12 20:31:47.188633 kubelet[2023]: I0212 20:31:47.188504 2023 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/11c7c08d-5440-4e3d-bc4c-867997861322-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "11c7c08d-5440-4e3d-bc4c-867997861322" (UID: "11c7c08d-5440-4e3d-bc4c-867997861322"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 20:31:47.188633 kubelet[2023]: I0212 20:31:47.187737 2023 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/11c7c08d-5440-4e3d-bc4c-867997861322-cni-path" (OuterVolumeSpecName: "cni-path") pod "11c7c08d-5440-4e3d-bc4c-867997861322" (UID: "11c7c08d-5440-4e3d-bc4c-867997861322"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 20:31:47.191934 kubelet[2023]: I0212 20:31:47.189675 2023 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/11c7c08d-5440-4e3d-bc4c-867997861322-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "11c7c08d-5440-4e3d-bc4c-867997861322" (UID: "11c7c08d-5440-4e3d-bc4c-867997861322"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 20:31:47.191934 kubelet[2023]: I0212 20:31:47.189731 2023 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/11c7c08d-5440-4e3d-bc4c-867997861322-hostproc" (OuterVolumeSpecName: "hostproc") pod "11c7c08d-5440-4e3d-bc4c-867997861322" (UID: "11c7c08d-5440-4e3d-bc4c-867997861322"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 20:31:47.191934 kubelet[2023]: I0212 20:31:47.189760 2023 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/11c7c08d-5440-4e3d-bc4c-867997861322-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "11c7c08d-5440-4e3d-bc4c-867997861322" (UID: "11c7c08d-5440-4e3d-bc4c-867997861322"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 20:31:47.191934 kubelet[2023]: I0212 20:31:47.189787 2023 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/11c7c08d-5440-4e3d-bc4c-867997861322-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "11c7c08d-5440-4e3d-bc4c-867997861322" (UID: "11c7c08d-5440-4e3d-bc4c-867997861322"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 20:31:47.191934 kubelet[2023]: I0212 20:31:47.189813 2023 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/11c7c08d-5440-4e3d-bc4c-867997861322-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "11c7c08d-5440-4e3d-bc4c-867997861322" (UID: "11c7c08d-5440-4e3d-bc4c-867997861322"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 20:31:47.191934 kubelet[2023]: I0212 20:31:47.189839 2023 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/11c7c08d-5440-4e3d-bc4c-867997861322-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "11c7c08d-5440-4e3d-bc4c-867997861322" (UID: "11c7c08d-5440-4e3d-bc4c-867997861322"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 20:31:47.191934 kubelet[2023]: I0212 20:31:47.189871 2023 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/11c7c08d-5440-4e3d-bc4c-867997861322-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "11c7c08d-5440-4e3d-bc4c-867997861322" (UID: "11c7c08d-5440-4e3d-bc4c-867997861322"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 20:31:47.191934 kubelet[2023]: I0212 20:31:47.190189 2023 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/11c7c08d-5440-4e3d-bc4c-867997861322-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "11c7c08d-5440-4e3d-bc4c-867997861322" (UID: "11c7c08d-5440-4e3d-bc4c-867997861322"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 20:31:47.191934 kubelet[2023]: W0212 20:31:47.190630 2023 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/11c7c08d-5440-4e3d-bc4c-867997861322/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Feb 12 20:31:47.194354 kubelet[2023]: I0212 20:31:47.194314 2023 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/11c7c08d-5440-4e3d-bc4c-867997861322-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "11c7c08d-5440-4e3d-bc4c-867997861322" (UID: "11c7c08d-5440-4e3d-bc4c-867997861322"). 
InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 12 20:31:47.197713 systemd[1]: var-lib-kubelet-pods-11c7c08d\x2d5440\x2d4e3d\x2dbc4c\x2d867997861322-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 12 20:31:47.200834 kubelet[2023]: I0212 20:31:47.200785 2023 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11c7c08d-5440-4e3d-bc4c-867997861322-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "11c7c08d-5440-4e3d-bc4c-867997861322" (UID: "11c7c08d-5440-4e3d-bc4c-867997861322"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 12 20:31:47.205157 systemd[1]: var-lib-kubelet-pods-11c7c08d\x2d5440\x2d4e3d\x2dbc4c\x2d867997861322-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2drz7bc.mount: Deactivated successfully. Feb 12 20:31:47.206930 kubelet[2023]: I0212 20:31:47.206872 2023 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11c7c08d-5440-4e3d-bc4c-867997861322-kube-api-access-rz7bc" (OuterVolumeSpecName: "kube-api-access-rz7bc") pod "11c7c08d-5440-4e3d-bc4c-867997861322" (UID: "11c7c08d-5440-4e3d-bc4c-867997861322"). InnerVolumeSpecName "kube-api-access-rz7bc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 12 20:31:47.211724 kubelet[2023]: I0212 20:31:47.211683 2023 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11c7c08d-5440-4e3d-bc4c-867997861322-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "11c7c08d-5440-4e3d-bc4c-867997861322" (UID: "11c7c08d-5440-4e3d-bc4c-867997861322"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 12 20:31:47.211941 kubelet[2023]: I0212 20:31:47.211767 2023 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11c7c08d-5440-4e3d-bc4c-867997861322-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "11c7c08d-5440-4e3d-bc4c-867997861322" (UID: "11c7c08d-5440-4e3d-bc4c-867997861322"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 12 20:31:47.229145 kubelet[2023]: I0212 20:31:47.229103 2023 setters.go:548] "Node became not ready" node="ci-3510-3-2-f80330683fdd88fa6e06.c.flatcar-212911.internal" condition={Type:Ready Status:False LastHeartbeatTime:2024-02-12 20:31:47.228998525 +0000 UTC m=+122.948716869 LastTransitionTime:2024-02-12 20:31:47.228998525 +0000 UTC m=+122.948716869 Reason:KubeletNotReady Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized} Feb 12 20:31:47.289067 kubelet[2023]: I0212 20:31:47.289011 2023 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-rz7bc\" (UniqueName: \"kubernetes.io/projected/11c7c08d-5440-4e3d-bc4c-867997861322-kube-api-access-rz7bc\") on node \"ci-3510-3-2-f80330683fdd88fa6e06.c.flatcar-212911.internal\" DevicePath \"\"" Feb 12 20:31:47.289067 kubelet[2023]: I0212 20:31:47.289074 2023 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/11c7c08d-5440-4e3d-bc4c-867997861322-bpf-maps\") on node \"ci-3510-3-2-f80330683fdd88fa6e06.c.flatcar-212911.internal\" DevicePath \"\"" Feb 12 20:31:47.289412 kubelet[2023]: I0212 20:31:47.289097 2023 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/11c7c08d-5440-4e3d-bc4c-867997861322-lib-modules\") on node \"ci-3510-3-2-f80330683fdd88fa6e06.c.flatcar-212911.internal\" DevicePath 
\"\"" Feb 12 20:31:47.289412 kubelet[2023]: I0212 20:31:47.289115 2023 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/11c7c08d-5440-4e3d-bc4c-867997861322-cilium-config-path\") on node \"ci-3510-3-2-f80330683fdd88fa6e06.c.flatcar-212911.internal\" DevicePath \"\"" Feb 12 20:31:47.289412 kubelet[2023]: I0212 20:31:47.289132 2023 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/11c7c08d-5440-4e3d-bc4c-867997861322-xtables-lock\") on node \"ci-3510-3-2-f80330683fdd88fa6e06.c.flatcar-212911.internal\" DevicePath \"\"" Feb 12 20:31:47.289412 kubelet[2023]: I0212 20:31:47.289148 2023 reconciler_common.go:300] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/11c7c08d-5440-4e3d-bc4c-867997861322-cilium-ipsec-secrets\") on node \"ci-3510-3-2-f80330683fdd88fa6e06.c.flatcar-212911.internal\" DevicePath \"\"" Feb 12 20:31:47.289412 kubelet[2023]: I0212 20:31:47.289164 2023 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/11c7c08d-5440-4e3d-bc4c-867997861322-cni-path\") on node \"ci-3510-3-2-f80330683fdd88fa6e06.c.flatcar-212911.internal\" DevicePath \"\"" Feb 12 20:31:47.289412 kubelet[2023]: I0212 20:31:47.289181 2023 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/11c7c08d-5440-4e3d-bc4c-867997861322-clustermesh-secrets\") on node \"ci-3510-3-2-f80330683fdd88fa6e06.c.flatcar-212911.internal\" DevicePath \"\"" Feb 12 20:31:47.289412 kubelet[2023]: I0212 20:31:47.289199 2023 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/11c7c08d-5440-4e3d-bc4c-867997861322-hubble-tls\") on node \"ci-3510-3-2-f80330683fdd88fa6e06.c.flatcar-212911.internal\" DevicePath \"\"" Feb 12 20:31:47.289412 kubelet[2023]: I0212 20:31:47.289215 2023 
reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/11c7c08d-5440-4e3d-bc4c-867997861322-cilium-cgroup\") on node \"ci-3510-3-2-f80330683fdd88fa6e06.c.flatcar-212911.internal\" DevicePath \"\"" Feb 12 20:31:47.289412 kubelet[2023]: I0212 20:31:47.289253 2023 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/11c7c08d-5440-4e3d-bc4c-867997861322-host-proc-sys-net\") on node \"ci-3510-3-2-f80330683fdd88fa6e06.c.flatcar-212911.internal\" DevicePath \"\"" Feb 12 20:31:47.289412 kubelet[2023]: I0212 20:31:47.289278 2023 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/11c7c08d-5440-4e3d-bc4c-867997861322-hostproc\") on node \"ci-3510-3-2-f80330683fdd88fa6e06.c.flatcar-212911.internal\" DevicePath \"\"" Feb 12 20:31:47.289412 kubelet[2023]: I0212 20:31:47.289302 2023 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/11c7c08d-5440-4e3d-bc4c-867997861322-host-proc-sys-kernel\") on node \"ci-3510-3-2-f80330683fdd88fa6e06.c.flatcar-212911.internal\" DevicePath \"\"" Feb 12 20:31:47.289412 kubelet[2023]: I0212 20:31:47.289319 2023 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/11c7c08d-5440-4e3d-bc4c-867997861322-cilium-run\") on node \"ci-3510-3-2-f80330683fdd88fa6e06.c.flatcar-212911.internal\" DevicePath \"\"" Feb 12 20:31:47.289412 kubelet[2023]: I0212 20:31:47.289336 2023 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/11c7c08d-5440-4e3d-bc4c-867997861322-etc-cni-netd\") on node \"ci-3510-3-2-f80330683fdd88fa6e06.c.flatcar-212911.internal\" DevicePath \"\"" Feb 12 20:31:47.890012 systemd[1]: 
var-lib-kubelet-pods-11c7c08d\x2d5440\x2d4e3d\x2dbc4c\x2d867997861322-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Feb 12 20:31:47.890170 systemd[1]: var-lib-kubelet-pods-11c7c08d\x2d5440\x2d4e3d\x2dbc4c\x2d867997861322-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 12 20:31:48.055630 kubelet[2023]: I0212 20:31:48.055591 2023 scope.go:115] "RemoveContainer" containerID="3fba79e863888df2b81e1c9487fa2d4f7068cef2192e1df8ad50010dd4d42a32" Feb 12 20:31:48.057946 env[1138]: time="2024-02-12T20:31:48.057759206Z" level=info msg="RemoveContainer for \"3fba79e863888df2b81e1c9487fa2d4f7068cef2192e1df8ad50010dd4d42a32\"" Feb 12 20:31:48.062427 systemd[1]: Removed slice kubepods-burstable-pod11c7c08d_5440_4e3d_bc4c_867997861322.slice. Feb 12 20:31:48.065778 env[1138]: time="2024-02-12T20:31:48.065711377Z" level=info msg="RemoveContainer for \"3fba79e863888df2b81e1c9487fa2d4f7068cef2192e1df8ad50010dd4d42a32\" returns successfully" Feb 12 20:31:48.107769 kubelet[2023]: I0212 20:31:48.107705 2023 topology_manager.go:212] "Topology Admit Handler" Feb 12 20:31:48.108064 kubelet[2023]: E0212 20:31:48.107886 2023 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="11c7c08d-5440-4e3d-bc4c-867997861322" containerName="mount-cgroup" Feb 12 20:31:48.112790 kubelet[2023]: I0212 20:31:48.112751 2023 memory_manager.go:346] "RemoveStaleState removing state" podUID="11c7c08d-5440-4e3d-bc4c-867997861322" containerName="mount-cgroup" Feb 12 20:31:48.128896 systemd[1]: Created slice kubepods-burstable-pod2fdb60d1_e82b_43d0_bd7b_ae35457ec5d7.slice. 
Feb 12 20:31:48.194944 kubelet[2023]: I0212 20:31:48.194717 2023 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2fdb60d1-e82b-43d0-bd7b-ae35457ec5d7-cilium-run\") pod \"cilium-czn24\" (UID: \"2fdb60d1-e82b-43d0-bd7b-ae35457ec5d7\") " pod="kube-system/cilium-czn24" Feb 12 20:31:48.194944 kubelet[2023]: I0212 20:31:48.194893 2023 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2fdb60d1-e82b-43d0-bd7b-ae35457ec5d7-cilium-cgroup\") pod \"cilium-czn24\" (UID: \"2fdb60d1-e82b-43d0-bd7b-ae35457ec5d7\") " pod="kube-system/cilium-czn24" Feb 12 20:31:48.195624 kubelet[2023]: I0212 20:31:48.194970 2023 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2fdb60d1-e82b-43d0-bd7b-ae35457ec5d7-host-proc-sys-net\") pod \"cilium-czn24\" (UID: \"2fdb60d1-e82b-43d0-bd7b-ae35457ec5d7\") " pod="kube-system/cilium-czn24" Feb 12 20:31:48.195624 kubelet[2023]: I0212 20:31:48.195004 2023 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2fdb60d1-e82b-43d0-bd7b-ae35457ec5d7-bpf-maps\") pod \"cilium-czn24\" (UID: \"2fdb60d1-e82b-43d0-bd7b-ae35457ec5d7\") " pod="kube-system/cilium-czn24" Feb 12 20:31:48.195624 kubelet[2023]: I0212 20:31:48.195034 2023 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2fdb60d1-e82b-43d0-bd7b-ae35457ec5d7-hubble-tls\") pod \"cilium-czn24\" (UID: \"2fdb60d1-e82b-43d0-bd7b-ae35457ec5d7\") " pod="kube-system/cilium-czn24" Feb 12 20:31:48.195624 kubelet[2023]: I0212 20:31:48.195072 2023 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kube-api-access-q4pd2\" (UniqueName: \"kubernetes.io/projected/2fdb60d1-e82b-43d0-bd7b-ae35457ec5d7-kube-api-access-q4pd2\") pod \"cilium-czn24\" (UID: \"2fdb60d1-e82b-43d0-bd7b-ae35457ec5d7\") " pod="kube-system/cilium-czn24" Feb 12 20:31:48.195624 kubelet[2023]: I0212 20:31:48.195107 2023 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2fdb60d1-e82b-43d0-bd7b-ae35457ec5d7-hostproc\") pod \"cilium-czn24\" (UID: \"2fdb60d1-e82b-43d0-bd7b-ae35457ec5d7\") " pod="kube-system/cilium-czn24" Feb 12 20:31:48.195624 kubelet[2023]: I0212 20:31:48.195141 2023 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2fdb60d1-e82b-43d0-bd7b-ae35457ec5d7-cni-path\") pod \"cilium-czn24\" (UID: \"2fdb60d1-e82b-43d0-bd7b-ae35457ec5d7\") " pod="kube-system/cilium-czn24" Feb 12 20:31:48.195624 kubelet[2023]: I0212 20:31:48.195174 2023 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2fdb60d1-e82b-43d0-bd7b-ae35457ec5d7-etc-cni-netd\") pod \"cilium-czn24\" (UID: \"2fdb60d1-e82b-43d0-bd7b-ae35457ec5d7\") " pod="kube-system/cilium-czn24" Feb 12 20:31:48.195624 kubelet[2023]: I0212 20:31:48.195205 2023 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2fdb60d1-e82b-43d0-bd7b-ae35457ec5d7-lib-modules\") pod \"cilium-czn24\" (UID: \"2fdb60d1-e82b-43d0-bd7b-ae35457ec5d7\") " pod="kube-system/cilium-czn24" Feb 12 20:31:48.195624 kubelet[2023]: I0212 20:31:48.195246 2023 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2fdb60d1-e82b-43d0-bd7b-ae35457ec5d7-clustermesh-secrets\") pod 
\"cilium-czn24\" (UID: \"2fdb60d1-e82b-43d0-bd7b-ae35457ec5d7\") " pod="kube-system/cilium-czn24" Feb 12 20:31:48.195624 kubelet[2023]: I0212 20:31:48.195284 2023 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2fdb60d1-e82b-43d0-bd7b-ae35457ec5d7-xtables-lock\") pod \"cilium-czn24\" (UID: \"2fdb60d1-e82b-43d0-bd7b-ae35457ec5d7\") " pod="kube-system/cilium-czn24" Feb 12 20:31:48.195624 kubelet[2023]: I0212 20:31:48.195317 2023 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2fdb60d1-e82b-43d0-bd7b-ae35457ec5d7-cilium-config-path\") pod \"cilium-czn24\" (UID: \"2fdb60d1-e82b-43d0-bd7b-ae35457ec5d7\") " pod="kube-system/cilium-czn24" Feb 12 20:31:48.195624 kubelet[2023]: I0212 20:31:48.195353 2023 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2fdb60d1-e82b-43d0-bd7b-ae35457ec5d7-host-proc-sys-kernel\") pod \"cilium-czn24\" (UID: \"2fdb60d1-e82b-43d0-bd7b-ae35457ec5d7\") " pod="kube-system/cilium-czn24" Feb 12 20:31:48.195624 kubelet[2023]: I0212 20:31:48.195395 2023 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/2fdb60d1-e82b-43d0-bd7b-ae35457ec5d7-cilium-ipsec-secrets\") pod \"cilium-czn24\" (UID: \"2fdb60d1-e82b-43d0-bd7b-ae35457ec5d7\") " pod="kube-system/cilium-czn24" Feb 12 20:31:48.437597 env[1138]: time="2024-02-12T20:31:48.437537479Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-czn24,Uid:2fdb60d1-e82b-43d0-bd7b-ae35457ec5d7,Namespace:kube-system,Attempt:0,}" Feb 12 20:31:48.459600 env[1138]: time="2024-02-12T20:31:48.459393469Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 20:31:48.459600 env[1138]: time="2024-02-12T20:31:48.459450420Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 20:31:48.459600 env[1138]: time="2024-02-12T20:31:48.459470235Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 20:31:48.460417 env[1138]: time="2024-02-12T20:31:48.460347476Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/345d05b0d08685837447d1fa76a3550e9e2c380590126fbea8d2ca6cb9fe3eb1 pid=3905 runtime=io.containerd.runc.v2 Feb 12 20:31:48.479390 systemd[1]: Started cri-containerd-345d05b0d08685837447d1fa76a3550e9e2c380590126fbea8d2ca6cb9fe3eb1.scope. Feb 12 20:31:48.510973 env[1138]: time="2024-02-12T20:31:48.510919085Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-czn24,Uid:2fdb60d1-e82b-43d0-bd7b-ae35457ec5d7,Namespace:kube-system,Attempt:0,} returns sandbox id \"345d05b0d08685837447d1fa76a3550e9e2c380590126fbea8d2ca6cb9fe3eb1\"" Feb 12 20:31:48.516821 env[1138]: time="2024-02-12T20:31:48.516772067Z" level=info msg="CreateContainer within sandbox \"345d05b0d08685837447d1fa76a3550e9e2c380590126fbea8d2ca6cb9fe3eb1\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 12 20:31:48.534638 env[1138]: time="2024-02-12T20:31:48.534512370Z" level=info msg="CreateContainer within sandbox \"345d05b0d08685837447d1fa76a3550e9e2c380590126fbea8d2ca6cb9fe3eb1\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"cff396036f64b406d142ac34b8b1b8263f5a1dd750f706d7155cfe8d282ef8a6\"" Feb 12 20:31:48.538065 env[1138]: time="2024-02-12T20:31:48.538013660Z" level=info msg="StartContainer for \"cff396036f64b406d142ac34b8b1b8263f5a1dd750f706d7155cfe8d282ef8a6\"" Feb 12 20:31:48.563549 systemd[1]: Started 
cri-containerd-cff396036f64b406d142ac34b8b1b8263f5a1dd750f706d7155cfe8d282ef8a6.scope. Feb 12 20:31:48.596318 kubelet[2023]: I0212 20:31:48.596272 2023 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=11c7c08d-5440-4e3d-bc4c-867997861322 path="/var/lib/kubelet/pods/11c7c08d-5440-4e3d-bc4c-867997861322/volumes" Feb 12 20:31:48.611942 env[1138]: time="2024-02-12T20:31:48.610478797Z" level=info msg="StartContainer for \"cff396036f64b406d142ac34b8b1b8263f5a1dd750f706d7155cfe8d282ef8a6\" returns successfully" Feb 12 20:31:48.622281 systemd[1]: cri-containerd-cff396036f64b406d142ac34b8b1b8263f5a1dd750f706d7155cfe8d282ef8a6.scope: Deactivated successfully. Feb 12 20:31:48.671496 env[1138]: time="2024-02-12T20:31:48.671424780Z" level=info msg="shim disconnected" id=cff396036f64b406d142ac34b8b1b8263f5a1dd750f706d7155cfe8d282ef8a6 Feb 12 20:31:48.671496 env[1138]: time="2024-02-12T20:31:48.671496158Z" level=warning msg="cleaning up after shim disconnected" id=cff396036f64b406d142ac34b8b1b8263f5a1dd750f706d7155cfe8d282ef8a6 namespace=k8s.io Feb 12 20:31:48.672200 env[1138]: time="2024-02-12T20:31:48.671511326Z" level=info msg="cleaning up dead shim" Feb 12 20:31:48.694344 env[1138]: time="2024-02-12T20:31:48.694268949Z" level=warning msg="cleanup warnings time=\"2024-02-12T20:31:48Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3988 runtime=io.containerd.runc.v2\n" Feb 12 20:31:49.069528 env[1138]: time="2024-02-12T20:31:49.069395140Z" level=info msg="CreateContainer within sandbox \"345d05b0d08685837447d1fa76a3550e9e2c380590126fbea8d2ca6cb9fe3eb1\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 12 20:31:49.118835 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3472890117.mount: Deactivated successfully. 
Feb 12 20:31:49.139092 env[1138]: time="2024-02-12T20:31:49.138972241Z" level=info msg="CreateContainer within sandbox \"345d05b0d08685837447d1fa76a3550e9e2c380590126fbea8d2ca6cb9fe3eb1\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"7d277573e10ccaf3d8596cf70b4d7ce20e777abc2b9f4dfcd954fca66c4f566c\"" Feb 12 20:31:49.143972 env[1138]: time="2024-02-12T20:31:49.142096958Z" level=info msg="StartContainer for \"7d277573e10ccaf3d8596cf70b4d7ce20e777abc2b9f4dfcd954fca66c4f566c\"" Feb 12 20:31:49.185541 systemd[1]: Started cri-containerd-7d277573e10ccaf3d8596cf70b4d7ce20e777abc2b9f4dfcd954fca66c4f566c.scope. Feb 12 20:31:49.234865 env[1138]: time="2024-02-12T20:31:49.234789985Z" level=info msg="StartContainer for \"7d277573e10ccaf3d8596cf70b4d7ce20e777abc2b9f4dfcd954fca66c4f566c\" returns successfully" Feb 12 20:31:49.240953 systemd[1]: cri-containerd-7d277573e10ccaf3d8596cf70b4d7ce20e777abc2b9f4dfcd954fca66c4f566c.scope: Deactivated successfully. Feb 12 20:31:49.272168 env[1138]: time="2024-02-12T20:31:49.272092122Z" level=info msg="shim disconnected" id=7d277573e10ccaf3d8596cf70b4d7ce20e777abc2b9f4dfcd954fca66c4f566c Feb 12 20:31:49.272168 env[1138]: time="2024-02-12T20:31:49.272156577Z" level=warning msg="cleaning up after shim disconnected" id=7d277573e10ccaf3d8596cf70b4d7ce20e777abc2b9f4dfcd954fca66c4f566c namespace=k8s.io Feb 12 20:31:49.272168 env[1138]: time="2024-02-12T20:31:49.272170693Z" level=info msg="cleaning up dead shim" Feb 12 20:31:49.284390 kubelet[2023]: W0212 20:31:49.284333 2023 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod11c7c08d_5440_4e3d_bc4c_867997861322.slice/cri-containerd-3fba79e863888df2b81e1c9487fa2d4f7068cef2192e1df8ad50010dd4d42a32.scope WatchSource:0}: container "3fba79e863888df2b81e1c9487fa2d4f7068cef2192e1df8ad50010dd4d42a32" in namespace "k8s.io": not found Feb 12 20:31:49.286382 env[1138]: 
time="2024-02-12T20:31:49.286342406Z" level=warning msg="cleanup warnings time=\"2024-02-12T20:31:49Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4052 runtime=io.containerd.runc.v2\n" Feb 12 20:31:49.771888 kubelet[2023]: E0212 20:31:49.771842 2023 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 12 20:31:49.890849 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7d277573e10ccaf3d8596cf70b4d7ce20e777abc2b9f4dfcd954fca66c4f566c-rootfs.mount: Deactivated successfully. Feb 12 20:31:50.074008 env[1138]: time="2024-02-12T20:31:50.073951446Z" level=info msg="CreateContainer within sandbox \"345d05b0d08685837447d1fa76a3550e9e2c380590126fbea8d2ca6cb9fe3eb1\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 12 20:31:50.099954 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2405101510.mount: Deactivated successfully. Feb 12 20:31:50.107724 env[1138]: time="2024-02-12T20:31:50.107650918Z" level=info msg="CreateContainer within sandbox \"345d05b0d08685837447d1fa76a3550e9e2c380590126fbea8d2ca6cb9fe3eb1\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"cd4b2aef62959e075f492184427db8ce02482083d1087f19b9cb6a1b028e4e64\"" Feb 12 20:31:50.111947 env[1138]: time="2024-02-12T20:31:50.108874705Z" level=info msg="StartContainer for \"cd4b2aef62959e075f492184427db8ce02482083d1087f19b9cb6a1b028e4e64\"" Feb 12 20:31:50.162041 systemd[1]: Started cri-containerd-cd4b2aef62959e075f492184427db8ce02482083d1087f19b9cb6a1b028e4e64.scope. Feb 12 20:31:50.206461 systemd[1]: cri-containerd-cd4b2aef62959e075f492184427db8ce02482083d1087f19b9cb6a1b028e4e64.scope: Deactivated successfully. 
Feb 12 20:31:50.207594 env[1138]: time="2024-02-12T20:31:50.207527299Z" level=info msg="StartContainer for \"cd4b2aef62959e075f492184427db8ce02482083d1087f19b9cb6a1b028e4e64\" returns successfully" Feb 12 20:31:50.241614 env[1138]: time="2024-02-12T20:31:50.241547751Z" level=info msg="shim disconnected" id=cd4b2aef62959e075f492184427db8ce02482083d1087f19b9cb6a1b028e4e64 Feb 12 20:31:50.241614 env[1138]: time="2024-02-12T20:31:50.241604306Z" level=warning msg="cleaning up after shim disconnected" id=cd4b2aef62959e075f492184427db8ce02482083d1087f19b9cb6a1b028e4e64 namespace=k8s.io Feb 12 20:31:50.241614 env[1138]: time="2024-02-12T20:31:50.241618347Z" level=info msg="cleaning up dead shim" Feb 12 20:31:50.253215 env[1138]: time="2024-02-12T20:31:50.253146772Z" level=warning msg="cleanup warnings time=\"2024-02-12T20:31:50Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4111 runtime=io.containerd.runc.v2\n" Feb 12 20:31:50.890991 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cd4b2aef62959e075f492184427db8ce02482083d1087f19b9cb6a1b028e4e64-rootfs.mount: Deactivated successfully. Feb 12 20:31:51.080075 env[1138]: time="2024-02-12T20:31:51.079893384Z" level=info msg="CreateContainer within sandbox \"345d05b0d08685837447d1fa76a3550e9e2c380590126fbea8d2ca6cb9fe3eb1\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 12 20:31:51.105012 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3572689949.mount: Deactivated successfully. 
Feb 12 20:31:51.118324 env[1138]: time="2024-02-12T20:31:51.118247005Z" level=info msg="CreateContainer within sandbox \"345d05b0d08685837447d1fa76a3550e9e2c380590126fbea8d2ca6cb9fe3eb1\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"707f2237a9b6141dec0e23ff81e9d901ec4c2b2d94f1ec7d32927ad39a3d5d0c\""
Feb 12 20:31:51.119225 env[1138]: time="2024-02-12T20:31:51.119170492Z" level=info msg="StartContainer for \"707f2237a9b6141dec0e23ff81e9d901ec4c2b2d94f1ec7d32927ad39a3d5d0c\""
Feb 12 20:31:51.151594 systemd[1]: Started cri-containerd-707f2237a9b6141dec0e23ff81e9d901ec4c2b2d94f1ec7d32927ad39a3d5d0c.scope.
Feb 12 20:31:51.200323 env[1138]: time="2024-02-12T20:31:51.199201071Z" level=info msg="StartContainer for \"707f2237a9b6141dec0e23ff81e9d901ec4c2b2d94f1ec7d32927ad39a3d5d0c\" returns successfully"
Feb 12 20:31:51.199534 systemd[1]: cri-containerd-707f2237a9b6141dec0e23ff81e9d901ec4c2b2d94f1ec7d32927ad39a3d5d0c.scope: Deactivated successfully.
Feb 12 20:31:51.233238 env[1138]: time="2024-02-12T20:31:51.233166233Z" level=info msg="shim disconnected" id=707f2237a9b6141dec0e23ff81e9d901ec4c2b2d94f1ec7d32927ad39a3d5d0c
Feb 12 20:31:51.233238 env[1138]: time="2024-02-12T20:31:51.233236520Z" level=warning msg="cleaning up after shim disconnected" id=707f2237a9b6141dec0e23ff81e9d901ec4c2b2d94f1ec7d32927ad39a3d5d0c namespace=k8s.io
Feb 12 20:31:51.233717 env[1138]: time="2024-02-12T20:31:51.233250562Z" level=info msg="cleaning up dead shim"
Feb 12 20:31:51.244885 env[1138]: time="2024-02-12T20:31:51.244824153Z" level=warning msg="cleanup warnings time=\"2024-02-12T20:31:51Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4167 runtime=io.containerd.runc.v2\n"
Feb 12 20:31:51.891117 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-707f2237a9b6141dec0e23ff81e9d901ec4c2b2d94f1ec7d32927ad39a3d5d0c-rootfs.mount: Deactivated successfully.
Feb 12 20:31:52.085313 env[1138]: time="2024-02-12T20:31:52.085243944Z" level=info msg="CreateContainer within sandbox \"345d05b0d08685837447d1fa76a3550e9e2c380590126fbea8d2ca6cb9fe3eb1\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Feb 12 20:31:52.109029 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2002798866.mount: Deactivated successfully.
Feb 12 20:31:52.126231 env[1138]: time="2024-02-12T20:31:52.126165713Z" level=info msg="CreateContainer within sandbox \"345d05b0d08685837447d1fa76a3550e9e2c380590126fbea8d2ca6cb9fe3eb1\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"69e53bc95daef0726af5e2c95fbab1c93974a3d178dc614964ec9012b12cf6e1\""
Feb 12 20:31:52.127537 env[1138]: time="2024-02-12T20:31:52.127496244Z" level=info msg="StartContainer for \"69e53bc95daef0726af5e2c95fbab1c93974a3d178dc614964ec9012b12cf6e1\""
Feb 12 20:31:52.157171 systemd[1]: Started cri-containerd-69e53bc95daef0726af5e2c95fbab1c93974a3d178dc614964ec9012b12cf6e1.scope.
Feb 12 20:31:52.201250 env[1138]: time="2024-02-12T20:31:52.201190708Z" level=info msg="StartContainer for \"69e53bc95daef0726af5e2c95fbab1c93974a3d178dc614964ec9012b12cf6e1\" returns successfully"
Feb 12 20:31:52.399842 kubelet[2023]: W0212 20:31:52.399781 2023 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2fdb60d1_e82b_43d0_bd7b_ae35457ec5d7.slice/cri-containerd-cff396036f64b406d142ac34b8b1b8263f5a1dd750f706d7155cfe8d282ef8a6.scope WatchSource:0}: task cff396036f64b406d142ac34b8b1b8263f5a1dd750f706d7155cfe8d282ef8a6 not found: not found
Feb 12 20:31:52.679212 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Feb 12 20:31:53.114444 kubelet[2023]: I0212 20:31:53.114387 2023 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-czn24" podStartSLOduration=5.114331935 podCreationTimestamp="2024-02-12 20:31:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 20:31:53.113792582 +0000 UTC m=+128.833510944" watchObservedRunningTime="2024-02-12 20:31:53.114331935 +0000 UTC m=+128.834050297"
Feb 12 20:31:53.607589 systemd[1]: run-containerd-runc-k8s.io-69e53bc95daef0726af5e2c95fbab1c93974a3d178dc614964ec9012b12cf6e1-runc.PwBINu.mount: Deactivated successfully.
Feb 12 20:31:55.508684 kubelet[2023]: W0212 20:31:55.508634 2023 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2fdb60d1_e82b_43d0_bd7b_ae35457ec5d7.slice/cri-containerd-7d277573e10ccaf3d8596cf70b4d7ce20e777abc2b9f4dfcd954fca66c4f566c.scope WatchSource:0}: task 7d277573e10ccaf3d8596cf70b4d7ce20e777abc2b9f4dfcd954fca66c4f566c not found: not found
Feb 12 20:31:55.709530 systemd-networkd[1019]: lxc_health: Link UP
Feb 12 20:31:55.750956 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Feb 12 20:31:55.755984 systemd-networkd[1019]: lxc_health: Gained carrier
Feb 12 20:31:57.509150 systemd-networkd[1019]: lxc_health: Gained IPv6LL
Feb 12 20:31:58.176136 systemd[1]: run-containerd-runc-k8s.io-69e53bc95daef0726af5e2c95fbab1c93974a3d178dc614964ec9012b12cf6e1-runc.N1gMhF.mount: Deactivated successfully.
Feb 12 20:31:58.618782 kubelet[2023]: W0212 20:31:58.618736 2023 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2fdb60d1_e82b_43d0_bd7b_ae35457ec5d7.slice/cri-containerd-cd4b2aef62959e075f492184427db8ce02482083d1087f19b9cb6a1b028e4e64.scope WatchSource:0}: task cd4b2aef62959e075f492184427db8ce02482083d1087f19b9cb6a1b028e4e64 not found: not found
Feb 12 20:32:01.729218 kubelet[2023]: W0212 20:32:01.729168 2023 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2fdb60d1_e82b_43d0_bd7b_ae35457ec5d7.slice/cri-containerd-707f2237a9b6141dec0e23ff81e9d901ec4c2b2d94f1ec7d32927ad39a3d5d0c.scope WatchSource:0}: task 707f2237a9b6141dec0e23ff81e9d901ec4c2b2d94f1ec7d32927ad39a3d5d0c not found: not found
Feb 12 20:32:02.873373 sshd[3768]: pam_unix(sshd:session): session closed for user core
Feb 12 20:32:02.877667 systemd[1]: sshd@24-10.128.0.46:22-147.75.109.163:49196.service: Deactivated successfully.
Feb 12 20:32:02.878877 systemd[1]: session-25.scope: Deactivated successfully.
Feb 12 20:32:02.880192 systemd-logind[1122]: Session 25 logged out. Waiting for processes to exit.
Feb 12 20:32:02.881682 systemd-logind[1122]: Removed session 25.