Apr 12 18:43:20.108029 kernel: Linux version 5.15.154-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Fri Apr 12 17:19:00 -00 2024
Apr 12 18:43:20.108081 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=189121f7c8c0a24098d3bb1e040d34611f7c276be43815ff7fe409fce185edaf
Apr 12 18:43:20.108098 kernel: BIOS-provided physical RAM map:
Apr 12 18:43:20.108111 kernel: BIOS-e820: [mem 0x0000000000000000-0x0000000000000fff] reserved
Apr 12 18:43:20.108123 kernel: BIOS-e820: [mem 0x0000000000001000-0x0000000000054fff] usable
Apr 12 18:43:20.108136 kernel: BIOS-e820: [mem 0x0000000000055000-0x000000000005ffff] reserved
Apr 12 18:43:20.108155 kernel: BIOS-e820: [mem 0x0000000000060000-0x0000000000097fff] usable
Apr 12 18:43:20.108168 kernel: BIOS-e820: [mem 0x0000000000098000-0x000000000009ffff] reserved
Apr 12 18:43:20.108182 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bf8ecfff] usable
Apr 12 18:43:20.108195 kernel: BIOS-e820: [mem 0x00000000bf8ed000-0x00000000bfb6cfff] reserved
Apr 12 18:43:20.108209 kernel: BIOS-e820: [mem 0x00000000bfb6d000-0x00000000bfb7efff] ACPI data
Apr 12 18:43:20.108222 kernel: BIOS-e820: [mem 0x00000000bfb7f000-0x00000000bfbfefff] ACPI NVS
Apr 12 18:43:20.108237 kernel: BIOS-e820: [mem 0x00000000bfbff000-0x00000000bffdffff] usable
Apr 12 18:43:20.108256 kernel: BIOS-e820: [mem 0x00000000bffe0000-0x00000000bfffffff] reserved
Apr 12 18:43:20.108276 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000021fffffff] usable
Apr 12 18:43:20.108292 kernel: NX (Execute Disable) protection: active
Apr 12 18:43:20.108315 kernel: efi: EFI v2.70 by EDK II
Apr 12 18:43:20.108330 kernel: efi: TPMFinalLog=0xbfbf7000 ACPI=0xbfb7e000 ACPI 2.0=0xbfb7e014 SMBIOS=0xbf9ca000 MEMATTR=0xbe36f198 RNG=0xbfb73018 TPMEventLog=0xbe2b3018
Apr 12 18:43:20.108345 kernel: random: crng init done
Apr 12 18:43:20.108359 kernel: SMBIOS 2.4 present.
Apr 12 18:43:20.108373 kernel: DMI: Google Google Compute Engine/Google Compute Engine, BIOS Google 03/27/2024
Apr 12 18:43:20.108387 kernel: Hypervisor detected: KVM
Apr 12 18:43:20.108406 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Apr 12 18:43:20.108420 kernel: kvm-clock: cpu 0, msr 89191001, primary cpu clock
Apr 12 18:43:20.108443 kernel: kvm-clock: using sched offset of 13283874269 cycles
Apr 12 18:43:20.108458 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Apr 12 18:43:20.108472 kernel: tsc: Detected 2299.998 MHz processor
Apr 12 18:43:20.108488 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Apr 12 18:43:20.108503 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Apr 12 18:43:20.108519 kernel: last_pfn = 0x220000 max_arch_pfn = 0x400000000
Apr 12 18:43:20.108535 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Apr 12 18:43:20.108559 kernel: last_pfn = 0xbffe0 max_arch_pfn = 0x400000000
Apr 12 18:43:20.108583 kernel: Using GB pages for direct mapping
Apr 12 18:43:20.108599 kernel: Secure boot disabled
Apr 12 18:43:20.108614 kernel: ACPI: Early table checksum verification disabled
Apr 12 18:43:20.108628 kernel: ACPI: RSDP 0x00000000BFB7E014 000024 (v02 Google)
Apr 12 18:43:20.108642 kernel: ACPI: XSDT 0x00000000BFB7D0E8 00005C (v01 Google GOOGFACP 00000001 01000013)
Apr 12 18:43:20.108657 kernel: ACPI: FACP 0x00000000BFB78000 0000F4 (v02 Google GOOGFACP 00000001 GOOG 00000001)
Apr 12 18:43:20.108671 kernel: ACPI: DSDT 0x00000000BFB79000 001A64 (v01 Google GOOGDSDT 00000001 GOOG 00000001)
Apr 12 18:43:20.108686 kernel: ACPI: FACS 0x00000000BFBF2000 000040
Apr 12 18:43:20.108711 kernel: ACPI: SSDT 0x00000000BFB7C000 000316 (v02 GOOGLE Tpm2Tabl 00001000 INTL 20211217)
Apr 12 18:43:20.108726 kernel: ACPI: TPM2 0x00000000BFB7B000 000034 (v04 GOOGLE 00000001 GOOG 00000001)
Apr 12 18:43:20.108742 kernel: ACPI: SRAT 0x00000000BFB77000 0000C8 (v03 Google GOOGSRAT 00000001 GOOG 00000001)
Apr 12 18:43:20.108757 kernel: ACPI: APIC 0x00000000BFB76000 000076 (v05 Google GOOGAPIC 00000001 GOOG 00000001)
Apr 12 18:43:20.108773 kernel: ACPI: SSDT 0x00000000BFB75000 000980 (v01 Google GOOGSSDT 00000001 GOOG 00000001)
Apr 12 18:43:20.108789 kernel: ACPI: WAET 0x00000000BFB74000 000028 (v01 Google GOOGWAET 00000001 GOOG 00000001)
Apr 12 18:43:20.108808 kernel: ACPI: Reserving FACP table memory at [mem 0xbfb78000-0xbfb780f3]
Apr 12 18:43:20.108822 kernel: ACPI: Reserving DSDT table memory at [mem 0xbfb79000-0xbfb7aa63]
Apr 12 18:43:20.108838 kernel: ACPI: Reserving FACS table memory at [mem 0xbfbf2000-0xbfbf203f]
Apr 12 18:43:20.108854 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb7c000-0xbfb7c315]
Apr 12 18:43:20.108870 kernel: ACPI: Reserving TPM2 table memory at [mem 0xbfb7b000-0xbfb7b033]
Apr 12 18:43:20.108886 kernel: ACPI: Reserving SRAT table memory at [mem 0xbfb77000-0xbfb770c7]
Apr 12 18:43:20.108917 kernel: ACPI: Reserving APIC table memory at [mem 0xbfb76000-0xbfb76075]
Apr 12 18:43:20.108945 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb75000-0xbfb7597f]
Apr 12 18:43:20.108962 kernel: ACPI: Reserving WAET table memory at [mem 0xbfb74000-0xbfb74027]
Apr 12 18:43:20.108982 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Apr 12 18:43:20.108998 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Apr 12 18:43:20.109014 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Apr 12 18:43:20.109031 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0xbfffffff]
Apr 12 18:43:20.109046 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x21fffffff]
Apr 12 18:43:20.109063 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0xbfffffff] -> [mem 0x00000000-0xbfffffff]
Apr 12 18:43:20.109079 kernel: NUMA: Node 0 [mem 0x00000000-0xbfffffff] + [mem 0x100000000-0x21fffffff] -> [mem 0x00000000-0x21fffffff]
Apr 12 18:43:20.109095 kernel: NODE_DATA(0) allocated [mem 0x21fff8000-0x21fffdfff]
Apr 12 18:43:20.109112 kernel: Zone ranges:
Apr 12 18:43:20.109131 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Apr 12 18:43:20.109147 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Apr 12 18:43:20.109163 kernel: Normal [mem 0x0000000100000000-0x000000021fffffff]
Apr 12 18:43:20.109179 kernel: Movable zone start for each node
Apr 12 18:43:20.109195 kernel: Early memory node ranges
Apr 12 18:43:20.109212 kernel: node 0: [mem 0x0000000000001000-0x0000000000054fff]
Apr 12 18:43:20.109228 kernel: node 0: [mem 0x0000000000060000-0x0000000000097fff]
Apr 12 18:43:20.109244 kernel: node 0: [mem 0x0000000000100000-0x00000000bf8ecfff]
Apr 12 18:43:20.109261 kernel: node 0: [mem 0x00000000bfbff000-0x00000000bffdffff]
Apr 12 18:43:20.109280 kernel: node 0: [mem 0x0000000100000000-0x000000021fffffff]
Apr 12 18:43:20.109296 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000021fffffff]
Apr 12 18:43:20.109321 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Apr 12 18:43:20.109337 kernel: On node 0, zone DMA: 11 pages in unavailable ranges
Apr 12 18:43:20.109353 kernel: On node 0, zone DMA: 104 pages in unavailable ranges
Apr 12 18:43:20.109370 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Apr 12 18:43:20.109386 kernel: On node 0, zone Normal: 32 pages in unavailable ranges
Apr 12 18:43:20.109402 kernel: ACPI: PM-Timer IO Port: 0xb008
Apr 12 18:43:20.109419 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Apr 12 18:43:20.109439 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Apr 12 18:43:20.109456 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Apr 12 18:43:20.109472 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Apr 12 18:43:20.109486 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Apr 12 18:43:20.109500 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Apr 12 18:43:20.109523 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Apr 12 18:43:20.109538 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Apr 12 18:43:20.109554 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices
Apr 12 18:43:20.109571 kernel: Booting paravirtualized kernel on KVM
Apr 12 18:43:20.109589 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Apr 12 18:43:20.109605 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:2 nr_node_ids:1
Apr 12 18:43:20.109620 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u1048576
Apr 12 18:43:20.109634 kernel: pcpu-alloc: s188696 r8192 d32488 u1048576 alloc=1*2097152
Apr 12 18:43:20.109650 kernel: pcpu-alloc: [0] 0 1
Apr 12 18:43:20.109666 kernel: kvm-guest: PV spinlocks enabled
Apr 12 18:43:20.109682 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Apr 12 18:43:20.109697 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1931256
Apr 12 18:43:20.109712 kernel: Policy zone: Normal
Apr 12 18:43:20.109733 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=189121f7c8c0a24098d3bb1e040d34611f7c276be43815ff7fe409fce185edaf
Apr 12 18:43:20.109749 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Apr 12 18:43:20.109765 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Apr 12 18:43:20.109780 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Apr 12 18:43:20.109795 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 12 18:43:20.109812 kernel: Memory: 7534424K/7860584K available (12294K kernel code, 2275K rwdata, 13708K rodata, 47440K init, 4148K bss, 325900K reserved, 0K cma-reserved)
Apr 12 18:43:20.109827 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Apr 12 18:43:20.109842 kernel: Kernel/User page tables isolation: enabled
Apr 12 18:43:20.109861 kernel: ftrace: allocating 34508 entries in 135 pages
Apr 12 18:43:20.109876 kernel: ftrace: allocated 135 pages with 4 groups
Apr 12 18:43:20.109891 kernel: rcu: Hierarchical RCU implementation.
Apr 12 18:43:20.109973 kernel: rcu: RCU event tracing is enabled.
Apr 12 18:43:20.109987 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Apr 12 18:43:20.110002 kernel: Rude variant of Tasks RCU enabled.
Apr 12 18:43:20.110017 kernel: Tracing variant of Tasks RCU enabled.
Apr 12 18:43:20.110034 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 12 18:43:20.110051 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Apr 12 18:43:20.110073 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Apr 12 18:43:20.110103 kernel: Console: colour dummy device 80x25
Apr 12 18:43:20.110122 kernel: printk: console [ttyS0] enabled
Apr 12 18:43:20.110143 kernel: ACPI: Core revision 20210730
Apr 12 18:43:20.110160 kernel: APIC: Switch to symmetric I/O mode setup
Apr 12 18:43:20.110177 kernel: x2apic enabled
Apr 12 18:43:20.110193 kernel: Switched APIC routing to physical x2apic.
Apr 12 18:43:20.110209 kernel: ..TIMER: vector=0x30 apic1=0 pin1=0 apic2=-1 pin2=-1
Apr 12 18:43:20.110227 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns
Apr 12 18:43:20.110244 kernel: Calibrating delay loop (skipped) preset value.. 4599.99 BogoMIPS (lpj=2299998)
Apr 12 18:43:20.110264 kernel: Last level iTLB entries: 4KB 1024, 2MB 1024, 4MB 1024
Apr 12 18:43:20.110279 kernel: Last level dTLB entries: 4KB 1024, 2MB 1024, 4MB 1024, 1GB 4
Apr 12 18:43:20.110296 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Apr 12 18:43:20.110323 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit
Apr 12 18:43:20.110339 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall
Apr 12 18:43:20.110356 kernel: Spectre V2 : Mitigation: IBRS
Apr 12 18:43:20.110374 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Apr 12 18:43:20.110395 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Apr 12 18:43:20.110414 kernel: RETBleed: Mitigation: IBRS
Apr 12 18:43:20.110431 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Apr 12 18:43:20.110468 kernel: Spectre V2 : User space: Mitigation: STIBP via seccomp and prctl
Apr 12 18:43:20.110487 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp
Apr 12 18:43:20.110505 kernel: MDS: Mitigation: Clear CPU buffers
Apr 12 18:43:20.110523 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Apr 12 18:43:20.110541 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Apr 12 18:43:20.110562 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Apr 12 18:43:20.110577 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Apr 12 18:43:20.110594 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Apr 12 18:43:20.110612 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Apr 12 18:43:20.110630 kernel: Freeing SMP alternatives memory: 32K
Apr 12 18:43:20.110648 kernel: pid_max: default: 32768 minimum: 301
Apr 12 18:43:20.110665 kernel: LSM: Security Framework initializing
Apr 12 18:43:20.110683 kernel: SELinux: Initializing.
Apr 12 18:43:20.110700 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Apr 12 18:43:20.110721 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Apr 12 18:43:20.110739 kernel: smpboot: CPU0: Intel(R) Xeon(R) CPU @ 2.30GHz (family: 0x6, model: 0x3f, stepping: 0x0)
Apr 12 18:43:20.110758 kernel: Performance Events: unsupported p6 CPU model 63 no PMU driver, software events only.
Apr 12 18:43:20.110776 kernel: signal: max sigframe size: 1776
Apr 12 18:43:20.110793 kernel: rcu: Hierarchical SRCU implementation.
Apr 12 18:43:20.110810 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Apr 12 18:43:20.110827 kernel: smp: Bringing up secondary CPUs ...
Apr 12 18:43:20.110844 kernel: x86: Booting SMP configuration:
Apr 12 18:43:20.110862 kernel: .... node #0, CPUs: #1
Apr 12 18:43:20.110883 kernel: kvm-clock: cpu 1, msr 89191041, secondary cpu clock
Apr 12 18:43:20.110914 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Apr 12 18:43:20.110944 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Apr 12 18:43:20.110962 kernel: smp: Brought up 1 node, 2 CPUs
Apr 12 18:43:20.110980 kernel: smpboot: Max logical packages: 1
Apr 12 18:43:20.110998 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS)
Apr 12 18:43:20.111015 kernel: devtmpfs: initialized
Apr 12 18:43:20.111033 kernel: x86/mm: Memory block size: 128MB
Apr 12 18:43:20.111051 kernel: ACPI: PM: Registering ACPI NVS region [mem 0xbfb7f000-0xbfbfefff] (524288 bytes)
Apr 12 18:43:20.111073 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 12 18:43:20.111091 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Apr 12 18:43:20.111109 kernel: pinctrl core: initialized pinctrl subsystem
Apr 12 18:43:20.111127 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 12 18:43:20.111145 kernel: audit: initializing netlink subsys (disabled)
Apr 12 18:43:20.111163 kernel: audit: type=2000 audit(1712947398.606:1): state=initialized audit_enabled=0 res=1
Apr 12 18:43:20.111180 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 12 18:43:20.111197 kernel: thermal_sys: Registered thermal governor 'user_space'
Apr 12 18:43:20.111215 kernel: cpuidle: using governor menu
Apr 12 18:43:20.111236 kernel: ACPI: bus type PCI registered
Apr 12 18:43:20.111253 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 12 18:43:20.111271 kernel: dca service started, version 1.12.1
Apr 12 18:43:20.111288 kernel: PCI: Using configuration type 1 for base access
Apr 12 18:43:20.111312 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Apr 12 18:43:20.111330 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Apr 12 18:43:20.111348 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Apr 12 18:43:20.111366 kernel: ACPI: Added _OSI(Module Device)
Apr 12 18:43:20.111384 kernel: ACPI: Added _OSI(Processor Device)
Apr 12 18:43:20.111404 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Apr 12 18:43:20.111420 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 12 18:43:20.111438 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Apr 12 18:43:20.111455 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Apr 12 18:43:20.111473 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Apr 12 18:43:20.111490 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Apr 12 18:43:20.111508 kernel: ACPI: Interpreter enabled
Apr 12 18:43:20.111526 kernel: ACPI: PM: (supports S0 S3 S5)
Apr 12 18:43:20.111543 kernel: ACPI: Using IOAPIC for interrupt routing
Apr 12 18:43:20.111565 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Apr 12 18:43:20.111583 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F
Apr 12 18:43:20.111601 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Apr 12 18:43:20.111849 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Apr 12 18:43:20.112033 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
Apr 12 18:43:20.112057 kernel: PCI host bridge to bus 0000:00
Apr 12 18:43:20.112211 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Apr 12 18:43:20.112370 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Apr 12 18:43:20.112523 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Apr 12 18:43:20.112672 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfefff window]
Apr 12 18:43:20.112821 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Apr 12 18:43:20.113327 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Apr 12 18:43:20.113791 kernel: pci 0000:00:01.0: [8086:7110] type 00 class 0x060100
Apr 12 18:43:20.114242 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Apr 12 18:43:20.114429 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Apr 12 18:43:20.114612 kernel: pci 0000:00:03.0: [1af4:1004] type 00 class 0x000000
Apr 12 18:43:20.114782 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc040-0xc07f]
Apr 12 18:43:20.131040 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc0001000-0xc000107f]
Apr 12 18:43:20.131264 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Apr 12 18:43:20.131444 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc03f]
Apr 12 18:43:20.131637 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc0000000-0xc000007f]
Apr 12 18:43:20.131814 kernel: pci 0000:00:05.0: [1af4:1005] type 00 class 0x00ff00
Apr 12 18:43:20.132011 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc080-0xc09f]
Apr 12 18:43:20.132222 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xc0002000-0xc000203f]
Apr 12 18:43:20.132243 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Apr 12 18:43:20.132261 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Apr 12 18:43:20.132279 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Apr 12 18:43:20.132299 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Apr 12 18:43:20.132323 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Apr 12 18:43:20.132338 kernel: iommu: Default domain type: Translated
Apr 12 18:43:20.132360 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Apr 12 18:43:20.132386 kernel: vgaarb: loaded
Apr 12 18:43:20.132407 kernel: pps_core: LinuxPPS API ver. 1 registered
Apr 12 18:43:20.132423 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Apr 12 18:43:20.132440 kernel: PTP clock support registered
Apr 12 18:43:20.132455 kernel: Registered efivars operations
Apr 12 18:43:20.132474 kernel: PCI: Using ACPI for IRQ routing
Apr 12 18:43:20.132489 kernel: PCI: pci_cache_line_size set to 64 bytes
Apr 12 18:43:20.132505 kernel: e820: reserve RAM buffer [mem 0x00055000-0x0005ffff]
Apr 12 18:43:20.132522 kernel: e820: reserve RAM buffer [mem 0x00098000-0x0009ffff]
Apr 12 18:43:20.132538 kernel: e820: reserve RAM buffer [mem 0xbf8ed000-0xbfffffff]
Apr 12 18:43:20.132555 kernel: e820: reserve RAM buffer [mem 0xbffe0000-0xbfffffff]
Apr 12 18:43:20.132569 kernel: clocksource: Switched to clocksource kvm-clock
Apr 12 18:43:20.132586 kernel: VFS: Disk quotas dquot_6.6.0
Apr 12 18:43:20.132603 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 12 18:43:20.132624 kernel: pnp: PnP ACPI init
Apr 12 18:43:20.132642 kernel: pnp: PnP ACPI: found 7 devices
Apr 12 18:43:20.132660 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Apr 12 18:43:20.132677 kernel: NET: Registered PF_INET protocol family
Apr 12 18:43:20.132696 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Apr 12 18:43:20.132713 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Apr 12 18:43:20.132731 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 12 18:43:20.132748 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Apr 12 18:43:20.132766 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Apr 12 18:43:20.132786 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Apr 12 18:43:20.132803 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Apr 12 18:43:20.132821 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Apr 12 18:43:20.132838 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Apr 12 18:43:20.132856 kernel: NET: Registered PF_XDP protocol family
Apr 12 18:43:20.134098 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Apr 12 18:43:20.134267 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Apr 12 18:43:20.134429 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Apr 12 18:43:20.134583 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfefff window]
Apr 12 18:43:20.134754 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Apr 12 18:43:20.134779 kernel: PCI: CLS 0 bytes, default 64
Apr 12 18:43:20.134797 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Apr 12 18:43:20.134815 kernel: software IO TLB: mapped [mem 0x00000000b7ff7000-0x00000000bbff7000] (64MB)
Apr 12 18:43:20.134833 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Apr 12 18:43:20.134851 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns
Apr 12 18:43:20.134869 kernel: clocksource: Switched to clocksource tsc
Apr 12 18:43:20.134891 kernel: Initialise system trusted keyrings
Apr 12 18:43:20.137969 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Apr 12 18:43:20.137998 kernel: Key type asymmetric registered
Apr 12 18:43:20.138017 kernel: Asymmetric key parser 'x509' registered
Apr 12 18:43:20.138035 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Apr 12 18:43:20.138053 kernel: io scheduler mq-deadline registered
Apr 12 18:43:20.138081 kernel: io scheduler kyber registered
Apr 12 18:43:20.138099 kernel: io scheduler bfq registered
Apr 12 18:43:20.138117 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Apr 12 18:43:20.138141 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Apr 12 18:43:20.138355 kernel: virtio-pci 0000:00:03.0: virtio_pci: leaving for legacy driver
Apr 12 18:43:20.138381 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10
Apr 12 18:43:20.138548 kernel: virtio-pci 0000:00:04.0: virtio_pci: leaving for legacy driver
Apr 12 18:43:20.138572 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Apr 12 18:43:20.138744 kernel: virtio-pci 0000:00:05.0: virtio_pci: leaving for legacy driver
Apr 12 18:43:20.138767 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Apr 12 18:43:20.138785 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Apr 12 18:43:20.138804 kernel: 00:04: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Apr 12 18:43:20.138826 kernel: 00:05: ttyS2 at I/O 0x3e8 (irq = 6, base_baud = 115200) is a 16550A
Apr 12 18:43:20.138844 kernel: 00:06: ttyS3 at I/O 0x2e8 (irq = 7, base_baud = 115200) is a 16550A
Apr 12 18:43:20.139034 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x9009, rev-id 0)
Apr 12 18:43:20.139060 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Apr 12 18:43:20.139079 kernel: i8042: Warning: Keylock active
Apr 12 18:43:20.139096 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Apr 12 18:43:20.139115 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Apr 12 18:43:20.139280 kernel: rtc_cmos 00:00: RTC can wake from S4
Apr 12 18:43:20.139449 kernel: rtc_cmos 00:00: registered as rtc0
Apr 12 18:43:20.139602 kernel: rtc_cmos 00:00: setting system clock to 2024-04-12T18:43:19 UTC (1712947399)
Apr 12 18:43:20.139764 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Apr 12 18:43:20.139787 kernel: intel_pstate: CPU model not supported
Apr 12 18:43:20.139805 kernel: pstore: Registered efi as persistent store backend
Apr 12 18:43:20.139822 kernel: NET: Registered PF_INET6 protocol family
Apr 12 18:43:20.139840 kernel: Segment Routing with IPv6
Apr 12 18:43:20.139858 kernel: In-situ OAM (IOAM) with IPv6
Apr 12 18:43:20.139880 kernel: NET: Registered PF_PACKET protocol family
Apr 12 18:43:20.139898 kernel: Key type dns_resolver registered
Apr 12 18:43:20.139926 kernel: IPI shorthand broadcast: enabled
Apr 12 18:43:20.139941 kernel: sched_clock: Marking stable (725683987, 155049196)->(950408576, -69675393)
Apr 12 18:43:20.139957 kernel: registered taskstats version 1
Apr 12 18:43:20.139974 kernel: Loading compiled-in X.509 certificates
Apr 12 18:43:20.139989 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Apr 12 18:43:20.140006 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.154-flatcar: 1fa140a38fc6bd27c8b56127e4d1eb4f665c7ec4'
Apr 12 18:43:20.140023 kernel: Key type .fscrypt registered
Apr 12 18:43:20.140043 kernel: Key type fscrypt-provisioning registered
Apr 12 18:43:20.140060 kernel: pstore: Using crash dump compression: deflate
Apr 12 18:43:20.140076 kernel: ima: Allocated hash algorithm: sha1
Apr 12 18:43:20.140092 kernel: ima: No architecture policies found
Apr 12 18:43:20.140109 kernel: Freeing unused kernel image (initmem) memory: 47440K
Apr 12 18:43:20.140125 kernel: Write protecting the kernel read-only data: 28672k
Apr 12 18:43:20.140142 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
Apr 12 18:43:20.140158 kernel: Freeing unused kernel image (rodata/data gap) memory: 628K
Apr 12 18:43:20.140178 kernel: Run /init as init process
Apr 12 18:43:20.140194 kernel: with arguments:
Apr 12 18:43:20.140210 kernel: /init
Apr 12 18:43:20.140226 kernel: with environment:
Apr 12 18:43:20.140242 kernel: HOME=/
Apr 12 18:43:20.140258 kernel: TERM=linux
Apr 12 18:43:20.140274 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Apr 12 18:43:20.140294 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Apr 12 18:43:20.140324 systemd[1]: Detected virtualization kvm.
Apr 12 18:43:20.140341 systemd[1]: Detected architecture x86-64.
Apr 12 18:43:20.140357 systemd[1]: Running in initrd.
Apr 12 18:43:20.140374 systemd[1]: No hostname configured, using default hostname.
Apr 12 18:43:20.140390 systemd[1]: Hostname set to .
Apr 12 18:43:20.140409 systemd[1]: Initializing machine ID from VM UUID.
Apr 12 18:43:20.140426 systemd[1]: Queued start job for default target initrd.target.
Apr 12 18:43:20.140442 systemd[1]: Started systemd-ask-password-console.path.
Apr 12 18:43:20.140462 systemd[1]: Reached target cryptsetup.target.
Apr 12 18:43:20.140479 systemd[1]: Reached target paths.target.
Apr 12 18:43:20.140495 systemd[1]: Reached target slices.target.
Apr 12 18:43:20.140513 systemd[1]: Reached target swap.target.
Apr 12 18:43:20.140529 systemd[1]: Reached target timers.target.
Apr 12 18:43:20.140547 systemd[1]: Listening on iscsid.socket.
Apr 12 18:43:20.140565 systemd[1]: Listening on iscsiuio.socket.
Apr 12 18:43:20.140582 systemd[1]: Listening on systemd-journald-audit.socket.
Apr 12 18:43:20.140602 systemd[1]: Listening on systemd-journald-dev-log.socket.
Apr 12 18:43:20.140619 systemd[1]: Listening on systemd-journald.socket.
Apr 12 18:43:20.140636 systemd[1]: Listening on systemd-networkd.socket.
Apr 12 18:43:20.140653 systemd[1]: Listening on systemd-udevd-control.socket.
Apr 12 18:43:20.140670 systemd[1]: Listening on systemd-udevd-kernel.socket.
Apr 12 18:43:20.140687 systemd[1]: Reached target sockets.target.
Apr 12 18:43:20.140704 systemd[1]: Starting kmod-static-nodes.service...
Apr 12 18:43:20.140721 systemd[1]: Finished network-cleanup.service.
Apr 12 18:43:20.140741 systemd[1]: Starting systemd-fsck-usr.service...
Apr 12 18:43:20.140758 systemd[1]: Starting systemd-journald.service...
Apr 12 18:43:20.140776 systemd[1]: Starting systemd-modules-load.service...
Apr 12 18:43:20.140810 systemd[1]: Starting systemd-resolved.service...
Apr 12 18:43:20.140832 systemd[1]: Starting systemd-vconsole-setup.service...
Apr 12 18:43:20.140849 systemd[1]: Finished kmod-static-nodes.service.
Apr 12 18:43:20.140866 kernel: audit: type=1130 audit(1712947400.101:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:43:20.140887 systemd[1]: Finished systemd-fsck-usr.service.
Apr 12 18:43:20.142495 kernel: audit: type=1130 audit(1712947400.113:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:43:20.142516 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Apr 12 18:43:20.142542 systemd[1]: Finished systemd-vconsole-setup.service.
Apr 12 18:43:20.142542 systemd-journald[189]: Journal started
Apr 12 18:43:20.142640 systemd-journald[189]: Runtime Journal (/run/log/journal/6c19443c0101c59c599e723260c5e385) is 8.0M, max 148.8M, 140.8M free.
Apr 12 18:43:20.101000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:43:20.113000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:43:20.132406 systemd-modules-load[190]: Inserted module 'overlay'
Apr 12 18:43:20.167068 kernel: audit: type=1130 audit(1712947400.148:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:43:20.167114 systemd[1]: Started systemd-journald.service.
Apr 12 18:43:20.167142 kernel: audit: type=1130 audit(1712947400.155:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:43:20.148000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:43:20.155000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:43:20.157373 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Apr 12 18:43:20.166414 systemd[1]: Starting dracut-cmdline-ask.service...
Apr 12 18:43:20.175099 kernel: audit: type=1130 audit(1712947400.163:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:43:20.163000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:43:20.195423 systemd-resolved[191]: Positive Trust Anchors:
Apr 12 18:43:20.196016 systemd-resolved[191]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 12 18:43:20.196336 systemd-resolved[191]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Apr 12 18:43:20.206380 systemd[1]: Finished dracut-cmdline-ask.service.
Apr 12 18:43:20.237430 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Apr 12 18:43:20.237471 kernel: audit: type=1130 audit(1712947400.211:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:43:20.237498 kernel: Bridge firewalling registered
Apr 12 18:43:20.237519 kernel: audit: type=1130 audit(1712947400.218:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:43:20.211000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:43:20.218000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:43:20.209602 systemd-resolved[191]: Defaulting to hostname 'linux'.
Apr 12 18:43:20.213274 systemd[1]: Started systemd-resolved.service.
Apr 12 18:43:20.247828 dracut-cmdline[205]: dracut-dracut-053 Apr 12 18:43:20.217945 systemd-modules-load[190]: Inserted module 'br_netfilter' Apr 12 18:43:20.220190 systemd[1]: Reached target nss-lookup.target. Apr 12 18:43:20.258038 kernel: SCSI subsystem initialized Apr 12 18:43:20.258084 dracut-cmdline[205]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=189121f7c8c0a24098d3bb1e040d34611f7c276be43815ff7fe409fce185edaf Apr 12 18:43:20.228424 systemd[1]: Starting dracut-cmdline.service... Apr 12 18:43:20.279666 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Apr 12 18:43:20.279742 kernel: device-mapper: uevent: version 1.0.3 Apr 12 18:43:20.281390 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Apr 12 18:43:20.286246 systemd-modules-load[190]: Inserted module 'dm_multipath' Apr 12 18:43:20.287832 systemd[1]: Finished systemd-modules-load.service. Apr 12 18:43:20.298000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:43:20.301347 systemd[1]: Starting systemd-sysctl.service... Apr 12 18:43:20.313062 kernel: audit: type=1130 audit(1712947400.298:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:43:20.314585 systemd[1]: Finished systemd-sysctl.service. 
Apr 12 18:43:20.326083 kernel: audit: type=1130 audit(1712947400.317:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:43:20.317000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:43:20.348941 kernel: Loading iSCSI transport class v2.0-870. Apr 12 18:43:20.369935 kernel: iscsi: registered transport (tcp) Apr 12 18:43:20.397251 kernel: iscsi: registered transport (qla4xxx) Apr 12 18:43:20.397341 kernel: QLogic iSCSI HBA Driver Apr 12 18:43:20.442898 systemd[1]: Finished dracut-cmdline.service. Apr 12 18:43:20.442000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:43:20.446189 systemd[1]: Starting dracut-pre-udev.service... Apr 12 18:43:20.505971 kernel: raid6: avx2x4 gen() 17878 MB/s Apr 12 18:43:20.522945 kernel: raid6: avx2x4 xor() 8115 MB/s Apr 12 18:43:20.539951 kernel: raid6: avx2x2 gen() 18004 MB/s Apr 12 18:43:20.556943 kernel: raid6: avx2x2 xor() 18663 MB/s Apr 12 18:43:20.573944 kernel: raid6: avx2x1 gen() 14172 MB/s Apr 12 18:43:20.590943 kernel: raid6: avx2x1 xor() 16122 MB/s Apr 12 18:43:20.607941 kernel: raid6: sse2x4 gen() 11062 MB/s Apr 12 18:43:20.624942 kernel: raid6: sse2x4 xor() 6637 MB/s Apr 12 18:43:20.641953 kernel: raid6: sse2x2 gen() 11939 MB/s Apr 12 18:43:20.658931 kernel: raid6: sse2x2 xor() 7418 MB/s Apr 12 18:43:20.675949 kernel: raid6: sse2x1 gen() 10571 MB/s Apr 12 18:43:20.693509 kernel: raid6: sse2x1 xor() 5195 MB/s Apr 12 18:43:20.693548 kernel: raid6: using algorithm avx2x2 gen() 18004 MB/s Apr 12 18:43:20.693582 kernel: raid6: .... 
xor() 18663 MB/s, rmw enabled Apr 12 18:43:20.694461 kernel: raid6: using avx2x2 recovery algorithm Apr 12 18:43:20.709946 kernel: xor: automatically using best checksumming function avx Apr 12 18:43:20.817951 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Apr 12 18:43:20.829139 systemd[1]: Finished dracut-pre-udev.service. Apr 12 18:43:20.827000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:43:20.827000 audit: BPF prog-id=7 op=LOAD Apr 12 18:43:20.828000 audit: BPF prog-id=8 op=LOAD Apr 12 18:43:20.830634 systemd[1]: Starting systemd-udevd.service... Apr 12 18:43:20.848478 systemd-udevd[388]: Using default interface naming scheme 'v252'. Apr 12 18:43:20.855640 systemd[1]: Started systemd-udevd.service. Apr 12 18:43:20.873000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:43:20.876350 systemd[1]: Starting dracut-pre-trigger.service... Apr 12 18:43:20.893163 dracut-pre-trigger[403]: rd.md=0: removing MD RAID activation Apr 12 18:43:20.932181 systemd[1]: Finished dracut-pre-trigger.service. Apr 12 18:43:20.930000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:43:20.933445 systemd[1]: Starting systemd-udev-trigger.service... Apr 12 18:43:21.000544 systemd[1]: Finished systemd-udev-trigger.service. Apr 12 18:43:21.007000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 12 18:43:21.085928 kernel: cryptd: max_cpu_qlen set to 1000 Apr 12 18:43:21.120936 kernel: scsi host0: Virtio SCSI HBA Apr 12 18:43:21.191928 kernel: AVX2 version of gcm_enc/dec engaged. Apr 12 18:43:21.197216 kernel: AES CTR mode by8 optimization enabled Apr 12 18:43:21.215926 kernel: scsi 0:0:1:0: Direct-Access Google PersistentDisk 1 PQ: 0 ANSI: 6 Apr 12 18:43:21.281785 kernel: sd 0:0:1:0: [sda] 25165824 512-byte logical blocks: (12.9 GB/12.0 GiB) Apr 12 18:43:21.282136 kernel: sd 0:0:1:0: [sda] 4096-byte physical blocks Apr 12 18:43:21.282411 kernel: sd 0:0:1:0: [sda] Write Protect is off Apr 12 18:43:21.286936 kernel: sd 0:0:1:0: [sda] Mode Sense: 1f 00 00 08 Apr 12 18:43:21.287214 kernel: sd 0:0:1:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Apr 12 18:43:21.314812 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Apr 12 18:43:21.314924 kernel: GPT:17805311 != 25165823 Apr 12 18:43:21.314948 kernel: GPT:Alternate GPT header not at the end of the disk. Apr 12 18:43:21.320911 kernel: GPT:17805311 != 25165823 Apr 12 18:43:21.324626 kernel: GPT: Use GNU Parted to correct GPT errors. Apr 12 18:43:21.329882 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 12 18:43:21.340634 kernel: sd 0:0:1:0: [sda] Attached SCSI disk Apr 12 18:43:21.391926 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by (udev-worker) (437) Apr 12 18:43:21.407318 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Apr 12 18:43:21.422147 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Apr 12 18:43:21.440229 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Apr 12 18:43:21.457340 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Apr 12 18:43:21.471074 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Apr 12 18:43:21.490153 systemd[1]: Starting disk-uuid.service... 
Apr 12 18:43:21.520934 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 12 18:43:21.521116 disk-uuid[510]: Primary Header is updated. Apr 12 18:43:21.521116 disk-uuid[510]: Secondary Entries is updated. Apr 12 18:43:21.521116 disk-uuid[510]: Secondary Header is updated. Apr 12 18:43:21.547015 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 12 18:43:22.548897 disk-uuid[511]: The operation has completed successfully. Apr 12 18:43:22.558089 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 12 18:43:22.619105 systemd[1]: disk-uuid.service: Deactivated successfully. Apr 12 18:43:22.624000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:43:22.624000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:43:22.619268 systemd[1]: Finished disk-uuid.service. Apr 12 18:43:22.632330 systemd[1]: Starting verity-setup.service... Apr 12 18:43:22.658949 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Apr 12 18:43:22.734898 systemd[1]: Found device dev-mapper-usr.device. Apr 12 18:43:22.737326 systemd[1]: Mounting sysusr-usr.mount... Apr 12 18:43:22.757408 systemd[1]: Finished verity-setup.service. Apr 12 18:43:22.770000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:43:22.839950 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Apr 12 18:43:22.840284 systemd[1]: Mounted sysusr-usr.mount. Apr 12 18:43:22.840685 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. 
Apr 12 18:43:22.889096 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Apr 12 18:43:22.889137 kernel: BTRFS info (device sda6): using free space tree Apr 12 18:43:22.889160 kernel: BTRFS info (device sda6): has skinny extents Apr 12 18:43:22.841623 systemd[1]: Starting ignition-setup.service... Apr 12 18:43:22.911056 kernel: BTRFS info (device sda6): enabling ssd optimizations Apr 12 18:43:22.854304 systemd[1]: Starting parse-ip-for-networkd.service... Apr 12 18:43:22.916574 systemd[1]: mnt-oem.mount: Deactivated successfully. Apr 12 18:43:22.935286 systemd[1]: Finished ignition-setup.service. Apr 12 18:43:22.933000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:43:22.936745 systemd[1]: Starting ignition-fetch-offline.service... Apr 12 18:43:22.970823 systemd[1]: Finished parse-ip-for-networkd.service. Apr 12 18:43:22.977000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:43:22.978000 audit: BPF prog-id=9 op=LOAD Apr 12 18:43:22.981234 systemd[1]: Starting systemd-networkd.service... Apr 12 18:43:23.016618 systemd-networkd[683]: lo: Link UP Apr 12 18:43:23.016638 systemd-networkd[683]: lo: Gained carrier Apr 12 18:43:23.017645 systemd-networkd[683]: Enumeration completed Apr 12 18:43:23.018100 systemd-networkd[683]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 12 18:43:23.037000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:43:23.018432 systemd[1]: Started systemd-networkd.service. 
Apr 12 18:43:23.020541 systemd-networkd[683]: eth0: Link UP Apr 12 18:43:23.020550 systemd-networkd[683]: eth0: Gained carrier Apr 12 18:43:23.032047 systemd-networkd[683]: eth0: DHCPv4 address 10.128.0.15/32, gateway 10.128.0.1 acquired from 169.254.169.254 Apr 12 18:43:23.039404 systemd[1]: Reached target network.target. Apr 12 18:43:23.065092 systemd[1]: Starting iscsiuio.service... Apr 12 18:43:23.112283 systemd[1]: Started iscsiuio.service. Apr 12 18:43:23.131000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:43:23.134360 systemd[1]: Starting iscsid.service... Apr 12 18:43:23.141500 systemd[1]: Started iscsid.service. Apr 12 18:43:23.156096 iscsid[693]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Apr 12 18:43:23.156096 iscsid[693]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log Apr 12 18:43:23.156096 iscsid[693]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Apr 12 18:43:23.156096 iscsid[693]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Apr 12 18:43:23.156096 iscsid[693]: If using hardware iscsi like qla4xxx this message can be ignored. Apr 12 18:43:23.156096 iscsid[693]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Apr 12 18:43:23.156096 iscsid[693]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Apr 12 18:43:23.199000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Apr 12 18:43:23.235000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:43:23.202652 systemd[1]: Starting dracut-initqueue.service... Apr 12 18:43:23.233766 ignition[653]: Ignition 2.14.0 Apr 12 18:43:23.229477 systemd[1]: Finished dracut-initqueue.service. Apr 12 18:43:23.233780 ignition[653]: Stage: fetch-offline Apr 12 18:43:23.336000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:43:23.237380 systemd[1]: Reached target remote-fs-pre.target. Apr 12 18:43:23.352000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:43:23.233853 ignition[653]: reading system config file "/usr/lib/ignition/base.d/base.ign" Apr 12 18:43:23.255248 systemd[1]: Reached target remote-cryptsetup.target. Apr 12 18:43:23.233892 ignition[653]: parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6 Apr 12 18:43:23.275269 systemd[1]: Reached target remote-fs.target. Apr 12 18:43:23.252111 ignition[653]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Apr 12 18:43:23.303491 systemd[1]: Starting dracut-pre-mount.service... Apr 12 18:43:23.252320 ignition[653]: parsed url from cmdline: "" Apr 12 18:43:23.323518 systemd[1]: Finished ignition-fetch-offline.service. Apr 12 18:43:23.252326 ignition[653]: no config URL provided Apr 12 18:43:23.338428 systemd[1]: Finished dracut-pre-mount.service. 
Apr 12 18:43:23.439000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:43:23.252333 ignition[653]: reading system config file "/usr/lib/ignition/user.ign" Apr 12 18:43:23.355394 systemd[1]: Starting ignition-fetch.service... Apr 12 18:43:23.252344 ignition[653]: no config at "/usr/lib/ignition/user.ign" Apr 12 18:43:23.427552 unknown[708]: fetched base config from "system" Apr 12 18:43:23.486000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:43:23.252353 ignition[653]: failed to fetch config: resource requires networking Apr 12 18:43:23.427579 unknown[708]: fetched base config from "system" Apr 12 18:43:23.252509 ignition[653]: Ignition finished successfully Apr 12 18:43:23.427604 unknown[708]: fetched user config from "gcp" Apr 12 18:43:23.368430 ignition[708]: Ignition 2.14.0 Apr 12 18:43:23.531000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:43:23.430541 systemd[1]: Finished ignition-fetch.service. Apr 12 18:43:23.368440 ignition[708]: Stage: fetch Apr 12 18:43:23.442416 systemd[1]: Starting ignition-kargs.service... Apr 12 18:43:23.368570 ignition[708]: reading system config file "/usr/lib/ignition/base.d/base.ign" Apr 12 18:43:23.472434 systemd[1]: Finished ignition-kargs.service. Apr 12 18:43:23.368596 ignition[708]: parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6 Apr 12 18:43:23.489381 systemd[1]: Starting ignition-disks.service... 
Apr 12 18:43:23.375878 ignition[708]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Apr 12 18:43:23.526697 systemd[1]: Finished ignition-disks.service. Apr 12 18:43:23.376115 ignition[708]: parsed url from cmdline: "" Apr 12 18:43:23.533474 systemd[1]: Reached target initrd-root-device.target. Apr 12 18:43:23.376120 ignition[708]: no config URL provided Apr 12 18:43:23.554125 systemd[1]: Reached target local-fs-pre.target. Apr 12 18:43:23.376128 ignition[708]: reading system config file "/usr/lib/ignition/user.ign" Apr 12 18:43:23.568146 systemd[1]: Reached target local-fs.target. Apr 12 18:43:23.376157 ignition[708]: no config at "/usr/lib/ignition/user.ign" Apr 12 18:43:23.579161 systemd[1]: Reached target sysinit.target. Apr 12 18:43:23.376193 ignition[708]: GET http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data: attempt #1 Apr 12 18:43:23.579287 systemd[1]: Reached target basic.target. Apr 12 18:43:23.385158 ignition[708]: GET result: OK Apr 12 18:43:23.600338 systemd[1]: Starting systemd-fsck-root.service... 
Apr 12 18:43:23.385282 ignition[708]: parsing config with SHA512: 53660c4499740be698089acd5f070f3567213587d0c84e789dfe12cf3bf5ed6a1f03434acd7abb550fda77c93882cfa3ac7e8e89dfcb85384f2d87921837c047 Apr 12 18:43:23.428784 ignition[708]: fetch: fetch complete Apr 12 18:43:23.428791 ignition[708]: fetch: fetch passed Apr 12 18:43:23.428845 ignition[708]: Ignition finished successfully Apr 12 18:43:23.455828 ignition[714]: Ignition 2.14.0 Apr 12 18:43:23.455837 ignition[714]: Stage: kargs Apr 12 18:43:23.455993 ignition[714]: reading system config file "/usr/lib/ignition/base.d/base.ign" Apr 12 18:43:23.456027 ignition[714]: parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6 Apr 12 18:43:23.462598 ignition[714]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Apr 12 18:43:23.464184 ignition[714]: kargs: kargs passed Apr 12 18:43:23.464240 ignition[714]: Ignition finished successfully Apr 12 18:43:23.501460 ignition[720]: Ignition 2.14.0 Apr 12 18:43:23.501470 ignition[720]: Stage: disks Apr 12 18:43:23.501612 ignition[720]: reading system config file "/usr/lib/ignition/base.d/base.ign" Apr 12 18:43:23.501645 ignition[720]: parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6 Apr 12 18:43:23.509015 ignition[720]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Apr 12 18:43:23.510627 ignition[720]: disks: disks passed Apr 12 18:43:23.510684 ignition[720]: Ignition finished successfully Apr 12 18:43:23.640856 systemd-fsck[728]: ROOT: clean, 612/1628000 files, 124056/1617920 blocks Apr 12 18:43:23.799913 systemd[1]: Finished systemd-fsck-root.service. Apr 12 18:43:23.798000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 12 18:43:23.801214 systemd[1]: Mounting sysroot.mount... Apr 12 18:43:23.837090 kernel: EXT4-fs (sda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Apr 12 18:43:23.831386 systemd[1]: Mounted sysroot.mount. Apr 12 18:43:23.844344 systemd[1]: Reached target initrd-root-fs.target. Apr 12 18:43:23.863358 systemd[1]: Mounting sysroot-usr.mount... Apr 12 18:43:23.874680 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Apr 12 18:43:23.874737 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Apr 12 18:43:23.874779 systemd[1]: Reached target ignition-diskful.target. Apr 12 18:43:23.974165 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (734) Apr 12 18:43:23.974211 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Apr 12 18:43:23.974235 kernel: BTRFS info (device sda6): using free space tree Apr 12 18:43:23.974258 kernel: BTRFS info (device sda6): has skinny extents Apr 12 18:43:23.974289 kernel: BTRFS info (device sda6): enabling ssd optimizations Apr 12 18:43:23.891361 systemd[1]: Mounted sysroot-usr.mount. Apr 12 18:43:23.915594 systemd[1]: Mounting sysroot-usr-share-oem.mount... Apr 12 18:43:24.004155 initrd-setup-root[739]: cut: /sysroot/etc/passwd: No such file or directory Apr 12 18:43:23.941764 systemd[1]: Starting initrd-setup-root.service... Apr 12 18:43:24.022132 initrd-setup-root[763]: cut: /sysroot/etc/group: No such file or directory Apr 12 18:43:23.983954 systemd[1]: Mounted sysroot-usr-share-oem.mount. 
Apr 12 18:43:24.041130 initrd-setup-root[773]: cut: /sysroot/etc/shadow: No such file or directory Apr 12 18:43:24.052078 initrd-setup-root[781]: cut: /sysroot/etc/gshadow: No such file or directory Apr 12 18:43:24.050072 systemd-networkd[683]: eth0: Gained IPv6LL Apr 12 18:43:24.073223 systemd[1]: Finished initrd-setup-root.service. Apr 12 18:43:24.071000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:43:24.074423 systemd[1]: Starting ignition-mount.service... Apr 12 18:43:24.096164 systemd[1]: Starting sysroot-boot.service... Apr 12 18:43:24.110670 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. Apr 12 18:43:24.110835 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. Apr 12 18:43:24.136069 ignition[800]: INFO : Ignition 2.14.0 Apr 12 18:43:24.136069 ignition[800]: INFO : Stage: mount Apr 12 18:43:24.136069 ignition[800]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Apr 12 18:43:24.136069 ignition[800]: DEBUG : parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6 Apr 12 18:43:24.239133 kernel: kauditd_printk_skb: 24 callbacks suppressed Apr 12 18:43:24.239179 kernel: audit: type=1130 audit(1712947404.142:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:43:24.239206 kernel: audit: type=1130 audit(1712947404.198:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 12 18:43:24.142000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:43:24.198000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:43:24.239354 ignition[800]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Apr 12 18:43:24.239354 ignition[800]: INFO : mount: mount passed Apr 12 18:43:24.239354 ignition[800]: INFO : Ignition finished successfully Apr 12 18:43:24.308074 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (810) Apr 12 18:43:24.308110 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Apr 12 18:43:24.308125 kernel: BTRFS info (device sda6): using free space tree Apr 12 18:43:24.308150 kernel: BTRFS info (device sda6): has skinny extents Apr 12 18:43:24.139057 systemd[1]: Finished sysroot-boot.service. Apr 12 18:43:24.322101 kernel: BTRFS info (device sda6): enabling ssd optimizations Apr 12 18:43:24.144614 systemd[1]: Finished ignition-mount.service. Apr 12 18:43:24.201566 systemd[1]: Starting ignition-files.service... Apr 12 18:43:24.250404 systemd[1]: Mounting sysroot-usr-share-oem.mount... Apr 12 18:43:24.324233 systemd[1]: Mounted sysroot-usr-share-oem.mount. 
Apr 12 18:43:24.363066 ignition[829]: INFO : Ignition 2.14.0 Apr 12 18:43:24.363066 ignition[829]: INFO : Stage: files Apr 12 18:43:24.363066 ignition[829]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Apr 12 18:43:24.363066 ignition[829]: DEBUG : parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6 Apr 12 18:43:24.363066 ignition[829]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Apr 12 18:43:24.363066 ignition[829]: DEBUG : files: compiled without relabeling support, skipping Apr 12 18:43:24.377158 unknown[829]: wrote ssh authorized keys file for user: core Apr 12 18:43:24.439043 ignition[829]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Apr 12 18:43:24.439043 ignition[829]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Apr 12 18:43:24.439043 ignition[829]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Apr 12 18:43:24.439043 ignition[829]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Apr 12 18:43:24.439043 ignition[829]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Apr 12 18:43:24.439043 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.3.0.tgz" Apr 12 18:43:24.439043 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://github.com/containernetworking/plugins/releases/download/v1.3.0/cni-plugins-linux-amd64-v1.3.0.tgz: attempt #1 Apr 12 18:43:24.691312 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Apr 12 18:43:24.953629 ignition[829]: DEBUG : files: createFilesystemsFiles: createFiles: op(3): file matches expected sum of: 
5d0324ca8a3c90c680b6e1fddb245a2255582fa15949ba1f3c6bb7323df9d3af754dae98d6e40ac9ccafb2999c932df2c4288d418949a4915d928eb23c090540 Apr 12 18:43:24.978071 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.3.0.tgz" Apr 12 18:43:24.978071 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Apr 12 18:43:24.978071 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Apr 12 18:43:25.042810 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Apr 12 18:43:25.157739 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Apr 12 18:43:25.185060 kernel: BTRFS info: devid 1 device path /dev/sda6 changed to /dev/disk/by-label/OEM scanned by ignition (831) Apr 12 18:43:25.183765 systemd[1]: mnt-oem2722932049.mount: Deactivated successfully. 
Apr 12 18:43:25.194103 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/etc/hosts"
Apr 12 18:43:25.194103 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(5): oem config not found in "/usr/share/oem", looking on oem partition
Apr 12 18:43:25.194103 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(5): op(6): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2722932049"
Apr 12 18:43:25.194103 ignition[829]: CRITICAL : files: createFilesystemsFiles: createFiles: op(5): op(6): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2722932049": device or resource busy
Apr 12 18:43:25.194103 ignition[829]: ERROR : files: createFilesystemsFiles: createFiles: op(5): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem2722932049", trying btrfs: device or resource busy
Apr 12 18:43:25.194103 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(5): op(7): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2722932049"
Apr 12 18:43:25.194103 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(5): op(7): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2722932049"
Apr 12 18:43:25.194103 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(5): op(8): [started] unmounting "/mnt/oem2722932049"
Apr 12 18:43:25.194103 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(5): op(8): [finished] unmounting "/mnt/oem2722932049"
Apr 12 18:43:25.194103 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/etc/hosts"
Apr 12 18:43:25.194103 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/opt/crictl-v1.27.0-linux-amd64.tar.gz"
Apr 12 18:43:25.194103 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(9): GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.27.0/crictl-v1.27.0-linux-amd64.tar.gz: attempt #1
Apr 12 18:43:25.396121 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(9): GET result: OK
Apr 12 18:43:25.470753 ignition[829]: DEBUG : files: createFilesystemsFiles: createFiles: op(9): file matches expected sum of: aa622325bf05520939f9e020d7a28ab48ac23e2fae6f47d5a4e52174c88c1ebc31b464853e4fd65bd8f5331f330a6ca96fd370d247d3eeaed042da4ee2d1219a
Apr 12 18:43:25.495057 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/opt/crictl-v1.27.0-linux-amd64.tar.gz"
Apr 12 18:43:25.495057 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/bin/kubectl"
Apr 12 18:43:25.495057 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://dl.k8s.io/release/v1.27.2/bin/linux/amd64/kubectl: attempt #1
Apr 12 18:43:25.571090 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Apr 12 18:43:25.833958 ignition[829]: DEBUG : files: createFilesystemsFiles: createFiles: op(a): file matches expected sum of: 857e67001e74840518413593d90c6e64ad3f00d55fa44ad9a8e2ed6135392c908caff7ec19af18cbe10784b8f83afe687a0bc3bacbc9eee984cdeb9c0749cb83
Apr 12 18:43:25.858050 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/bin/kubectl"
Apr 12 18:43:25.858050 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/etc/profile.d/google-cloud-sdk.sh"
Apr 12 18:43:25.858050 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(b): oem config not found in "/usr/share/oem", looking on oem partition
Apr 12 18:43:25.858050 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(c): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2124535932"
Apr 12 18:43:25.858050 ignition[829]: CRITICAL : files: createFilesystemsFiles: createFiles: op(b): op(c): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2124535932": device or resource busy
Apr 12 18:43:25.858050 ignition[829]: ERROR : files: createFilesystemsFiles: createFiles: op(b): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem2124535932", trying btrfs: device or resource busy
Apr 12 18:43:25.858050 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(d): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2124535932"
Apr 12 18:43:25.858050 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(d): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2124535932"
Apr 12 18:43:25.858050 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(e): [started] unmounting "/mnt/oem2124535932"
Apr 12 18:43:25.858050 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(e): [finished] unmounting "/mnt/oem2124535932"
Apr 12 18:43:25.858050 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/etc/profile.d/google-cloud-sdk.sh"
Apr 12 18:43:25.858050 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/opt/bin/kubeadm"
Apr 12 18:43:25.858050 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(f): GET https://dl.k8s.io/release/v1.27.2/bin/linux/amd64/kubeadm: attempt #1
Apr 12 18:43:25.847944 systemd[1]: mnt-oem2124535932.mount: Deactivated successfully.
Apr 12 18:43:26.082050 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(f): GET result: OK
Apr 12 18:43:33.449171 ignition[829]: DEBUG : files: createFilesystemsFiles: createFiles: op(f): file matches expected sum of: f40216b7d14046931c58072d10c7122934eac5a23c08821371f8b08ac1779443ad11d3458a4c5dcde7cf80fc600a9fefb14b1942aa46a52330248d497ca88836
Apr 12 18:43:33.474087 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/opt/bin/kubeadm"
Apr 12 18:43:33.474087 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(10): [started] writing file "/sysroot/opt/bin/kubelet"
Apr 12 18:43:33.474087 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(10): GET https://dl.k8s.io/release/v1.27.2/bin/linux/amd64/kubelet: attempt #1
Apr 12 18:43:33.474087 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(10): GET result: OK
Apr 12 18:43:39.270898 ignition[829]: DEBUG : files: createFilesystemsFiles: createFiles: op(10): file matches expected sum of: a283da2224d456958b2cb99b4f6faf4457c4ed89e9e95f37d970c637f6a7f64ff4dd4d2bfce538759b2d2090933bece599a285ef8fd132eb383fece9a3941560
Apr 12 18:43:39.295081 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(10): [finished] writing file "/sysroot/opt/bin/kubelet"
Apr 12 18:43:39.295081 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(11): [started] writing file "/sysroot/etc/docker/daemon.json"
Apr 12 18:43:39.295081 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(11): [finished] writing file "/sysroot/etc/docker/daemon.json"
Apr 12 18:43:39.295081 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(12): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Apr 12 18:43:39.295081 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(12): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Apr 12 18:43:39.439425 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(12): GET result: OK
Apr 12 18:43:39.536348 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(12): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Apr 12 18:43:39.536348 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(13): [started] writing file "/sysroot/home/core/install.sh"
Apr 12 18:43:39.577079 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(13): [finished] writing file "/sysroot/home/core/install.sh"
Apr 12 18:43:39.577079 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(14): [started] writing file "/sysroot/home/core/nginx.yaml"
Apr 12 18:43:39.577079 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(14): [finished] writing file "/sysroot/home/core/nginx.yaml"
Apr 12 18:43:39.577079 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(15): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 12 18:43:39.577079 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(15): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 12 18:43:39.577079 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(16): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 12 18:43:39.577079 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(16): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 12 18:43:39.577079 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(17): [started] writing file "/sysroot/etc/flatcar/update.conf"
Apr 12 18:43:39.577079 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(17): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Apr 12 18:43:39.577079 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(18): [started] writing file "/sysroot/etc/systemd/system/oem-gce.service"
Apr 12 18:43:39.577079 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(18): oem config not found in "/usr/share/oem", looking on oem partition
Apr 12 18:43:39.577079 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(18): op(19): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem936035650"
Apr 12 18:43:39.577079 ignition[829]: CRITICAL : files: createFilesystemsFiles: createFiles: op(18): op(19): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem936035650": device or resource busy
Apr 12 18:43:39.577079 ignition[829]: ERROR : files: createFilesystemsFiles: createFiles: op(18): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem936035650", trying btrfs: device or resource busy
Apr 12 18:43:39.577079 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(18): op(1a): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem936035650"
Apr 12 18:43:40.016220 kernel: audit: type=1130 audit(1712947419.622:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:43:40.016263 kernel: audit: type=1130 audit(1712947419.733:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:43:40.016280 kernel: audit: type=1130 audit(1712947419.770:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:43:40.016295 kernel: audit: type=1131 audit(1712947419.770:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:43:40.016309 kernel: audit: type=1130 audit(1712947419.904:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:43:40.016324 kernel: audit: type=1131 audit(1712947419.926:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:43:39.622000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:43:39.733000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:43:39.770000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:43:39.770000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:43:39.904000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:43:39.926000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:43:40.016582 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(18): op(1a): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem936035650"
Apr 12 18:43:40.016582 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(18): op(1b): [started] unmounting "/mnt/oem936035650"
Apr 12 18:43:40.016582 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(18): op(1b): [finished] unmounting "/mnt/oem936035650"
Apr 12 18:43:40.016582 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(18): [finished] writing file "/sysroot/etc/systemd/system/oem-gce.service"
Apr 12 18:43:40.016582 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(1c): [started] writing file "/sysroot/etc/systemd/system/oem-gce-enable-oslogin.service"
Apr 12 18:43:40.016582 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(1c): oem config not found in "/usr/share/oem", looking on oem partition
Apr 12 18:43:40.016582 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(1c): op(1d): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3886718470"
Apr 12 18:43:40.016582 ignition[829]: CRITICAL : files: createFilesystemsFiles: createFiles: op(1c): op(1d): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3886718470": device or resource busy
Apr 12 18:43:40.016582 ignition[829]: ERROR : files: createFilesystemsFiles: createFiles: op(1c): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem3886718470", trying btrfs: device or resource busy
Apr 12 18:43:40.016582 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(1c): op(1e): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3886718470"
Apr 12 18:43:40.016582 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(1c): op(1e): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3886718470"
Apr 12 18:43:40.016582 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(1c): op(1f): [started] unmounting "/mnt/oem3886718470"
Apr 12 18:43:40.016582 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(1c): op(1f): [finished] unmounting "/mnt/oem3886718470"
Apr 12 18:43:40.016582 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(1c): [finished] writing file "/sysroot/etc/systemd/system/oem-gce-enable-oslogin.service"
Apr 12 18:43:40.016582 ignition[829]: INFO : files: op(20): [started] processing unit "oem-gce.service"
Apr 12 18:43:40.389082 kernel: audit: type=1130 audit(1712947420.047:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:43:40.389137 kernel: audit: type=1131 audit(1712947420.170:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:43:40.047000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:43:40.170000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:43:39.580007 systemd[1]: mnt-oem936035650.mount: Deactivated successfully.
Apr 12 18:43:40.405092 ignition[829]: INFO : files: op(20): [finished] processing unit "oem-gce.service"
Apr 12 18:43:40.405092 ignition[829]: INFO : files: op(21): [started] processing unit "oem-gce-enable-oslogin.service"
Apr 12 18:43:40.405092 ignition[829]: INFO : files: op(21): [finished] processing unit "oem-gce-enable-oslogin.service"
Apr 12 18:43:40.405092 ignition[829]: INFO : files: op(22): [started] processing unit "coreos-metadata-sshkeys@.service"
Apr 12 18:43:40.405092 ignition[829]: INFO : files: op(22): [finished] processing unit "coreos-metadata-sshkeys@.service"
Apr 12 18:43:40.405092 ignition[829]: INFO : files: op(23): [started] processing unit "prepare-cni-plugins.service"
Apr 12 18:43:40.405092 ignition[829]: INFO : files: op(23): op(24): [started] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service"
Apr 12 18:43:40.405092 ignition[829]: INFO : files: op(23): op(24): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service"
Apr 12 18:43:40.405092 ignition[829]: INFO : files: op(23): [finished] processing unit "prepare-cni-plugins.service"
Apr 12 18:43:40.405092 ignition[829]: INFO : files: op(25): [started] processing unit "prepare-critools.service"
Apr 12 18:43:40.405092 ignition[829]: INFO : files: op(25): op(26): [started] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service"
Apr 12 18:43:40.405092 ignition[829]: INFO : files: op(25): op(26): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service"
Apr 12 18:43:40.405092 ignition[829]: INFO : files: op(25): [finished] processing unit "prepare-critools.service"
Apr 12 18:43:40.405092 ignition[829]: INFO : files: op(27): [started] processing unit "prepare-helm.service"
Apr 12 18:43:40.405092 ignition[829]: INFO : files: op(27): op(28): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 12 18:43:40.405092 ignition[829]: INFO : files: op(27): op(28): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 12 18:43:40.405092 ignition[829]: INFO : files: op(27): [finished] processing unit "prepare-helm.service"
Apr 12 18:43:40.405092 ignition[829]: INFO : files: op(29): [started] setting preset to enabled for "oem-gce-enable-oslogin.service"
Apr 12 18:43:40.405092 ignition[829]: INFO : files: op(29): [finished] setting preset to enabled for "oem-gce-enable-oslogin.service"
Apr 12 18:43:40.405092 ignition[829]: INFO : files: op(2a): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service "
Apr 12 18:43:40.810126 kernel: audit: type=1131 audit(1712947420.507:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:43:40.810184 kernel: audit: type=1131 audit(1712947420.583:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:43:40.507000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:43:40.583000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:43:40.621000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:43:40.669000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:43:40.741000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:43:40.794000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:43:39.605482 systemd[1]: mnt-oem3886718470.mount: Deactivated successfully.
Apr 12 18:43:40.816000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:43:40.827256 ignition[829]: INFO : files: op(2a): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service "
Apr 12 18:43:40.827256 ignition[829]: INFO : files: op(2b): [started] setting preset to enabled for "prepare-cni-plugins.service"
Apr 12 18:43:40.827256 ignition[829]: INFO : files: op(2b): [finished] setting preset to enabled for "prepare-cni-plugins.service"
Apr 12 18:43:40.827256 ignition[829]: INFO : files: op(2c): [started] setting preset to enabled for "prepare-critools.service"
Apr 12 18:43:40.827256 ignition[829]: INFO : files: op(2c): [finished] setting preset to enabled for "prepare-critools.service"
Apr 12 18:43:40.827256 ignition[829]: INFO : files: op(2d): [started] setting preset to enabled for "prepare-helm.service"
Apr 12 18:43:40.827256 ignition[829]: INFO : files: op(2d): [finished] setting preset to enabled for "prepare-helm.service"
Apr 12 18:43:40.827256 ignition[829]: INFO : files: op(2e): [started] setting preset to enabled for "oem-gce.service"
Apr 12 18:43:40.827256 ignition[829]: INFO : files: op(2e): [finished] setting preset to enabled for "oem-gce.service"
Apr 12 18:43:40.827256 ignition[829]: INFO : files: createResultFile: createFiles: op(2f): [started] writing file "/sysroot/etc/.ignition-result.json"
Apr 12 18:43:40.827256 ignition[829]: INFO : files: createResultFile: createFiles: op(2f): [finished] writing file "/sysroot/etc/.ignition-result.json"
Apr 12 18:43:40.827256 ignition[829]: INFO : files: files passed
Apr 12 18:43:40.827256 ignition[829]: INFO : Ignition finished successfully
Apr 12 18:43:40.833000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:43:40.854000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:43:40.888000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:43:40.909000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:43:40.930000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:43:40.972000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:43:41.080459 initrd-setup-root-after-ignition[852]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 12 18:43:39.618136 systemd[1]: Finished ignition-files.service.
Apr 12 18:43:41.107000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:43:39.635078 systemd[1]: Starting initrd-setup-root-after-ignition.service...
Apr 12 18:43:41.122000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:43:39.666322 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
Apr 12 18:43:39.667463 systemd[1]: Starting ignition-quench.service...
Apr 12 18:43:39.710564 systemd[1]: Finished initrd-setup-root-after-ignition.service.
Apr 12 18:43:41.179000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:43:39.735596 systemd[1]: ignition-quench.service: Deactivated successfully.
Apr 12 18:43:41.195000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:43:39.735744 systemd[1]: Finished ignition-quench.service.
Apr 12 18:43:41.212000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:43:41.212000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:43:41.212000 audit: BPF prog-id=6 op=UNLOAD
Apr 12 18:43:41.221531 ignition[867]: INFO : Ignition 2.14.0
Apr 12 18:43:41.221531 ignition[867]: INFO : Stage: umount
Apr 12 18:43:41.221531 ignition[867]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Apr 12 18:43:41.221531 ignition[867]: DEBUG : parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6
Apr 12 18:43:41.221531 ignition[867]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Apr 12 18:43:41.221531 ignition[867]: INFO : umount: umount passed
Apr 12 18:43:41.221531 ignition[867]: INFO : Ignition finished successfully
Apr 12 18:43:41.266000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:43:41.303000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:43:41.310000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:43:39.772480 systemd[1]: Reached target ignition-complete.target.
Apr 12 18:43:39.857278 systemd[1]: Starting initrd-parse-etc.service...
Apr 12 18:43:41.348000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:43:39.894084 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Apr 12 18:43:39.894221 systemd[1]: Finished initrd-parse-etc.service.
Apr 12 18:43:39.928148 systemd[1]: Reached target initrd-fs.target.
Apr 12 18:43:41.397000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:43:39.966306 systemd[1]: Reached target initrd.target.
Apr 12 18:43:41.412000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:43:39.993370 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
Apr 12 18:43:41.427000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:43:39.994847 systemd[1]: Starting dracut-pre-pivot.service...
Apr 12 18:43:40.023413 systemd[1]: Finished dracut-pre-pivot.service.
Apr 12 18:43:41.463000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:43:40.050683 systemd[1]: Starting initrd-cleanup.service...
Apr 12 18:43:41.478000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:43:40.092204 systemd[1]: Stopped target nss-lookup.target.
Apr 12 18:43:41.494000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:43:41.494000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:43:40.103459 systemd[1]: Stopped target remote-cryptsetup.target.
Apr 12 18:43:40.125502 systemd[1]: Stopped target timers.target.
Apr 12 18:43:40.149415 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Apr 12 18:43:40.149605 systemd[1]: Stopped dracut-pre-pivot.service.
Apr 12 18:43:40.172750 systemd[1]: Stopped target initrd.target.
Apr 12 18:43:40.215432 systemd[1]: Stopped target basic.target.
Apr 12 18:43:41.579073 systemd-journald[189]: Received SIGTERM from PID 1 (n/a).
Apr 12 18:43:41.579121 iscsid[693]: iscsid shutting down.
Apr 12 18:43:40.240470 systemd[1]: Stopped target ignition-complete.target.
Apr 12 18:43:40.291478 systemd[1]: Stopped target ignition-diskful.target.
Apr 12 18:43:40.317401 systemd[1]: Stopped target initrd-root-device.target.
Apr 12 18:43:40.354401 systemd[1]: Stopped target remote-fs.target.
Apr 12 18:43:40.372435 systemd[1]: Stopped target remote-fs-pre.target.
Apr 12 18:43:40.397383 systemd[1]: Stopped target sysinit.target.
Apr 12 18:43:40.413343 systemd[1]: Stopped target local-fs.target.
Apr 12 18:43:40.424418 systemd[1]: Stopped target local-fs-pre.target.
Apr 12 18:43:40.460405 systemd[1]: Stopped target swap.target.
Apr 12 18:43:40.473376 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Apr 12 18:43:40.473570 systemd[1]: Stopped dracut-pre-mount.service.
Apr 12 18:43:40.509617 systemd[1]: Stopped target cryptsetup.target.
Apr 12 18:43:40.572326 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Apr 12 18:43:40.572577 systemd[1]: Stopped dracut-initqueue.service.
Apr 12 18:43:40.585571 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Apr 12 18:43:40.585844 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
Apr 12 18:43:40.623505 systemd[1]: ignition-files.service: Deactivated successfully.
Apr 12 18:43:40.623692 systemd[1]: Stopped ignition-files.service.
Apr 12 18:43:40.673019 systemd[1]: Stopping ignition-mount.service...
Apr 12 18:43:40.718112 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Apr 12 18:43:40.718382 systemd[1]: Stopped kmod-static-nodes.service.
Apr 12 18:43:40.745244 systemd[1]: Stopping sysroot-boot.service...
Apr 12 18:43:40.762216 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Apr 12 18:43:40.762525 systemd[1]: Stopped systemd-udev-trigger.service.
Apr 12 18:43:40.796540 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Apr 12 18:43:40.796722 systemd[1]: Stopped dracut-pre-trigger.service.
Apr 12 18:43:40.822571 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Apr 12 18:43:40.823490 systemd[1]: ignition-mount.service: Deactivated successfully.
Apr 12 18:43:40.823604 systemd[1]: Stopped ignition-mount.service.
Apr 12 18:43:40.835762 systemd[1]: sysroot-boot.service: Deactivated successfully.
Apr 12 18:43:40.835880 systemd[1]: Stopped sysroot-boot.service.
Apr 12 18:43:40.856821 systemd[1]: ignition-disks.service: Deactivated successfully.
Apr 12 18:43:40.857039 systemd[1]: Stopped ignition-disks.service.
Apr 12 18:43:40.890218 systemd[1]: ignition-kargs.service: Deactivated successfully.
Apr 12 18:43:40.890304 systemd[1]: Stopped ignition-kargs.service.
Apr 12 18:43:40.911215 systemd[1]: ignition-fetch.service: Deactivated successfully.
Apr 12 18:43:40.911287 systemd[1]: Stopped ignition-fetch.service.
Apr 12 18:43:40.932206 systemd[1]: Stopped target network.target.
Apr 12 18:43:40.953107 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Apr 12 18:43:40.953225 systemd[1]: Stopped ignition-fetch-offline.service.
Apr 12 18:43:40.974200 systemd[1]: Stopped target paths.target.
Apr 12 18:43:40.994105 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Apr 12 18:43:40.998019 systemd[1]: Stopped systemd-ask-password-console.path.
Apr 12 18:43:41.016103 systemd[1]: Stopped target slices.target.
Apr 12 18:43:41.037096 systemd[1]: Stopped target sockets.target.
Apr 12 18:43:41.058184 systemd[1]: iscsid.socket: Deactivated successfully.
Apr 12 18:43:41.058241 systemd[1]: Closed iscsid.socket.
Apr 12 18:43:41.072165 systemd[1]: iscsiuio.socket: Deactivated successfully.
Apr 12 18:43:41.072227 systemd[1]: Closed iscsiuio.socket.
Apr 12 18:43:41.087147 systemd[1]: ignition-setup.service: Deactivated successfully.
Apr 12 18:43:41.087249 systemd[1]: Stopped ignition-setup.service.
Apr 12 18:43:41.109211 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Apr 12 18:43:41.109290 systemd[1]: Stopped initrd-setup-root.service.
Apr 12 18:43:41.124471 systemd[1]: Stopping systemd-networkd.service...
Apr 12 18:43:41.127993 systemd-networkd[683]: eth0: DHCPv6 lease lost
Apr 12 18:43:41.142415 systemd[1]: Stopping systemd-resolved.service...
Apr 12 18:43:41.166877 systemd[1]: systemd-resolved.service: Deactivated successfully.
Apr 12 18:43:41.167040 systemd[1]: Stopped systemd-resolved.service.
Apr 12 18:43:41.182067 systemd[1]: systemd-networkd.service: Deactivated successfully.
Apr 12 18:43:41.182209 systemd[1]: Stopped systemd-networkd.service.
Apr 12 18:43:41.198010 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Apr 12 18:43:41.198124 systemd[1]: Finished initrd-cleanup.service.
Apr 12 18:43:41.215528 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Apr 12 18:43:41.215594 systemd[1]: Closed systemd-networkd.socket.
Apr 12 18:43:41.230141 systemd[1]: Stopping network-cleanup.service...
Apr 12 18:43:41.236282 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Apr 12 18:43:41.236379 systemd[1]: Stopped parse-ip-for-networkd.service.
Apr 12 18:43:41.268344 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Apr 12 18:43:41.268426 systemd[1]: Stopped systemd-sysctl.service.
Apr 12 18:43:41.305344 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Apr 12 18:43:41.305414 systemd[1]: Stopped systemd-modules-load.service.
Apr 12 18:43:41.312545 systemd[1]: Stopping systemd-udevd.service...
Apr 12 18:43:41.335750 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Apr 12 18:43:41.336420 systemd[1]: systemd-udevd.service: Deactivated successfully.
Apr 12 18:43:41.336586 systemd[1]: Stopped systemd-udevd.service.
Apr 12 18:43:41.351704 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Apr 12 18:43:41.351801 systemd[1]: Closed systemd-udevd-control.socket.
Apr 12 18:43:41.366222 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Apr 12 18:43:41.366289 systemd[1]: Closed systemd-udevd-kernel.socket.
Apr 12 18:43:41.384122 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Apr 12 18:43:41.384219 systemd[1]: Stopped dracut-pre-udev.service.
Apr 12 18:43:41.399240 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Apr 12 18:43:41.399332 systemd[1]: Stopped dracut-cmdline.service.
Apr 12 18:43:41.414221 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 12 18:43:41.414301 systemd[1]: Stopped dracut-cmdline-ask.service.
Apr 12 18:43:41.430342 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
Apr 12 18:43:41.449067 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 12 18:43:41.449283 systemd[1]: Stopped systemd-vconsole-setup.service.
Apr 12 18:43:41.465982 systemd[1]: network-cleanup.service: Deactivated successfully.
Apr 12 18:43:41.466116 systemd[1]: Stopped network-cleanup.service.
Apr 12 18:43:41.480573 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Apr 12 18:43:41.480690 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
Apr 12 18:43:41.496474 systemd[1]: Reached target initrd-switch-root.target.
Apr 12 18:43:41.512314 systemd[1]: Starting initrd-switch-root.service...
Apr 12 18:43:41.535428 systemd[1]: Switching root.
Apr 12 18:43:41.583643 systemd-journald[189]: Journal stopped
Apr 12 18:43:46.265142 kernel: SELinux: Class mctp_socket not defined in policy.
Apr 12 18:43:46.265271 kernel: SELinux: Class anon_inode not defined in policy.
Apr 12 18:43:46.265305 kernel: SELinux: the above unknown classes and permissions will be allowed
Apr 12 18:43:46.265334 kernel: SELinux: policy capability network_peer_controls=1
Apr 12 18:43:46.265355 kernel: SELinux: policy capability open_perms=1
Apr 12 18:43:46.265383 kernel: SELinux: policy capability extended_socket_class=1
Apr 12 18:43:46.265407 kernel: SELinux: policy capability always_check_network=0
Apr 12 18:43:46.265434 kernel: SELinux: policy capability cgroup_seclabel=1
Apr 12 18:43:46.265458 kernel: SELinux: policy capability nnp_nosuid_transition=1
Apr 12 18:43:46.265481 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Apr 12 18:43:46.265510 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Apr 12 18:43:46.265535 systemd[1]: Successfully loaded SELinux policy in 111.662ms.
Apr 12 18:43:46.265587 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.812ms.
Apr 12 18:43:46.265613 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Apr 12 18:43:46.265637 systemd[1]: Detected virtualization kvm.
Apr 12 18:43:46.265662 systemd[1]: Detected architecture x86-64.
Apr 12 18:43:46.265687 systemd[1]: Detected first boot.
Apr 12 18:43:46.265712 systemd[1]: Initializing machine ID from VM UUID.
Apr 12 18:43:46.265735 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
Apr 12 18:43:46.265764 systemd[1]: Populated /etc with preset unit settings.
Apr 12 18:43:46.265789 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Apr 12 18:43:46.265829 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Apr 12 18:43:46.265856 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 12 18:43:46.265886 kernel: kauditd_printk_skb: 46 callbacks suppressed
Apr 12 18:43:46.265945 kernel: audit: type=1334 audit(1712947425.332:86): prog-id=12 op=LOAD
Apr 12 18:43:46.265969 kernel: audit: type=1334 audit(1712947425.336:87): prog-id=3 op=UNLOAD
Apr 12 18:43:46.265990 kernel: audit: type=1334 audit(1712947425.338:88): prog-id=13 op=LOAD
Apr 12 18:43:46.266017 kernel: audit: type=1334 audit(1712947425.345:89): prog-id=14 op=LOAD
Apr 12 18:43:46.266040 kernel: audit: type=1334 audit(1712947425.345:90): prog-id=4 op=UNLOAD
Apr 12 18:43:46.266063 kernel: audit: type=1334 audit(1712947425.345:91): prog-id=5 op=UNLOAD
Apr 12 18:43:46.266085 kernel: audit: type=1334 audit(1712947425.352:92): prog-id=15 op=LOAD
Apr 12 18:43:46.266108 kernel: audit: type=1334 audit(1712947425.352:93): prog-id=12 op=UNLOAD
Apr 12 18:43:46.266132 kernel: audit: type=1334 audit(1712947425.359:94): prog-id=16 op=LOAD
Apr 12 18:43:46.266154 kernel: audit: type=1334 audit(1712947425.366:95): prog-id=17 op=LOAD
Apr 12 18:43:46.266177 systemd[1]: iscsiuio.service: Deactivated successfully.
Apr 12 18:43:46.266208 systemd[1]: Stopped iscsiuio.service.
Apr 12 18:43:46.266232 systemd[1]: iscsid.service: Deactivated successfully.
Apr 12 18:43:46.266255 systemd[1]: Stopped iscsid.service.
Apr 12 18:43:46.266281 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Apr 12 18:43:46.266305 systemd[1]: Stopped initrd-switch-root.service.
Apr 12 18:43:46.266328 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Apr 12 18:43:46.266352 systemd[1]: Created slice system-addon\x2dconfig.slice.
Apr 12 18:43:46.266375 systemd[1]: Created slice system-addon\x2drun.slice.
Apr 12 18:43:46.266406 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice.
Apr 12 18:43:46.266429 systemd[1]: Created slice system-getty.slice.
Apr 12 18:43:46.266450 systemd[1]: Created slice system-modprobe.slice.
Apr 12 18:43:46.266473 systemd[1]: Created slice system-serial\x2dgetty.slice.
Apr 12 18:43:46.266498 systemd[1]: Created slice system-system\x2dcloudinit.slice.
Apr 12 18:43:46.266526 systemd[1]: Created slice system-systemd\x2dfsck.slice.
Apr 12 18:43:46.266549 systemd[1]: Created slice user.slice.
Apr 12 18:43:46.266591 systemd[1]: Started systemd-ask-password-console.path.
Apr 12 18:43:46.266618 systemd[1]: Started systemd-ask-password-wall.path.
Apr 12 18:43:46.266647 systemd[1]: Set up automount boot.automount.
Apr 12 18:43:46.266670 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
Apr 12 18:43:46.266693 systemd[1]: Stopped target initrd-switch-root.target.
Apr 12 18:43:46.266717 systemd[1]: Stopped target initrd-fs.target.
Apr 12 18:43:46.266742 systemd[1]: Stopped target initrd-root-fs.target.
Apr 12 18:43:46.266766 systemd[1]: Reached target integritysetup.target.
Apr 12 18:43:46.266789 systemd[1]: Reached target remote-cryptsetup.target.
Apr 12 18:43:46.266824 systemd[1]: Reached target remote-fs.target.
Apr 12 18:43:46.266847 systemd[1]: Reached target slices.target.
Apr 12 18:43:46.266874 systemd[1]: Reached target swap.target.
Apr 12 18:43:46.266897 systemd[1]: Reached target torcx.target.
Apr 12 18:43:46.266940 systemd[1]: Reached target veritysetup.target.
Apr 12 18:43:46.266962 systemd[1]: Listening on systemd-coredump.socket.
Apr 12 18:43:46.266984 systemd[1]: Listening on systemd-initctl.socket.
Apr 12 18:43:46.267006 systemd[1]: Listening on systemd-networkd.socket.
Apr 12 18:43:46.267028 systemd[1]: Listening on systemd-udevd-control.socket.
Apr 12 18:43:46.267052 systemd[1]: Listening on systemd-udevd-kernel.socket.
Apr 12 18:43:46.267076 systemd[1]: Listening on systemd-userdbd.socket.
Apr 12 18:43:46.267099 systemd[1]: Mounting dev-hugepages.mount...
Apr 12 18:43:46.267128 systemd[1]: Mounting dev-mqueue.mount...
Apr 12 18:43:46.267151 systemd[1]: Mounting media.mount...
Apr 12 18:43:46.267174 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 12 18:43:46.267198 systemd[1]: Mounting sys-kernel-debug.mount...
Apr 12 18:43:46.267223 systemd[1]: Mounting sys-kernel-tracing.mount...
Apr 12 18:43:46.267246 systemd[1]: Mounting tmp.mount...
Apr 12 18:43:46.267271 systemd[1]: Starting flatcar-tmpfiles.service...
Apr 12 18:43:46.267295 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Apr 12 18:43:46.267318 systemd[1]: Starting kmod-static-nodes.service...
Apr 12 18:43:46.267346 systemd[1]: Starting modprobe@configfs.service...
Apr 12 18:43:46.267369 systemd[1]: Starting modprobe@dm_mod.service...
Apr 12 18:43:46.267393 systemd[1]: Starting modprobe@drm.service...
Apr 12 18:43:46.267416 systemd[1]: Starting modprobe@efi_pstore.service...
Apr 12 18:43:46.267440 systemd[1]: Starting modprobe@fuse.service...
Apr 12 18:43:46.267464 systemd[1]: Starting modprobe@loop.service...
Apr 12 18:43:46.267488 kernel: fuse: init (API version 7.34)
Apr 12 18:43:46.267512 kernel: loop: module loaded
Apr 12 18:43:46.267536 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Apr 12 18:43:46.267565 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Apr 12 18:43:46.267589 systemd[1]: Stopped systemd-fsck-root.service.
Apr 12 18:43:46.267614 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Apr 12 18:43:46.267638 systemd[1]: Stopped systemd-fsck-usr.service.
Apr 12 18:43:46.267661 systemd[1]: Stopped systemd-journald.service.
Apr 12 18:43:46.267696 systemd[1]: Starting systemd-journald.service...
Apr 12 18:43:46.267723 systemd[1]: Starting systemd-modules-load.service...
Apr 12 18:43:46.267746 systemd[1]: Starting systemd-network-generator.service...
Apr 12 18:43:46.267777 systemd-journald[991]: Journal started
Apr 12 18:43:46.267927 systemd-journald[991]: Runtime Journal (/run/log/journal/6c19443c0101c59c599e723260c5e385) is 8.0M, max 148.8M, 140.8M free.
Apr 12 18:43:41.583000 audit: BPF prog-id=9 op=UNLOAD
Apr 12 18:43:41.900000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1
Apr 12 18:43:42.042000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Apr 12 18:43:42.042000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Apr 12 18:43:42.042000 audit: BPF prog-id=10 op=LOAD
Apr 12 18:43:42.042000 audit: BPF prog-id=10 op=UNLOAD
Apr 12 18:43:42.042000 audit: BPF prog-id=11 op=LOAD
Apr 12 18:43:42.042000 audit: BPF prog-id=11 op=UNLOAD
Apr 12 18:43:42.200000 audit[900]: AVC avc: denied { associate } for pid=900 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023"
Apr 12 18:43:42.200000 audit[900]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c00014d8b2 a1=c0000cede0 a2=c0000d70c0 a3=32 items=0 ppid=883 pid=900 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Apr 12 18:43:42.200000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Apr 12 18:43:42.211000 audit[900]: AVC avc: denied { associate } for pid=900 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1
Apr 12 18:43:42.211000 audit[900]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c00014d989 a2=1ed a3=0 items=2 ppid=883 pid=900 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Apr 12 18:43:42.211000 audit: CWD cwd="/"
Apr 12 18:43:42.211000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 18:43:42.211000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 18:43:42.211000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Apr 12 18:43:45.332000 audit: BPF prog-id=12 op=LOAD
Apr 12 18:43:45.336000 audit: BPF prog-id=3 op=UNLOAD
Apr 12 18:43:45.338000 audit: BPF prog-id=13 op=LOAD
Apr 12 18:43:45.345000 audit: BPF prog-id=14 op=LOAD
Apr 12 18:43:45.345000 audit: BPF prog-id=4 op=UNLOAD
Apr 12 18:43:45.345000 audit: BPF prog-id=5 op=UNLOAD
Apr 12 18:43:45.352000 audit: BPF prog-id=15 op=LOAD
Apr 12 18:43:45.352000 audit: BPF prog-id=12 op=UNLOAD
Apr 12 18:43:45.359000 audit: BPF prog-id=16 op=LOAD
Apr 12 18:43:45.366000 audit: BPF prog-id=17 op=LOAD
Apr 12 18:43:45.366000 audit: BPF prog-id=13 op=UNLOAD
Apr 12 18:43:45.366000 audit: BPF prog-id=14 op=UNLOAD
Apr 12 18:43:45.373000 audit: BPF prog-id=18 op=LOAD
Apr 12 18:43:45.373000 audit: BPF prog-id=15 op=UNLOAD
Apr 12 18:43:45.380000 audit: BPF prog-id=19 op=LOAD
Apr 12 18:43:45.387000 audit: BPF prog-id=20 op=LOAD
Apr 12 18:43:45.387000 audit: BPF prog-id=16 op=UNLOAD
Apr 12 18:43:45.387000 audit: BPF prog-id=17 op=UNLOAD
Apr 12 18:43:45.387000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:43:45.419000 audit: BPF prog-id=18 op=UNLOAD
Apr 12 18:43:45.426000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:43:45.445000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:43:45.468000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:43:45.468000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:43:46.183000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:43:46.204000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:43:46.218000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:43:46.218000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:43:46.219000 audit: BPF prog-id=21 op=LOAD
Apr 12 18:43:46.219000 audit: BPF prog-id=22 op=LOAD
Apr 12 18:43:46.219000 audit: BPF prog-id=23 op=LOAD
Apr 12 18:43:46.219000 audit: BPF prog-id=19 op=UNLOAD
Apr 12 18:43:46.219000 audit: BPF prog-id=20 op=UNLOAD
Apr 12 18:43:46.260000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Apr 12 18:43:46.260000 audit[991]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=4 a1=7ffc1585cd60 a2=4000 a3=7ffc1585cdfc items=0 ppid=1 pid=991 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Apr 12 18:43:46.260000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
Apr 12 18:43:45.331143 systemd[1]: Queued start job for default target multi-user.target.
Apr 12 18:43:42.197033 /usr/lib/systemd/system-generators/torcx-generator[900]: time="2024-04-12T18:43:42Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.3 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.3 /var/lib/torcx/store]"
Apr 12 18:43:45.390217 systemd[1]: systemd-journald.service: Deactivated successfully.
Apr 12 18:43:42.198157 /usr/lib/systemd/system-generators/torcx-generator[900]: time="2024-04-12T18:43:42Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Apr 12 18:43:42.198192 /usr/lib/systemd/system-generators/torcx-generator[900]: time="2024-04-12T18:43:42Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Apr 12 18:43:42.198246 /usr/lib/systemd/system-generators/torcx-generator[900]: time="2024-04-12T18:43:42Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12"
Apr 12 18:43:42.198266 /usr/lib/systemd/system-generators/torcx-generator[900]: time="2024-04-12T18:43:42Z" level=debug msg="skipped missing lower profile" missing profile=oem
Apr 12 18:43:42.198325 /usr/lib/systemd/system-generators/torcx-generator[900]: time="2024-04-12T18:43:42Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory"
Apr 12 18:43:42.198349 /usr/lib/systemd/system-generators/torcx-generator[900]: time="2024-04-12T18:43:42Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)=
Apr 12 18:43:42.198642 /usr/lib/systemd/system-generators/torcx-generator[900]: time="2024-04-12T18:43:42Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack
Apr 12 18:43:42.198711 /usr/lib/systemd/system-generators/torcx-generator[900]: time="2024-04-12T18:43:42Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Apr 12 18:43:42.198736 /usr/lib/systemd/system-generators/torcx-generator[900]: time="2024-04-12T18:43:42Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Apr 12 18:43:42.200834 /usr/lib/systemd/system-generators/torcx-generator[900]: time="2024-04-12T18:43:42Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10
Apr 12 18:43:42.200922 /usr/lib/systemd/system-generators/torcx-generator[900]: time="2024-04-12T18:43:42Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl
Apr 12 18:43:42.200962 /usr/lib/systemd/system-generators/torcx-generator[900]: time="2024-04-12T18:43:42Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.3: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.3
Apr 12 18:43:42.201410 /usr/lib/systemd/system-generators/torcx-generator[900]: time="2024-04-12T18:43:42Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store
Apr 12 18:43:42.201447 /usr/lib/systemd/system-generators/torcx-generator[900]: time="2024-04-12T18:43:42Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.3: no such file or directory" path=/var/lib/torcx/store/3510.3.3
Apr 12 18:43:42.201476 /usr/lib/systemd/system-generators/torcx-generator[900]: time="2024-04-12T18:43:42Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store
Apr 12 18:43:44.720805 /usr/lib/systemd/system-generators/torcx-generator[900]: time="2024-04-12T18:43:44Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Apr 12 18:43:44.721280 /usr/lib/systemd/system-generators/torcx-generator[900]: time="2024-04-12T18:43:44Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Apr 12 18:43:44.721429 /usr/lib/systemd/system-generators/torcx-generator[900]: time="2024-04-12T18:43:44Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Apr 12 18:43:44.721665 /usr/lib/systemd/system-generators/torcx-generator[900]: time="2024-04-12T18:43:44Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Apr 12 18:43:44.721732 /usr/lib/systemd/system-generators/torcx-generator[900]: time="2024-04-12T18:43:44Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile=
Apr 12 18:43:44.721802 /usr/lib/systemd/system-generators/torcx-generator[900]: time="2024-04-12T18:43:44Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx
Apr 12 18:43:46.279162 systemd[1]: Starting systemd-remount-fs.service...
Apr 12 18:43:46.295009 systemd[1]: Starting systemd-udev-trigger.service...
Apr 12 18:43:46.314424 systemd[1]: verity-setup.service: Deactivated successfully.
Apr 12 18:43:46.314542 systemd[1]: Stopped verity-setup.service.
Apr 12 18:43:46.320000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:43:46.334938 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 12 18:43:46.344953 systemd[1]: Started systemd-journald.service.
Apr 12 18:43:46.351000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:43:46.354408 systemd[1]: Mounted dev-hugepages.mount.
Apr 12 18:43:46.362287 systemd[1]: Mounted dev-mqueue.mount.
Apr 12 18:43:46.370285 systemd[1]: Mounted media.mount.
Apr 12 18:43:46.377253 systemd[1]: Mounted sys-kernel-debug.mount.
Apr 12 18:43:46.386217 systemd[1]: Mounted sys-kernel-tracing.mount.
Apr 12 18:43:46.395261 systemd[1]: Mounted tmp.mount.
Apr 12 18:43:46.402404 systemd[1]: Finished flatcar-tmpfiles.service.
Apr 12 18:43:46.409000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:43:46.411470 systemd[1]: Finished kmod-static-nodes.service.
Apr 12 18:43:46.418000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:43:46.420463 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Apr 12 18:43:46.420682 systemd[1]: Finished modprobe@configfs.service.
Apr 12 18:43:46.427000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:43:46.427000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:43:46.429505 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 12 18:43:46.429727 systemd[1]: Finished modprobe@dm_mod.service.
Apr 12 18:43:46.436000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:43:46.436000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:43:46.438463 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 12 18:43:46.438675 systemd[1]: Finished modprobe@drm.service.
Apr 12 18:43:46.445000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:43:46.445000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:43:46.447544 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 12 18:43:46.447760 systemd[1]: Finished modprobe@efi_pstore.service.
Apr 12 18:43:46.454000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:43:46.454000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:43:46.456492 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Apr 12 18:43:46.456726 systemd[1]: Finished modprobe@fuse.service.
Apr 12 18:43:46.464000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:43:46.464000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:43:46.466462 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 12 18:43:46.466675 systemd[1]: Finished modprobe@loop.service.
Apr 12 18:43:46.473000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:43:46.473000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:43:46.475540 systemd[1]: Finished systemd-modules-load.service.
Apr 12 18:43:46.482000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:43:46.484469 systemd[1]: Finished systemd-network-generator.service.
Apr 12 18:43:46.491000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:43:46.493507 systemd[1]: Finished systemd-remount-fs.service.
Apr 12 18:43:46.500000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:43:46.502522 systemd[1]: Finished systemd-udev-trigger.service.
Apr 12 18:43:46.509000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:43:46.511925 systemd[1]: Reached target network-pre.target.
Apr 12 18:43:46.521649 systemd[1]: Mounting sys-fs-fuse-connections.mount...
Apr 12 18:43:46.531639 systemd[1]: Mounting sys-kernel-config.mount...
Apr 12 18:43:46.539078 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Apr 12 18:43:46.542122 systemd[1]: Starting systemd-hwdb-update.service...
Apr 12 18:43:46.550861 systemd[1]: Starting systemd-journal-flush.service...
Apr 12 18:43:46.560096 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 12 18:43:46.561927 systemd[1]: Starting systemd-random-seed.service...
Apr 12 18:43:46.570980 systemd-journald[991]: Time spent on flushing to /var/log/journal/6c19443c0101c59c599e723260c5e385 is 66.753ms for 1194 entries.
Apr 12 18:43:46.570980 systemd-journald[991]: System Journal (/var/log/journal/6c19443c0101c59c599e723260c5e385) is 8.0M, max 584.8M, 576.8M free.
Apr 12 18:43:46.675035 systemd-journald[991]: Received client request to flush runtime journal.
Apr 12 18:43:46.632000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:43:46.642000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:43:46.665000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:43:46.569118 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Apr 12 18:43:46.571009 systemd[1]: Starting systemd-sysctl.service...
Apr 12 18:43:46.587943 systemd[1]: Starting systemd-sysusers.service...
Apr 12 18:43:46.596991 systemd[1]: Starting systemd-udev-settle.service...
Apr 12 18:43:46.679076 udevadm[1005]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Apr 12 18:43:46.607418 systemd[1]: Mounted sys-fs-fuse-connections.mount.
Apr 12 18:43:46.616196 systemd[1]: Mounted sys-kernel-config.mount.
Apr 12 18:43:46.625442 systemd[1]: Finished systemd-random-seed.service.
Apr 12 18:43:46.634563 systemd[1]: Finished systemd-sysctl.service.
Apr 12 18:43:46.647765 systemd[1]: Reached target first-boot-complete.target.
Apr 12 18:43:46.658482 systemd[1]: Finished systemd-sysusers.service.
Apr 12 18:43:46.676479 systemd[1]: Finished systemd-journal-flush.service.
Apr 12 18:43:46.683000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:43:47.251398 systemd[1]: Finished systemd-hwdb-update.service.
Apr 12 18:43:47.258000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:43:47.259000 audit: BPF prog-id=24 op=LOAD
Apr 12 18:43:47.259000 audit: BPF prog-id=25 op=LOAD
Apr 12 18:43:47.259000 audit: BPF prog-id=7 op=UNLOAD
Apr 12 18:43:47.259000 audit: BPF prog-id=8 op=UNLOAD
Apr 12 18:43:47.262054 systemd[1]: Starting systemd-udevd.service...
Apr 12 18:43:47.286916 systemd-udevd[1009]: Using default interface naming scheme 'v252'.
Apr 12 18:43:47.336674 systemd[1]: Started systemd-udevd.service.
Apr 12 18:43:47.343000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:43:47.344000 audit: BPF prog-id=26 op=LOAD
Apr 12 18:43:47.347891 systemd[1]: Starting systemd-networkd.service...
Apr 12 18:43:47.362000 audit: BPF prog-id=27 op=LOAD
Apr 12 18:43:47.362000 audit: BPF prog-id=28 op=LOAD
Apr 12 18:43:47.362000 audit: BPF prog-id=29 op=LOAD
Apr 12 18:43:47.365357 systemd[1]: Starting systemd-userdbd.service...
Apr 12 18:43:47.421009 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped.
Apr 12 18:43:47.422252 systemd[1]: Started systemd-userdbd.service.
Apr 12 18:43:47.429000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success'
Apr 12 18:43:47.539088 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Apr 12 18:43:47.576101 systemd-networkd[1023]: lo: Link UP
Apr 12 18:43:47.576116 systemd-networkd[1023]: lo: Gained carrier
Apr 12 18:43:47.577555 systemd-networkd[1023]: Enumeration completed
Apr 12 18:43:47.577837 systemd[1]: Started systemd-networkd.service.
Apr 12 18:43:47.579977 systemd-networkd[1023]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 12 18:43:47.582229 systemd-networkd[1023]: eth0: Link UP
Apr 12 18:43:47.582241 systemd-networkd[1023]: eth0: Gained carrier
Apr 12 18:43:47.585000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:43:47.595178 systemd-networkd[1023]: eth0: DHCPv4 address 10.128.0.15/32, gateway 10.128.0.1 acquired from 169.254.169.254
Apr 12 18:43:47.610939 kernel: ACPI: button: Power Button [PWRF]
Apr 12 18:43:47.619933 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input3
Apr 12 18:43:47.630008 kernel: ACPI: button: Sleep Button [SLPF]
Apr 12 18:43:47.644973 kernel: BTRFS info: devid 1 device path /dev/disk/by-label/OEM changed to /dev/sda6 scanned by (udev-worker) (1015)
Apr 12 18:43:47.608000 audit[1016]: AVC avc: denied { confidentiality } for pid=1016 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1
Apr 12 18:43:47.608000 audit[1016]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=55c6e9d98d40 a1=32194 a2=7f3443435bc5 a3=5 items=108 ppid=1009 pid=1016 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null)
Apr 12 18:43:47.608000 audit: CWD 
cwd="/" Apr 12 18:43:47.608000 audit: PATH item=0 name=(null) inode=31 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:43:47.608000 audit: PATH item=1 name=(null) inode=14190 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:43:47.608000 audit: PATH item=2 name=(null) inode=14190 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:43:47.608000 audit: PATH item=3 name=(null) inode=14191 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:43:47.608000 audit: PATH item=4 name=(null) inode=14190 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:43:47.608000 audit: PATH item=5 name=(null) inode=14192 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:43:47.608000 audit: PATH item=6 name=(null) inode=14190 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:43:47.608000 audit: PATH item=7 name=(null) inode=14193 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:43:47.608000 audit: PATH item=8 name=(null) inode=14193 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:43:47.608000 audit: PATH item=9 
name=(null) inode=14194 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:43:47.608000 audit: PATH item=10 name=(null) inode=14193 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:43:47.608000 audit: PATH item=11 name=(null) inode=14195 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:43:47.608000 audit: PATH item=12 name=(null) inode=14193 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:43:47.608000 audit: PATH item=13 name=(null) inode=14196 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:43:47.608000 audit: PATH item=14 name=(null) inode=14193 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:43:47.608000 audit: PATH item=15 name=(null) inode=14197 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:43:47.608000 audit: PATH item=16 name=(null) inode=14193 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:43:47.608000 audit: PATH item=17 name=(null) inode=14198 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:43:47.608000 audit: PATH item=18 name=(null) inode=14190 dev=00:0b 
mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:43:47.608000 audit: PATH item=19 name=(null) inode=14199 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:43:47.608000 audit: PATH item=20 name=(null) inode=14199 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:43:47.608000 audit: PATH item=21 name=(null) inode=14200 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:43:47.608000 audit: PATH item=22 name=(null) inode=14199 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:43:47.608000 audit: PATH item=23 name=(null) inode=14201 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:43:47.608000 audit: PATH item=24 name=(null) inode=14199 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:43:47.608000 audit: PATH item=25 name=(null) inode=14202 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:43:47.608000 audit: PATH item=26 name=(null) inode=14199 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:43:47.608000 audit: PATH item=27 name=(null) inode=14203 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:43:47.608000 audit: PATH item=28 name=(null) inode=14199 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:43:47.608000 audit: PATH item=29 name=(null) inode=14204 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:43:47.608000 audit: PATH item=30 name=(null) inode=14190 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:43:47.608000 audit: PATH item=31 name=(null) inode=14205 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:43:47.608000 audit: PATH item=32 name=(null) inode=14205 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:43:47.608000 audit: PATH item=33 name=(null) inode=14206 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:43:47.608000 audit: PATH item=34 name=(null) inode=14205 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:43:47.608000 audit: PATH item=35 name=(null) inode=14207 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:43:47.608000 audit: PATH item=36 name=(null) inode=14205 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:43:47.608000 audit: PATH item=37 name=(null) inode=14208 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:43:47.608000 audit: PATH item=38 name=(null) inode=14205 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:43:47.608000 audit: PATH item=39 name=(null) inode=14209 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:43:47.608000 audit: PATH item=40 name=(null) inode=14205 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:43:47.608000 audit: PATH item=41 name=(null) inode=14210 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:43:47.608000 audit: PATH item=42 name=(null) inode=14190 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:43:47.608000 audit: PATH item=43 name=(null) inode=14211 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:43:47.608000 audit: PATH item=44 name=(null) inode=14211 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:43:47.608000 audit: PATH item=45 name=(null) inode=14212 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 
cap_fver=0 cap_frootid=0 Apr 12 18:43:47.608000 audit: PATH item=46 name=(null) inode=14211 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:43:47.608000 audit: PATH item=47 name=(null) inode=14213 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:43:47.608000 audit: PATH item=48 name=(null) inode=14211 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:43:47.608000 audit: PATH item=49 name=(null) inode=14214 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:43:47.608000 audit: PATH item=50 name=(null) inode=14211 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:43:47.608000 audit: PATH item=51 name=(null) inode=14215 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:43:47.608000 audit: PATH item=52 name=(null) inode=14211 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:43:47.608000 audit: PATH item=53 name=(null) inode=14216 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:43:47.608000 audit: PATH item=54 name=(null) inode=31 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 
18:43:47.608000 audit: PATH item=55 name=(null) inode=14217 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:43:47.608000 audit: PATH item=56 name=(null) inode=14217 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:43:47.608000 audit: PATH item=57 name=(null) inode=14218 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:43:47.608000 audit: PATH item=58 name=(null) inode=14217 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:43:47.608000 audit: PATH item=59 name=(null) inode=14219 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:43:47.608000 audit: PATH item=60 name=(null) inode=14217 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:43:47.608000 audit: PATH item=61 name=(null) inode=14220 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:43:47.608000 audit: PATH item=62 name=(null) inode=14220 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:43:47.608000 audit: PATH item=63 name=(null) inode=14221 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:43:47.608000 audit: PATH item=64 
name=(null) inode=14220 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:43:47.608000 audit: PATH item=65 name=(null) inode=14222 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:43:47.608000 audit: PATH item=66 name=(null) inode=14220 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:43:47.608000 audit: PATH item=67 name=(null) inode=14223 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:43:47.608000 audit: PATH item=68 name=(null) inode=14220 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:43:47.608000 audit: PATH item=69 name=(null) inode=14224 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:43:47.608000 audit: PATH item=70 name=(null) inode=14220 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:43:47.608000 audit: PATH item=71 name=(null) inode=14225 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:43:47.608000 audit: PATH item=72 name=(null) inode=14217 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:43:47.608000 audit: PATH item=73 name=(null) inode=14226 dev=00:0b 
mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:43:47.608000 audit: PATH item=74 name=(null) inode=14226 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:43:47.608000 audit: PATH item=75 name=(null) inode=14227 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:43:47.608000 audit: PATH item=76 name=(null) inode=14226 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:43:47.608000 audit: PATH item=77 name=(null) inode=14228 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:43:47.608000 audit: PATH item=78 name=(null) inode=14226 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:43:47.608000 audit: PATH item=79 name=(null) inode=14229 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:43:47.608000 audit: PATH item=80 name=(null) inode=14226 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:43:47.608000 audit: PATH item=81 name=(null) inode=14230 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:43:47.608000 audit: PATH item=82 name=(null) inode=14226 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:43:47.608000 audit: PATH item=83 name=(null) inode=14231 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:43:47.608000 audit: PATH item=84 name=(null) inode=14217 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:43:47.608000 audit: PATH item=85 name=(null) inode=14232 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:43:47.608000 audit: PATH item=86 name=(null) inode=14232 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:43:47.608000 audit: PATH item=87 name=(null) inode=14233 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:43:47.608000 audit: PATH item=88 name=(null) inode=14232 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:43:47.608000 audit: PATH item=89 name=(null) inode=14234 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:43:47.608000 audit: PATH item=90 name=(null) inode=14232 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:43:47.608000 audit: PATH item=91 name=(null) inode=14235 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:43:47.608000 audit: PATH item=92 name=(null) inode=14232 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:43:47.608000 audit: PATH item=93 name=(null) inode=14236 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:43:47.608000 audit: PATH item=94 name=(null) inode=14232 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:43:47.608000 audit: PATH item=95 name=(null) inode=14237 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:43:47.608000 audit: PATH item=96 name=(null) inode=14217 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:43:47.608000 audit: PATH item=97 name=(null) inode=14238 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:43:47.608000 audit: PATH item=98 name=(null) inode=14238 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:43:47.608000 audit: PATH item=99 name=(null) inode=14239 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:43:47.608000 audit: PATH item=100 name=(null) inode=14238 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 
cap_fver=0 cap_frootid=0 Apr 12 18:43:47.608000 audit: PATH item=101 name=(null) inode=14240 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:43:47.608000 audit: PATH item=102 name=(null) inode=14238 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:43:47.608000 audit: PATH item=103 name=(null) inode=14241 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:43:47.608000 audit: PATH item=104 name=(null) inode=14238 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:43:47.608000 audit: PATH item=105 name=(null) inode=14242 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:43:47.608000 audit: PATH item=106 name=(null) inode=14238 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:43:47.608000 audit: PATH item=107 name=(null) inode=14243 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:43:47.608000 audit: PROCTITLE proctitle="(udev-worker)" Apr 12 18:43:47.722926 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr Apr 12 18:43:47.744976 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Apr 12 18:43:47.746735 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. 
Apr 12 18:43:47.754929 kernel: EDAC MC: Ver: 3.0.0 Apr 12 18:43:47.764932 kernel: mousedev: PS/2 mouse device common for all mice Apr 12 18:43:47.780479 systemd[1]: Finished systemd-udev-settle.service. Apr 12 18:43:47.787000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:43:47.790872 systemd[1]: Starting lvm2-activation-early.service... Apr 12 18:43:47.819880 lvm[1045]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Apr 12 18:43:47.849309 systemd[1]: Finished lvm2-activation-early.service. Apr 12 18:43:47.856000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:43:47.858327 systemd[1]: Reached target cryptsetup.target. Apr 12 18:43:47.868685 systemd[1]: Starting lvm2-activation.service... Apr 12 18:43:47.874744 lvm[1046]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Apr 12 18:43:47.906411 systemd[1]: Finished lvm2-activation.service. Apr 12 18:43:47.913000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:43:47.915263 systemd[1]: Reached target local-fs-pre.target. Apr 12 18:43:47.924053 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Apr 12 18:43:47.924106 systemd[1]: Reached target local-fs.target. Apr 12 18:43:47.932081 systemd[1]: Reached target machines.target. Apr 12 18:43:47.942778 systemd[1]: Starting ldconfig.service... 
Apr 12 18:43:47.952793 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Apr 12 18:43:47.952917 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Apr 12 18:43:47.954829 systemd[1]: Starting systemd-boot-update.service... Apr 12 18:43:47.964048 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Apr 12 18:43:47.976312 systemd[1]: Starting systemd-machine-id-commit.service... Apr 12 18:43:47.976781 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met. Apr 12 18:43:47.976938 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met. Apr 12 18:43:47.979000 systemd[1]: Starting systemd-tmpfiles-setup.service... Apr 12 18:43:47.979967 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1048 (bootctl) Apr 12 18:43:47.983050 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Apr 12 18:43:48.005000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:43:48.007750 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Apr 12 18:43:48.033163 systemd-tmpfiles[1052]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Apr 12 18:43:48.044885 systemd-tmpfiles[1052]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Apr 12 18:43:48.061992 systemd-tmpfiles[1052]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. 
Apr 12 18:43:48.140691 systemd-fsck[1056]: fsck.fat 4.2 (2021-01-31) Apr 12 18:43:48.140691 systemd-fsck[1056]: /dev/sda1: 789 files, 119240/258078 clusters Apr 12 18:43:48.144317 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Apr 12 18:43:48.152000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:43:48.156115 systemd[1]: Mounting boot.mount... Apr 12 18:43:48.175451 systemd[1]: Mounted boot.mount. Apr 12 18:43:48.199960 systemd[1]: Finished systemd-boot-update.service. Apr 12 18:43:48.206000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:43:48.342218 systemd[1]: Finished systemd-tmpfiles-setup.service. Apr 12 18:43:48.349000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:43:48.353274 systemd[1]: Starting audit-rules.service... Apr 12 18:43:48.361912 systemd[1]: Starting clean-ca-certificates.service... Apr 12 18:43:48.372115 systemd[1]: Starting oem-gce-enable-oslogin.service... Apr 12 18:43:48.383149 systemd[1]: Starting systemd-journal-catalog-update.service... Apr 12 18:43:48.394000 audit: BPF prog-id=30 op=LOAD Apr 12 18:43:48.397769 systemd[1]: Starting systemd-resolved.service... Apr 12 18:43:48.404000 audit: BPF prog-id=31 op=LOAD Apr 12 18:43:48.408221 systemd[1]: Starting systemd-timesyncd.service... Apr 12 18:43:48.418141 systemd[1]: Starting systemd-update-utmp.service... Apr 12 18:43:48.426007 systemd[1]: Finished clean-ca-certificates.service. 
Apr 12 18:43:48.433000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:43:48.433000 audit[1079]: SYSTEM_BOOT pid=1079 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Apr 12 18:43:48.435618 systemd[1]: oem-gce-enable-oslogin.service: Deactivated successfully. Apr 12 18:43:48.435891 systemd[1]: Finished oem-gce-enable-oslogin.service. Apr 12 18:43:48.444000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=oem-gce-enable-oslogin comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:43:48.444000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=oem-gce-enable-oslogin comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:43:48.449949 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Apr 12 18:43:48.453286 systemd[1]: Finished systemd-update-utmp.service. Apr 12 18:43:48.460000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:43:48.488431 systemd[1]: Finished systemd-journal-catalog-update.service. Apr 12 18:43:48.497000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 12 18:43:48.503000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Apr 12 18:43:48.503000 audit[1090]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffdef7a2820 a2=420 a3=0 items=0 ppid=1060 pid=1090 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 12 18:43:48.503000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Apr 12 18:43:48.506111 augenrules[1090]: No rules Apr 12 18:43:48.507895 systemd[1]: Finished audit-rules.service. Apr 12 18:43:48.520778 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Apr 12 18:43:48.523561 systemd[1]: Finished systemd-machine-id-commit.service. Apr 12 18:43:48.594169 systemd[1]: Started systemd-timesyncd.service. Apr 12 18:43:48.595953 systemd-timesyncd[1076]: Contacted time server 169.254.169.254:123 (169.254.169.254). Apr 12 18:43:48.596400 systemd-timesyncd[1076]: Initial clock synchronization to Fri 2024-04-12 18:43:48.350797 UTC. Apr 12 18:43:48.598161 systemd-resolved[1072]: Positive Trust Anchors: Apr 12 18:43:48.598181 systemd-resolved[1072]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 12 18:43:48.598232 systemd-resolved[1072]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Apr 12 18:43:48.603348 systemd[1]: Reached target time-set.target. 
Apr 12 18:43:48.628768 systemd-resolved[1072]: Defaulting to hostname 'linux'. Apr 12 18:43:48.631575 systemd[1]: Started systemd-resolved.service. Apr 12 18:43:48.640306 ldconfig[1047]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Apr 12 18:43:48.640199 systemd[1]: Reached target network.target. Apr 12 18:43:48.649114 systemd[1]: Reached target nss-lookup.target. Apr 12 18:43:48.658427 systemd[1]: Finished ldconfig.service. Apr 12 18:43:48.666873 systemd[1]: Starting systemd-update-done.service... Apr 12 18:43:48.676916 systemd[1]: Finished systemd-update-done.service. Apr 12 18:43:48.686278 systemd[1]: Reached target sysinit.target. Apr 12 18:43:48.695214 systemd[1]: Started motdgen.path. Apr 12 18:43:48.702169 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Apr 12 18:43:48.712347 systemd[1]: Started logrotate.timer. Apr 12 18:43:48.719272 systemd[1]: Started mdadm.timer. Apr 12 18:43:48.726157 systemd[1]: Started systemd-tmpfiles-clean.timer. Apr 12 18:43:48.735124 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Apr 12 18:43:48.735195 systemd[1]: Reached target paths.target. Apr 12 18:43:48.742121 systemd[1]: Reached target timers.target. Apr 12 18:43:48.749548 systemd[1]: Listening on dbus.socket. Apr 12 18:43:48.758537 systemd[1]: Starting docker.socket... Apr 12 18:43:48.770514 systemd[1]: Listening on sshd.socket. Apr 12 18:43:48.778289 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Apr 12 18:43:48.779153 systemd[1]: Listening on docker.socket. Apr 12 18:43:48.786273 systemd[1]: Reached target sockets.target. Apr 12 18:43:48.795123 systemd[1]: Reached target basic.target. 
Apr 12 18:43:48.802181 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Apr 12 18:43:48.802240 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Apr 12 18:43:48.803972 systemd[1]: Starting containerd.service... Apr 12 18:43:48.812646 systemd[1]: Starting coreos-metadata-sshkeys@core.service... Apr 12 18:43:48.824655 systemd[1]: Starting dbus.service... Apr 12 18:43:48.832622 systemd[1]: Starting enable-oem-cloudinit.service... Apr 12 18:43:48.844991 systemd[1]: Starting extend-filesystems.service... Apr 12 18:43:48.852095 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Apr 12 18:43:48.857827 jq[1102]: false Apr 12 18:43:48.855176 systemd[1]: Starting motdgen.service... Apr 12 18:43:48.886153 extend-filesystems[1104]: Found sda Apr 12 18:43:48.886153 extend-filesystems[1104]: Found sda1 Apr 12 18:43:48.886153 extend-filesystems[1104]: Found sda2 Apr 12 18:43:48.886153 extend-filesystems[1104]: Found sda3 Apr 12 18:43:48.886153 extend-filesystems[1104]: Found usr Apr 12 18:43:48.886153 extend-filesystems[1104]: Found sda4 Apr 12 18:43:48.886153 extend-filesystems[1104]: Found sda6 Apr 12 18:43:48.886153 extend-filesystems[1104]: Found sda7 Apr 12 18:43:48.886153 extend-filesystems[1104]: Found sda9 Apr 12 18:43:48.886153 extend-filesystems[1104]: Checking size of /dev/sda9 Apr 12 18:43:49.142062 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 2538491 blocks Apr 12 18:43:49.142162 kernel: EXT4-fs (sda9): resized filesystem to 2538491 Apr 12 18:43:48.862035 systemd[1]: Starting oem-gce.service... Apr 12 18:43:49.043977 dbus-daemon[1101]: [system] SELinux support is enabled Apr 12 18:43:49.151468 extend-filesystems[1104]: Resized partition /dev/sda9 Apr 12 18:43:48.870911 systemd[1]: Starting prepare-cni-plugins.service... 
Apr 12 18:43:49.047916 dbus-daemon[1101]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1023 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Apr 12 18:43:49.189697 extend-filesystems[1139]: resize2fs 1.46.5 (30-Dec-2021) Apr 12 18:43:49.189697 extend-filesystems[1139]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Apr 12 18:43:49.189697 extend-filesystems[1139]: old_desc_blocks = 1, new_desc_blocks = 2 Apr 12 18:43:49.189697 extend-filesystems[1139]: The filesystem on /dev/sda9 is now 2538491 (4k) blocks long. Apr 12 18:43:48.882530 systemd[1]: Starting prepare-critools.service... Apr 12 18:43:49.065233 dbus-daemon[1101]: [system] Successfully activated service 'org.freedesktop.systemd1' Apr 12 18:43:49.246585 extend-filesystems[1104]: Resized filesystem in /dev/sda9 Apr 12 18:43:48.890809 systemd[1]: Starting prepare-helm.service... Apr 12 18:43:48.899682 systemd[1]: Starting ssh-key-proc-cmdline.service... Apr 12 18:43:48.916292 systemd[1]: Starting sshd-keygen.service... Apr 12 18:43:48.923768 systemd[1]: Starting systemd-logind.service... Apr 12 18:43:48.930330 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Apr 12 18:43:49.259365 update_engine[1126]: I0412 18:43:49.117998 1126 main.cc:92] Flatcar Update Engine starting Apr 12 18:43:49.259365 update_engine[1126]: I0412 18:43:49.123960 1126 update_check_scheduler.cc:74] Next update check in 3m58s Apr 12 18:43:48.930447 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionSecurity=!tpm2). Apr 12 18:43:49.259971 jq[1128]: true Apr 12 18:43:48.931289 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. 
See cgroup-compat debug messages for details. Apr 12 18:43:48.932733 systemd[1]: Starting update-engine.service... Apr 12 18:43:49.260613 tar[1133]: ./ Apr 12 18:43:49.260613 tar[1133]: ./loopback Apr 12 18:43:48.944718 systemd[1]: Starting update-ssh-keys-after-ignition.service... Apr 12 18:43:48.956871 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Apr 12 18:43:49.261482 tar[1134]: crictl Apr 12 18:43:48.957228 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Apr 12 18:43:49.261987 tar[1140]: linux-amd64/helm Apr 12 18:43:48.969778 systemd[1]: motdgen.service: Deactivated successfully. Apr 12 18:43:48.970388 systemd[1]: Finished motdgen.service. Apr 12 18:43:49.262722 jq[1142]: true Apr 12 18:43:49.011016 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Apr 12 18:43:49.266220 mkfs.ext4[1144]: mke2fs 1.46.5 (30-Dec-2021) Apr 12 18:43:49.266220 mkfs.ext4[1144]: Discarding device blocks: done Apr 12 18:43:49.266220 mkfs.ext4[1144]: Creating filesystem with 262144 4k blocks and 65536 inodes Apr 12 18:43:49.266220 mkfs.ext4[1144]: Filesystem UUID: e4cd0c89-ac7e-44d9-9e46-e36acf56f68d Apr 12 18:43:49.266220 mkfs.ext4[1144]: Superblock backups stored on blocks: Apr 12 18:43:49.266220 mkfs.ext4[1144]: 32768, 98304, 163840, 229376 Apr 12 18:43:49.266220 mkfs.ext4[1144]: Allocating group tables: done Apr 12 18:43:49.266220 mkfs.ext4[1144]: Writing inode tables: done Apr 12 18:43:49.266220 mkfs.ext4[1144]: Creating journal (8192 blocks): done Apr 12 18:43:49.266220 mkfs.ext4[1144]: Writing superblocks and filesystem accounting information: done Apr 12 18:43:49.011281 systemd[1]: Finished 
ssh-key-proc-cmdline.service. Apr 12 18:43:49.044298 systemd[1]: Started dbus.service. Apr 12 18:43:49.267199 bash[1164]: Updated "/home/core/.ssh/authorized_keys" Apr 12 18:43:49.064176 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Apr 12 18:43:49.277984 kernel: loop0: detected capacity change from 0 to 2097152 Apr 12 18:43:49.064216 systemd[1]: Reached target system-config.target. Apr 12 18:43:49.072186 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Apr 12 18:43:49.278669 umount[1170]: umount: /var/lib/flatcar-oem-gce.img: not mounted. Apr 12 18:43:49.072234 systemd[1]: Reached target user-config.target. Apr 12 18:43:49.140759 systemd[1]: Starting systemd-hostnamed.service... Apr 12 18:43:49.148548 systemd[1]: extend-filesystems.service: Deactivated successfully. Apr 12 18:43:49.148806 systemd[1]: Finished extend-filesystems.service. Apr 12 18:43:49.158943 systemd[1]: Started update-engine.service. Apr 12 18:43:49.172615 systemd[1]: Started locksmithd.service. Apr 12 18:43:49.213243 systemd[1]: Finished update-ssh-keys-after-ignition.service. Apr 12 18:43:49.298897 systemd-logind[1124]: Watching system buttons on /dev/input/event1 (Power Button) Apr 12 18:43:49.304215 env[1143]: time="2024-04-12T18:43:49.304145997Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Apr 12 18:43:49.304741 systemd-logind[1124]: Watching system buttons on /dev/input/event2 (Sleep Button) Apr 12 18:43:49.304784 systemd-logind[1124]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Apr 12 18:43:49.305277 systemd-logind[1124]: New seat seat0. Apr 12 18:43:49.307760 systemd[1]: Started systemd-logind.service. 
Apr 12 18:43:49.345952 kernel: EXT4-fs (loop0): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Apr 12 18:43:49.402061 env[1143]: time="2024-04-12T18:43:49.401732541Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Apr 12 18:43:49.402061 env[1143]: time="2024-04-12T18:43:49.401982104Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Apr 12 18:43:49.404011 env[1143]: time="2024-04-12T18:43:49.403960463Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.154-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Apr 12 18:43:49.404011 env[1143]: time="2024-04-12T18:43:49.404007270Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Apr 12 18:43:49.404323 env[1143]: time="2024-04-12T18:43:49.404284393Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 12 18:43:49.404400 env[1143]: time="2024-04-12T18:43:49.404323250Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Apr 12 18:43:49.404400 env[1143]: time="2024-04-12T18:43:49.404352933Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Apr 12 18:43:49.404400 env[1143]: time="2024-04-12T18:43:49.404371156Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." 
type=io.containerd.snapshotter.v1 Apr 12 18:43:49.404550 env[1143]: time="2024-04-12T18:43:49.404484175Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Apr 12 18:43:49.405171 env[1143]: time="2024-04-12T18:43:49.404805408Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Apr 12 18:43:49.405171 env[1143]: time="2024-04-12T18:43:49.405035150Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 12 18:43:49.405171 env[1143]: time="2024-04-12T18:43:49.405063697Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Apr 12 18:43:49.405171 env[1143]: time="2024-04-12T18:43:49.405143161Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Apr 12 18:43:49.405171 env[1143]: time="2024-04-12T18:43:49.405163471Z" level=info msg="metadata content store policy set" policy=shared Apr 12 18:43:49.421365 env[1143]: time="2024-04-12T18:43:49.419012919Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Apr 12 18:43:49.421365 env[1143]: time="2024-04-12T18:43:49.419069087Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Apr 12 18:43:49.421365 env[1143]: time="2024-04-12T18:43:49.419092787Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Apr 12 18:43:49.421365 env[1143]: time="2024-04-12T18:43:49.419142165Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." 
type=io.containerd.service.v1 Apr 12 18:43:49.421365 env[1143]: time="2024-04-12T18:43:49.419191410Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Apr 12 18:43:49.421365 env[1143]: time="2024-04-12T18:43:49.419215531Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Apr 12 18:43:49.421365 env[1143]: time="2024-04-12T18:43:49.419238738Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Apr 12 18:43:49.421365 env[1143]: time="2024-04-12T18:43:49.419260391Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Apr 12 18:43:49.421365 env[1143]: time="2024-04-12T18:43:49.419282591Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Apr 12 18:43:49.421365 env[1143]: time="2024-04-12T18:43:49.419304881Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Apr 12 18:43:49.421365 env[1143]: time="2024-04-12T18:43:49.419328351Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Apr 12 18:43:49.421365 env[1143]: time="2024-04-12T18:43:49.419348998Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Apr 12 18:43:49.421365 env[1143]: time="2024-04-12T18:43:49.419502295Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Apr 12 18:43:49.421365 env[1143]: time="2024-04-12T18:43:49.419624317Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Apr 12 18:43:49.423297 env[1143]: time="2024-04-12T18:43:49.420009385Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." 
type=io.containerd.service.v1 Apr 12 18:43:49.423297 env[1143]: time="2024-04-12T18:43:49.420054643Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Apr 12 18:43:49.423297 env[1143]: time="2024-04-12T18:43:49.420079965Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Apr 12 18:43:49.423297 env[1143]: time="2024-04-12T18:43:49.420152469Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Apr 12 18:43:49.423297 env[1143]: time="2024-04-12T18:43:49.420177395Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Apr 12 18:43:49.423297 env[1143]: time="2024-04-12T18:43:49.420198692Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Apr 12 18:43:49.423297 env[1143]: time="2024-04-12T18:43:49.420218647Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Apr 12 18:43:49.423297 env[1143]: time="2024-04-12T18:43:49.420239131Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Apr 12 18:43:49.423297 env[1143]: time="2024-04-12T18:43:49.420259837Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Apr 12 18:43:49.423297 env[1143]: time="2024-04-12T18:43:49.420280345Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Apr 12 18:43:49.423297 env[1143]: time="2024-04-12T18:43:49.420301879Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Apr 12 18:43:49.423297 env[1143]: time="2024-04-12T18:43:49.420323800Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." 
type=io.containerd.internal.v1 Apr 12 18:43:49.423297 env[1143]: time="2024-04-12T18:43:49.420498948Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Apr 12 18:43:49.423297 env[1143]: time="2024-04-12T18:43:49.420523770Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Apr 12 18:43:49.423297 env[1143]: time="2024-04-12T18:43:49.420548291Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Apr 12 18:43:49.423967 env[1143]: time="2024-04-12T18:43:49.420569321Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Apr 12 18:43:49.423967 env[1143]: time="2024-04-12T18:43:49.420593485Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Apr 12 18:43:49.423967 env[1143]: time="2024-04-12T18:43:49.420612750Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Apr 12 18:43:49.423967 env[1143]: time="2024-04-12T18:43:49.420641367Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Apr 12 18:43:49.423967 env[1143]: time="2024-04-12T18:43:49.420688751Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Apr 12 18:43:49.426890 env[1143]: time="2024-04-12T18:43:49.426649919Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd 
ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Apr 12 18:43:49.426890 env[1143]: time="2024-04-12T18:43:49.426750915Z" level=info msg="Connect containerd service" Apr 12 18:43:49.431349 env[1143]: time="2024-04-12T18:43:49.427115579Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Apr 12 18:43:49.431349 env[1143]: time="2024-04-12T18:43:49.428149210Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 12 18:43:49.431349 env[1143]: time="2024-04-12T18:43:49.428223974Z" level=info msg="Start subscribing containerd event" Apr 12 18:43:49.431349 env[1143]: time="2024-04-12T18:43:49.428283617Z" level=info msg="Start recovering state" Apr 12 18:43:49.431349 env[1143]: time="2024-04-12T18:43:49.428358551Z" level=info msg="Start event monitor" Apr 12 18:43:49.431349 env[1143]: time="2024-04-12T18:43:49.428372759Z" level=info msg="Start snapshots syncer" Apr 12 18:43:49.431349 env[1143]: time="2024-04-12T18:43:49.428385571Z" level=info msg="Start cni network conf syncer for default" Apr 12 18:43:49.431349 env[1143]: time="2024-04-12T18:43:49.428397294Z" level=info msg="Start streaming server" Apr 12 18:43:49.434392 env[1143]: time="2024-04-12T18:43:49.434367466Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Apr 12 18:43:49.435264 env[1143]: time="2024-04-12T18:43:49.435240470Z" level=info msg=serving... address=/run/containerd/containerd.sock Apr 12 18:43:49.437110 env[1143]: time="2024-04-12T18:43:49.436761618Z" level=info msg="containerd successfully booted in 0.150635s" Apr 12 18:43:49.436867 systemd[1]: Started containerd.service. 
Apr 12 18:43:49.443756 tar[1133]: ./bandwidth Apr 12 18:43:49.458057 systemd-networkd[1023]: eth0: Gained IPv6LL Apr 12 18:43:49.464967 coreos-metadata[1100]: Apr 12 18:43:49.464 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/sshKeys: Attempt #1 Apr 12 18:43:49.486005 coreos-metadata[1100]: Apr 12 18:43:49.485 INFO Fetch failed with 404: resource not found Apr 12 18:43:49.486279 coreos-metadata[1100]: Apr 12 18:43:49.486 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/ssh-keys: Attempt #1 Apr 12 18:43:49.496239 coreos-metadata[1100]: Apr 12 18:43:49.496 INFO Fetch successful Apr 12 18:43:49.496465 coreos-metadata[1100]: Apr 12 18:43:49.496 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/block-project-ssh-keys: Attempt #1 Apr 12 18:43:49.503140 coreos-metadata[1100]: Apr 12 18:43:49.502 INFO Fetch failed with 404: resource not found Apr 12 18:43:49.503377 coreos-metadata[1100]: Apr 12 18:43:49.503 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/sshKeys: Attempt #1 Apr 12 18:43:49.504503 coreos-metadata[1100]: Apr 12 18:43:49.504 INFO Fetch failed with 404: resource not found Apr 12 18:43:49.504729 coreos-metadata[1100]: Apr 12 18:43:49.504 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/ssh-keys: Attempt #1 Apr 12 18:43:49.506090 coreos-metadata[1100]: Apr 12 18:43:49.505 INFO Fetch successful Apr 12 18:43:49.508783 unknown[1100]: wrote ssh authorized keys file for user: core Apr 12 18:43:49.555712 update-ssh-keys[1182]: Updated "/home/core/.ssh/authorized_keys" Apr 12 18:43:49.556848 systemd[1]: Finished coreos-metadata-sshkeys@core.service. Apr 12 18:43:49.624228 dbus-daemon[1101]: [system] Successfully activated service 'org.freedesktop.hostname1' Apr 12 18:43:49.624437 systemd[1]: Started systemd-hostnamed.service. 
Apr 12 18:43:49.625366 dbus-daemon[1101]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1166 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Apr 12 18:43:49.637542 systemd[1]: Starting polkit.service... Apr 12 18:43:49.706676 polkitd[1183]: Started polkitd version 121 Apr 12 18:43:49.731676 polkitd[1183]: Loading rules from directory /etc/polkit-1/rules.d Apr 12 18:43:49.731778 polkitd[1183]: Loading rules from directory /usr/share/polkit-1/rules.d Apr 12 18:43:49.735579 tar[1133]: ./ptp Apr 12 18:43:49.739985 polkitd[1183]: Finished loading, compiling and executing 2 rules Apr 12 18:43:49.740627 dbus-daemon[1101]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Apr 12 18:43:49.740855 systemd[1]: Started polkit.service. Apr 12 18:43:49.741235 polkitd[1183]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Apr 12 18:43:49.772499 systemd-hostnamed[1166]: Hostname set to (transient) Apr 12 18:43:49.775961 systemd-resolved[1072]: System hostname changed to 'ci-3510-3-3-5ab81259ac89653c3ce9.c.flatcar-212911.internal'. Apr 12 18:43:49.886799 tar[1133]: ./vlan Apr 12 18:43:50.066601 tar[1133]: ./host-device Apr 12 18:43:50.239414 tar[1133]: ./tuning Apr 12 18:43:50.374814 tar[1133]: ./vrf Apr 12 18:43:50.502663 tar[1133]: ./sbr Apr 12 18:43:50.640438 tar[1133]: ./tap Apr 12 18:43:50.779685 tar[1133]: ./dhcp Apr 12 18:43:50.983689 tar[1140]: linux-amd64/LICENSE Apr 12 18:43:50.984203 tar[1140]: linux-amd64/README.md Apr 12 18:43:50.995484 systemd[1]: Finished prepare-helm.service. Apr 12 18:43:51.059312 systemd[1]: Finished prepare-critools.service. 
Apr 12 18:43:51.147751 tar[1133]: ./static Apr 12 18:43:51.225654 tar[1133]: ./firewall Apr 12 18:43:51.347455 tar[1133]: ./macvlan Apr 12 18:43:51.447476 tar[1133]: ./dummy Apr 12 18:43:51.555232 tar[1133]: ./bridge Apr 12 18:43:51.673958 tar[1133]: ./ipvlan Apr 12 18:43:51.782014 tar[1133]: ./portmap Apr 12 18:43:51.860622 tar[1133]: ./host-local Apr 12 18:43:51.955623 systemd[1]: Finished prepare-cni-plugins.service. Apr 12 18:43:53.213146 locksmithd[1169]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Apr 12 18:43:55.482458 sshd_keygen[1135]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Apr 12 18:43:55.537526 systemd[1]: Finished sshd-keygen.service. Apr 12 18:43:55.548402 systemd[1]: Starting issuegen.service... Apr 12 18:43:55.557514 systemd[1]: issuegen.service: Deactivated successfully. Apr 12 18:43:55.557705 systemd[1]: Finished issuegen.service. Apr 12 18:43:55.567545 systemd[1]: Starting systemd-user-sessions.service... Apr 12 18:43:55.579805 systemd[1]: Finished systemd-user-sessions.service. Apr 12 18:43:55.590540 systemd[1]: Started getty@tty1.service. Apr 12 18:43:55.600462 systemd[1]: Started serial-getty@ttyS0.service. Apr 12 18:43:55.609404 systemd[1]: Reached target getty.target. Apr 12 18:43:55.765044 systemd[1]: var-lib-flatcar\x2doem\x2dgce.mount: Deactivated successfully. Apr 12 18:43:57.809947 kernel: loop0: detected capacity change from 0 to 2097152 Apr 12 18:43:57.829682 systemd-nspawn[1213]: Spawning container oem-gce on /var/lib/flatcar-oem-gce.img. Apr 12 18:43:57.829682 systemd-nspawn[1213]: Press ^] three times within 1s to kill container. Apr 12 18:43:57.844935 kernel: EXT4-fs (loop0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Apr 12 18:43:57.933821 systemd[1]: Started oem-gce.service. Apr 12 18:43:57.942553 systemd[1]: Reached target multi-user.target. Apr 12 18:43:57.953169 systemd[1]: Starting systemd-update-utmp-runlevel.service... 
Apr 12 18:43:57.966364 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Apr 12 18:43:57.966606 systemd[1]: Finished systemd-update-utmp-runlevel.service. Apr 12 18:43:57.976397 systemd[1]: Startup finished in 1.010s (kernel) + 21.966s (initrd) + 16.206s (userspace) = 39.183s. Apr 12 18:43:58.042345 systemd-nspawn[1213]: + '[' -e /etc/default/instance_configs.cfg.template ']' Apr 12 18:43:58.042345 systemd-nspawn[1213]: + echo -e '[InstanceSetup]\nset_host_keys = false' Apr 12 18:43:58.042610 systemd-nspawn[1213]: + /usr/bin/google_instance_setup Apr 12 18:43:58.588802 systemd[1]: Created slice system-sshd.slice. Apr 12 18:43:58.591071 systemd[1]: Started sshd@0-10.128.0.15:22-139.178.89.65:41792.service. Apr 12 18:43:58.754573 instance-setup[1219]: INFO Running google_set_multiqueue. Apr 12 18:43:58.772819 instance-setup[1219]: INFO Set channels for eth0 to 2. Apr 12 18:43:58.777092 instance-setup[1219]: INFO Setting /proc/irq/31/smp_affinity_list to 0 for device virtio1. Apr 12 18:43:58.778643 instance-setup[1219]: INFO /proc/irq/31/smp_affinity_list: real affinity 0 Apr 12 18:43:58.779234 instance-setup[1219]: INFO Setting /proc/irq/32/smp_affinity_list to 0 for device virtio1. Apr 12 18:43:58.781037 instance-setup[1219]: INFO /proc/irq/32/smp_affinity_list: real affinity 0 Apr 12 18:43:58.781641 instance-setup[1219]: INFO Setting /proc/irq/33/smp_affinity_list to 1 for device virtio1. Apr 12 18:43:58.783090 instance-setup[1219]: INFO /proc/irq/33/smp_affinity_list: real affinity 1 Apr 12 18:43:58.783517 instance-setup[1219]: INFO Setting /proc/irq/34/smp_affinity_list to 1 for device virtio1. 
Apr 12 18:43:58.785138 instance-setup[1219]: INFO /proc/irq/34/smp_affinity_list: real affinity 1 Apr 12 18:43:58.797408 instance-setup[1219]: INFO Queue 0 XPS=1 for /sys/class/net/eth0/queues/tx-0/xps_cpus Apr 12 18:43:58.797816 instance-setup[1219]: INFO Queue 1 XPS=2 for /sys/class/net/eth0/queues/tx-1/xps_cpus Apr 12 18:43:58.847228 systemd-nspawn[1213]: + /usr/bin/google_metadata_script_runner --script-type startup Apr 12 18:43:58.964270 sshd[1223]: Accepted publickey for core from 139.178.89.65 port 41792 ssh2: RSA SHA256:XKFMBFadJXSMDH/AGu0Wp1o8GIJqXXW9iA48JbQvGEs Apr 12 18:43:58.967426 sshd[1223]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 18:43:58.988019 systemd[1]: Created slice user-500.slice. Apr 12 18:43:58.990062 systemd[1]: Starting user-runtime-dir@500.service... Apr 12 18:43:59.006888 systemd-logind[1124]: New session 1 of user core. Apr 12 18:43:59.014127 systemd[1]: Finished user-runtime-dir@500.service. Apr 12 18:43:59.016523 systemd[1]: Starting user@500.service... Apr 12 18:43:59.035268 (systemd)[1255]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Apr 12 18:43:59.195311 systemd[1255]: Queued start job for default target default.target. Apr 12 18:43:59.197072 systemd[1255]: Reached target paths.target. Apr 12 18:43:59.197118 systemd[1255]: Reached target sockets.target. Apr 12 18:43:59.197142 systemd[1255]: Reached target timers.target. Apr 12 18:43:59.197164 systemd[1255]: Reached target basic.target. Apr 12 18:43:59.197335 systemd[1]: Started user@500.service. Apr 12 18:43:59.198824 systemd[1]: Started session-1.scope. Apr 12 18:43:59.201153 systemd[1255]: Reached target default.target. Apr 12 18:43:59.201485 systemd[1255]: Startup finished in 154ms. Apr 12 18:43:59.265023 startup-script[1253]: INFO Starting startup scripts. Apr 12 18:43:59.277881 startup-script[1253]: INFO No startup scripts found in metadata. 
Apr 12 18:43:59.278187 startup-script[1253]: INFO Finished running startup scripts. Apr 12 18:43:59.310762 systemd-nspawn[1213]: + trap 'stopping=1 ; kill "${daemon_pids[@]}" || :' SIGTERM Apr 12 18:43:59.310762 systemd-nspawn[1213]: + daemon_pids=() Apr 12 18:43:59.311095 systemd-nspawn[1213]: + for d in accounts clock_skew network Apr 12 18:43:59.311095 systemd-nspawn[1213]: + daemon_pids+=($!) Apr 12 18:43:59.311233 systemd-nspawn[1213]: + for d in accounts clock_skew network Apr 12 18:43:59.311377 systemd-nspawn[1213]: + daemon_pids+=($!) Apr 12 18:43:59.311471 systemd-nspawn[1213]: + for d in accounts clock_skew network Apr 12 18:43:59.311665 systemd-nspawn[1213]: + daemon_pids+=($!) Apr 12 18:43:59.311753 systemd-nspawn[1213]: + NOTIFY_SOCKET=/run/systemd/notify Apr 12 18:43:59.311753 systemd-nspawn[1213]: + /usr/bin/systemd-notify --ready Apr 12 18:43:59.312166 systemd-nspawn[1213]: + /usr/bin/google_accounts_daemon Apr 12 18:43:59.312579 systemd-nspawn[1213]: + /usr/bin/google_clock_skew_daemon Apr 12 18:43:59.312992 systemd-nspawn[1213]: + /usr/bin/google_network_daemon Apr 12 18:43:59.392969 systemd-nspawn[1213]: + wait -n 36 37 38 Apr 12 18:43:59.459451 systemd[1]: Started sshd@1-10.128.0.15:22-139.178.89.65:41808.service. Apr 12 18:43:59.827085 sshd[1270]: Accepted publickey for core from 139.178.89.65 port 41808 ssh2: RSA SHA256:XKFMBFadJXSMDH/AGu0Wp1o8GIJqXXW9iA48JbQvGEs Apr 12 18:43:59.828686 sshd[1270]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 18:43:59.837082 systemd[1]: Started session-2.scope. Apr 12 18:43:59.838405 systemd-logind[1124]: New session 2 of user core. Apr 12 18:44:00.087841 sshd[1270]: pam_unix(sshd:session): session closed for user core Apr 12 18:44:00.092064 systemd[1]: sshd@1-10.128.0.15:22-139.178.89.65:41808.service: Deactivated successfully. Apr 12 18:44:00.093252 systemd[1]: session-2.scope: Deactivated successfully. Apr 12 18:44:00.094167 systemd-logind[1124]: Session 2 logged out. 
Waiting for processes to exit. Apr 12 18:44:00.095634 systemd-logind[1124]: Removed session 2. Apr 12 18:44:00.128891 google-networking[1266]: INFO Starting Google Networking daemon. Apr 12 18:44:00.140556 systemd[1]: Started sshd@2-10.128.0.15:22-139.178.89.65:41822.service. Apr 12 18:44:00.178866 groupadd[1283]: group added to /etc/group: name=google-sudoers, GID=1000 Apr 12 18:44:00.182926 groupadd[1283]: group added to /etc/gshadow: name=google-sudoers Apr 12 18:44:00.187654 groupadd[1283]: new group: name=google-sudoers, GID=1000 Apr 12 18:44:00.196411 google-clock-skew[1265]: INFO Starting Google Clock Skew daemon. Apr 12 18:44:00.203862 google-accounts[1264]: INFO Starting Google Accounts daemon. Apr 12 18:44:00.212259 google-clock-skew[1265]: INFO Clock drift token has changed: 0. Apr 12 18:44:00.216840 systemd-nspawn[1213]: hwclock: Cannot access the Hardware Clock via any known method. Apr 12 18:44:00.216840 systemd-nspawn[1213]: hwclock: Use the --verbose option to see the details of our search for an access method. Apr 12 18:44:00.217827 google-clock-skew[1265]: WARNING Failed to sync system time with hardware clock. Apr 12 18:44:00.233849 google-accounts[1264]: WARNING OS Login not installed. Apr 12 18:44:00.235042 google-accounts[1264]: INFO Creating a new user account for 0. Apr 12 18:44:00.240291 systemd-nspawn[1213]: useradd: invalid user name '0': use --badname to ignore Apr 12 18:44:00.240953 google-accounts[1264]: WARNING Could not create user 0. Command '['useradd', '-m', '-s', '/bin/bash', '-p', '*', '0']' returned non-zero exit status 3.. Apr 12 18:44:00.492109 sshd[1282]: Accepted publickey for core from 139.178.89.65 port 41822 ssh2: RSA SHA256:XKFMBFadJXSMDH/AGu0Wp1o8GIJqXXW9iA48JbQvGEs Apr 12 18:44:00.493227 sshd[1282]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 18:44:00.499384 systemd[1]: Started session-3.scope. Apr 12 18:44:00.500084 systemd-logind[1124]: New session 3 of user core. 
Apr 12 18:44:00.735362 sshd[1282]: pam_unix(sshd:session): session closed for user core Apr 12 18:44:00.739575 systemd[1]: sshd@2-10.128.0.15:22-139.178.89.65:41822.service: Deactivated successfully. Apr 12 18:44:00.740656 systemd[1]: session-3.scope: Deactivated successfully. Apr 12 18:44:00.741591 systemd-logind[1124]: Session 3 logged out. Waiting for processes to exit. Apr 12 18:44:00.743073 systemd-logind[1124]: Removed session 3. Apr 12 18:44:00.789093 systemd[1]: Started sshd@3-10.128.0.15:22-139.178.89.65:41828.service. Apr 12 18:44:01.127487 sshd[1300]: Accepted publickey for core from 139.178.89.65 port 41828 ssh2: RSA SHA256:XKFMBFadJXSMDH/AGu0Wp1o8GIJqXXW9iA48JbQvGEs Apr 12 18:44:01.129465 sshd[1300]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 18:44:01.135986 systemd[1]: Started session-4.scope. Apr 12 18:44:01.136946 systemd-logind[1124]: New session 4 of user core. Apr 12 18:44:01.375187 sshd[1300]: pam_unix(sshd:session): session closed for user core Apr 12 18:44:01.379371 systemd[1]: sshd@3-10.128.0.15:22-139.178.89.65:41828.service: Deactivated successfully. Apr 12 18:44:01.380474 systemd[1]: session-4.scope: Deactivated successfully. Apr 12 18:44:01.381338 systemd-logind[1124]: Session 4 logged out. Waiting for processes to exit. Apr 12 18:44:01.382574 systemd-logind[1124]: Removed session 4. Apr 12 18:44:01.430671 systemd[1]: Started sshd@4-10.128.0.15:22-139.178.89.65:41840.service. Apr 12 18:44:01.778015 sshd[1306]: Accepted publickey for core from 139.178.89.65 port 41840 ssh2: RSA SHA256:XKFMBFadJXSMDH/AGu0Wp1o8GIJqXXW9iA48JbQvGEs Apr 12 18:44:01.779720 sshd[1306]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 18:44:01.787424 systemd[1]: Started session-5.scope. Apr 12 18:44:01.788116 systemd-logind[1124]: New session 5 of user core. 
Apr 12 18:44:02.006830 sudo[1309]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Apr 12 18:44:02.007278 sudo[1309]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Apr 12 18:44:02.816100 systemd[1]: Starting systemd-networkd-wait-online.service... Apr 12 18:44:02.825676 systemd[1]: Finished systemd-networkd-wait-online.service. Apr 12 18:44:02.826244 systemd[1]: Reached target network-online.target. Apr 12 18:44:02.828442 systemd[1]: Starting docker.service... Apr 12 18:44:02.877313 env[1325]: time="2024-04-12T18:44:02.877234115Z" level=info msg="Starting up" Apr 12 18:44:02.878979 env[1325]: time="2024-04-12T18:44:02.878938526Z" level=info msg="parsed scheme: \"unix\"" module=grpc Apr 12 18:44:02.879150 env[1325]: time="2024-04-12T18:44:02.879131616Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Apr 12 18:44:02.879240 env[1325]: time="2024-04-12T18:44:02.879222304Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Apr 12 18:44:02.879306 env[1325]: time="2024-04-12T18:44:02.879292616Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Apr 12 18:44:02.881395 env[1325]: time="2024-04-12T18:44:02.881335920Z" level=info msg="parsed scheme: \"unix\"" module=grpc Apr 12 18:44:02.881395 env[1325]: time="2024-04-12T18:44:02.881363006Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Apr 12 18:44:02.881395 env[1325]: time="2024-04-12T18:44:02.881386987Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Apr 12 18:44:02.881395 env[1325]: time="2024-04-12T18:44:02.881401102Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Apr 12 18:44:02.889956 systemd[1]: 
var-lib-docker-check\x2doverlayfs\x2dsupport351222115-merged.mount: Deactivated successfully. Apr 12 18:44:02.951780 env[1325]: time="2024-04-12T18:44:02.951724332Z" level=info msg="Loading containers: start." Apr 12 18:44:03.126930 kernel: Initializing XFRM netlink socket Apr 12 18:44:03.172704 env[1325]: time="2024-04-12T18:44:03.172619043Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Apr 12 18:44:03.260275 systemd-networkd[1023]: docker0: Link UP Apr 12 18:44:03.276739 env[1325]: time="2024-04-12T18:44:03.276676325Z" level=info msg="Loading containers: done." Apr 12 18:44:03.298796 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1206314205-merged.mount: Deactivated successfully. Apr 12 18:44:03.301338 env[1325]: time="2024-04-12T18:44:03.301283939Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Apr 12 18:44:03.301602 env[1325]: time="2024-04-12T18:44:03.301562809Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Apr 12 18:44:03.301756 env[1325]: time="2024-04-12T18:44:03.301712784Z" level=info msg="Daemon has completed initialization" Apr 12 18:44:03.327518 systemd[1]: Started docker.service. Apr 12 18:44:03.342443 env[1325]: time="2024-04-12T18:44:03.342138955Z" level=info msg="API listen on /run/docker.sock" Apr 12 18:44:03.371390 systemd[1]: Reloading. 
Apr 12 18:44:03.485397 /usr/lib/systemd/system-generators/torcx-generator[1464]: time="2024-04-12T18:44:03Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.3 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.3 /var/lib/torcx/store]" Apr 12 18:44:03.486994 /usr/lib/systemd/system-generators/torcx-generator[1464]: time="2024-04-12T18:44:03Z" level=info msg="torcx already run" Apr 12 18:44:03.580693 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Apr 12 18:44:03.580726 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Apr 12 18:44:03.606625 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 12 18:44:03.746485 systemd[1]: Started kubelet.service. Apr 12 18:44:03.837405 kubelet[1506]: E0412 18:44:03.837345 1506 run.go:74] "command failed" err="failed to load kubelet config file, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory, path: /var/lib/kubelet/config.yaml" Apr 12 18:44:03.839758 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 12 18:44:03.840011 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Apr 12 18:44:04.422461 env[1143]: time="2024-04-12T18:44:04.422094800Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.27.12\"" Apr 12 18:44:04.959695 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2091349473.mount: Deactivated successfully. Apr 12 18:44:07.087984 env[1143]: time="2024-04-12T18:44:07.087897452Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.27.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:44:07.091406 env[1143]: time="2024-04-12T18:44:07.091355856Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:394383b7bc9634d67978b735802d4039f702efd9e5cc2499eac1a8ad78184809,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:44:07.093968 env[1143]: time="2024-04-12T18:44:07.093921415Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.27.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:44:07.097249 env[1143]: time="2024-04-12T18:44:07.097168542Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:cf0c29f585316888225cf254949988bdbedc7ba6238bc9a24bf6f0c508c42b6c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:44:07.098505 env[1143]: time="2024-04-12T18:44:07.098445523Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.27.12\" returns image reference \"sha256:394383b7bc9634d67978b735802d4039f702efd9e5cc2499eac1a8ad78184809\"" Apr 12 18:44:07.112988 env[1143]: time="2024-04-12T18:44:07.112937707Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.27.12\"" Apr 12 18:44:09.223042 env[1143]: time="2024-04-12T18:44:09.222971704Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.27.12,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Apr 12 18:44:09.226192 env[1143]: time="2024-04-12T18:44:09.226139845Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b68567f81c92edc7c53449e3958d8cf5ad474ac00bbbdfcd2bd47558a9bba5d7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:44:09.228812 env[1143]: time="2024-04-12T18:44:09.228759440Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.27.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:44:09.231290 env[1143]: time="2024-04-12T18:44:09.231251381Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:6caa3a4278e87169371d031861e49db21742bcbd8df650d7fe519a1a7f6764af,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:44:09.232240 env[1143]: time="2024-04-12T18:44:09.232185966Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.27.12\" returns image reference \"sha256:b68567f81c92edc7c53449e3958d8cf5ad474ac00bbbdfcd2bd47558a9bba5d7\"" Apr 12 18:44:09.246840 env[1143]: time="2024-04-12T18:44:09.246793173Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.27.12\"" Apr 12 18:44:10.639448 env[1143]: time="2024-04-12T18:44:10.639382611Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.27.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:44:10.643474 env[1143]: time="2024-04-12T18:44:10.643403261Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5fab684ed62aaef7130a9e5533c28699a5be380abc7cdbcd32502cca8b56e833,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:44:10.646361 env[1143]: time="2024-04-12T18:44:10.646296954Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.27.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:44:10.648975 env[1143]: time="2024-04-12T18:44:10.648929789Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:b8bb7b17a4f915419575ceb885e128d0bb5ea8e67cb88dbde257988b770a4dce,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:44:10.651492 env[1143]: time="2024-04-12T18:44:10.651442023Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.27.12\" returns image reference \"sha256:5fab684ed62aaef7130a9e5533c28699a5be380abc7cdbcd32502cca8b56e833\"" Apr 12 18:44:10.667927 env[1143]: time="2024-04-12T18:44:10.667853160Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.27.12\"" Apr 12 18:44:11.871539 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2543739252.mount: Deactivated successfully. Apr 12 18:44:12.520541 env[1143]: time="2024-04-12T18:44:12.520465916Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.27.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:44:12.523328 env[1143]: time="2024-04-12T18:44:12.523276047Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2b5590cbba38a0f4f32cbe39a2d3a1a1348612e7550f8b68af937ba5b6e9ba3d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:44:12.525481 env[1143]: time="2024-04-12T18:44:12.525438628Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.27.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:44:12.527703 env[1143]: time="2024-04-12T18:44:12.527661853Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:b0539f35b586abc54ca7660f9bb8a539d010b9e07d20e9e3d529cf0ca35d4ddf,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Apr 12 18:44:12.528463 env[1143]: time="2024-04-12T18:44:12.528411954Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.27.12\" returns image reference \"sha256:2b5590cbba38a0f4f32cbe39a2d3a1a1348612e7550f8b68af937ba5b6e9ba3d\"" Apr 12 18:44:12.543719 env[1143]: time="2024-04-12T18:44:12.543642267Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Apr 12 18:44:12.924262 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2694804021.mount: Deactivated successfully. Apr 12 18:44:12.934261 env[1143]: time="2024-04-12T18:44:12.934179772Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:44:12.937020 env[1143]: time="2024-04-12T18:44:12.936959745Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:44:12.939220 env[1143]: time="2024-04-12T18:44:12.939173808Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:44:12.941791 env[1143]: time="2024-04-12T18:44:12.941747146Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:44:12.942562 env[1143]: time="2024-04-12T18:44:12.942508479Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Apr 12 18:44:12.957171 env[1143]: time="2024-04-12T18:44:12.957127550Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.7-0\"" Apr 12 18:44:13.668730 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount3976171236.mount: Deactivated successfully. Apr 12 18:44:14.078974 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Apr 12 18:44:14.079323 systemd[1]: Stopped kubelet.service. Apr 12 18:44:14.081801 systemd[1]: Started kubelet.service. Apr 12 18:44:14.168370 kubelet[1551]: E0412 18:44:14.168298 1551 run.go:74] "command failed" err="failed to load kubelet config file, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory, path: /var/lib/kubelet/config.yaml" Apr 12 18:44:14.173948 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 12 18:44:14.174169 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 12 18:44:18.290101 env[1143]: time="2024-04-12T18:44:18.290017925Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.7-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:44:18.293232 env[1143]: time="2024-04-12T18:44:18.293177226Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:44:18.295868 env[1143]: time="2024-04-12T18:44:18.295820925Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.7-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:44:18.298531 env[1143]: time="2024-04-12T18:44:18.298490359Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:44:18.299410 env[1143]: 
time="2024-04-12T18:44:18.299354548Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.7-0\" returns image reference \"sha256:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681\"" Apr 12 18:44:18.313713 env[1143]: time="2024-04-12T18:44:18.313643068Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\"" Apr 12 18:44:18.710854 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3060123041.mount: Deactivated successfully. Apr 12 18:44:19.479430 env[1143]: time="2024-04-12T18:44:19.479353960Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.10.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:44:19.482928 env[1143]: time="2024-04-12T18:44:19.482865597Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:44:19.490477 env[1143]: time="2024-04-12T18:44:19.490410382Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.10.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:44:19.491344 env[1143]: time="2024-04-12T18:44:19.491300966Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:44:19.492790 env[1143]: time="2024-04-12T18:44:19.492749394Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\" returns image reference \"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc\"" Apr 12 18:44:19.805719 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Apr 12 18:44:22.875207 systemd[1]: Stopped kubelet.service. Apr 12 18:44:22.897523 systemd[1]: Reloading. 
Apr 12 18:44:23.014850 /usr/lib/systemd/system-generators/torcx-generator[1651]: time="2024-04-12T18:44:23Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.3 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.3 /var/lib/torcx/store]" Apr 12 18:44:23.015472 /usr/lib/systemd/system-generators/torcx-generator[1651]: time="2024-04-12T18:44:23Z" level=info msg="torcx already run" Apr 12 18:44:23.100962 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Apr 12 18:44:23.100991 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Apr 12 18:44:23.124802 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 12 18:44:23.250247 systemd[1]: Started kubelet.service. Apr 12 18:44:23.314656 kubelet[1692]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 12 18:44:23.315140 kubelet[1692]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Apr 12 18:44:23.315140 kubelet[1692]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Apr 12 18:44:23.315294 kubelet[1692]: I0412 18:44:23.315213 1692 server.go:199] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 12 18:44:23.706359 kubelet[1692]: I0412 18:44:23.706301 1692 server.go:415] "Kubelet version" kubeletVersion="v1.27.2" Apr 12 18:44:23.706359 kubelet[1692]: I0412 18:44:23.706342 1692 server.go:417] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 12 18:44:23.706693 kubelet[1692]: I0412 18:44:23.706652 1692 server.go:837] "Client rotation is on, will bootstrap in background" Apr 12 18:44:23.712264 kubelet[1692]: E0412 18:44:23.712229 1692 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.128.0.15:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.128.0.15:6443: connect: connection refused Apr 12 18:44:23.712563 kubelet[1692]: I0412 18:44:23.712540 1692 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 12 18:44:23.717860 kubelet[1692]: I0412 18:44:23.717814 1692 server.go:662] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Apr 12 18:44:23.718260 kubelet[1692]: I0412 18:44:23.718220 1692 container_manager_linux.go:266] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 12 18:44:23.718369 kubelet[1692]: I0412 18:44:23.718335 1692 container_manager_linux.go:271] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] TopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] PodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms TopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]} Apr 12 18:44:23.718565 kubelet[1692]: I0412 18:44:23.718373 1692 topology_manager.go:136] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Apr 12 18:44:23.718565 kubelet[1692]: I0412 18:44:23.718398 1692 container_manager_linux.go:302] "Creating device plugin manager" Apr 12 18:44:23.718565 kubelet[1692]: I0412 18:44:23.718544 1692 state_mem.go:36] "Initialized new in-memory state store" Apr 12 
18:44:23.722162 kubelet[1692]: I0412 18:44:23.722106 1692 kubelet.go:405] "Attempting to sync node with API server" Apr 12 18:44:23.722162 kubelet[1692]: I0412 18:44:23.722139 1692 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 12 18:44:23.722162 kubelet[1692]: I0412 18:44:23.722166 1692 kubelet.go:309] "Adding apiserver pod source" Apr 12 18:44:23.722413 kubelet[1692]: I0412 18:44:23.722187 1692 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 12 18:44:23.726545 kubelet[1692]: W0412 18:44:23.726466 1692 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.128.0.15:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.128.0.15:6443: connect: connection refused Apr 12 18:44:23.726545 kubelet[1692]: E0412 18:44:23.726528 1692 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.128.0.15:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.128.0.15:6443: connect: connection refused Apr 12 18:44:23.727055 kubelet[1692]: I0412 18:44:23.727016 1692 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Apr 12 18:44:23.730145 kubelet[1692]: W0412 18:44:23.730087 1692 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.128.0.15:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510-3-3-5ab81259ac89653c3ce9.c.flatcar-212911.internal&limit=500&resourceVersion=0": dial tcp 10.128.0.15:6443: connect: connection refused Apr 12 18:44:23.730329 kubelet[1692]: E0412 18:44:23.730300 1692 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get 
"https://10.128.0.15:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510-3-3-5ab81259ac89653c3ce9.c.flatcar-212911.internal&limit=500&resourceVersion=0": dial tcp 10.128.0.15:6443: connect: connection refused Apr 12 18:44:23.735416 kubelet[1692]: W0412 18:44:23.735387 1692 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Apr 12 18:44:23.736235 kubelet[1692]: I0412 18:44:23.736213 1692 server.go:1168] "Started kubelet" Apr 12 18:44:23.736503 kubelet[1692]: I0412 18:44:23.736480 1692 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Apr 12 18:44:23.736886 kubelet[1692]: I0412 18:44:23.736866 1692 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Apr 12 18:44:23.737505 kubelet[1692]: I0412 18:44:23.737440 1692 server.go:461] "Adding debug handlers to kubelet server" Apr 12 18:44:23.737863 kubelet[1692]: E0412 18:44:23.737723 1692 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510-3-3-5ab81259ac89653c3ce9.c.flatcar-212911.internal.17c59ca486e6c492", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510-3-3-5ab81259ac89653c3ce9.c.flatcar-212911.internal", UID:"ci-3510-3-3-5ab81259ac89653c3ce9.c.flatcar-212911.internal", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510-3-3-5ab81259ac89653c3ce9.c.flatcar-212911.internal"}, 
FirstTimestamp:time.Date(2024, time.April, 12, 18, 44, 23, 736181906, time.Local), LastTimestamp:time.Date(2024, time.April, 12, 18, 44, 23, 736181906, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Post "https://10.128.0.15:6443/api/v1/namespaces/default/events": dial tcp 10.128.0.15:6443: connect: connection refused'(may retry after sleeping) Apr 12 18:44:23.738981 kubelet[1692]: E0412 18:44:23.738960 1692 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Apr 12 18:44:23.739150 kubelet[1692]: E0412 18:44:23.739132 1692 kubelet.go:1400] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 12 18:44:23.750222 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). 
Apr 12 18:44:23.750435 kubelet[1692]: I0412 18:44:23.750408 1692 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 12 18:44:23.754443 kubelet[1692]: I0412 18:44:23.754414 1692 volume_manager.go:284] "Starting Kubelet Volume Manager" Apr 12 18:44:23.754772 kubelet[1692]: I0412 18:44:23.754751 1692 desired_state_of_world_populator.go:145] "Desired state populator starts to run" Apr 12 18:44:23.755491 kubelet[1692]: W0412 18:44:23.755431 1692 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.128.0.15:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.15:6443: connect: connection refused Apr 12 18:44:23.755607 kubelet[1692]: E0412 18:44:23.755499 1692 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.128.0.15:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.15:6443: connect: connection refused Apr 12 18:44:23.756424 kubelet[1692]: E0412 18:44:23.756401 1692 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.15:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510-3-3-5ab81259ac89653c3ce9.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.15:6443: connect: connection refused" interval="200ms" Apr 12 18:44:23.784318 kubelet[1692]: I0412 18:44:23.784280 1692 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4 Apr 12 18:44:23.787856 kubelet[1692]: I0412 18:44:23.787824 1692 kubelet_network_linux.go:63] "Initialized iptables rules." 
protocol=IPv6 Apr 12 18:44:23.788119 kubelet[1692]: I0412 18:44:23.788090 1692 status_manager.go:207] "Starting to sync pod status with apiserver" Apr 12 18:44:23.788256 kubelet[1692]: I0412 18:44:23.788241 1692 kubelet.go:2257] "Starting kubelet main sync loop" Apr 12 18:44:23.788439 kubelet[1692]: E0412 18:44:23.788424 1692 kubelet.go:2281] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 12 18:44:23.800547 kubelet[1692]: W0412 18:44:23.800481 1692 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.128.0.15:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.15:6443: connect: connection refused Apr 12 18:44:23.800547 kubelet[1692]: E0412 18:44:23.800525 1692 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.128.0.15:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.15:6443: connect: connection refused Apr 12 18:44:23.808494 kubelet[1692]: I0412 18:44:23.808459 1692 cpu_manager.go:214] "Starting CPU manager" policy="none" Apr 12 18:44:23.808695 kubelet[1692]: I0412 18:44:23.808597 1692 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Apr 12 18:44:23.808695 kubelet[1692]: I0412 18:44:23.808626 1692 state_mem.go:36] "Initialized new in-memory state store" Apr 12 18:44:23.811978 kubelet[1692]: I0412 18:44:23.811930 1692 policy_none.go:49] "None policy: Start" Apr 12 18:44:23.812805 kubelet[1692]: I0412 18:44:23.812755 1692 memory_manager.go:169] "Starting memorymanager" policy="None" Apr 12 18:44:23.812805 kubelet[1692]: I0412 18:44:23.812806 1692 state_mem.go:35] "Initializing new in-memory state store" Apr 12 18:44:23.820548 systemd[1]: Created slice kubepods.slice. 
Apr 12 18:44:23.827452 systemd[1]: Created slice kubepods-burstable.slice. Apr 12 18:44:23.831618 systemd[1]: Created slice kubepods-besteffort.slice. Apr 12 18:44:23.839294 kubelet[1692]: I0412 18:44:23.839007 1692 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Apr 12 18:44:23.839603 kubelet[1692]: I0412 18:44:23.839584 1692 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 12 18:44:23.841422 kubelet[1692]: E0412 18:44:23.841371 1692 eviction_manager.go:262] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3510-3-3-5ab81259ac89653c3ce9.c.flatcar-212911.internal\" not found" Apr 12 18:44:23.862074 kubelet[1692]: I0412 18:44:23.862041 1692 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510-3-3-5ab81259ac89653c3ce9.c.flatcar-212911.internal" Apr 12 18:44:23.862521 kubelet[1692]: E0412 18:44:23.862490 1692 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.128.0.15:6443/api/v1/nodes\": dial tcp 10.128.0.15:6443: connect: connection refused" node="ci-3510-3-3-5ab81259ac89653c3ce9.c.flatcar-212911.internal" Apr 12 18:44:23.888729 kubelet[1692]: I0412 18:44:23.888662 1692 topology_manager.go:212] "Topology Admit Handler" Apr 12 18:44:23.895091 kubelet[1692]: I0412 18:44:23.895035 1692 topology_manager.go:212] "Topology Admit Handler" Apr 12 18:44:23.900589 kubelet[1692]: I0412 18:44:23.900556 1692 topology_manager.go:212] "Topology Admit Handler" Apr 12 18:44:23.907359 systemd[1]: Created slice kubepods-burstable-podde7aa386a8cb8fce85552433f3dc239c.slice. Apr 12 18:44:23.922571 systemd[1]: Created slice kubepods-burstable-podf8b08ad68adca2e39f6282764ad63d8b.slice. Apr 12 18:44:23.933696 systemd[1]: Created slice kubepods-burstable-pod85098b4f417d20cfda1c81c8ce1a9802.slice. 
Apr 12 18:44:23.960173 kubelet[1692]: E0412 18:44:23.960034 1692 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.15:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510-3-3-5ab81259ac89653c3ce9.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.15:6443: connect: connection refused" interval="400ms" Apr 12 18:44:24.055861 kubelet[1692]: I0412 18:44:24.055782 1692 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f8b08ad68adca2e39f6282764ad63d8b-flexvolume-dir\") pod \"kube-controller-manager-ci-3510-3-3-5ab81259ac89653c3ce9.c.flatcar-212911.internal\" (UID: \"f8b08ad68adca2e39f6282764ad63d8b\") " pod="kube-system/kube-controller-manager-ci-3510-3-3-5ab81259ac89653c3ce9.c.flatcar-212911.internal" Apr 12 18:44:24.055861 kubelet[1692]: I0412 18:44:24.055854 1692 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f8b08ad68adca2e39f6282764ad63d8b-kubeconfig\") pod \"kube-controller-manager-ci-3510-3-3-5ab81259ac89653c3ce9.c.flatcar-212911.internal\" (UID: \"f8b08ad68adca2e39f6282764ad63d8b\") " pod="kube-system/kube-controller-manager-ci-3510-3-3-5ab81259ac89653c3ce9.c.flatcar-212911.internal" Apr 12 18:44:24.056220 kubelet[1692]: I0412 18:44:24.055892 1692 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f8b08ad68adca2e39f6282764ad63d8b-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510-3-3-5ab81259ac89653c3ce9.c.flatcar-212911.internal\" (UID: \"f8b08ad68adca2e39f6282764ad63d8b\") " pod="kube-system/kube-controller-manager-ci-3510-3-3-5ab81259ac89653c3ce9.c.flatcar-212911.internal" Apr 12 18:44:24.056220 kubelet[1692]: I0412 18:44:24.055941 1692 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/de7aa386a8cb8fce85552433f3dc239c-ca-certs\") pod \"kube-apiserver-ci-3510-3-3-5ab81259ac89653c3ce9.c.flatcar-212911.internal\" (UID: \"de7aa386a8cb8fce85552433f3dc239c\") " pod="kube-system/kube-apiserver-ci-3510-3-3-5ab81259ac89653c3ce9.c.flatcar-212911.internal" Apr 12 18:44:24.056220 kubelet[1692]: I0412 18:44:24.055989 1692 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/de7aa386a8cb8fce85552433f3dc239c-k8s-certs\") pod \"kube-apiserver-ci-3510-3-3-5ab81259ac89653c3ce9.c.flatcar-212911.internal\" (UID: \"de7aa386a8cb8fce85552433f3dc239c\") " pod="kube-system/kube-apiserver-ci-3510-3-3-5ab81259ac89653c3ce9.c.flatcar-212911.internal" Apr 12 18:44:24.056220 kubelet[1692]: I0412 18:44:24.056033 1692 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f8b08ad68adca2e39f6282764ad63d8b-ca-certs\") pod \"kube-controller-manager-ci-3510-3-3-5ab81259ac89653c3ce9.c.flatcar-212911.internal\" (UID: \"f8b08ad68adca2e39f6282764ad63d8b\") " pod="kube-system/kube-controller-manager-ci-3510-3-3-5ab81259ac89653c3ce9.c.flatcar-212911.internal" Apr 12 18:44:24.056421 kubelet[1692]: I0412 18:44:24.056069 1692 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f8b08ad68adca2e39f6282764ad63d8b-k8s-certs\") pod \"kube-controller-manager-ci-3510-3-3-5ab81259ac89653c3ce9.c.flatcar-212911.internal\" (UID: \"f8b08ad68adca2e39f6282764ad63d8b\") " pod="kube-system/kube-controller-manager-ci-3510-3-3-5ab81259ac89653c3ce9.c.flatcar-212911.internal" Apr 12 18:44:24.056421 kubelet[1692]: I0412 18:44:24.056103 1692 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/85098b4f417d20cfda1c81c8ce1a9802-kubeconfig\") pod \"kube-scheduler-ci-3510-3-3-5ab81259ac89653c3ce9.c.flatcar-212911.internal\" (UID: \"85098b4f417d20cfda1c81c8ce1a9802\") " pod="kube-system/kube-scheduler-ci-3510-3-3-5ab81259ac89653c3ce9.c.flatcar-212911.internal" Apr 12 18:44:24.056421 kubelet[1692]: I0412 18:44:24.056141 1692 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/de7aa386a8cb8fce85552433f3dc239c-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510-3-3-5ab81259ac89653c3ce9.c.flatcar-212911.internal\" (UID: \"de7aa386a8cb8fce85552433f3dc239c\") " pod="kube-system/kube-apiserver-ci-3510-3-3-5ab81259ac89653c3ce9.c.flatcar-212911.internal" Apr 12 18:44:24.076145 kubelet[1692]: I0412 18:44:24.076105 1692 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510-3-3-5ab81259ac89653c3ce9.c.flatcar-212911.internal" Apr 12 18:44:24.076934 kubelet[1692]: E0412 18:44:24.076865 1692 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.128.0.15:6443/api/v1/nodes\": dial tcp 10.128.0.15:6443: connect: connection refused" node="ci-3510-3-3-5ab81259ac89653c3ce9.c.flatcar-212911.internal" Apr 12 18:44:24.220831 env[1143]: time="2024-04-12T18:44:24.220048362Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510-3-3-5ab81259ac89653c3ce9.c.flatcar-212911.internal,Uid:de7aa386a8cb8fce85552433f3dc239c,Namespace:kube-system,Attempt:0,}" Apr 12 18:44:24.230053 env[1143]: time="2024-04-12T18:44:24.229998360Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510-3-3-5ab81259ac89653c3ce9.c.flatcar-212911.internal,Uid:f8b08ad68adca2e39f6282764ad63d8b,Namespace:kube-system,Attempt:0,}" Apr 12 18:44:24.237391 env[1143]: time="2024-04-12T18:44:24.237320385Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510-3-3-5ab81259ac89653c3ce9.c.flatcar-212911.internal,Uid:85098b4f417d20cfda1c81c8ce1a9802,Namespace:kube-system,Attempt:0,}" Apr 12 18:44:24.361419 kubelet[1692]: E0412 18:44:24.361357 1692 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.15:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510-3-3-5ab81259ac89653c3ce9.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.15:6443: connect: connection refused" interval="800ms" Apr 12 18:44:24.483966 kubelet[1692]: I0412 18:44:24.483816 1692 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510-3-3-5ab81259ac89653c3ce9.c.flatcar-212911.internal" Apr 12 18:44:24.484298 kubelet[1692]: E0412 18:44:24.484268 1692 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.128.0.15:6443/api/v1/nodes\": dial tcp 10.128.0.15:6443: connect: connection refused" node="ci-3510-3-3-5ab81259ac89653c3ce9.c.flatcar-212911.internal" Apr 12 18:44:24.570398 kubelet[1692]: W0412 18:44:24.570335 1692 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.128.0.15:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.15:6443: connect: connection refused Apr 12 18:44:24.570398 kubelet[1692]: E0412 18:44:24.570385 1692 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.128.0.15:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.15:6443: connect: connection refused Apr 12 18:44:24.829145 kubelet[1692]: W0412 18:44:24.829094 1692 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.128.0.15:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.128.0.15:6443: connect: 
connection refused Apr 12 18:44:24.829145 kubelet[1692]: E0412 18:44:24.829146 1692 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.128.0.15:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.128.0.15:6443: connect: connection refused Apr 12 18:44:24.899317 kubelet[1692]: W0412 18:44:24.899268 1692 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.128.0.15:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.15:6443: connect: connection refused Apr 12 18:44:24.899317 kubelet[1692]: E0412 18:44:24.899323 1692 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.128.0.15:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.15:6443: connect: connection refused Apr 12 18:44:25.162617 kubelet[1692]: E0412 18:44:25.162494 1692 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.15:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510-3-3-5ab81259ac89653c3ce9.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.15:6443: connect: connection refused" interval="1.6s" Apr 12 18:44:25.235602 kubelet[1692]: W0412 18:44:25.235519 1692 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.128.0.15:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510-3-3-5ab81259ac89653c3ce9.c.flatcar-212911.internal&limit=500&resourceVersion=0": dial tcp 10.128.0.15:6443: connect: connection refused Apr 12 18:44:25.235602 kubelet[1692]: E0412 18:44:25.235603 1692 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get 
"https://10.128.0.15:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510-3-3-5ab81259ac89653c3ce9.c.flatcar-212911.internal&limit=500&resourceVersion=0": dial tcp 10.128.0.15:6443: connect: connection refused Apr 12 18:44:25.293275 kubelet[1692]: I0412 18:44:25.292963 1692 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510-3-3-5ab81259ac89653c3ce9.c.flatcar-212911.internal" Apr 12 18:44:25.293499 kubelet[1692]: E0412 18:44:25.293388 1692 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.128.0.15:6443/api/v1/nodes\": dial tcp 10.128.0.15:6443: connect: connection refused" node="ci-3510-3-3-5ab81259ac89653c3ce9.c.flatcar-212911.internal" Apr 12 18:44:25.829581 kubelet[1692]: E0412 18:44:25.829539 1692 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.128.0.15:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.128.0.15:6443: connect: connection refused Apr 12 18:44:26.763494 kubelet[1692]: E0412 18:44:26.763444 1692 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.15:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510-3-3-5ab81259ac89653c3ce9.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.15:6443: connect: connection refused" interval="3.2s" Apr 12 18:44:26.899181 kubelet[1692]: I0412 18:44:26.899138 1692 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510-3-3-5ab81259ac89653c3ce9.c.flatcar-212911.internal" Apr 12 18:44:26.900257 kubelet[1692]: E0412 18:44:26.899589 1692 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.128.0.15:6443/api/v1/nodes\": dial tcp 10.128.0.15:6443: connect: connection refused" node="ci-3510-3-3-5ab81259ac89653c3ce9.c.flatcar-212911.internal" Apr 12 18:44:27.058095 
systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3631067083.mount: Deactivated successfully. Apr 12 18:44:27.065057 env[1143]: time="2024-04-12T18:44:27.064997902Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:44:27.067458 env[1143]: time="2024-04-12T18:44:27.067406587Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:44:27.071619 env[1143]: time="2024-04-12T18:44:27.071567553Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:44:27.073281 env[1143]: time="2024-04-12T18:44:27.073229623Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:44:27.075838 env[1143]: time="2024-04-12T18:44:27.075765583Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:44:27.078048 env[1143]: time="2024-04-12T18:44:27.077997865Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:44:27.079274 env[1143]: time="2024-04-12T18:44:27.079221360Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:44:27.087725 env[1143]: time="2024-04-12T18:44:27.087661623Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:44:27.088894 env[1143]: time="2024-04-12T18:44:27.088850939Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:44:27.090337 env[1143]: time="2024-04-12T18:44:27.090295981Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:44:27.093086 env[1143]: time="2024-04-12T18:44:27.093035484Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:44:27.104378 kubelet[1692]: W0412 18:44:27.104318 1692 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.128.0.15:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.15:6443: connect: connection refused Apr 12 18:44:27.104378 kubelet[1692]: E0412 18:44:27.104370 1692 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.128.0.15:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.15:6443: connect: connection refused Apr 12 18:44:27.106124 env[1143]: time="2024-04-12T18:44:27.106061786Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:44:27.139444 env[1143]: 
time="2024-04-12T18:44:27.139356573Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 12 18:44:27.139749 env[1143]: time="2024-04-12T18:44:27.139705803Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 12 18:44:27.139960 env[1143]: time="2024-04-12T18:44:27.139882233Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 12 18:44:27.140375 env[1143]: time="2024-04-12T18:44:27.140326915Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/cf59fea504204db643a75b942c9351f1bbeb2ab8a58a850fc587f2f918fa0868 pid=1738 runtime=io.containerd.runc.v2 Apr 12 18:44:27.153734 env[1143]: time="2024-04-12T18:44:27.153626990Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 12 18:44:27.153734 env[1143]: time="2024-04-12T18:44:27.153684671Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 12 18:44:27.153734 env[1143]: time="2024-04-12T18:44:27.153703669Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 12 18:44:27.154457 env[1143]: time="2024-04-12T18:44:27.154374439Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/70b18412706893f327e7cc5d96a8facab30a455912ab95ccb5bca950c5baf356 pid=1744 runtime=io.containerd.runc.v2 Apr 12 18:44:27.180765 env[1143]: time="2024-04-12T18:44:27.180638679Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 12 18:44:27.181074 env[1143]: time="2024-04-12T18:44:27.181019871Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 12 18:44:27.181281 env[1143]: time="2024-04-12T18:44:27.181236244Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 12 18:44:27.181967 env[1143]: time="2024-04-12T18:44:27.181881261Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/87d2b7ffe82a6bcff4778b709e7c252e534854e0a92333c729db14b4d74f6021 pid=1773 runtime=io.containerd.runc.v2 Apr 12 18:44:27.183959 systemd[1]: Started cri-containerd-cf59fea504204db643a75b942c9351f1bbeb2ab8a58a850fc587f2f918fa0868.scope. Apr 12 18:44:27.216847 systemd[1]: Started cri-containerd-70b18412706893f327e7cc5d96a8facab30a455912ab95ccb5bca950c5baf356.scope. Apr 12 18:44:27.237969 kubelet[1692]: W0412 18:44:27.237629 1692 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.128.0.15:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.128.0.15:6443: connect: connection refused Apr 12 18:44:27.237969 kubelet[1692]: E0412 18:44:27.237694 1692 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.128.0.15:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.128.0.15:6443: connect: connection refused Apr 12 18:44:27.240475 systemd[1]: Started cri-containerd-87d2b7ffe82a6bcff4778b709e7c252e534854e0a92333c729db14b4d74f6021.scope. 
Apr 12 18:44:27.303693 kubelet[1692]: W0412 18:44:27.303649 1692 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.128.0.15:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.15:6443: connect: connection refused Apr 12 18:44:27.303693 kubelet[1692]: E0412 18:44:27.303708 1692 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.128.0.15:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.15:6443: connect: connection refused Apr 12 18:44:27.309815 env[1143]: time="2024-04-12T18:44:27.308308695Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510-3-3-5ab81259ac89653c3ce9.c.flatcar-212911.internal,Uid:f8b08ad68adca2e39f6282764ad63d8b,Namespace:kube-system,Attempt:0,} returns sandbox id \"70b18412706893f327e7cc5d96a8facab30a455912ab95ccb5bca950c5baf356\"" Apr 12 18:44:27.326310 env[1143]: time="2024-04-12T18:44:27.326249357Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510-3-3-5ab81259ac89653c3ce9.c.flatcar-212911.internal,Uid:de7aa386a8cb8fce85552433f3dc239c,Namespace:kube-system,Attempt:0,} returns sandbox id \"cf59fea504204db643a75b942c9351f1bbeb2ab8a58a850fc587f2f918fa0868\"" Apr 12 18:44:27.334941 kubelet[1692]: E0412 18:44:27.332165 1692 kubelet_pods.go:414] "Hostname for pod was too long, truncated it" podName="kube-apiserver-ci-3510-3-3-5ab81259ac89653c3ce9.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-apiserver-ci-3510-3-3-5ab81259ac89653c3ce9.c.flatcar-21291" Apr 12 18:44:27.334941 kubelet[1692]: E0412 18:44:27.333006 1692 kubelet_pods.go:414] "Hostname for pod was too long, truncated it" podName="kube-controller-manager-ci-3510-3-3-5ab81259ac89653c3ce9.c.flatcar-212911.internal" hostnameMaxLen=63 
truncatedHostname="kube-controller-manager-ci-3510-3-3-5ab81259ac89653c3ce9.c.flat" Apr 12 18:44:27.338106 env[1143]: time="2024-04-12T18:44:27.338053576Z" level=info msg="CreateContainer within sandbox \"cf59fea504204db643a75b942c9351f1bbeb2ab8a58a850fc587f2f918fa0868\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Apr 12 18:44:27.342572 env[1143]: time="2024-04-12T18:44:27.342522952Z" level=info msg="CreateContainer within sandbox \"70b18412706893f327e7cc5d96a8facab30a455912ab95ccb5bca950c5baf356\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Apr 12 18:44:27.375287 env[1143]: time="2024-04-12T18:44:27.375178702Z" level=info msg="CreateContainer within sandbox \"70b18412706893f327e7cc5d96a8facab30a455912ab95ccb5bca950c5baf356\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"7a7ab5c7b222343ebe84f18ad125378ba9c7d5815c1873a49edc52ce98aeb98d\"" Apr 12 18:44:27.376463 env[1143]: time="2024-04-12T18:44:27.376389795Z" level=info msg="StartContainer for \"7a7ab5c7b222343ebe84f18ad125378ba9c7d5815c1873a49edc52ce98aeb98d\"" Apr 12 18:44:27.376987 env[1143]: time="2024-04-12T18:44:27.376948616Z" level=info msg="CreateContainer within sandbox \"cf59fea504204db643a75b942c9351f1bbeb2ab8a58a850fc587f2f918fa0868\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"70144b5b9d3e0928004c09071dd8a52d1723ba656470a820dcf0f5de89d2aea2\"" Apr 12 18:44:27.377648 env[1143]: time="2024-04-12T18:44:27.377616129Z" level=info msg="StartContainer for \"70144b5b9d3e0928004c09071dd8a52d1723ba656470a820dcf0f5de89d2aea2\"" Apr 12 18:44:27.383239 env[1143]: time="2024-04-12T18:44:27.383197155Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510-3-3-5ab81259ac89653c3ce9.c.flatcar-212911.internal,Uid:85098b4f417d20cfda1c81c8ce1a9802,Namespace:kube-system,Attempt:0,} returns sandbox id \"87d2b7ffe82a6bcff4778b709e7c252e534854e0a92333c729db14b4d74f6021\"" 
Apr 12 18:44:27.385150 kubelet[1692]: E0412 18:44:27.385121 1692 kubelet_pods.go:414] "Hostname for pod was too long, truncated it" podName="kube-scheduler-ci-3510-3-3-5ab81259ac89653c3ce9.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-scheduler-ci-3510-3-3-5ab81259ac89653c3ce9.c.flatcar-21291" Apr 12 18:44:27.386763 env[1143]: time="2024-04-12T18:44:27.386716094Z" level=info msg="CreateContainer within sandbox \"87d2b7ffe82a6bcff4778b709e7c252e534854e0a92333c729db14b4d74f6021\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Apr 12 18:44:27.410740 env[1143]: time="2024-04-12T18:44:27.410683399Z" level=info msg="CreateContainer within sandbox \"87d2b7ffe82a6bcff4778b709e7c252e534854e0a92333c729db14b4d74f6021\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"87fe0fb670ee237db5ebd7e9e486dccdd0205258f66bb66c03edc7ae3c812d35\"" Apr 12 18:44:27.411811 env[1143]: time="2024-04-12T18:44:27.411770046Z" level=info msg="StartContainer for \"87fe0fb670ee237db5ebd7e9e486dccdd0205258f66bb66c03edc7ae3c812d35\"" Apr 12 18:44:27.415458 systemd[1]: Started cri-containerd-7a7ab5c7b222343ebe84f18ad125378ba9c7d5815c1873a49edc52ce98aeb98d.scope. Apr 12 18:44:27.433117 systemd[1]: Started cri-containerd-70144b5b9d3e0928004c09071dd8a52d1723ba656470a820dcf0f5de89d2aea2.scope. Apr 12 18:44:27.489411 systemd[1]: Started cri-containerd-87fe0fb670ee237db5ebd7e9e486dccdd0205258f66bb66c03edc7ae3c812d35.scope. 
Apr 12 18:44:27.539493 env[1143]: time="2024-04-12T18:44:27.539432744Z" level=info msg="StartContainer for \"7a7ab5c7b222343ebe84f18ad125378ba9c7d5815c1873a49edc52ce98aeb98d\" returns successfully" Apr 12 18:44:27.549006 env[1143]: time="2024-04-12T18:44:27.548954243Z" level=info msg="StartContainer for \"70144b5b9d3e0928004c09071dd8a52d1723ba656470a820dcf0f5de89d2aea2\" returns successfully" Apr 12 18:44:27.613602 env[1143]: time="2024-04-12T18:44:27.613539167Z" level=info msg="StartContainer for \"87fe0fb670ee237db5ebd7e9e486dccdd0205258f66bb66c03edc7ae3c812d35\" returns successfully" Apr 12 18:44:27.643574 kubelet[1692]: W0412 18:44:27.643399 1692 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.128.0.15:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510-3-3-5ab81259ac89653c3ce9.c.flatcar-212911.internal&limit=500&resourceVersion=0": dial tcp 10.128.0.15:6443: connect: connection refused Apr 12 18:44:27.643574 kubelet[1692]: E0412 18:44:27.643506 1692 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.128.0.15:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510-3-3-5ab81259ac89653c3ce9.c.flatcar-212911.internal&limit=500&resourceVersion=0": dial tcp 10.128.0.15:6443: connect: connection refused Apr 12 18:44:30.105365 kubelet[1692]: I0412 18:44:30.105329 1692 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510-3-3-5ab81259ac89653c3ce9.c.flatcar-212911.internal" Apr 12 18:44:32.485767 kubelet[1692]: E0412 18:44:32.485726 1692 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-3510-3-3-5ab81259ac89653c3ce9.c.flatcar-212911.internal\" not found" node="ci-3510-3-3-5ab81259ac89653c3ce9.c.flatcar-212911.internal" Apr 12 18:44:32.573454 kubelet[1692]: I0412 18:44:32.573399 1692 kubelet_node_status.go:73] "Successfully registered node" 
node="ci-3510-3-3-5ab81259ac89653c3ce9.c.flatcar-212911.internal" Apr 12 18:44:32.731496 kubelet[1692]: I0412 18:44:32.731455 1692 apiserver.go:52] "Watching apiserver" Apr 12 18:44:32.755636 kubelet[1692]: I0412 18:44:32.755484 1692 desired_state_of_world_populator.go:153] "Finished populating initial desired state of world" Apr 12 18:44:32.807971 kubelet[1692]: I0412 18:44:32.807934 1692 reconciler.go:41] "Reconciler: start to sync state" Apr 12 18:44:34.800212 update_engine[1126]: I0412 18:44:34.800155 1126 update_attempter.cc:509] Updating boot flags... Apr 12 18:44:35.375767 systemd[1]: Reloading. Apr 12 18:44:35.501793 /usr/lib/systemd/system-generators/torcx-generator[1993]: time="2024-04-12T18:44:35Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.3 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.3 /var/lib/torcx/store]" Apr 12 18:44:35.502403 /usr/lib/systemd/system-generators/torcx-generator[1993]: time="2024-04-12T18:44:35Z" level=info msg="torcx already run" Apr 12 18:44:35.609586 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Apr 12 18:44:35.609614 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Apr 12 18:44:35.637053 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 12 18:44:35.829095 kubelet[1692]: I0412 18:44:35.829047 1692 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 12 18:44:35.830113 systemd[1]: Stopping kubelet.service... 
Apr 12 18:44:35.842381 systemd[1]: kubelet.service: Deactivated successfully. Apr 12 18:44:35.842679 systemd[1]: Stopped kubelet.service. Apr 12 18:44:35.846615 systemd[1]: Started kubelet.service. Apr 12 18:44:35.970199 kubelet[2038]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 12 18:44:35.970708 kubelet[2038]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Apr 12 18:44:35.970855 kubelet[2038]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 12 18:44:35.971148 kubelet[2038]: I0412 18:44:35.971063 2038 server.go:199] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 12 18:44:35.996753 kubelet[2038]: I0412 18:44:35.996699 2038 server.go:415] "Kubelet version" kubeletVersion="v1.27.2" Apr 12 18:44:35.997011 kubelet[2038]: I0412 18:44:35.996988 2038 server.go:417] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 12 18:44:35.997477 kubelet[2038]: I0412 18:44:35.997453 2038 server.go:837] "Client rotation is on, will bootstrap in background" Apr 12 18:44:36.001312 kubelet[2038]: I0412 18:44:36.001282 2038 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Apr 12 18:44:36.005086 kubelet[2038]: I0412 18:44:36.005049 2038 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 12 18:44:36.009162 sudo[2049]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Apr 12 18:44:36.009587 sudo[2049]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Apr 12 18:44:36.011250 kubelet[2038]: I0412 18:44:36.011218 2038 server.go:662] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Apr 12 18:44:36.012171 kubelet[2038]: I0412 18:44:36.012150 2038 container_manager_linux.go:266] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 12 18:44:36.012533 kubelet[2038]: I0412 18:44:36.012513 2038 container_manager_linux.go:271] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:} {Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] TopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] PodPidsLimit:-1 EnforceCPULimits:true 
CPUCFSQuotaPeriod:100ms TopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]} Apr 12 18:44:36.012970 kubelet[2038]: I0412 18:44:36.012949 2038 topology_manager.go:136] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Apr 12 18:44:36.013099 kubelet[2038]: I0412 18:44:36.013085 2038 container_manager_linux.go:302] "Creating device plugin manager" Apr 12 18:44:36.013236 kubelet[2038]: I0412 18:44:36.013221 2038 state_mem.go:36] "Initialized new in-memory state store" Apr 12 18:44:36.026952 kubelet[2038]: I0412 18:44:36.022462 2038 kubelet.go:405] "Attempting to sync node with API server" Apr 12 18:44:36.026952 kubelet[2038]: I0412 18:44:36.022497 2038 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 12 18:44:36.026952 kubelet[2038]: I0412 18:44:36.022533 2038 kubelet.go:309] "Adding apiserver pod source" Apr 12 18:44:36.026952 kubelet[2038]: I0412 18:44:36.022558 2038 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 12 18:44:36.027414 kubelet[2038]: I0412 18:44:36.027146 2038 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Apr 12 18:44:36.037014 kubelet[2038]: I0412 18:44:36.036984 2038 server.go:1168] "Started kubelet" Apr 12 18:44:36.047981 kubelet[2038]: I0412 18:44:36.047943 2038 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 12 18:44:36.074490 kubelet[2038]: I0412 18:44:36.074426 2038 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Apr 12 18:44:36.078222 kubelet[2038]: I0412 18:44:36.078183 2038 server.go:461] "Adding debug handlers to kubelet server" Apr 12 18:44:36.080486 kubelet[2038]: I0412 18:44:36.080446 2038 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Apr 12 18:44:36.086334 kubelet[2038]: I0412 18:44:36.086272 2038 volume_manager.go:284] "Starting Kubelet Volume Manager" 
Apr 12 18:44:36.087737 kubelet[2038]: E0412 18:44:36.087700 2038 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Apr 12 18:44:36.087737 kubelet[2038]: E0412 18:44:36.087745 2038 kubelet.go:1400] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 12 18:44:36.089282 kubelet[2038]: I0412 18:44:36.089249 2038 desired_state_of_world_populator.go:145] "Desired state populator starts to run" Apr 12 18:44:36.161792 kubelet[2038]: I0412 18:44:36.161752 2038 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4 Apr 12 18:44:36.163519 kubelet[2038]: I0412 18:44:36.163484 2038 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv6 Apr 12 18:44:36.163519 kubelet[2038]: I0412 18:44:36.163519 2038 status_manager.go:207] "Starting to sync pod status with apiserver" Apr 12 18:44:36.163732 kubelet[2038]: I0412 18:44:36.163558 2038 kubelet.go:2257] "Starting kubelet main sync loop" Apr 12 18:44:36.163732 kubelet[2038]: E0412 18:44:36.163632 2038 kubelet.go:2281] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 12 18:44:36.204792 kubelet[2038]: I0412 18:44:36.204754 2038 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510-3-3-5ab81259ac89653c3ce9.c.flatcar-212911.internal" Apr 12 18:44:36.224492 kubelet[2038]: I0412 18:44:36.224379 2038 kubelet_node_status.go:108] "Node was previously registered" node="ci-3510-3-3-5ab81259ac89653c3ce9.c.flatcar-212911.internal" Apr 12 18:44:36.224492 kubelet[2038]: I0412 18:44:36.224486 2038 kubelet_node_status.go:73] "Successfully registered node" node="ci-3510-3-3-5ab81259ac89653c3ce9.c.flatcar-212911.internal" Apr 12 18:44:36.263824 kubelet[2038]: E0412 
18:44:36.263788 2038 kubelet.go:2281] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Apr 12 18:44:36.312070 kubelet[2038]: I0412 18:44:36.312038 2038 cpu_manager.go:214] "Starting CPU manager" policy="none" Apr 12 18:44:36.312403 kubelet[2038]: I0412 18:44:36.312385 2038 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Apr 12 18:44:36.312687 kubelet[2038]: I0412 18:44:36.312575 2038 state_mem.go:36] "Initialized new in-memory state store" Apr 12 18:44:36.313037 kubelet[2038]: I0412 18:44:36.313017 2038 state_mem.go:88] "Updated default CPUSet" cpuSet="" Apr 12 18:44:36.313174 kubelet[2038]: I0412 18:44:36.313153 2038 state_mem.go:96] "Updated CPUSet assignments" assignments=map[] Apr 12 18:44:36.313279 kubelet[2038]: I0412 18:44:36.313265 2038 policy_none.go:49] "None policy: Start" Apr 12 18:44:36.314507 kubelet[2038]: I0412 18:44:36.314486 2038 memory_manager.go:169] "Starting memorymanager" policy="None" Apr 12 18:44:36.314648 kubelet[2038]: I0412 18:44:36.314636 2038 state_mem.go:35] "Initializing new in-memory state store" Apr 12 18:44:36.326047 kubelet[2038]: I0412 18:44:36.326006 2038 state_mem.go:75] "Updated machine memory state" Apr 12 18:44:36.343577 kubelet[2038]: I0412 18:44:36.343356 2038 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Apr 12 18:44:36.348040 kubelet[2038]: I0412 18:44:36.344637 2038 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 12 18:44:36.465035 kubelet[2038]: I0412 18:44:36.464998 2038 topology_manager.go:212] "Topology Admit Handler" Apr 12 18:44:36.465361 kubelet[2038]: I0412 18:44:36.465342 2038 topology_manager.go:212] "Topology Admit Handler" Apr 12 18:44:36.465511 kubelet[2038]: I0412 18:44:36.465497 2038 topology_manager.go:212] "Topology Admit Handler" Apr 12 18:44:36.478455 kubelet[2038]: W0412 18:44:36.478323 2038 warnings.go:70] metadata.name: this is used in the Pod's 
hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Apr 12 18:44:36.480215 kubelet[2038]: W0412 18:44:36.480172 2038 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Apr 12 18:44:36.482549 kubelet[2038]: W0412 18:44:36.482508 2038 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Apr 12 18:44:36.494652 kubelet[2038]: I0412 18:44:36.494619 2038 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/de7aa386a8cb8fce85552433f3dc239c-k8s-certs\") pod \"kube-apiserver-ci-3510-3-3-5ab81259ac89653c3ce9.c.flatcar-212911.internal\" (UID: \"de7aa386a8cb8fce85552433f3dc239c\") " pod="kube-system/kube-apiserver-ci-3510-3-3-5ab81259ac89653c3ce9.c.flatcar-212911.internal" Apr 12 18:44:36.494963 kubelet[2038]: I0412 18:44:36.494946 2038 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/de7aa386a8cb8fce85552433f3dc239c-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510-3-3-5ab81259ac89653c3ce9.c.flatcar-212911.internal\" (UID: \"de7aa386a8cb8fce85552433f3dc239c\") " pod="kube-system/kube-apiserver-ci-3510-3-3-5ab81259ac89653c3ce9.c.flatcar-212911.internal" Apr 12 18:44:36.495142 kubelet[2038]: I0412 18:44:36.495118 2038 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f8b08ad68adca2e39f6282764ad63d8b-flexvolume-dir\") pod 
\"kube-controller-manager-ci-3510-3-3-5ab81259ac89653c3ce9.c.flatcar-212911.internal\" (UID: \"f8b08ad68adca2e39f6282764ad63d8b\") " pod="kube-system/kube-controller-manager-ci-3510-3-3-5ab81259ac89653c3ce9.c.flatcar-212911.internal" Apr 12 18:44:36.495247 kubelet[2038]: I0412 18:44:36.495185 2038 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f8b08ad68adca2e39f6282764ad63d8b-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510-3-3-5ab81259ac89653c3ce9.c.flatcar-212911.internal\" (UID: \"f8b08ad68adca2e39f6282764ad63d8b\") " pod="kube-system/kube-controller-manager-ci-3510-3-3-5ab81259ac89653c3ce9.c.flatcar-212911.internal" Apr 12 18:44:36.495315 kubelet[2038]: I0412 18:44:36.495252 2038 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/de7aa386a8cb8fce85552433f3dc239c-ca-certs\") pod \"kube-apiserver-ci-3510-3-3-5ab81259ac89653c3ce9.c.flatcar-212911.internal\" (UID: \"de7aa386a8cb8fce85552433f3dc239c\") " pod="kube-system/kube-apiserver-ci-3510-3-3-5ab81259ac89653c3ce9.c.flatcar-212911.internal" Apr 12 18:44:36.495315 kubelet[2038]: I0412 18:44:36.495292 2038 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f8b08ad68adca2e39f6282764ad63d8b-ca-certs\") pod \"kube-controller-manager-ci-3510-3-3-5ab81259ac89653c3ce9.c.flatcar-212911.internal\" (UID: \"f8b08ad68adca2e39f6282764ad63d8b\") " pod="kube-system/kube-controller-manager-ci-3510-3-3-5ab81259ac89653c3ce9.c.flatcar-212911.internal" Apr 12 18:44:36.495438 kubelet[2038]: I0412 18:44:36.495353 2038 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f8b08ad68adca2e39f6282764ad63d8b-k8s-certs\") pod 
\"kube-controller-manager-ci-3510-3-3-5ab81259ac89653c3ce9.c.flatcar-212911.internal\" (UID: \"f8b08ad68adca2e39f6282764ad63d8b\") " pod="kube-system/kube-controller-manager-ci-3510-3-3-5ab81259ac89653c3ce9.c.flatcar-212911.internal" Apr 12 18:44:36.495438 kubelet[2038]: I0412 18:44:36.495412 2038 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f8b08ad68adca2e39f6282764ad63d8b-kubeconfig\") pod \"kube-controller-manager-ci-3510-3-3-5ab81259ac89653c3ce9.c.flatcar-212911.internal\" (UID: \"f8b08ad68adca2e39f6282764ad63d8b\") " pod="kube-system/kube-controller-manager-ci-3510-3-3-5ab81259ac89653c3ce9.c.flatcar-212911.internal" Apr 12 18:44:36.495554 kubelet[2038]: I0412 18:44:36.495463 2038 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/85098b4f417d20cfda1c81c8ce1a9802-kubeconfig\") pod \"kube-scheduler-ci-3510-3-3-5ab81259ac89653c3ce9.c.flatcar-212911.internal\" (UID: \"85098b4f417d20cfda1c81c8ce1a9802\") " pod="kube-system/kube-scheduler-ci-3510-3-3-5ab81259ac89653c3ce9.c.flatcar-212911.internal" Apr 12 18:44:36.874544 sudo[2049]: pam_unix(sudo:session): session closed for user root Apr 12 18:44:37.034712 kubelet[2038]: I0412 18:44:37.034656 2038 apiserver.go:52] "Watching apiserver" Apr 12 18:44:37.090476 kubelet[2038]: I0412 18:44:37.090435 2038 desired_state_of_world_populator.go:153] "Finished populating initial desired state of world" Apr 12 18:44:37.098046 kubelet[2038]: I0412 18:44:37.098009 2038 reconciler.go:41] "Reconciler: start to sync state" Apr 12 18:44:37.298802 kubelet[2038]: I0412 18:44:37.298666 2038 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3510-3-3-5ab81259ac89653c3ce9.c.flatcar-212911.internal" podStartSLOduration=1.2985823810000001 podCreationTimestamp="2024-04-12 18:44:36 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-04-12 18:44:37.296117087 +0000 UTC m=+1.439507516" watchObservedRunningTime="2024-04-12 18:44:37.298582381 +0000 UTC m=+1.441972801" Apr 12 18:44:37.327719 kubelet[2038]: I0412 18:44:37.327675 2038 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3510-3-3-5ab81259ac89653c3ce9.c.flatcar-212911.internal" podStartSLOduration=1.327608629 podCreationTimestamp="2024-04-12 18:44:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-04-12 18:44:37.310841502 +0000 UTC m=+1.454231931" watchObservedRunningTime="2024-04-12 18:44:37.327608629 +0000 UTC m=+1.470999053" Apr 12 18:44:37.328000 kubelet[2038]: I0412 18:44:37.327819 2038 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3510-3-3-5ab81259ac89653c3ce9.c.flatcar-212911.internal" podStartSLOduration=1.3277921400000001 podCreationTimestamp="2024-04-12 18:44:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-04-12 18:44:37.32525235 +0000 UTC m=+1.468642778" watchObservedRunningTime="2024-04-12 18:44:37.32779214 +0000 UTC m=+1.471182569" Apr 12 18:44:38.245433 sudo[1309]: pam_unix(sudo:session): session closed for user root Apr 12 18:44:38.298639 sshd[1306]: pam_unix(sshd:session): session closed for user core Apr 12 18:44:38.304340 systemd[1]: sshd@4-10.128.0.15:22-139.178.89.65:41840.service: Deactivated successfully. Apr 12 18:44:38.305399 systemd[1]: session-5.scope: Deactivated successfully. Apr 12 18:44:38.305597 systemd[1]: session-5.scope: Consumed 5.722s CPU time. Apr 12 18:44:38.307315 systemd-logind[1124]: Session 5 logged out. Waiting for processes to exit. 
Apr 12 18:44:38.308860 systemd-logind[1124]: Removed session 5.
Apr 12 18:44:48.401121 kubelet[2038]: I0412 18:44:48.401091 2038 kuberuntime_manager.go:1460] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Apr 12 18:44:48.402635 env[1143]: time="2024-04-12T18:44:48.402582711Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Apr 12 18:44:48.403548 kubelet[2038]: I0412 18:44:48.403517 2038 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Apr 12 18:44:48.550797 kubelet[2038]: I0412 18:44:48.550750 2038 topology_manager.go:212] "Topology Admit Handler"
Apr 12 18:44:48.559225 systemd[1]: Created slice kubepods-besteffort-pod1038ff39_de7c_49de_815a_0137c49de7b1.slice.
Apr 12 18:44:48.571758 kubelet[2038]: I0412 18:44:48.571710 2038 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1038ff39-de7c-49de-815a-0137c49de7b1-xtables-lock\") pod \"kube-proxy-h4vcw\" (UID: \"1038ff39-de7c-49de-815a-0137c49de7b1\") " pod="kube-system/kube-proxy-h4vcw"
Apr 12 18:44:48.572088 kubelet[2038]: I0412 18:44:48.572065 2038 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1038ff39-de7c-49de-815a-0137c49de7b1-lib-modules\") pod \"kube-proxy-h4vcw\" (UID: \"1038ff39-de7c-49de-815a-0137c49de7b1\") " pod="kube-system/kube-proxy-h4vcw"
Apr 12 18:44:48.572331 kubelet[2038]: I0412 18:44:48.572300 2038 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/1038ff39-de7c-49de-815a-0137c49de7b1-kube-proxy\") pod \"kube-proxy-h4vcw\" (UID: \"1038ff39-de7c-49de-815a-0137c49de7b1\") " pod="kube-system/kube-proxy-h4vcw"
Apr 12 18:44:48.572536 kubelet[2038]: I0412 18:44:48.572501 2038 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xvknn\" (UniqueName: \"kubernetes.io/projected/1038ff39-de7c-49de-815a-0137c49de7b1-kube-api-access-xvknn\") pod \"kube-proxy-h4vcw\" (UID: \"1038ff39-de7c-49de-815a-0137c49de7b1\") " pod="kube-system/kube-proxy-h4vcw"
Apr 12 18:44:48.651400 kubelet[2038]: I0412 18:44:48.651253 2038 topology_manager.go:212] "Topology Admit Handler"
Apr 12 18:44:48.673155 kubelet[2038]: I0412 18:44:48.673105 2038 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b98aae52-9598-4a7b-b7b5-ea860ea0f989-cni-path\") pod \"cilium-642qv\" (UID: \"b98aae52-9598-4a7b-b7b5-ea860ea0f989\") " pod="kube-system/cilium-642qv"
Apr 12 18:44:48.673366 kubelet[2038]: I0412 18:44:48.673185 2038 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b98aae52-9598-4a7b-b7b5-ea860ea0f989-cilium-config-path\") pod \"cilium-642qv\" (UID: \"b98aae52-9598-4a7b-b7b5-ea860ea0f989\") " pod="kube-system/cilium-642qv"
Apr 12 18:44:48.673366 kubelet[2038]: I0412 18:44:48.673224 2038 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b98aae52-9598-4a7b-b7b5-ea860ea0f989-bpf-maps\") pod \"cilium-642qv\" (UID: \"b98aae52-9598-4a7b-b7b5-ea860ea0f989\") " pod="kube-system/cilium-642qv"
Apr 12 18:44:48.673366 kubelet[2038]: I0412 18:44:48.673305 2038 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b98aae52-9598-4a7b-b7b5-ea860ea0f989-hostproc\") pod \"cilium-642qv\" (UID: \"b98aae52-9598-4a7b-b7b5-ea860ea0f989\") " pod="kube-system/cilium-642qv"
Apr 12 18:44:48.673366 kubelet[2038]: I0412 18:44:48.673356 2038 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b98aae52-9598-4a7b-b7b5-ea860ea0f989-lib-modules\") pod \"cilium-642qv\" (UID: \"b98aae52-9598-4a7b-b7b5-ea860ea0f989\") " pod="kube-system/cilium-642qv"
Apr 12 18:44:48.673595 kubelet[2038]: I0412 18:44:48.673399 2038 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b98aae52-9598-4a7b-b7b5-ea860ea0f989-host-proc-sys-kernel\") pod \"cilium-642qv\" (UID: \"b98aae52-9598-4a7b-b7b5-ea860ea0f989\") " pod="kube-system/cilium-642qv"
Apr 12 18:44:48.673595 kubelet[2038]: I0412 18:44:48.673444 2038 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b98aae52-9598-4a7b-b7b5-ea860ea0f989-cilium-run\") pod \"cilium-642qv\" (UID: \"b98aae52-9598-4a7b-b7b5-ea860ea0f989\") " pod="kube-system/cilium-642qv"
Apr 12 18:44:48.673595 kubelet[2038]: I0412 18:44:48.673487 2038 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b98aae52-9598-4a7b-b7b5-ea860ea0f989-etc-cni-netd\") pod \"cilium-642qv\" (UID: \"b98aae52-9598-4a7b-b7b5-ea860ea0f989\") " pod="kube-system/cilium-642qv"
Apr 12 18:44:48.673595 kubelet[2038]: I0412 18:44:48.673542 2038 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b98aae52-9598-4a7b-b7b5-ea860ea0f989-clustermesh-secrets\") pod \"cilium-642qv\" (UID: \"b98aae52-9598-4a7b-b7b5-ea860ea0f989\") " pod="kube-system/cilium-642qv"
Apr 12 18:44:48.673595 kubelet[2038]: I0412 18:44:48.673588 2038 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b98aae52-9598-4a7b-b7b5-ea860ea0f989-hubble-tls\") pod \"cilium-642qv\" (UID: \"b98aae52-9598-4a7b-b7b5-ea860ea0f989\") " pod="kube-system/cilium-642qv"
Apr 12 18:44:48.673858 kubelet[2038]: I0412 18:44:48.673683 2038 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b98aae52-9598-4a7b-b7b5-ea860ea0f989-cilium-cgroup\") pod \"cilium-642qv\" (UID: \"b98aae52-9598-4a7b-b7b5-ea860ea0f989\") " pod="kube-system/cilium-642qv"
Apr 12 18:44:48.673858 kubelet[2038]: I0412 18:44:48.673729 2038 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b98aae52-9598-4a7b-b7b5-ea860ea0f989-xtables-lock\") pod \"cilium-642qv\" (UID: \"b98aae52-9598-4a7b-b7b5-ea860ea0f989\") " pod="kube-system/cilium-642qv"
Apr 12 18:44:48.673858 kubelet[2038]: I0412 18:44:48.673800 2038 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b98aae52-9598-4a7b-b7b5-ea860ea0f989-host-proc-sys-net\") pod \"cilium-642qv\" (UID: \"b98aae52-9598-4a7b-b7b5-ea860ea0f989\") " pod="kube-system/cilium-642qv"
Apr 12 18:44:48.673858 kubelet[2038]: I0412 18:44:48.673851 2038 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f82xr\" (UniqueName: \"kubernetes.io/projected/b98aae52-9598-4a7b-b7b5-ea860ea0f989-kube-api-access-f82xr\") pod \"cilium-642qv\" (UID: \"b98aae52-9598-4a7b-b7b5-ea860ea0f989\") " pod="kube-system/cilium-642qv"
Apr 12 18:44:48.680407 kubelet[2038]: W0412 18:44:48.680353 2038 reflector.go:533] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ci-3510-3-3-5ab81259ac89653c3ce9.c.flatcar-212911.internal" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510-3-3-5ab81259ac89653c3ce9.c.flatcar-212911.internal' and this object
Apr 12 18:44:48.680407 kubelet[2038]: E0412 18:44:48.680415 2038 reflector.go:148] object-"kube-system"/"cilium-clustermesh": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ci-3510-3-3-5ab81259ac89653c3ce9.c.flatcar-212911.internal" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510-3-3-5ab81259ac89653c3ce9.c.flatcar-212911.internal' and this object
Apr 12 18:44:48.680663 kubelet[2038]: W0412 18:44:48.680522 2038 reflector.go:533] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ci-3510-3-3-5ab81259ac89653c3ce9.c.flatcar-212911.internal" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510-3-3-5ab81259ac89653c3ce9.c.flatcar-212911.internal' and this object
Apr 12 18:44:48.680663 kubelet[2038]: E0412 18:44:48.680543 2038 reflector.go:148] object-"kube-system"/"cilium-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ci-3510-3-3-5ab81259ac89653c3ce9.c.flatcar-212911.internal" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510-3-3-5ab81259ac89653c3ce9.c.flatcar-212911.internal' and this object
Apr 12 18:44:48.680663 kubelet[2038]: W0412 18:44:48.680598 2038 reflector.go:533] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ci-3510-3-3-5ab81259ac89653c3ce9.c.flatcar-212911.internal" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510-3-3-5ab81259ac89653c3ce9.c.flatcar-212911.internal' and this object
Apr 12 18:44:48.680663 kubelet[2038]: E0412 18:44:48.680613 2038 reflector.go:148] object-"kube-system"/"hubble-server-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ci-3510-3-3-5ab81259ac89653c3ce9.c.flatcar-212911.internal" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510-3-3-5ab81259ac89653c3ce9.c.flatcar-212911.internal' and this object
Apr 12 18:44:48.690096 systemd[1]: Created slice kubepods-burstable-podb98aae52_9598_4a7b_b7b5_ea860ea0f989.slice.
Apr 12 18:44:48.738203 kubelet[2038]: E0412 18:44:48.738154 2038 projected.go:292] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
Apr 12 18:44:48.738203 kubelet[2038]: E0412 18:44:48.738205 2038 projected.go:198] Error preparing data for projected volume kube-api-access-xvknn for pod kube-system/kube-proxy-h4vcw: configmap "kube-root-ca.crt" not found
Apr 12 18:44:48.738815 kubelet[2038]: E0412 18:44:48.738297 2038 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1038ff39-de7c-49de-815a-0137c49de7b1-kube-api-access-xvknn podName:1038ff39-de7c-49de-815a-0137c49de7b1 nodeName:}" failed. No retries permitted until 2024-04-12 18:44:49.238270024 +0000 UTC m=+13.381660445 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-xvknn" (UniqueName: "kubernetes.io/projected/1038ff39-de7c-49de-815a-0137c49de7b1-kube-api-access-xvknn") pod "kube-proxy-h4vcw" (UID: "1038ff39-de7c-49de-815a-0137c49de7b1") : configmap "kube-root-ca.crt" not found
Apr 12 18:44:48.764980 kubelet[2038]: I0412 18:44:48.764927 2038 topology_manager.go:212] "Topology Admit Handler"
Apr 12 18:44:48.773032 systemd[1]: Created slice kubepods-besteffort-pod1a09c3dc_0386_43a8_81b8_b82ea89ef32b.slice.
Apr 12 18:44:48.774341 kubelet[2038]: I0412 18:44:48.774301 2038 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1a09c3dc-0386-43a8-81b8-b82ea89ef32b-cilium-config-path\") pod \"cilium-operator-574c4bb98d-vq62v\" (UID: \"1a09c3dc-0386-43a8-81b8-b82ea89ef32b\") " pod="kube-system/cilium-operator-574c4bb98d-vq62v"
Apr 12 18:44:48.774655 kubelet[2038]: I0412 18:44:48.774625 2038 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qkxxh\" (UniqueName: \"kubernetes.io/projected/1a09c3dc-0386-43a8-81b8-b82ea89ef32b-kube-api-access-qkxxh\") pod \"cilium-operator-574c4bb98d-vq62v\" (UID: \"1a09c3dc-0386-43a8-81b8-b82ea89ef32b\") " pod="kube-system/cilium-operator-574c4bb98d-vq62v"
Apr 12 18:44:49.471327 env[1143]: time="2024-04-12T18:44:49.471261900Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-h4vcw,Uid:1038ff39-de7c-49de-815a-0137c49de7b1,Namespace:kube-system,Attempt:0,}"
Apr 12 18:44:49.501559 env[1143]: time="2024-04-12T18:44:49.501451872Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 12 18:44:49.501820 env[1143]: time="2024-04-12T18:44:49.501507796Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 12 18:44:49.501820 env[1143]: time="2024-04-12T18:44:49.501525156Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 12 18:44:49.502084 env[1143]: time="2024-04-12T18:44:49.501853330Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2cda6a2e7d52578933d730c2cba7d780b7a1d39a4403292ec9e516da88898693 pid=2119 runtime=io.containerd.runc.v2
Apr 12 18:44:49.528408 systemd[1]: Started cri-containerd-2cda6a2e7d52578933d730c2cba7d780b7a1d39a4403292ec9e516da88898693.scope.
Apr 12 18:44:49.567603 env[1143]: time="2024-04-12T18:44:49.567530354Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-h4vcw,Uid:1038ff39-de7c-49de-815a-0137c49de7b1,Namespace:kube-system,Attempt:0,} returns sandbox id \"2cda6a2e7d52578933d730c2cba7d780b7a1d39a4403292ec9e516da88898693\""
Apr 12 18:44:49.572406 env[1143]: time="2024-04-12T18:44:49.572298052Z" level=info msg="CreateContainer within sandbox \"2cda6a2e7d52578933d730c2cba7d780b7a1d39a4403292ec9e516da88898693\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Apr 12 18:44:49.597422 env[1143]: time="2024-04-12T18:44:49.597338113Z" level=info msg="CreateContainer within sandbox \"2cda6a2e7d52578933d730c2cba7d780b7a1d39a4403292ec9e516da88898693\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"91917fd2f4d4d3b202ac721dfca5cc683066eee9576c3bde29ca9af43050bae8\""
Apr 12 18:44:49.600722 env[1143]: time="2024-04-12T18:44:49.598615320Z" level=info msg="StartContainer for \"91917fd2f4d4d3b202ac721dfca5cc683066eee9576c3bde29ca9af43050bae8\""
Apr 12 18:44:49.627144 systemd[1]: Started cri-containerd-91917fd2f4d4d3b202ac721dfca5cc683066eee9576c3bde29ca9af43050bae8.scope.
Apr 12 18:44:49.680749 env[1143]: time="2024-04-12T18:44:49.680428733Z" level=info msg="StartContainer for \"91917fd2f4d4d3b202ac721dfca5cc683066eee9576c3bde29ca9af43050bae8\" returns successfully"
Apr 12 18:44:49.775599 kubelet[2038]: E0412 18:44:49.775444 2038 secret.go:194] Couldn't get secret kube-system/cilium-clustermesh: failed to sync secret cache: timed out waiting for the condition
Apr 12 18:44:49.775599 kubelet[2038]: E0412 18:44:49.775566 2038 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b98aae52-9598-4a7b-b7b5-ea860ea0f989-clustermesh-secrets podName:b98aae52-9598-4a7b-b7b5-ea860ea0f989 nodeName:}" failed. No retries permitted until 2024-04-12 18:44:50.275540244 +0000 UTC m=+14.418930675 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "clustermesh-secrets" (UniqueName: "kubernetes.io/secret/b98aae52-9598-4a7b-b7b5-ea860ea0f989-clustermesh-secrets") pod "cilium-642qv" (UID: "b98aae52-9598-4a7b-b7b5-ea860ea0f989") : failed to sync secret cache: timed out waiting for the condition
Apr 12 18:44:49.776274 kubelet[2038]: E0412 18:44:49.775612 2038 projected.go:267] Couldn't get secret kube-system/hubble-server-certs: failed to sync secret cache: timed out waiting for the condition
Apr 12 18:44:49.776274 kubelet[2038]: E0412 18:44:49.775629 2038 projected.go:198] Error preparing data for projected volume hubble-tls for pod kube-system/cilium-642qv: failed to sync secret cache: timed out waiting for the condition
Apr 12 18:44:49.776274 kubelet[2038]: E0412 18:44:49.775668 2038 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b98aae52-9598-4a7b-b7b5-ea860ea0f989-hubble-tls podName:b98aae52-9598-4a7b-b7b5-ea860ea0f989 nodeName:}" failed. No retries permitted until 2024-04-12 18:44:50.275655483 +0000 UTC m=+14.419045906 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "hubble-tls" (UniqueName: "kubernetes.io/projected/b98aae52-9598-4a7b-b7b5-ea860ea0f989-hubble-tls") pod "cilium-642qv" (UID: "b98aae52-9598-4a7b-b7b5-ea860ea0f989") : failed to sync secret cache: timed out waiting for the condition
Apr 12 18:44:49.776274 kubelet[2038]: E0412 18:44:49.775830 2038 configmap.go:199] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition
Apr 12 18:44:49.776274 kubelet[2038]: E0412 18:44:49.775881 2038 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b98aae52-9598-4a7b-b7b5-ea860ea0f989-cilium-config-path podName:b98aae52-9598-4a7b-b7b5-ea860ea0f989 nodeName:}" failed. No retries permitted until 2024-04-12 18:44:50.275866865 +0000 UTC m=+14.419257284 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/b98aae52-9598-4a7b-b7b5-ea860ea0f989-cilium-config-path") pod "cilium-642qv" (UID: "b98aae52-9598-4a7b-b7b5-ea860ea0f989") : failed to sync configmap cache: timed out waiting for the condition
Apr 12 18:44:49.876340 kubelet[2038]: E0412 18:44:49.876284 2038 configmap.go:199] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition
Apr 12 18:44:49.876582 kubelet[2038]: E0412 18:44:49.876444 2038 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1a09c3dc-0386-43a8-81b8-b82ea89ef32b-cilium-config-path podName:1a09c3dc-0386-43a8-81b8-b82ea89ef32b nodeName:}" failed. No retries permitted until 2024-04-12 18:44:50.376374104 +0000 UTC m=+14.519764530 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/1a09c3dc-0386-43a8-81b8-b82ea89ef32b-cilium-config-path") pod "cilium-operator-574c4bb98d-vq62v" (UID: "1a09c3dc-0386-43a8-81b8-b82ea89ef32b") : failed to sync configmap cache: timed out waiting for the condition
Apr 12 18:44:50.321487 kubelet[2038]: I0412 18:44:50.321448 2038 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-h4vcw" podStartSLOduration=2.321374672 podCreationTimestamp="2024-04-12 18:44:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-04-12 18:44:50.321097465 +0000 UTC m=+14.464487886" watchObservedRunningTime="2024-04-12 18:44:50.321374672 +0000 UTC m=+14.464765099"
Apr 12 18:44:50.495631 env[1143]: time="2024-04-12T18:44:50.495541088Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-642qv,Uid:b98aae52-9598-4a7b-b7b5-ea860ea0f989,Namespace:kube-system,Attempt:0,}"
Apr 12 18:44:50.525133 env[1143]: time="2024-04-12T18:44:50.525006436Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 12 18:44:50.525393 env[1143]: time="2024-04-12T18:44:50.525080287Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 12 18:44:50.525393 env[1143]: time="2024-04-12T18:44:50.525098623Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 12 18:44:50.525393 env[1143]: time="2024-04-12T18:44:50.525333151Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/9e8f21314f87f04eabe82fedf255894c5e3d42a276355a983ae38834574873bc pid=2318 runtime=io.containerd.runc.v2
Apr 12 18:44:50.550818 systemd[1]: Started cri-containerd-9e8f21314f87f04eabe82fedf255894c5e3d42a276355a983ae38834574873bc.scope.
Apr 12 18:44:50.580786 env[1143]: time="2024-04-12T18:44:50.580727933Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-574c4bb98d-vq62v,Uid:1a09c3dc-0386-43a8-81b8-b82ea89ef32b,Namespace:kube-system,Attempt:0,}"
Apr 12 18:44:50.593287 env[1143]: time="2024-04-12T18:44:50.593224563Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-642qv,Uid:b98aae52-9598-4a7b-b7b5-ea860ea0f989,Namespace:kube-system,Attempt:0,} returns sandbox id \"9e8f21314f87f04eabe82fedf255894c5e3d42a276355a983ae38834574873bc\""
Apr 12 18:44:50.599781 kubelet[2038]: E0412 18:44:50.599706 2038 gcpcredential.go:74] while reading 'google-dockercfg-url' metadata: http status code: 404 while fetching url http://metadata.google.internal./computeMetadata/v1/instance/attributes/google-dockercfg-url
Apr 12 18:44:50.600407 env[1143]: time="2024-04-12T18:44:50.600358157Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Apr 12 18:44:50.614346 env[1143]: time="2024-04-12T18:44:50.614203656Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 12 18:44:50.614346 env[1143]: time="2024-04-12T18:44:50.614269539Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 12 18:44:50.614346 env[1143]: time="2024-04-12T18:44:50.614289101Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 12 18:44:50.615125 env[1143]: time="2024-04-12T18:44:50.615026930Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/dcd452c8be60878894d827bd9e26f6d4854b5b358c0b2a658b2e6cb29a507222 pid=2360 runtime=io.containerd.runc.v2
Apr 12 18:44:50.636305 systemd[1]: Started cri-containerd-dcd452c8be60878894d827bd9e26f6d4854b5b358c0b2a658b2e6cb29a507222.scope.
Apr 12 18:44:50.715808 env[1143]: time="2024-04-12T18:44:50.715299762Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-574c4bb98d-vq62v,Uid:1a09c3dc-0386-43a8-81b8-b82ea89ef32b,Namespace:kube-system,Attempt:0,} returns sandbox id \"dcd452c8be60878894d827bd9e26f6d4854b5b358c0b2a658b2e6cb29a507222\""
Apr 12 18:44:57.151415 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount270760666.mount: Deactivated successfully.
Apr 12 18:45:00.571030 env[1143]: time="2024-04-12T18:45:00.570949693Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Apr 12 18:45:00.574250 env[1143]: time="2024-04-12T18:45:00.574201710Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Apr 12 18:45:00.577038 env[1143]: time="2024-04-12T18:45:00.576981674Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Apr 12 18:45:00.578609 env[1143]: time="2024-04-12T18:45:00.578528410Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
Apr 12 18:45:00.584108 env[1143]: time="2024-04-12T18:45:00.584038075Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Apr 12 18:45:00.587372 env[1143]: time="2024-04-12T18:45:00.587320975Z" level=info msg="CreateContainer within sandbox \"9e8f21314f87f04eabe82fedf255894c5e3d42a276355a983ae38834574873bc\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Apr 12 18:45:00.610328 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1055490491.mount: Deactivated successfully.
Apr 12 18:45:00.621361 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2400092354.mount: Deactivated successfully.
Apr 12 18:45:00.626267 env[1143]: time="2024-04-12T18:45:00.626202394Z" level=info msg="CreateContainer within sandbox \"9e8f21314f87f04eabe82fedf255894c5e3d42a276355a983ae38834574873bc\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"e9371cd403728925ece542725f8ba8b5cbac4d57f69343ccc1ba11677896e9af\""
Apr 12 18:45:00.627361 env[1143]: time="2024-04-12T18:45:00.627300551Z" level=info msg="StartContainer for \"e9371cd403728925ece542725f8ba8b5cbac4d57f69343ccc1ba11677896e9af\""
Apr 12 18:45:00.658255 systemd[1]: Started cri-containerd-e9371cd403728925ece542725f8ba8b5cbac4d57f69343ccc1ba11677896e9af.scope.
Apr 12 18:45:00.702279 env[1143]: time="2024-04-12T18:45:00.702220022Z" level=info msg="StartContainer for \"e9371cd403728925ece542725f8ba8b5cbac4d57f69343ccc1ba11677896e9af\" returns successfully"
Apr 12 18:45:00.719329 systemd[1]: cri-containerd-e9371cd403728925ece542725f8ba8b5cbac4d57f69343ccc1ba11677896e9af.scope: Deactivated successfully.
Apr 12 18:45:01.605833 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e9371cd403728925ece542725f8ba8b5cbac4d57f69343ccc1ba11677896e9af-rootfs.mount: Deactivated successfully.
Apr 12 18:45:02.543802 env[1143]: time="2024-04-12T18:45:02.543732623Z" level=info msg="shim disconnected" id=e9371cd403728925ece542725f8ba8b5cbac4d57f69343ccc1ba11677896e9af
Apr 12 18:45:02.543802 env[1143]: time="2024-04-12T18:45:02.543802878Z" level=warning msg="cleaning up after shim disconnected" id=e9371cd403728925ece542725f8ba8b5cbac4d57f69343ccc1ba11677896e9af namespace=k8s.io
Apr 12 18:45:02.544542 env[1143]: time="2024-04-12T18:45:02.543817737Z" level=info msg="cleaning up dead shim"
Apr 12 18:45:02.556040 env[1143]: time="2024-04-12T18:45:02.555967805Z" level=warning msg="cleanup warnings time=\"2024-04-12T18:45:02Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2445 runtime=io.containerd.runc.v2\n"
Apr 12 18:45:03.241192 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3836857593.mount: Deactivated successfully.
Apr 12 18:45:03.366849 env[1143]: time="2024-04-12T18:45:03.366796091Z" level=info msg="CreateContainer within sandbox \"9e8f21314f87f04eabe82fedf255894c5e3d42a276355a983ae38834574873bc\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Apr 12 18:45:03.401439 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2893653316.mount: Deactivated successfully.
Apr 12 18:45:03.409059 env[1143]: time="2024-04-12T18:45:03.408822808Z" level=info msg="CreateContainer within sandbox \"9e8f21314f87f04eabe82fedf255894c5e3d42a276355a983ae38834574873bc\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"841779c27005550e6d4ce95125df3584a0f2411cfd5b7e778c3558c2f24bc8d1\""
Apr 12 18:45:03.413961 env[1143]: time="2024-04-12T18:45:03.413875137Z" level=info msg="StartContainer for \"841779c27005550e6d4ce95125df3584a0f2411cfd5b7e778c3558c2f24bc8d1\""
Apr 12 18:45:03.456339 systemd[1]: Started cri-containerd-841779c27005550e6d4ce95125df3584a0f2411cfd5b7e778c3558c2f24bc8d1.scope.
Apr 12 18:45:03.529589 env[1143]: time="2024-04-12T18:45:03.528808825Z" level=info msg="StartContainer for \"841779c27005550e6d4ce95125df3584a0f2411cfd5b7e778c3558c2f24bc8d1\" returns successfully"
Apr 12 18:45:03.542415 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Apr 12 18:45:03.542880 systemd[1]: Stopped systemd-sysctl.service.
Apr 12 18:45:03.544164 systemd[1]: Stopping systemd-sysctl.service...
Apr 12 18:45:03.550202 systemd[1]: Starting systemd-sysctl.service...
Apr 12 18:45:03.557202 systemd[1]: cri-containerd-841779c27005550e6d4ce95125df3584a0f2411cfd5b7e778c3558c2f24bc8d1.scope: Deactivated successfully.
Apr 12 18:45:03.565383 systemd[1]: Finished systemd-sysctl.service.
Apr 12 18:45:03.631864 env[1143]: time="2024-04-12T18:45:03.631797598Z" level=info msg="shim disconnected" id=841779c27005550e6d4ce95125df3584a0f2411cfd5b7e778c3558c2f24bc8d1
Apr 12 18:45:03.631864 env[1143]: time="2024-04-12T18:45:03.631861636Z" level=warning msg="cleaning up after shim disconnected" id=841779c27005550e6d4ce95125df3584a0f2411cfd5b7e778c3558c2f24bc8d1 namespace=k8s.io
Apr 12 18:45:03.632603 env[1143]: time="2024-04-12T18:45:03.631876443Z" level=info msg="cleaning up dead shim"
Apr 12 18:45:03.657260 env[1143]: time="2024-04-12T18:45:03.657206652Z" level=warning msg="cleanup warnings time=\"2024-04-12T18:45:03Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2512 runtime=io.containerd.runc.v2\n"
Apr 12 18:45:04.269338 env[1143]: time="2024-04-12T18:45:04.269276808Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Apr 12 18:45:04.272488 env[1143]: time="2024-04-12T18:45:04.272408091Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Apr 12 18:45:04.275054 env[1143]: time="2024-04-12T18:45:04.275001647Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Apr 12 18:45:04.275973 env[1143]: time="2024-04-12T18:45:04.275915021Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Apr 12 18:45:04.281405 env[1143]: time="2024-04-12T18:45:04.281327172Z" level=info msg="CreateContainer within sandbox \"dcd452c8be60878894d827bd9e26f6d4854b5b358c0b2a658b2e6cb29a507222\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Apr 12 18:45:04.309129 env[1143]: time="2024-04-12T18:45:04.309054852Z" level=info msg="CreateContainer within sandbox \"dcd452c8be60878894d827bd9e26f6d4854b5b358c0b2a658b2e6cb29a507222\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"f78069da04f436e488d285d8fffcc53a8fef38a40d755d838243516513e10a9e\""
Apr 12 18:45:04.312379 env[1143]: time="2024-04-12T18:45:04.312220889Z" level=info msg="StartContainer for \"f78069da04f436e488d285d8fffcc53a8fef38a40d755d838243516513e10a9e\""
Apr 12 18:45:04.347378 systemd[1]: Started cri-containerd-f78069da04f436e488d285d8fffcc53a8fef38a40d755d838243516513e10a9e.scope.
Apr 12 18:45:04.363237 env[1143]: time="2024-04-12T18:45:04.363145874Z" level=info msg="CreateContainer within sandbox \"9e8f21314f87f04eabe82fedf255894c5e3d42a276355a983ae38834574873bc\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Apr 12 18:45:04.413476 env[1143]: time="2024-04-12T18:45:04.413394913Z" level=info msg="CreateContainer within sandbox \"9e8f21314f87f04eabe82fedf255894c5e3d42a276355a983ae38834574873bc\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"7c4b4f5379bb7efb91a4a0723afddcdd794494a241f5d9d081632aa62ee41e11\""
Apr 12 18:45:04.417712 env[1143]: time="2024-04-12T18:45:04.417544305Z" level=info msg="StartContainer for \"f78069da04f436e488d285d8fffcc53a8fef38a40d755d838243516513e10a9e\" returns successfully"
Apr 12 18:45:04.422381 env[1143]: time="2024-04-12T18:45:04.422326787Z" level=info msg="StartContainer for \"7c4b4f5379bb7efb91a4a0723afddcdd794494a241f5d9d081632aa62ee41e11\""
Apr 12 18:45:04.452855 systemd[1]: Started cri-containerd-7c4b4f5379bb7efb91a4a0723afddcdd794494a241f5d9d081632aa62ee41e11.scope.
Apr 12 18:45:04.529923 env[1143]: time="2024-04-12T18:45:04.529770455Z" level=info msg="StartContainer for \"7c4b4f5379bb7efb91a4a0723afddcdd794494a241f5d9d081632aa62ee41e11\" returns successfully"
Apr 12 18:45:04.535916 systemd[1]: cri-containerd-7c4b4f5379bb7efb91a4a0723afddcdd794494a241f5d9d081632aa62ee41e11.scope: Deactivated successfully.
Apr 12 18:45:04.717185 env[1143]: time="2024-04-12T18:45:04.717098059Z" level=info msg="shim disconnected" id=7c4b4f5379bb7efb91a4a0723afddcdd794494a241f5d9d081632aa62ee41e11
Apr 12 18:45:04.717185 env[1143]: time="2024-04-12T18:45:04.717176919Z" level=warning msg="cleaning up after shim disconnected" id=7c4b4f5379bb7efb91a4a0723afddcdd794494a241f5d9d081632aa62ee41e11 namespace=k8s.io
Apr 12 18:45:04.717185 env[1143]: time="2024-04-12T18:45:04.717192003Z" level=info msg="cleaning up dead shim"
Apr 12 18:45:04.740530 env[1143]: time="2024-04-12T18:45:04.740464781Z" level=warning msg="cleanup warnings time=\"2024-04-12T18:45:04Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2609 runtime=io.containerd.runc.v2\n"
Apr 12 18:45:05.368763 env[1143]: time="2024-04-12T18:45:05.368708844Z" level=info msg="CreateContainer within sandbox \"9e8f21314f87f04eabe82fedf255894c5e3d42a276355a983ae38834574873bc\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Apr 12 18:45:05.400176 env[1143]: time="2024-04-12T18:45:05.400104312Z" level=info msg="CreateContainer within sandbox \"9e8f21314f87f04eabe82fedf255894c5e3d42a276355a983ae38834574873bc\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"725a01f3a94330b3f12e183d154f2d617296f65f409372a9d41351ce06782a02\""
Apr 12 18:45:05.410986 env[1143]: time="2024-04-12T18:45:05.405503566Z" level=info msg="StartContainer for \"725a01f3a94330b3f12e183d154f2d617296f65f409372a9d41351ce06782a02\""
Apr 12 18:45:05.465233 systemd[1]: Started cri-containerd-725a01f3a94330b3f12e183d154f2d617296f65f409372a9d41351ce06782a02.scope.
Apr 12 18:45:05.583386 env[1143]: time="2024-04-12T18:45:05.583313123Z" level=info msg="StartContainer for \"725a01f3a94330b3f12e183d154f2d617296f65f409372a9d41351ce06782a02\" returns successfully"
Apr 12 18:45:05.585833 systemd[1]: cri-containerd-725a01f3a94330b3f12e183d154f2d617296f65f409372a9d41351ce06782a02.scope: Deactivated successfully.
Apr 12 18:45:05.628937 env[1143]: time="2024-04-12T18:45:05.628782431Z" level=info msg="shim disconnected" id=725a01f3a94330b3f12e183d154f2d617296f65f409372a9d41351ce06782a02
Apr 12 18:45:05.629772 env[1143]: time="2024-04-12T18:45:05.629733122Z" level=warning msg="cleaning up after shim disconnected" id=725a01f3a94330b3f12e183d154f2d617296f65f409372a9d41351ce06782a02 namespace=k8s.io
Apr 12 18:45:05.630005 env[1143]: time="2024-04-12T18:45:05.629979962Z" level=info msg="cleaning up dead shim"
Apr 12 18:45:05.675652 env[1143]: time="2024-04-12T18:45:05.675595130Z" level=warning msg="cleanup warnings time=\"2024-04-12T18:45:05Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2665 runtime=io.containerd.runc.v2\n"
Apr 12 18:45:06.224002 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-725a01f3a94330b3f12e183d154f2d617296f65f409372a9d41351ce06782a02-rootfs.mount: Deactivated successfully.
Apr 12 18:45:06.388929 env[1143]: time="2024-04-12T18:45:06.385934919Z" level=info msg="CreateContainer within sandbox \"9e8f21314f87f04eabe82fedf255894c5e3d42a276355a983ae38834574873bc\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Apr 12 18:45:06.434809 kubelet[2038]: I0412 18:45:06.434397 2038 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-574c4bb98d-vq62v" podStartSLOduration=4.875658248 podCreationTimestamp="2024-04-12 18:44:48 +0000 UTC" firstStartedPulling="2024-04-12 18:44:50.717749672 +0000 UTC m=+14.861140092" lastFinishedPulling="2024-04-12 18:45:04.276434968 +0000 UTC m=+28.419825392" observedRunningTime="2024-04-12 18:45:05.552052554 +0000 UTC m=+29.695442983" watchObservedRunningTime="2024-04-12 18:45:06.434343548 +0000 UTC m=+30.577733978"
Apr 12 18:45:06.439125 env[1143]: time="2024-04-12T18:45:06.437792739Z" level=info msg="CreateContainer within sandbox \"9e8f21314f87f04eabe82fedf255894c5e3d42a276355a983ae38834574873bc\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"1553b720da865bb4121064486a1a8c8f7fb77cf7cae10fb4420b9207ae82655f\""
Apr 12 18:45:06.440185 env[1143]: time="2024-04-12T18:45:06.440142603Z" level=info msg="StartContainer for \"1553b720da865bb4121064486a1a8c8f7fb77cf7cae10fb4420b9207ae82655f\""
Apr 12 18:45:06.478303 systemd[1]: Started cri-containerd-1553b720da865bb4121064486a1a8c8f7fb77cf7cae10fb4420b9207ae82655f.scope.
Apr 12 18:45:06.541522 env[1143]: time="2024-04-12T18:45:06.541463950Z" level=info msg="StartContainer for \"1553b720da865bb4121064486a1a8c8f7fb77cf7cae10fb4420b9207ae82655f\" returns successfully"
Apr 12 18:45:06.719836 kubelet[2038]: I0412 18:45:06.719800 2038 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
Apr 12 18:45:06.802845 kubelet[2038]: I0412 18:45:06.802703 2038 topology_manager.go:212] "Topology Admit Handler"
Apr 12 18:45:06.810068 kubelet[2038]: I0412 18:45:06.810035 2038 topology_manager.go:212] "Topology Admit Handler"
Apr 12 18:45:06.810845 systemd[1]: Created slice kubepods-burstable-pod3ef32fa5_72eb_4b3b_8acc_c83e2f4d01fc.slice.
Apr 12 18:45:06.821023 systemd[1]: Created slice kubepods-burstable-pod7df858c5_dbd4_4e38_ac46_5088a8cf1356.slice.
Apr 12 18:45:06.833106 kubelet[2038]: W0412 18:45:06.833053 2038 reflector.go:533] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:ci-3510-3-3-5ab81259ac89653c3ce9.c.flatcar-212911.internal" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510-3-3-5ab81259ac89653c3ce9.c.flatcar-212911.internal' and this object Apr 12 18:45:06.833106 kubelet[2038]: E0412 18:45:06.833104 2038 reflector.go:148] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:ci-3510-3-3-5ab81259ac89653c3ce9.c.flatcar-212911.internal" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510-3-3-5ab81259ac89653c3ce9.c.flatcar-212911.internal' and this object Apr 12 18:45:06.912438 kubelet[2038]: I0412 18:45:06.912396 2038 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7df858c5-dbd4-4e38-ac46-5088a8cf1356-config-volume\") pod \"coredns-5d78c9869d-4xmn5\" (UID: \"7df858c5-dbd4-4e38-ac46-5088a8cf1356\") " pod="kube-system/coredns-5d78c9869d-4xmn5" Apr 12 18:45:06.912776 kubelet[2038]: I0412 18:45:06.912755 2038 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3ef32fa5-72eb-4b3b-8acc-c83e2f4d01fc-config-volume\") pod \"coredns-5d78c9869d-q54qj\" (UID: \"3ef32fa5-72eb-4b3b-8acc-c83e2f4d01fc\") " pod="kube-system/coredns-5d78c9869d-q54qj" Apr 12 18:45:06.913034 kubelet[2038]: I0412 18:45:06.913012 2038 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dbfd8\" (UniqueName: 
\"kubernetes.io/projected/3ef32fa5-72eb-4b3b-8acc-c83e2f4d01fc-kube-api-access-dbfd8\") pod \"coredns-5d78c9869d-q54qj\" (UID: \"3ef32fa5-72eb-4b3b-8acc-c83e2f4d01fc\") " pod="kube-system/coredns-5d78c9869d-q54qj" Apr 12 18:45:06.913280 kubelet[2038]: I0412 18:45:06.913261 2038 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9hkt\" (UniqueName: \"kubernetes.io/projected/7df858c5-dbd4-4e38-ac46-5088a8cf1356-kube-api-access-k9hkt\") pod \"coredns-5d78c9869d-4xmn5\" (UID: \"7df858c5-dbd4-4e38-ac46-5088a8cf1356\") " pod="kube-system/coredns-5d78c9869d-4xmn5" Apr 12 18:45:08.016409 env[1143]: time="2024-04-12T18:45:08.016349444Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5d78c9869d-q54qj,Uid:3ef32fa5-72eb-4b3b-8acc-c83e2f4d01fc,Namespace:kube-system,Attempt:0,}" Apr 12 18:45:08.026997 env[1143]: time="2024-04-12T18:45:08.026931400Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5d78c9869d-4xmn5,Uid:7df858c5-dbd4-4e38-ac46-5088a8cf1356,Namespace:kube-system,Attempt:0,}" Apr 12 18:45:08.901360 systemd-networkd[1023]: cilium_host: Link UP Apr 12 18:45:08.910274 systemd-networkd[1023]: cilium_net: Link UP Apr 12 18:45:08.913411 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready Apr 12 18:45:08.913566 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Apr 12 18:45:08.913111 systemd-networkd[1023]: cilium_net: Gained carrier Apr 12 18:45:08.919027 systemd-networkd[1023]: cilium_host: Gained carrier Apr 12 18:45:08.929178 systemd-networkd[1023]: cilium_host: Gained IPv6LL Apr 12 18:45:09.086426 systemd-networkd[1023]: cilium_vxlan: Link UP Apr 12 18:45:09.086437 systemd-networkd[1023]: cilium_vxlan: Gained carrier Apr 12 18:45:09.360980 kernel: NET: Registered PF_ALG protocol family Apr 12 18:45:09.715572 systemd-networkd[1023]: cilium_net: Gained IPv6LL Apr 12 18:45:10.290063 systemd-networkd[1023]: cilium_vxlan: Gained IPv6LL 
Apr 12 18:45:10.317793 systemd-networkd[1023]: lxc_health: Link UP Apr 12 18:45:10.352052 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Apr 12 18:45:10.352855 systemd-networkd[1023]: lxc_health: Gained carrier Apr 12 18:45:10.525784 kubelet[2038]: I0412 18:45:10.525718 2038 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-642qv" podStartSLOduration=12.541670516 podCreationTimestamp="2024-04-12 18:44:48 +0000 UTC" firstStartedPulling="2024-04-12 18:44:50.595163406 +0000 UTC m=+14.738553818" lastFinishedPulling="2024-04-12 18:45:00.579158617 +0000 UTC m=+24.722549021" observedRunningTime="2024-04-12 18:45:07.404048708 +0000 UTC m=+31.547439136" watchObservedRunningTime="2024-04-12 18:45:10.525665719 +0000 UTC m=+34.669056149" Apr 12 18:45:10.622057 systemd-networkd[1023]: lxc6ba9a08636b1: Link UP Apr 12 18:45:10.635027 kernel: eth0: renamed from tmp3ceb9 Apr 12 18:45:10.655985 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc6ba9a08636b1: link becomes ready Apr 12 18:45:10.657513 systemd-networkd[1023]: lxc6ba9a08636b1: Gained carrier Apr 12 18:45:10.661547 systemd-networkd[1023]: lxc57251ecd9ade: Link UP Apr 12 18:45:10.675805 kernel: eth0: renamed from tmp22374 Apr 12 18:45:10.697009 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc57251ecd9ade: link becomes ready Apr 12 18:45:10.703156 systemd-networkd[1023]: lxc57251ecd9ade: Gained carrier Apr 12 18:45:12.082201 systemd-networkd[1023]: lxc57251ecd9ade: Gained IPv6LL Apr 12 18:45:12.338724 systemd-networkd[1023]: lxc_health: Gained IPv6LL Apr 12 18:45:12.658748 systemd-networkd[1023]: lxc6ba9a08636b1: Gained IPv6LL Apr 12 18:45:15.861533 env[1143]: time="2024-04-12T18:45:15.861433013Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 12 18:45:15.862204 env[1143]: time="2024-04-12T18:45:15.862156903Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 12 18:45:15.862372 env[1143]: time="2024-04-12T18:45:15.862337891Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 12 18:45:15.862726 env[1143]: time="2024-04-12T18:45:15.862674898Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3ceb9efb4e68fdf6159d0a28641317cc3579af5fa665d88fdb1f8860bcd65764 pid=3207 runtime=io.containerd.runc.v2 Apr 12 18:45:15.886496 env[1143]: time="2024-04-12T18:45:15.886364842Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 12 18:45:15.886687 env[1143]: time="2024-04-12T18:45:15.886501831Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 12 18:45:15.886687 env[1143]: time="2024-04-12T18:45:15.886541914Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 12 18:45:15.886840 env[1143]: time="2024-04-12T18:45:15.886768055Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/223748a1880c0f86563662896847492ecabe12ca3f210b921a0a0045f15af82a pid=3212 runtime=io.containerd.runc.v2 Apr 12 18:45:15.926116 systemd[1]: Started cri-containerd-3ceb9efb4e68fdf6159d0a28641317cc3579af5fa665d88fdb1f8860bcd65764.scope. Apr 12 18:45:15.946686 systemd[1]: Started cri-containerd-223748a1880c0f86563662896847492ecabe12ca3f210b921a0a0045f15af82a.scope. 
Apr 12 18:45:16.040943 env[1143]: time="2024-04-12T18:45:16.040832900Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5d78c9869d-q54qj,Uid:3ef32fa5-72eb-4b3b-8acc-c83e2f4d01fc,Namespace:kube-system,Attempt:0,} returns sandbox id \"3ceb9efb4e68fdf6159d0a28641317cc3579af5fa665d88fdb1f8860bcd65764\"" Apr 12 18:45:16.057223 env[1143]: time="2024-04-12T18:45:16.057157771Z" level=info msg="CreateContainer within sandbox \"3ceb9efb4e68fdf6159d0a28641317cc3579af5fa665d88fdb1f8860bcd65764\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 12 18:45:16.093963 env[1143]: time="2024-04-12T18:45:16.093832706Z" level=info msg="CreateContainer within sandbox \"3ceb9efb4e68fdf6159d0a28641317cc3579af5fa665d88fdb1f8860bcd65764\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8a6b22d96485a07507d0d8cf5cf2428696c919906a40671e923e1f2b6fc73162\"" Apr 12 18:45:16.095057 env[1143]: time="2024-04-12T18:45:16.095009978Z" level=info msg="StartContainer for \"8a6b22d96485a07507d0d8cf5cf2428696c919906a40671e923e1f2b6fc73162\"" Apr 12 18:45:16.141308 systemd[1]: Started cri-containerd-8a6b22d96485a07507d0d8cf5cf2428696c919906a40671e923e1f2b6fc73162.scope. 
Apr 12 18:45:16.150252 env[1143]: time="2024-04-12T18:45:16.150195589Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5d78c9869d-4xmn5,Uid:7df858c5-dbd4-4e38-ac46-5088a8cf1356,Namespace:kube-system,Attempt:0,} returns sandbox id \"223748a1880c0f86563662896847492ecabe12ca3f210b921a0a0045f15af82a\"" Apr 12 18:45:16.156019 env[1143]: time="2024-04-12T18:45:16.155394554Z" level=info msg="CreateContainer within sandbox \"223748a1880c0f86563662896847492ecabe12ca3f210b921a0a0045f15af82a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 12 18:45:16.189125 env[1143]: time="2024-04-12T18:45:16.189047615Z" level=info msg="CreateContainer within sandbox \"223748a1880c0f86563662896847492ecabe12ca3f210b921a0a0045f15af82a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"58b79a49ffb3066a70ec9fa3752f038b22f031601d773c193e8e4eef55adcfbc\"" Apr 12 18:45:16.190345 env[1143]: time="2024-04-12T18:45:16.190283959Z" level=info msg="StartContainer for \"58b79a49ffb3066a70ec9fa3752f038b22f031601d773c193e8e4eef55adcfbc\"" Apr 12 18:45:16.226251 env[1143]: time="2024-04-12T18:45:16.226185413Z" level=info msg="StartContainer for \"8a6b22d96485a07507d0d8cf5cf2428696c919906a40671e923e1f2b6fc73162\" returns successfully" Apr 12 18:45:16.238873 systemd[1]: Started cri-containerd-58b79a49ffb3066a70ec9fa3752f038b22f031601d773c193e8e4eef55adcfbc.scope. 
Apr 12 18:45:16.327876 env[1143]: time="2024-04-12T18:45:16.327739454Z" level=info msg="StartContainer for \"58b79a49ffb3066a70ec9fa3752f038b22f031601d773c193e8e4eef55adcfbc\" returns successfully" Apr 12 18:45:16.440932 kubelet[2038]: I0412 18:45:16.440772 2038 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5d78c9869d-4xmn5" podStartSLOduration=28.440701074 podCreationTimestamp="2024-04-12 18:44:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-04-12 18:45:16.435647513 +0000 UTC m=+40.579037951" watchObservedRunningTime="2024-04-12 18:45:16.440701074 +0000 UTC m=+40.584091503" Apr 12 18:45:16.465656 kubelet[2038]: I0412 18:45:16.465616 2038 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5d78c9869d-q54qj" podStartSLOduration=28.465478877 podCreationTimestamp="2024-04-12 18:44:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-04-12 18:45:16.464471033 +0000 UTC m=+40.607861466" watchObservedRunningTime="2024-04-12 18:45:16.465478877 +0000 UTC m=+40.608869296" Apr 12 18:45:16.873964 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3296045603.mount: Deactivated successfully. Apr 12 18:45:38.263212 systemd[1]: Started sshd@5-10.128.0.15:22-139.178.89.65:52786.service. Apr 12 18:45:38.610411 sshd[3365]: Accepted publickey for core from 139.178.89.65 port 52786 ssh2: RSA SHA256:XKFMBFadJXSMDH/AGu0Wp1o8GIJqXXW9iA48JbQvGEs Apr 12 18:45:38.612592 sshd[3365]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 18:45:38.620210 systemd[1]: Started session-6.scope. Apr 12 18:45:38.620943 systemd-logind[1124]: New session 6 of user core. 
Apr 12 18:45:38.947131 sshd[3365]: pam_unix(sshd:session): session closed for user core Apr 12 18:45:38.952318 systemd[1]: sshd@5-10.128.0.15:22-139.178.89.65:52786.service: Deactivated successfully. Apr 12 18:45:38.953623 systemd[1]: session-6.scope: Deactivated successfully. Apr 12 18:45:38.954781 systemd-logind[1124]: Session 6 logged out. Waiting for processes to exit. Apr 12 18:45:38.956082 systemd-logind[1124]: Removed session 6. Apr 12 18:45:44.005155 systemd[1]: Started sshd@6-10.128.0.15:22-139.178.89.65:52794.service. Apr 12 18:45:44.170003 systemd[1]: Started sshd@7-10.128.0.15:22-182.52.90.208:50604.service. Apr 12 18:45:44.353609 sshd[3379]: Accepted publickey for core from 139.178.89.65 port 52794 ssh2: RSA SHA256:XKFMBFadJXSMDH/AGu0Wp1o8GIJqXXW9iA48JbQvGEs Apr 12 18:45:44.355655 sshd[3379]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 18:45:44.363280 systemd[1]: Started session-7.scope. Apr 12 18:45:44.364091 systemd-logind[1124]: New session 7 of user core. Apr 12 18:45:44.675203 sshd[3379]: pam_unix(sshd:session): session closed for user core Apr 12 18:45:44.679936 systemd-logind[1124]: Session 7 logged out. Waiting for processes to exit. Apr 12 18:45:44.680428 systemd[1]: sshd@6-10.128.0.15:22-139.178.89.65:52794.service: Deactivated successfully. Apr 12 18:45:44.681560 systemd[1]: session-7.scope: Deactivated successfully. Apr 12 18:45:44.682772 systemd-logind[1124]: Removed session 7. Apr 12 18:45:44.905601 sshd[3382]: kex_exchange_identification: banner line contains invalid characters Apr 12 18:45:44.905601 sshd[3382]: banner exchange: Connection from 182.52.90.208 port 50604: invalid format Apr 12 18:45:44.906481 systemd[1]: sshd@7-10.128.0.15:22-182.52.90.208:50604.service: Deactivated successfully. Apr 12 18:45:49.731435 systemd[1]: Started sshd@8-10.128.0.15:22-139.178.89.65:46278.service. 
Apr 12 18:45:50.078335 sshd[3395]: Accepted publickey for core from 139.178.89.65 port 46278 ssh2: RSA SHA256:XKFMBFadJXSMDH/AGu0Wp1o8GIJqXXW9iA48JbQvGEs Apr 12 18:45:50.080124 sshd[3395]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 18:45:50.087237 systemd[1]: Started session-8.scope. Apr 12 18:45:50.087990 systemd-logind[1124]: New session 8 of user core. Apr 12 18:45:50.408699 sshd[3395]: pam_unix(sshd:session): session closed for user core Apr 12 18:45:50.413423 systemd[1]: sshd@8-10.128.0.15:22-139.178.89.65:46278.service: Deactivated successfully. Apr 12 18:45:50.414577 systemd[1]: session-8.scope: Deactivated successfully. Apr 12 18:45:50.415654 systemd-logind[1124]: Session 8 logged out. Waiting for processes to exit. Apr 12 18:45:50.417087 systemd-logind[1124]: Removed session 8. Apr 12 18:45:51.545046 systemd[1]: Started sshd@9-10.128.0.15:22-182.52.90.208:58899.service. Apr 12 18:45:55.464586 systemd[1]: Started sshd@10-10.128.0.15:22-139.178.89.65:46286.service. Apr 12 18:45:55.813501 sshd[3413]: Accepted publickey for core from 139.178.89.65 port 46286 ssh2: RSA SHA256:XKFMBFadJXSMDH/AGu0Wp1o8GIJqXXW9iA48JbQvGEs Apr 12 18:45:55.815843 sshd[3413]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 18:45:55.825835 systemd-logind[1124]: New session 9 of user core. Apr 12 18:45:55.827204 systemd[1]: Started session-9.scope. Apr 12 18:45:56.139055 sshd[3413]: pam_unix(sshd:session): session closed for user core Apr 12 18:45:56.143836 systemd-logind[1124]: Session 9 logged out. Waiting for processes to exit. Apr 12 18:45:56.144214 systemd[1]: sshd@10-10.128.0.15:22-139.178.89.65:46286.service: Deactivated successfully. Apr 12 18:45:56.145400 systemd[1]: session-9.scope: Deactivated successfully. Apr 12 18:45:56.146663 systemd-logind[1124]: Removed session 9. Apr 12 18:45:56.194762 systemd[1]: Started sshd@11-10.128.0.15:22-139.178.89.65:46292.service. 
Apr 12 18:45:56.541554 sshd[3427]: Accepted publickey for core from 139.178.89.65 port 46292 ssh2: RSA SHA256:XKFMBFadJXSMDH/AGu0Wp1o8GIJqXXW9iA48JbQvGEs Apr 12 18:45:56.543110 sshd[3427]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 18:45:56.549992 systemd[1]: Started session-10.scope. Apr 12 18:45:56.550977 systemd-logind[1124]: New session 10 of user core. Apr 12 18:45:57.683343 sshd[3427]: pam_unix(sshd:session): session closed for user core Apr 12 18:45:57.688198 systemd[1]: sshd@11-10.128.0.15:22-139.178.89.65:46292.service: Deactivated successfully. Apr 12 18:45:57.689468 systemd[1]: session-10.scope: Deactivated successfully. Apr 12 18:45:57.690594 systemd-logind[1124]: Session 10 logged out. Waiting for processes to exit. Apr 12 18:45:57.692218 systemd-logind[1124]: Removed session 10. Apr 12 18:45:57.738414 systemd[1]: Started sshd@12-10.128.0.15:22-139.178.89.65:54604.service. Apr 12 18:45:57.874230 sshd[3410]: Received disconnect from 182.52.90.208 port 58899:11: Bye Bye [preauth] Apr 12 18:45:57.874788 sshd[3410]: Disconnected from 182.52.90.208 port 58899 [preauth] Apr 12 18:45:57.875448 systemd[1]: sshd@9-10.128.0.15:22-182.52.90.208:58899.service: Deactivated successfully. Apr 12 18:45:58.079668 sshd[3438]: Accepted publickey for core from 139.178.89.65 port 54604 ssh2: RSA SHA256:XKFMBFadJXSMDH/AGu0Wp1o8GIJqXXW9iA48JbQvGEs Apr 12 18:45:58.082303 sshd[3438]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 18:45:58.091198 systemd-logind[1124]: New session 11 of user core. Apr 12 18:45:58.093117 systemd[1]: Started session-11.scope. Apr 12 18:45:58.404531 sshd[3438]: pam_unix(sshd:session): session closed for user core Apr 12 18:45:58.408874 systemd[1]: sshd@12-10.128.0.15:22-139.178.89.65:54604.service: Deactivated successfully. Apr 12 18:45:58.410087 systemd[1]: session-11.scope: Deactivated successfully. Apr 12 18:45:58.410965 systemd-logind[1124]: Session 11 logged out. 
Waiting for processes to exit. Apr 12 18:45:58.412262 systemd-logind[1124]: Removed session 11. Apr 12 18:46:03.463446 systemd[1]: Started sshd@13-10.128.0.15:22-139.178.89.65:54606.service. Apr 12 18:46:03.813538 sshd[3451]: Accepted publickey for core from 139.178.89.65 port 54606 ssh2: RSA SHA256:XKFMBFadJXSMDH/AGu0Wp1o8GIJqXXW9iA48JbQvGEs Apr 12 18:46:03.815993 sshd[3451]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 18:46:03.823191 systemd-logind[1124]: New session 12 of user core. Apr 12 18:46:03.824096 systemd[1]: Started session-12.scope. Apr 12 18:46:04.142265 sshd[3451]: pam_unix(sshd:session): session closed for user core Apr 12 18:46:04.146946 systemd[1]: sshd@13-10.128.0.15:22-139.178.89.65:54606.service: Deactivated successfully. Apr 12 18:46:04.148187 systemd[1]: session-12.scope: Deactivated successfully. Apr 12 18:46:04.149359 systemd-logind[1124]: Session 12 logged out. Waiting for processes to exit. Apr 12 18:46:04.150764 systemd-logind[1124]: Removed session 12. Apr 12 18:46:09.197351 systemd[1]: Started sshd@14-10.128.0.15:22-139.178.89.65:49642.service. Apr 12 18:46:09.540475 sshd[3466]: Accepted publickey for core from 139.178.89.65 port 49642 ssh2: RSA SHA256:XKFMBFadJXSMDH/AGu0Wp1o8GIJqXXW9iA48JbQvGEs Apr 12 18:46:09.541650 sshd[3466]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 18:46:09.548645 systemd[1]: Started session-13.scope. Apr 12 18:46:09.549273 systemd-logind[1124]: New session 13 of user core. Apr 12 18:46:09.859650 sshd[3466]: pam_unix(sshd:session): session closed for user core Apr 12 18:46:09.864271 systemd[1]: sshd@14-10.128.0.15:22-139.178.89.65:49642.service: Deactivated successfully. Apr 12 18:46:09.865531 systemd[1]: session-13.scope: Deactivated successfully. Apr 12 18:46:09.866612 systemd-logind[1124]: Session 13 logged out. Waiting for processes to exit. Apr 12 18:46:09.868085 systemd-logind[1124]: Removed session 13. 
Apr 12 18:46:09.916836 systemd[1]: Started sshd@15-10.128.0.15:22-139.178.89.65:49650.service. Apr 12 18:46:10.264441 sshd[3478]: Accepted publickey for core from 139.178.89.65 port 49650 ssh2: RSA SHA256:XKFMBFadJXSMDH/AGu0Wp1o8GIJqXXW9iA48JbQvGEs Apr 12 18:46:10.266933 sshd[3478]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 18:46:10.273354 systemd-logind[1124]: New session 14 of user core. Apr 12 18:46:10.274167 systemd[1]: Started session-14.scope. Apr 12 18:46:10.693560 sshd[3478]: pam_unix(sshd:session): session closed for user core Apr 12 18:46:10.698456 systemd-logind[1124]: Session 14 logged out. Waiting for processes to exit. Apr 12 18:46:10.698827 systemd[1]: sshd@15-10.128.0.15:22-139.178.89.65:49650.service: Deactivated successfully. Apr 12 18:46:10.700094 systemd[1]: session-14.scope: Deactivated successfully. Apr 12 18:46:10.701407 systemd-logind[1124]: Removed session 14. Apr 12 18:46:10.748411 systemd[1]: Started sshd@16-10.128.0.15:22-139.178.89.65:49662.service. Apr 12 18:46:11.092197 sshd[3488]: Accepted publickey for core from 139.178.89.65 port 49662 ssh2: RSA SHA256:XKFMBFadJXSMDH/AGu0Wp1o8GIJqXXW9iA48JbQvGEs Apr 12 18:46:11.094249 sshd[3488]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 18:46:11.101411 systemd[1]: Started session-15.scope. Apr 12 18:46:11.102082 systemd-logind[1124]: New session 15 of user core. Apr 12 18:46:12.235344 sshd[3488]: pam_unix(sshd:session): session closed for user core Apr 12 18:46:12.239899 systemd-logind[1124]: Session 15 logged out. Waiting for processes to exit. Apr 12 18:46:12.242157 systemd[1]: sshd@16-10.128.0.15:22-139.178.89.65:49662.service: Deactivated successfully. Apr 12 18:46:12.243361 systemd[1]: session-15.scope: Deactivated successfully. Apr 12 18:46:12.245783 systemd-logind[1124]: Removed session 15. Apr 12 18:46:12.293054 systemd[1]: Started sshd@17-10.128.0.15:22-139.178.89.65:49668.service. 
Apr 12 18:46:12.633471 sshd[3506]: Accepted publickey for core from 139.178.89.65 port 49668 ssh2: RSA SHA256:XKFMBFadJXSMDH/AGu0Wp1o8GIJqXXW9iA48JbQvGEs Apr 12 18:46:12.635390 sshd[3506]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 18:46:12.641988 systemd-logind[1124]: New session 16 of user core. Apr 12 18:46:12.643217 systemd[1]: Started session-16.scope. Apr 12 18:46:13.266279 sshd[3506]: pam_unix(sshd:session): session closed for user core Apr 12 18:46:13.271582 systemd[1]: sshd@17-10.128.0.15:22-139.178.89.65:49668.service: Deactivated successfully. Apr 12 18:46:13.272823 systemd[1]: session-16.scope: Deactivated successfully. Apr 12 18:46:13.273840 systemd-logind[1124]: Session 16 logged out. Waiting for processes to exit. Apr 12 18:46:13.275205 systemd-logind[1124]: Removed session 16. Apr 12 18:46:13.322942 systemd[1]: Started sshd@18-10.128.0.15:22-139.178.89.65:49678.service. Apr 12 18:46:13.669607 sshd[3516]: Accepted publickey for core from 139.178.89.65 port 49678 ssh2: RSA SHA256:XKFMBFadJXSMDH/AGu0Wp1o8GIJqXXW9iA48JbQvGEs Apr 12 18:46:13.671673 sshd[3516]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 18:46:13.678575 systemd[1]: Started session-17.scope. Apr 12 18:46:13.679987 systemd-logind[1124]: New session 17 of user core. Apr 12 18:46:13.988288 sshd[3516]: pam_unix(sshd:session): session closed for user core Apr 12 18:46:13.993207 systemd[1]: sshd@18-10.128.0.15:22-139.178.89.65:49678.service: Deactivated successfully. Apr 12 18:46:13.994444 systemd[1]: session-17.scope: Deactivated successfully. Apr 12 18:46:13.995575 systemd-logind[1124]: Session 17 logged out. Waiting for processes to exit. Apr 12 18:46:13.996869 systemd-logind[1124]: Removed session 17. Apr 12 18:46:19.045500 systemd[1]: Started sshd@19-10.128.0.15:22-139.178.89.65:42668.service. 
Apr 12 18:46:19.398088 sshd[3532]: Accepted publickey for core from 139.178.89.65 port 42668 ssh2: RSA SHA256:XKFMBFadJXSMDH/AGu0Wp1o8GIJqXXW9iA48JbQvGEs Apr 12 18:46:19.400134 sshd[3532]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 18:46:19.407032 systemd-logind[1124]: New session 18 of user core. Apr 12 18:46:19.407408 systemd[1]: Started session-18.scope. Apr 12 18:46:19.726536 sshd[3532]: pam_unix(sshd:session): session closed for user core Apr 12 18:46:19.731222 systemd[1]: sshd@19-10.128.0.15:22-139.178.89.65:42668.service: Deactivated successfully. Apr 12 18:46:19.732331 systemd[1]: session-18.scope: Deactivated successfully. Apr 12 18:46:19.733892 systemd-logind[1124]: Session 18 logged out. Waiting for processes to exit. Apr 12 18:46:19.735835 systemd-logind[1124]: Removed session 18. Apr 12 18:46:24.781876 systemd[1]: Started sshd@20-10.128.0.15:22-139.178.89.65:42674.service. Apr 12 18:46:25.128248 sshd[3547]: Accepted publickey for core from 139.178.89.65 port 42674 ssh2: RSA SHA256:XKFMBFadJXSMDH/AGu0Wp1o8GIJqXXW9iA48JbQvGEs Apr 12 18:46:25.130587 sshd[3547]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 18:46:25.137966 systemd-logind[1124]: New session 19 of user core. Apr 12 18:46:25.138374 systemd[1]: Started session-19.scope. Apr 12 18:46:25.452403 sshd[3547]: pam_unix(sshd:session): session closed for user core Apr 12 18:46:25.456987 systemd[1]: sshd@20-10.128.0.15:22-139.178.89.65:42674.service: Deactivated successfully. Apr 12 18:46:25.458198 systemd[1]: session-19.scope: Deactivated successfully. Apr 12 18:46:25.459162 systemd-logind[1124]: Session 19 logged out. Waiting for processes to exit. Apr 12 18:46:25.460544 systemd-logind[1124]: Removed session 19. Apr 12 18:46:30.509232 systemd[1]: Started sshd@21-10.128.0.15:22-139.178.89.65:39354.service. 
Apr 12 18:46:30.856808 sshd[3559]: Accepted publickey for core from 139.178.89.65 port 39354 ssh2: RSA SHA256:XKFMBFadJXSMDH/AGu0Wp1o8GIJqXXW9iA48JbQvGEs Apr 12 18:46:30.858736 sshd[3559]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 18:46:30.866029 systemd-logind[1124]: New session 20 of user core. Apr 12 18:46:30.867035 systemd[1]: Started session-20.scope. Apr 12 18:46:31.179456 sshd[3559]: pam_unix(sshd:session): session closed for user core Apr 12 18:46:31.184177 systemd[1]: sshd@21-10.128.0.15:22-139.178.89.65:39354.service: Deactivated successfully. Apr 12 18:46:31.185390 systemd[1]: session-20.scope: Deactivated successfully. Apr 12 18:46:31.186580 systemd-logind[1124]: Session 20 logged out. Waiting for processes to exit. Apr 12 18:46:31.187940 systemd-logind[1124]: Removed session 20. Apr 12 18:46:31.235351 systemd[1]: Started sshd@22-10.128.0.15:22-139.178.89.65:39360.service. Apr 12 18:46:31.582041 sshd[3571]: Accepted publickey for core from 139.178.89.65 port 39360 ssh2: RSA SHA256:XKFMBFadJXSMDH/AGu0Wp1o8GIJqXXW9iA48JbQvGEs Apr 12 18:46:31.584343 sshd[3571]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 18:46:31.592739 systemd[1]: Started session-21.scope. Apr 12 18:46:31.593420 systemd-logind[1124]: New session 21 of user core. Apr 12 18:46:33.232989 env[1143]: time="2024-04-12T18:46:33.232929856Z" level=info msg="StopContainer for \"f78069da04f436e488d285d8fffcc53a8fef38a40d755d838243516513e10a9e\" with timeout 30 (s)" Apr 12 18:46:33.234167 env[1143]: time="2024-04-12T18:46:33.234114157Z" level=info msg="Stop container \"f78069da04f436e488d285d8fffcc53a8fef38a40d755d838243516513e10a9e\" with signal terminated" Apr 12 18:46:33.266895 systemd[1]: run-containerd-runc-k8s.io-1553b720da865bb4121064486a1a8c8f7fb77cf7cae10fb4420b9207ae82655f-runc.qH0hh3.mount: Deactivated successfully. 
Apr 12 18:46:33.322557 env[1143]: time="2024-04-12T18:46:33.322470349Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 12 18:46:33.338568 systemd[1]: cri-containerd-f78069da04f436e488d285d8fffcc53a8fef38a40d755d838243516513e10a9e.scope: Deactivated successfully. Apr 12 18:46:33.343392 env[1143]: time="2024-04-12T18:46:33.343338040Z" level=info msg="StopContainer for \"1553b720da865bb4121064486a1a8c8f7fb77cf7cae10fb4420b9207ae82655f\" with timeout 1 (s)" Apr 12 18:46:33.343850 env[1143]: time="2024-04-12T18:46:33.343813266Z" level=info msg="Stop container \"1553b720da865bb4121064486a1a8c8f7fb77cf7cae10fb4420b9207ae82655f\" with signal terminated" Apr 12 18:46:33.365381 systemd-networkd[1023]: lxc_health: Link DOWN Apr 12 18:46:33.365391 systemd-networkd[1023]: lxc_health: Lost carrier Apr 12 18:46:33.418057 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f78069da04f436e488d285d8fffcc53a8fef38a40d755d838243516513e10a9e-rootfs.mount: Deactivated successfully. Apr 12 18:46:33.431588 systemd[1]: cri-containerd-1553b720da865bb4121064486a1a8c8f7fb77cf7cae10fb4420b9207ae82655f.scope: Deactivated successfully. Apr 12 18:46:33.431972 systemd[1]: cri-containerd-1553b720da865bb4121064486a1a8c8f7fb77cf7cae10fb4420b9207ae82655f.scope: Consumed 9.853s CPU time. 
Apr 12 18:46:33.443191 env[1143]: time="2024-04-12T18:46:33.443124429Z" level=info msg="shim disconnected" id=f78069da04f436e488d285d8fffcc53a8fef38a40d755d838243516513e10a9e Apr 12 18:46:33.443191 env[1143]: time="2024-04-12T18:46:33.443190412Z" level=warning msg="cleaning up after shim disconnected" id=f78069da04f436e488d285d8fffcc53a8fef38a40d755d838243516513e10a9e namespace=k8s.io Apr 12 18:46:33.443581 env[1143]: time="2024-04-12T18:46:33.443204575Z" level=info msg="cleaning up dead shim" Apr 12 18:46:33.462811 env[1143]: time="2024-04-12T18:46:33.462744553Z" level=warning msg="cleanup warnings time=\"2024-04-12T18:46:33Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3623 runtime=io.containerd.runc.v2\n" Apr 12 18:46:33.466516 env[1143]: time="2024-04-12T18:46:33.466467854Z" level=info msg="StopContainer for \"f78069da04f436e488d285d8fffcc53a8fef38a40d755d838243516513e10a9e\" returns successfully" Apr 12 18:46:33.472199 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1553b720da865bb4121064486a1a8c8f7fb77cf7cae10fb4420b9207ae82655f-rootfs.mount: Deactivated successfully. 
Apr 12 18:46:33.473879 env[1143]: time="2024-04-12T18:46:33.473837511Z" level=info msg="StopPodSandbox for \"dcd452c8be60878894d827bd9e26f6d4854b5b358c0b2a658b2e6cb29a507222\"" Apr 12 18:46:33.474216 env[1143]: time="2024-04-12T18:46:33.474183078Z" level=info msg="Container to stop \"f78069da04f436e488d285d8fffcc53a8fef38a40d755d838243516513e10a9e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 12 18:46:33.480123 env[1143]: time="2024-04-12T18:46:33.480062674Z" level=info msg="shim disconnected" id=1553b720da865bb4121064486a1a8c8f7fb77cf7cae10fb4420b9207ae82655f Apr 12 18:46:33.480123 env[1143]: time="2024-04-12T18:46:33.480124183Z" level=warning msg="cleaning up after shim disconnected" id=1553b720da865bb4121064486a1a8c8f7fb77cf7cae10fb4420b9207ae82655f namespace=k8s.io Apr 12 18:46:33.480481 env[1143]: time="2024-04-12T18:46:33.480138330Z" level=info msg="cleaning up dead shim" Apr 12 18:46:33.487893 systemd[1]: cri-containerd-dcd452c8be60878894d827bd9e26f6d4854b5b358c0b2a658b2e6cb29a507222.scope: Deactivated successfully. 
Apr 12 18:46:33.503067 env[1143]: time="2024-04-12T18:46:33.503002688Z" level=warning msg="cleanup warnings time=\"2024-04-12T18:46:33Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3648 runtime=io.containerd.runc.v2\n" Apr 12 18:46:33.506121 env[1143]: time="2024-04-12T18:46:33.506063024Z" level=info msg="StopContainer for \"1553b720da865bb4121064486a1a8c8f7fb77cf7cae10fb4420b9207ae82655f\" returns successfully" Apr 12 18:46:33.506818 env[1143]: time="2024-04-12T18:46:33.506767485Z" level=info msg="StopPodSandbox for \"9e8f21314f87f04eabe82fedf255894c5e3d42a276355a983ae38834574873bc\"" Apr 12 18:46:33.506978 env[1143]: time="2024-04-12T18:46:33.506858461Z" level=info msg="Container to stop \"e9371cd403728925ece542725f8ba8b5cbac4d57f69343ccc1ba11677896e9af\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 12 18:46:33.506978 env[1143]: time="2024-04-12T18:46:33.506881630Z" level=info msg="Container to stop \"7c4b4f5379bb7efb91a4a0723afddcdd794494a241f5d9d081632aa62ee41e11\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 12 18:46:33.506978 env[1143]: time="2024-04-12T18:46:33.506899682Z" level=info msg="Container to stop \"1553b720da865bb4121064486a1a8c8f7fb77cf7cae10fb4420b9207ae82655f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 12 18:46:33.506978 env[1143]: time="2024-04-12T18:46:33.506941300Z" level=info msg="Container to stop \"725a01f3a94330b3f12e183d154f2d617296f65f409372a9d41351ce06782a02\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 12 18:46:33.506978 env[1143]: time="2024-04-12T18:46:33.506963946Z" level=info msg="Container to stop \"841779c27005550e6d4ce95125df3584a0f2411cfd5b7e778c3558c2f24bc8d1\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 12 18:46:33.520320 systemd[1]: cri-containerd-9e8f21314f87f04eabe82fedf255894c5e3d42a276355a983ae38834574873bc.scope: Deactivated successfully. 
Apr 12 18:46:33.538131 env[1143]: time="2024-04-12T18:46:33.538058941Z" level=info msg="shim disconnected" id=dcd452c8be60878894d827bd9e26f6d4854b5b358c0b2a658b2e6cb29a507222 Apr 12 18:46:33.538665 env[1143]: time="2024-04-12T18:46:33.538617786Z" level=warning msg="cleaning up after shim disconnected" id=dcd452c8be60878894d827bd9e26f6d4854b5b358c0b2a658b2e6cb29a507222 namespace=k8s.io Apr 12 18:46:33.538850 env[1143]: time="2024-04-12T18:46:33.538823149Z" level=info msg="cleaning up dead shim" Apr 12 18:46:33.560817 env[1143]: time="2024-04-12T18:46:33.560750667Z" level=warning msg="cleanup warnings time=\"2024-04-12T18:46:33Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3692 runtime=io.containerd.runc.v2\n" Apr 12 18:46:33.561595 env[1143]: time="2024-04-12T18:46:33.561437305Z" level=info msg="TearDown network for sandbox \"dcd452c8be60878894d827bd9e26f6d4854b5b358c0b2a658b2e6cb29a507222\" successfully" Apr 12 18:46:33.561819 env[1143]: time="2024-04-12T18:46:33.561671640Z" level=info msg="StopPodSandbox for \"dcd452c8be60878894d827bd9e26f6d4854b5b358c0b2a658b2e6cb29a507222\" returns successfully" Apr 12 18:46:33.567160 env[1143]: time="2024-04-12T18:46:33.567099875Z" level=info msg="shim disconnected" id=9e8f21314f87f04eabe82fedf255894c5e3d42a276355a983ae38834574873bc Apr 12 18:46:33.567430 env[1143]: time="2024-04-12T18:46:33.567376662Z" level=warning msg="cleaning up after shim disconnected" id=9e8f21314f87f04eabe82fedf255894c5e3d42a276355a983ae38834574873bc namespace=k8s.io Apr 12 18:46:33.567582 env[1143]: time="2024-04-12T18:46:33.567557607Z" level=info msg="cleaning up dead shim" Apr 12 18:46:33.583921 env[1143]: time="2024-04-12T18:46:33.583841330Z" level=warning msg="cleanup warnings time=\"2024-04-12T18:46:33Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3711 runtime=io.containerd.runc.v2\n" Apr 12 18:46:33.585571 env[1143]: time="2024-04-12T18:46:33.584456109Z" level=info msg="TearDown network for sandbox 
\"9e8f21314f87f04eabe82fedf255894c5e3d42a276355a983ae38834574873bc\" successfully" Apr 12 18:46:33.585571 env[1143]: time="2024-04-12T18:46:33.584496711Z" level=info msg="StopPodSandbox for \"9e8f21314f87f04eabe82fedf255894c5e3d42a276355a983ae38834574873bc\" returns successfully" Apr 12 18:46:33.594979 kubelet[2038]: I0412 18:46:33.594950 2038 scope.go:115] "RemoveContainer" containerID="f78069da04f436e488d285d8fffcc53a8fef38a40d755d838243516513e10a9e" Apr 12 18:46:33.599618 env[1143]: time="2024-04-12T18:46:33.599438412Z" level=info msg="RemoveContainer for \"f78069da04f436e488d285d8fffcc53a8fef38a40d755d838243516513e10a9e\"" Apr 12 18:46:33.611938 kubelet[2038]: I0412 18:46:33.610607 2038 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qkxxh\" (UniqueName: \"kubernetes.io/projected/1a09c3dc-0386-43a8-81b8-b82ea89ef32b-kube-api-access-qkxxh\") pod \"1a09c3dc-0386-43a8-81b8-b82ea89ef32b\" (UID: \"1a09c3dc-0386-43a8-81b8-b82ea89ef32b\") " Apr 12 18:46:33.611938 kubelet[2038]: I0412 18:46:33.610680 2038 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1a09c3dc-0386-43a8-81b8-b82ea89ef32b-cilium-config-path\") pod \"1a09c3dc-0386-43a8-81b8-b82ea89ef32b\" (UID: \"1a09c3dc-0386-43a8-81b8-b82ea89ef32b\") " Apr 12 18:46:33.611938 kubelet[2038]: W0412 18:46:33.611010 2038 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/1a09c3dc-0386-43a8-81b8-b82ea89ef32b/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Apr 12 18:46:33.615949 kubelet[2038]: I0412 18:46:33.615022 2038 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1a09c3dc-0386-43a8-81b8-b82ea89ef32b-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "1a09c3dc-0386-43a8-81b8-b82ea89ef32b" (UID: "1a09c3dc-0386-43a8-81b8-b82ea89ef32b"). 
InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 12 18:46:33.616138 env[1143]: time="2024-04-12T18:46:33.615935251Z" level=info msg="RemoveContainer for \"f78069da04f436e488d285d8fffcc53a8fef38a40d755d838243516513e10a9e\" returns successfully" Apr 12 18:46:33.616390 kubelet[2038]: I0412 18:46:33.616362 2038 scope.go:115] "RemoveContainer" containerID="f78069da04f436e488d285d8fffcc53a8fef38a40d755d838243516513e10a9e" Apr 12 18:46:33.616871 env[1143]: time="2024-04-12T18:46:33.616732331Z" level=error msg="ContainerStatus for \"f78069da04f436e488d285d8fffcc53a8fef38a40d755d838243516513e10a9e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f78069da04f436e488d285d8fffcc53a8fef38a40d755d838243516513e10a9e\": not found" Apr 12 18:46:33.617414 kubelet[2038]: E0412 18:46:33.617375 2038 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f78069da04f436e488d285d8fffcc53a8fef38a40d755d838243516513e10a9e\": not found" containerID="f78069da04f436e488d285d8fffcc53a8fef38a40d755d838243516513e10a9e" Apr 12 18:46:33.617704 kubelet[2038]: I0412 18:46:33.617664 2038 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:f78069da04f436e488d285d8fffcc53a8fef38a40d755d838243516513e10a9e} err="failed to get container status \"f78069da04f436e488d285d8fffcc53a8fef38a40d755d838243516513e10a9e\": rpc error: code = NotFound desc = an error occurred when try to find container \"f78069da04f436e488d285d8fffcc53a8fef38a40d755d838243516513e10a9e\": not found" Apr 12 18:46:33.617879 kubelet[2038]: I0412 18:46:33.617862 2038 scope.go:115] "RemoveContainer" containerID="1553b720da865bb4121064486a1a8c8f7fb77cf7cae10fb4420b9207ae82655f" Apr 12 18:46:33.620196 env[1143]: time="2024-04-12T18:46:33.620138133Z" level=info msg="RemoveContainer for 
\"1553b720da865bb4121064486a1a8c8f7fb77cf7cae10fb4420b9207ae82655f\"" Apr 12 18:46:33.626879 env[1143]: time="2024-04-12T18:46:33.626805722Z" level=info msg="RemoveContainer for \"1553b720da865bb4121064486a1a8c8f7fb77cf7cae10fb4420b9207ae82655f\" returns successfully" Apr 12 18:46:33.628348 kubelet[2038]: I0412 18:46:33.628303 2038 scope.go:115] "RemoveContainer" containerID="725a01f3a94330b3f12e183d154f2d617296f65f409372a9d41351ce06782a02" Apr 12 18:46:33.628520 kubelet[2038]: I0412 18:46:33.628422 2038 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a09c3dc-0386-43a8-81b8-b82ea89ef32b-kube-api-access-qkxxh" (OuterVolumeSpecName: "kube-api-access-qkxxh") pod "1a09c3dc-0386-43a8-81b8-b82ea89ef32b" (UID: "1a09c3dc-0386-43a8-81b8-b82ea89ef32b"). InnerVolumeSpecName "kube-api-access-qkxxh". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 12 18:46:33.630086 env[1143]: time="2024-04-12T18:46:33.630037214Z" level=info msg="RemoveContainer for \"725a01f3a94330b3f12e183d154f2d617296f65f409372a9d41351ce06782a02\"" Apr 12 18:46:33.634684 env[1143]: time="2024-04-12T18:46:33.634626777Z" level=info msg="RemoveContainer for \"725a01f3a94330b3f12e183d154f2d617296f65f409372a9d41351ce06782a02\" returns successfully" Apr 12 18:46:33.634926 kubelet[2038]: I0412 18:46:33.634877 2038 scope.go:115] "RemoveContainer" containerID="7c4b4f5379bb7efb91a4a0723afddcdd794494a241f5d9d081632aa62ee41e11" Apr 12 18:46:33.636374 env[1143]: time="2024-04-12T18:46:33.636321929Z" level=info msg="RemoveContainer for \"7c4b4f5379bb7efb91a4a0723afddcdd794494a241f5d9d081632aa62ee41e11\"" Apr 12 18:46:33.641037 env[1143]: time="2024-04-12T18:46:33.640972464Z" level=info msg="RemoveContainer for \"7c4b4f5379bb7efb91a4a0723afddcdd794494a241f5d9d081632aa62ee41e11\" returns successfully" Apr 12 18:46:33.641274 kubelet[2038]: I0412 18:46:33.641222 2038 scope.go:115] "RemoveContainer" 
containerID="841779c27005550e6d4ce95125df3584a0f2411cfd5b7e778c3558c2f24bc8d1" Apr 12 18:46:33.642834 env[1143]: time="2024-04-12T18:46:33.642791033Z" level=info msg="RemoveContainer for \"841779c27005550e6d4ce95125df3584a0f2411cfd5b7e778c3558c2f24bc8d1\"" Apr 12 18:46:33.654860 env[1143]: time="2024-04-12T18:46:33.654781270Z" level=info msg="RemoveContainer for \"841779c27005550e6d4ce95125df3584a0f2411cfd5b7e778c3558c2f24bc8d1\" returns successfully" Apr 12 18:46:33.655186 kubelet[2038]: I0412 18:46:33.655131 2038 scope.go:115] "RemoveContainer" containerID="e9371cd403728925ece542725f8ba8b5cbac4d57f69343ccc1ba11677896e9af" Apr 12 18:46:33.657659 env[1143]: time="2024-04-12T18:46:33.657602833Z" level=info msg="RemoveContainer for \"e9371cd403728925ece542725f8ba8b5cbac4d57f69343ccc1ba11677896e9af\"" Apr 12 18:46:33.662083 env[1143]: time="2024-04-12T18:46:33.662027926Z" level=info msg="RemoveContainer for \"e9371cd403728925ece542725f8ba8b5cbac4d57f69343ccc1ba11677896e9af\" returns successfully" Apr 12 18:46:33.662281 kubelet[2038]: I0412 18:46:33.662257 2038 scope.go:115] "RemoveContainer" containerID="1553b720da865bb4121064486a1a8c8f7fb77cf7cae10fb4420b9207ae82655f" Apr 12 18:46:33.662795 env[1143]: time="2024-04-12T18:46:33.662665258Z" level=error msg="ContainerStatus for \"1553b720da865bb4121064486a1a8c8f7fb77cf7cae10fb4420b9207ae82655f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1553b720da865bb4121064486a1a8c8f7fb77cf7cae10fb4420b9207ae82655f\": not found" Apr 12 18:46:33.663170 kubelet[2038]: E0412 18:46:33.663125 2038 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1553b720da865bb4121064486a1a8c8f7fb77cf7cae10fb4420b9207ae82655f\": not found" containerID="1553b720da865bb4121064486a1a8c8f7fb77cf7cae10fb4420b9207ae82655f" Apr 12 18:46:33.663289 kubelet[2038]: I0412 18:46:33.663176 2038 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:1553b720da865bb4121064486a1a8c8f7fb77cf7cae10fb4420b9207ae82655f} err="failed to get container status \"1553b720da865bb4121064486a1a8c8f7fb77cf7cae10fb4420b9207ae82655f\": rpc error: code = NotFound desc = an error occurred when try to find container \"1553b720da865bb4121064486a1a8c8f7fb77cf7cae10fb4420b9207ae82655f\": not found" Apr 12 18:46:33.663289 kubelet[2038]: I0412 18:46:33.663199 2038 scope.go:115] "RemoveContainer" containerID="725a01f3a94330b3f12e183d154f2d617296f65f409372a9d41351ce06782a02" Apr 12 18:46:33.663660 env[1143]: time="2024-04-12T18:46:33.663580013Z" level=error msg="ContainerStatus for \"725a01f3a94330b3f12e183d154f2d617296f65f409372a9d41351ce06782a02\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"725a01f3a94330b3f12e183d154f2d617296f65f409372a9d41351ce06782a02\": not found" Apr 12 18:46:33.663996 kubelet[2038]: E0412 18:46:33.663970 2038 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"725a01f3a94330b3f12e183d154f2d617296f65f409372a9d41351ce06782a02\": not found" containerID="725a01f3a94330b3f12e183d154f2d617296f65f409372a9d41351ce06782a02" Apr 12 18:46:33.664116 kubelet[2038]: I0412 18:46:33.664017 2038 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:725a01f3a94330b3f12e183d154f2d617296f65f409372a9d41351ce06782a02} err="failed to get container status \"725a01f3a94330b3f12e183d154f2d617296f65f409372a9d41351ce06782a02\": rpc error: code = NotFound desc = an error occurred when try to find container \"725a01f3a94330b3f12e183d154f2d617296f65f409372a9d41351ce06782a02\": not found" Apr 12 18:46:33.664116 kubelet[2038]: I0412 18:46:33.664033 2038 scope.go:115] "RemoveContainer" containerID="7c4b4f5379bb7efb91a4a0723afddcdd794494a241f5d9d081632aa62ee41e11" 
Apr 12 18:46:33.664545 env[1143]: time="2024-04-12T18:46:33.664474315Z" level=error msg="ContainerStatus for \"7c4b4f5379bb7efb91a4a0723afddcdd794494a241f5d9d081632aa62ee41e11\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7c4b4f5379bb7efb91a4a0723afddcdd794494a241f5d9d081632aa62ee41e11\": not found" Apr 12 18:46:33.664722 kubelet[2038]: E0412 18:46:33.664663 2038 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7c4b4f5379bb7efb91a4a0723afddcdd794494a241f5d9d081632aa62ee41e11\": not found" containerID="7c4b4f5379bb7efb91a4a0723afddcdd794494a241f5d9d081632aa62ee41e11" Apr 12 18:46:33.664722 kubelet[2038]: I0412 18:46:33.664699 2038 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:7c4b4f5379bb7efb91a4a0723afddcdd794494a241f5d9d081632aa62ee41e11} err="failed to get container status \"7c4b4f5379bb7efb91a4a0723afddcdd794494a241f5d9d081632aa62ee41e11\": rpc error: code = NotFound desc = an error occurred when try to find container \"7c4b4f5379bb7efb91a4a0723afddcdd794494a241f5d9d081632aa62ee41e11\": not found" Apr 12 18:46:33.664722 kubelet[2038]: I0412 18:46:33.664714 2038 scope.go:115] "RemoveContainer" containerID="841779c27005550e6d4ce95125df3584a0f2411cfd5b7e778c3558c2f24bc8d1" Apr 12 18:46:33.665172 env[1143]: time="2024-04-12T18:46:33.665091446Z" level=error msg="ContainerStatus for \"841779c27005550e6d4ce95125df3584a0f2411cfd5b7e778c3558c2f24bc8d1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"841779c27005550e6d4ce95125df3584a0f2411cfd5b7e778c3558c2f24bc8d1\": not found" Apr 12 18:46:33.665439 kubelet[2038]: E0412 18:46:33.665418 2038 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container 
\"841779c27005550e6d4ce95125df3584a0f2411cfd5b7e778c3558c2f24bc8d1\": not found" containerID="841779c27005550e6d4ce95125df3584a0f2411cfd5b7e778c3558c2f24bc8d1" Apr 12 18:46:33.665546 kubelet[2038]: I0412 18:46:33.665459 2038 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:841779c27005550e6d4ce95125df3584a0f2411cfd5b7e778c3558c2f24bc8d1} err="failed to get container status \"841779c27005550e6d4ce95125df3584a0f2411cfd5b7e778c3558c2f24bc8d1\": rpc error: code = NotFound desc = an error occurred when try to find container \"841779c27005550e6d4ce95125df3584a0f2411cfd5b7e778c3558c2f24bc8d1\": not found" Apr 12 18:46:33.665546 kubelet[2038]: I0412 18:46:33.665475 2038 scope.go:115] "RemoveContainer" containerID="e9371cd403728925ece542725f8ba8b5cbac4d57f69343ccc1ba11677896e9af" Apr 12 18:46:33.665779 env[1143]: time="2024-04-12T18:46:33.665699260Z" level=error msg="ContainerStatus for \"e9371cd403728925ece542725f8ba8b5cbac4d57f69343ccc1ba11677896e9af\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e9371cd403728925ece542725f8ba8b5cbac4d57f69343ccc1ba11677896e9af\": not found" Apr 12 18:46:33.666049 kubelet[2038]: E0412 18:46:33.666029 2038 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e9371cd403728925ece542725f8ba8b5cbac4d57f69343ccc1ba11677896e9af\": not found" containerID="e9371cd403728925ece542725f8ba8b5cbac4d57f69343ccc1ba11677896e9af" Apr 12 18:46:33.666243 kubelet[2038]: I0412 18:46:33.666203 2038 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:e9371cd403728925ece542725f8ba8b5cbac4d57f69343ccc1ba11677896e9af} err="failed to get container status \"e9371cd403728925ece542725f8ba8b5cbac4d57f69343ccc1ba11677896e9af\": rpc error: code = NotFound desc = an error occurred when try to find container 
\"e9371cd403728925ece542725f8ba8b5cbac4d57f69343ccc1ba11677896e9af\": not found" Apr 12 18:46:33.711581 kubelet[2038]: I0412 18:46:33.711509 2038 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b98aae52-9598-4a7b-b7b5-ea860ea0f989-host-proc-sys-kernel\") pod \"b98aae52-9598-4a7b-b7b5-ea860ea0f989\" (UID: \"b98aae52-9598-4a7b-b7b5-ea860ea0f989\") " Apr 12 18:46:33.711581 kubelet[2038]: I0412 18:46:33.711588 2038 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b98aae52-9598-4a7b-b7b5-ea860ea0f989-cilium-run\") pod \"b98aae52-9598-4a7b-b7b5-ea860ea0f989\" (UID: \"b98aae52-9598-4a7b-b7b5-ea860ea0f989\") " Apr 12 18:46:33.712088 kubelet[2038]: I0412 18:46:33.711651 2038 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b98aae52-9598-4a7b-b7b5-ea860ea0f989-hubble-tls\") pod \"b98aae52-9598-4a7b-b7b5-ea860ea0f989\" (UID: \"b98aae52-9598-4a7b-b7b5-ea860ea0f989\") " Apr 12 18:46:33.712088 kubelet[2038]: I0412 18:46:33.711690 2038 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f82xr\" (UniqueName: \"kubernetes.io/projected/b98aae52-9598-4a7b-b7b5-ea860ea0f989-kube-api-access-f82xr\") pod \"b98aae52-9598-4a7b-b7b5-ea860ea0f989\" (UID: \"b98aae52-9598-4a7b-b7b5-ea860ea0f989\") " Apr 12 18:46:33.712088 kubelet[2038]: I0412 18:46:33.711738 2038 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b98aae52-9598-4a7b-b7b5-ea860ea0f989-hostproc\") pod \"b98aae52-9598-4a7b-b7b5-ea860ea0f989\" (UID: \"b98aae52-9598-4a7b-b7b5-ea860ea0f989\") " Apr 12 18:46:33.712088 kubelet[2038]: I0412 18:46:33.711776 2038 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume 
\"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b98aae52-9598-4a7b-b7b5-ea860ea0f989-clustermesh-secrets\") pod \"b98aae52-9598-4a7b-b7b5-ea860ea0f989\" (UID: \"b98aae52-9598-4a7b-b7b5-ea860ea0f989\") " Apr 12 18:46:33.712088 kubelet[2038]: I0412 18:46:33.711824 2038 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b98aae52-9598-4a7b-b7b5-ea860ea0f989-bpf-maps\") pod \"b98aae52-9598-4a7b-b7b5-ea860ea0f989\" (UID: \"b98aae52-9598-4a7b-b7b5-ea860ea0f989\") " Apr 12 18:46:33.712088 kubelet[2038]: I0412 18:46:33.711855 2038 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b98aae52-9598-4a7b-b7b5-ea860ea0f989-cilium-cgroup\") pod \"b98aae52-9598-4a7b-b7b5-ea860ea0f989\" (UID: \"b98aae52-9598-4a7b-b7b5-ea860ea0f989\") " Apr 12 18:46:33.712446 kubelet[2038]: I0412 18:46:33.711938 2038 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b98aae52-9598-4a7b-b7b5-ea860ea0f989-cilium-config-path\") pod \"b98aae52-9598-4a7b-b7b5-ea860ea0f989\" (UID: \"b98aae52-9598-4a7b-b7b5-ea860ea0f989\") " Apr 12 18:46:33.712446 kubelet[2038]: I0412 18:46:33.711990 2038 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b98aae52-9598-4a7b-b7b5-ea860ea0f989-xtables-lock\") pod \"b98aae52-9598-4a7b-b7b5-ea860ea0f989\" (UID: \"b98aae52-9598-4a7b-b7b5-ea860ea0f989\") " Apr 12 18:46:33.712446 kubelet[2038]: I0412 18:46:33.712021 2038 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b98aae52-9598-4a7b-b7b5-ea860ea0f989-host-proc-sys-net\") pod \"b98aae52-9598-4a7b-b7b5-ea860ea0f989\" (UID: \"b98aae52-9598-4a7b-b7b5-ea860ea0f989\") " Apr 12 18:46:33.712446 
kubelet[2038]: I0412 18:46:33.712074 2038 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b98aae52-9598-4a7b-b7b5-ea860ea0f989-cni-path\") pod \"b98aae52-9598-4a7b-b7b5-ea860ea0f989\" (UID: \"b98aae52-9598-4a7b-b7b5-ea860ea0f989\") " Apr 12 18:46:33.712446 kubelet[2038]: I0412 18:46:33.712104 2038 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b98aae52-9598-4a7b-b7b5-ea860ea0f989-lib-modules\") pod \"b98aae52-9598-4a7b-b7b5-ea860ea0f989\" (UID: \"b98aae52-9598-4a7b-b7b5-ea860ea0f989\") " Apr 12 18:46:33.712446 kubelet[2038]: I0412 18:46:33.712165 2038 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b98aae52-9598-4a7b-b7b5-ea860ea0f989-etc-cni-netd\") pod \"b98aae52-9598-4a7b-b7b5-ea860ea0f989\" (UID: \"b98aae52-9598-4a7b-b7b5-ea860ea0f989\") " Apr 12 18:46:33.712785 kubelet[2038]: I0412 18:46:33.712245 2038 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-qkxxh\" (UniqueName: \"kubernetes.io/projected/1a09c3dc-0386-43a8-81b8-b82ea89ef32b-kube-api-access-qkxxh\") on node \"ci-3510-3-3-5ab81259ac89653c3ce9.c.flatcar-212911.internal\" DevicePath \"\"" Apr 12 18:46:33.712785 kubelet[2038]: I0412 18:46:33.712270 2038 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1a09c3dc-0386-43a8-81b8-b82ea89ef32b-cilium-config-path\") on node \"ci-3510-3-3-5ab81259ac89653c3ce9.c.flatcar-212911.internal\" DevicePath \"\"" Apr 12 18:46:33.712785 kubelet[2038]: I0412 18:46:33.712338 2038 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b98aae52-9598-4a7b-b7b5-ea860ea0f989-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "b98aae52-9598-4a7b-b7b5-ea860ea0f989" (UID: 
"b98aae52-9598-4a7b-b7b5-ea860ea0f989"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 12 18:46:33.712785 kubelet[2038]: I0412 18:46:33.712418 2038 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b98aae52-9598-4a7b-b7b5-ea860ea0f989-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "b98aae52-9598-4a7b-b7b5-ea860ea0f989" (UID: "b98aae52-9598-4a7b-b7b5-ea860ea0f989"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 12 18:46:33.712785 kubelet[2038]: I0412 18:46:33.712449 2038 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b98aae52-9598-4a7b-b7b5-ea860ea0f989-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "b98aae52-9598-4a7b-b7b5-ea860ea0f989" (UID: "b98aae52-9598-4a7b-b7b5-ea860ea0f989"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 12 18:46:33.714547 kubelet[2038]: W0412 18:46:33.714421 2038 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/b98aae52-9598-4a7b-b7b5-ea860ea0f989/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Apr 12 18:46:33.715679 kubelet[2038]: I0412 18:46:33.714893 2038 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b98aae52-9598-4a7b-b7b5-ea860ea0f989-hostproc" (OuterVolumeSpecName: "hostproc") pod "b98aae52-9598-4a7b-b7b5-ea860ea0f989" (UID: "b98aae52-9598-4a7b-b7b5-ea860ea0f989"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 12 18:46:33.715820 kubelet[2038]: I0412 18:46:33.715611 2038 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b98aae52-9598-4a7b-b7b5-ea860ea0f989-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "b98aae52-9598-4a7b-b7b5-ea860ea0f989" (UID: "b98aae52-9598-4a7b-b7b5-ea860ea0f989"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 12 18:46:33.715820 kubelet[2038]: I0412 18:46:33.715638 2038 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b98aae52-9598-4a7b-b7b5-ea860ea0f989-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "b98aae52-9598-4a7b-b7b5-ea860ea0f989" (UID: "b98aae52-9598-4a7b-b7b5-ea860ea0f989"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 12 18:46:33.717276 kubelet[2038]: I0412 18:46:33.716770 2038 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b98aae52-9598-4a7b-b7b5-ea860ea0f989-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "b98aae52-9598-4a7b-b7b5-ea860ea0f989" (UID: "b98aae52-9598-4a7b-b7b5-ea860ea0f989"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 12 18:46:33.717276 kubelet[2038]: I0412 18:46:33.716862 2038 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b98aae52-9598-4a7b-b7b5-ea860ea0f989-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "b98aae52-9598-4a7b-b7b5-ea860ea0f989" (UID: "b98aae52-9598-4a7b-b7b5-ea860ea0f989"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 12 18:46:33.717276 kubelet[2038]: I0412 18:46:33.716950 2038 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b98aae52-9598-4a7b-b7b5-ea860ea0f989-cni-path" (OuterVolumeSpecName: "cni-path") pod "b98aae52-9598-4a7b-b7b5-ea860ea0f989" (UID: "b98aae52-9598-4a7b-b7b5-ea860ea0f989"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 12 18:46:33.717276 kubelet[2038]: I0412 18:46:33.717021 2038 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b98aae52-9598-4a7b-b7b5-ea860ea0f989-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "b98aae52-9598-4a7b-b7b5-ea860ea0f989" (UID: "b98aae52-9598-4a7b-b7b5-ea860ea0f989"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 12 18:46:33.720187 kubelet[2038]: I0412 18:46:33.720148 2038 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b98aae52-9598-4a7b-b7b5-ea860ea0f989-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "b98aae52-9598-4a7b-b7b5-ea860ea0f989" (UID: "b98aae52-9598-4a7b-b7b5-ea860ea0f989"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 12 18:46:33.721868 kubelet[2038]: I0412 18:46:33.721815 2038 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b98aae52-9598-4a7b-b7b5-ea860ea0f989-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "b98aae52-9598-4a7b-b7b5-ea860ea0f989" (UID: "b98aae52-9598-4a7b-b7b5-ea860ea0f989"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 12 18:46:33.727012 kubelet[2038]: I0412 18:46:33.726938 2038 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b98aae52-9598-4a7b-b7b5-ea860ea0f989-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "b98aae52-9598-4a7b-b7b5-ea860ea0f989" (UID: "b98aae52-9598-4a7b-b7b5-ea860ea0f989"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 12 18:46:33.727649 kubelet[2038]: I0412 18:46:33.727597 2038 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b98aae52-9598-4a7b-b7b5-ea860ea0f989-kube-api-access-f82xr" (OuterVolumeSpecName: "kube-api-access-f82xr") pod "b98aae52-9598-4a7b-b7b5-ea860ea0f989" (UID: "b98aae52-9598-4a7b-b7b5-ea860ea0f989"). InnerVolumeSpecName "kube-api-access-f82xr". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 12 18:46:33.813214 kubelet[2038]: I0412 18:46:33.813070 2038 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b98aae52-9598-4a7b-b7b5-ea860ea0f989-cilium-config-path\") on node \"ci-3510-3-3-5ab81259ac89653c3ce9.c.flatcar-212911.internal\" DevicePath \"\"" Apr 12 18:46:33.813214 kubelet[2038]: I0412 18:46:33.813121 2038 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b98aae52-9598-4a7b-b7b5-ea860ea0f989-xtables-lock\") on node \"ci-3510-3-3-5ab81259ac89653c3ce9.c.flatcar-212911.internal\" DevicePath \"\"" Apr 12 18:46:33.813214 kubelet[2038]: I0412 18:46:33.813140 2038 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b98aae52-9598-4a7b-b7b5-ea860ea0f989-host-proc-sys-net\") on node \"ci-3510-3-3-5ab81259ac89653c3ce9.c.flatcar-212911.internal\" DevicePath \"\"" Apr 12 18:46:33.813214 kubelet[2038]: I0412 18:46:33.813158 2038 
reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b98aae52-9598-4a7b-b7b5-ea860ea0f989-cni-path\") on node \"ci-3510-3-3-5ab81259ac89653c3ce9.c.flatcar-212911.internal\" DevicePath \"\"" Apr 12 18:46:33.813214 kubelet[2038]: I0412 18:46:33.813176 2038 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b98aae52-9598-4a7b-b7b5-ea860ea0f989-lib-modules\") on node \"ci-3510-3-3-5ab81259ac89653c3ce9.c.flatcar-212911.internal\" DevicePath \"\"" Apr 12 18:46:33.813214 kubelet[2038]: I0412 18:46:33.813194 2038 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b98aae52-9598-4a7b-b7b5-ea860ea0f989-etc-cni-netd\") on node \"ci-3510-3-3-5ab81259ac89653c3ce9.c.flatcar-212911.internal\" DevicePath \"\"" Apr 12 18:46:33.813214 kubelet[2038]: I0412 18:46:33.813212 2038 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b98aae52-9598-4a7b-b7b5-ea860ea0f989-hubble-tls\") on node \"ci-3510-3-3-5ab81259ac89653c3ce9.c.flatcar-212911.internal\" DevicePath \"\"" Apr 12 18:46:33.813732 kubelet[2038]: I0412 18:46:33.813231 2038 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b98aae52-9598-4a7b-b7b5-ea860ea0f989-host-proc-sys-kernel\") on node \"ci-3510-3-3-5ab81259ac89653c3ce9.c.flatcar-212911.internal\" DevicePath \"\"" Apr 12 18:46:33.813732 kubelet[2038]: I0412 18:46:33.813248 2038 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b98aae52-9598-4a7b-b7b5-ea860ea0f989-cilium-run\") on node \"ci-3510-3-3-5ab81259ac89653c3ce9.c.flatcar-212911.internal\" DevicePath \"\"" Apr 12 18:46:33.813732 kubelet[2038]: I0412 18:46:33.813265 2038 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: 
\"kubernetes.io/secret/b98aae52-9598-4a7b-b7b5-ea860ea0f989-clustermesh-secrets\") on node \"ci-3510-3-3-5ab81259ac89653c3ce9.c.flatcar-212911.internal\" DevicePath \"\"" Apr 12 18:46:33.813732 kubelet[2038]: I0412 18:46:33.813284 2038 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-f82xr\" (UniqueName: \"kubernetes.io/projected/b98aae52-9598-4a7b-b7b5-ea860ea0f989-kube-api-access-f82xr\") on node \"ci-3510-3-3-5ab81259ac89653c3ce9.c.flatcar-212911.internal\" DevicePath \"\"" Apr 12 18:46:33.813732 kubelet[2038]: I0412 18:46:33.813301 2038 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b98aae52-9598-4a7b-b7b5-ea860ea0f989-hostproc\") on node \"ci-3510-3-3-5ab81259ac89653c3ce9.c.flatcar-212911.internal\" DevicePath \"\"" Apr 12 18:46:33.813732 kubelet[2038]: I0412 18:46:33.813335 2038 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b98aae52-9598-4a7b-b7b5-ea860ea0f989-bpf-maps\") on node \"ci-3510-3-3-5ab81259ac89653c3ce9.c.flatcar-212911.internal\" DevicePath \"\"" Apr 12 18:46:33.813732 kubelet[2038]: I0412 18:46:33.813353 2038 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b98aae52-9598-4a7b-b7b5-ea860ea0f989-cilium-cgroup\") on node \"ci-3510-3-3-5ab81259ac89653c3ce9.c.flatcar-212911.internal\" DevicePath \"\"" Apr 12 18:46:33.900292 systemd[1]: Removed slice kubepods-besteffort-pod1a09c3dc_0386_43a8_81b8_b82ea89ef32b.slice. Apr 12 18:46:34.167797 kubelet[2038]: I0412 18:46:34.167726 2038 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=1a09c3dc-0386-43a8-81b8-b82ea89ef32b path="/var/lib/kubelet/pods/1a09c3dc-0386-43a8-81b8-b82ea89ef32b/volumes" Apr 12 18:46:34.173180 systemd[1]: Removed slice kubepods-burstable-podb98aae52_9598_4a7b_b7b5_ea860ea0f989.slice. 
Apr 12 18:46:34.173348 systemd[1]: kubepods-burstable-podb98aae52_9598_4a7b_b7b5_ea860ea0f989.slice: Consumed 10.026s CPU time. Apr 12 18:46:34.246542 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dcd452c8be60878894d827bd9e26f6d4854b5b358c0b2a658b2e6cb29a507222-rootfs.mount: Deactivated successfully. Apr 12 18:46:34.246693 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-dcd452c8be60878894d827bd9e26f6d4854b5b358c0b2a658b2e6cb29a507222-shm.mount: Deactivated successfully. Apr 12 18:46:34.246797 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9e8f21314f87f04eabe82fedf255894c5e3d42a276355a983ae38834574873bc-rootfs.mount: Deactivated successfully. Apr 12 18:46:34.246890 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9e8f21314f87f04eabe82fedf255894c5e3d42a276355a983ae38834574873bc-shm.mount: Deactivated successfully. Apr 12 18:46:34.247003 systemd[1]: var-lib-kubelet-pods-b98aae52\x2d9598\x2d4a7b\x2db7b5\x2dea860ea0f989-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Apr 12 18:46:34.247095 systemd[1]: var-lib-kubelet-pods-b98aae52\x2d9598\x2d4a7b\x2db7b5\x2dea860ea0f989-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Apr 12 18:46:34.247190 systemd[1]: var-lib-kubelet-pods-1a09c3dc\x2d0386\x2d43a8\x2d81b8\x2db82ea89ef32b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dqkxxh.mount: Deactivated successfully. Apr 12 18:46:34.247287 systemd[1]: var-lib-kubelet-pods-b98aae52\x2d9598\x2d4a7b\x2db7b5\x2dea860ea0f989-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2df82xr.mount: Deactivated successfully. Apr 12 18:46:35.194021 sshd[3571]: pam_unix(sshd:session): session closed for user core Apr 12 18:46:35.199002 systemd-logind[1124]: Session 21 logged out. Waiting for processes to exit. Apr 12 18:46:35.199268 systemd[1]: sshd@22-10.128.0.15:22-139.178.89.65:39360.service: Deactivated successfully. 
Apr 12 18:46:35.200441 systemd[1]: session-21.scope: Deactivated successfully. Apr 12 18:46:35.202091 systemd-logind[1124]: Removed session 21. Apr 12 18:46:35.250382 systemd[1]: Started sshd@23-10.128.0.15:22-139.178.89.65:39372.service. Apr 12 18:46:35.596358 sshd[3731]: Accepted publickey for core from 139.178.89.65 port 39372 ssh2: RSA SHA256:XKFMBFadJXSMDH/AGu0Wp1o8GIJqXXW9iA48JbQvGEs Apr 12 18:46:35.598890 sshd[3731]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 18:46:35.608276 systemd-logind[1124]: New session 22 of user core. Apr 12 18:46:35.609037 systemd[1]: Started session-22.scope. Apr 12 18:46:36.168729 kubelet[2038]: I0412 18:46:36.168696 2038 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=b98aae52-9598-4a7b-b7b5-ea860ea0f989 path="/var/lib/kubelet/pods/b98aae52-9598-4a7b-b7b5-ea860ea0f989/volumes" Apr 12 18:46:36.193334 env[1143]: time="2024-04-12T18:46:36.193022000Z" level=info msg="StopPodSandbox for \"9e8f21314f87f04eabe82fedf255894c5e3d42a276355a983ae38834574873bc\"" Apr 12 18:46:36.193334 env[1143]: time="2024-04-12T18:46:36.193190657Z" level=info msg="TearDown network for sandbox \"9e8f21314f87f04eabe82fedf255894c5e3d42a276355a983ae38834574873bc\" successfully" Apr 12 18:46:36.193334 env[1143]: time="2024-04-12T18:46:36.193254322Z" level=info msg="StopPodSandbox for \"9e8f21314f87f04eabe82fedf255894c5e3d42a276355a983ae38834574873bc\" returns successfully" Apr 12 18:46:36.196149 env[1143]: time="2024-04-12T18:46:36.194527170Z" level=info msg="RemovePodSandbox for \"9e8f21314f87f04eabe82fedf255894c5e3d42a276355a983ae38834574873bc\"" Apr 12 18:46:36.196149 env[1143]: time="2024-04-12T18:46:36.194568524Z" level=info msg="Forcibly stopping sandbox \"9e8f21314f87f04eabe82fedf255894c5e3d42a276355a983ae38834574873bc\"" Apr 12 18:46:36.196149 env[1143]: time="2024-04-12T18:46:36.194708798Z" level=info msg="TearDown network for sandbox 
\"9e8f21314f87f04eabe82fedf255894c5e3d42a276355a983ae38834574873bc\" successfully" Apr 12 18:46:36.201084 env[1143]: time="2024-04-12T18:46:36.201000952Z" level=info msg="RemovePodSandbox \"9e8f21314f87f04eabe82fedf255894c5e3d42a276355a983ae38834574873bc\" returns successfully" Apr 12 18:46:36.201841 env[1143]: time="2024-04-12T18:46:36.201802590Z" level=info msg="StopPodSandbox for \"dcd452c8be60878894d827bd9e26f6d4854b5b358c0b2a658b2e6cb29a507222\"" Apr 12 18:46:36.202164 env[1143]: time="2024-04-12T18:46:36.202108732Z" level=info msg="TearDown network for sandbox \"dcd452c8be60878894d827bd9e26f6d4854b5b358c0b2a658b2e6cb29a507222\" successfully" Apr 12 18:46:36.202316 env[1143]: time="2024-04-12T18:46:36.202273260Z" level=info msg="StopPodSandbox for \"dcd452c8be60878894d827bd9e26f6d4854b5b358c0b2a658b2e6cb29a507222\" returns successfully" Apr 12 18:46:36.202779 env[1143]: time="2024-04-12T18:46:36.202747708Z" level=info msg="RemovePodSandbox for \"dcd452c8be60878894d827bd9e26f6d4854b5b358c0b2a658b2e6cb29a507222\"" Apr 12 18:46:36.202971 env[1143]: time="2024-04-12T18:46:36.202898455Z" level=info msg="Forcibly stopping sandbox \"dcd452c8be60878894d827bd9e26f6d4854b5b358c0b2a658b2e6cb29a507222\"" Apr 12 18:46:36.203258 env[1143]: time="2024-04-12T18:46:36.203227358Z" level=info msg="TearDown network for sandbox \"dcd452c8be60878894d827bd9e26f6d4854b5b358c0b2a658b2e6cb29a507222\" successfully" Apr 12 18:46:36.208689 env[1143]: time="2024-04-12T18:46:36.208645853Z" level=info msg="RemovePodSandbox \"dcd452c8be60878894d827bd9e26f6d4854b5b358c0b2a658b2e6cb29a507222\" returns successfully" Apr 12 18:46:36.376374 kubelet[2038]: E0412 18:46:36.376345 2038 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 12 18:46:36.693197 kubelet[2038]: I0412 18:46:36.693138 2038 topology_manager.go:212] "Topology Admit Handler" Apr 12 18:46:36.693607 
kubelet[2038]: E0412 18:46:36.693582 2038 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b98aae52-9598-4a7b-b7b5-ea860ea0f989" containerName="mount-cgroup" Apr 12 18:46:36.693808 kubelet[2038]: E0412 18:46:36.693789 2038 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b98aae52-9598-4a7b-b7b5-ea860ea0f989" containerName="mount-bpf-fs" Apr 12 18:46:36.693973 kubelet[2038]: E0412 18:46:36.693957 2038 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b98aae52-9598-4a7b-b7b5-ea860ea0f989" containerName="apply-sysctl-overwrites" Apr 12 18:46:36.694109 kubelet[2038]: E0412 18:46:36.694088 2038 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="1a09c3dc-0386-43a8-81b8-b82ea89ef32b" containerName="cilium-operator" Apr 12 18:46:36.694275 kubelet[2038]: E0412 18:46:36.694256 2038 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b98aae52-9598-4a7b-b7b5-ea860ea0f989" containerName="clean-cilium-state" Apr 12 18:46:36.694428 kubelet[2038]: E0412 18:46:36.694412 2038 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b98aae52-9598-4a7b-b7b5-ea860ea0f989" containerName="cilium-agent" Apr 12 18:46:36.694630 kubelet[2038]: I0412 18:46:36.694602 2038 memory_manager.go:346] "RemoveStaleState removing state" podUID="b98aae52-9598-4a7b-b7b5-ea860ea0f989" containerName="cilium-agent" Apr 12 18:46:36.694764 kubelet[2038]: I0412 18:46:36.694748 2038 memory_manager.go:346] "RemoveStaleState removing state" podUID="1a09c3dc-0386-43a8-81b8-b82ea89ef32b" containerName="cilium-operator" Apr 12 18:46:36.706235 sshd[3731]: pam_unix(sshd:session): session closed for user core Apr 12 18:46:36.709003 systemd[1]: Created slice kubepods-burstable-podea3db601_8d28_40a2_9c98_cbcacc32867d.slice. Apr 12 18:46:36.714703 systemd[1]: sshd@23-10.128.0.15:22-139.178.89.65:39372.service: Deactivated successfully. Apr 12 18:46:36.716128 systemd[1]: session-22.scope: Deactivated successfully. 
Apr 12 18:46:36.719539 systemd-logind[1124]: Session 22 logged out. Waiting for processes to exit. Apr 12 18:46:36.723756 systemd-logind[1124]: Removed session 22. Apr 12 18:46:36.737428 kubelet[2038]: I0412 18:46:36.735638 2038 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ea3db601-8d28-40a2-9c98-cbcacc32867d-hubble-tls\") pod \"cilium-dql9x\" (UID: \"ea3db601-8d28-40a2-9c98-cbcacc32867d\") " pod="kube-system/cilium-dql9x" Apr 12 18:46:36.737807 kubelet[2038]: I0412 18:46:36.737779 2038 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8dr2z\" (UniqueName: \"kubernetes.io/projected/ea3db601-8d28-40a2-9c98-cbcacc32867d-kube-api-access-8dr2z\") pod \"cilium-dql9x\" (UID: \"ea3db601-8d28-40a2-9c98-cbcacc32867d\") " pod="kube-system/cilium-dql9x" Apr 12 18:46:36.738031 kubelet[2038]: I0412 18:46:36.738010 2038 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ea3db601-8d28-40a2-9c98-cbcacc32867d-cilium-config-path\") pod \"cilium-dql9x\" (UID: \"ea3db601-8d28-40a2-9c98-cbcacc32867d\") " pod="kube-system/cilium-dql9x" Apr 12 18:46:36.738196 kubelet[2038]: I0412 18:46:36.738176 2038 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ea3db601-8d28-40a2-9c98-cbcacc32867d-cilium-run\") pod \"cilium-dql9x\" (UID: \"ea3db601-8d28-40a2-9c98-cbcacc32867d\") " pod="kube-system/cilium-dql9x" Apr 12 18:46:36.738338 kubelet[2038]: I0412 18:46:36.738322 2038 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ea3db601-8d28-40a2-9c98-cbcacc32867d-cni-path\") pod \"cilium-dql9x\" (UID: \"ea3db601-8d28-40a2-9c98-cbcacc32867d\") 
" pod="kube-system/cilium-dql9x" Apr 12 18:46:36.738469 kubelet[2038]: I0412 18:46:36.738449 2038 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ea3db601-8d28-40a2-9c98-cbcacc32867d-host-proc-sys-kernel\") pod \"cilium-dql9x\" (UID: \"ea3db601-8d28-40a2-9c98-cbcacc32867d\") " pod="kube-system/cilium-dql9x" Apr 12 18:46:36.738645 kubelet[2038]: I0412 18:46:36.738627 2038 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ea3db601-8d28-40a2-9c98-cbcacc32867d-clustermesh-secrets\") pod \"cilium-dql9x\" (UID: \"ea3db601-8d28-40a2-9c98-cbcacc32867d\") " pod="kube-system/cilium-dql9x" Apr 12 18:46:36.740041 kubelet[2038]: I0412 18:46:36.740010 2038 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/ea3db601-8d28-40a2-9c98-cbcacc32867d-cilium-ipsec-secrets\") pod \"cilium-dql9x\" (UID: \"ea3db601-8d28-40a2-9c98-cbcacc32867d\") " pod="kube-system/cilium-dql9x" Apr 12 18:46:36.740337 kubelet[2038]: I0412 18:46:36.740315 2038 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ea3db601-8d28-40a2-9c98-cbcacc32867d-host-proc-sys-net\") pod \"cilium-dql9x\" (UID: \"ea3db601-8d28-40a2-9c98-cbcacc32867d\") " pod="kube-system/cilium-dql9x" Apr 12 18:46:36.740562 kubelet[2038]: I0412 18:46:36.740543 2038 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ea3db601-8d28-40a2-9c98-cbcacc32867d-hostproc\") pod \"cilium-dql9x\" (UID: \"ea3db601-8d28-40a2-9c98-cbcacc32867d\") " pod="kube-system/cilium-dql9x" Apr 12 18:46:36.741003 kubelet[2038]: I0412 18:46:36.740977 2038 
reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ea3db601-8d28-40a2-9c98-cbcacc32867d-cilium-cgroup\") pod \"cilium-dql9x\" (UID: \"ea3db601-8d28-40a2-9c98-cbcacc32867d\") " pod="kube-system/cilium-dql9x" Apr 12 18:46:36.741188 kubelet[2038]: I0412 18:46:36.741170 2038 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ea3db601-8d28-40a2-9c98-cbcacc32867d-etc-cni-netd\") pod \"cilium-dql9x\" (UID: \"ea3db601-8d28-40a2-9c98-cbcacc32867d\") " pod="kube-system/cilium-dql9x" Apr 12 18:46:36.741367 kubelet[2038]: I0412 18:46:36.741349 2038 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ea3db601-8d28-40a2-9c98-cbcacc32867d-bpf-maps\") pod \"cilium-dql9x\" (UID: \"ea3db601-8d28-40a2-9c98-cbcacc32867d\") " pod="kube-system/cilium-dql9x" Apr 12 18:46:36.741510 kubelet[2038]: I0412 18:46:36.741491 2038 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ea3db601-8d28-40a2-9c98-cbcacc32867d-lib-modules\") pod \"cilium-dql9x\" (UID: \"ea3db601-8d28-40a2-9c98-cbcacc32867d\") " pod="kube-system/cilium-dql9x" Apr 12 18:46:36.744042 kubelet[2038]: I0412 18:46:36.744008 2038 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ea3db601-8d28-40a2-9c98-cbcacc32867d-xtables-lock\") pod \"cilium-dql9x\" (UID: \"ea3db601-8d28-40a2-9c98-cbcacc32867d\") " pod="kube-system/cilium-dql9x" Apr 12 18:46:36.747294 kubelet[2038]: W0412 18:46:36.747258 2038 reflector.go:533] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User 
"system:node:ci-3510-3-3-5ab81259ac89653c3ce9.c.flatcar-212911.internal" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510-3-3-5ab81259ac89653c3ce9.c.flatcar-212911.internal' and this object Apr 12 18:46:36.747603 kubelet[2038]: E0412 18:46:36.747577 2038 reflector.go:148] object-"kube-system"/"cilium-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ci-3510-3-3-5ab81259ac89653c3ce9.c.flatcar-212911.internal" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510-3-3-5ab81259ac89653c3ce9.c.flatcar-212911.internal' and this object Apr 12 18:46:36.747867 kubelet[2038]: W0412 18:46:36.747838 2038 reflector.go:533] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ci-3510-3-3-5ab81259ac89653c3ce9.c.flatcar-212911.internal" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510-3-3-5ab81259ac89653c3ce9.c.flatcar-212911.internal' and this object Apr 12 18:46:36.748048 kubelet[2038]: E0412 18:46:36.748027 2038 reflector.go:148] object-"kube-system"/"cilium-clustermesh": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ci-3510-3-3-5ab81259ac89653c3ce9.c.flatcar-212911.internal" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510-3-3-5ab81259ac89653c3ce9.c.flatcar-212911.internal' and this object Apr 12 18:46:36.748248 kubelet[2038]: W0412 18:46:36.748225 2038 reflector.go:533] object-"kube-system"/"cilium-ipsec-keys": failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:ci-3510-3-3-5ab81259ac89653c3ce9.c.flatcar-212911.internal" cannot 
list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510-3-3-5ab81259ac89653c3ce9.c.flatcar-212911.internal' and this object Apr 12 18:46:36.748488 kubelet[2038]: E0412 18:46:36.748454 2038 reflector.go:148] object-"kube-system"/"cilium-ipsec-keys": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:ci-3510-3-3-5ab81259ac89653c3ce9.c.flatcar-212911.internal" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510-3-3-5ab81259ac89653c3ce9.c.flatcar-212911.internal' and this object Apr 12 18:46:36.748715 kubelet[2038]: W0412 18:46:36.748688 2038 reflector.go:533] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ci-3510-3-3-5ab81259ac89653c3ce9.c.flatcar-212911.internal" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510-3-3-5ab81259ac89653c3ce9.c.flatcar-212911.internal' and this object Apr 12 18:46:36.748858 kubelet[2038]: E0412 18:46:36.748838 2038 reflector.go:148] object-"kube-system"/"hubble-server-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ci-3510-3-3-5ab81259ac89653c3ce9.c.flatcar-212911.internal" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510-3-3-5ab81259ac89653c3ce9.c.flatcar-212911.internal' and this object Apr 12 18:46:36.761625 systemd[1]: Started sshd@24-10.128.0.15:22-139.178.89.65:39388.service. 
Apr 12 18:46:37.121170 sshd[3743]: Accepted publickey for core from 139.178.89.65 port 39388 ssh2: RSA SHA256:XKFMBFadJXSMDH/AGu0Wp1o8GIJqXXW9iA48JbQvGEs Apr 12 18:46:37.123223 sshd[3743]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 18:46:37.130746 systemd[1]: Started session-23.scope. Apr 12 18:46:37.131384 systemd-logind[1124]: New session 23 of user core. Apr 12 18:46:37.469399 sshd[3743]: pam_unix(sshd:session): session closed for user core Apr 12 18:46:37.474164 systemd[1]: sshd@24-10.128.0.15:22-139.178.89.65:39388.service: Deactivated successfully. Apr 12 18:46:37.477208 systemd[1]: session-23.scope: Deactivated successfully. Apr 12 18:46:37.478301 systemd-logind[1124]: Session 23 logged out. Waiting for processes to exit. Apr 12 18:46:37.479693 systemd-logind[1124]: Removed session 23. Apr 12 18:46:37.526326 systemd[1]: Started sshd@25-10.128.0.15:22-139.178.89.65:39896.service. Apr 12 18:46:37.846149 kubelet[2038]: E0412 18:46:37.846097 2038 projected.go:267] Couldn't get secret kube-system/hubble-server-certs: failed to sync secret cache: timed out waiting for the condition Apr 12 18:46:37.846725 kubelet[2038]: E0412 18:46:37.846180 2038 projected.go:198] Error preparing data for projected volume hubble-tls for pod kube-system/cilium-dql9x: failed to sync secret cache: timed out waiting for the condition Apr 12 18:46:37.846725 kubelet[2038]: E0412 18:46:37.846272 2038 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ea3db601-8d28-40a2-9c98-cbcacc32867d-hubble-tls podName:ea3db601-8d28-40a2-9c98-cbcacc32867d nodeName:}" failed. No retries permitted until 2024-04-12 18:46:38.346246061 +0000 UTC m=+122.489636482 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "hubble-tls" (UniqueName: "kubernetes.io/projected/ea3db601-8d28-40a2-9c98-cbcacc32867d-hubble-tls") pod "cilium-dql9x" (UID: "ea3db601-8d28-40a2-9c98-cbcacc32867d") : failed to sync secret cache: timed out waiting for the condition Apr 12 18:46:37.847372 kubelet[2038]: E0412 18:46:37.846100 2038 secret.go:194] Couldn't get secret kube-system/cilium-ipsec-keys: failed to sync secret cache: timed out waiting for the condition Apr 12 18:46:37.847372 kubelet[2038]: E0412 18:46:37.847147 2038 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ea3db601-8d28-40a2-9c98-cbcacc32867d-cilium-ipsec-secrets podName:ea3db601-8d28-40a2-9c98-cbcacc32867d nodeName:}" failed. No retries permitted until 2024-04-12 18:46:38.347119112 +0000 UTC m=+122.490509534 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-ipsec-secrets" (UniqueName: "kubernetes.io/secret/ea3db601-8d28-40a2-9c98-cbcacc32867d-cilium-ipsec-secrets") pod "cilium-dql9x" (UID: "ea3db601-8d28-40a2-9c98-cbcacc32867d") : failed to sync secret cache: timed out waiting for the condition Apr 12 18:46:37.847372 kubelet[2038]: E0412 18:46:37.846121 2038 configmap.go:199] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition Apr 12 18:46:37.847372 kubelet[2038]: E0412 18:46:37.847217 2038 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ea3db601-8d28-40a2-9c98-cbcacc32867d-cilium-config-path podName:ea3db601-8d28-40a2-9c98-cbcacc32867d nodeName:}" failed. No retries permitted until 2024-04-12 18:46:38.347204304 +0000 UTC m=+122.490594723 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/ea3db601-8d28-40a2-9c98-cbcacc32867d-cilium-config-path") pod "cilium-dql9x" (UID: "ea3db601-8d28-40a2-9c98-cbcacc32867d") : failed to sync configmap cache: timed out waiting for the condition Apr 12 18:46:37.847372 kubelet[2038]: E0412 18:46:37.846133 2038 secret.go:194] Couldn't get secret kube-system/cilium-clustermesh: failed to sync secret cache: timed out waiting for the condition Apr 12 18:46:37.847817 kubelet[2038]: E0412 18:46:37.847261 2038 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ea3db601-8d28-40a2-9c98-cbcacc32867d-clustermesh-secrets podName:ea3db601-8d28-40a2-9c98-cbcacc32867d nodeName:}" failed. No retries permitted until 2024-04-12 18:46:38.347249088 +0000 UTC m=+122.490639501 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "clustermesh-secrets" (UniqueName: "kubernetes.io/secret/ea3db601-8d28-40a2-9c98-cbcacc32867d-clustermesh-secrets") pod "cilium-dql9x" (UID: "ea3db601-8d28-40a2-9c98-cbcacc32867d") : failed to sync secret cache: timed out waiting for the condition Apr 12 18:46:37.870735 sshd[3755]: Accepted publickey for core from 139.178.89.65 port 39896 ssh2: RSA SHA256:XKFMBFadJXSMDH/AGu0Wp1o8GIJqXXW9iA48JbQvGEs Apr 12 18:46:37.872743 sshd[3755]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 18:46:37.879897 systemd[1]: Started session-24.scope. Apr 12 18:46:37.880540 systemd-logind[1124]: New session 24 of user core. Apr 12 18:46:38.551829 env[1143]: time="2024-04-12T18:46:38.551755795Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-dql9x,Uid:ea3db601-8d28-40a2-9c98-cbcacc32867d,Namespace:kube-system,Attempt:0,}" Apr 12 18:46:38.580505 env[1143]: time="2024-04-12T18:46:38.580242527Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 12 18:46:38.580505 env[1143]: time="2024-04-12T18:46:38.580292009Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 12 18:46:38.580505 env[1143]: time="2024-04-12T18:46:38.580304426Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 12 18:46:38.581248 env[1143]: time="2024-04-12T18:46:38.581182796Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a3ab27e05ac72706c699bea058962479ad76e4a572e7031df1069a6ec68b4e97 pid=3774 runtime=io.containerd.runc.v2 Apr 12 18:46:38.611559 systemd[1]: Started cri-containerd-a3ab27e05ac72706c699bea058962479ad76e4a572e7031df1069a6ec68b4e97.scope. Apr 12 18:46:38.655849 env[1143]: time="2024-04-12T18:46:38.655288200Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-dql9x,Uid:ea3db601-8d28-40a2-9c98-cbcacc32867d,Namespace:kube-system,Attempt:0,} returns sandbox id \"a3ab27e05ac72706c699bea058962479ad76e4a572e7031df1069a6ec68b4e97\"" Apr 12 18:46:38.661976 env[1143]: time="2024-04-12T18:46:38.661896047Z" level=info msg="CreateContainer within sandbox \"a3ab27e05ac72706c699bea058962479ad76e4a572e7031df1069a6ec68b4e97\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Apr 12 18:46:38.681278 env[1143]: time="2024-04-12T18:46:38.681208603Z" level=info msg="CreateContainer within sandbox \"a3ab27e05ac72706c699bea058962479ad76e4a572e7031df1069a6ec68b4e97\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"8884896b63fc4d46a69de04a64005d9e02c65f66303f7cffb9d6e0502bce967d\"" Apr 12 18:46:38.683245 env[1143]: time="2024-04-12T18:46:38.683199453Z" level=info msg="StartContainer for \"8884896b63fc4d46a69de04a64005d9e02c65f66303f7cffb9d6e0502bce967d\"" Apr 12 18:46:38.707232 systemd[1]: Started 
cri-containerd-8884896b63fc4d46a69de04a64005d9e02c65f66303f7cffb9d6e0502bce967d.scope. Apr 12 18:46:38.735041 systemd[1]: cri-containerd-8884896b63fc4d46a69de04a64005d9e02c65f66303f7cffb9d6e0502bce967d.scope: Deactivated successfully. Apr 12 18:46:38.763112 env[1143]: time="2024-04-12T18:46:38.763038546Z" level=info msg="shim disconnected" id=8884896b63fc4d46a69de04a64005d9e02c65f66303f7cffb9d6e0502bce967d Apr 12 18:46:38.763112 env[1143]: time="2024-04-12T18:46:38.763112477Z" level=warning msg="cleaning up after shim disconnected" id=8884896b63fc4d46a69de04a64005d9e02c65f66303f7cffb9d6e0502bce967d namespace=k8s.io Apr 12 18:46:38.763112 env[1143]: time="2024-04-12T18:46:38.763129150Z" level=info msg="cleaning up dead shim" Apr 12 18:46:38.775392 env[1143]: time="2024-04-12T18:46:38.775299466Z" level=warning msg="cleanup warnings time=\"2024-04-12T18:46:38Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3836 runtime=io.containerd.runc.v2\ntime=\"2024-04-12T18:46:38Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/8884896b63fc4d46a69de04a64005d9e02c65f66303f7cffb9d6e0502bce967d/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Apr 12 18:46:38.775848 env[1143]: time="2024-04-12T18:46:38.775699741Z" level=error msg="copy shim log" error="read /proc/self/fd/41: file already closed" Apr 12 18:46:38.776199 env[1143]: time="2024-04-12T18:46:38.776143035Z" level=error msg="Failed to pipe stderr of container \"8884896b63fc4d46a69de04a64005d9e02c65f66303f7cffb9d6e0502bce967d\"" error="reading from a closed fifo" Apr 12 18:46:38.776387 env[1143]: time="2024-04-12T18:46:38.776209442Z" level=error msg="Failed to pipe stdout of container \"8884896b63fc4d46a69de04a64005d9e02c65f66303f7cffb9d6e0502bce967d\"" error="reading from a closed fifo" Apr 12 18:46:38.779117 env[1143]: time="2024-04-12T18:46:38.779023631Z" level=error msg="StartContainer for 
\"8884896b63fc4d46a69de04a64005d9e02c65f66303f7cffb9d6e0502bce967d\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Apr 12 18:46:38.779704 kubelet[2038]: E0412 18:46:38.779403 2038 remote_runtime.go:326] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="8884896b63fc4d46a69de04a64005d9e02c65f66303f7cffb9d6e0502bce967d" Apr 12 18:46:38.779704 kubelet[2038]: E0412 18:46:38.779568 2038 kuberuntime_manager.go:1212] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Apr 12 18:46:38.779704 kubelet[2038]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Apr 12 18:46:38.779704 kubelet[2038]: rm /hostbin/cilium-mount Apr 12 18:46:38.781959 kubelet[2038]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-8dr2z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},} start failed in pod cilium-dql9x_kube-system(ea3db601-8d28-40a2-9c98-cbcacc32867d): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Apr 12 18:46:38.782115 kubelet[2038]: E0412 18:46:38.779627 2038 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: 
unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-dql9x" podUID=ea3db601-8d28-40a2-9c98-cbcacc32867d Apr 12 18:46:38.866569 kubelet[2038]: I0412 18:46:38.866539 2038 setters.go:548] "Node became not ready" node="ci-3510-3-3-5ab81259ac89653c3ce9.c.flatcar-212911.internal" condition={Type:Ready Status:False LastHeartbeatTime:2024-04-12 18:46:38.866449566 +0000 UTC m=+123.009839980 LastTransitionTime:2024-04-12 18:46:38.866449566 +0000 UTC m=+123.009839980 Reason:KubeletNotReady Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized} Apr 12 18:46:39.366979 systemd[1]: run-containerd-runc-k8s.io-a3ab27e05ac72706c699bea058962479ad76e4a572e7031df1069a6ec68b4e97-runc.g3yLLe.mount: Deactivated successfully. Apr 12 18:46:39.632363 env[1143]: time="2024-04-12T18:46:39.631954034Z" level=info msg="StopPodSandbox for \"a3ab27e05ac72706c699bea058962479ad76e4a572e7031df1069a6ec68b4e97\"" Apr 12 18:46:39.633125 env[1143]: time="2024-04-12T18:46:39.633090234Z" level=info msg="Container to stop \"8884896b63fc4d46a69de04a64005d9e02c65f66303f7cffb9d6e0502bce967d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 12 18:46:39.636197 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a3ab27e05ac72706c699bea058962479ad76e4a572e7031df1069a6ec68b4e97-shm.mount: Deactivated successfully. Apr 12 18:46:39.647020 systemd[1]: cri-containerd-a3ab27e05ac72706c699bea058962479ad76e4a572e7031df1069a6ec68b4e97.scope: Deactivated successfully. Apr 12 18:46:39.702353 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a3ab27e05ac72706c699bea058962479ad76e4a572e7031df1069a6ec68b4e97-rootfs.mount: Deactivated successfully. 
Apr 12 18:46:39.710367 env[1143]: time="2024-04-12T18:46:39.710306951Z" level=info msg="shim disconnected" id=a3ab27e05ac72706c699bea058962479ad76e4a572e7031df1069a6ec68b4e97 Apr 12 18:46:39.710786 env[1143]: time="2024-04-12T18:46:39.710762018Z" level=warning msg="cleaning up after shim disconnected" id=a3ab27e05ac72706c699bea058962479ad76e4a572e7031df1069a6ec68b4e97 namespace=k8s.io Apr 12 18:46:39.711097 env[1143]: time="2024-04-12T18:46:39.711071697Z" level=info msg="cleaning up dead shim" Apr 12 18:46:39.725430 env[1143]: time="2024-04-12T18:46:39.725355091Z" level=warning msg="cleanup warnings time=\"2024-04-12T18:46:39Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3865 runtime=io.containerd.runc.v2\n" Apr 12 18:46:39.725946 env[1143]: time="2024-04-12T18:46:39.725818863Z" level=info msg="TearDown network for sandbox \"a3ab27e05ac72706c699bea058962479ad76e4a572e7031df1069a6ec68b4e97\" successfully" Apr 12 18:46:39.726109 env[1143]: time="2024-04-12T18:46:39.725957652Z" level=info msg="StopPodSandbox for \"a3ab27e05ac72706c699bea058962479ad76e4a572e7031df1069a6ec68b4e97\" returns successfully" Apr 12 18:46:39.769842 kubelet[2038]: I0412 18:46:39.769751 2038 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ea3db601-8d28-40a2-9c98-cbcacc32867d-hubble-tls\") pod \"ea3db601-8d28-40a2-9c98-cbcacc32867d\" (UID: \"ea3db601-8d28-40a2-9c98-cbcacc32867d\") " Apr 12 18:46:39.769842 kubelet[2038]: I0412 18:46:39.769811 2038 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ea3db601-8d28-40a2-9c98-cbcacc32867d-cilium-run\") pod \"ea3db601-8d28-40a2-9c98-cbcacc32867d\" (UID: \"ea3db601-8d28-40a2-9c98-cbcacc32867d\") " Apr 12 18:46:39.769842 kubelet[2038]: I0412 18:46:39.769841 2038 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/ea3db601-8d28-40a2-9c98-cbcacc32867d-xtables-lock\") pod \"ea3db601-8d28-40a2-9c98-cbcacc32867d\" (UID: \"ea3db601-8d28-40a2-9c98-cbcacc32867d\") " Apr 12 18:46:39.770297 kubelet[2038]: I0412 18:46:39.769881 2038 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ea3db601-8d28-40a2-9c98-cbcacc32867d-etc-cni-netd\") pod \"ea3db601-8d28-40a2-9c98-cbcacc32867d\" (UID: \"ea3db601-8d28-40a2-9c98-cbcacc32867d\") " Apr 12 18:46:39.770297 kubelet[2038]: I0412 18:46:39.769928 2038 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ea3db601-8d28-40a2-9c98-cbcacc32867d-bpf-maps\") pod \"ea3db601-8d28-40a2-9c98-cbcacc32867d\" (UID: \"ea3db601-8d28-40a2-9c98-cbcacc32867d\") " Apr 12 18:46:39.770297 kubelet[2038]: I0412 18:46:39.769965 2038 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ea3db601-8d28-40a2-9c98-cbcacc32867d-cilium-config-path\") pod \"ea3db601-8d28-40a2-9c98-cbcacc32867d\" (UID: \"ea3db601-8d28-40a2-9c98-cbcacc32867d\") " Apr 12 18:46:39.770297 kubelet[2038]: I0412 18:46:39.770002 2038 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/ea3db601-8d28-40a2-9c98-cbcacc32867d-cilium-ipsec-secrets\") pod \"ea3db601-8d28-40a2-9c98-cbcacc32867d\" (UID: \"ea3db601-8d28-40a2-9c98-cbcacc32867d\") " Apr 12 18:46:39.770297 kubelet[2038]: I0412 18:46:39.770038 2038 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ea3db601-8d28-40a2-9c98-cbcacc32867d-cni-path\") pod \"ea3db601-8d28-40a2-9c98-cbcacc32867d\" (UID: \"ea3db601-8d28-40a2-9c98-cbcacc32867d\") " Apr 12 18:46:39.770297 kubelet[2038]: I0412 18:46:39.770070 2038 
reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ea3db601-8d28-40a2-9c98-cbcacc32867d-host-proc-sys-kernel\") pod \"ea3db601-8d28-40a2-9c98-cbcacc32867d\" (UID: \"ea3db601-8d28-40a2-9c98-cbcacc32867d\") " Apr 12 18:46:39.770630 kubelet[2038]: I0412 18:46:39.770099 2038 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ea3db601-8d28-40a2-9c98-cbcacc32867d-cilium-cgroup\") pod \"ea3db601-8d28-40a2-9c98-cbcacc32867d\" (UID: \"ea3db601-8d28-40a2-9c98-cbcacc32867d\") " Apr 12 18:46:39.770630 kubelet[2038]: I0412 18:46:39.770136 2038 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ea3db601-8d28-40a2-9c98-cbcacc32867d-clustermesh-secrets\") pod \"ea3db601-8d28-40a2-9c98-cbcacc32867d\" (UID: \"ea3db601-8d28-40a2-9c98-cbcacc32867d\") " Apr 12 18:46:39.770630 kubelet[2038]: I0412 18:46:39.770233 2038 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ea3db601-8d28-40a2-9c98-cbcacc32867d-lib-modules\") pod \"ea3db601-8d28-40a2-9c98-cbcacc32867d\" (UID: \"ea3db601-8d28-40a2-9c98-cbcacc32867d\") " Apr 12 18:46:39.770630 kubelet[2038]: I0412 18:46:39.770288 2038 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8dr2z\" (UniqueName: \"kubernetes.io/projected/ea3db601-8d28-40a2-9c98-cbcacc32867d-kube-api-access-8dr2z\") pod \"ea3db601-8d28-40a2-9c98-cbcacc32867d\" (UID: \"ea3db601-8d28-40a2-9c98-cbcacc32867d\") " Apr 12 18:46:39.770630 kubelet[2038]: I0412 18:46:39.770320 2038 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ea3db601-8d28-40a2-9c98-cbcacc32867d-hostproc\") pod 
\"ea3db601-8d28-40a2-9c98-cbcacc32867d\" (UID: \"ea3db601-8d28-40a2-9c98-cbcacc32867d\") " Apr 12 18:46:39.770630 kubelet[2038]: I0412 18:46:39.770353 2038 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ea3db601-8d28-40a2-9c98-cbcacc32867d-host-proc-sys-net\") pod \"ea3db601-8d28-40a2-9c98-cbcacc32867d\" (UID: \"ea3db601-8d28-40a2-9c98-cbcacc32867d\") " Apr 12 18:46:39.770993 kubelet[2038]: I0412 18:46:39.770450 2038 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ea3db601-8d28-40a2-9c98-cbcacc32867d-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "ea3db601-8d28-40a2-9c98-cbcacc32867d" (UID: "ea3db601-8d28-40a2-9c98-cbcacc32867d"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 12 18:46:39.772930 kubelet[2038]: I0412 18:46:39.771107 2038 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ea3db601-8d28-40a2-9c98-cbcacc32867d-cni-path" (OuterVolumeSpecName: "cni-path") pod "ea3db601-8d28-40a2-9c98-cbcacc32867d" (UID: "ea3db601-8d28-40a2-9c98-cbcacc32867d"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 12 18:46:39.772930 kubelet[2038]: I0412 18:46:39.771159 2038 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ea3db601-8d28-40a2-9c98-cbcacc32867d-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "ea3db601-8d28-40a2-9c98-cbcacc32867d" (UID: "ea3db601-8d28-40a2-9c98-cbcacc32867d"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 12 18:46:39.772930 kubelet[2038]: I0412 18:46:39.771186 2038 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ea3db601-8d28-40a2-9c98-cbcacc32867d-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "ea3db601-8d28-40a2-9c98-cbcacc32867d" (UID: "ea3db601-8d28-40a2-9c98-cbcacc32867d"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 12 18:46:39.772930 kubelet[2038]: I0412 18:46:39.771212 2038 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ea3db601-8d28-40a2-9c98-cbcacc32867d-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "ea3db601-8d28-40a2-9c98-cbcacc32867d" (UID: "ea3db601-8d28-40a2-9c98-cbcacc32867d"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 12 18:46:39.772930 kubelet[2038]: I0412 18:46:39.771237 2038 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ea3db601-8d28-40a2-9c98-cbcacc32867d-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "ea3db601-8d28-40a2-9c98-cbcacc32867d" (UID: "ea3db601-8d28-40a2-9c98-cbcacc32867d"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 12 18:46:39.773310 kubelet[2038]: W0412 18:46:39.771447 2038 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/ea3db601-8d28-40a2-9c98-cbcacc32867d/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Apr 12 18:46:39.777382 systemd[1]: var-lib-kubelet-pods-ea3db601\x2d8d28\x2d40a2\x2d9c98\x2dcbcacc32867d-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Apr 12 18:46:39.780952 kubelet[2038]: I0412 18:46:39.778352 2038 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea3db601-8d28-40a2-9c98-cbcacc32867d-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "ea3db601-8d28-40a2-9c98-cbcacc32867d" (UID: "ea3db601-8d28-40a2-9c98-cbcacc32867d"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 12 18:46:39.780952 kubelet[2038]: I0412 18:46:39.778420 2038 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ea3db601-8d28-40a2-9c98-cbcacc32867d-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "ea3db601-8d28-40a2-9c98-cbcacc32867d" (UID: "ea3db601-8d28-40a2-9c98-cbcacc32867d"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 12 18:46:39.780952 kubelet[2038]: I0412 18:46:39.778454 2038 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ea3db601-8d28-40a2-9c98-cbcacc32867d-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "ea3db601-8d28-40a2-9c98-cbcacc32867d" (UID: "ea3db601-8d28-40a2-9c98-cbcacc32867d"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 12 18:46:39.781474 kubelet[2038]: I0412 18:46:39.781399 2038 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ea3db601-8d28-40a2-9c98-cbcacc32867d-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "ea3db601-8d28-40a2-9c98-cbcacc32867d" (UID: "ea3db601-8d28-40a2-9c98-cbcacc32867d"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 12 18:46:39.782477 kubelet[2038]: I0412 18:46:39.782438 2038 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ea3db601-8d28-40a2-9c98-cbcacc32867d-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "ea3db601-8d28-40a2-9c98-cbcacc32867d" (UID: "ea3db601-8d28-40a2-9c98-cbcacc32867d"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 12 18:46:39.782615 kubelet[2038]: I0412 18:46:39.782510 2038 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ea3db601-8d28-40a2-9c98-cbcacc32867d-hostproc" (OuterVolumeSpecName: "hostproc") pod "ea3db601-8d28-40a2-9c98-cbcacc32867d" (UID: "ea3db601-8d28-40a2-9c98-cbcacc32867d"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 12 18:46:39.786940 systemd[1]: var-lib-kubelet-pods-ea3db601\x2d8d28\x2d40a2\x2d9c98\x2dcbcacc32867d-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Apr 12 18:46:39.788122 kubelet[2038]: I0412 18:46:39.787747 2038 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea3db601-8d28-40a2-9c98-cbcacc32867d-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "ea3db601-8d28-40a2-9c98-cbcacc32867d" (UID: "ea3db601-8d28-40a2-9c98-cbcacc32867d"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 12 18:46:39.792136 kubelet[2038]: I0412 18:46:39.792075 2038 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea3db601-8d28-40a2-9c98-cbcacc32867d-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "ea3db601-8d28-40a2-9c98-cbcacc32867d" (UID: "ea3db601-8d28-40a2-9c98-cbcacc32867d"). InnerVolumeSpecName "cilium-ipsec-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 12 18:46:39.792277 kubelet[2038]: I0412 18:46:39.792161 2038 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea3db601-8d28-40a2-9c98-cbcacc32867d-kube-api-access-8dr2z" (OuterVolumeSpecName: "kube-api-access-8dr2z") pod "ea3db601-8d28-40a2-9c98-cbcacc32867d" (UID: "ea3db601-8d28-40a2-9c98-cbcacc32867d"). InnerVolumeSpecName "kube-api-access-8dr2z". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 12 18:46:39.870699 kubelet[2038]: I0412 18:46:39.870624 2038 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ea3db601-8d28-40a2-9c98-cbcacc32867d-clustermesh-secrets\") on node \"ci-3510-3-3-5ab81259ac89653c3ce9.c.flatcar-212911.internal\" DevicePath \"\"" Apr 12 18:46:39.870699 kubelet[2038]: I0412 18:46:39.870687 2038 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ea3db601-8d28-40a2-9c98-cbcacc32867d-lib-modules\") on node \"ci-3510-3-3-5ab81259ac89653c3ce9.c.flatcar-212911.internal\" DevicePath \"\"" Apr 12 18:46:39.870699 kubelet[2038]: I0412 18:46:39.870710 2038 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-8dr2z\" (UniqueName: \"kubernetes.io/projected/ea3db601-8d28-40a2-9c98-cbcacc32867d-kube-api-access-8dr2z\") on node \"ci-3510-3-3-5ab81259ac89653c3ce9.c.flatcar-212911.internal\" DevicePath \"\"" Apr 12 18:46:39.871359 kubelet[2038]: I0412 18:46:39.870728 2038 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ea3db601-8d28-40a2-9c98-cbcacc32867d-hostproc\") on node \"ci-3510-3-3-5ab81259ac89653c3ce9.c.flatcar-212911.internal\" DevicePath \"\"" Apr 12 18:46:39.871359 kubelet[2038]: I0412 18:46:39.870748 2038 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: 
\"kubernetes.io/host-path/ea3db601-8d28-40a2-9c98-cbcacc32867d-host-proc-sys-net\") on node \"ci-3510-3-3-5ab81259ac89653c3ce9.c.flatcar-212911.internal\" DevicePath \"\"" Apr 12 18:46:39.871359 kubelet[2038]: I0412 18:46:39.870765 2038 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ea3db601-8d28-40a2-9c98-cbcacc32867d-xtables-lock\") on node \"ci-3510-3-3-5ab81259ac89653c3ce9.c.flatcar-212911.internal\" DevicePath \"\"" Apr 12 18:46:39.871359 kubelet[2038]: I0412 18:46:39.870783 2038 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ea3db601-8d28-40a2-9c98-cbcacc32867d-hubble-tls\") on node \"ci-3510-3-3-5ab81259ac89653c3ce9.c.flatcar-212911.internal\" DevicePath \"\"" Apr 12 18:46:39.871359 kubelet[2038]: I0412 18:46:39.870800 2038 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ea3db601-8d28-40a2-9c98-cbcacc32867d-cilium-run\") on node \"ci-3510-3-3-5ab81259ac89653c3ce9.c.flatcar-212911.internal\" DevicePath \"\"" Apr 12 18:46:39.871359 kubelet[2038]: I0412 18:46:39.870819 2038 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ea3db601-8d28-40a2-9c98-cbcacc32867d-cilium-config-path\") on node \"ci-3510-3-3-5ab81259ac89653c3ce9.c.flatcar-212911.internal\" DevicePath \"\"" Apr 12 18:46:39.871359 kubelet[2038]: I0412 18:46:39.870840 2038 reconciler_common.go:300] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/ea3db601-8d28-40a2-9c98-cbcacc32867d-cilium-ipsec-secrets\") on node \"ci-3510-3-3-5ab81259ac89653c3ce9.c.flatcar-212911.internal\" DevicePath \"\"" Apr 12 18:46:39.871636 kubelet[2038]: I0412 18:46:39.870858 2038 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ea3db601-8d28-40a2-9c98-cbcacc32867d-etc-cni-netd\") on 
node \"ci-3510-3-3-5ab81259ac89653c3ce9.c.flatcar-212911.internal\" DevicePath \"\"" Apr 12 18:46:39.871636 kubelet[2038]: I0412 18:46:39.870880 2038 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ea3db601-8d28-40a2-9c98-cbcacc32867d-bpf-maps\") on node \"ci-3510-3-3-5ab81259ac89653c3ce9.c.flatcar-212911.internal\" DevicePath \"\"" Apr 12 18:46:39.871636 kubelet[2038]: I0412 18:46:39.870938 2038 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ea3db601-8d28-40a2-9c98-cbcacc32867d-cni-path\") on node \"ci-3510-3-3-5ab81259ac89653c3ce9.c.flatcar-212911.internal\" DevicePath \"\"" Apr 12 18:46:39.871636 kubelet[2038]: I0412 18:46:39.870961 2038 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ea3db601-8d28-40a2-9c98-cbcacc32867d-host-proc-sys-kernel\") on node \"ci-3510-3-3-5ab81259ac89653c3ce9.c.flatcar-212911.internal\" DevicePath \"\"" Apr 12 18:46:39.871636 kubelet[2038]: I0412 18:46:39.870983 2038 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ea3db601-8d28-40a2-9c98-cbcacc32867d-cilium-cgroup\") on node \"ci-3510-3-3-5ab81259ac89653c3ce9.c.flatcar-212911.internal\" DevicePath \"\"" Apr 12 18:46:40.173100 systemd[1]: Removed slice kubepods-burstable-podea3db601_8d28_40a2_9c98_cbcacc32867d.slice. Apr 12 18:46:40.367163 systemd[1]: var-lib-kubelet-pods-ea3db601\x2d8d28\x2d40a2\x2d9c98\x2dcbcacc32867d-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Apr 12 18:46:40.367342 systemd[1]: var-lib-kubelet-pods-ea3db601\x2d8d28\x2d40a2\x2d9c98\x2dcbcacc32867d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d8dr2z.mount: Deactivated successfully. 
Apr 12 18:46:40.636010 kubelet[2038]: I0412 18:46:40.635974 2038 scope.go:115] "RemoveContainer" containerID="8884896b63fc4d46a69de04a64005d9e02c65f66303f7cffb9d6e0502bce967d" Apr 12 18:46:40.638605 env[1143]: time="2024-04-12T18:46:40.638541404Z" level=info msg="RemoveContainer for \"8884896b63fc4d46a69de04a64005d9e02c65f66303f7cffb9d6e0502bce967d\"" Apr 12 18:46:40.646063 env[1143]: time="2024-04-12T18:46:40.646005890Z" level=info msg="RemoveContainer for \"8884896b63fc4d46a69de04a64005d9e02c65f66303f7cffb9d6e0502bce967d\" returns successfully" Apr 12 18:46:40.707306 kubelet[2038]: I0412 18:46:40.707254 2038 topology_manager.go:212] "Topology Admit Handler" Apr 12 18:46:40.707549 kubelet[2038]: E0412 18:46:40.707360 2038 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ea3db601-8d28-40a2-9c98-cbcacc32867d" containerName="mount-cgroup" Apr 12 18:46:40.707549 kubelet[2038]: I0412 18:46:40.707396 2038 memory_manager.go:346] "RemoveStaleState removing state" podUID="ea3db601-8d28-40a2-9c98-cbcacc32867d" containerName="mount-cgroup" Apr 12 18:46:40.719466 systemd[1]: Created slice kubepods-burstable-pod9bc2a25e_e50d_4d58_8638_989437096c4d.slice. 
Apr 12 18:46:40.777871 kubelet[2038]: I0412 18:46:40.777821 2038 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9bc2a25e-e50d-4d58-8638-989437096c4d-hubble-tls\") pod \"cilium-j4bfx\" (UID: \"9bc2a25e-e50d-4d58-8638-989437096c4d\") " pod="kube-system/cilium-j4bfx" Apr 12 18:46:40.778139 kubelet[2038]: I0412 18:46:40.777965 2038 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9bc2a25e-e50d-4d58-8638-989437096c4d-bpf-maps\") pod \"cilium-j4bfx\" (UID: \"9bc2a25e-e50d-4d58-8638-989437096c4d\") " pod="kube-system/cilium-j4bfx" Apr 12 18:46:40.778139 kubelet[2038]: I0412 18:46:40.778080 2038 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9bc2a25e-e50d-4d58-8638-989437096c4d-hostproc\") pod \"cilium-j4bfx\" (UID: \"9bc2a25e-e50d-4d58-8638-989437096c4d\") " pod="kube-system/cilium-j4bfx" Apr 12 18:46:40.778295 kubelet[2038]: I0412 18:46:40.778156 2038 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j2qh5\" (UniqueName: \"kubernetes.io/projected/9bc2a25e-e50d-4d58-8638-989437096c4d-kube-api-access-j2qh5\") pod \"cilium-j4bfx\" (UID: \"9bc2a25e-e50d-4d58-8638-989437096c4d\") " pod="kube-system/cilium-j4bfx" Apr 12 18:46:40.778295 kubelet[2038]: I0412 18:46:40.778252 2038 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9bc2a25e-e50d-4d58-8638-989437096c4d-clustermesh-secrets\") pod \"cilium-j4bfx\" (UID: \"9bc2a25e-e50d-4d58-8638-989437096c4d\") " pod="kube-system/cilium-j4bfx" Apr 12 18:46:40.778417 kubelet[2038]: I0412 18:46:40.778331 2038 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9bc2a25e-e50d-4d58-8638-989437096c4d-cilium-run\") pod \"cilium-j4bfx\" (UID: \"9bc2a25e-e50d-4d58-8638-989437096c4d\") " pod="kube-system/cilium-j4bfx" Apr 12 18:46:40.778417 kubelet[2038]: I0412 18:46:40.778403 2038 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9bc2a25e-e50d-4d58-8638-989437096c4d-lib-modules\") pod \"cilium-j4bfx\" (UID: \"9bc2a25e-e50d-4d58-8638-989437096c4d\") " pod="kube-system/cilium-j4bfx" Apr 12 18:46:40.778542 kubelet[2038]: I0412 18:46:40.778440 2038 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9bc2a25e-e50d-4d58-8638-989437096c4d-host-proc-sys-kernel\") pod \"cilium-j4bfx\" (UID: \"9bc2a25e-e50d-4d58-8638-989437096c4d\") " pod="kube-system/cilium-j4bfx" Apr 12 18:46:40.778542 kubelet[2038]: I0412 18:46:40.778509 2038 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9bc2a25e-e50d-4d58-8638-989437096c4d-cilium-cgroup\") pod \"cilium-j4bfx\" (UID: \"9bc2a25e-e50d-4d58-8638-989437096c4d\") " pod="kube-system/cilium-j4bfx" Apr 12 18:46:40.778677 kubelet[2038]: I0412 18:46:40.778616 2038 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9bc2a25e-e50d-4d58-8638-989437096c4d-xtables-lock\") pod \"cilium-j4bfx\" (UID: \"9bc2a25e-e50d-4d58-8638-989437096c4d\") " pod="kube-system/cilium-j4bfx" Apr 12 18:46:40.778772 kubelet[2038]: I0412 18:46:40.778718 2038 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: 
\"kubernetes.io/host-path/9bc2a25e-e50d-4d58-8638-989437096c4d-cni-path\") pod \"cilium-j4bfx\" (UID: \"9bc2a25e-e50d-4d58-8638-989437096c4d\") " pod="kube-system/cilium-j4bfx" Apr 12 18:46:40.778860 kubelet[2038]: I0412 18:46:40.778790 2038 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/9bc2a25e-e50d-4d58-8638-989437096c4d-cilium-ipsec-secrets\") pod \"cilium-j4bfx\" (UID: \"9bc2a25e-e50d-4d58-8638-989437096c4d\") " pod="kube-system/cilium-j4bfx" Apr 12 18:46:40.778959 kubelet[2038]: I0412 18:46:40.778873 2038 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9bc2a25e-e50d-4d58-8638-989437096c4d-etc-cni-netd\") pod \"cilium-j4bfx\" (UID: \"9bc2a25e-e50d-4d58-8638-989437096c4d\") " pod="kube-system/cilium-j4bfx" Apr 12 18:46:40.779025 kubelet[2038]: I0412 18:46:40.778960 2038 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9bc2a25e-e50d-4d58-8638-989437096c4d-cilium-config-path\") pod \"cilium-j4bfx\" (UID: \"9bc2a25e-e50d-4d58-8638-989437096c4d\") " pod="kube-system/cilium-j4bfx" Apr 12 18:46:40.779099 kubelet[2038]: I0412 18:46:40.779040 2038 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9bc2a25e-e50d-4d58-8638-989437096c4d-host-proc-sys-net\") pod \"cilium-j4bfx\" (UID: \"9bc2a25e-e50d-4d58-8638-989437096c4d\") " pod="kube-system/cilium-j4bfx" Apr 12 18:46:41.026896 env[1143]: time="2024-04-12T18:46:41.026738029Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-j4bfx,Uid:9bc2a25e-e50d-4d58-8638-989437096c4d,Namespace:kube-system,Attempt:0,}" Apr 12 18:46:41.049002 env[1143]: time="2024-04-12T18:46:41.048635050Z" level=info 
msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 12 18:46:41.049002 env[1143]: time="2024-04-12T18:46:41.048711516Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 12 18:46:41.049002 env[1143]: time="2024-04-12T18:46:41.048734646Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 12 18:46:41.049357 env[1143]: time="2024-04-12T18:46:41.049062537Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ca288d93234a9c09bf2be45c4b38ac4d20016bf00a509d0ee3fc35e7c14d9c91 pid=3894 runtime=io.containerd.runc.v2 Apr 12 18:46:41.067529 systemd[1]: Started cri-containerd-ca288d93234a9c09bf2be45c4b38ac4d20016bf00a509d0ee3fc35e7c14d9c91.scope. Apr 12 18:46:41.107090 env[1143]: time="2024-04-12T18:46:41.107012874Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-j4bfx,Uid:9bc2a25e-e50d-4d58-8638-989437096c4d,Namespace:kube-system,Attempt:0,} returns sandbox id \"ca288d93234a9c09bf2be45c4b38ac4d20016bf00a509d0ee3fc35e7c14d9c91\"" Apr 12 18:46:41.111890 env[1143]: time="2024-04-12T18:46:41.111835791Z" level=info msg="CreateContainer within sandbox \"ca288d93234a9c09bf2be45c4b38ac4d20016bf00a509d0ee3fc35e7c14d9c91\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Apr 12 18:46:41.129430 env[1143]: time="2024-04-12T18:46:41.129379878Z" level=info msg="CreateContainer within sandbox \"ca288d93234a9c09bf2be45c4b38ac4d20016bf00a509d0ee3fc35e7c14d9c91\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"9cd8e21417846280d7524006f2dc1e8a8cce0b91e5e28873094c77f52ed22bc4\"" Apr 12 18:46:41.130870 env[1143]: time="2024-04-12T18:46:41.130822464Z" level=info msg="StartContainer for 
\"9cd8e21417846280d7524006f2dc1e8a8cce0b91e5e28873094c77f52ed22bc4\""
Apr 12 18:46:41.157939 systemd[1]: Started cri-containerd-9cd8e21417846280d7524006f2dc1e8a8cce0b91e5e28873094c77f52ed22bc4.scope.
Apr 12 18:46:41.205479 env[1143]: time="2024-04-12T18:46:41.205287691Z" level=info msg="StartContainer for \"9cd8e21417846280d7524006f2dc1e8a8cce0b91e5e28873094c77f52ed22bc4\" returns successfully"
Apr 12 18:46:41.218523 systemd[1]: cri-containerd-9cd8e21417846280d7524006f2dc1e8a8cce0b91e5e28873094c77f52ed22bc4.scope: Deactivated successfully.
Apr 12 18:46:41.254746 env[1143]: time="2024-04-12T18:46:41.254664774Z" level=info msg="shim disconnected" id=9cd8e21417846280d7524006f2dc1e8a8cce0b91e5e28873094c77f52ed22bc4
Apr 12 18:46:41.254746 env[1143]: time="2024-04-12T18:46:41.254733972Z" level=warning msg="cleaning up after shim disconnected" id=9cd8e21417846280d7524006f2dc1e8a8cce0b91e5e28873094c77f52ed22bc4 namespace=k8s.io
Apr 12 18:46:41.254746 env[1143]: time="2024-04-12T18:46:41.254748854Z" level=info msg="cleaning up dead shim"
Apr 12 18:46:41.267314 env[1143]: time="2024-04-12T18:46:41.267235744Z" level=warning msg="cleanup warnings time=\"2024-04-12T18:46:41Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3979 runtime=io.containerd.runc.v2\n"
Apr 12 18:46:41.377647 kubelet[2038]: E0412 18:46:41.377597 2038 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 12 18:46:41.646724 env[1143]: time="2024-04-12T18:46:41.646576202Z" level=info msg="CreateContainer within sandbox \"ca288d93234a9c09bf2be45c4b38ac4d20016bf00a509d0ee3fc35e7c14d9c91\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Apr 12 18:46:41.671535 env[1143]: time="2024-04-12T18:46:41.671461553Z" level=info msg="CreateContainer within sandbox \"ca288d93234a9c09bf2be45c4b38ac4d20016bf00a509d0ee3fc35e7c14d9c91\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"e49becf37f751b7b595b8a27501ed489c44e541738e6f64d10780f7853500cfe\""
Apr 12 18:46:41.672590 env[1143]: time="2024-04-12T18:46:41.672538318Z" level=info msg="StartContainer for \"e49becf37f751b7b595b8a27501ed489c44e541738e6f64d10780f7853500cfe\""
Apr 12 18:46:41.721225 systemd[1]: Started cri-containerd-e49becf37f751b7b595b8a27501ed489c44e541738e6f64d10780f7853500cfe.scope.
Apr 12 18:46:41.776197 env[1143]: time="2024-04-12T18:46:41.776122928Z" level=info msg="StartContainer for \"e49becf37f751b7b595b8a27501ed489c44e541738e6f64d10780f7853500cfe\" returns successfully"
Apr 12 18:46:41.791845 systemd[1]: cri-containerd-e49becf37f751b7b595b8a27501ed489c44e541738e6f64d10780f7853500cfe.scope: Deactivated successfully.
Apr 12 18:46:41.838073 env[1143]: time="2024-04-12T18:46:41.837997599Z" level=info msg="shim disconnected" id=e49becf37f751b7b595b8a27501ed489c44e541738e6f64d10780f7853500cfe
Apr 12 18:46:41.838589 env[1143]: time="2024-04-12T18:46:41.838543861Z" level=warning msg="cleaning up after shim disconnected" id=e49becf37f751b7b595b8a27501ed489c44e541738e6f64d10780f7853500cfe namespace=k8s.io
Apr 12 18:46:41.838778 env[1143]: time="2024-04-12T18:46:41.838754183Z" level=info msg="cleaning up dead shim"
Apr 12 18:46:41.857528 env[1143]: time="2024-04-12T18:46:41.857474262Z" level=warning msg="cleanup warnings time=\"2024-04-12T18:46:41Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4042 runtime=io.containerd.runc.v2\n"
Apr 12 18:46:41.869829 kubelet[2038]: W0412 18:46:41.868958 2038 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podea3db601_8d28_40a2_9c98_cbcacc32867d.slice/cri-containerd-8884896b63fc4d46a69de04a64005d9e02c65f66303f7cffb9d6e0502bce967d.scope WatchSource:0}: container "8884896b63fc4d46a69de04a64005d9e02c65f66303f7cffb9d6e0502bce967d" in namespace "k8s.io": not found
Apr 12 18:46:42.167846 kubelet[2038]: I0412 18:46:42.167784 2038 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=ea3db601-8d28-40a2-9c98-cbcacc32867d path="/var/lib/kubelet/pods/ea3db601-8d28-40a2-9c98-cbcacc32867d/volumes"
Apr 12 18:46:42.367434 systemd[1]: run-containerd-runc-k8s.io-e49becf37f751b7b595b8a27501ed489c44e541738e6f64d10780f7853500cfe-runc.103F75.mount: Deactivated successfully.
Apr 12 18:46:42.367662 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e49becf37f751b7b595b8a27501ed489c44e541738e6f64d10780f7853500cfe-rootfs.mount: Deactivated successfully.
Apr 12 18:46:42.650276 env[1143]: time="2024-04-12T18:46:42.650222072Z" level=info msg="CreateContainer within sandbox \"ca288d93234a9c09bf2be45c4b38ac4d20016bf00a509d0ee3fc35e7c14d9c91\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Apr 12 18:46:42.681288 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount432910984.mount: Deactivated successfully.
Apr 12 18:46:42.691768 env[1143]: time="2024-04-12T18:46:42.691701771Z" level=info msg="CreateContainer within sandbox \"ca288d93234a9c09bf2be45c4b38ac4d20016bf00a509d0ee3fc35e7c14d9c91\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"378dd9612cbea6abeaa1850b494085154af1308d407e278d62b78625221bc1f3\""
Apr 12 18:46:42.692879 env[1143]: time="2024-04-12T18:46:42.692829307Z" level=info msg="StartContainer for \"378dd9612cbea6abeaa1850b494085154af1308d407e278d62b78625221bc1f3\""
Apr 12 18:46:42.731192 systemd[1]: Started cri-containerd-378dd9612cbea6abeaa1850b494085154af1308d407e278d62b78625221bc1f3.scope.
Apr 12 18:46:42.778751 env[1143]: time="2024-04-12T18:46:42.778691298Z" level=info msg="StartContainer for \"378dd9612cbea6abeaa1850b494085154af1308d407e278d62b78625221bc1f3\" returns successfully"
Apr 12 18:46:42.785567 systemd[1]: cri-containerd-378dd9612cbea6abeaa1850b494085154af1308d407e278d62b78625221bc1f3.scope: Deactivated successfully.
Apr 12 18:46:42.820412 env[1143]: time="2024-04-12T18:46:42.820321976Z" level=info msg="shim disconnected" id=378dd9612cbea6abeaa1850b494085154af1308d407e278d62b78625221bc1f3
Apr 12 18:46:42.820412 env[1143]: time="2024-04-12T18:46:42.820388392Z" level=warning msg="cleaning up after shim disconnected" id=378dd9612cbea6abeaa1850b494085154af1308d407e278d62b78625221bc1f3 namespace=k8s.io
Apr 12 18:46:42.820412 env[1143]: time="2024-04-12T18:46:42.820405677Z" level=info msg="cleaning up dead shim"
Apr 12 18:46:42.832343 env[1143]: time="2024-04-12T18:46:42.832287186Z" level=warning msg="cleanup warnings time=\"2024-04-12T18:46:42Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4101 runtime=io.containerd.runc.v2\n"
Apr 12 18:46:43.367850 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-378dd9612cbea6abeaa1850b494085154af1308d407e278d62b78625221bc1f3-rootfs.mount: Deactivated successfully.
Apr 12 18:46:43.661968 env[1143]: time="2024-04-12T18:46:43.659197255Z" level=info msg="CreateContainer within sandbox \"ca288d93234a9c09bf2be45c4b38ac4d20016bf00a509d0ee3fc35e7c14d9c91\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Apr 12 18:46:43.685804 env[1143]: time="2024-04-12T18:46:43.685735767Z" level=info msg="CreateContainer within sandbox \"ca288d93234a9c09bf2be45c4b38ac4d20016bf00a509d0ee3fc35e7c14d9c91\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"a66530602665ea07d071aeddeaa93ce29907be68c91a03afb57d882fc287d186\""
Apr 12 18:46:43.686840 env[1143]: time="2024-04-12T18:46:43.686772563Z" level=info msg="StartContainer for \"a66530602665ea07d071aeddeaa93ce29907be68c91a03afb57d882fc287d186\""
Apr 12 18:46:43.733721 systemd[1]: Started cri-containerd-a66530602665ea07d071aeddeaa93ce29907be68c91a03afb57d882fc287d186.scope.
Apr 12 18:46:43.774771 systemd[1]: cri-containerd-a66530602665ea07d071aeddeaa93ce29907be68c91a03afb57d882fc287d186.scope: Deactivated successfully.
Apr 12 18:46:43.777591 env[1143]: time="2024-04-12T18:46:43.777527830Z" level=info msg="StartContainer for \"a66530602665ea07d071aeddeaa93ce29907be68c91a03afb57d882fc287d186\" returns successfully"
Apr 12 18:46:43.819465 env[1143]: time="2024-04-12T18:46:43.819400005Z" level=info msg="shim disconnected" id=a66530602665ea07d071aeddeaa93ce29907be68c91a03afb57d882fc287d186
Apr 12 18:46:43.819923 env[1143]: time="2024-04-12T18:46:43.819813887Z" level=warning msg="cleaning up after shim disconnected" id=a66530602665ea07d071aeddeaa93ce29907be68c91a03afb57d882fc287d186 namespace=k8s.io
Apr 12 18:46:43.819923 env[1143]: time="2024-04-12T18:46:43.819852771Z" level=info msg="cleaning up dead shim"
Apr 12 18:46:43.832729 env[1143]: time="2024-04-12T18:46:43.832659219Z" level=warning msg="cleanup warnings time=\"2024-04-12T18:46:43Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4155 runtime=io.containerd.runc.v2\n"
Apr 12 18:46:44.367823 systemd[1]: run-containerd-runc-k8s.io-a66530602665ea07d071aeddeaa93ce29907be68c91a03afb57d882fc287d186-runc.TeaUAa.mount: Deactivated successfully.
Apr 12 18:46:44.368014 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a66530602665ea07d071aeddeaa93ce29907be68c91a03afb57d882fc287d186-rootfs.mount: Deactivated successfully.
Apr 12 18:46:44.661762 env[1143]: time="2024-04-12T18:46:44.661399848Z" level=info msg="CreateContainer within sandbox \"ca288d93234a9c09bf2be45c4b38ac4d20016bf00a509d0ee3fc35e7c14d9c91\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Apr 12 18:46:44.689160 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount540065198.mount: Deactivated successfully.
Apr 12 18:46:44.702061 env[1143]: time="2024-04-12T18:46:44.701979157Z" level=info msg="CreateContainer within sandbox \"ca288d93234a9c09bf2be45c4b38ac4d20016bf00a509d0ee3fc35e7c14d9c91\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"2d653657cb2de4cbf8f2a74a763ec2944cb8efc1247e8148dd3f24bac9cb5f9f\""
Apr 12 18:46:44.703200 env[1143]: time="2024-04-12T18:46:44.703142080Z" level=info msg="StartContainer for \"2d653657cb2de4cbf8f2a74a763ec2944cb8efc1247e8148dd3f24bac9cb5f9f\""
Apr 12 18:46:44.745865 systemd[1]: Started cri-containerd-2d653657cb2de4cbf8f2a74a763ec2944cb8efc1247e8148dd3f24bac9cb5f9f.scope.
Apr 12 18:46:44.793199 env[1143]: time="2024-04-12T18:46:44.793144666Z" level=info msg="StartContainer for \"2d653657cb2de4cbf8f2a74a763ec2944cb8efc1247e8148dd3f24bac9cb5f9f\" returns successfully"
Apr 12 18:46:45.004090 kubelet[2038]: W0412 18:46:45.003719 2038 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9bc2a25e_e50d_4d58_8638_989437096c4d.slice/cri-containerd-9cd8e21417846280d7524006f2dc1e8a8cce0b91e5e28873094c77f52ed22bc4.scope WatchSource:0}: task 9cd8e21417846280d7524006f2dc1e8a8cce0b91e5e28873094c77f52ed22bc4 not found: not found
Apr 12 18:46:45.249968 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Apr 12 18:46:46.368991 systemd[1]: run-containerd-runc-k8s.io-2d653657cb2de4cbf8f2a74a763ec2944cb8efc1247e8148dd3f24bac9cb5f9f-runc.vPqeeW.mount: Deactivated successfully.
Apr 12 18:46:48.113291 kubelet[2038]: W0412 18:46:48.113213 2038 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9bc2a25e_e50d_4d58_8638_989437096c4d.slice/cri-containerd-e49becf37f751b7b595b8a27501ed489c44e541738e6f64d10780f7853500cfe.scope WatchSource:0}: task e49becf37f751b7b595b8a27501ed489c44e541738e6f64d10780f7853500cfe not found: not found
Apr 12 18:46:48.245543 systemd-networkd[1023]: lxc_health: Link UP
Apr 12 18:46:48.280061 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Apr 12 18:46:48.280516 systemd-networkd[1023]: lxc_health: Gained carrier
Apr 12 18:46:48.595754 systemd[1]: run-containerd-runc-k8s.io-2d653657cb2de4cbf8f2a74a763ec2944cb8efc1247e8148dd3f24bac9cb5f9f-runc.c3xkfV.mount: Deactivated successfully.
Apr 12 18:46:49.088128 kubelet[2038]: I0412 18:46:49.088060 2038 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-j4bfx" podStartSLOduration=9.088003961 podCreationTimestamp="2024-04-12 18:46:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-04-12 18:46:45.692216605 +0000 UTC m=+129.835607033" watchObservedRunningTime="2024-04-12 18:46:49.088003961 +0000 UTC m=+133.231394444"
Apr 12 18:46:49.554790 systemd-networkd[1023]: lxc_health: Gained IPv6LL
Apr 12 18:46:50.930739 systemd[1]: run-containerd-runc-k8s.io-2d653657cb2de4cbf8f2a74a763ec2944cb8efc1247e8148dd3f24bac9cb5f9f-runc.JcwhVB.mount: Deactivated successfully.
Apr 12 18:46:51.225835 kubelet[2038]: W0412 18:46:51.225671 2038 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9bc2a25e_e50d_4d58_8638_989437096c4d.slice/cri-containerd-378dd9612cbea6abeaa1850b494085154af1308d407e278d62b78625221bc1f3.scope WatchSource:0}: task 378dd9612cbea6abeaa1850b494085154af1308d407e278d62b78625221bc1f3 not found: not found
Apr 12 18:46:53.174987 systemd[1]: run-containerd-runc-k8s.io-2d653657cb2de4cbf8f2a74a763ec2944cb8efc1247e8148dd3f24bac9cb5f9f-runc.2AvJZt.mount: Deactivated successfully.
Apr 12 18:46:53.412537 sshd[3755]: pam_unix(sshd:session): session closed for user core
Apr 12 18:46:53.418571 systemd[1]: sshd@25-10.128.0.15:22-139.178.89.65:39896.service: Deactivated successfully.
Apr 12 18:46:53.419801 systemd[1]: session-24.scope: Deactivated successfully.
Apr 12 18:46:53.420423 systemd-logind[1124]: Session 24 logged out. Waiting for processes to exit.
Apr 12 18:46:53.423553 systemd-logind[1124]: Removed session 24.
Apr 12 18:46:54.338799 kubelet[2038]: W0412 18:46:54.338736 2038 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9bc2a25e_e50d_4d58_8638_989437096c4d.slice/cri-containerd-a66530602665ea07d071aeddeaa93ce29907be68c91a03afb57d882fc287d186.scope WatchSource:0}: task a66530602665ea07d071aeddeaa93ce29907be68c91a03afb57d882fc287d186 not found: not found