Jul 2 07:49:54.063633 kernel: Linux version 5.15.161-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Mon Jul 1 23:45:21 -00 2024 Jul 2 07:49:54.063671 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=d29251fe942de56b08103b03939b6e5af4108e76dc6080fe2498c5db43f16e82 Jul 2 07:49:54.063690 kernel: BIOS-provided physical RAM map: Jul 2 07:49:54.063702 kernel: BIOS-e820: [mem 0x0000000000000000-0x0000000000000fff] reserved Jul 2 07:49:54.063714 kernel: BIOS-e820: [mem 0x0000000000001000-0x0000000000054fff] usable Jul 2 07:49:54.063726 kernel: BIOS-e820: [mem 0x0000000000055000-0x000000000005ffff] reserved Jul 2 07:49:54.063743 kernel: BIOS-e820: [mem 0x0000000000060000-0x0000000000097fff] usable Jul 2 07:49:54.063756 kernel: BIOS-e820: [mem 0x0000000000098000-0x000000000009ffff] reserved Jul 2 07:49:54.063769 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bf8ecfff] usable Jul 2 07:49:54.063782 kernel: BIOS-e820: [mem 0x00000000bf8ed000-0x00000000bfb6cfff] reserved Jul 2 07:49:54.063795 kernel: BIOS-e820: [mem 0x00000000bfb6d000-0x00000000bfb7efff] ACPI data Jul 2 07:49:54.063816 kernel: BIOS-e820: [mem 0x00000000bfb7f000-0x00000000bfbfefff] ACPI NVS Jul 2 07:49:54.063829 kernel: BIOS-e820: [mem 0x00000000bfbff000-0x00000000bffdffff] usable Jul 2 07:49:54.063842 kernel: BIOS-e820: [mem 0x00000000bffe0000-0x00000000bfffffff] reserved Jul 2 07:49:54.063862 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000021fffffff] usable Jul 2 07:49:54.063877 kernel: NX (Execute Disable) protection: active Jul 2 07:49:54.063891 kernel: efi: EFI v2.70 by EDK II Jul 2 07:49:54.063904 kernel: efi: TPMFinalLog=0xbfbf7000 ACPI=0xbfb7e000 ACPI 
2.0=0xbfb7e014 SMBIOS=0xbf9e8000 RNG=0xbfb73018 TPMEventLog=0xbd2d2018 Jul 2 07:49:54.063918 kernel: random: crng init done Jul 2 07:49:54.063931 kernel: SMBIOS 2.4 present. Jul 2 07:49:54.063945 kernel: DMI: Google Google Compute Engine/Google Compute Engine, BIOS Google 04/02/2024 Jul 2 07:49:54.063958 kernel: Hypervisor detected: KVM Jul 2 07:49:54.063975 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jul 2 07:49:54.063988 kernel: kvm-clock: cpu 0, msr 205192001, primary cpu clock Jul 2 07:49:54.064002 kernel: kvm-clock: using sched offset of 12262396816 cycles Jul 2 07:49:54.064016 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jul 2 07:49:54.064031 kernel: tsc: Detected 2299.998 MHz processor Jul 2 07:49:54.064045 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jul 2 07:49:54.064058 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jul 2 07:49:54.064071 kernel: last_pfn = 0x220000 max_arch_pfn = 0x400000000 Jul 2 07:49:54.064084 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jul 2 07:49:54.064098 kernel: last_pfn = 0xbffe0 max_arch_pfn = 0x400000000 Jul 2 07:49:54.064117 kernel: Using GB pages for direct mapping Jul 2 07:49:54.064131 kernel: Secure boot disabled Jul 2 07:49:54.064144 kernel: ACPI: Early table checksum verification disabled Jul 2 07:49:54.064157 kernel: ACPI: RSDP 0x00000000BFB7E014 000024 (v02 Google) Jul 2 07:49:54.064170 kernel: ACPI: XSDT 0x00000000BFB7D0E8 00005C (v01 Google GOOGFACP 00000001 01000013) Jul 2 07:49:54.064184 kernel: ACPI: FACP 0x00000000BFB78000 0000F4 (v02 Google GOOGFACP 00000001 GOOG 00000001) Jul 2 07:49:54.064198 kernel: ACPI: DSDT 0x00000000BFB79000 001A64 (v01 Google GOOGDSDT 00000001 GOOG 00000001) Jul 2 07:49:54.064213 kernel: ACPI: FACS 0x00000000BFBF2000 000040 Jul 2 07:49:54.064237 kernel: ACPI: SSDT 0x00000000BFB7C000 000316 (v02 GOOGLE Tpm2Tabl 00001000 INTL 20211217) Jul 2 07:49:54.064252 kernel: 
ACPI: TPM2 0x00000000BFB7B000 000034 (v04 GOOGLE 00000001 GOOG 00000001) Jul 2 07:49:54.064266 kernel: ACPI: SRAT 0x00000000BFB77000 0000C8 (v03 Google GOOGSRAT 00000001 GOOG 00000001) Jul 2 07:49:54.064282 kernel: ACPI: APIC 0x00000000BFB76000 000076 (v05 Google GOOGAPIC 00000001 GOOG 00000001) Jul 2 07:49:54.064297 kernel: ACPI: SSDT 0x00000000BFB75000 000980 (v01 Google GOOGSSDT 00000001 GOOG 00000001) Jul 2 07:49:54.064313 kernel: ACPI: WAET 0x00000000BFB74000 000028 (v01 Google GOOGWAET 00000001 GOOG 00000001) Jul 2 07:49:54.064331 kernel: ACPI: Reserving FACP table memory at [mem 0xbfb78000-0xbfb780f3] Jul 2 07:49:54.064346 kernel: ACPI: Reserving DSDT table memory at [mem 0xbfb79000-0xbfb7aa63] Jul 2 07:49:54.064361 kernel: ACPI: Reserving FACS table memory at [mem 0xbfbf2000-0xbfbf203f] Jul 2 07:49:54.064377 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb7c000-0xbfb7c315] Jul 2 07:49:54.064392 kernel: ACPI: Reserving TPM2 table memory at [mem 0xbfb7b000-0xbfb7b033] Jul 2 07:49:54.064407 kernel: ACPI: Reserving SRAT table memory at [mem 0xbfb77000-0xbfb770c7] Jul 2 07:49:54.064435 kernel: ACPI: Reserving APIC table memory at [mem 0xbfb76000-0xbfb76075] Jul 2 07:49:54.064451 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb75000-0xbfb7597f] Jul 2 07:49:54.064466 kernel: ACPI: Reserving WAET table memory at [mem 0xbfb74000-0xbfb74027] Jul 2 07:49:54.064484 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Jul 2 07:49:54.064499 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Jul 2 07:49:54.064514 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Jul 2 07:49:54.064529 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0xbfffffff] Jul 2 07:49:54.064544 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x21fffffff] Jul 2 07:49:54.064559 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0xbfffffff] -> [mem 0x00000000-0xbfffffff] Jul 2 07:49:54.064575 kernel: NUMA: Node 0 [mem 0x00000000-0xbfffffff] + [mem 0x100000000-0x21fffffff] -> [mem 
0x00000000-0x21fffffff] Jul 2 07:49:54.064590 kernel: NODE_DATA(0) allocated [mem 0x21fffa000-0x21fffffff] Jul 2 07:49:54.064605 kernel: Zone ranges: Jul 2 07:49:54.064624 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jul 2 07:49:54.064639 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Jul 2 07:49:54.064654 kernel: Normal [mem 0x0000000100000000-0x000000021fffffff] Jul 2 07:49:54.064669 kernel: Movable zone start for each node Jul 2 07:49:54.064684 kernel: Early memory node ranges Jul 2 07:49:54.064699 kernel: node 0: [mem 0x0000000000001000-0x0000000000054fff] Jul 2 07:49:54.064714 kernel: node 0: [mem 0x0000000000060000-0x0000000000097fff] Jul 2 07:49:54.064729 kernel: node 0: [mem 0x0000000000100000-0x00000000bf8ecfff] Jul 2 07:49:54.064744 kernel: node 0: [mem 0x00000000bfbff000-0x00000000bffdffff] Jul 2 07:49:54.064763 kernel: node 0: [mem 0x0000000100000000-0x000000021fffffff] Jul 2 07:49:54.064777 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000021fffffff] Jul 2 07:49:54.064792 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jul 2 07:49:54.064813 kernel: On node 0, zone DMA: 11 pages in unavailable ranges Jul 2 07:49:54.064828 kernel: On node 0, zone DMA: 104 pages in unavailable ranges Jul 2 07:49:54.064843 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges Jul 2 07:49:54.064858 kernel: On node 0, zone Normal: 32 pages in unavailable ranges Jul 2 07:49:54.064873 kernel: ACPI: PM-Timer IO Port: 0xb008 Jul 2 07:49:54.064888 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jul 2 07:49:54.064906 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jul 2 07:49:54.064921 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jul 2 07:49:54.064936 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jul 2 07:49:54.064951 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jul 2 07:49:54.064966 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 
global_irq 11 high level) Jul 2 07:49:54.064981 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jul 2 07:49:54.065029 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Jul 2 07:49:54.065044 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices Jul 2 07:49:54.065183 kernel: Booting paravirtualized kernel on KVM Jul 2 07:49:54.065213 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jul 2 07:49:54.065230 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:2 nr_node_ids:1 Jul 2 07:49:54.065245 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u1048576 Jul 2 07:49:54.065261 kernel: pcpu-alloc: s188696 r8192 d32488 u1048576 alloc=1*2097152 Jul 2 07:49:54.065395 kernel: pcpu-alloc: [0] 0 1 Jul 2 07:49:54.065414 kernel: kvm-guest: PV spinlocks enabled Jul 2 07:49:54.065445 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jul 2 07:49:54.065460 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1932280 Jul 2 07:49:54.065475 kernel: Policy zone: Normal Jul 2 07:49:54.065618 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=d29251fe942de56b08103b03939b6e5af4108e76dc6080fe2498c5db43f16e82 Jul 2 07:49:54.065636 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. 
Jul 2 07:49:54.065651 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Jul 2 07:49:54.065666 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jul 2 07:49:54.065682 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jul 2 07:49:54.065818 kernel: Memory: 7516812K/7860584K available (12294K kernel code, 2276K rwdata, 13712K rodata, 47444K init, 4144K bss, 343512K reserved, 0K cma-reserved) Jul 2 07:49:54.065840 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jul 2 07:49:54.065860 kernel: Kernel/User page tables isolation: enabled Jul 2 07:49:54.065882 kernel: ftrace: allocating 34514 entries in 135 pages Jul 2 07:49:54.065897 kernel: ftrace: allocated 135 pages with 4 groups Jul 2 07:49:54.065912 kernel: rcu: Hierarchical RCU implementation. Jul 2 07:49:54.065927 kernel: rcu: RCU event tracing is enabled. Jul 2 07:49:54.065943 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jul 2 07:49:54.065958 kernel: Rude variant of Tasks RCU enabled. Jul 2 07:49:54.065974 kernel: Tracing variant of Tasks RCU enabled. Jul 2 07:49:54.065990 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jul 2 07:49:54.066007 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jul 2 07:49:54.066028 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Jul 2 07:49:54.066057 kernel: Console: colour dummy device 80x25 Jul 2 07:49:54.066074 kernel: printk: console [ttyS0] enabled Jul 2 07:49:54.066095 kernel: ACPI: Core revision 20210730 Jul 2 07:49:54.066112 kernel: APIC: Switch to symmetric I/O mode setup Jul 2 07:49:54.066129 kernel: x2apic enabled Jul 2 07:49:54.066146 kernel: Switched APIC routing to physical x2apic. 
Jul 2 07:49:54.066164 kernel: ..TIMER: vector=0x30 apic1=0 pin1=0 apic2=-1 pin2=-1 Jul 2 07:49:54.066182 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns Jul 2 07:49:54.066200 kernel: Calibrating delay loop (skipped) preset value.. 4599.99 BogoMIPS (lpj=2299998) Jul 2 07:49:54.066221 kernel: Last level iTLB entries: 4KB 1024, 2MB 1024, 4MB 1024 Jul 2 07:49:54.066238 kernel: Last level dTLB entries: 4KB 1024, 2MB 1024, 4MB 1024, 1GB 4 Jul 2 07:49:54.066253 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jul 2 07:49:54.066270 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit Jul 2 07:49:54.066288 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall Jul 2 07:49:54.066303 kernel: Spectre V2 : Mitigation: IBRS Jul 2 07:49:54.066322 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Jul 2 07:49:54.066338 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Jul 2 07:49:54.066355 kernel: RETBleed: Mitigation: IBRS Jul 2 07:49:54.066371 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Jul 2 07:49:54.066389 kernel: Spectre V2 : User space: Mitigation: STIBP via seccomp and prctl Jul 2 07:49:54.066406 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp Jul 2 07:49:54.066445 kernel: MDS: Mitigation: Clear CPU buffers Jul 2 07:49:54.066462 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Jul 2 07:49:54.066479 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jul 2 07:49:54.066500 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jul 2 07:49:54.066516 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jul 2 07:49:54.066534 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jul 2 07:49:54.066552 
kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Jul 2 07:49:54.066570 kernel: Freeing SMP alternatives memory: 32K Jul 2 07:49:54.066587 kernel: pid_max: default: 32768 minimum: 301 Jul 2 07:49:54.066605 kernel: LSM: Security Framework initializing Jul 2 07:49:54.066622 kernel: SELinux: Initializing. Jul 2 07:49:54.066640 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Jul 2 07:49:54.066663 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Jul 2 07:49:54.066681 kernel: smpboot: CPU0: Intel(R) Xeon(R) CPU @ 2.30GHz (family: 0x6, model: 0x3f, stepping: 0x0) Jul 2 07:49:54.066699 kernel: Performance Events: unsupported p6 CPU model 63 no PMU driver, software events only. Jul 2 07:49:54.066717 kernel: signal: max sigframe size: 1776 Jul 2 07:49:54.066732 kernel: rcu: Hierarchical SRCU implementation. Jul 2 07:49:54.066750 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jul 2 07:49:54.066768 kernel: smp: Bringing up secondary CPUs ... Jul 2 07:49:54.066784 kernel: x86: Booting SMP configuration: Jul 2 07:49:54.066801 kernel: .... node #0, CPUs: #1 Jul 2 07:49:54.066829 kernel: kvm-clock: cpu 1, msr 205192041, secondary cpu clock Jul 2 07:49:54.066846 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details. Jul 2 07:49:54.066864 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. 
Jul 2 07:49:54.066881 kernel: smp: Brought up 1 node, 2 CPUs Jul 2 07:49:54.066897 kernel: smpboot: Max logical packages: 1 Jul 2 07:49:54.066914 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS) Jul 2 07:49:54.066931 kernel: devtmpfs: initialized Jul 2 07:49:54.066948 kernel: x86/mm: Memory block size: 128MB Jul 2 07:49:54.066965 kernel: ACPI: PM: Registering ACPI NVS region [mem 0xbfb7f000-0xbfbfefff] (524288 bytes) Jul 2 07:49:54.066986 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jul 2 07:49:54.067004 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jul 2 07:49:54.067021 kernel: pinctrl core: initialized pinctrl subsystem Jul 2 07:49:54.067038 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jul 2 07:49:54.067053 kernel: audit: initializing netlink subsys (disabled) Jul 2 07:49:54.067074 kernel: audit: type=2000 audit(1719906592.686:1): state=initialized audit_enabled=0 res=1 Jul 2 07:49:54.067090 kernel: thermal_sys: Registered thermal governor 'step_wise' Jul 2 07:49:54.067107 kernel: thermal_sys: Registered thermal governor 'user_space' Jul 2 07:49:54.067125 kernel: cpuidle: using governor menu Jul 2 07:49:54.067147 kernel: ACPI: bus type PCI registered Jul 2 07:49:54.067164 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jul 2 07:49:54.067182 kernel: dca service started, version 1.12.1 Jul 2 07:49:54.067200 kernel: PCI: Using configuration type 1 for base access Jul 2 07:49:54.067218 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Jul 2 07:49:54.067236 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages Jul 2 07:49:54.067253 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Jul 2 07:49:54.067274 kernel: ACPI: Added _OSI(Module Device) Jul 2 07:49:54.067291 kernel: ACPI: Added _OSI(Processor Device) Jul 2 07:49:54.067312 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jul 2 07:49:54.067330 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jul 2 07:49:54.067348 kernel: ACPI: Added _OSI(Linux-Dell-Video) Jul 2 07:49:54.067366 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Jul 2 07:49:54.067383 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Jul 2 07:49:54.067401 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded Jul 2 07:49:54.067432 kernel: ACPI: Interpreter enabled Jul 2 07:49:54.067459 kernel: ACPI: PM: (supports S0 S3 S5) Jul 2 07:49:54.067476 kernel: ACPI: Using IOAPIC for interrupt routing Jul 2 07:49:54.067498 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jul 2 07:49:54.067516 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F Jul 2 07:49:54.067533 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jul 2 07:49:54.067760 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Jul 2 07:49:54.067942 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge. 
Jul 2 07:49:54.067972 kernel: PCI host bridge to bus 0000:00 Jul 2 07:49:54.068137 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jul 2 07:49:54.068308 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jul 2 07:49:54.080521 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jul 2 07:49:54.080687 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfefff window] Jul 2 07:49:54.080838 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jul 2 07:49:54.081013 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Jul 2 07:49:54.081191 kernel: pci 0000:00:01.0: [8086:7110] type 00 class 0x060100 Jul 2 07:49:54.081362 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 Jul 2 07:49:54.086691 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI Jul 2 07:49:54.086898 kernel: pci 0000:00:03.0: [1af4:1004] type 00 class 0x000000 Jul 2 07:49:54.087070 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc040-0xc07f] Jul 2 07:49:54.087240 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc0001000-0xc000107f] Jul 2 07:49:54.093010 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Jul 2 07:49:54.093451 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc03f] Jul 2 07:49:54.093710 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc0000000-0xc000007f] Jul 2 07:49:54.093904 kernel: pci 0000:00:05.0: [1af4:1005] type 00 class 0x00ff00 Jul 2 07:49:54.094081 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc080-0xc09f] Jul 2 07:49:54.094253 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xc0002000-0xc000203f] Jul 2 07:49:54.094277 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jul 2 07:49:54.094296 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jul 2 07:49:54.094313 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jul 2 07:49:54.094335 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jul 2 07:49:54.094353 kernel: ACPI: PCI: Interrupt link LNKS 
configured for IRQ 9 Jul 2 07:49:54.094370 kernel: iommu: Default domain type: Translated Jul 2 07:49:54.094388 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jul 2 07:49:54.094406 kernel: vgaarb: loaded Jul 2 07:49:54.094437 kernel: pps_core: LinuxPPS API ver. 1 registered Jul 2 07:49:54.094455 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jul 2 07:49:54.094473 kernel: PTP clock support registered Jul 2 07:49:54.094490 kernel: Registered efivars operations Jul 2 07:49:54.094512 kernel: PCI: Using ACPI for IRQ routing Jul 2 07:49:54.094530 kernel: PCI: pci_cache_line_size set to 64 bytes Jul 2 07:49:54.094546 kernel: e820: reserve RAM buffer [mem 0x00055000-0x0005ffff] Jul 2 07:49:54.094564 kernel: e820: reserve RAM buffer [mem 0x00098000-0x0009ffff] Jul 2 07:49:54.094581 kernel: e820: reserve RAM buffer [mem 0xbf8ed000-0xbfffffff] Jul 2 07:49:54.094598 kernel: e820: reserve RAM buffer [mem 0xbffe0000-0xbfffffff] Jul 2 07:49:54.094615 kernel: clocksource: Switched to clocksource kvm-clock Jul 2 07:49:54.094632 kernel: VFS: Disk quotas dquot_6.6.0 Jul 2 07:49:54.094649 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jul 2 07:49:54.094671 kernel: pnp: PnP ACPI init Jul 2 07:49:54.094688 kernel: pnp: PnP ACPI: found 7 devices Jul 2 07:49:54.094706 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jul 2 07:49:54.094723 kernel: NET: Registered PF_INET protocol family Jul 2 07:49:54.094740 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) Jul 2 07:49:54.094758 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) Jul 2 07:49:54.094776 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jul 2 07:49:54.094794 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear) Jul 2 07:49:54.094819 kernel: TCP bind hash table entries: 65536 (order: 8, 
1048576 bytes, linear) Jul 2 07:49:54.094841 kernel: TCP: Hash tables configured (established 65536 bind 65536) Jul 2 07:49:54.094858 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) Jul 2 07:49:54.094875 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) Jul 2 07:49:54.094893 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jul 2 07:49:54.094911 kernel: NET: Registered PF_XDP protocol family Jul 2 07:49:54.095081 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jul 2 07:49:54.095234 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jul 2 07:49:54.095379 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jul 2 07:49:54.095556 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfefff window] Jul 2 07:49:54.095740 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Jul 2 07:49:54.095765 kernel: PCI: CLS 0 bytes, default 64 Jul 2 07:49:54.095784 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Jul 2 07:49:54.095812 kernel: software IO TLB: mapped [mem 0x00000000b7ff7000-0x00000000bbff7000] (64MB) Jul 2 07:49:54.095829 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jul 2 07:49:54.095845 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns Jul 2 07:49:54.095861 kernel: clocksource: Switched to clocksource tsc Jul 2 07:49:54.095882 kernel: Initialise system trusted keyrings Jul 2 07:49:54.095898 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0 Jul 2 07:49:54.095916 kernel: Key type asymmetric registered Jul 2 07:49:54.095934 kernel: Asymmetric key parser 'x509' registered Jul 2 07:49:54.095950 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Jul 2 07:49:54.095965 kernel: io scheduler mq-deadline registered Jul 2 07:49:54.095981 kernel: io scheduler kyber registered Jul 2 07:49:54.095998 kernel: io scheduler 
bfq registered Jul 2 07:49:54.096014 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jul 2 07:49:54.096035 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Jul 2 07:49:54.096215 kernel: virtio-pci 0000:00:03.0: virtio_pci: leaving for legacy driver Jul 2 07:49:54.096238 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10 Jul 2 07:49:54.096399 kernel: virtio-pci 0000:00:04.0: virtio_pci: leaving for legacy driver Jul 2 07:49:54.096580 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Jul 2 07:49:54.096762 kernel: virtio-pci 0000:00:05.0: virtio_pci: leaving for legacy driver Jul 2 07:49:54.096786 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jul 2 07:49:54.096812 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jul 2 07:49:54.096830 kernel: 00:04: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Jul 2 07:49:54.096852 kernel: 00:05: ttyS2 at I/O 0x3e8 (irq = 6, base_baud = 115200) is a 16550A Jul 2 07:49:54.096869 kernel: 00:06: ttyS3 at I/O 0x2e8 (irq = 7, base_baud = 115200) is a 16550A Jul 2 07:49:54.097037 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x9009, rev-id 0) Jul 2 07:49:54.097061 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jul 2 07:49:54.097079 kernel: i8042: Warning: Keylock active Jul 2 07:49:54.097096 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jul 2 07:49:54.097113 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jul 2 07:49:54.097283 kernel: rtc_cmos 00:00: RTC can wake from S4 Jul 2 07:49:54.097466 kernel: rtc_cmos 00:00: registered as rtc0 Jul 2 07:49:54.097624 kernel: rtc_cmos 00:00: setting system clock to 2024-07-02T07:49:53 UTC (1719906593) Jul 2 07:49:54.097771 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram Jul 2 07:49:54.097792 kernel: intel_pstate: CPU model not supported Jul 2 07:49:54.097819 kernel: pstore: Registered efi as persistent store backend Jul 2 07:49:54.097836 kernel: NET: Registered PF_INET6 protocol family Jul 2 
07:49:54.097854 kernel: Segment Routing with IPv6 Jul 2 07:49:54.097870 kernel: In-situ OAM (IOAM) with IPv6 Jul 2 07:49:54.097892 kernel: NET: Registered PF_PACKET protocol family Jul 2 07:49:54.097909 kernel: Key type dns_resolver registered Jul 2 07:49:54.097927 kernel: IPI shorthand broadcast: enabled Jul 2 07:49:54.097944 kernel: sched_clock: Marking stable (696367220, 121626835)->(828032191, -10038136) Jul 2 07:49:54.097967 kernel: registered taskstats version 1 Jul 2 07:49:54.097984 kernel: Loading compiled-in X.509 certificates Jul 2 07:49:54.098002 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jul 2 07:49:54.098020 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.161-flatcar: a1ce693884775675566f1ed116e36d15950b9a42' Jul 2 07:49:54.098037 kernel: Key type .fscrypt registered Jul 2 07:49:54.098056 kernel: Key type fscrypt-provisioning registered Jul 2 07:49:54.098074 kernel: pstore: Using crash dump compression: deflate Jul 2 07:49:54.098091 kernel: ima: Allocated hash algorithm: sha1 Jul 2 07:49:54.098108 kernel: ima: No architecture policies found Jul 2 07:49:54.098126 kernel: clk: Disabling unused clocks Jul 2 07:49:54.098143 kernel: Freeing unused kernel image (initmem) memory: 47444K Jul 2 07:49:54.098160 kernel: Write protecting the kernel read-only data: 28672k Jul 2 07:49:54.098177 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Jul 2 07:49:54.098198 kernel: Freeing unused kernel image (rodata/data gap) memory: 624K Jul 2 07:49:54.098216 kernel: Run /init as init process Jul 2 07:49:54.098234 kernel: with arguments: Jul 2 07:49:54.098250 kernel: /init Jul 2 07:49:54.098268 kernel: with environment: Jul 2 07:49:54.098284 kernel: HOME=/ Jul 2 07:49:54.098301 kernel: TERM=linux Jul 2 07:49:54.098319 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jul 2 07:49:54.098340 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT 
-GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Jul 2 07:49:54.098365 systemd[1]: Detected virtualization kvm. Jul 2 07:49:54.098384 systemd[1]: Detected architecture x86-64. Jul 2 07:49:54.098402 systemd[1]: Running in initrd. Jul 2 07:49:54.098515 systemd[1]: No hostname configured, using default hostname. Jul 2 07:49:54.098536 systemd[1]: Hostname set to . Jul 2 07:49:54.098555 systemd[1]: Initializing machine ID from VM UUID. Jul 2 07:49:54.098574 systemd[1]: Queued start job for default target initrd.target. Jul 2 07:49:54.098598 systemd[1]: Started systemd-ask-password-console.path. Jul 2 07:49:54.098616 systemd[1]: Reached target cryptsetup.target. Jul 2 07:49:54.098633 systemd[1]: Reached target paths.target. Jul 2 07:49:54.098651 systemd[1]: Reached target slices.target. Jul 2 07:49:54.098669 systemd[1]: Reached target swap.target. Jul 2 07:49:54.098688 systemd[1]: Reached target timers.target. Jul 2 07:49:54.098708 systemd[1]: Listening on iscsid.socket. Jul 2 07:49:54.098727 systemd[1]: Listening on iscsiuio.socket. Jul 2 07:49:54.098750 systemd[1]: Listening on systemd-journald-audit.socket. Jul 2 07:49:54.098770 systemd[1]: Listening on systemd-journald-dev-log.socket. Jul 2 07:49:54.098789 systemd[1]: Listening on systemd-journald.socket. Jul 2 07:49:54.098816 systemd[1]: Listening on systemd-networkd.socket. Jul 2 07:49:54.098835 systemd[1]: Listening on systemd-udevd-control.socket. Jul 2 07:49:54.098855 systemd[1]: Listening on systemd-udevd-kernel.socket. Jul 2 07:49:54.098874 systemd[1]: Reached target sockets.target. Jul 2 07:49:54.098893 systemd[1]: Starting kmod-static-nodes.service... Jul 2 07:49:54.098914 systemd[1]: Finished network-cleanup.service. Jul 2 07:49:54.098933 systemd[1]: Starting systemd-fsck-usr.service... 
Jul 2 07:49:54.098952 systemd[1]: Starting systemd-journald.service... Jul 2 07:49:54.098989 systemd[1]: Starting systemd-modules-load.service... Jul 2 07:49:54.099011 kernel: audit: type=1334 audit(1719906594.060:2): prog-id=6 op=LOAD Jul 2 07:49:54.099030 systemd[1]: Starting systemd-resolved.service... Jul 2 07:49:54.099050 systemd[1]: Starting systemd-vconsole-setup.service... Jul 2 07:49:54.099072 systemd[1]: Finished kmod-static-nodes.service. Jul 2 07:49:54.099092 kernel: audit: type=1130 audit(1719906594.080:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:54.099112 systemd[1]: Finished systemd-fsck-usr.service. Jul 2 07:49:54.099132 kernel: audit: type=1130 audit(1719906594.086:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:54.099151 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Jul 2 07:49:54.099176 systemd-journald[188]: Journal started Jul 2 07:49:54.099266 systemd-journald[188]: Runtime Journal (/run/log/journal/8c47b993e4881fe104508a0c2e74cb81) is 8.0M, max 148.8M, 140.8M free. Jul 2 07:49:54.060000 audit: BPF prog-id=6 op=LOAD Jul 2 07:49:54.080000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:54.086000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:54.102462 systemd[1]: Started systemd-journald.service. 
Jul 2 07:49:54.110527 kernel: audit: type=1130 audit(1719906594.105:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:54.105000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:54.103333 systemd-modules-load[189]: Inserted module 'overlay' Jul 2 07:49:54.126583 kernel: audit: type=1130 audit(1719906594.113:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:54.126617 kernel: audit: type=1130 audit(1719906594.120:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:54.113000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:54.120000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:54.110341 systemd[1]: Finished systemd-vconsole-setup.service. Jul 2 07:49:54.117652 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Jul 2 07:49:54.122860 systemd[1]: Starting dracut-cmdline-ask.service... Jul 2 07:49:54.154786 systemd-resolved[190]: Positive Trust Anchors: Jul 2 07:49:54.155306 systemd-resolved[190]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 2 07:49:54.155697 systemd-resolved[190]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Jul 2 07:49:54.164665 systemd-resolved[190]: Defaulting to hostname 'linux'. Jul 2 07:49:54.164856 systemd[1]: Finished dracut-cmdline-ask.service. Jul 2 07:49:54.187551 kernel: audit: type=1130 audit(1719906594.165:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:54.187587 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jul 2 07:49:54.187610 kernel: audit: type=1130 audit(1719906594.173:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:54.165000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:54.173000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:54.168138 systemd[1]: Starting dracut-cmdline.service... 
Jul 2 07:49:54.191534 kernel: Bridge firewalling registered Jul 2 07:49:54.191595 dracut-cmdline[204]: dracut-dracut-053 Jul 2 07:49:54.173790 systemd[1]: Started systemd-resolved.service. Jul 2 07:49:54.199648 dracut-cmdline[204]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=d29251fe942de56b08103b03939b6e5af4108e76dc6080fe2498c5db43f16e82 Jul 2 07:49:54.175241 systemd[1]: Reached target nss-lookup.target. Jul 2 07:49:54.190336 systemd-modules-load[189]: Inserted module 'br_netfilter' Jul 2 07:49:54.224441 kernel: SCSI subsystem initialized Jul 2 07:49:54.242978 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jul 2 07:49:54.243057 kernel: device-mapper: uevent: version 1.0.3 Jul 2 07:49:54.243672 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Jul 2 07:49:54.249321 systemd-modules-load[189]: Inserted module 'dm_multipath' Jul 2 07:49:54.250416 systemd[1]: Finished systemd-modules-load.service. Jul 2 07:49:54.267548 kernel: audit: type=1130 audit(1719906594.256:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:54.256000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:54.258492 systemd[1]: Starting systemd-sysctl.service... Jul 2 07:49:54.273085 systemd[1]: Finished systemd-sysctl.service. 
Jul 2 07:49:54.275000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:54.291451 kernel: Loading iSCSI transport class v2.0-870. Jul 2 07:49:54.312455 kernel: iscsi: registered transport (tcp) Jul 2 07:49:54.338456 kernel: iscsi: registered transport (qla4xxx) Jul 2 07:49:54.338527 kernel: QLogic iSCSI HBA Driver Jul 2 07:49:54.381924 systemd[1]: Finished dracut-cmdline.service. Jul 2 07:49:54.380000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:54.384028 systemd[1]: Starting dracut-pre-udev.service... Jul 2 07:49:54.439473 kernel: raid6: avx2x4 gen() 18338 MB/s Jul 2 07:49:54.456466 kernel: raid6: avx2x4 xor() 7705 MB/s Jul 2 07:49:54.473462 kernel: raid6: avx2x2 gen() 18401 MB/s Jul 2 07:49:54.490460 kernel: raid6: avx2x2 xor() 18627 MB/s Jul 2 07:49:54.507458 kernel: raid6: avx2x1 gen() 14175 MB/s Jul 2 07:49:54.524458 kernel: raid6: avx2x1 xor() 16204 MB/s Jul 2 07:49:54.541459 kernel: raid6: sse2x4 gen() 11100 MB/s Jul 2 07:49:54.558458 kernel: raid6: sse2x4 xor() 6749 MB/s Jul 2 07:49:54.575461 kernel: raid6: sse2x2 gen() 12055 MB/s Jul 2 07:49:54.592459 kernel: raid6: sse2x2 xor() 7478 MB/s Jul 2 07:49:54.609459 kernel: raid6: sse2x1 gen() 10595 MB/s Jul 2 07:49:54.626858 kernel: raid6: sse2x1 xor() 5202 MB/s Jul 2 07:49:54.626892 kernel: raid6: using algorithm avx2x2 gen() 18401 MB/s Jul 2 07:49:54.626914 kernel: raid6: .... 
xor() 18627 MB/s, rmw enabled Jul 2 07:49:54.627565 kernel: raid6: using avx2x2 recovery algorithm Jul 2 07:49:54.642457 kernel: xor: automatically using best checksumming function avx Jul 2 07:49:54.747447 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Jul 2 07:49:54.758436 systemd[1]: Finished dracut-pre-udev.service. Jul 2 07:49:54.757000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:54.757000 audit: BPF prog-id=7 op=LOAD Jul 2 07:49:54.757000 audit: BPF prog-id=8 op=LOAD Jul 2 07:49:54.759963 systemd[1]: Starting systemd-udevd.service... Jul 2 07:49:54.777217 systemd-udevd[388]: Using default interface naming scheme 'v252'. Jul 2 07:49:54.784542 systemd[1]: Started systemd-udevd.service. Jul 2 07:49:54.787000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:54.789753 systemd[1]: Starting dracut-pre-trigger.service... Jul 2 07:49:54.811381 dracut-pre-trigger[400]: rd.md=0: removing MD RAID activation Jul 2 07:49:54.849221 systemd[1]: Finished dracut-pre-trigger.service. Jul 2 07:49:54.848000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:54.851138 systemd[1]: Starting systemd-udev-trigger.service... Jul 2 07:49:54.918207 systemd[1]: Finished systemd-udev-trigger.service. Jul 2 07:49:54.920000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 07:49:54.999444 kernel: cryptd: max_cpu_qlen set to 1000 Jul 2 07:49:55.024452 kernel: scsi host0: Virtio SCSI HBA Jul 2 07:49:55.031440 kernel: scsi 0:0:1:0: Direct-Access Google PersistentDisk 1 PQ: 0 ANSI: 6 Jul 2 07:49:55.086833 kernel: AVX2 version of gcm_enc/dec engaged. Jul 2 07:49:55.086912 kernel: AES CTR mode by8 optimization enabled Jul 2 07:49:55.123471 kernel: sd 0:0:1:0: [sda] 25165824 512-byte logical blocks: (12.9 GB/12.0 GiB) Jul 2 07:49:55.123716 kernel: sd 0:0:1:0: [sda] 4096-byte physical blocks Jul 2 07:49:55.124614 kernel: sd 0:0:1:0: [sda] Write Protect is off Jul 2 07:49:55.124868 kernel: sd 0:0:1:0: [sda] Mode Sense: 1f 00 00 08 Jul 2 07:49:55.126549 kernel: sd 0:0:1:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Jul 2 07:49:55.135626 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jul 2 07:49:55.135669 kernel: GPT:17805311 != 25165823 Jul 2 07:49:55.135706 kernel: GPT:Alternate GPT header not at the end of the disk. Jul 2 07:49:55.135727 kernel: GPT:17805311 != 25165823 Jul 2 07:49:55.135747 kernel: GPT: Use GNU Parted to correct GPT errors. Jul 2 07:49:55.135767 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 2 07:49:55.139458 kernel: sd 0:0:1:0: [sda] Attached SCSI disk Jul 2 07:49:55.184587 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Jul 2 07:49:55.188562 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by (udev-worker) (440) Jul 2 07:49:55.204586 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Jul 2 07:49:55.209665 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Jul 2 07:49:55.210659 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Jul 2 07:49:55.230342 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Jul 2 07:49:55.232162 systemd[1]: Starting disk-uuid.service... 
Jul 2 07:49:55.250433 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 2 07:49:55.250543 disk-uuid[518]: Primary Header is updated. Jul 2 07:49:55.250543 disk-uuid[518]: Secondary Entries is updated. Jul 2 07:49:55.250543 disk-uuid[518]: Secondary Header is updated. Jul 2 07:49:56.278464 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 2 07:49:56.278541 disk-uuid[519]: The operation has completed successfully. Jul 2 07:49:56.338515 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 2 07:49:56.337000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:56.337000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:56.338659 systemd[1]: Finished disk-uuid.service. Jul 2 07:49:56.353669 systemd[1]: Starting verity-setup.service... Jul 2 07:49:56.380454 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Jul 2 07:49:56.453858 systemd[1]: Found device dev-mapper-usr.device. Jul 2 07:49:56.467831 systemd[1]: Finished verity-setup.service. Jul 2 07:49:56.466000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:56.469115 systemd[1]: Mounting sysusr-usr.mount... Jul 2 07:49:56.565505 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Jul 2 07:49:56.565522 systemd[1]: Mounted sysusr-usr.mount. Jul 2 07:49:56.572730 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Jul 2 07:49:56.573662 systemd[1]: Starting ignition-setup.service... 
Jul 2 07:49:56.628585 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jul 2 07:49:56.628625 kernel: BTRFS info (device sda6): using free space tree Jul 2 07:49:56.628651 kernel: BTRFS info (device sda6): has skinny extents Jul 2 07:49:56.628673 kernel: BTRFS info (device sda6): enabling ssd optimizations Jul 2 07:49:56.616709 systemd[1]: Starting parse-ip-for-networkd.service... Jul 2 07:49:56.657174 systemd[1]: Finished ignition-setup.service. Jul 2 07:49:56.655000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:56.658794 systemd[1]: Starting ignition-fetch-offline.service... Jul 2 07:49:56.742776 systemd[1]: Finished parse-ip-for-networkd.service. Jul 2 07:49:56.741000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:56.742000 audit: BPF prog-id=9 op=LOAD Jul 2 07:49:56.744858 systemd[1]: Starting systemd-networkd.service... Jul 2 07:49:56.778654 systemd-networkd[693]: lo: Link UP Jul 2 07:49:56.779154 systemd-networkd[693]: lo: Gained carrier Jul 2 07:49:56.792000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:56.779994 systemd-networkd[693]: Enumeration completed Jul 2 07:49:56.780582 systemd[1]: Started systemd-networkd.service. Jul 2 07:49:56.780671 systemd-networkd[693]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 2 07:49:56.782959 systemd-networkd[693]: eth0: Link UP Jul 2 07:49:56.782967 systemd-networkd[693]: eth0: Gained carrier Jul 2 07:49:56.793693 systemd[1]: Reached target network.target. 
Jul 2 07:49:56.793706 systemd-networkd[693]: eth0: DHCPv4 address 10.128.0.103/32, gateway 10.128.0.1 acquired from 169.254.169.254 Jul 2 07:49:56.809651 systemd[1]: Starting iscsiuio.service... Jul 2 07:49:56.881741 systemd[1]: Started iscsiuio.service. Jul 2 07:49:56.887000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:56.889863 systemd[1]: Starting iscsid.service... Jul 2 07:49:56.902696 iscsid[703]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Jul 2 07:49:56.902696 iscsid[703]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log Jul 2 07:49:56.902696 iscsid[703]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Jul 2 07:49:56.902696 iscsid[703]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Jul 2 07:49:56.902696 iscsid[703]: If using hardware iscsi like qla4xxx this message can be ignored. Jul 2 07:49:56.902696 iscsid[703]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Jul 2 07:49:56.902696 iscsid[703]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Jul 2 07:49:56.975000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:56.991000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 07:49:56.948750 systemd[1]: Started iscsid.service. Jul 2 07:49:56.905540 ignition[617]: Ignition 2.14.0 Jul 2 07:49:57.054000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:56.976967 systemd[1]: Finished ignition-fetch-offline.service. Jul 2 07:49:57.069000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:56.905553 ignition[617]: Stage: fetch-offline Jul 2 07:49:56.993824 systemd[1]: Starting dracut-initqueue.service... Jul 2 07:49:56.905623 ignition[617]: reading system config file "/usr/lib/ignition/base.d/base.ign" Jul 2 07:49:57.016652 systemd[1]: Starting ignition-fetch.service... Jul 2 07:49:56.905669 ignition[617]: parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6 Jul 2 07:49:57.041022 systemd[1]: Finished dracut-initqueue.service. Jul 2 07:49:56.927952 ignition[617]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jul 2 07:49:57.049480 unknown[712]: fetched base config from "system" Jul 2 07:49:57.174000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:56.928158 ignition[617]: parsed url from cmdline: "" Jul 2 07:49:57.049502 unknown[712]: fetched base config from "system" Jul 2 07:49:57.190000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 07:49:56.928166 ignition[617]: no config URL provided Jul 2 07:49:57.049518 unknown[712]: fetched user config from "gcp" Jul 2 07:49:57.220000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:56.928175 ignition[617]: reading system config file "/usr/lib/ignition/user.ign" Jul 2 07:49:57.055867 systemd[1]: Finished ignition-fetch.service. Jul 2 07:49:56.928187 ignition[617]: no config at "/usr/lib/ignition/user.ign" Jul 2 07:49:57.070846 systemd[1]: Reached target remote-fs-pre.target. Jul 2 07:49:56.928196 ignition[617]: failed to fetch config: resource requires networking Jul 2 07:49:57.086619 systemd[1]: Reached target remote-cryptsetup.target. Jul 2 07:49:56.928653 ignition[617]: Ignition finished successfully Jul 2 07:49:57.101684 systemd[1]: Reached target remote-fs.target. Jul 2 07:49:57.027829 ignition[712]: Ignition 2.14.0 Jul 2 07:49:57.110709 systemd[1]: Starting dracut-pre-mount.service... Jul 2 07:49:57.027837 ignition[712]: Stage: fetch Jul 2 07:49:57.134876 systemd[1]: Starting ignition-kargs.service... Jul 2 07:49:57.027947 ignition[712]: reading system config file "/usr/lib/ignition/base.d/base.ign" Jul 2 07:49:57.158136 systemd[1]: Finished dracut-pre-mount.service. Jul 2 07:49:57.027978 ignition[712]: parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6 Jul 2 07:49:57.175886 systemd[1]: Finished ignition-kargs.service. Jul 2 07:49:57.037378 ignition[712]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jul 2 07:49:57.192757 systemd[1]: Starting ignition-disks.service... Jul 2 07:49:57.037591 ignition[712]: parsed url from cmdline: "" Jul 2 07:49:57.213334 systemd[1]: Finished ignition-disks.service. 
Jul 2 07:49:57.037597 ignition[712]: no config URL provided Jul 2 07:49:57.221770 systemd[1]: Reached target initrd-root-device.target. Jul 2 07:49:57.037604 ignition[712]: reading system config file "/usr/lib/ignition/user.ign" Jul 2 07:49:57.236637 systemd[1]: Reached target local-fs-pre.target. Jul 2 07:49:57.037615 ignition[712]: no config at "/usr/lib/ignition/user.ign" Jul 2 07:49:57.246678 systemd[1]: Reached target local-fs.target. Jul 2 07:49:57.037649 ignition[712]: GET http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data: attempt #1 Jul 2 07:49:57.268525 systemd[1]: Reached target sysinit.target. Jul 2 07:49:57.044408 ignition[712]: GET result: OK Jul 2 07:49:57.282566 systemd[1]: Reached target basic.target. Jul 2 07:49:57.044553 ignition[712]: parsing config with SHA512: 2847c3cfc92a66f4fde049cf5d171d2e02a5b239d87e2eb5f0df2aef2440e01c7c3a4cd34b4e359747b6fe0f0c98f996078c963626cc0a6c7e0746806a97ab71 Jul 2 07:49:57.295760 systemd[1]: Starting systemd-fsck-root.service... 
Jul 2 07:49:57.051277 ignition[712]: fetch: fetch complete Jul 2 07:49:57.051284 ignition[712]: fetch: fetch passed Jul 2 07:49:57.051332 ignition[712]: Ignition finished successfully Jul 2 07:49:57.147176 ignition[723]: Ignition 2.14.0 Jul 2 07:49:57.147186 ignition[723]: Stage: kargs Jul 2 07:49:57.147306 ignition[723]: reading system config file "/usr/lib/ignition/base.d/base.ign" Jul 2 07:49:57.147337 ignition[723]: parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6 Jul 2 07:49:57.155104 ignition[723]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jul 2 07:49:57.157486 ignition[723]: kargs: kargs passed Jul 2 07:49:57.157547 ignition[723]: Ignition finished successfully Jul 2 07:49:57.204210 ignition[729]: Ignition 2.14.0 Jul 2 07:49:57.204219 ignition[729]: Stage: disks Jul 2 07:49:57.204342 ignition[729]: reading system config file "/usr/lib/ignition/base.d/base.ign" Jul 2 07:49:57.204378 ignition[729]: parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6 Jul 2 07:49:57.211229 ignition[729]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jul 2 07:49:57.212486 ignition[729]: disks: disks passed Jul 2 07:49:57.212532 ignition[729]: Ignition finished successfully Jul 2 07:49:57.336227 systemd-fsck[737]: ROOT: clean, 614/1628000 files, 124057/1617920 blocks Jul 2 07:49:57.544400 systemd[1]: Finished systemd-fsck-root.service. Jul 2 07:49:57.551000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:57.553834 systemd[1]: Mounting sysroot.mount... Jul 2 07:49:57.578450 kernel: EXT4-fs (sda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. 
Jul 2 07:49:57.583746 systemd[1]: Mounted sysroot.mount. Jul 2 07:49:57.590707 systemd[1]: Reached target initrd-root-fs.target. Jul 2 07:49:57.609011 systemd[1]: Mounting sysroot-usr.mount... Jul 2 07:49:57.621138 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Jul 2 07:49:57.621201 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 2 07:49:57.621239 systemd[1]: Reached target ignition-diskful.target. Jul 2 07:49:57.712582 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (743) Jul 2 07:49:57.712629 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jul 2 07:49:57.712653 kernel: BTRFS info (device sda6): using free space tree Jul 2 07:49:57.712675 kernel: BTRFS info (device sda6): has skinny extents Jul 2 07:49:57.638039 systemd[1]: Mounted sysroot-usr.mount. Jul 2 07:49:57.725557 kernel: BTRFS info (device sda6): enabling ssd optimizations Jul 2 07:49:57.660408 systemd[1]: Mounting sysroot-usr-share-oem.mount... Jul 2 07:49:57.734595 initrd-setup-root[764]: cut: /sysroot/etc/passwd: No such file or directory Jul 2 07:49:57.705630 systemd[1]: Starting initrd-setup-root.service... Jul 2 07:49:57.752567 initrd-setup-root[774]: cut: /sysroot/etc/group: No such file or directory Jul 2 07:49:57.762532 initrd-setup-root[782]: cut: /sysroot/etc/shadow: No such file or directory Jul 2 07:49:57.777559 initrd-setup-root[790]: cut: /sysroot/etc/gshadow: No such file or directory Jul 2 07:49:57.765244 systemd[1]: Mounted sysroot-usr-share-oem.mount. Jul 2 07:49:57.804027 systemd[1]: Finished initrd-setup-root.service. Jul 2 07:49:57.802000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 07:49:57.805324 systemd[1]: Starting ignition-mount.service... Jul 2 07:49:57.826536 systemd[1]: Starting sysroot-boot.service... Jul 2 07:49:57.848981 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. Jul 2 07:49:57.849144 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. Jul 2 07:49:57.863363 systemd[1]: Finished sysroot-boot.service. Jul 2 07:49:57.879594 ignition[809]: INFO : Ignition 2.14.0 Jul 2 07:49:57.879594 ignition[809]: INFO : Stage: mount Jul 2 07:49:57.879594 ignition[809]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Jul 2 07:49:57.879594 ignition[809]: DEBUG : parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6 Jul 2 07:49:57.879594 ignition[809]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jul 2 07:49:57.879594 ignition[809]: INFO : mount: mount passed Jul 2 07:49:57.879594 ignition[809]: INFO : Ignition finished successfully Jul 2 07:49:58.014559 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (818) Jul 2 07:49:58.014590 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jul 2 07:49:58.014606 kernel: BTRFS info (device sda6): using free space tree Jul 2 07:49:58.014620 kernel: BTRFS info (device sda6): has skinny extents Jul 2 07:49:58.014634 kernel: BTRFS info (device sda6): enabling ssd optimizations Jul 2 07:49:57.886000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:57.901000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:57.887837 systemd[1]: Finished ignition-mount.service. 
Jul 2 07:49:57.903561 systemd[1]: Starting ignition-files.service... Jul 2 07:49:57.913264 systemd[1]: Mounting sysroot-usr-share-oem.mount... Jul 2 07:49:58.046560 kernel: BTRFS info: devid 1 device path /dev/sda6 changed to /dev/disk/by-label/OEM scanned by ignition (840) Jul 2 07:49:57.963223 systemd[1]: Mounted sysroot-usr-share-oem.mount. Jul 2 07:49:58.055561 ignition[837]: INFO : Ignition 2.14.0 Jul 2 07:49:58.055561 ignition[837]: INFO : Stage: files Jul 2 07:49:58.055561 ignition[837]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Jul 2 07:49:58.055561 ignition[837]: DEBUG : parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6 Jul 2 07:49:58.055561 ignition[837]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jul 2 07:49:58.055561 ignition[837]: DEBUG : files: compiled without relabeling support, skipping Jul 2 07:49:58.055561 ignition[837]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 2 07:49:58.055561 ignition[837]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 2 07:49:58.055561 ignition[837]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 2 07:49:58.055561 ignition[837]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 2 07:49:58.055561 ignition[837]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 2 07:49:58.055561 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/hosts" Jul 2 07:49:58.055561 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(3): oem config not found in "/usr/share/oem", looking on oem partition Jul 2 07:49:58.055561 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(3): op(4): [started] mounting "/dev/disk/by-label/OEM" at 
"/mnt/oem1296817238" Jul 2 07:49:58.055561 ignition[837]: CRITICAL : files: createFilesystemsFiles: createFiles: op(3): op(4): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1296817238": device or resource busy Jul 2 07:49:58.055561 ignition[837]: ERROR : files: createFilesystemsFiles: createFiles: op(3): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem1296817238", trying btrfs: device or resource busy Jul 2 07:49:58.055561 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(3): op(5): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1296817238" Jul 2 07:49:58.055561 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(3): op(5): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1296817238" Jul 2 07:49:58.055561 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(3): op(6): [started] unmounting "/mnt/oem1296817238" Jul 2 07:49:58.017933 unknown[837]: wrote ssh authorized keys file for user: core Jul 2 07:49:58.325541 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(3): op(6): [finished] unmounting "/mnt/oem1296817238" Jul 2 07:49:58.325541 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/hosts" Jul 2 07:49:58.325541 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jul 2 07:49:58.325541 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jul 2 07:49:58.325541 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET result: OK Jul 2 07:49:58.325541 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jul 2 07:49:58.325541 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing 
file "/sysroot/opt/bin/cilium.tar.gz" Jul 2 07:49:58.325541 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Jul 2 07:49:58.266590 systemd-networkd[693]: eth0: Gained IPv6LL Jul 2 07:49:58.598357 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET result: OK Jul 2 07:49:58.757092 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jul 2 07:49:58.772562 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/profile.d/google-cloud-sdk.sh" Jul 2 07:49:58.772562 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(9): oem config not found in "/usr/share/oem", looking on oem partition Jul 2 07:49:58.772562 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(9): op(a): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3419724368" Jul 2 07:49:58.772562 ignition[837]: CRITICAL : files: createFilesystemsFiles: createFiles: op(9): op(a): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3419724368": device or resource busy Jul 2 07:49:58.772562 ignition[837]: ERROR : files: createFilesystemsFiles: createFiles: op(9): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem3419724368", trying btrfs: device or resource busy Jul 2 07:49:58.772562 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(9): op(b): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3419724368" Jul 2 07:49:58.772562 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(9): op(b): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3419724368" Jul 2 07:49:58.772562 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(9): op(c): [started] unmounting "/mnt/oem3419724368" Jul 2 07:49:58.772562 ignition[837]: INFO : 
files: createFilesystemsFiles: createFiles: op(9): op(c): [finished] unmounting "/mnt/oem3419724368" Jul 2 07:49:58.772562 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/profile.d/google-cloud-sdk.sh" Jul 2 07:49:58.772562 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(d): [started] writing file "/sysroot/home/core/install.sh" Jul 2 07:49:58.772562 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(d): [finished] writing file "/sysroot/home/core/install.sh" Jul 2 07:49:58.772562 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/home/core/nginx.yaml" Jul 2 07:49:58.772562 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/home/core/nginx.yaml" Jul 2 07:49:58.772562 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 2 07:49:59.017589 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 2 07:49:59.017589 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(10): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 2 07:49:59.017589 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(10): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 2 07:49:59.017589 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(11): [started] writing file "/sysroot/etc/flatcar/update.conf" Jul 2 07:49:59.017589 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(11): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jul 2 07:49:59.017589 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(12): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> 
"/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw" Jul 2 07:49:59.017589 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(12): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw" Jul 2 07:49:59.017589 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(13): [started] writing file "/sysroot/etc/systemd/system/oem-gce-enable-oslogin.service" Jul 2 07:49:59.017589 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(13): oem config not found in "/usr/share/oem", looking on oem partition Jul 2 07:49:59.017589 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(13): op(14): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem540936525" Jul 2 07:49:59.017589 ignition[837]: CRITICAL : files: createFilesystemsFiles: createFiles: op(13): op(14): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem540936525": device or resource busy Jul 2 07:49:59.017589 ignition[837]: ERROR : files: createFilesystemsFiles: createFiles: op(13): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem540936525", trying btrfs: device or resource busy Jul 2 07:49:59.017589 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(13): op(15): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem540936525" Jul 2 07:49:59.017589 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(13): op(15): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem540936525" Jul 2 07:49:58.790526 systemd[1]: mnt-oem540936525.mount: Deactivated successfully. 
Jul 2 07:49:59.270584 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(13): op(16): [started] unmounting "/mnt/oem540936525" Jul 2 07:49:59.270584 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(13): op(16): [finished] unmounting "/mnt/oem540936525" Jul 2 07:49:59.270584 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(13): [finished] writing file "/sysroot/etc/systemd/system/oem-gce-enable-oslogin.service" Jul 2 07:49:59.270584 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(17): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw" Jul 2 07:49:59.270584 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(17): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.28.7-x86-64.raw: attempt #1 Jul 2 07:49:59.270584 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(17): GET result: OK Jul 2 07:49:59.365558 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(17): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw" Jul 2 07:49:59.365558 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(18): [started] writing file "/sysroot/etc/systemd/system/oem-gce.service" Jul 2 07:49:59.365558 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(18): oem config not found in "/usr/share/oem", looking on oem partition Jul 2 07:49:59.365558 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(18): op(19): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2666236625" Jul 2 07:49:59.365558 ignition[837]: CRITICAL : files: createFilesystemsFiles: createFiles: op(18): op(19): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2666236625": device or resource busy Jul 2 07:49:59.365558 ignition[837]: ERROR : files: createFilesystemsFiles: createFiles: op(18): failed to mount ext4 device 
"/dev/disk/by-label/OEM" at "/mnt/oem2666236625", trying btrfs: device or resource busy Jul 2 07:49:59.365558 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(18): op(1a): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2666236625" Jul 2 07:49:59.365558 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(18): op(1a): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2666236625" Jul 2 07:49:59.365558 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(18): op(1b): [started] unmounting "/mnt/oem2666236625" Jul 2 07:49:59.365558 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(18): op(1b): [finished] unmounting "/mnt/oem2666236625" Jul 2 07:49:59.365558 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(18): [finished] writing file "/sysroot/etc/systemd/system/oem-gce.service" Jul 2 07:49:59.365558 ignition[837]: INFO : files: op(1c): [started] processing unit "coreos-metadata-sshkeys@.service" Jul 2 07:49:59.365558 ignition[837]: INFO : files: op(1c): [finished] processing unit "coreos-metadata-sshkeys@.service" Jul 2 07:49:59.365558 ignition[837]: INFO : files: op(1d): [started] processing unit "oem-gce.service" Jul 2 07:49:59.365558 ignition[837]: INFO : files: op(1d): [finished] processing unit "oem-gce.service" Jul 2 07:49:59.365558 ignition[837]: INFO : files: op(1e): [started] processing unit "oem-gce-enable-oslogin.service" Jul 2 07:49:59.365558 ignition[837]: INFO : files: op(1e): [finished] processing unit "oem-gce-enable-oslogin.service" Jul 2 07:49:59.876583 kernel: kauditd_printk_skb: 27 callbacks suppressed Jul 2 07:49:59.876633 kernel: audit: type=1130 audit(1719906599.390:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 07:49:59.876658 kernel: audit: type=1130 audit(1719906599.488:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:59.876682 kernel: audit: type=1130 audit(1719906599.529:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:59.876708 kernel: audit: type=1131 audit(1719906599.529:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:59.876723 kernel: audit: type=1130 audit(1719906599.651:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:59.876737 kernel: audit: type=1131 audit(1719906599.672:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:59.876752 kernel: audit: type=1130 audit(1719906599.769:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:59.390000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:59.488000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 07:49:59.529000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:59.529000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:59.651000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:59.672000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:59.769000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:59.370122 systemd[1]: Finished ignition-files.service. Jul 2 07:49:59.911573 kernel: audit: type=1131 audit(1719906599.883:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:59.883000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 07:49:59.911676 ignition[837]: INFO : files: op(1f): [started] processing unit "prepare-helm.service" Jul 2 07:49:59.911676 ignition[837]: INFO : files: op(1f): op(20): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 2 07:49:59.911676 ignition[837]: INFO : files: op(1f): op(20): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 2 07:49:59.911676 ignition[837]: INFO : files: op(1f): [finished] processing unit "prepare-helm.service" Jul 2 07:49:59.911676 ignition[837]: INFO : files: op(21): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service " Jul 2 07:49:59.911676 ignition[837]: INFO : files: op(21): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service " Jul 2 07:49:59.911676 ignition[837]: INFO : files: op(22): [started] setting preset to enabled for "oem-gce.service" Jul 2 07:49:59.911676 ignition[837]: INFO : files: op(22): [finished] setting preset to enabled for "oem-gce.service" Jul 2 07:49:59.911676 ignition[837]: INFO : files: op(23): [started] setting preset to enabled for "oem-gce-enable-oslogin.service" Jul 2 07:49:59.911676 ignition[837]: INFO : files: op(23): [finished] setting preset to enabled for "oem-gce-enable-oslogin.service" Jul 2 07:49:59.911676 ignition[837]: INFO : files: op(24): [started] setting preset to enabled for "prepare-helm.service" Jul 2 07:49:59.911676 ignition[837]: INFO : files: op(24): [finished] setting preset to enabled for "prepare-helm.service" Jul 2 07:49:59.911676 ignition[837]: INFO : files: createResultFile: createFiles: op(25): [started] writing file "/sysroot/etc/.ignition-result.json" Jul 2 07:49:59.911676 ignition[837]: INFO : files: createResultFile: createFiles: op(25): [finished] writing file "/sysroot/etc/.ignition-result.json" Jul 2 07:49:59.911676 ignition[837]: INFO : files: files passed Jul 2 07:49:59.911676 ignition[837]: INFO : Ignition 
finished successfully Jul 2 07:50:00.247553 kernel: audit: type=1131 audit(1719906600.135:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:50:00.247602 kernel: audit: type=1131 audit(1719906600.197:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:50:00.135000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:50:00.197000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:50:00.231000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:59.401025 systemd[1]: Starting initrd-setup-root-after-ignition.service... Jul 2 07:50:00.254000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:59.433697 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Jul 2 07:50:00.282642 initrd-setup-root-after-ignition[860]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 2 07:49:59.434719 systemd[1]: Starting ignition-quench.service... 
Jul 2 07:50:00.310000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:59.462890 systemd[1]: Finished initrd-setup-root-after-ignition.service. Jul 2 07:49:59.489893 systemd[1]: ignition-quench.service: Deactivated successfully. Jul 2 07:50:00.341000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:59.490028 systemd[1]: Finished ignition-quench.service. Jul 2 07:50:00.358000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:59.530904 systemd[1]: Reached target ignition-complete.target. Jul 2 07:49:59.597752 systemd[1]: Starting initrd-parse-etc.service... Jul 2 07:50:00.382000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:59.642758 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jul 2 07:50:00.397000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 07:50:00.407702 ignition[875]: INFO : Ignition 2.14.0 Jul 2 07:50:00.407702 ignition[875]: INFO : Stage: umount Jul 2 07:50:00.407702 ignition[875]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Jul 2 07:50:00.407702 ignition[875]: DEBUG : parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6 Jul 2 07:50:00.407702 ignition[875]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jul 2 07:50:00.407702 ignition[875]: INFO : umount: umount passed Jul 2 07:50:00.407702 ignition[875]: INFO : Ignition finished successfully Jul 2 07:50:00.414000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:50:00.421000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:50:00.433000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:50:00.475000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:59.642872 systemd[1]: Finished initrd-parse-etc.service. Jul 2 07:49:59.674158 systemd[1]: Reached target initrd-fs.target. Jul 2 07:49:59.707748 systemd[1]: Reached target initrd.target. Jul 2 07:49:59.728779 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Jul 2 07:49:59.729932 systemd[1]: Starting dracut-pre-pivot.service... 
Jul 2 07:50:00.578000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:59.757883 systemd[1]: Finished dracut-pre-pivot.service. Jul 2 07:50:00.600000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:59.771994 systemd[1]: Starting initrd-cleanup.service... Jul 2 07:49:59.817353 systemd[1]: Stopped target nss-lookup.target. Jul 2 07:49:59.823824 systemd[1]: Stopped target remote-cryptsetup.target. Jul 2 07:49:59.840857 systemd[1]: Stopped target timers.target. Jul 2 07:50:00.651000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:59.858844 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jul 2 07:50:00.666000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:59.859023 systemd[1]: Stopped dracut-pre-pivot.service. Jul 2 07:50:00.682000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:50:00.682000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:50:00.682000 audit: BPF prog-id=6 op=UNLOAD Jul 2 07:49:59.885011 systemd[1]: Stopped target initrd.target. Jul 2 07:49:59.918811 systemd[1]: Stopped target basic.target. 
Jul 2 07:49:59.929870 systemd[1]: Stopped target ignition-complete.target. Jul 2 07:50:00.726000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:59.975829 systemd[1]: Stopped target ignition-diskful.target. Jul 2 07:50:00.742000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:49:59.986885 systemd[1]: Stopped target initrd-root-device.target. Jul 2 07:50:00.758000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:50:00.005893 systemd[1]: Stopped target remote-fs.target. Jul 2 07:50:00.026859 systemd[1]: Stopped target remote-fs-pre.target. Jul 2 07:50:00.782000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:50:00.044910 systemd[1]: Stopped target sysinit.target. Jul 2 07:50:00.062862 systemd[1]: Stopped target local-fs.target. Jul 2 07:50:00.081841 systemd[1]: Stopped target local-fs-pre.target. Jul 2 07:50:00.826000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:50:00.100848 systemd[1]: Stopped target swap.target. Jul 2 07:50:00.848000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 07:50:00.118775 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 2 07:50:00.863000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:50:00.118966 systemd[1]: Stopped dracut-pre-mount.service. Jul 2 07:50:00.137013 systemd[1]: Stopped target cryptsetup.target. Jul 2 07:50:00.177812 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 2 07:50:00.901000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:50:00.177994 systemd[1]: Stopped dracut-initqueue.service. Jul 2 07:50:00.917000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:50:00.198972 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 2 07:50:00.932000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:50:00.932000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:50:00.199223 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Jul 2 07:50:00.232914 systemd[1]: ignition-files.service: Deactivated successfully. Jul 2 07:50:00.233082 systemd[1]: Stopped ignition-files.service. Jul 2 07:50:00.257190 systemd[1]: Stopping ignition-mount.service... Jul 2 07:50:00.289553 systemd[1]: kmod-static-nodes.service: Deactivated successfully. 
Jul 2 07:50:01.006674 systemd-journald[188]: Received SIGTERM from PID 1 (n/a). Jul 2 07:50:00.289766 systemd[1]: Stopped kmod-static-nodes.service. Jul 2 07:50:01.013558 iscsid[703]: iscsid shutting down. Jul 2 07:50:00.312900 systemd[1]: Stopping sysroot-boot.service... Jul 2 07:50:00.325546 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 2 07:50:00.325802 systemd[1]: Stopped systemd-udev-trigger.service. Jul 2 07:50:00.342839 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 2 07:50:00.343010 systemd[1]: Stopped dracut-pre-trigger.service. Jul 2 07:50:00.363374 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 2 07:50:00.364647 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 2 07:50:00.364754 systemd[1]: Stopped ignition-mount.service. Jul 2 07:50:00.384263 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 2 07:50:00.384369 systemd[1]: Stopped sysroot-boot.service. Jul 2 07:50:00.399334 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 2 07:50:00.399496 systemd[1]: Stopped ignition-disks.service. Jul 2 07:50:00.415694 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 2 07:50:00.415757 systemd[1]: Stopped ignition-kargs.service. Jul 2 07:50:00.422802 systemd[1]: ignition-fetch.service: Deactivated successfully. Jul 2 07:50:00.422900 systemd[1]: Stopped ignition-fetch.service. Jul 2 07:50:00.434757 systemd[1]: Stopped target network.target. Jul 2 07:50:00.451717 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 2 07:50:00.451788 systemd[1]: Stopped ignition-fetch-offline.service. Jul 2 07:50:00.476795 systemd[1]: Stopped target paths.target. Jul 2 07:50:00.493672 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 2 07:50:00.497534 systemd[1]: Stopped systemd-ask-password-console.path. Jul 2 07:50:00.505680 systemd[1]: Stopped target slices.target. 
Jul 2 07:50:00.527609 systemd[1]: Stopped target sockets.target. Jul 2 07:50:00.542665 systemd[1]: iscsid.socket: Deactivated successfully. Jul 2 07:50:00.542713 systemd[1]: Closed iscsid.socket. Jul 2 07:50:00.549850 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 2 07:50:00.549890 systemd[1]: Closed iscsiuio.socket. Jul 2 07:50:00.562752 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 2 07:50:00.562816 systemd[1]: Stopped ignition-setup.service. Jul 2 07:50:00.579811 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 2 07:50:00.579874 systemd[1]: Stopped initrd-setup-root.service. Jul 2 07:50:00.601983 systemd[1]: Stopping systemd-networkd.service... Jul 2 07:50:00.605477 systemd-networkd[693]: eth0: DHCPv6 lease lost Jul 2 07:50:00.616838 systemd[1]: Stopping systemd-resolved.service... Jul 2 07:50:00.631232 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 2 07:50:00.631359 systemd[1]: Stopped systemd-resolved.service. Jul 2 07:50:00.653312 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 2 07:50:00.653452 systemd[1]: Stopped systemd-networkd.service. Jul 2 07:50:00.668396 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 2 07:50:00.668537 systemd[1]: Finished initrd-cleanup.service. Jul 2 07:50:00.684814 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 2 07:50:00.684855 systemd[1]: Closed systemd-networkd.socket. Jul 2 07:50:00.699582 systemd[1]: Stopping network-cleanup.service... Jul 2 07:50:00.712517 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 2 07:50:00.712616 systemd[1]: Stopped parse-ip-for-networkd.service. Jul 2 07:50:00.727681 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 2 07:50:00.727755 systemd[1]: Stopped systemd-sysctl.service. Jul 2 07:50:00.743768 systemd[1]: systemd-modules-load.service: Deactivated successfully. 
Jul 2 07:50:00.743831 systemd[1]: Stopped systemd-modules-load.service. Jul 2 07:50:00.759780 systemd[1]: Stopping systemd-udevd.service... Jul 2 07:50:00.778187 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jul 2 07:50:00.778857 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 2 07:50:00.779003 systemd[1]: Stopped systemd-udevd.service. Jul 2 07:50:00.785144 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 2 07:50:00.785255 systemd[1]: Closed systemd-udevd-control.socket. Jul 2 07:50:00.805683 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 2 07:50:00.805735 systemd[1]: Closed systemd-udevd-kernel.socket. Jul 2 07:50:00.820636 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 2 07:50:00.820696 systemd[1]: Stopped dracut-pre-udev.service. Jul 2 07:50:00.827742 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 2 07:50:00.827795 systemd[1]: Stopped dracut-cmdline.service. Jul 2 07:50:00.849707 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 2 07:50:00.849772 systemd[1]: Stopped dracut-cmdline-ask.service. Jul 2 07:50:00.865671 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Jul 2 07:50:00.887530 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 2 07:50:00.887629 systemd[1]: Stopped systemd-vconsole-setup.service. Jul 2 07:50:00.903132 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 2 07:50:00.903261 systemd[1]: Stopped network-cleanup.service. Jul 2 07:50:00.918901 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 2 07:50:00.919013 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Jul 2 07:50:00.933797 systemd[1]: Reached target initrd-switch-root.target. Jul 2 07:50:00.952796 systemd[1]: Starting initrd-switch-root.service... Jul 2 07:50:00.974560 systemd[1]: Switching root. 
Jul 2 07:50:01.016661 systemd-journald[188]: Journal stopped Jul 2 07:50:05.556095 kernel: SELinux: Class mctp_socket not defined in policy. Jul 2 07:50:05.556251 kernel: SELinux: Class anon_inode not defined in policy. Jul 2 07:50:05.556291 kernel: SELinux: the above unknown classes and permissions will be allowed Jul 2 07:50:05.556319 kernel: SELinux: policy capability network_peer_controls=1 Jul 2 07:50:05.556345 kernel: SELinux: policy capability open_perms=1 Jul 2 07:50:05.556372 kernel: SELinux: policy capability extended_socket_class=1 Jul 2 07:50:05.556399 kernel: SELinux: policy capability always_check_network=0 Jul 2 07:50:05.556452 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 2 07:50:05.556486 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 2 07:50:05.556511 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 2 07:50:05.556536 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 2 07:50:05.556565 systemd[1]: Successfully loaded SELinux policy in 111.090ms. Jul 2 07:50:05.556616 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.238ms. Jul 2 07:50:05.556644 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Jul 2 07:50:05.556676 systemd[1]: Detected virtualization kvm. Jul 2 07:50:05.556702 systemd[1]: Detected architecture x86-64. Jul 2 07:50:05.556727 systemd[1]: Detected first boot. Jul 2 07:50:05.556761 systemd[1]: Initializing machine ID from VM UUID. Jul 2 07:50:05.556794 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Jul 2 07:50:05.556823 systemd[1]: Populated /etc with preset unit settings. 
Jul 2 07:50:05.556852 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Jul 2 07:50:05.556889 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Jul 2 07:50:05.556927 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 2 07:50:05.556973 kernel: kauditd_printk_skb: 46 callbacks suppressed
Jul 2 07:50:05.556998 kernel: audit: type=1334 audit(1719906604.645:87): prog-id=12 op=LOAD
Jul 2 07:50:05.557023 kernel: audit: type=1334 audit(1719906604.645:88): prog-id=3 op=UNLOAD
Jul 2 07:50:05.557048 kernel: audit: type=1334 audit(1719906604.651:89): prog-id=13 op=LOAD
Jul 2 07:50:05.557075 kernel: audit: type=1334 audit(1719906604.658:90): prog-id=14 op=LOAD
Jul 2 07:50:05.557100 kernel: audit: type=1334 audit(1719906604.658:91): prog-id=4 op=UNLOAD
Jul 2 07:50:05.557127 kernel: audit: type=1334 audit(1719906604.658:92): prog-id=5 op=UNLOAD
Jul 2 07:50:05.557153 kernel: audit: type=1334 audit(1719906604.665:93): prog-id=15 op=LOAD
Jul 2 07:50:05.557180 systemd[1]: iscsiuio.service: Deactivated successfully.
Jul 2 07:50:05.557212 kernel: audit: type=1334 audit(1719906604.665:94): prog-id=12 op=UNLOAD
Jul 2 07:50:05.557239 kernel: audit: type=1334 audit(1719906604.672:95): prog-id=16 op=LOAD
Jul 2 07:50:05.557267 systemd[1]: Stopped iscsiuio.service.
Jul 2 07:50:05.557294 kernel: audit: type=1334 audit(1719906604.678:96): prog-id=17 op=LOAD
Jul 2 07:50:05.557322 systemd[1]: iscsid.service: Deactivated successfully.
Jul 2 07:50:05.557351 systemd[1]: Stopped iscsid.service.
Jul 2 07:50:05.557384 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jul 2 07:50:05.557411 systemd[1]: Stopped initrd-switch-root.service.
Jul 2 07:50:05.557502 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jul 2 07:50:05.557541 systemd[1]: Created slice system-addon\x2dconfig.slice.
Jul 2 07:50:05.557571 systemd[1]: Created slice system-addon\x2drun.slice.
Jul 2 07:50:05.557600 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice.
Jul 2 07:50:05.557629 systemd[1]: Created slice system-getty.slice.
Jul 2 07:50:05.557657 systemd[1]: Created slice system-modprobe.slice.
Jul 2 07:50:05.557685 systemd[1]: Created slice system-serial\x2dgetty.slice.
Jul 2 07:50:05.557712 systemd[1]: Created slice system-system\x2dcloudinit.slice.
Jul 2 07:50:05.557746 systemd[1]: Created slice system-systemd\x2dfsck.slice.
Jul 2 07:50:05.557773 systemd[1]: Created slice user.slice.
Jul 2 07:50:05.557801 systemd[1]: Started systemd-ask-password-console.path.
Jul 2 07:50:05.557835 systemd[1]: Started systemd-ask-password-wall.path.
Jul 2 07:50:05.557862 systemd[1]: Set up automount boot.automount.
Jul 2 07:50:05.557890 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
Jul 2 07:50:05.557916 systemd[1]: Stopped target initrd-switch-root.target.
Jul 2 07:50:05.557944 systemd[1]: Stopped target initrd-fs.target.
Jul 2 07:50:05.557983 systemd[1]: Stopped target initrd-root-fs.target.
Jul 2 07:50:05.558016 systemd[1]: Reached target integritysetup.target.
Jul 2 07:50:05.558044 systemd[1]: Reached target remote-cryptsetup.target.
Jul 2 07:50:05.558072 systemd[1]: Reached target remote-fs.target.
Jul 2 07:50:05.558101 systemd[1]: Reached target slices.target.
Jul 2 07:50:05.558128 systemd[1]: Reached target swap.target.
Jul 2 07:50:05.558154 systemd[1]: Reached target torcx.target.
Jul 2 07:50:05.558181 systemd[1]: Reached target veritysetup.target.
Jul 2 07:50:05.558208 systemd[1]: Listening on systemd-coredump.socket.
Jul 2 07:50:05.558237 systemd[1]: Listening on systemd-initctl.socket.
Jul 2 07:50:05.558267 systemd[1]: Listening on systemd-networkd.socket.
Jul 2 07:50:05.558302 systemd[1]: Listening on systemd-udevd-control.socket.
Jul 2 07:50:05.558328 systemd[1]: Listening on systemd-udevd-kernel.socket.
Jul 2 07:50:05.558353 systemd[1]: Listening on systemd-userdbd.socket.
Jul 2 07:50:05.558382 systemd[1]: Mounting dev-hugepages.mount...
Jul 2 07:50:05.558409 systemd[1]: Mounting dev-mqueue.mount...
Jul 2 07:50:05.558464 systemd[1]: Mounting media.mount...
Jul 2 07:50:05.558493 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 2 07:50:05.558520 systemd[1]: Mounting sys-kernel-debug.mount...
Jul 2 07:50:05.558549 systemd[1]: Mounting sys-kernel-tracing.mount...
Jul 2 07:50:05.558582 systemd[1]: Mounting tmp.mount...
Jul 2 07:50:05.558610 systemd[1]: Starting flatcar-tmpfiles.service...
Jul 2 07:50:05.558637 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Jul 2 07:50:05.558666 systemd[1]: Starting kmod-static-nodes.service...
Jul 2 07:50:05.558694 systemd[1]: Starting modprobe@configfs.service...
Jul 2 07:50:05.558721 systemd[1]: Starting modprobe@dm_mod.service...
Jul 2 07:50:05.558749 systemd[1]: Starting modprobe@drm.service...
Jul 2 07:50:05.558776 systemd[1]: Starting modprobe@efi_pstore.service...
Jul 2 07:50:05.558804 systemd[1]: Starting modprobe@fuse.service...
Jul 2 07:50:05.558837 systemd[1]: Starting modprobe@loop.service...
Jul 2 07:50:05.558862 kernel: fuse: init (API version 7.34)
Jul 2 07:50:05.558893 kernel: loop: module loaded
Jul 2 07:50:05.558920 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jul 2 07:50:05.558953 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jul 2 07:50:05.558989 systemd[1]: Stopped systemd-fsck-root.service.
Jul 2 07:50:05.559017 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jul 2 07:50:05.559046 systemd[1]: Stopped systemd-fsck-usr.service.
Jul 2 07:50:05.559074 systemd[1]: Stopped systemd-journald.service.
Jul 2 07:50:05.559108 systemd[1]: Starting systemd-journald.service...
Jul 2 07:50:05.559136 systemd[1]: Starting systemd-modules-load.service...
Jul 2 07:50:05.559164 systemd[1]: Starting systemd-network-generator.service...
Jul 2 07:50:05.559192 systemd-journald[999]: Journal started
Jul 2 07:50:05.559290 systemd-journald[999]: Runtime Journal (/run/log/journal/8c47b993e4881fe104508a0c2e74cb81) is 8.0M, max 148.8M, 140.8M free.
Jul 2 07:50:01.015000 audit: BPF prog-id=9 op=UNLOAD
Jul 2 07:50:01.293000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1
Jul 2 07:50:01.439000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Jul 2 07:50:01.439000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Jul 2 07:50:01.439000 audit: BPF prog-id=10 op=LOAD
Jul 2 07:50:01.439000 audit: BPF prog-id=10 op=UNLOAD
Jul 2 07:50:01.440000 audit: BPF prog-id=11 op=LOAD
Jul 2 07:50:01.440000 audit: BPF prog-id=11 op=UNLOAD
Jul 2 07:50:01.583000 audit[908]: AVC avc: denied { associate } for pid=908 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023"
Jul 2 07:50:01.583000 audit[908]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c00014d8b2 a1=c0000cede0 a2=c0000d70c0 a3=32 items=0 ppid=891 pid=908 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 2 07:50:01.583000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Jul 2 07:50:01.593000 audit[908]: AVC avc: denied { associate } for pid=908 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1
Jul 2 07:50:01.593000 audit[908]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c00014d989 a2=1ed a3=0 items=2 ppid=891 pid=908 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 2 07:50:01.593000 audit: CWD cwd="/"
Jul 2 07:50:01.593000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul 2 07:50:01.593000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul 2 07:50:01.593000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Jul 2 07:50:04.645000 audit: BPF prog-id=12 op=LOAD
Jul 2 07:50:04.645000 audit: BPF prog-id=3 op=UNLOAD
Jul 2 07:50:04.651000 audit: BPF prog-id=13 op=LOAD
Jul 2 07:50:04.658000 audit: BPF prog-id=14 op=LOAD
Jul 2 07:50:04.658000 audit: BPF prog-id=4 op=UNLOAD
Jul 2 07:50:04.658000 audit: BPF prog-id=5 op=UNLOAD
Jul 2 07:50:04.665000 audit: BPF prog-id=15 op=LOAD
Jul 2 07:50:04.665000 audit: BPF prog-id=12 op=UNLOAD
Jul 2 07:50:04.672000 audit: BPF prog-id=16 op=LOAD
Jul 2 07:50:04.678000 audit: BPF prog-id=17 op=LOAD
Jul 2 07:50:04.678000 audit: BPF prog-id=13 op=UNLOAD
Jul 2 07:50:04.678000 audit: BPF prog-id=14 op=UNLOAD
Jul 2 07:50:04.680000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:50:04.732000 audit: BPF prog-id=15 op=UNLOAD
Jul 2 07:50:04.740000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:50:04.758000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:50:04.780000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:50:04.780000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:50:05.474000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:50:05.495000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:50:05.509000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:50:05.509000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:50:05.510000 audit: BPF prog-id=18 op=LOAD
Jul 2 07:50:05.510000 audit: BPF prog-id=19 op=LOAD
Jul 2 07:50:05.510000 audit: BPF prog-id=20 op=LOAD
Jul 2 07:50:05.510000 audit: BPF prog-id=16 op=UNLOAD
Jul 2 07:50:05.510000 audit: BPF prog-id=17 op=UNLOAD
Jul 2 07:50:05.552000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Jul 2 07:50:05.552000 audit[999]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7ffcc1c177b0 a2=4000 a3=7ffcc1c1784c items=0 ppid=1 pid=999 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 2 07:50:05.552000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
Jul 2 07:50:04.645136 systemd[1]: Queued start job for default target multi-user.target.
Jul 2 07:50:01.578838 /usr/lib/systemd/system-generators/torcx-generator[908]: time="2024-07-02T07:50:01Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.5 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.5 /var/lib/torcx/store]"
Jul 2 07:50:04.681364 systemd[1]: systemd-journald.service: Deactivated successfully.
Jul 2 07:50:01.579829 /usr/lib/systemd/system-generators/torcx-generator[908]: time="2024-07-02T07:50:01Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Jul 2 07:50:01.579864 /usr/lib/systemd/system-generators/torcx-generator[908]: time="2024-07-02T07:50:01Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Jul 2 07:50:01.579917 /usr/lib/systemd/system-generators/torcx-generator[908]: time="2024-07-02T07:50:01Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12"
Jul 2 07:50:01.579937 /usr/lib/systemd/system-generators/torcx-generator[908]: time="2024-07-02T07:50:01Z" level=debug msg="skipped missing lower profile" missing profile=oem
Jul 2 07:50:01.579991 /usr/lib/systemd/system-generators/torcx-generator[908]: time="2024-07-02T07:50:01Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory"
Jul 2 07:50:01.580018 /usr/lib/systemd/system-generators/torcx-generator[908]: time="2024-07-02T07:50:01Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)=
Jul 2 07:50:01.580323 /usr/lib/systemd/system-generators/torcx-generator[908]: time="2024-07-02T07:50:01Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack
Jul 2 07:50:01.580410 /usr/lib/systemd/system-generators/torcx-generator[908]: time="2024-07-02T07:50:01Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Jul 2 07:50:01.580458 /usr/lib/systemd/system-generators/torcx-generator[908]: time="2024-07-02T07:50:01Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Jul 2 07:50:01.583029 /usr/lib/systemd/system-generators/torcx-generator[908]: time="2024-07-02T07:50:01Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10
Jul 2 07:50:01.583098 /usr/lib/systemd/system-generators/torcx-generator[908]: time="2024-07-02T07:50:01Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl
Jul 2 07:50:01.583132 /usr/lib/systemd/system-generators/torcx-generator[908]: time="2024-07-02T07:50:01Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.5: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.5
Jul 2 07:50:01.583161 /usr/lib/systemd/system-generators/torcx-generator[908]: time="2024-07-02T07:50:01Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store
Jul 2 07:50:01.583196 /usr/lib/systemd/system-generators/torcx-generator[908]: time="2024-07-02T07:50:01Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.5: no such file or directory" path=/var/lib/torcx/store/3510.3.5
Jul 2 07:50:01.583223 /usr/lib/systemd/system-generators/torcx-generator[908]: time="2024-07-02T07:50:01Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store
Jul 2 07:50:04.054438 /usr/lib/systemd/system-generators/torcx-generator[908]: time="2024-07-02T07:50:04Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Jul 2 07:50:04.054782 /usr/lib/systemd/system-generators/torcx-generator[908]: time="2024-07-02T07:50:04Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Jul 2 07:50:04.054933 /usr/lib/systemd/system-generators/torcx-generator[908]: time="2024-07-02T07:50:04Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Jul 2 07:50:04.055201 /usr/lib/systemd/system-generators/torcx-generator[908]: time="2024-07-02T07:50:04Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Jul 2 07:50:04.055268 /usr/lib/systemd/system-generators/torcx-generator[908]: time="2024-07-02T07:50:04Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile=
Jul 2 07:50:04.055349 /usr/lib/systemd/system-generators/torcx-generator[908]: time="2024-07-02T07:50:04Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx
Jul 2 07:50:05.568456 systemd[1]: Starting systemd-remount-fs.service...
Jul 2 07:50:05.583455 systemd[1]: Starting systemd-udev-trigger.service...
Jul 2 07:50:05.596458 systemd[1]: verity-setup.service: Deactivated successfully.
Jul 2 07:50:05.602450 systemd[1]: Stopped verity-setup.service.
Jul 2 07:50:05.607000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:50:05.621445 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 2 07:50:05.630457 systemd[1]: Started systemd-journald.service.
Jul 2 07:50:05.637000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:50:05.639971 systemd[1]: Mounted dev-hugepages.mount.
Jul 2 07:50:05.646757 systemd[1]: Mounted dev-mqueue.mount.
Jul 2 07:50:05.653736 systemd[1]: Mounted media.mount.
Jul 2 07:50:05.660719 systemd[1]: Mounted sys-kernel-debug.mount.
Jul 2 07:50:05.669732 systemd[1]: Mounted sys-kernel-tracing.mount.
Jul 2 07:50:05.678719 systemd[1]: Mounted tmp.mount.
Jul 2 07:50:05.685882 systemd[1]: Finished flatcar-tmpfiles.service.
Jul 2 07:50:05.693000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:50:05.694912 systemd[1]: Finished kmod-static-nodes.service.
Jul 2 07:50:05.702000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:50:05.703870 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jul 2 07:50:05.704072 systemd[1]: Finished modprobe@configfs.service.
Jul 2 07:50:05.711000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:50:05.711000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:50:05.712874 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 2 07:50:05.713073 systemd[1]: Finished modprobe@dm_mod.service.
Jul 2 07:50:05.720000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:50:05.720000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:50:05.721891 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 2 07:50:05.722175 systemd[1]: Finished modprobe@drm.service.
Jul 2 07:50:05.729000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:50:05.729000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:50:05.730899 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 2 07:50:05.731113 systemd[1]: Finished modprobe@efi_pstore.service.
Jul 2 07:50:05.738000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:50:05.738000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:50:05.739854 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jul 2 07:50:05.740054 systemd[1]: Finished modprobe@fuse.service.
Jul 2 07:50:05.747000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:50:05.747000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:50:05.748954 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 2 07:50:05.749169 systemd[1]: Finished modprobe@loop.service.
Jul 2 07:50:05.756000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:50:05.756000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:50:05.757941 systemd[1]: Finished systemd-modules-load.service.
Jul 2 07:50:05.765000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:50:05.766863 systemd[1]: Finished systemd-network-generator.service.
Jul 2 07:50:05.774000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:50:05.775895 systemd[1]: Finished systemd-remount-fs.service.
Jul 2 07:50:05.783000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:50:05.784864 systemd[1]: Finished systemd-udev-trigger.service.
Jul 2 07:50:05.792000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:50:05.794184 systemd[1]: Reached target network-pre.target.
Jul 2 07:50:05.803992 systemd[1]: Mounting sys-fs-fuse-connections.mount...
Jul 2 07:50:05.814003 systemd[1]: Mounting sys-kernel-config.mount...
Jul 2 07:50:05.821550 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jul 2 07:50:05.825860 systemd[1]: Starting systemd-hwdb-update.service...
Jul 2 07:50:05.834924 systemd[1]: Starting systemd-journal-flush.service...
Jul 2 07:50:05.844564 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 2 07:50:05.846226 systemd[1]: Starting systemd-random-seed.service...
Jul 2 07:50:05.855442 systemd-journald[999]: Time spent on flushing to /var/log/journal/8c47b993e4881fe104508a0c2e74cb81 is 51.402ms for 1150 entries.
Jul 2 07:50:05.855442 systemd-journald[999]: System Journal (/var/log/journal/8c47b993e4881fe104508a0c2e74cb81) is 8.0M, max 584.8M, 576.8M free.
Jul 2 07:50:05.960000 systemd-journald[999]: Received client request to flush runtime journal.
Jul 2 07:50:05.917000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:50:05.938000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:50:05.947000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:50:05.853584 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Jul 2 07:50:05.855138 systemd[1]: Starting systemd-sysctl.service...
Jul 2 07:50:05.872114 systemd[1]: Starting systemd-sysusers.service...
Jul 2 07:50:05.881043 systemd[1]: Starting systemd-udev-settle.service...
Jul 2 07:50:05.962335 udevadm[1013]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Jul 2 07:50:05.892496 systemd[1]: Mounted sys-fs-fuse-connections.mount.
Jul 2 07:50:05.901697 systemd[1]: Mounted sys-kernel-config.mount.
Jul 2 07:50:05.909895 systemd[1]: Finished systemd-random-seed.service.
Jul 2 07:50:05.922118 systemd[1]: Reached target first-boot-complete.target.
Jul 2 07:50:05.931104 systemd[1]: Finished systemd-sysctl.service.
Jul 2 07:50:05.940149 systemd[1]: Finished systemd-sysusers.service.
Jul 2 07:50:05.961199 systemd[1]: Finished systemd-journal-flush.service.
Jul 2 07:50:05.969000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:50:06.502044 systemd[1]: Finished systemd-hwdb-update.service.
Jul 2 07:50:06.510000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:50:06.510000 audit: BPF prog-id=21 op=LOAD
Jul 2 07:50:06.511000 audit: BPF prog-id=22 op=LOAD
Jul 2 07:50:06.511000 audit: BPF prog-id=7 op=UNLOAD
Jul 2 07:50:06.511000 audit: BPF prog-id=8 op=UNLOAD
Jul 2 07:50:06.513392 systemd[1]: Starting systemd-udevd.service...
Jul 2 07:50:06.535138 systemd-udevd[1017]: Using default interface naming scheme 'v252'.
Jul 2 07:50:06.580774 systemd[1]: Started systemd-udevd.service.
Jul 2 07:50:06.588000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:50:06.590000 audit: BPF prog-id=23 op=LOAD
Jul 2 07:50:06.593001 systemd[1]: Starting systemd-networkd.service...
Jul 2 07:50:06.605000 audit: BPF prog-id=24 op=LOAD
Jul 2 07:50:06.606000 audit: BPF prog-id=25 op=LOAD
Jul 2 07:50:06.606000 audit: BPF prog-id=26 op=LOAD
Jul 2 07:50:06.608732 systemd[1]: Starting systemd-userdbd.service...
Jul 2 07:50:06.669827 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped.
Jul 2 07:50:06.692702 systemd[1]: Started systemd-userdbd.service.
Jul 2 07:50:06.700000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:50:06.773444 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Jul 2 07:50:06.777000 audit[1029]: AVC avc: denied { confidentiality } for pid=1029 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1
Jul 2 07:50:06.832456 kernel: ACPI: button: Power Button [PWRF]
Jul 2 07:50:06.841447 kernel: EDAC MC: Ver: 3.0.0
Jul 2 07:50:06.841879 systemd-networkd[1031]: lo: Link UP
Jul 2 07:50:06.841903 systemd-networkd[1031]: lo: Gained carrier
Jul 2 07:50:06.842673 systemd-networkd[1031]: Enumeration completed
Jul 2 07:50:06.842834 systemd[1]: Started systemd-networkd.service.
Jul 2 07:50:06.843160 systemd-networkd[1031]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 2 07:50:06.845147 systemd-networkd[1031]: eth0: Link UP
Jul 2 07:50:06.845163 systemd-networkd[1031]: eth0: Gained carrier
Jul 2 07:50:06.850000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:50:06.777000 audit[1029]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=555742e726e0 a1=3207c a2=7f1fff017bc5 a3=5 items=108 ppid=1017 pid=1029 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 2 07:50:06.777000 audit: CWD cwd="/"
Jul 2 07:50:06.777000 audit: PATH item=0 name=(null) inode=40 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul 2 07:50:06.777000 audit: PATH item=1 name=(null) inode=14113 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul 2 07:50:06.777000 audit: PATH item=2 name=(null) inode=14113 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul 2 07:50:06.777000 audit: PATH item=3 name=(null) inode=14114 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul 2 07:50:06.777000 audit: PATH item=4 name=(null) inode=14113 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul 2 07:50:06.777000 audit: PATH item=5 name=(null) inode=14115 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul 2 07:50:06.777000 audit: PATH item=6 name=(null) inode=14113 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul 2 07:50:06.777000 audit: PATH item=7 name=(null) inode=14116 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul 2 07:50:06.777000 audit: PATH item=8 name=(null) inode=14116 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul 2 07:50:06.777000 audit: PATH item=9 name=(null) inode=14117 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul 2 07:50:06.777000 audit: PATH item=10 name=(null) inode=14116 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul 2 07:50:06.777000 audit: PATH item=11 name=(null) inode=14118 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul 2 07:50:06.777000 audit: PATH item=12 name=(null) inode=14116 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul 2 07:50:06.777000 audit: PATH item=13 name=(null) inode=14119 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul 2 07:50:06.777000 audit: PATH item=14 name=(null) inode=14116 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul 2 07:50:06.777000 audit: PATH item=15 name=(null) inode=14120 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul 2 07:50:06.777000 audit: PATH item=16 name=(null) inode=14116 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul 2 07:50:06.777000 audit: PATH item=17 name=(null) inode=14121 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul 2 07:50:06.777000 audit: PATH item=18 name=(null) inode=14113 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul 2 07:50:06.777000 audit: PATH item=19 name=(null) inode=14122 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul 2 07:50:06.777000 audit: PATH item=20 name=(null) inode=14122 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul 2 07:50:06.777000 audit: PATH item=21 name=(null) inode=14123 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul 2 07:50:06.777000 audit: PATH item=22 name=(null) inode=14122 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul 2 07:50:06.777000 audit: PATH item=23 name=(null) inode=14124 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul 2 07:50:06.777000 audit: PATH item=24 name=(null) inode=14122 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul 2 07:50:06.777000 audit: PATH item=25 name=(null) inode=14125 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE
cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:50:06.777000 audit: PATH item=26 name=(null) inode=14122 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:50:06.777000 audit: PATH item=27 name=(null) inode=14126 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:50:06.777000 audit: PATH item=28 name=(null) inode=14122 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:50:06.777000 audit: PATH item=29 name=(null) inode=14127 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:50:06.777000 audit: PATH item=30 name=(null) inode=14113 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:50:06.777000 audit: PATH item=31 name=(null) inode=14128 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:50:06.777000 audit: PATH item=32 name=(null) inode=14128 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:50:06.777000 audit: PATH item=33 name=(null) inode=14129 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:50:06.777000 audit: PATH item=34 name=(null) inode=14128 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 
Jul 2 07:50:06.777000 audit: PATH item=35 name=(null) inode=14130 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:50:06.777000 audit: PATH item=36 name=(null) inode=14128 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:50:06.777000 audit: PATH item=37 name=(null) inode=14131 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:50:06.777000 audit: PATH item=38 name=(null) inode=14128 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:50:06.777000 audit: PATH item=39 name=(null) inode=14132 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:50:06.777000 audit: PATH item=40 name=(null) inode=14128 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:50:06.777000 audit: PATH item=41 name=(null) inode=14133 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:50:06.777000 audit: PATH item=42 name=(null) inode=14113 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:50:06.777000 audit: PATH item=43 name=(null) inode=14134 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:50:06.777000 audit: PATH item=44 
name=(null) inode=14134 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:50:06.777000 audit: PATH item=45 name=(null) inode=14135 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:50:06.777000 audit: PATH item=46 name=(null) inode=14134 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:50:06.777000 audit: PATH item=47 name=(null) inode=14136 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:50:06.777000 audit: PATH item=48 name=(null) inode=14134 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:50:06.777000 audit: PATH item=49 name=(null) inode=14137 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:50:06.777000 audit: PATH item=50 name=(null) inode=14134 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:50:06.777000 audit: PATH item=51 name=(null) inode=14138 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:50:06.777000 audit: PATH item=52 name=(null) inode=14134 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:50:06.777000 audit: PATH item=53 name=(null) inode=14139 dev=00:0b mode=0100440 
ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:50:06.777000 audit: PATH item=54 name=(null) inode=40 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:50:06.777000 audit: PATH item=55 name=(null) inode=14140 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:50:06.777000 audit: PATH item=56 name=(null) inode=14140 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:50:06.777000 audit: PATH item=57 name=(null) inode=14141 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:50:06.777000 audit: PATH item=58 name=(null) inode=14140 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:50:06.777000 audit: PATH item=59 name=(null) inode=14142 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:50:06.777000 audit: PATH item=60 name=(null) inode=14140 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:50:06.777000 audit: PATH item=61 name=(null) inode=14143 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:50:06.777000 audit: PATH item=62 name=(null) inode=14143 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:50:06.777000 audit: PATH item=63 name=(null) inode=14144 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:50:06.777000 audit: PATH item=64 name=(null) inode=14143 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:50:06.777000 audit: PATH item=65 name=(null) inode=14145 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:50:06.777000 audit: PATH item=66 name=(null) inode=14143 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:50:06.777000 audit: PATH item=67 name=(null) inode=14146 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:50:06.777000 audit: PATH item=68 name=(null) inode=14143 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:50:06.777000 audit: PATH item=69 name=(null) inode=14147 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:50:06.777000 audit: PATH item=70 name=(null) inode=14143 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:50:06.777000 audit: PATH item=71 name=(null) inode=14148 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE 
cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:50:06.777000 audit: PATH item=72 name=(null) inode=14140 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:50:06.777000 audit: PATH item=73 name=(null) inode=14149 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:50:06.777000 audit: PATH item=74 name=(null) inode=14149 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:50:06.777000 audit: PATH item=75 name=(null) inode=14150 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:50:06.777000 audit: PATH item=76 name=(null) inode=14149 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:50:06.777000 audit: PATH item=77 name=(null) inode=14151 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:50:06.777000 audit: PATH item=78 name=(null) inode=14149 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:50:06.777000 audit: PATH item=79 name=(null) inode=14152 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:50:06.777000 audit: PATH item=80 name=(null) inode=14149 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 
Jul 2 07:50:06.777000 audit: PATH item=81 name=(null) inode=14153 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:50:06.777000 audit: PATH item=82 name=(null) inode=14149 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:50:06.777000 audit: PATH item=83 name=(null) inode=14154 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:50:06.777000 audit: PATH item=84 name=(null) inode=14140 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:50:06.777000 audit: PATH item=85 name=(null) inode=14155 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:50:06.777000 audit: PATH item=86 name=(null) inode=14155 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:50:06.777000 audit: PATH item=87 name=(null) inode=14156 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:50:06.777000 audit: PATH item=88 name=(null) inode=14155 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:50:06.777000 audit: PATH item=89 name=(null) inode=14157 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:50:06.777000 audit: PATH item=90 
name=(null) inode=14155 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:50:06.777000 audit: PATH item=91 name=(null) inode=14158 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:50:06.777000 audit: PATH item=92 name=(null) inode=14155 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:50:06.777000 audit: PATH item=93 name=(null) inode=14159 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:50:06.777000 audit: PATH item=94 name=(null) inode=14155 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:50:06.777000 audit: PATH item=95 name=(null) inode=14160 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:50:06.777000 audit: PATH item=96 name=(null) inode=14140 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:50:06.777000 audit: PATH item=97 name=(null) inode=14161 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:50:06.777000 audit: PATH item=98 name=(null) inode=14161 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:50:06.777000 audit: PATH item=99 name=(null) inode=14162 dev=00:0b mode=0100640 
ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:50:06.777000 audit: PATH item=100 name=(null) inode=14161 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:50:06.777000 audit: PATH item=101 name=(null) inode=14163 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:50:06.777000 audit: PATH item=102 name=(null) inode=14161 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:50:06.777000 audit: PATH item=103 name=(null) inode=14164 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:50:06.777000 audit: PATH item=104 name=(null) inode=14161 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:50:06.777000 audit: PATH item=105 name=(null) inode=14165 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:50:06.777000 audit: PATH item=106 name=(null) inode=14161 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:50:06.777000 audit: PATH item=107 name=(null) inode=14166 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:50:06.777000 audit: PROCTITLE proctitle="(udev-worker)" Jul 2 07:50:06.872605 systemd-networkd[1031]: eth0: DHCPv4 address 
10.128.0.103/32, gateway 10.128.0.1 acquired from 169.254.169.254 Jul 2 07:50:06.888466 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input3 Jul 2 07:50:06.905455 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr Jul 2 07:50:06.911440 kernel: ACPI: button: Sleep Button [SLPF] Jul 2 07:50:06.935472 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Jul 2 07:50:06.961511 kernel: BTRFS info: devid 1 device path /dev/disk/by-label/OEM changed to /dev/sda6 scanned by (udev-worker) (1026) Jul 2 07:50:06.971459 kernel: mousedev: PS/2 mouse device common for all mice Jul 2 07:50:06.997629 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Jul 2 07:50:07.006877 systemd[1]: Finished systemd-udev-settle.service. Jul 2 07:50:07.014000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:50:07.017085 systemd[1]: Starting lvm2-activation-early.service... Jul 2 07:50:07.043659 lvm[1055]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 2 07:50:07.071708 systemd[1]: Finished lvm2-activation-early.service. Jul 2 07:50:07.079000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:50:07.080749 systemd[1]: Reached target cryptsetup.target. Jul 2 07:50:07.090987 systemd[1]: Starting lvm2-activation.service... Jul 2 07:50:07.097055 lvm[1056]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 2 07:50:07.123766 systemd[1]: Finished lvm2-activation.service. 
Jul 2 07:50:07.131000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:50:07.132759 systemd[1]: Reached target local-fs-pre.target. Jul 2 07:50:07.141558 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 2 07:50:07.141611 systemd[1]: Reached target local-fs.target. Jul 2 07:50:07.151550 systemd[1]: Reached target machines.target. Jul 2 07:50:07.161059 systemd[1]: Starting ldconfig.service... Jul 2 07:50:07.168593 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 2 07:50:07.168689 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 07:50:07.170321 systemd[1]: Starting systemd-boot-update.service... Jul 2 07:50:07.179084 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Jul 2 07:50:07.189047 systemd[1]: Starting systemd-machine-id-commit.service... Jul 2 07:50:07.199197 systemd[1]: Starting systemd-sysext.service... Jul 2 07:50:07.208068 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1058 (bootctl) Jul 2 07:50:07.209738 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Jul 2 07:50:07.226167 systemd[1]: Unmounting usr-share-oem.mount... Jul 2 07:50:07.236452 systemd[1]: usr-share-oem.mount: Deactivated successfully. Jul 2 07:50:07.236711 systemd[1]: Unmounted usr-share-oem.mount. Jul 2 07:50:07.240000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 07:50:07.241340 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Jul 2 07:50:07.258451 kernel: loop0: detected capacity change from 0 to 209816 Jul 2 07:50:07.347129 systemd-fsck[1068]: fsck.fat 4.2 (2021-01-31) Jul 2 07:50:07.347129 systemd-fsck[1068]: /dev/sda1: 789 files, 119238/258078 clusters Jul 2 07:50:07.350811 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Jul 2 07:50:07.360000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:50:07.365312 systemd[1]: Mounting boot.mount... Jul 2 07:50:07.425229 systemd[1]: Mounted boot.mount. Jul 2 07:50:07.444818 systemd[1]: Finished systemd-boot-update.service. Jul 2 07:50:07.452000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:50:07.698555 ldconfig[1057]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 2 07:50:07.734492 systemd[1]: Finished ldconfig.service. Jul 2 07:50:07.740000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:50:07.742762 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 2 07:50:07.743484 systemd[1]: Finished systemd-machine-id-commit.service. Jul 2 07:50:07.751000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 07:50:07.768599 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 2 07:50:07.795464 kernel: loop1: detected capacity change from 0 to 209816 Jul 2 07:50:07.817486 (sd-sysext)[1074]: Using extensions 'kubernetes'. Jul 2 07:50:07.818108 (sd-sysext)[1074]: Merged extensions into '/usr'. Jul 2 07:50:07.839466 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 07:50:07.841346 systemd[1]: Mounting usr-share-oem.mount... Jul 2 07:50:07.848810 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 2 07:50:07.850691 systemd[1]: Starting modprobe@dm_mod.service... Jul 2 07:50:07.859278 systemd[1]: Starting modprobe@efi_pstore.service... Jul 2 07:50:07.868343 systemd[1]: Starting modprobe@loop.service... Jul 2 07:50:07.875604 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 2 07:50:07.875828 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 07:50:07.876033 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 07:50:07.880273 systemd[1]: Mounted usr-share-oem.mount. Jul 2 07:50:07.889024 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 2 07:50:07.889256 systemd[1]: Finished modprobe@dm_mod.service. Jul 2 07:50:07.896000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:50:07.896000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 07:50:07.898155 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 2 07:50:07.898351 systemd[1]: Finished modprobe@efi_pstore.service. Jul 2 07:50:07.905000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:50:07.905000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:50:07.907219 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 2 07:50:07.907476 systemd[1]: Finished modprobe@loop.service. Jul 2 07:50:07.915000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:50:07.915000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:50:07.917251 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 2 07:50:07.917388 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 2 07:50:07.918851 systemd[1]: Finished systemd-sysext.service. Jul 2 07:50:07.926000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:50:07.929158 systemd[1]: Starting ensure-sysext.service... Jul 2 07:50:07.937892 systemd[1]: Starting systemd-tmpfiles-setup.service... 
Jul 2 07:50:07.950015 systemd[1]: Reloading. Jul 2 07:50:07.966568 systemd-tmpfiles[1081]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Jul 2 07:50:07.973283 systemd-tmpfiles[1081]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 2 07:50:07.989804 systemd-tmpfiles[1081]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 2 07:50:08.064196 /usr/lib/systemd/system-generators/torcx-generator[1101]: time="2024-07-02T07:50:08Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.5 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.5 /var/lib/torcx/store]" Jul 2 07:50:08.064241 /usr/lib/systemd/system-generators/torcx-generator[1101]: time="2024-07-02T07:50:08Z" level=info msg="torcx already run" Jul 2 07:50:08.187670 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 2 07:50:08.187703 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Jul 2 07:50:08.230130 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Jul 2 07:50:08.307000 audit: BPF prog-id=27 op=LOAD Jul 2 07:50:08.307000 audit: BPF prog-id=18 op=UNLOAD Jul 2 07:50:08.307000 audit: BPF prog-id=28 op=LOAD Jul 2 07:50:08.307000 audit: BPF prog-id=29 op=LOAD Jul 2 07:50:08.307000 audit: BPF prog-id=19 op=UNLOAD Jul 2 07:50:08.307000 audit: BPF prog-id=20 op=UNLOAD Jul 2 07:50:08.308000 audit: BPF prog-id=30 op=LOAD Jul 2 07:50:08.308000 audit: BPF prog-id=23 op=UNLOAD Jul 2 07:50:08.309000 audit: BPF prog-id=31 op=LOAD Jul 2 07:50:08.309000 audit: BPF prog-id=32 op=LOAD Jul 2 07:50:08.309000 audit: BPF prog-id=21 op=UNLOAD Jul 2 07:50:08.309000 audit: BPF prog-id=22 op=UNLOAD Jul 2 07:50:08.313000 audit: BPF prog-id=33 op=LOAD Jul 2 07:50:08.313000 audit: BPF prog-id=24 op=UNLOAD Jul 2 07:50:08.313000 audit: BPF prog-id=34 op=LOAD Jul 2 07:50:08.313000 audit: BPF prog-id=35 op=LOAD Jul 2 07:50:08.313000 audit: BPF prog-id=25 op=UNLOAD Jul 2 07:50:08.313000 audit: BPF prog-id=26 op=UNLOAD Jul 2 07:50:08.322044 systemd[1]: Finished systemd-tmpfiles-setup.service. Jul 2 07:50:08.329000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:50:08.335894 systemd[1]: Starting audit-rules.service... Jul 2 07:50:08.343957 systemd[1]: Starting clean-ca-certificates.service... Jul 2 07:50:08.354388 systemd[1]: Starting oem-gce-enable-oslogin.service... Jul 2 07:50:08.364237 systemd[1]: Starting systemd-journal-catalog-update.service... Jul 2 07:50:08.372000 audit: BPF prog-id=36 op=LOAD Jul 2 07:50:08.375022 systemd[1]: Starting systemd-resolved.service... Jul 2 07:50:08.381000 audit: BPF prog-id=37 op=LOAD Jul 2 07:50:08.384034 systemd[1]: Starting systemd-timesyncd.service... Jul 2 07:50:08.393519 systemd[1]: Starting systemd-update-utmp.service... Jul 2 07:50:08.403462 systemd[1]: Finished clean-ca-certificates.service. 
Jul 2 07:50:08.402000 audit[1170]: SYSTEM_BOOT pid=1170 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Jul 2 07:50:08.410000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:50:08.414521 systemd[1]: oem-gce-enable-oslogin.service: Deactivated successfully. Jul 2 07:50:08.414763 systemd[1]: Finished oem-gce-enable-oslogin.service. Jul 2 07:50:08.416393 augenrules[1173]: No rules Jul 2 07:50:08.414000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Jul 2 07:50:08.414000 audit[1173]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffd37c86f30 a2=420 a3=0 items=0 ppid=1145 pid=1173 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:50:08.414000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jul 2 07:50:08.424133 systemd[1]: Finished audit-rules.service. Jul 2 07:50:08.432037 systemd[1]: Finished systemd-journal-catalog-update.service. Jul 2 07:50:08.444459 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 07:50:08.444995 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 2 07:50:08.448018 systemd[1]: Starting modprobe@dm_mod.service... Jul 2 07:50:08.456451 systemd[1]: Starting modprobe@efi_pstore.service... Jul 2 07:50:08.465494 systemd[1]: Starting modprobe@loop.service... 
Jul 2 07:50:08.472606 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 2 07:50:08.472937 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 07:50:08.475219 systemd[1]: Starting systemd-update-done.service... Jul 2 07:50:08.482521 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 2 07:50:08.482822 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 07:50:08.486345 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 2 07:50:08.486592 systemd[1]: Finished modprobe@dm_mod.service. Jul 2 07:50:08.495387 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 2 07:50:08.495610 systemd[1]: Finished modprobe@efi_pstore.service. Jul 2 07:50:08.504369 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 2 07:50:08.504589 systemd[1]: Finished modprobe@loop.service. Jul 2 07:50:08.513310 systemd[1]: Finished systemd-update-done.service. Jul 2 07:50:08.522445 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 2 07:50:08.522730 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 2 07:50:08.527461 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 07:50:08.527902 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 2 07:50:08.533962 systemd[1]: Starting modprobe@dm_mod.service... Jul 2 07:50:08.543036 systemd[1]: Starting modprobe@efi_pstore.service... Jul 2 07:50:08.552302 systemd[1]: Starting modprobe@loop.service... 
Jul 2 07:50:08.561441 systemd[1]: Starting oem-gce-enable-oslogin.service... Jul 2 07:50:08.562095 systemd-resolved[1161]: Positive Trust Anchors: Jul 2 07:50:08.562495 systemd-resolved[1161]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 2 07:50:08.562667 systemd-resolved[1161]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Jul 2 07:50:08.567342 enable-oslogin[1187]: /etc/pam.d/sshd already exists. Not enabling OS Login Jul 2 07:50:08.569613 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 2 07:50:08.569839 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 07:50:08.570020 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 2 07:50:08.570171 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 07:50:08.571941 systemd[1]: Started systemd-timesyncd.service. Jul 2 07:50:08.573640 systemd-timesyncd[1166]: Contacted time server 169.254.169.254:123 (169.254.169.254). Jul 2 07:50:08.574073 systemd-timesyncd[1166]: Initial clock synchronization to Tue 2024-07-02 07:50:08.293721 UTC. Jul 2 07:50:08.581557 systemd[1]: Finished systemd-update-utmp.service. Jul 2 07:50:08.590186 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
Jul 2 07:50:08.590386 systemd[1]: Finished modprobe@dm_mod.service. Jul 2 07:50:08.598992 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 2 07:50:08.599185 systemd[1]: Finished modprobe@efi_pstore.service. Jul 2 07:50:08.601732 systemd-resolved[1161]: Defaulting to hostname 'linux'. Jul 2 07:50:08.607854 systemd[1]: Started systemd-resolved.service. Jul 2 07:50:08.617027 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 2 07:50:08.617231 systemd[1]: Finished modprobe@loop.service. Jul 2 07:50:08.626054 systemd[1]: oem-gce-enable-oslogin.service: Deactivated successfully. Jul 2 07:50:08.626274 systemd[1]: Finished oem-gce-enable-oslogin.service. Jul 2 07:50:08.635258 systemd[1]: Reached target network.target. Jul 2 07:50:08.643682 systemd[1]: Reached target nss-lookup.target. Jul 2 07:50:08.652672 systemd[1]: Reached target time-set.target. Jul 2 07:50:08.660629 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 2 07:50:08.660869 systemd[1]: Reached target sysinit.target. Jul 2 07:50:08.669785 systemd[1]: Started motdgen.path. Jul 2 07:50:08.676824 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Jul 2 07:50:08.686977 systemd[1]: Started logrotate.timer. Jul 2 07:50:08.693876 systemd[1]: Started mdadm.timer. Jul 2 07:50:08.700707 systemd[1]: Started systemd-tmpfiles-clean.timer. Jul 2 07:50:08.709648 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 2 07:50:08.709858 systemd[1]: Reached target paths.target. Jul 2 07:50:08.716657 systemd[1]: Reached target timers.target. Jul 2 07:50:08.724074 systemd[1]: Listening on dbus.socket. Jul 2 07:50:08.733162 systemd[1]: Starting docker.socket... Jul 2 07:50:08.744399 systemd[1]: Listening on sshd.socket. 
Jul 2 07:50:08.751812 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 07:50:08.752051 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 2 07:50:08.754980 systemd[1]: Listening on docker.socket. Jul 2 07:50:08.764053 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 2 07:50:08.764321 systemd[1]: Reached target sockets.target. Jul 2 07:50:08.772638 systemd[1]: Reached target basic.target. Jul 2 07:50:08.779621 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Jul 2 07:50:08.779791 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Jul 2 07:50:08.781565 systemd[1]: Starting containerd.service... Jul 2 07:50:08.790249 systemd[1]: Starting coreos-metadata-sshkeys@core.service... Jul 2 07:50:08.801382 systemd[1]: Starting dbus.service... Jul 2 07:50:08.810242 systemd[1]: Starting enable-oem-cloudinit.service... Jul 2 07:50:08.819115 systemd[1]: Starting extend-filesystems.service... Jul 2 07:50:08.826547 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Jul 2 07:50:08.828618 systemd[1]: Starting modprobe@drm.service... Jul 2 07:50:08.834559 jq[1194]: false Jul 2 07:50:08.837204 systemd[1]: Starting motdgen.service... Jul 2 07:50:08.846268 systemd[1]: Starting prepare-helm.service... Jul 2 07:50:08.855332 systemd[1]: Starting ssh-key-proc-cmdline.service... Jul 2 07:50:08.864267 systemd[1]: Starting sshd-keygen.service... Jul 2 07:50:08.873466 systemd[1]: Starting systemd-networkd-wait-online.service... 
Jul 2 07:50:08.881540 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 07:50:08.881790 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionSecurity=!tpm2). Jul 2 07:50:08.884580 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 2 07:50:08.885913 systemd[1]: Starting update-engine.service... Jul 2 07:50:08.891038 systemd-networkd[1031]: eth0: Gained IPv6LL Jul 2 07:50:08.895662 systemd[1]: Starting update-ssh-keys-after-ignition.service... Jul 2 07:50:08.901258 jq[1213]: true Jul 2 07:50:08.910244 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 2 07:50:08.910526 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Jul 2 07:50:08.911257 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 2 07:50:08.912521 systemd[1]: Finished modprobe@drm.service. 
Jul 2 07:50:08.913912 extend-filesystems[1196]: Found loop1 Jul 2 07:50:08.913912 extend-filesystems[1196]: Found sda Jul 2 07:50:08.913912 extend-filesystems[1196]: Found sda1 Jul 2 07:50:08.913912 extend-filesystems[1196]: Found sda2 Jul 2 07:50:08.913912 extend-filesystems[1196]: Found sda3 Jul 2 07:50:08.913912 extend-filesystems[1196]: Found usr Jul 2 07:50:08.913912 extend-filesystems[1196]: Found sda4 Jul 2 07:50:08.913912 extend-filesystems[1196]: Found sda6 Jul 2 07:50:08.913912 extend-filesystems[1196]: Found sda7 Jul 2 07:50:08.913912 extend-filesystems[1196]: Found sda9 Jul 2 07:50:08.913912 extend-filesystems[1196]: Checking size of /dev/sda9 Jul 2 07:50:09.204592 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 2538491 blocks Jul 2 07:50:09.204650 kernel: EXT4-fs (sda9): resized filesystem to 2538491 Jul 2 07:50:09.204715 update_engine[1210]: I0702 07:50:09.022533 1210 main.cc:92] Flatcar Update Engine starting Jul 2 07:50:09.204715 update_engine[1210]: I0702 07:50:09.034159 1210 update_check_scheduler.cc:74] Next update check in 7m7s Jul 2 07:50:08.965326 dbus-daemon[1193]: [system] SELinux support is enabled Jul 2 07:50:08.923113 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 2 07:50:09.205642 extend-filesystems[1196]: Resized partition /dev/sda9 Jul 2 07:50:08.967113 dbus-daemon[1193]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1031 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jul 2 07:50:08.923339 systemd[1]: Finished ssh-key-proc-cmdline.service. 
Jul 2 07:50:09.213826 extend-filesystems[1226]: resize2fs 1.46.5 (30-Dec-2021) Jul 2 07:50:09.213826 extend-filesystems[1226]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Jul 2 07:50:09.213826 extend-filesystems[1226]: old_desc_blocks = 1, new_desc_blocks = 2 Jul 2 07:50:09.213826 extend-filesystems[1226]: The filesystem on /dev/sda9 is now 2538491 (4k) blocks long. Jul 2 07:50:09.259586 kernel: loop2: detected capacity change from 0 to 2097152 Jul 2 07:50:09.259643 tar[1219]: linux-amd64/helm Jul 2 07:50:09.024152 dbus-daemon[1193]: [system] Successfully activated service 'org.freedesktop.systemd1' Jul 2 07:50:08.936285 systemd[1]: motdgen.service: Deactivated successfully. Jul 2 07:50:09.260280 extend-filesystems[1196]: Resized filesystem in /dev/sda9 Jul 2 07:50:09.277584 kernel: EXT4-fs (loop2): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Jul 2 07:50:09.277634 jq[1227]: true Jul 2 07:50:08.936522 systemd[1]: Finished motdgen.service. Jul 2 07:50:09.278137 env[1228]: time="2024-07-02T07:50:09.155004818Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Jul 2 07:50:08.957224 systemd[1]: Finished systemd-networkd-wait-online.service. Jul 2 07:50:08.967957 systemd[1]: Started dbus.service. Jul 2 07:50:09.278959 bash[1253]: Updated "/home/core/.ssh/authorized_keys" Jul 2 07:50:08.984933 systemd[1]: Finished ensure-sysext.service. Jul 2 07:50:09.020170 systemd[1]: Reached target network-online.target. Jul 2 07:50:09.031523 systemd[1]: Starting kubelet.service... Jul 2 07:50:09.052218 systemd[1]: Starting oem-gce.service... Jul 2 07:50:09.060537 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). 
Jul 2 07:50:09.280208 mkfs.ext4[1257]: mke2fs 1.46.5 (30-Dec-2021) Jul 2 07:50:09.280208 mkfs.ext4[1257]: Discarding device blocks: done Jul 2 07:50:09.280208 mkfs.ext4[1257]: Creating filesystem with 262144 4k blocks and 65536 inodes Jul 2 07:50:09.280208 mkfs.ext4[1257]: Filesystem UUID: b1508efa-6b78-4993-8df1-3f46c15b947f Jul 2 07:50:09.280208 mkfs.ext4[1257]: Superblock backups stored on blocks: Jul 2 07:50:09.280208 mkfs.ext4[1257]: 32768, 98304, 163840, 229376 Jul 2 07:50:09.280208 mkfs.ext4[1257]: Allocating group tables: done Jul 2 07:50:09.280208 mkfs.ext4[1257]: Writing inode tables: done Jul 2 07:50:09.280208 mkfs.ext4[1257]: Creating journal (8192 blocks): done Jul 2 07:50:09.280208 mkfs.ext4[1257]: Writing superblocks and filesystem accounting information: done Jul 2 07:50:09.060596 systemd[1]: Reached target system-config.target. Jul 2 07:50:09.078646 systemd[1]: Starting systemd-logind.service... Jul 2 07:50:09.094580 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 2 07:50:09.094656 systemd[1]: Reached target user-config.target. Jul 2 07:50:09.115650 systemd[1]: Started update-engine.service. Jul 2 07:50:09.281858 umount[1264]: umount: /var/lib/flatcar-oem-gce.img: not mounted. Jul 2 07:50:09.136071 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 2 07:50:09.136299 systemd[1]: Finished extend-filesystems.service. Jul 2 07:50:09.150461 systemd[1]: Started locksmithd.service. Jul 2 07:50:09.169624 systemd[1]: Starting systemd-hostnamed.service... Jul 2 07:50:09.178164 systemd[1]: Finished update-ssh-keys-after-ignition.service. 
Jul 2 07:50:09.325132 env[1228]: time="2024-07-02T07:50:09.325041509Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jul 2 07:50:09.325298 env[1228]: time="2024-07-02T07:50:09.325268033Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jul 2 07:50:09.330462 env[1228]: time="2024-07-02T07:50:09.330373921Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.161-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jul 2 07:50:09.330571 env[1228]: time="2024-07-02T07:50:09.330462899Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jul 2 07:50:09.331225 env[1228]: time="2024-07-02T07:50:09.331182245Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 2 07:50:09.331225 env[1228]: time="2024-07-02T07:50:09.331222889Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jul 2 07:50:09.331376 env[1228]: time="2024-07-02T07:50:09.331243329Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Jul 2 07:50:09.331376 env[1228]: time="2024-07-02T07:50:09.331260099Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jul 2 07:50:09.331503 env[1228]: time="2024-07-02T07:50:09.331373332Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." 
type=io.containerd.snapshotter.v1 Jul 2 07:50:09.344793 env[1228]: time="2024-07-02T07:50:09.344747588Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jul 2 07:50:09.345176 env[1228]: time="2024-07-02T07:50:09.345147515Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 2 07:50:09.345305 env[1228]: time="2024-07-02T07:50:09.345285788Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jul 2 07:50:09.345522 env[1228]: time="2024-07-02T07:50:09.345481426Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Jul 2 07:50:09.345663 env[1228]: time="2024-07-02T07:50:09.345642849Z" level=info msg="metadata content store policy set" policy=shared Jul 2 07:50:09.358063 coreos-metadata[1192]: Jul 02 07:50:09.358 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/sshKeys: Attempt #1 Jul 2 07:50:09.368342 coreos-metadata[1192]: Jul 02 07:50:09.368 INFO Fetch failed with 404: resource not found Jul 2 07:50:09.368479 coreos-metadata[1192]: Jul 02 07:50:09.368 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/ssh-keys: Attempt #1 Jul 2 07:50:09.374823 coreos-metadata[1192]: Jul 02 07:50:09.369 INFO Fetch successful Jul 2 07:50:09.374823 coreos-metadata[1192]: Jul 02 07:50:09.369 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/block-project-ssh-keys: Attempt #1 Jul 2 07:50:09.374823 coreos-metadata[1192]: Jul 02 07:50:09.369 INFO Fetch failed with 404: resource not found Jul 2 07:50:09.374823 coreos-metadata[1192]: Jul 02 07:50:09.369 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/sshKeys: Attempt #1 Jul 2 
07:50:09.374823 coreos-metadata[1192]: Jul 02 07:50:09.370 INFO Fetch failed with 404: resource not found Jul 2 07:50:09.374823 coreos-metadata[1192]: Jul 02 07:50:09.370 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/ssh-keys: Attempt #1 Jul 2 07:50:09.374823 coreos-metadata[1192]: Jul 02 07:50:09.371 INFO Fetch successful Jul 2 07:50:09.373565 unknown[1192]: wrote ssh authorized keys file for user: core Jul 2 07:50:09.378519 env[1228]: time="2024-07-02T07:50:09.375544319Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jul 2 07:50:09.378519 env[1228]: time="2024-07-02T07:50:09.375614292Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jul 2 07:50:09.378519 env[1228]: time="2024-07-02T07:50:09.375637601Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jul 2 07:50:09.378519 env[1228]: time="2024-07-02T07:50:09.375705623Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jul 2 07:50:09.378519 env[1228]: time="2024-07-02T07:50:09.375787427Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jul 2 07:50:09.378519 env[1228]: time="2024-07-02T07:50:09.375836200Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jul 2 07:50:09.378519 env[1228]: time="2024-07-02T07:50:09.375860773Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jul 2 07:50:09.378519 env[1228]: time="2024-07-02T07:50:09.375881194Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." 
type=io.containerd.service.v1 Jul 2 07:50:09.378519 env[1228]: time="2024-07-02T07:50:09.375918122Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Jul 2 07:50:09.378519 env[1228]: time="2024-07-02T07:50:09.375947863Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jul 2 07:50:09.378519 env[1228]: time="2024-07-02T07:50:09.375985516Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jul 2 07:50:09.378519 env[1228]: time="2024-07-02T07:50:09.376005788Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jul 2 07:50:09.378519 env[1228]: time="2024-07-02T07:50:09.376184760Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jul 2 07:50:09.378519 env[1228]: time="2024-07-02T07:50:09.376326400Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jul 2 07:50:09.379176 env[1228]: time="2024-07-02T07:50:09.376835872Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jul 2 07:50:09.379176 env[1228]: time="2024-07-02T07:50:09.376896861Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jul 2 07:50:09.379176 env[1228]: time="2024-07-02T07:50:09.376918100Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jul 2 07:50:09.379176 env[1228]: time="2024-07-02T07:50:09.377023978Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jul 2 07:50:09.379176 env[1228]: time="2024-07-02T07:50:09.377045160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." 
type=io.containerd.grpc.v1 Jul 2 07:50:09.379176 env[1228]: time="2024-07-02T07:50:09.377134908Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jul 2 07:50:09.379176 env[1228]: time="2024-07-02T07:50:09.377154934Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jul 2 07:50:09.379176 env[1228]: time="2024-07-02T07:50:09.377177146Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jul 2 07:50:09.379176 env[1228]: time="2024-07-02T07:50:09.377213377Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jul 2 07:50:09.379176 env[1228]: time="2024-07-02T07:50:09.377233520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jul 2 07:50:09.379176 env[1228]: time="2024-07-02T07:50:09.377251951Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jul 2 07:50:09.379176 env[1228]: time="2024-07-02T07:50:09.377287947Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jul 2 07:50:09.379176 env[1228]: time="2024-07-02T07:50:09.377638565Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jul 2 07:50:09.379176 env[1228]: time="2024-07-02T07:50:09.377689837Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jul 2 07:50:09.379176 env[1228]: time="2024-07-02T07:50:09.377711856Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jul 2 07:50:09.379859 env[1228]: time="2024-07-02T07:50:09.377749821Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
type=io.containerd.tracing.processor.v1 Jul 2 07:50:09.379859 env[1228]: time="2024-07-02T07:50:09.377776390Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Jul 2 07:50:09.379859 env[1228]: time="2024-07-02T07:50:09.377796563Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jul 2 07:50:09.379859 env[1228]: time="2024-07-02T07:50:09.377841297Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Jul 2 07:50:09.379859 env[1228]: time="2024-07-02T07:50:09.377907591Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jul 2 07:50:09.380122 env[1228]: time="2024-07-02T07:50:09.378298312Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} 
ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jul 2 07:50:09.380122 env[1228]: time="2024-07-02T07:50:09.378396439Z" level=info msg="Connect containerd service" Jul 2 07:50:09.380122 env[1228]: time="2024-07-02T07:50:09.378459384Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jul 2 07:50:09.385729 env[1228]: time="2024-07-02T07:50:09.381275771Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 2 07:50:09.385729 env[1228]: time="2024-07-02T07:50:09.381408552Z" level=info msg="Start subscribing containerd event" Jul 2 07:50:09.385729 env[1228]: time="2024-07-02T07:50:09.381481103Z" level=info msg="Start recovering state" Jul 2 07:50:09.385729 env[1228]: time="2024-07-02T07:50:09.381606893Z" level=info msg="Start event monitor" Jul 2 07:50:09.385729 env[1228]: time="2024-07-02T07:50:09.381630847Z" level=info msg="Start snapshots syncer" Jul 2 
07:50:09.385729 env[1228]: time="2024-07-02T07:50:09.381646628Z" level=info msg="Start cni network conf syncer for default" Jul 2 07:50:09.385729 env[1228]: time="2024-07-02T07:50:09.381677594Z" level=info msg="Start streaming server" Jul 2 07:50:09.385729 env[1228]: time="2024-07-02T07:50:09.382295405Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 2 07:50:09.385729 env[1228]: time="2024-07-02T07:50:09.382449059Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 2 07:50:09.383060 systemd[1]: Started containerd.service. Jul 2 07:50:09.395182 env[1228]: time="2024-07-02T07:50:09.395145698Z" level=info msg="containerd successfully booted in 0.240934s" Jul 2 07:50:09.407797 systemd-logind[1248]: Watching system buttons on /dev/input/event1 (Power Button) Jul 2 07:50:09.408506 systemd-logind[1248]: Watching system buttons on /dev/input/event2 (Sleep Button) Jul 2 07:50:09.408656 systemd-logind[1248]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jul 2 07:50:09.413237 systemd-logind[1248]: New seat seat0. Jul 2 07:50:09.422748 update-ssh-keys[1271]: Updated "/home/core/.ssh/authorized_keys" Jul 2 07:50:09.423761 systemd[1]: Finished coreos-metadata-sshkeys@core.service. Jul 2 07:50:09.434015 systemd[1]: Started systemd-logind.service. Jul 2 07:50:09.468712 dbus-daemon[1193]: [system] Successfully activated service 'org.freedesktop.hostname1' Jul 2 07:50:09.468893 systemd[1]: Started systemd-hostnamed.service. Jul 2 07:50:09.469976 dbus-daemon[1193]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1260 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jul 2 07:50:09.482127 systemd[1]: Starting polkit.service... 
Jul 2 07:50:09.589851 polkitd[1272]: Started polkitd version 121 Jul 2 07:50:09.627827 polkitd[1272]: Loading rules from directory /etc/polkit-1/rules.d Jul 2 07:50:09.628092 polkitd[1272]: Loading rules from directory /usr/share/polkit-1/rules.d Jul 2 07:50:09.635195 polkitd[1272]: Finished loading, compiling and executing 2 rules Jul 2 07:50:09.635921 dbus-daemon[1193]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jul 2 07:50:09.636122 systemd[1]: Started polkit.service. Jul 2 07:50:09.637165 polkitd[1272]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jul 2 07:50:09.680363 systemd-hostnamed[1260]: Hostname set to (transient) Jul 2 07:50:09.682737 systemd-resolved[1161]: System hostname changed to 'ci-3510-3-5-2480589a916679c70820.c.flatcar-212911.internal'. Jul 2 07:50:09.959674 systemd[1]: Created slice system-sshd.slice. Jul 2 07:50:10.346566 tar[1219]: linux-amd64/LICENSE Jul 2 07:50:10.347123 tar[1219]: linux-amd64/README.md Jul 2 07:50:10.357533 systemd[1]: Finished prepare-helm.service. Jul 2 07:50:11.080014 systemd[1]: Started kubelet.service. Jul 2 07:50:12.266145 locksmithd[1258]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 2 07:50:12.588086 kubelet[1287]: E0702 07:50:12.587951 1287 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 07:50:12.591032 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 07:50:12.591247 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 2 07:50:12.591648 systemd[1]: kubelet.service: Consumed 1.471s CPU time. 
Jul 2 07:50:15.193055 sshd_keygen[1222]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 2 07:50:15.232772 systemd[1]: Finished sshd-keygen.service. Jul 2 07:50:15.244099 systemd[1]: Starting issuegen.service... Jul 2 07:50:15.252355 systemd[1]: Started sshd@0-10.128.0.103:22-147.75.109.163:39494.service. Jul 2 07:50:15.267902 systemd[1]: issuegen.service: Deactivated successfully. Jul 2 07:50:15.268131 systemd[1]: Finished issuegen.service. Jul 2 07:50:15.278109 systemd[1]: Starting systemd-user-sessions.service... Jul 2 07:50:15.294776 systemd[1]: Finished systemd-user-sessions.service. Jul 2 07:50:15.304884 systemd[1]: Started getty@tty1.service. Jul 2 07:50:15.314527 systemd[1]: Started serial-getty@ttyS0.service. Jul 2 07:50:15.323884 systemd[1]: Reached target getty.target. Jul 2 07:50:15.489038 systemd[1]: var-lib-flatcar\x2doem\x2dgce.mount: Deactivated successfully. Jul 2 07:50:15.593918 sshd[1302]: Accepted publickey for core from 147.75.109.163 port 39494 ssh2: RSA SHA256:GSxC+U3gD/L2tgNRotlYHTLXvYsmaWMokGyA5lBCl2s Jul 2 07:50:15.598282 sshd[1302]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:50:15.617783 systemd[1]: Created slice user-500.slice. Jul 2 07:50:15.626688 systemd[1]: Starting user-runtime-dir@500.service... Jul 2 07:50:15.637315 systemd-logind[1248]: New session 1 of user core. Jul 2 07:50:15.648941 systemd[1]: Finished user-runtime-dir@500.service. Jul 2 07:50:15.663409 systemd[1]: Starting user@500.service... Jul 2 07:50:15.690334 (systemd)[1311]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:50:15.880222 systemd[1311]: Queued start job for default target default.target. Jul 2 07:50:15.883054 systemd[1311]: Reached target paths.target. Jul 2 07:50:15.883370 systemd[1311]: Reached target sockets.target. Jul 2 07:50:15.883596 systemd[1311]: Reached target timers.target. Jul 2 07:50:15.883767 systemd[1311]: Reached target basic.target. 
Jul 2 07:50:15.884039 systemd[1]: Started user@500.service. Jul 2 07:50:15.884475 systemd[1311]: Reached target default.target. Jul 2 07:50:15.884669 systemd[1311]: Startup finished in 177ms. Jul 2 07:50:15.891752 systemd[1]: Started session-1.scope. Jul 2 07:50:16.121614 systemd[1]: Started sshd@1-10.128.0.103:22-147.75.109.163:51836.service. Jul 2 07:50:16.413901 sshd[1320]: Accepted publickey for core from 147.75.109.163 port 51836 ssh2: RSA SHA256:GSxC+U3gD/L2tgNRotlYHTLXvYsmaWMokGyA5lBCl2s Jul 2 07:50:16.415808 sshd[1320]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:50:16.421785 systemd-logind[1248]: New session 2 of user core. Jul 2 07:50:16.422453 systemd[1]: Started session-2.scope. Jul 2 07:50:16.629791 sshd[1320]: pam_unix(sshd:session): session closed for user core Jul 2 07:50:16.633983 systemd-logind[1248]: Session 2 logged out. Waiting for processes to exit. Jul 2 07:50:16.634254 systemd[1]: sshd@1-10.128.0.103:22-147.75.109.163:51836.service: Deactivated successfully. Jul 2 07:50:16.635369 systemd[1]: session-2.scope: Deactivated successfully. Jul 2 07:50:16.636411 systemd-logind[1248]: Removed session 2. Jul 2 07:50:16.674893 systemd[1]: Started sshd@2-10.128.0.103:22-147.75.109.163:51850.service. Jul 2 07:50:16.964703 sshd[1326]: Accepted publickey for core from 147.75.109.163 port 51850 ssh2: RSA SHA256:GSxC+U3gD/L2tgNRotlYHTLXvYsmaWMokGyA5lBCl2s Jul 2 07:50:16.966355 sshd[1326]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:50:16.972443 systemd-logind[1248]: New session 3 of user core. Jul 2 07:50:16.973197 systemd[1]: Started session-3.scope. Jul 2 07:50:17.177804 sshd[1326]: pam_unix(sshd:session): session closed for user core Jul 2 07:50:17.182440 systemd[1]: sshd@2-10.128.0.103:22-147.75.109.163:51850.service: Deactivated successfully. Jul 2 07:50:17.183566 systemd[1]: session-3.scope: Deactivated successfully. 
Jul 2 07:50:17.184472 systemd-logind[1248]: Session 3 logged out. Waiting for processes to exit. Jul 2 07:50:17.185821 systemd-logind[1248]: Removed session 3. Jul 2 07:50:17.542480 kernel: loop2: detected capacity change from 0 to 2097152 Jul 2 07:50:17.562003 systemd-nspawn[1331]: Spawning container oem-gce on /var/lib/flatcar-oem-gce.img. Jul 2 07:50:17.562003 systemd-nspawn[1331]: Press ^] three times within 1s to kill container. Jul 2 07:50:17.577447 kernel: EXT4-fs (loop2): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Jul 2 07:50:17.653728 systemd[1]: Started oem-gce.service. Jul 2 07:50:17.660972 systemd[1]: Reached target multi-user.target. Jul 2 07:50:17.672627 systemd[1]: Starting systemd-update-utmp-runlevel.service... Jul 2 07:50:17.685784 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Jul 2 07:50:17.686025 systemd[1]: Finished systemd-update-utmp-runlevel.service. Jul 2 07:50:17.695799 systemd[1]: Startup finished in 975ms (kernel) + 7.374s (initrd) + 16.527s (userspace) = 24.877s. Jul 2 07:50:17.716701 systemd-nspawn[1331]: + '[' -e /etc/default/instance_configs.cfg.template ']' Jul 2 07:50:17.716701 systemd-nspawn[1331]: + echo -e '[InstanceSetup]\nset_host_keys = false' Jul 2 07:50:17.716889 systemd-nspawn[1331]: + /usr/bin/google_instance_setup Jul 2 07:50:18.286395 instance-setup[1337]: INFO Running google_set_multiqueue. Jul 2 07:50:18.299777 instance-setup[1337]: INFO Set channels for eth0 to 2. Jul 2 07:50:18.303389 instance-setup[1337]: INFO Setting /proc/irq/31/smp_affinity_list to 0 for device virtio1. Jul 2 07:50:18.304757 instance-setup[1337]: INFO /proc/irq/31/smp_affinity_list: real affinity 0 Jul 2 07:50:18.305118 instance-setup[1337]: INFO Setting /proc/irq/32/smp_affinity_list to 0 for device virtio1. 
Jul 2 07:50:18.306450 instance-setup[1337]: INFO /proc/irq/32/smp_affinity_list: real affinity 0 Jul 2 07:50:18.306838 instance-setup[1337]: INFO Setting /proc/irq/33/smp_affinity_list to 1 for device virtio1. Jul 2 07:50:18.308128 instance-setup[1337]: INFO /proc/irq/33/smp_affinity_list: real affinity 1 Jul 2 07:50:18.308555 instance-setup[1337]: INFO Setting /proc/irq/34/smp_affinity_list to 1 for device virtio1. Jul 2 07:50:18.309859 instance-setup[1337]: INFO /proc/irq/34/smp_affinity_list: real affinity 1 Jul 2 07:50:18.320531 instance-setup[1337]: INFO Queue 0 XPS=1 for /sys/class/net/eth0/queues/tx-0/xps_cpus Jul 2 07:50:18.320703 instance-setup[1337]: INFO Queue 1 XPS=2 for /sys/class/net/eth0/queues/tx-1/xps_cpus Jul 2 07:50:18.357732 systemd-nspawn[1331]: + /usr/bin/google_metadata_script_runner --script-type startup Jul 2 07:50:18.678815 startup-script[1368]: INFO Starting startup scripts. Jul 2 07:50:18.691254 startup-script[1368]: INFO No startup scripts found in metadata. Jul 2 07:50:18.691428 startup-script[1368]: INFO Finished running startup scripts. Jul 2 07:50:18.725051 systemd-nspawn[1331]: + trap 'stopping=1 ; kill "${daemon_pids[@]}" || :' SIGTERM Jul 2 07:50:18.725051 systemd-nspawn[1331]: + daemon_pids=() Jul 2 07:50:18.725882 systemd-nspawn[1331]: + for d in accounts clock_skew network Jul 2 07:50:18.725882 systemd-nspawn[1331]: + daemon_pids+=($!) Jul 2 07:50:18.725882 systemd-nspawn[1331]: + for d in accounts clock_skew network Jul 2 07:50:18.725882 systemd-nspawn[1331]: + daemon_pids+=($!) Jul 2 07:50:18.725882 systemd-nspawn[1331]: + for d in accounts clock_skew network Jul 2 07:50:18.726129 systemd-nspawn[1331]: + daemon_pids+=($!) 
Jul 2 07:50:18.726129 systemd-nspawn[1331]: + NOTIFY_SOCKET=/run/systemd/notify Jul 2 07:50:18.726129 systemd-nspawn[1331]: + /usr/bin/systemd-notify --ready Jul 2 07:50:18.726826 systemd-nspawn[1331]: + /usr/bin/google_clock_skew_daemon Jul 2 07:50:18.727018 systemd-nspawn[1331]: + /usr/bin/google_network_daemon Jul 2 07:50:18.728160 systemd-nspawn[1331]: + /usr/bin/google_accounts_daemon Jul 2 07:50:18.776164 systemd-nspawn[1331]: + wait -n 36 37 38 Jul 2 07:50:19.325692 google-clock-skew[1372]: INFO Starting Google Clock Skew daemon. Jul 2 07:50:19.337242 google-networking[1373]: INFO Starting Google Networking daemon. Jul 2 07:50:19.344702 google-clock-skew[1372]: INFO Clock drift token has changed: 0. Jul 2 07:50:19.351160 systemd-nspawn[1331]: hwclock: Cannot access the Hardware Clock via any known method. Jul 2 07:50:19.351279 systemd-nspawn[1331]: hwclock: Use the --verbose option to see the details of our search for an access method. Jul 2 07:50:19.352083 google-clock-skew[1372]: WARNING Failed to sync system time with hardware clock. Jul 2 07:50:19.465263 groupadd[1383]: group added to /etc/group: name=google-sudoers, GID=1000 Jul 2 07:50:19.468197 groupadd[1383]: group added to /etc/gshadow: name=google-sudoers Jul 2 07:50:19.473711 groupadd[1383]: new group: name=google-sudoers, GID=1000 Jul 2 07:50:19.486072 google-accounts[1371]: INFO Starting Google Accounts daemon. Jul 2 07:50:19.510763 google-accounts[1371]: WARNING OS Login not installed. Jul 2 07:50:19.511708 google-accounts[1371]: INFO Creating a new user account for 0. Jul 2 07:50:19.516679 systemd-nspawn[1331]: useradd: invalid user name '0': use --badname to ignore Jul 2 07:50:19.517389 google-accounts[1371]: WARNING Could not create user 0. Command '['useradd', '-m', '-s', '/bin/bash', '-p', '*', '0']' returned non-zero exit status 3.. Jul 2 07:50:22.606449 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 2 07:50:22.606776 systemd[1]: Stopped kubelet.service. 
Jul 2 07:50:22.606864 systemd[1]: kubelet.service: Consumed 1.471s CPU time. Jul 2 07:50:22.609008 systemd[1]: Starting kubelet.service... Jul 2 07:50:22.892740 systemd[1]: Started kubelet.service. Jul 2 07:50:22.958784 kubelet[1397]: E0702 07:50:22.958727 1397 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 07:50:22.962915 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 07:50:22.963069 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 2 07:50:27.153526 systemd[1]: Started sshd@3-10.128.0.103:22-147.75.109.163:50616.service. Jul 2 07:50:27.442688 sshd[1404]: Accepted publickey for core from 147.75.109.163 port 50616 ssh2: RSA SHA256:GSxC+U3gD/L2tgNRotlYHTLXvYsmaWMokGyA5lBCl2s Jul 2 07:50:27.444381 sshd[1404]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:50:27.450390 systemd-logind[1248]: New session 4 of user core. Jul 2 07:50:27.451124 systemd[1]: Started session-4.scope. Jul 2 07:50:27.656065 sshd[1404]: pam_unix(sshd:session): session closed for user core Jul 2 07:50:27.660054 systemd[1]: sshd@3-10.128.0.103:22-147.75.109.163:50616.service: Deactivated successfully. Jul 2 07:50:27.661076 systemd[1]: session-4.scope: Deactivated successfully. Jul 2 07:50:27.661979 systemd-logind[1248]: Session 4 logged out. Waiting for processes to exit. Jul 2 07:50:27.663173 systemd-logind[1248]: Removed session 4. Jul 2 07:50:27.701067 systemd[1]: Started sshd@4-10.128.0.103:22-147.75.109.163:50624.service. 
Jul 2 07:50:27.988180 sshd[1410]: Accepted publickey for core from 147.75.109.163 port 50624 ssh2: RSA SHA256:GSxC+U3gD/L2tgNRotlYHTLXvYsmaWMokGyA5lBCl2s Jul 2 07:50:27.989888 sshd[1410]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:50:27.996501 systemd-logind[1248]: New session 5 of user core. Jul 2 07:50:27.997546 systemd[1]: Started session-5.scope. Jul 2 07:50:28.193679 sshd[1410]: pam_unix(sshd:session): session closed for user core Jul 2 07:50:28.197697 systemd[1]: sshd@4-10.128.0.103:22-147.75.109.163:50624.service: Deactivated successfully. Jul 2 07:50:28.198801 systemd[1]: session-5.scope: Deactivated successfully. Jul 2 07:50:28.199766 systemd-logind[1248]: Session 5 logged out. Waiting for processes to exit. Jul 2 07:50:28.200978 systemd-logind[1248]: Removed session 5. Jul 2 07:50:28.239052 systemd[1]: Started sshd@5-10.128.0.103:22-147.75.109.163:50634.service. Jul 2 07:50:28.527007 sshd[1416]: Accepted publickey for core from 147.75.109.163 port 50634 ssh2: RSA SHA256:GSxC+U3gD/L2tgNRotlYHTLXvYsmaWMokGyA5lBCl2s Jul 2 07:50:28.528935 sshd[1416]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:50:28.535534 systemd-logind[1248]: New session 6 of user core. Jul 2 07:50:28.535889 systemd[1]: Started session-6.scope. Jul 2 07:50:28.741568 sshd[1416]: pam_unix(sshd:session): session closed for user core Jul 2 07:50:28.745800 systemd[1]: sshd@5-10.128.0.103:22-147.75.109.163:50634.service: Deactivated successfully. Jul 2 07:50:28.746883 systemd[1]: session-6.scope: Deactivated successfully. Jul 2 07:50:28.747776 systemd-logind[1248]: Session 6 logged out. Waiting for processes to exit. Jul 2 07:50:28.749097 systemd-logind[1248]: Removed session 6. Jul 2 07:50:28.788537 systemd[1]: Started sshd@6-10.128.0.103:22-147.75.109.163:50636.service. 
Jul 2 07:50:29.081928 sshd[1422]: Accepted publickey for core from 147.75.109.163 port 50636 ssh2: RSA SHA256:GSxC+U3gD/L2tgNRotlYHTLXvYsmaWMokGyA5lBCl2s Jul 2 07:50:29.083885 sshd[1422]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:50:29.091221 systemd[1]: Started session-7.scope. Jul 2 07:50:29.091937 systemd-logind[1248]: New session 7 of user core. Jul 2 07:50:29.277799 sudo[1425]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 2 07:50:29.278315 sudo[1425]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jul 2 07:50:29.308585 systemd[1]: Starting docker.service... Jul 2 07:50:29.355468 env[1435]: time="2024-07-02T07:50:29.355339194Z" level=info msg="Starting up" Jul 2 07:50:29.357312 env[1435]: time="2024-07-02T07:50:29.357258107Z" level=info msg="parsed scheme: \"unix\"" module=grpc Jul 2 07:50:29.357312 env[1435]: time="2024-07-02T07:50:29.357282939Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Jul 2 07:50:29.357514 env[1435]: time="2024-07-02T07:50:29.357315846Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Jul 2 07:50:29.357514 env[1435]: time="2024-07-02T07:50:29.357333716Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Jul 2 07:50:29.359487 env[1435]: time="2024-07-02T07:50:29.359402362Z" level=info msg="parsed scheme: \"unix\"" module=grpc Jul 2 07:50:29.359487 env[1435]: time="2024-07-02T07:50:29.359440529Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Jul 2 07:50:29.359487 env[1435]: time="2024-07-02T07:50:29.359463320Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Jul 2 07:50:29.359487 env[1435]: time="2024-07-02T07:50:29.359487886Z" level=info msg="ClientConn 
switching balancer to \"pick_first\"" module=grpc Jul 2 07:50:29.412379 env[1435]: time="2024-07-02T07:50:29.412295517Z" level=info msg="Loading containers: start." Jul 2 07:50:29.569468 kernel: Initializing XFRM netlink socket Jul 2 07:50:29.612392 env[1435]: time="2024-07-02T07:50:29.612335741Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Jul 2 07:50:29.691053 systemd-networkd[1031]: docker0: Link UP Jul 2 07:50:29.705466 env[1435]: time="2024-07-02T07:50:29.705413736Z" level=info msg="Loading containers: done." Jul 2 07:50:29.722407 env[1435]: time="2024-07-02T07:50:29.722347319Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 2 07:50:29.722687 env[1435]: time="2024-07-02T07:50:29.722638041Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Jul 2 07:50:29.722812 env[1435]: time="2024-07-02T07:50:29.722784116Z" level=info msg="Daemon has completed initialization" Jul 2 07:50:29.742811 systemd[1]: Started docker.service. Jul 2 07:50:29.756439 env[1435]: time="2024-07-02T07:50:29.756359664Z" level=info msg="API listen on /run/docker.sock" Jul 2 07:50:30.855958 env[1228]: time="2024-07-02T07:50:30.855867653Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.11\"" Jul 2 07:50:33.106397 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jul 2 07:50:33.106746 systemd[1]: Stopped kubelet.service. Jul 2 07:50:33.108983 systemd[1]: Starting kubelet.service... Jul 2 07:50:33.342454 systemd[1]: Started kubelet.service. 
Jul 2 07:50:33.408578 kubelet[1566]: E0702 07:50:33.408133 1566 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 07:50:33.411072 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 07:50:33.411337 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 2 07:50:39.693024 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Jul 2 07:50:43.606605 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jul 2 07:50:43.606936 systemd[1]: Stopped kubelet.service. Jul 2 07:50:43.609218 systemd[1]: Starting kubelet.service... Jul 2 07:50:43.848277 systemd[1]: Started kubelet.service. Jul 2 07:50:43.918023 kubelet[1579]: E0702 07:50:43.917855 1579 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 07:50:43.920496 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 07:50:43.920716 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 2 07:50:44.711962 systemd[1]: Started sshd@7-10.128.0.103:22-110.53.126.241:43262.service. Jul 2 07:50:47.047157 sshd[1586]: Invalid user user from 110.53.126.241 port 43262 Jul 2 07:50:47.055069 sshd[1586]: Failed password for invalid user user from 110.53.126.241 port 43262 ssh2 Jul 2 07:50:47.236295 systemd[1]: Started sshd@8-10.128.0.103:22-182.43.235.218:55114.service. 
Jul 2 07:50:47.363223 sshd[1586]: Received disconnect from 110.53.126.241 port 43262:11: Bye Bye [preauth] Jul 2 07:50:47.363440 sshd[1586]: Disconnected from invalid user user 110.53.126.241 port 43262 [preauth] Jul 2 07:50:47.364881 systemd[1]: sshd@7-10.128.0.103:22-110.53.126.241:43262.service: Deactivated successfully. Jul 2 07:50:53.833071 update_engine[1210]: I0702 07:50:53.832979 1210 update_attempter.cc:509] Updating boot flags... Jul 2 07:50:53.930287 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jul 2 07:50:53.930719 systemd[1]: Stopped kubelet.service. Jul 2 07:50:53.933640 systemd[1]: Starting kubelet.service... Jul 2 07:50:54.207341 systemd[1]: Started kubelet.service. Jul 2 07:50:54.264994 kubelet[1613]: E0702 07:50:54.264948 1613 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 07:50:54.267036 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 07:50:54.267192 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jul 2 07:51:00.860846 env[1228]: time="2024-07-02T07:51:00.860633319Z" level=info msg="trying next host" error="failed to do request: Head \"https://registry.k8s.io/v2/kube-apiserver/manifests/v1.28.11\": dial tcp 34.96.108.209:443: i/o timeout" host=registry.k8s.io Jul 2 07:51:00.863381 env[1228]: time="2024-07-02T07:51:00.863299224Z" level=error msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.11\" failed" error="failed to pull and unpack image \"registry.k8s.io/kube-apiserver:v1.28.11\": failed to resolve reference \"registry.k8s.io/kube-apiserver:v1.28.11\": failed to do request: Head \"https://registry.k8s.io/v2/kube-apiserver/manifests/v1.28.11\": dial tcp 34.96.108.209:443: i/o timeout" Jul 2 07:51:00.878628 env[1228]: time="2024-07-02T07:51:00.878578263Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.11\"" Jul 2 07:51:01.376721 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3113573013.mount: Deactivated successfully. Jul 2 07:51:03.416448 env[1228]: time="2024-07-02T07:51:03.416369801Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.28.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:51:03.418936 env[1228]: time="2024-07-02T07:51:03.418889041Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b2de212bf8c1b7b0d1b2703356ac7ddcfccaadfcdcd32c1ae914b6078d11e524,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:51:03.421328 env[1228]: time="2024-07-02T07:51:03.421287239Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.28.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:51:03.423522 env[1228]: time="2024-07-02T07:51:03.423482513Z" level=info msg="ImageCreate event 
&ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:aec9d1701c304eee8607d728a39baaa511d65bef6dd9861010618f63fbadeb10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:51:03.424582 env[1228]: time="2024-07-02T07:51:03.424539534Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.11\" returns image reference \"sha256:b2de212bf8c1b7b0d1b2703356ac7ddcfccaadfcdcd32c1ae914b6078d11e524\"" Jul 2 07:51:03.439166 env[1228]: time="2024-07-02T07:51:03.439124358Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.11\"" Jul 2 07:51:04.356633 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Jul 2 07:51:04.356956 systemd[1]: Stopped kubelet.service. Jul 2 07:51:04.359278 systemd[1]: Starting kubelet.service... Jul 2 07:51:04.587947 systemd[1]: Started kubelet.service. Jul 2 07:51:04.685273 kubelet[1636]: E0702 07:51:04.684683 1636 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 07:51:04.687099 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 07:51:04.687321 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jul 2 07:51:05.385536 env[1228]: time="2024-07-02T07:51:05.385473360Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.28.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:51:05.388249 env[1228]: time="2024-07-02T07:51:05.388195838Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:20145ae80ad309fd0c963e2539f6ef0be795ace696539514894b290892c1884b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:51:05.390960 env[1228]: time="2024-07-02T07:51:05.390920854Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.28.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:51:05.394174 env[1228]: time="2024-07-02T07:51:05.394136718Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:6014c3572ec683841bbb16f87b94da28ee0254b95e2dba2d1850d62bd0111f09,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:51:05.395711 env[1228]: time="2024-07-02T07:51:05.395675528Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.11\" returns image reference \"sha256:20145ae80ad309fd0c963e2539f6ef0be795ace696539514894b290892c1884b\"" Jul 2 07:51:05.409355 env[1228]: time="2024-07-02T07:51:05.409318537Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.11\"" Jul 2 07:51:06.571245 env[1228]: time="2024-07-02T07:51:06.571177909Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.28.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:51:06.574794 env[1228]: time="2024-07-02T07:51:06.574732054Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:12c62a5a0745d200eb8333ea6244f6d6328e64c5c3b645a4ade456cc645399b9,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Jul 2 07:51:06.577487 env[1228]: time="2024-07-02T07:51:06.577446634Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.28.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:51:06.581185 env[1228]: time="2024-07-02T07:51:06.581129421Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:46cf7475c8daffb743c856a1aea0ddea35e5acd2418be18b1e22cf98d9c9b445,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:51:06.581709 env[1228]: time="2024-07-02T07:51:06.581654743Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.11\" returns image reference \"sha256:12c62a5a0745d200eb8333ea6244f6d6328e64c5c3b645a4ade456cc645399b9\"" Jul 2 07:51:06.596343 env[1228]: time="2024-07-02T07:51:06.596285437Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.11\"" Jul 2 07:51:07.880612 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount936450634.mount: Deactivated successfully. 
Jul 2 07:51:08.532505 env[1228]: time="2024-07-02T07:51:08.532428817Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.28.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:51:08.535160 env[1228]: time="2024-07-02T07:51:08.535114599Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:a3eea76ce409e136fe98838847fda217ce169eb7d1ceef544671d75f68e5a29c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:51:08.537261 env[1228]: time="2024-07-02T07:51:08.537222304Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.28.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:51:08.539083 env[1228]: time="2024-07-02T07:51:08.539045625Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:ae4b671d4cfc23dd75030bb4490207cd939b3b11a799bcb4119698cd712eb5b4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:51:08.539673 env[1228]: time="2024-07-02T07:51:08.539622006Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.11\" returns image reference \"sha256:a3eea76ce409e136fe98838847fda217ce169eb7d1ceef544671d75f68e5a29c\"" Jul 2 07:51:08.553545 env[1228]: time="2024-07-02T07:51:08.553510397Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jul 2 07:51:08.890480 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount296835019.mount: Deactivated successfully. 
Jul 2 07:51:08.898770 env[1228]: time="2024-07-02T07:51:08.898708818Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:51:08.900894 env[1228]: time="2024-07-02T07:51:08.900849388Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:51:08.902899 env[1228]: time="2024-07-02T07:51:08.902859137Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:51:08.904920 env[1228]: time="2024-07-02T07:51:08.904872343Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:51:08.905713 env[1228]: time="2024-07-02T07:51:08.905664896Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Jul 2 07:51:08.919090 env[1228]: time="2024-07-02T07:51:08.919051581Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Jul 2 07:51:09.264415 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount35563305.mount: Deactivated successfully. 
Jul 2 07:51:11.828823 env[1228]: time="2024-07-02T07:51:11.828751868Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.10-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 07:51:11.831607 env[1228]: time="2024-07-02T07:51:11.831561800Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 07:51:11.834216 env[1228]: time="2024-07-02T07:51:11.834174259Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.10-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 07:51:11.836755 env[1228]: time="2024-07-02T07:51:11.836712753Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 07:51:11.837781 env[1228]: time="2024-07-02T07:51:11.837727301Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\""
Jul 2 07:51:11.852548 env[1228]: time="2024-07-02T07:51:11.852508702Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\""
Jul 2 07:51:12.241798 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2928727483.mount: Deactivated successfully.
Jul 2 07:51:13.694069 env[1228]: time="2024-07-02T07:51:13.694001076Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.10.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 07:51:13.696813 env[1228]: time="2024-07-02T07:51:13.696765309Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 07:51:13.699044 env[1228]: time="2024-07-02T07:51:13.699004240Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.10.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 07:51:13.700978 env[1228]: time="2024-07-02T07:51:13.700939188Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 07:51:13.701717 env[1228]: time="2024-07-02T07:51:13.701665480Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\" returns image reference \"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc\""
Jul 2 07:51:14.856687 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6.
Jul 2 07:51:14.857000 systemd[1]: Stopped kubelet.service.
Jul 2 07:51:14.862345 systemd[1]: Starting kubelet.service...
Jul 2 07:51:15.073210 systemd[1]: Started kubelet.service.
Jul 2 07:51:15.169700 kubelet[1735]: E0702 07:51:15.169549 1735 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 2 07:51:15.172800 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 2 07:51:15.173008 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 2 07:51:16.956981 systemd[1]: Stopped kubelet.service.
Jul 2 07:51:16.960700 systemd[1]: Starting kubelet.service...
Jul 2 07:51:16.992873 systemd[1]: Reloading.
Jul 2 07:51:17.133785 /usr/lib/systemd/system-generators/torcx-generator[1770]: time="2024-07-02T07:51:17Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.5 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.5 /var/lib/torcx/store]"
Jul 2 07:51:17.133833 /usr/lib/systemd/system-generators/torcx-generator[1770]: time="2024-07-02T07:51:17Z" level=info msg="torcx already run"
Jul 2 07:51:17.243940 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Jul 2 07:51:17.243968 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Jul 2 07:51:17.267958 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 2 07:51:17.427924 systemd[1]: Started kubelet.service.
Jul 2 07:51:17.432018 systemd[1]: Stopping kubelet.service...
Jul 2 07:51:17.432751 systemd[1]: kubelet.service: Deactivated successfully.
Jul 2 07:51:17.433000 systemd[1]: Stopped kubelet.service.
Jul 2 07:51:17.435120 systemd[1]: Starting kubelet.service...
Jul 2 07:51:17.620798 systemd[1]: Started kubelet.service.
Jul 2 07:51:17.690213 kubelet[1818]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 2 07:51:17.690745 kubelet[1818]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jul 2 07:51:17.690853 kubelet[1818]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 2 07:51:17.691179 kubelet[1818]: I0702 07:51:17.691110 1818 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jul 2 07:51:18.263896 kubelet[1818]: I0702 07:51:18.263845 1818 server.go:467] "Kubelet version" kubeletVersion="v1.28.7"
Jul 2 07:51:18.263896 kubelet[1818]: I0702 07:51:18.263881 1818 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jul 2 07:51:18.264266 kubelet[1818]: I0702 07:51:18.264228 1818 server.go:895] "Client rotation is on, will bootstrap in background"
Jul 2 07:51:18.299716 kubelet[1818]: E0702 07:51:18.299685 1818 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.128.0.103:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.128.0.103:6443: connect: connection refused
Jul 2 07:51:18.299933 kubelet[1818]: I0702 07:51:18.299870 1818 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jul 2 07:51:18.314143 kubelet[1818]: I0702 07:51:18.314099 1818 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jul 2 07:51:18.316937 kubelet[1818]: I0702 07:51:18.316896 1818 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jul 2 07:51:18.317206 kubelet[1818]: I0702 07:51:18.317160 1818 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Jul 2 07:51:18.318473 kubelet[1818]: I0702 07:51:18.318436 1818 topology_manager.go:138] "Creating topology manager with none policy"
Jul 2 07:51:18.318473 kubelet[1818]: I0702 07:51:18.318471 1818 container_manager_linux.go:301] "Creating device plugin manager"
Jul 2 07:51:18.319732 kubelet[1818]: I0702 07:51:18.319697 1818 state_mem.go:36] "Initialized new in-memory state store"
Jul 2 07:51:18.321658 kubelet[1818]: I0702 07:51:18.321633 1818 kubelet.go:393] "Attempting to sync node with API server"
Jul 2 07:51:18.321781 kubelet[1818]: I0702 07:51:18.321666 1818 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests"
Jul 2 07:51:18.321781 kubelet[1818]: I0702 07:51:18.321707 1818 kubelet.go:309] "Adding apiserver pod source"
Jul 2 07:51:18.321781 kubelet[1818]: I0702 07:51:18.321730 1818 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jul 2 07:51:18.323753 kubelet[1818]: W0702 07:51:18.323690 1818 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.128.0.103:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510-3-5-2480589a916679c70820.c.flatcar-212911.internal&limit=500&resourceVersion=0": dial tcp 10.128.0.103:6443: connect: connection refused
Jul 2 07:51:18.323866 kubelet[1818]: E0702 07:51:18.323772 1818 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.128.0.103:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510-3-5-2480589a916679c70820.c.flatcar-212911.internal&limit=500&resourceVersion=0": dial tcp 10.128.0.103:6443: connect: connection refused
Jul 2 07:51:18.325285 kubelet[1818]: W0702 07:51:18.325231 1818 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.128.0.103:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.128.0.103:6443: connect: connection refused
Jul 2 07:51:18.325393 kubelet[1818]: E0702 07:51:18.325330 1818 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.128.0.103:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.128.0.103:6443: connect: connection refused
Jul 2 07:51:18.325689 kubelet[1818]: I0702 07:51:18.325663 1818 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
Jul 2 07:51:18.337335 kubelet[1818]: W0702 07:51:18.337295 1818 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jul 2 07:51:18.340467 kubelet[1818]: I0702 07:51:18.340440 1818 server.go:1232] "Started kubelet"
Jul 2 07:51:18.350705 kubelet[1818]: E0702 07:51:18.350581 1818 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510-3-5-2480589a916679c70820.c.flatcar-212911.internal.17de5602f14d111f", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510-3-5-2480589a916679c70820.c.flatcar-212911.internal", UID:"ci-3510-3-5-2480589a916679c70820.c.flatcar-212911.internal", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510-3-5-2480589a916679c70820.c.flatcar-212911.internal"}, FirstTimestamp:time.Date(2024, time.July, 2, 7, 51, 18, 340391199, time.Local), LastTimestamp:time.Date(2024, time.July, 2, 7, 51, 18, 340391199, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"ci-3510-3-5-2480589a916679c70820.c.flatcar-212911.internal"}': 'Post "https://10.128.0.103:6443/api/v1/namespaces/default/events": dial tcp 10.128.0.103:6443: connect: connection refused'(may retry after sleeping)
Jul 2 07:51:18.351445 kubelet[1818]: E0702 07:51:18.351410 1818 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs"
Jul 2 07:51:18.351599 kubelet[1818]: E0702 07:51:18.351585 1818 kubelet.go:1431] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jul 2 07:51:18.352311 kubelet[1818]: I0702 07:51:18.352296 1818 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10
Jul 2 07:51:18.352796 kubelet[1818]: I0702 07:51:18.352779 1818 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jul 2 07:51:18.352958 kubelet[1818]: I0702 07:51:18.352945 1818 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Jul 2 07:51:18.359699 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped).
Jul 2 07:51:18.359936 kubelet[1818]: I0702 07:51:18.359913 1818 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jul 2 07:51:18.360767 kubelet[1818]: I0702 07:51:18.360743 1818 server.go:462] "Adding debug handlers to kubelet server"
Jul 2 07:51:18.364981 kubelet[1818]: I0702 07:51:18.364961 1818 volume_manager.go:291] "Starting Kubelet Volume Manager"
Jul 2 07:51:18.365210 kubelet[1818]: I0702 07:51:18.365192 1818 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Jul 2 07:51:18.365438 kubelet[1818]: I0702 07:51:18.365403 1818 reconciler_new.go:29] "Reconciler: start to sync state"
Jul 2 07:51:18.366048 kubelet[1818]: W0702 07:51:18.365992 1818 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.128.0.103:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.103:6443: connect: connection refused
Jul 2 07:51:18.366204 kubelet[1818]: E0702 07:51:18.366187 1818 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.128.0.103:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.103:6443: connect: connection refused
Jul 2 07:51:18.367066 kubelet[1818]: E0702 07:51:18.367046 1818 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.103:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510-3-5-2480589a916679c70820.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.103:6443: connect: connection refused" interval="200ms"
Jul 2 07:51:18.405868 kubelet[1818]: I0702 07:51:18.405842 1818 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jul 2 07:51:18.406051 kubelet[1818]: I0702 07:51:18.406035 1818 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jul 2 07:51:18.406167 kubelet[1818]: I0702 07:51:18.406154 1818 state_mem.go:36] "Initialized new in-memory state store"
Jul 2 07:51:18.409645 kubelet[1818]: I0702 07:51:18.409605 1818 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jul 2 07:51:18.412608 kubelet[1818]: I0702 07:51:18.412568 1818 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jul 2 07:51:18.412608 kubelet[1818]: I0702 07:51:18.412602 1818 status_manager.go:217] "Starting to sync pod status with apiserver"
Jul 2 07:51:18.412783 kubelet[1818]: I0702 07:51:18.412628 1818 kubelet.go:2303] "Starting kubelet main sync loop"
Jul 2 07:51:18.412783 kubelet[1818]: E0702 07:51:18.412705 1818 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jul 2 07:51:18.413586 kubelet[1818]: W0702 07:51:18.413535 1818 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.128.0.103:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.103:6443: connect: connection refused
Jul 2 07:51:18.413775 kubelet[1818]: E0702 07:51:18.413757 1818 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.128.0.103:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.103:6443: connect: connection refused
Jul 2 07:51:18.433273 kubelet[1818]: I0702 07:51:18.432920 1818 policy_none.go:49] "None policy: Start"
Jul 2 07:51:18.433937 kubelet[1818]: I0702 07:51:18.433908 1818 memory_manager.go:169] "Starting memorymanager" policy="None"
Jul 2 07:51:18.434044 kubelet[1818]: I0702 07:51:18.433942 1818 state_mem.go:35] "Initializing new in-memory state store"
Jul 2 07:51:18.469848 kubelet[1818]: I0702 07:51:18.469823 1818 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510-3-5-2480589a916679c70820.c.flatcar-212911.internal"
Jul 2 07:51:18.474321 kubelet[1818]: E0702 07:51:18.474300 1818 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.128.0.103:6443/api/v1/nodes\": dial tcp 10.128.0.103:6443: connect: connection refused" node="ci-3510-3-5-2480589a916679c70820.c.flatcar-212911.internal"
Jul 2 07:51:18.481076 systemd[1]: Created slice kubepods.slice.
Jul 2 07:51:18.491814 systemd[1]: Created slice kubepods-burstable.slice.
Jul 2 07:51:18.495783 systemd[1]: Created slice kubepods-besteffort.slice.
Jul 2 07:51:18.502312 kubelet[1818]: I0702 07:51:18.502289 1818 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jul 2 07:51:18.503301 kubelet[1818]: I0702 07:51:18.503280 1818 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jul 2 07:51:18.506099 kubelet[1818]: E0702 07:51:18.506009 1818 eviction_manager.go:258] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3510-3-5-2480589a916679c70820.c.flatcar-212911.internal\" not found"
Jul 2 07:51:18.513145 kubelet[1818]: I0702 07:51:18.513119 1818 topology_manager.go:215] "Topology Admit Handler" podUID="9473b6ce69c63bfca6c14315991ba082" podNamespace="kube-system" podName="kube-apiserver-ci-3510-3-5-2480589a916679c70820.c.flatcar-212911.internal"
Jul 2 07:51:18.519031 kubelet[1818]: I0702 07:51:18.518577 1818 topology_manager.go:215] "Topology Admit Handler" podUID="280c8910166e9fe301fa41a97ffc243e" podNamespace="kube-system" podName="kube-controller-manager-ci-3510-3-5-2480589a916679c70820.c.flatcar-212911.internal"
Jul 2 07:51:18.523331 kubelet[1818]: I0702 07:51:18.523295 1818 topology_manager.go:215] "Topology Admit Handler" podUID="d0f4554a8fff4e39a1867ed3645331b5" podNamespace="kube-system" podName="kube-scheduler-ci-3510-3-5-2480589a916679c70820.c.flatcar-212911.internal"
Jul 2 07:51:18.529400 systemd[1]: Created slice kubepods-burstable-pod9473b6ce69c63bfca6c14315991ba082.slice.
Jul 2 07:51:18.547846 systemd[1]: Created slice kubepods-burstable-pod280c8910166e9fe301fa41a97ffc243e.slice.
Jul 2 07:51:18.554324 systemd[1]: Created slice kubepods-burstable-podd0f4554a8fff4e39a1867ed3645331b5.slice.
Jul 2 07:51:18.570922 kubelet[1818]: I0702 07:51:18.570883 1818 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d0f4554a8fff4e39a1867ed3645331b5-kubeconfig\") pod \"kube-scheduler-ci-3510-3-5-2480589a916679c70820.c.flatcar-212911.internal\" (UID: \"d0f4554a8fff4e39a1867ed3645331b5\") " pod="kube-system/kube-scheduler-ci-3510-3-5-2480589a916679c70820.c.flatcar-212911.internal"
Jul 2 07:51:18.571215 kubelet[1818]: I0702 07:51:18.571190 1818 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9473b6ce69c63bfca6c14315991ba082-k8s-certs\") pod \"kube-apiserver-ci-3510-3-5-2480589a916679c70820.c.flatcar-212911.internal\" (UID: \"9473b6ce69c63bfca6c14315991ba082\") " pod="kube-system/kube-apiserver-ci-3510-3-5-2480589a916679c70820.c.flatcar-212911.internal"
Jul 2 07:51:18.571337 kubelet[1818]: I0702 07:51:18.571253 1818 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/280c8910166e9fe301fa41a97ffc243e-flexvolume-dir\") pod \"kube-controller-manager-ci-3510-3-5-2480589a916679c70820.c.flatcar-212911.internal\" (UID: \"280c8910166e9fe301fa41a97ffc243e\") " pod="kube-system/kube-controller-manager-ci-3510-3-5-2480589a916679c70820.c.flatcar-212911.internal"
Jul 2 07:51:18.571337 kubelet[1818]: I0702 07:51:18.571309 1818 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/280c8910166e9fe301fa41a97ffc243e-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510-3-5-2480589a916679c70820.c.flatcar-212911.internal\" (UID: \"280c8910166e9fe301fa41a97ffc243e\") " pod="kube-system/kube-controller-manager-ci-3510-3-5-2480589a916679c70820.c.flatcar-212911.internal"
Jul 2 07:51:18.571488 kubelet[1818]: I0702 07:51:18.571353 1818 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/280c8910166e9fe301fa41a97ffc243e-k8s-certs\") pod \"kube-controller-manager-ci-3510-3-5-2480589a916679c70820.c.flatcar-212911.internal\" (UID: \"280c8910166e9fe301fa41a97ffc243e\") " pod="kube-system/kube-controller-manager-ci-3510-3-5-2480589a916679c70820.c.flatcar-212911.internal"
Jul 2 07:51:18.571755 kubelet[1818]: I0702 07:51:18.571416 1818 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/280c8910166e9fe301fa41a97ffc243e-kubeconfig\") pod \"kube-controller-manager-ci-3510-3-5-2480589a916679c70820.c.flatcar-212911.internal\" (UID: \"280c8910166e9fe301fa41a97ffc243e\") " pod="kube-system/kube-controller-manager-ci-3510-3-5-2480589a916679c70820.c.flatcar-212911.internal"
Jul 2 07:51:18.571865 kubelet[1818]: I0702 07:51:18.571794 1818 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9473b6ce69c63bfca6c14315991ba082-ca-certs\") pod \"kube-apiserver-ci-3510-3-5-2480589a916679c70820.c.flatcar-212911.internal\" (UID: \"9473b6ce69c63bfca6c14315991ba082\") " pod="kube-system/kube-apiserver-ci-3510-3-5-2480589a916679c70820.c.flatcar-212911.internal"
Jul 2 07:51:18.571865 kubelet[1818]: I0702 07:51:18.571837 1818 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9473b6ce69c63bfca6c14315991ba082-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510-3-5-2480589a916679c70820.c.flatcar-212911.internal\" (UID: \"9473b6ce69c63bfca6c14315991ba082\") " pod="kube-system/kube-apiserver-ci-3510-3-5-2480589a916679c70820.c.flatcar-212911.internal"
Jul 2 07:51:18.571985 kubelet[1818]: I0702 07:51:18.571872 1818 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/280c8910166e9fe301fa41a97ffc243e-ca-certs\") pod \"kube-controller-manager-ci-3510-3-5-2480589a916679c70820.c.flatcar-212911.internal\" (UID: \"280c8910166e9fe301fa41a97ffc243e\") " pod="kube-system/kube-controller-manager-ci-3510-3-5-2480589a916679c70820.c.flatcar-212911.internal"
Jul 2 07:51:18.572442 kubelet[1818]: E0702 07:51:18.572391 1818 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.103:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510-3-5-2480589a916679c70820.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.103:6443: connect: connection refused" interval="400ms"
Jul 2 07:51:18.680544 kubelet[1818]: I0702 07:51:18.680515 1818 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510-3-5-2480589a916679c70820.c.flatcar-212911.internal"
Jul 2 07:51:18.681085 kubelet[1818]: E0702 07:51:18.681048 1818 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.128.0.103:6443/api/v1/nodes\": dial tcp 10.128.0.103:6443: connect: connection refused" node="ci-3510-3-5-2480589a916679c70820.c.flatcar-212911.internal"
Jul 2 07:51:18.845338 env[1228]: time="2024-07-02T07:51:18.844766616Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510-3-5-2480589a916679c70820.c.flatcar-212911.internal,Uid:9473b6ce69c63bfca6c14315991ba082,Namespace:kube-system,Attempt:0,}"
Jul 2 07:51:18.853766 env[1228]: time="2024-07-02T07:51:18.853709512Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510-3-5-2480589a916679c70820.c.flatcar-212911.internal,Uid:280c8910166e9fe301fa41a97ffc243e,Namespace:kube-system,Attempt:0,}"
Jul 2 07:51:18.857848 env[1228]: time="2024-07-02T07:51:18.857806494Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510-3-5-2480589a916679c70820.c.flatcar-212911.internal,Uid:d0f4554a8fff4e39a1867ed3645331b5,Namespace:kube-system,Attempt:0,}"
Jul 2 07:51:18.973310 kubelet[1818]: E0702 07:51:18.973272 1818 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.103:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510-3-5-2480589a916679c70820.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.103:6443: connect: connection refused" interval="800ms"
Jul 2 07:51:19.087430 kubelet[1818]: I0702 07:51:19.087374 1818 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510-3-5-2480589a916679c70820.c.flatcar-212911.internal"
Jul 2 07:51:19.087805 kubelet[1818]: E0702 07:51:19.087780 1818 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.128.0.103:6443/api/v1/nodes\": dial tcp 10.128.0.103:6443: connect: connection refused" node="ci-3510-3-5-2480589a916679c70820.c.flatcar-212911.internal"
Jul 2 07:51:19.175978 kubelet[1818]: W0702 07:51:19.175895 1818 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.128.0.103:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.103:6443: connect: connection refused
Jul 2 07:51:19.175978 kubelet[1818]: E0702 07:51:19.175984 1818 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.128.0.103:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.103:6443: connect: connection refused
Jul 2 07:51:19.210729 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3188801841.mount: Deactivated successfully.
Jul 2 07:51:19.215761 env[1228]: time="2024-07-02T07:51:19.215711689Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 07:51:19.216939 env[1228]: time="2024-07-02T07:51:19.216884379Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 07:51:19.219849 env[1228]: time="2024-07-02T07:51:19.219793923Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 07:51:19.221228 env[1228]: time="2024-07-02T07:51:19.221197102Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 07:51:19.223206 env[1228]: time="2024-07-02T07:51:19.223171329Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 07:51:19.225036 env[1228]: time="2024-07-02T07:51:19.224993239Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 07:51:19.225945 env[1228]: time="2024-07-02T07:51:19.225895939Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 07:51:19.226868 env[1228]: time="2024-07-02T07:51:19.226833167Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 07:51:19.230180 env[1228]: time="2024-07-02T07:51:19.230132122Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 07:51:19.231122 env[1228]: time="2024-07-02T07:51:19.231086882Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 07:51:19.247132 env[1228]: time="2024-07-02T07:51:19.247082609Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 07:51:19.271105 env[1228]: time="2024-07-02T07:51:19.266086982Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 2 07:51:19.271105 env[1228]: time="2024-07-02T07:51:19.266133252Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 2 07:51:19.271105 env[1228]: time="2024-07-02T07:51:19.266152934Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 07:51:19.274889 env[1228]: time="2024-07-02T07:51:19.274835034Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 07:51:19.276722 env[1228]: time="2024-07-02T07:51:19.275691156Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7ad944d68ca19214cde976adf2f6d67e0a68a2d98f40c7e05a92b4aa37c47c55 pid=1856 runtime=io.containerd.runc.v2
Jul 2 07:51:19.302549 env[1228]: time="2024-07-02T07:51:19.302474873Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 2 07:51:19.302784 env[1228]: time="2024-07-02T07:51:19.302524623Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 2 07:51:19.302784 env[1228]: time="2024-07-02T07:51:19.302542723Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 07:51:19.302979 env[1228]: time="2024-07-02T07:51:19.302757371Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f7c414f5f5f074255d6fd559a4dae18299baf7b866d71d9d6c38bdbc414a188f pid=1882 runtime=io.containerd.runc.v2
Jul 2 07:51:19.313191 systemd[1]: Started cri-containerd-7ad944d68ca19214cde976adf2f6d67e0a68a2d98f40c7e05a92b4aa37c47c55.scope.
Jul 2 07:51:19.342381 env[1228]: time="2024-07-02T07:51:19.341031795Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 2 07:51:19.342381 env[1228]: time="2024-07-02T07:51:19.341140429Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 2 07:51:19.342381 env[1228]: time="2024-07-02T07:51:19.341187076Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 07:51:19.342381 env[1228]: time="2024-07-02T07:51:19.341416389Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/cb17829f399fda8013304753e0adec55db30c69d18072f4999a08c1b61f2c526 pid=1903 runtime=io.containerd.runc.v2
Jul 2 07:51:19.361186 systemd[1]: Started cri-containerd-f7c414f5f5f074255d6fd559a4dae18299baf7b866d71d9d6c38bdbc414a188f.scope.
Jul 2 07:51:19.376970 kubelet[1818]: W0702 07:51:19.376882 1818 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.128.0.103:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510-3-5-2480589a916679c70820.c.flatcar-212911.internal&limit=500&resourceVersion=0": dial tcp 10.128.0.103:6443: connect: connection refused
Jul 2 07:51:19.376970 kubelet[1818]: E0702 07:51:19.376963 1818 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.128.0.103:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510-3-5-2480589a916679c70820.c.flatcar-212911.internal&limit=500&resourceVersion=0": dial tcp 10.128.0.103:6443: connect: connection refused
Jul 2 07:51:19.392099 systemd[1]: Started cri-containerd-cb17829f399fda8013304753e0adec55db30c69d18072f4999a08c1b61f2c526.scope.
Jul 2 07:51:19.426473 env[1228]: time="2024-07-02T07:51:19.426322755Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510-3-5-2480589a916679c70820.c.flatcar-212911.internal,Uid:280c8910166e9fe301fa41a97ffc243e,Namespace:kube-system,Attempt:0,} returns sandbox id \"7ad944d68ca19214cde976adf2f6d67e0a68a2d98f40c7e05a92b4aa37c47c55\"" Jul 2 07:51:19.430033 kubelet[1818]: E0702 07:51:19.430000 1818 kubelet_pods.go:417] "Hostname for pod was too long, truncated it" podName="kube-controller-manager-ci-3510-3-5-2480589a916679c70820.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-controller-manager-ci-3510-3-5-2480589a916679c70820.c.flat" Jul 2 07:51:19.439815 env[1228]: time="2024-07-02T07:51:19.439536657Z" level=info msg="CreateContainer within sandbox \"7ad944d68ca19214cde976adf2f6d67e0a68a2d98f40c7e05a92b4aa37c47c55\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 2 07:51:19.459145 env[1228]: time="2024-07-02T07:51:19.459091513Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510-3-5-2480589a916679c70820.c.flatcar-212911.internal,Uid:9473b6ce69c63bfca6c14315991ba082,Namespace:kube-system,Attempt:0,} returns sandbox id \"f7c414f5f5f074255d6fd559a4dae18299baf7b866d71d9d6c38bdbc414a188f\"" Jul 2 07:51:19.461448 kubelet[1818]: E0702 07:51:19.461232 1818 kubelet_pods.go:417] "Hostname for pod was too long, truncated it" podName="kube-apiserver-ci-3510-3-5-2480589a916679c70820.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-apiserver-ci-3510-3-5-2480589a916679c70820.c.flatcar-21291" Jul 2 07:51:19.463633 env[1228]: time="2024-07-02T07:51:19.463569262Z" level=info msg="CreateContainer within sandbox \"f7c414f5f5f074255d6fd559a4dae18299baf7b866d71d9d6c38bdbc414a188f\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 2 07:51:19.465613 env[1228]: time="2024-07-02T07:51:19.465563315Z" level=info msg="CreateContainer 
within sandbox \"7ad944d68ca19214cde976adf2f6d67e0a68a2d98f40c7e05a92b4aa37c47c55\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"7110a44224b41a3e0c74e09f454186193a7cd26388fb6f3885955b856ac0386d\"" Jul 2 07:51:19.466241 env[1228]: time="2024-07-02T07:51:19.466206459Z" level=info msg="StartContainer for \"7110a44224b41a3e0c74e09f454186193a7cd26388fb6f3885955b856ac0386d\"" Jul 2 07:51:19.491630 env[1228]: time="2024-07-02T07:51:19.491581102Z" level=info msg="CreateContainer within sandbox \"f7c414f5f5f074255d6fd559a4dae18299baf7b866d71d9d6c38bdbc414a188f\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"024c217066be117e5e56fae8c98a923676067560d9f65db9bb31efdb7b7b45af\"" Jul 2 07:51:19.493456 env[1228]: time="2024-07-02T07:51:19.493397086Z" level=info msg="StartContainer for \"024c217066be117e5e56fae8c98a923676067560d9f65db9bb31efdb7b7b45af\"" Jul 2 07:51:19.498590 systemd[1]: Started cri-containerd-7110a44224b41a3e0c74e09f454186193a7cd26388fb6f3885955b856ac0386d.scope. Jul 2 07:51:19.549200 systemd[1]: Started cri-containerd-024c217066be117e5e56fae8c98a923676067560d9f65db9bb31efdb7b7b45af.scope. 
Jul 2 07:51:19.583039 env[1228]: time="2024-07-02T07:51:19.582526803Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510-3-5-2480589a916679c70820.c.flatcar-212911.internal,Uid:d0f4554a8fff4e39a1867ed3645331b5,Namespace:kube-system,Attempt:0,} returns sandbox id \"cb17829f399fda8013304753e0adec55db30c69d18072f4999a08c1b61f2c526\"" Jul 2 07:51:19.586139 kubelet[1818]: E0702 07:51:19.585284 1818 kubelet_pods.go:417] "Hostname for pod was too long, truncated it" podName="kube-scheduler-ci-3510-3-5-2480589a916679c70820.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-scheduler-ci-3510-3-5-2480589a916679c70820.c.flatcar-21291" Jul 2 07:51:19.589138 env[1228]: time="2024-07-02T07:51:19.589092985Z" level=info msg="CreateContainer within sandbox \"cb17829f399fda8013304753e0adec55db30c69d18072f4999a08c1b61f2c526\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 2 07:51:19.616919 env[1228]: time="2024-07-02T07:51:19.616615146Z" level=info msg="CreateContainer within sandbox \"cb17829f399fda8013304753e0adec55db30c69d18072f4999a08c1b61f2c526\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"9ede5d95ec395349e161fd2f8b86a2ac7bb7c404ef14e04348171eeab3913b75\"" Jul 2 07:51:19.617341 env[1228]: time="2024-07-02T07:51:19.617302503Z" level=info msg="StartContainer for \"9ede5d95ec395349e161fd2f8b86a2ac7bb7c404ef14e04348171eeab3913b75\"" Jul 2 07:51:19.617718 env[1228]: time="2024-07-02T07:51:19.617678121Z" level=info msg="StartContainer for \"7110a44224b41a3e0c74e09f454186193a7cd26388fb6f3885955b856ac0386d\" returns successfully" Jul 2 07:51:19.651827 env[1228]: time="2024-07-02T07:51:19.651782721Z" level=info msg="StartContainer for \"024c217066be117e5e56fae8c98a923676067560d9f65db9bb31efdb7b7b45af\" returns successfully" Jul 2 07:51:19.668915 systemd[1]: Started cri-containerd-9ede5d95ec395349e161fd2f8b86a2ac7bb7c404ef14e04348171eeab3913b75.scope. 
Jul 2 07:51:19.769509 kubelet[1818]: W0702 07:51:19.769279 1818 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.128.0.103:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.128.0.103:6443: connect: connection refused Jul 2 07:51:19.769509 kubelet[1818]: E0702 07:51:19.769370 1818 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.128.0.103:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.128.0.103:6443: connect: connection refused Jul 2 07:51:19.773950 kubelet[1818]: E0702 07:51:19.773889 1818 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.103:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510-3-5-2480589a916679c70820.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.103:6443: connect: connection refused" interval="1.6s" Jul 2 07:51:19.829451 env[1228]: time="2024-07-02T07:51:19.829381046Z" level=info msg="StartContainer for \"9ede5d95ec395349e161fd2f8b86a2ac7bb7c404ef14e04348171eeab3913b75\" returns successfully" Jul 2 07:51:19.893045 kubelet[1818]: I0702 07:51:19.893019 1818 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510-3-5-2480589a916679c70820.c.flatcar-212911.internal" Jul 2 07:51:23.690787 kubelet[1818]: E0702 07:51:23.690750 1818 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-3510-3-5-2480589a916679c70820.c.flatcar-212911.internal\" not found" node="ci-3510-3-5-2480589a916679c70820.c.flatcar-212911.internal" Jul 2 07:51:23.727778 kubelet[1818]: I0702 07:51:23.727743 1818 kubelet_node_status.go:73] "Successfully registered node" node="ci-3510-3-5-2480589a916679c70820.c.flatcar-212911.internal" Jul 2 07:51:23.801586 kubelet[1818]: E0702 07:51:23.801456 1818 event.go:280] Server rejected event 
'&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510-3-5-2480589a916679c70820.c.flatcar-212911.internal.17de5602f14d111f", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510-3-5-2480589a916679c70820.c.flatcar-212911.internal", UID:"ci-3510-3-5-2480589a916679c70820.c.flatcar-212911.internal", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510-3-5-2480589a916679c70820.c.flatcar-212911.internal"}, FirstTimestamp:time.Date(2024, time.July, 2, 7, 51, 18, 340391199, time.Local), LastTimestamp:time.Date(2024, time.July, 2, 7, 51, 18, 340391199, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"ci-3510-3-5-2480589a916679c70820.c.flatcar-212911.internal"}': 'namespaces "default" not found' (will not retry!) 
Jul 2 07:51:24.327253 kubelet[1818]: I0702 07:51:24.327186 1818 apiserver.go:52] "Watching apiserver" Jul 2 07:51:24.365791 kubelet[1818]: I0702 07:51:24.365747 1818 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jul 2 07:51:24.459178 kubelet[1818]: E0702 07:51:24.459138 1818 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3510-3-5-2480589a916679c70820.c.flatcar-212911.internal\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-3510-3-5-2480589a916679c70820.c.flatcar-212911.internal" Jul 2 07:51:26.252602 systemd[1]: Reloading. Jul 2 07:51:26.338256 /usr/lib/systemd/system-generators/torcx-generator[2106]: time="2024-07-02T07:51:26Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.5 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.5 /var/lib/torcx/store]" Jul 2 07:51:26.342408 /usr/lib/systemd/system-generators/torcx-generator[2106]: time="2024-07-02T07:51:26Z" level=info msg="torcx already run" Jul 2 07:51:26.464067 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 2 07:51:26.464095 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Jul 2 07:51:26.489274 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Jul 2 07:51:26.642910 kubelet[1818]: I0702 07:51:26.641038 1818 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 2 07:51:26.641286 systemd[1]: Stopping kubelet.service... Jul 2 07:51:26.660206 systemd[1]: kubelet.service: Deactivated successfully. Jul 2 07:51:26.660500 systemd[1]: Stopped kubelet.service. Jul 2 07:51:26.660574 systemd[1]: kubelet.service: Consumed 1.202s CPU time. Jul 2 07:51:26.662919 systemd[1]: Starting kubelet.service... Jul 2 07:51:26.849331 systemd[1]: Started kubelet.service. Jul 2 07:51:26.968793 kubelet[2153]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 2 07:51:26.968793 kubelet[2153]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 2 07:51:26.968793 kubelet[2153]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jul 2 07:51:26.970102 kubelet[2153]: I0702 07:51:26.970038 2153 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 2 07:51:26.985530 kubelet[2153]: I0702 07:51:26.985497 2153 server.go:467] "Kubelet version" kubeletVersion="v1.28.7" Jul 2 07:51:26.985530 kubelet[2153]: I0702 07:51:26.985531 2153 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 2 07:51:26.985837 kubelet[2153]: I0702 07:51:26.985811 2153 server.go:895] "Client rotation is on, will bootstrap in background" Jul 2 07:51:26.992476 kubelet[2153]: I0702 07:51:26.992448 2153 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jul 2 07:51:26.994278 sudo[2165]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jul 2 07:51:26.995235 sudo[2165]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Jul 2 07:51:26.999285 kubelet[2153]: I0702 07:51:26.999259 2153 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 2 07:51:27.030279 kubelet[2153]: I0702 07:51:27.030253 2153 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 2 07:51:27.030783 kubelet[2153]: I0702 07:51:27.030756 2153 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 2 07:51:27.031208 kubelet[2153]: I0702 07:51:27.031185 2153 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jul 2 07:51:27.031449 kubelet[2153]: I0702 07:51:27.031431 2153 topology_manager.go:138] "Creating topology manager with none policy" Jul 2 07:51:27.031594 kubelet[2153]: I0702 07:51:27.031578 2153 container_manager_linux.go:301] "Creating device plugin manager" Jul 2 07:51:27.031742 kubelet[2153]: I0702 
07:51:27.031728 2153 state_mem.go:36] "Initialized new in-memory state store" Jul 2 07:51:27.031985 kubelet[2153]: I0702 07:51:27.031970 2153 kubelet.go:393] "Attempting to sync node with API server" Jul 2 07:51:27.032111 kubelet[2153]: I0702 07:51:27.032097 2153 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 2 07:51:27.034180 kubelet[2153]: I0702 07:51:27.033513 2153 kubelet.go:309] "Adding apiserver pod source" Jul 2 07:51:27.034180 kubelet[2153]: I0702 07:51:27.033547 2153 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 2 07:51:27.042935 kubelet[2153]: I0702 07:51:27.042908 2153 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Jul 2 07:51:27.065390 kubelet[2153]: I0702 07:51:27.065360 2153 server.go:1232] "Started kubelet" Jul 2 07:51:27.065831 kubelet[2153]: I0702 07:51:27.065807 2153 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jul 2 07:51:27.067001 kubelet[2153]: I0702 07:51:27.066973 2153 server.go:462] "Adding debug handlers to kubelet server" Jul 2 07:51:27.067409 kubelet[2153]: I0702 07:51:27.067385 2153 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 2 07:51:27.077162 kubelet[2153]: I0702 07:51:27.069203 2153 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Jul 2 07:51:27.079450 kubelet[2153]: I0702 07:51:27.077462 2153 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 2 07:51:27.079450 kubelet[2153]: I0702 07:51:27.077592 2153 volume_manager.go:291] "Starting Kubelet Volume Manager" Jul 2 07:51:27.079450 kubelet[2153]: I0702 07:51:27.078189 2153 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jul 2 07:51:27.079450 kubelet[2153]: I0702 07:51:27.078471 2153 reconciler_new.go:29] "Reconciler: start to sync state" Jul 2 07:51:27.083559 kubelet[2153]: 
E0702 07:51:27.083535 2153 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Jul 2 07:51:27.083669 kubelet[2153]: E0702 07:51:27.083569 2153 kubelet.go:1431] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 2 07:51:27.132029 kubelet[2153]: I0702 07:51:27.131991 2153 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 2 07:51:27.134033 kubelet[2153]: I0702 07:51:27.134001 2153 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 2 07:51:27.134233 kubelet[2153]: I0702 07:51:27.134216 2153 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 2 07:51:27.137933 kubelet[2153]: I0702 07:51:27.137854 2153 kubelet.go:2303] "Starting kubelet main sync loop" Jul 2 07:51:27.138325 kubelet[2153]: E0702 07:51:27.138306 2153 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 2 07:51:27.195625 kubelet[2153]: I0702 07:51:27.195596 2153 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510-3-5-2480589a916679c70820.c.flatcar-212911.internal" Jul 2 07:51:27.211379 kubelet[2153]: I0702 07:51:27.208632 2153 kubelet_node_status.go:108] "Node was previously registered" node="ci-3510-3-5-2480589a916679c70820.c.flatcar-212911.internal" Jul 2 07:51:27.211379 kubelet[2153]: I0702 07:51:27.208753 2153 kubelet_node_status.go:73] "Successfully registered node" node="ci-3510-3-5-2480589a916679c70820.c.flatcar-212911.internal" Jul 2 07:51:27.239142 kubelet[2153]: E0702 07:51:27.239038 2153 kubelet.go:2327] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 2 07:51:27.242062 kubelet[2153]: I0702 07:51:27.242039 
2153 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 2 07:51:27.242276 kubelet[2153]: I0702 07:51:27.242259 2153 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 2 07:51:27.242397 kubelet[2153]: I0702 07:51:27.242383 2153 state_mem.go:36] "Initialized new in-memory state store" Jul 2 07:51:27.242739 kubelet[2153]: I0702 07:51:27.242725 2153 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 2 07:51:27.242894 kubelet[2153]: I0702 07:51:27.242879 2153 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 2 07:51:27.242998 kubelet[2153]: I0702 07:51:27.242985 2153 policy_none.go:49] "None policy: Start" Jul 2 07:51:27.244060 kubelet[2153]: I0702 07:51:27.244037 2153 memory_manager.go:169] "Starting memorymanager" policy="None" Jul 2 07:51:27.244214 kubelet[2153]: I0702 07:51:27.244201 2153 state_mem.go:35] "Initializing new in-memory state store" Jul 2 07:51:27.244559 kubelet[2153]: I0702 07:51:27.244541 2153 state_mem.go:75] "Updated machine memory state" Jul 2 07:51:27.250254 kubelet[2153]: I0702 07:51:27.250202 2153 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 2 07:51:27.256579 kubelet[2153]: I0702 07:51:27.252801 2153 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 2 07:51:27.439194 kubelet[2153]: I0702 07:51:27.439157 2153 topology_manager.go:215] "Topology Admit Handler" podUID="9473b6ce69c63bfca6c14315991ba082" podNamespace="kube-system" podName="kube-apiserver-ci-3510-3-5-2480589a916679c70820.c.flatcar-212911.internal" Jul 2 07:51:27.439575 kubelet[2153]: I0702 07:51:27.439547 2153 topology_manager.go:215] "Topology Admit Handler" podUID="280c8910166e9fe301fa41a97ffc243e" podNamespace="kube-system" podName="kube-controller-manager-ci-3510-3-5-2480589a916679c70820.c.flatcar-212911.internal" Jul 2 07:51:27.439707 kubelet[2153]: I0702 07:51:27.439620 2153 topology_manager.go:215] "Topology Admit Handler" 
podUID="d0f4554a8fff4e39a1867ed3645331b5" podNamespace="kube-system" podName="kube-scheduler-ci-3510-3-5-2480589a916679c70820.c.flatcar-212911.internal" Jul 2 07:51:27.445028 kubelet[2153]: W0702 07:51:27.444989 2153 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Jul 2 07:51:27.446775 kubelet[2153]: W0702 07:51:27.446728 2153 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Jul 2 07:51:27.447811 kubelet[2153]: W0702 07:51:27.447786 2153 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Jul 2 07:51:27.480811 kubelet[2153]: I0702 07:51:27.480774 2153 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d0f4554a8fff4e39a1867ed3645331b5-kubeconfig\") pod \"kube-scheduler-ci-3510-3-5-2480589a916679c70820.c.flatcar-212911.internal\" (UID: \"d0f4554a8fff4e39a1867ed3645331b5\") " pod="kube-system/kube-scheduler-ci-3510-3-5-2480589a916679c70820.c.flatcar-212911.internal" Jul 2 07:51:27.480951 kubelet[2153]: I0702 07:51:27.480832 2153 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9473b6ce69c63bfca6c14315991ba082-ca-certs\") pod \"kube-apiserver-ci-3510-3-5-2480589a916679c70820.c.flatcar-212911.internal\" (UID: \"9473b6ce69c63bfca6c14315991ba082\") " pod="kube-system/kube-apiserver-ci-3510-3-5-2480589a916679c70820.c.flatcar-212911.internal" Jul 2 07:51:27.480951 kubelet[2153]: I0702 07:51:27.480884 2153 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9473b6ce69c63bfca6c14315991ba082-k8s-certs\") pod \"kube-apiserver-ci-3510-3-5-2480589a916679c70820.c.flatcar-212911.internal\" (UID: \"9473b6ce69c63bfca6c14315991ba082\") " pod="kube-system/kube-apiserver-ci-3510-3-5-2480589a916679c70820.c.flatcar-212911.internal" Jul 2 07:51:27.480951 kubelet[2153]: I0702 07:51:27.480936 2153 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/280c8910166e9fe301fa41a97ffc243e-ca-certs\") pod \"kube-controller-manager-ci-3510-3-5-2480589a916679c70820.c.flatcar-212911.internal\" (UID: \"280c8910166e9fe301fa41a97ffc243e\") " pod="kube-system/kube-controller-manager-ci-3510-3-5-2480589a916679c70820.c.flatcar-212911.internal" Jul 2 07:51:27.481132 kubelet[2153]: I0702 07:51:27.480984 2153 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/280c8910166e9fe301fa41a97ffc243e-flexvolume-dir\") pod \"kube-controller-manager-ci-3510-3-5-2480589a916679c70820.c.flatcar-212911.internal\" (UID: \"280c8910166e9fe301fa41a97ffc243e\") " pod="kube-system/kube-controller-manager-ci-3510-3-5-2480589a916679c70820.c.flatcar-212911.internal" Jul 2 07:51:27.481132 kubelet[2153]: I0702 07:51:27.481054 2153 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/280c8910166e9fe301fa41a97ffc243e-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510-3-5-2480589a916679c70820.c.flatcar-212911.internal\" (UID: \"280c8910166e9fe301fa41a97ffc243e\") " pod="kube-system/kube-controller-manager-ci-3510-3-5-2480589a916679c70820.c.flatcar-212911.internal" Jul 2 07:51:27.481132 kubelet[2153]: I0702 07:51:27.481112 2153 
reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9473b6ce69c63bfca6c14315991ba082-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510-3-5-2480589a916679c70820.c.flatcar-212911.internal\" (UID: \"9473b6ce69c63bfca6c14315991ba082\") " pod="kube-system/kube-apiserver-ci-3510-3-5-2480589a916679c70820.c.flatcar-212911.internal" Jul 2 07:51:27.481321 kubelet[2153]: I0702 07:51:27.481150 2153 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/280c8910166e9fe301fa41a97ffc243e-k8s-certs\") pod \"kube-controller-manager-ci-3510-3-5-2480589a916679c70820.c.flatcar-212911.internal\" (UID: \"280c8910166e9fe301fa41a97ffc243e\") " pod="kube-system/kube-controller-manager-ci-3510-3-5-2480589a916679c70820.c.flatcar-212911.internal" Jul 2 07:51:27.481321 kubelet[2153]: I0702 07:51:27.481215 2153 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/280c8910166e9fe301fa41a97ffc243e-kubeconfig\") pod \"kube-controller-manager-ci-3510-3-5-2480589a916679c70820.c.flatcar-212911.internal\" (UID: \"280c8910166e9fe301fa41a97ffc243e\") " pod="kube-system/kube-controller-manager-ci-3510-3-5-2480589a916679c70820.c.flatcar-212911.internal" Jul 2 07:51:27.854181 sudo[2165]: pam_unix(sudo:session): session closed for user root Jul 2 07:51:28.047106 kubelet[2153]: I0702 07:51:28.047065 2153 apiserver.go:52] "Watching apiserver" Jul 2 07:51:28.078650 kubelet[2153]: I0702 07:51:28.078612 2153 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jul 2 07:51:28.192618 kubelet[2153]: W0702 07:51:28.192581 2153 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no 
more than 63 characters must not contain dots] Jul 2 07:51:28.192834 kubelet[2153]: E0702 07:51:28.192662 2153 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3510-3-5-2480589a916679c70820.c.flatcar-212911.internal\" already exists" pod="kube-system/kube-apiserver-ci-3510-3-5-2480589a916679c70820.c.flatcar-212911.internal" Jul 2 07:51:28.213763 kubelet[2153]: I0702 07:51:28.213721 2153 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3510-3-5-2480589a916679c70820.c.flatcar-212911.internal" podStartSLOduration=1.213646303 podCreationTimestamp="2024-07-02 07:51:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 07:51:28.211486103 +0000 UTC m=+1.354681074" watchObservedRunningTime="2024-07-02 07:51:28.213646303 +0000 UTC m=+1.356841275" Jul 2 07:51:28.244480 kubelet[2153]: I0702 07:51:28.244441 2153 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3510-3-5-2480589a916679c70820.c.flatcar-212911.internal" podStartSLOduration=1.24437961 podCreationTimestamp="2024-07-02 07:51:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 07:51:28.223491863 +0000 UTC m=+1.366686836" watchObservedRunningTime="2024-07-02 07:51:28.24437961 +0000 UTC m=+1.387574580" Jul 2 07:51:28.244726 kubelet[2153]: I0702 07:51:28.244537 2153 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3510-3-5-2480589a916679c70820.c.flatcar-212911.internal" podStartSLOduration=1.244513184 podCreationTimestamp="2024-07-02 07:51:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 07:51:28.232191017 +0000 UTC m=+1.375385992" 
watchObservedRunningTime="2024-07-02 07:51:28.244513184 +0000 UTC m=+1.387708153" Jul 2 07:51:30.066918 sudo[1425]: pam_unix(sudo:session): session closed for user root Jul 2 07:51:30.110499 sshd[1422]: pam_unix(sshd:session): session closed for user core Jul 2 07:51:30.115001 systemd[1]: sshd@6-10.128.0.103:22-147.75.109.163:50636.service: Deactivated successfully. Jul 2 07:51:30.116168 systemd[1]: session-7.scope: Deactivated successfully. Jul 2 07:51:30.116445 systemd[1]: session-7.scope: Consumed 6.185s CPU time. Jul 2 07:51:30.117228 systemd-logind[1248]: Session 7 logged out. Waiting for processes to exit. Jul 2 07:51:30.118571 systemd-logind[1248]: Removed session 7. Jul 2 07:51:39.899822 kubelet[2153]: I0702 07:51:39.899712 2153 kuberuntime_manager.go:1528] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 2 07:51:39.900561 env[1228]: time="2024-07-02T07:51:39.900315225Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 2 07:51:39.901015 kubelet[2153]: I0702 07:51:39.900643 2153 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 2 07:51:40.409793 kubelet[2153]: I0702 07:51:40.409751 2153 topology_manager.go:215] "Topology Admit Handler" podUID="3accd889-3458-4d8f-bc98-6ca68c27c9e3" podNamespace="kube-system" podName="kube-proxy-rbwdj" Jul 2 07:51:40.418202 systemd[1]: Created slice kubepods-besteffort-pod3accd889_3458_4d8f_bc98_6ca68c27c9e3.slice. Jul 2 07:51:40.442364 kubelet[2153]: I0702 07:51:40.442331 2153 topology_manager.go:215] "Topology Admit Handler" podUID="4df866f6-ac33-4935-b6ea-7f3926fb754d" podNamespace="kube-system" podName="cilium-rpbbc" Jul 2 07:51:40.449756 systemd[1]: Created slice kubepods-burstable-pod4df866f6_ac33_4935_b6ea_7f3926fb754d.slice. 
Jul 2 07:51:40.461889 kubelet[2153]: I0702 07:51:40.461857 2153 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4df866f6-ac33-4935-b6ea-7f3926fb754d-etc-cni-netd\") pod \"cilium-rpbbc\" (UID: \"4df866f6-ac33-4935-b6ea-7f3926fb754d\") " pod="kube-system/cilium-rpbbc" Jul 2 07:51:40.462070 kubelet[2153]: I0702 07:51:40.461936 2153 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4df866f6-ac33-4935-b6ea-7f3926fb754d-bpf-maps\") pod \"cilium-rpbbc\" (UID: \"4df866f6-ac33-4935-b6ea-7f3926fb754d\") " pod="kube-system/cilium-rpbbc" Jul 2 07:51:40.462070 kubelet[2153]: I0702 07:51:40.461999 2153 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4df866f6-ac33-4935-b6ea-7f3926fb754d-hostproc\") pod \"cilium-rpbbc\" (UID: \"4df866f6-ac33-4935-b6ea-7f3926fb754d\") " pod="kube-system/cilium-rpbbc" Jul 2 07:51:40.462070 kubelet[2153]: I0702 07:51:40.462035 2153 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3accd889-3458-4d8f-bc98-6ca68c27c9e3-xtables-lock\") pod \"kube-proxy-rbwdj\" (UID: \"3accd889-3458-4d8f-bc98-6ca68c27c9e3\") " pod="kube-system/kube-proxy-rbwdj" Jul 2 07:51:40.462252 kubelet[2153]: I0702 07:51:40.462094 2153 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4df866f6-ac33-4935-b6ea-7f3926fb754d-cilium-run\") pod \"cilium-rpbbc\" (UID: \"4df866f6-ac33-4935-b6ea-7f3926fb754d\") " pod="kube-system/cilium-rpbbc" Jul 2 07:51:40.462252 kubelet[2153]: I0702 07:51:40.462152 2153 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4df866f6-ac33-4935-b6ea-7f3926fb754d-lib-modules\") pod \"cilium-rpbbc\" (UID: \"4df866f6-ac33-4935-b6ea-7f3926fb754d\") " pod="kube-system/cilium-rpbbc" Jul 2 07:51:40.462252 kubelet[2153]: I0702 07:51:40.462194 2153 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/3accd889-3458-4d8f-bc98-6ca68c27c9e3-kube-proxy\") pod \"kube-proxy-rbwdj\" (UID: \"3accd889-3458-4d8f-bc98-6ca68c27c9e3\") " pod="kube-system/kube-proxy-rbwdj" Jul 2 07:51:40.462252 kubelet[2153]: I0702 07:51:40.462250 2153 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gmzpq\" (UniqueName: \"kubernetes.io/projected/4df866f6-ac33-4935-b6ea-7f3926fb754d-kube-api-access-gmzpq\") pod \"cilium-rpbbc\" (UID: \"4df866f6-ac33-4935-b6ea-7f3926fb754d\") " pod="kube-system/cilium-rpbbc" Jul 2 07:51:40.462497 kubelet[2153]: I0702 07:51:40.462288 2153 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l6xbv\" (UniqueName: \"kubernetes.io/projected/3accd889-3458-4d8f-bc98-6ca68c27c9e3-kube-api-access-l6xbv\") pod \"kube-proxy-rbwdj\" (UID: \"3accd889-3458-4d8f-bc98-6ca68c27c9e3\") " pod="kube-system/kube-proxy-rbwdj" Jul 2 07:51:40.462497 kubelet[2153]: I0702 07:51:40.462348 2153 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4df866f6-ac33-4935-b6ea-7f3926fb754d-cni-path\") pod \"cilium-rpbbc\" (UID: \"4df866f6-ac33-4935-b6ea-7f3926fb754d\") " pod="kube-system/cilium-rpbbc" Jul 2 07:51:40.462497 kubelet[2153]: I0702 07:51:40.462406 2153 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: 
\"kubernetes.io/secret/4df866f6-ac33-4935-b6ea-7f3926fb754d-clustermesh-secrets\") pod \"cilium-rpbbc\" (UID: \"4df866f6-ac33-4935-b6ea-7f3926fb754d\") " pod="kube-system/cilium-rpbbc" Jul 2 07:51:40.462497 kubelet[2153]: I0702 07:51:40.462464 2153 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4df866f6-ac33-4935-b6ea-7f3926fb754d-cilium-config-path\") pod \"cilium-rpbbc\" (UID: \"4df866f6-ac33-4935-b6ea-7f3926fb754d\") " pod="kube-system/cilium-rpbbc" Jul 2 07:51:40.462740 kubelet[2153]: I0702 07:51:40.462537 2153 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4df866f6-ac33-4935-b6ea-7f3926fb754d-hubble-tls\") pod \"cilium-rpbbc\" (UID: \"4df866f6-ac33-4935-b6ea-7f3926fb754d\") " pod="kube-system/cilium-rpbbc" Jul 2 07:51:40.462740 kubelet[2153]: I0702 07:51:40.462613 2153 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4df866f6-ac33-4935-b6ea-7f3926fb754d-cilium-cgroup\") pod \"cilium-rpbbc\" (UID: \"4df866f6-ac33-4935-b6ea-7f3926fb754d\") " pod="kube-system/cilium-rpbbc" Jul 2 07:51:40.462740 kubelet[2153]: I0702 07:51:40.462658 2153 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4df866f6-ac33-4935-b6ea-7f3926fb754d-host-proc-sys-kernel\") pod \"cilium-rpbbc\" (UID: \"4df866f6-ac33-4935-b6ea-7f3926fb754d\") " pod="kube-system/cilium-rpbbc" Jul 2 07:51:40.462740 kubelet[2153]: I0702 07:51:40.462715 2153 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4df866f6-ac33-4935-b6ea-7f3926fb754d-xtables-lock\") pod \"cilium-rpbbc\" (UID: 
\"4df866f6-ac33-4935-b6ea-7f3926fb754d\") " pod="kube-system/cilium-rpbbc" Jul 2 07:51:40.462958 kubelet[2153]: I0702 07:51:40.462769 2153 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3accd889-3458-4d8f-bc98-6ca68c27c9e3-lib-modules\") pod \"kube-proxy-rbwdj\" (UID: \"3accd889-3458-4d8f-bc98-6ca68c27c9e3\") " pod="kube-system/kube-proxy-rbwdj" Jul 2 07:51:40.462958 kubelet[2153]: I0702 07:51:40.462814 2153 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4df866f6-ac33-4935-b6ea-7f3926fb754d-host-proc-sys-net\") pod \"cilium-rpbbc\" (UID: \"4df866f6-ac33-4935-b6ea-7f3926fb754d\") " pod="kube-system/cilium-rpbbc" Jul 2 07:51:40.731959 env[1228]: time="2024-07-02T07:51:40.731813386Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rbwdj,Uid:3accd889-3458-4d8f-bc98-6ca68c27c9e3,Namespace:kube-system,Attempt:0,}" Jul 2 07:51:40.758257 env[1228]: time="2024-07-02T07:51:40.758203926Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rpbbc,Uid:4df866f6-ac33-4935-b6ea-7f3926fb754d,Namespace:kube-system,Attempt:0,}" Jul 2 07:51:40.762609 env[1228]: time="2024-07-02T07:51:40.762493552Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 07:51:40.762609 env[1228]: time="2024-07-02T07:51:40.762552977Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 07:51:40.762609 env[1228]: time="2024-07-02T07:51:40.762572267Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 07:51:40.763094 env[1228]: time="2024-07-02T07:51:40.763018940Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/dc5b489cf6e9cd58667829a2308acb068474854d20202b2ae49091039e3de073 pid=2233 runtime=io.containerd.runc.v2 Jul 2 07:51:40.782466 systemd[1]: Started cri-containerd-dc5b489cf6e9cd58667829a2308acb068474854d20202b2ae49091039e3de073.scope. Jul 2 07:51:40.798893 env[1228]: time="2024-07-02T07:51:40.798810434Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 07:51:40.799131 env[1228]: time="2024-07-02T07:51:40.799091721Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 07:51:40.799331 env[1228]: time="2024-07-02T07:51:40.799280075Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 07:51:40.799692 env[1228]: time="2024-07-02T07:51:40.799641692Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/bccec4029e688fae299f2e66dfc1ae00e878c7d33ea560aa71e7c9988c996d4d pid=2260 runtime=io.containerd.runc.v2 Jul 2 07:51:40.835063 systemd[1]: Started cri-containerd-bccec4029e688fae299f2e66dfc1ae00e878c7d33ea560aa71e7c9988c996d4d.scope. Jul 2 07:51:40.840251 kubelet[2153]: I0702 07:51:40.840196 2153 topology_manager.go:215] "Topology Admit Handler" podUID="f6cb6dc5-21af-476d-a64b-55e4c7bb9dbc" podNamespace="kube-system" podName="cilium-operator-6bc8ccdb58-7kkfz" Jul 2 07:51:40.848248 systemd[1]: Created slice kubepods-besteffort-podf6cb6dc5_21af_476d_a64b_55e4c7bb9dbc.slice. 
Jul 2 07:51:40.865189 kubelet[2153]: I0702 07:51:40.865151 2153 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f6cb6dc5-21af-476d-a64b-55e4c7bb9dbc-cilium-config-path\") pod \"cilium-operator-6bc8ccdb58-7kkfz\" (UID: \"f6cb6dc5-21af-476d-a64b-55e4c7bb9dbc\") " pod="kube-system/cilium-operator-6bc8ccdb58-7kkfz" Jul 2 07:51:40.865373 kubelet[2153]: I0702 07:51:40.865210 2153 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7kb55\" (UniqueName: \"kubernetes.io/projected/f6cb6dc5-21af-476d-a64b-55e4c7bb9dbc-kube-api-access-7kb55\") pod \"cilium-operator-6bc8ccdb58-7kkfz\" (UID: \"f6cb6dc5-21af-476d-a64b-55e4c7bb9dbc\") " pod="kube-system/cilium-operator-6bc8ccdb58-7kkfz" Jul 2 07:51:40.927687 env[1228]: time="2024-07-02T07:51:40.927625251Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rbwdj,Uid:3accd889-3458-4d8f-bc98-6ca68c27c9e3,Namespace:kube-system,Attempt:0,} returns sandbox id \"dc5b489cf6e9cd58667829a2308acb068474854d20202b2ae49091039e3de073\"" Jul 2 07:51:40.936470 env[1228]: time="2024-07-02T07:51:40.936404328Z" level=info msg="CreateContainer within sandbox \"dc5b489cf6e9cd58667829a2308acb068474854d20202b2ae49091039e3de073\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 2 07:51:40.980618 env[1228]: time="2024-07-02T07:51:40.980562148Z" level=info msg="CreateContainer within sandbox \"dc5b489cf6e9cd58667829a2308acb068474854d20202b2ae49091039e3de073\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"b63bf31d6ed5d661de97aeede43da451c863bc5b470bc8d52953335c10ef6677\"" Jul 2 07:51:40.984453 env[1228]: time="2024-07-02T07:51:40.983682530Z" level=info msg="StartContainer for \"b63bf31d6ed5d661de97aeede43da451c863bc5b470bc8d52953335c10ef6677\"" Jul 2 07:51:40.986092 env[1228]: time="2024-07-02T07:51:40.986049669Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rpbbc,Uid:4df866f6-ac33-4935-b6ea-7f3926fb754d,Namespace:kube-system,Attempt:0,} returns sandbox id \"bccec4029e688fae299f2e66dfc1ae00e878c7d33ea560aa71e7c9988c996d4d\"" Jul 2 07:51:40.992015 kubelet[2153]: E0702 07:51:40.991986 2153 gcpcredential.go:74] while reading 'google-dockercfg-url' metadata: http status code: 404 while fetching url http://metadata.google.internal./computeMetadata/v1/instance/attributes/google-dockercfg-url Jul 2 07:51:40.993028 env[1228]: time="2024-07-02T07:51:40.992990371Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jul 2 07:51:41.012643 systemd[1]: Started cri-containerd-b63bf31d6ed5d661de97aeede43da451c863bc5b470bc8d52953335c10ef6677.scope. Jul 2 07:51:41.066957 env[1228]: time="2024-07-02T07:51:41.066868726Z" level=info msg="StartContainer for \"b63bf31d6ed5d661de97aeede43da451c863bc5b470bc8d52953335c10ef6677\" returns successfully" Jul 2 07:51:41.154567 env[1228]: time="2024-07-02T07:51:41.154513792Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6bc8ccdb58-7kkfz,Uid:f6cb6dc5-21af-476d-a64b-55e4c7bb9dbc,Namespace:kube-system,Attempt:0,}" Jul 2 07:51:41.175603 env[1228]: time="2024-07-02T07:51:41.175505700Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 07:51:41.175924 env[1228]: time="2024-07-02T07:51:41.175852291Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 07:51:41.175924 env[1228]: time="2024-07-02T07:51:41.175902993Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 07:51:41.176413 env[1228]: time="2024-07-02T07:51:41.176281306Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3f6285e0ade86905e65e89e4fe0943e0cda5d2c63b6c482181d3dea41340412b pid=2380 runtime=io.containerd.runc.v2 Jul 2 07:51:41.195947 systemd[1]: Started cri-containerd-3f6285e0ade86905e65e89e4fe0943e0cda5d2c63b6c482181d3dea41340412b.scope. Jul 2 07:51:41.222373 kubelet[2153]: I0702 07:51:41.221910 2153 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-rbwdj" podStartSLOduration=1.221858536 podCreationTimestamp="2024-07-02 07:51:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 07:51:41.221274806 +0000 UTC m=+14.364469794" watchObservedRunningTime="2024-07-02 07:51:41.221858536 +0000 UTC m=+14.365053503" Jul 2 07:51:41.274999 env[1228]: time="2024-07-02T07:51:41.274865928Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6bc8ccdb58-7kkfz,Uid:f6cb6dc5-21af-476d-a64b-55e4c7bb9dbc,Namespace:kube-system,Attempt:0,} returns sandbox id \"3f6285e0ade86905e65e89e4fe0943e0cda5d2c63b6c482181d3dea41340412b\"" Jul 2 07:51:43.474241 systemd[1]: Started sshd@9-10.128.0.103:22-190.191.171.7:41890.service. Jul 2 07:51:44.423359 sshd[2505]: Invalid user ftpftp from 190.191.171.7 port 41890 Jul 2 07:51:44.434698 sshd[2505]: Failed password for invalid user ftpftp from 190.191.171.7 port 41890 ssh2 Jul 2 07:51:44.601450 sshd[2505]: Received disconnect from 190.191.171.7 port 41890:11: Bye Bye [preauth] Jul 2 07:51:44.602325 sshd[2505]: Disconnected from invalid user ftpftp 190.191.171.7 port 41890 [preauth] Jul 2 07:51:44.603712 systemd[1]: sshd@9-10.128.0.103:22-190.191.171.7:41890.service: Deactivated successfully. 
Jul 2 07:51:44.833719 systemd[1]: Started sshd@10-10.128.0.103:22-43.134.166.245:44916.service. Jul 2 07:51:45.993943 sshd[2511]: Failed password for root from 43.134.166.245 port 44916 ssh2 Jul 2 07:51:46.212846 sshd[2511]: Received disconnect from 43.134.166.245 port 44916:11: Bye Bye [preauth] Jul 2 07:51:46.212846 sshd[2511]: Disconnected from authenticating user root 43.134.166.245 port 44916 [preauth] Jul 2 07:51:46.214677 systemd[1]: sshd@10-10.128.0.103:22-43.134.166.245:44916.service: Deactivated successfully. Jul 2 07:51:46.290146 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1311544339.mount: Deactivated successfully. Jul 2 07:51:49.672837 env[1228]: time="2024-07-02T07:51:49.672777655Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:51:49.675716 env[1228]: time="2024-07-02T07:51:49.675651302Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:51:49.678225 env[1228]: time="2024-07-02T07:51:49.678184696Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:51:49.679105 env[1228]: time="2024-07-02T07:51:49.679055708Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jul 2 07:51:49.682375 env[1228]: time="2024-07-02T07:51:49.681464220Z" level=info msg="PullImage 
\"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jul 2 07:51:49.684290 env[1228]: time="2024-07-02T07:51:49.684245902Z" level=info msg="CreateContainer within sandbox \"bccec4029e688fae299f2e66dfc1ae00e878c7d33ea560aa71e7c9988c996d4d\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 2 07:51:49.703126 env[1228]: time="2024-07-02T07:51:49.703072402Z" level=info msg="CreateContainer within sandbox \"bccec4029e688fae299f2e66dfc1ae00e878c7d33ea560aa71e7c9988c996d4d\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"8a3c2f326dfcaa3ac9faa042ac3ed33d2f4cc735b5ff075f93c6a3e2a2871208\"" Jul 2 07:51:49.705502 env[1228]: time="2024-07-02T07:51:49.704006728Z" level=info msg="StartContainer for \"8a3c2f326dfcaa3ac9faa042ac3ed33d2f4cc735b5ff075f93c6a3e2a2871208\"" Jul 2 07:51:49.732590 systemd[1]: Started cri-containerd-8a3c2f326dfcaa3ac9faa042ac3ed33d2f4cc735b5ff075f93c6a3e2a2871208.scope. Jul 2 07:51:49.781805 env[1228]: time="2024-07-02T07:51:49.781752495Z" level=info msg="StartContainer for \"8a3c2f326dfcaa3ac9faa042ac3ed33d2f4cc735b5ff075f93c6a3e2a2871208\" returns successfully" Jul 2 07:51:49.797561 systemd[1]: cri-containerd-8a3c2f326dfcaa3ac9faa042ac3ed33d2f4cc735b5ff075f93c6a3e2a2871208.scope: Deactivated successfully. Jul 2 07:51:50.696362 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8a3c2f326dfcaa3ac9faa042ac3ed33d2f4cc735b5ff075f93c6a3e2a2871208-rootfs.mount: Deactivated successfully. 
Jul 2 07:51:51.633630 env[1228]: time="2024-07-02T07:51:51.633546627Z" level=info msg="shim disconnected" id=8a3c2f326dfcaa3ac9faa042ac3ed33d2f4cc735b5ff075f93c6a3e2a2871208 Jul 2 07:51:51.633630 env[1228]: time="2024-07-02T07:51:51.633617767Z" level=warning msg="cleaning up after shim disconnected" id=8a3c2f326dfcaa3ac9faa042ac3ed33d2f4cc735b5ff075f93c6a3e2a2871208 namespace=k8s.io Jul 2 07:51:51.633630 env[1228]: time="2024-07-02T07:51:51.633636670Z" level=info msg="cleaning up dead shim" Jul 2 07:51:51.645321 env[1228]: time="2024-07-02T07:51:51.645269060Z" level=warning msg="cleanup warnings time=\"2024-07-02T07:51:51Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2566 runtime=io.containerd.runc.v2\n" Jul 2 07:51:52.124291 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3068496429.mount: Deactivated successfully. Jul 2 07:51:52.272078 env[1228]: time="2024-07-02T07:51:52.272025882Z" level=info msg="CreateContainer within sandbox \"bccec4029e688fae299f2e66dfc1ae00e878c7d33ea560aa71e7c9988c996d4d\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 2 07:51:52.307448 env[1228]: time="2024-07-02T07:51:52.305337796Z" level=info msg="CreateContainer within sandbox \"bccec4029e688fae299f2e66dfc1ae00e878c7d33ea560aa71e7c9988c996d4d\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"64d1269806203dd5f030853131dc5a3ecb419436513ae906028d2af602c78859\"" Jul 2 07:51:52.306009 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4177387533.mount: Deactivated successfully. Jul 2 07:51:52.310936 env[1228]: time="2024-07-02T07:51:52.310261094Z" level=info msg="StartContainer for \"64d1269806203dd5f030853131dc5a3ecb419436513ae906028d2af602c78859\"" Jul 2 07:51:52.348878 systemd[1]: Started cri-containerd-64d1269806203dd5f030853131dc5a3ecb419436513ae906028d2af602c78859.scope. 
Jul 2 07:51:52.402171 env[1228]: time="2024-07-02T07:51:52.401573612Z" level=info msg="StartContainer for \"64d1269806203dd5f030853131dc5a3ecb419436513ae906028d2af602c78859\" returns successfully" Jul 2 07:51:52.423020 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 2 07:51:52.424764 systemd[1]: Stopped systemd-sysctl.service. Jul 2 07:51:52.425192 systemd[1]: Stopping systemd-sysctl.service... Jul 2 07:51:52.431734 systemd[1]: Starting systemd-sysctl.service... Jul 2 07:51:52.432362 systemd[1]: cri-containerd-64d1269806203dd5f030853131dc5a3ecb419436513ae906028d2af602c78859.scope: Deactivated successfully. Jul 2 07:51:52.449385 systemd[1]: Finished systemd-sysctl.service. Jul 2 07:51:52.499932 env[1228]: time="2024-07-02T07:51:52.499865690Z" level=info msg="shim disconnected" id=64d1269806203dd5f030853131dc5a3ecb419436513ae906028d2af602c78859 Jul 2 07:51:52.500278 env[1228]: time="2024-07-02T07:51:52.500248484Z" level=warning msg="cleaning up after shim disconnected" id=64d1269806203dd5f030853131dc5a3ecb419436513ae906028d2af602c78859 namespace=k8s.io Jul 2 07:51:52.500439 env[1228]: time="2024-07-02T07:51:52.500404468Z" level=info msg="cleaning up dead shim" Jul 2 07:51:52.528485 env[1228]: time="2024-07-02T07:51:52.528439325Z" level=warning msg="cleanup warnings time=\"2024-07-02T07:51:52Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2632 runtime=io.containerd.runc.v2\n" Jul 2 07:51:53.161793 env[1228]: time="2024-07-02T07:51:53.161728046Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:51:53.164378 env[1228]: time="2024-07-02T07:51:53.164325617Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Jul 2 07:51:53.166794 env[1228]: time="2024-07-02T07:51:53.166741293Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:51:53.167520 env[1228]: time="2024-07-02T07:51:53.167475077Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jul 2 07:51:53.171933 env[1228]: time="2024-07-02T07:51:53.171884859Z" level=info msg="CreateContainer within sandbox \"3f6285e0ade86905e65e89e4fe0943e0cda5d2c63b6c482181d3dea41340412b\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jul 2 07:51:53.190663 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount246807169.mount: Deactivated successfully. Jul 2 07:51:53.197559 env[1228]: time="2024-07-02T07:51:53.197503130Z" level=info msg="CreateContainer within sandbox \"3f6285e0ade86905e65e89e4fe0943e0cda5d2c63b6c482181d3dea41340412b\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"e0736a60056a1a813017962d641f643a2a73620f39c5015dd1d9220a8ea5006c\"" Jul 2 07:51:53.199880 env[1228]: time="2024-07-02T07:51:53.198335959Z" level=info msg="StartContainer for \"e0736a60056a1a813017962d641f643a2a73620f39c5015dd1d9220a8ea5006c\"" Jul 2 07:51:53.227507 systemd[1]: Started cri-containerd-e0736a60056a1a813017962d641f643a2a73620f39c5015dd1d9220a8ea5006c.scope. 
Jul 2 07:51:53.273088 env[1228]: time="2024-07-02T07:51:53.273033438Z" level=info msg="CreateContainer within sandbox \"bccec4029e688fae299f2e66dfc1ae00e878c7d33ea560aa71e7c9988c996d4d\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 2 07:51:53.285633 env[1228]: time="2024-07-02T07:51:53.285581618Z" level=info msg="StartContainer for \"e0736a60056a1a813017962d641f643a2a73620f39c5015dd1d9220a8ea5006c\" returns successfully" Jul 2 07:51:53.319604 env[1228]: time="2024-07-02T07:51:53.319544169Z" level=info msg="CreateContainer within sandbox \"bccec4029e688fae299f2e66dfc1ae00e878c7d33ea560aa71e7c9988c996d4d\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"454e8c5fa2f92af41481e123ba0defc42fa1997fefa8e95207500fc159b2ada9\"" Jul 2 07:51:53.320528 env[1228]: time="2024-07-02T07:51:53.320488917Z" level=info msg="StartContainer for \"454e8c5fa2f92af41481e123ba0defc42fa1997fefa8e95207500fc159b2ada9\"" Jul 2 07:51:53.347548 systemd[1]: Started cri-containerd-454e8c5fa2f92af41481e123ba0defc42fa1997fefa8e95207500fc159b2ada9.scope. Jul 2 07:51:53.402514 env[1228]: time="2024-07-02T07:51:53.402463751Z" level=info msg="StartContainer for \"454e8c5fa2f92af41481e123ba0defc42fa1997fefa8e95207500fc159b2ada9\" returns successfully" Jul 2 07:51:53.412865 systemd[1]: cri-containerd-454e8c5fa2f92af41481e123ba0defc42fa1997fefa8e95207500fc159b2ada9.scope: Deactivated successfully. 
Jul 2 07:51:53.580514 env[1228]: time="2024-07-02T07:51:53.580436471Z" level=info msg="shim disconnected" id=454e8c5fa2f92af41481e123ba0defc42fa1997fefa8e95207500fc159b2ada9 Jul 2 07:51:53.580514 env[1228]: time="2024-07-02T07:51:53.580503765Z" level=warning msg="cleaning up after shim disconnected" id=454e8c5fa2f92af41481e123ba0defc42fa1997fefa8e95207500fc159b2ada9 namespace=k8s.io Jul 2 07:51:53.580514 env[1228]: time="2024-07-02T07:51:53.580519911Z" level=info msg="cleaning up dead shim" Jul 2 07:51:53.597854 env[1228]: time="2024-07-02T07:51:53.597797070Z" level=warning msg="cleanup warnings time=\"2024-07-02T07:51:53Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2732 runtime=io.containerd.runc.v2\n" Jul 2 07:51:54.301058 env[1228]: time="2024-07-02T07:51:54.300998091Z" level=info msg="CreateContainer within sandbox \"bccec4029e688fae299f2e66dfc1ae00e878c7d33ea560aa71e7c9988c996d4d\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 2 07:51:54.323334 env[1228]: time="2024-07-02T07:51:54.323282159Z" level=info msg="CreateContainer within sandbox \"bccec4029e688fae299f2e66dfc1ae00e878c7d33ea560aa71e7c9988c996d4d\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"195a6a0e58e4943f2b73b209f9bbb6143800badafc5ce4c2f7f4bed7f91b15bf\"" Jul 2 07:51:54.324267 env[1228]: time="2024-07-02T07:51:54.324229417Z" level=info msg="StartContainer for \"195a6a0e58e4943f2b73b209f9bbb6143800badafc5ce4c2f7f4bed7f91b15bf\"" Jul 2 07:51:54.381645 systemd[1]: Started cri-containerd-195a6a0e58e4943f2b73b209f9bbb6143800badafc5ce4c2f7f4bed7f91b15bf.scope. Jul 2 07:51:54.505940 env[1228]: time="2024-07-02T07:51:54.505886837Z" level=info msg="StartContainer for \"195a6a0e58e4943f2b73b209f9bbb6143800badafc5ce4c2f7f4bed7f91b15bf\" returns successfully" Jul 2 07:51:54.508898 systemd[1]: cri-containerd-195a6a0e58e4943f2b73b209f9bbb6143800badafc5ce4c2f7f4bed7f91b15bf.scope: Deactivated successfully. 
Jul 2 07:51:54.552040 env[1228]: time="2024-07-02T07:51:54.551908672Z" level=info msg="shim disconnected" id=195a6a0e58e4943f2b73b209f9bbb6143800badafc5ce4c2f7f4bed7f91b15bf Jul 2 07:51:54.552345 env[1228]: time="2024-07-02T07:51:54.552313334Z" level=warning msg="cleaning up after shim disconnected" id=195a6a0e58e4943f2b73b209f9bbb6143800badafc5ce4c2f7f4bed7f91b15bf namespace=k8s.io Jul 2 07:51:54.552508 env[1228]: time="2024-07-02T07:51:54.552483099Z" level=info msg="cleaning up dead shim" Jul 2 07:51:54.564869 env[1228]: time="2024-07-02T07:51:54.564821159Z" level=warning msg="cleanup warnings time=\"2024-07-02T07:51:54Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2789 runtime=io.containerd.runc.v2\n" Jul 2 07:51:55.097836 systemd[1]: run-containerd-runc-k8s.io-195a6a0e58e4943f2b73b209f9bbb6143800badafc5ce4c2f7f4bed7f91b15bf-runc.ToiciY.mount: Deactivated successfully. Jul 2 07:51:55.097976 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-195a6a0e58e4943f2b73b209f9bbb6143800badafc5ce4c2f7f4bed7f91b15bf-rootfs.mount: Deactivated successfully. 
Jul 2 07:51:55.313097 env[1228]: time="2024-07-02T07:51:55.313040558Z" level=info msg="CreateContainer within sandbox \"bccec4029e688fae299f2e66dfc1ae00e878c7d33ea560aa71e7c9988c996d4d\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 2 07:51:55.343369 env[1228]: time="2024-07-02T07:51:55.343321513Z" level=info msg="CreateContainer within sandbox \"bccec4029e688fae299f2e66dfc1ae00e878c7d33ea560aa71e7c9988c996d4d\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"66276d93f2a81d823152c45b8521f430fdf87dd82cf9a4a836418eedd9177439\"" Jul 2 07:51:55.344310 env[1228]: time="2024-07-02T07:51:55.344270030Z" level=info msg="StartContainer for \"66276d93f2a81d823152c45b8521f430fdf87dd82cf9a4a836418eedd9177439\"" Jul 2 07:51:55.352207 kubelet[2153]: I0702 07:51:55.351933 2153 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-6bc8ccdb58-7kkfz" podStartSLOduration=3.461233757 podCreationTimestamp="2024-07-02 07:51:40 +0000 UTC" firstStartedPulling="2024-07-02 07:51:41.27711792 +0000 UTC m=+14.420312880" lastFinishedPulling="2024-07-02 07:51:53.167764215 +0000 UTC m=+26.310959160" observedRunningTime="2024-07-02 07:51:54.405019034 +0000 UTC m=+27.548214004" watchObservedRunningTime="2024-07-02 07:51:55.351880037 +0000 UTC m=+28.495075006" Jul 2 07:51:55.384145 systemd[1]: Started cri-containerd-66276d93f2a81d823152c45b8521f430fdf87dd82cf9a4a836418eedd9177439.scope. 
Jul 2 07:51:55.438505 env[1228]: time="2024-07-02T07:51:55.438457312Z" level=info msg="StartContainer for \"66276d93f2a81d823152c45b8521f430fdf87dd82cf9a4a836418eedd9177439\" returns successfully" Jul 2 07:51:55.660837 kubelet[2153]: I0702 07:51:55.660798 2153 kubelet_node_status.go:493] "Fast updating node status as it just became ready" Jul 2 07:51:55.690736 kubelet[2153]: I0702 07:51:55.690675 2153 topology_manager.go:215] "Topology Admit Handler" podUID="093686ea-bbf5-45d6-a6f6-df501bc39fb5" podNamespace="kube-system" podName="coredns-5dd5756b68-wqrbm" Jul 2 07:51:55.699140 systemd[1]: Created slice kubepods-burstable-pod093686ea_bbf5_45d6_a6f6_df501bc39fb5.slice. Jul 2 07:51:55.713169 kubelet[2153]: I0702 07:51:55.713130 2153 topology_manager.go:215] "Topology Admit Handler" podUID="3bf4d8de-4e1e-4e27-add9-80edc2f868dd" podNamespace="kube-system" podName="coredns-5dd5756b68-lbkst" Jul 2 07:51:55.721134 systemd[1]: Created slice kubepods-burstable-pod3bf4d8de_4e1e_4e27_add9_80edc2f868dd.slice. 
Jul 2 07:51:55.886206 kubelet[2153]: I0702 07:51:55.886162 2153 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/093686ea-bbf5-45d6-a6f6-df501bc39fb5-config-volume\") pod \"coredns-5dd5756b68-wqrbm\" (UID: \"093686ea-bbf5-45d6-a6f6-df501bc39fb5\") " pod="kube-system/coredns-5dd5756b68-wqrbm" Jul 2 07:51:55.886576 kubelet[2153]: I0702 07:51:55.886552 2153 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h7q4b\" (UniqueName: \"kubernetes.io/projected/093686ea-bbf5-45d6-a6f6-df501bc39fb5-kube-api-access-h7q4b\") pod \"coredns-5dd5756b68-wqrbm\" (UID: \"093686ea-bbf5-45d6-a6f6-df501bc39fb5\") " pod="kube-system/coredns-5dd5756b68-wqrbm" Jul 2 07:51:55.886808 kubelet[2153]: I0702 07:51:55.886776 2153 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jr5jz\" (UniqueName: \"kubernetes.io/projected/3bf4d8de-4e1e-4e27-add9-80edc2f868dd-kube-api-access-jr5jz\") pod \"coredns-5dd5756b68-lbkst\" (UID: \"3bf4d8de-4e1e-4e27-add9-80edc2f868dd\") " pod="kube-system/coredns-5dd5756b68-lbkst" Jul 2 07:51:55.887022 kubelet[2153]: I0702 07:51:55.887004 2153 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3bf4d8de-4e1e-4e27-add9-80edc2f868dd-config-volume\") pod \"coredns-5dd5756b68-lbkst\" (UID: \"3bf4d8de-4e1e-4e27-add9-80edc2f868dd\") " pod="kube-system/coredns-5dd5756b68-lbkst" Jul 2 07:51:56.026722 env[1228]: time="2024-07-02T07:51:56.026596415Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-lbkst,Uid:3bf4d8de-4e1e-4e27-add9-80edc2f868dd,Namespace:kube-system,Attempt:0,}" Jul 2 07:51:56.307438 env[1228]: time="2024-07-02T07:51:56.307282680Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-5dd5756b68-wqrbm,Uid:093686ea-bbf5-45d6-a6f6-df501bc39fb5,Namespace:kube-system,Attempt:0,}" Jul 2 07:51:57.840857 systemd-networkd[1031]: cilium_host: Link UP Jul 2 07:51:57.849597 systemd-networkd[1031]: cilium_net: Link UP Jul 2 07:51:57.859537 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready Jul 2 07:51:57.868448 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Jul 2 07:51:57.874590 systemd-networkd[1031]: cilium_net: Gained carrier Jul 2 07:51:57.875661 systemd-networkd[1031]: cilium_host: Gained carrier Jul 2 07:51:57.913642 systemd-networkd[1031]: cilium_net: Gained IPv6LL Jul 2 07:51:58.018392 systemd-networkd[1031]: cilium_vxlan: Link UP Jul 2 07:51:58.018407 systemd-networkd[1031]: cilium_vxlan: Gained carrier Jul 2 07:51:58.297455 kernel: NET: Registered PF_ALG protocol family Jul 2 07:51:58.907112 systemd-networkd[1031]: cilium_host: Gained IPv6LL Jul 2 07:51:59.113542 systemd-networkd[1031]: lxc_health: Link UP Jul 2 07:51:59.133476 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Jul 2 07:51:59.134849 systemd-networkd[1031]: lxc_health: Gained carrier Jul 2 07:51:59.380032 systemd-networkd[1031]: lxc15b76bf0444d: Link UP Jul 2 07:51:59.392552 kernel: eth0: renamed from tmpfeefb Jul 2 07:51:59.410473 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc15b76bf0444d: link becomes ready Jul 2 07:51:59.413022 systemd-networkd[1031]: lxc15b76bf0444d: Gained carrier Jul 2 07:51:59.420952 systemd-networkd[1031]: cilium_vxlan: Gained IPv6LL Jul 2 07:51:59.573760 systemd-networkd[1031]: lxc74feeb85c175: Link UP Jul 2 07:51:59.586467 kernel: eth0: renamed from tmp600f6 Jul 2 07:51:59.597459 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc74feeb85c175: link becomes ready Jul 2 07:51:59.598663 systemd-networkd[1031]: lxc74feeb85c175: Gained carrier Jul 2 07:52:00.250582 systemd-networkd[1031]: lxc_health: Gained IPv6LL Jul 2 07:52:00.698587 systemd-networkd[1031]: lxc15b76bf0444d: 
Gained IPv6LL Jul 2 07:52:00.795615 kubelet[2153]: I0702 07:52:00.795560 2153 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-rpbbc" podStartSLOduration=12.104195344 podCreationTimestamp="2024-07-02 07:51:40 +0000 UTC" firstStartedPulling="2024-07-02 07:51:40.988367478 +0000 UTC m=+14.131562439" lastFinishedPulling="2024-07-02 07:51:49.679660973 +0000 UTC m=+22.822855932" observedRunningTime="2024-07-02 07:51:56.352187324 +0000 UTC m=+29.495382291" watchObservedRunningTime="2024-07-02 07:52:00.795488837 +0000 UTC m=+33.938683807" Jul 2 07:52:01.594601 systemd-networkd[1031]: lxc74feeb85c175: Gained IPv6LL Jul 2 07:52:04.527457 env[1228]: time="2024-07-02T07:52:04.525663933Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 07:52:04.527457 env[1228]: time="2024-07-02T07:52:04.525759384Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 07:52:04.527457 env[1228]: time="2024-07-02T07:52:04.525800295Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 07:52:04.527457 env[1228]: time="2024-07-02T07:52:04.526013818Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/feefbaca19454d8294dd31ca47159a3d2bd0192a3f3e150c4f808e20b826e70c pid=3331 runtime=io.containerd.runc.v2 Jul 2 07:52:04.547339 env[1228]: time="2024-07-02T07:52:04.531140168Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 07:52:04.547339 env[1228]: time="2024-07-02T07:52:04.531236858Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 07:52:04.547339 env[1228]: time="2024-07-02T07:52:04.531308077Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 07:52:04.547339 env[1228]: time="2024-07-02T07:52:04.537658035Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/600f6c36ba210f58565027a218d2b9fbaea170cc13e949d1cdb022436dc3792e pid=3342 runtime=io.containerd.runc.v2 Jul 2 07:52:04.572600 systemd[1]: Started cri-containerd-feefbaca19454d8294dd31ca47159a3d2bd0192a3f3e150c4f808e20b826e70c.scope. Jul 2 07:52:04.603580 systemd[1]: Started cri-containerd-600f6c36ba210f58565027a218d2b9fbaea170cc13e949d1cdb022436dc3792e.scope. Jul 2 07:52:04.607751 systemd[1]: run-containerd-runc-k8s.io-600f6c36ba210f58565027a218d2b9fbaea170cc13e949d1cdb022436dc3792e-runc.Lj8O0p.mount: Deactivated successfully. Jul 2 07:52:04.712015 env[1228]: time="2024-07-02T07:52:04.711948614Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-wqrbm,Uid:093686ea-bbf5-45d6-a6f6-df501bc39fb5,Namespace:kube-system,Attempt:0,} returns sandbox id \"feefbaca19454d8294dd31ca47159a3d2bd0192a3f3e150c4f808e20b826e70c\"" Jul 2 07:52:04.716474 env[1228]: time="2024-07-02T07:52:04.716403181Z" level=info msg="CreateContainer within sandbox \"feefbaca19454d8294dd31ca47159a3d2bd0192a3f3e150c4f808e20b826e70c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 2 07:52:04.738693 env[1228]: time="2024-07-02T07:52:04.738628848Z" level=info msg="CreateContainer within sandbox \"feefbaca19454d8294dd31ca47159a3d2bd0192a3f3e150c4f808e20b826e70c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"89960267fbd3ec9dd01d38da6fe972cb5842bdf6f580eca89417c3f05614af8d\"" Jul 2 07:52:04.739677 env[1228]: time="2024-07-02T07:52:04.739630372Z" level=info msg="StartContainer for 
\"89960267fbd3ec9dd01d38da6fe972cb5842bdf6f580eca89417c3f05614af8d\"" Jul 2 07:52:04.774915 systemd[1]: Started cri-containerd-89960267fbd3ec9dd01d38da6fe972cb5842bdf6f580eca89417c3f05614af8d.scope. Jul 2 07:52:04.803284 env[1228]: time="2024-07-02T07:52:04.803223722Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-lbkst,Uid:3bf4d8de-4e1e-4e27-add9-80edc2f868dd,Namespace:kube-system,Attempt:0,} returns sandbox id \"600f6c36ba210f58565027a218d2b9fbaea170cc13e949d1cdb022436dc3792e\"" Jul 2 07:52:04.811281 env[1228]: time="2024-07-02T07:52:04.811220518Z" level=info msg="CreateContainer within sandbox \"600f6c36ba210f58565027a218d2b9fbaea170cc13e949d1cdb022436dc3792e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 2 07:52:04.835254 env[1228]: time="2024-07-02T07:52:04.835202837Z" level=info msg="CreateContainer within sandbox \"600f6c36ba210f58565027a218d2b9fbaea170cc13e949d1cdb022436dc3792e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"32e1eb967cf4fd246fe7fb19a96656cb3cf5b47ef76f2d9244f4f971f5651532\"" Jul 2 07:52:04.839862 env[1228]: time="2024-07-02T07:52:04.839823232Z" level=info msg="StartContainer for \"32e1eb967cf4fd246fe7fb19a96656cb3cf5b47ef76f2d9244f4f971f5651532\"" Jul 2 07:52:04.863646 env[1228]: time="2024-07-02T07:52:04.863591367Z" level=info msg="StartContainer for \"89960267fbd3ec9dd01d38da6fe972cb5842bdf6f580eca89417c3f05614af8d\" returns successfully" Jul 2 07:52:04.880466 systemd[1]: Started cri-containerd-32e1eb967cf4fd246fe7fb19a96656cb3cf5b47ef76f2d9244f4f971f5651532.scope. 
Jul 2 07:52:04.941808 env[1228]: time="2024-07-02T07:52:04.941751371Z" level=info msg="StartContainer for \"32e1eb967cf4fd246fe7fb19a96656cb3cf5b47ef76f2d9244f4f971f5651532\" returns successfully" Jul 2 07:52:05.355174 kubelet[2153]: I0702 07:52:05.355137 2153 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-lbkst" podStartSLOduration=25.355089411 podCreationTimestamp="2024-07-02 07:51:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 07:52:05.354725987 +0000 UTC m=+38.497920951" watchObservedRunningTime="2024-07-02 07:52:05.355089411 +0000 UTC m=+38.498284379" Jul 2 07:52:05.373412 kubelet[2153]: I0702 07:52:05.373352 2153 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-wqrbm" podStartSLOduration=25.373276946 podCreationTimestamp="2024-07-02 07:51:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 07:52:05.37291658 +0000 UTC m=+38.516111548" watchObservedRunningTime="2024-07-02 07:52:05.373276946 +0000 UTC m=+38.516471913" Jul 2 07:52:18.972678 systemd[1]: Started sshd@11-10.128.0.103:22-213.109.202.127:60144.service. Jul 2 07:52:20.102367 sshd[3491]: Invalid user butter from 213.109.202.127 port 60144 Jul 2 07:52:20.255614 sshd[3491]: Failed password for invalid user butter from 213.109.202.127 port 60144 ssh2 Jul 2 07:52:20.415683 sshd[3491]: Connection closed by invalid user butter 213.109.202.127 port 60144 [preauth] Jul 2 07:52:20.417458 systemd[1]: sshd@11-10.128.0.103:22-213.109.202.127:60144.service: Deactivated successfully. Jul 2 07:52:23.834110 systemd[1]: Started sshd@12-10.128.0.103:22-147.75.109.163:40746.service. 
Jul 2 07:52:24.128602 sshd[3495]: Accepted publickey for core from 147.75.109.163 port 40746 ssh2: RSA SHA256:GSxC+U3gD/L2tgNRotlYHTLXvYsmaWMokGyA5lBCl2s Jul 2 07:52:24.130692 sshd[3495]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:52:24.137976 systemd[1]: Started session-8.scope. Jul 2 07:52:24.139468 systemd-logind[1248]: New session 8 of user core. Jul 2 07:52:24.419646 sshd[3495]: pam_unix(sshd:session): session closed for user core Jul 2 07:52:24.424494 systemd[1]: sshd@12-10.128.0.103:22-147.75.109.163:40746.service: Deactivated successfully. Jul 2 07:52:24.425698 systemd[1]: session-8.scope: Deactivated successfully. Jul 2 07:52:24.426629 systemd-logind[1248]: Session 8 logged out. Waiting for processes to exit. Jul 2 07:52:24.427981 systemd-logind[1248]: Removed session 8. Jul 2 07:52:29.467038 systemd[1]: Started sshd@13-10.128.0.103:22-147.75.109.163:40750.service. Jul 2 07:52:29.758510 sshd[3511]: Accepted publickey for core from 147.75.109.163 port 40750 ssh2: RSA SHA256:GSxC+U3gD/L2tgNRotlYHTLXvYsmaWMokGyA5lBCl2s Jul 2 07:52:29.761916 sshd[3511]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:52:29.769150 systemd[1]: Started session-9.scope. Jul 2 07:52:29.769728 systemd-logind[1248]: New session 9 of user core. Jul 2 07:52:30.041347 sshd[3511]: pam_unix(sshd:session): session closed for user core Jul 2 07:52:30.046260 systemd[1]: sshd@13-10.128.0.103:22-147.75.109.163:40750.service: Deactivated successfully. Jul 2 07:52:30.047402 systemd[1]: session-9.scope: Deactivated successfully. Jul 2 07:52:30.048543 systemd-logind[1248]: Session 9 logged out. Waiting for processes to exit. Jul 2 07:52:30.049787 systemd-logind[1248]: Removed session 9. Jul 2 07:52:35.089114 systemd[1]: Started sshd@14-10.128.0.103:22-147.75.109.163:53986.service. 
Jul 2 07:52:35.379258 sshd[3525]: Accepted publickey for core from 147.75.109.163 port 53986 ssh2: RSA SHA256:GSxC+U3gD/L2tgNRotlYHTLXvYsmaWMokGyA5lBCl2s Jul 2 07:52:35.381342 sshd[3525]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:52:35.388471 systemd[1]: Started session-10.scope. Jul 2 07:52:35.389242 systemd-logind[1248]: New session 10 of user core. Jul 2 07:52:35.656974 sshd[3525]: pam_unix(sshd:session): session closed for user core Jul 2 07:52:35.661586 systemd[1]: sshd@14-10.128.0.103:22-147.75.109.163:53986.service: Deactivated successfully. Jul 2 07:52:35.662815 systemd[1]: session-10.scope: Deactivated successfully. Jul 2 07:52:35.663910 systemd-logind[1248]: Session 10 logged out. Waiting for processes to exit. Jul 2 07:52:35.665275 systemd-logind[1248]: Removed session 10. Jul 2 07:52:40.704570 systemd[1]: Started sshd@15-10.128.0.103:22-147.75.109.163:53988.service. Jul 2 07:52:41.002814 sshd[3537]: Accepted publickey for core from 147.75.109.163 port 53988 ssh2: RSA SHA256:GSxC+U3gD/L2tgNRotlYHTLXvYsmaWMokGyA5lBCl2s Jul 2 07:52:41.005188 sshd[3537]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:52:41.012061 systemd[1]: Started session-11.scope. Jul 2 07:52:41.012616 systemd-logind[1248]: New session 11 of user core. Jul 2 07:52:41.288167 sshd[3537]: pam_unix(sshd:session): session closed for user core Jul 2 07:52:41.292663 systemd[1]: sshd@15-10.128.0.103:22-147.75.109.163:53988.service: Deactivated successfully. Jul 2 07:52:41.293903 systemd[1]: session-11.scope: Deactivated successfully. Jul 2 07:52:41.294754 systemd-logind[1248]: Session 11 logged out. Waiting for processes to exit. Jul 2 07:52:41.295959 systemd-logind[1248]: Removed session 11. Jul 2 07:52:41.335294 systemd[1]: Started sshd@16-10.128.0.103:22-147.75.109.163:53996.service. 
Jul 2 07:52:41.629225 sshd[3552]: Accepted publickey for core from 147.75.109.163 port 53996 ssh2: RSA SHA256:GSxC+U3gD/L2tgNRotlYHTLXvYsmaWMokGyA5lBCl2s Jul 2 07:52:41.631257 sshd[3552]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:52:41.638638 systemd[1]: Started session-12.scope. Jul 2 07:52:41.639172 systemd-logind[1248]: New session 12 of user core. Jul 2 07:52:42.763564 sshd[3552]: pam_unix(sshd:session): session closed for user core Jul 2 07:52:42.768146 systemd[1]: sshd@16-10.128.0.103:22-147.75.109.163:53996.service: Deactivated successfully. Jul 2 07:52:42.769350 systemd[1]: session-12.scope: Deactivated successfully. Jul 2 07:52:42.770343 systemd-logind[1248]: Session 12 logged out. Waiting for processes to exit. Jul 2 07:52:42.771621 systemd-logind[1248]: Removed session 12. Jul 2 07:52:42.809433 systemd[1]: Started sshd@17-10.128.0.103:22-147.75.109.163:46184.service. Jul 2 07:52:43.100453 sshd[3562]: Accepted publickey for core from 147.75.109.163 port 46184 ssh2: RSA SHA256:GSxC+U3gD/L2tgNRotlYHTLXvYsmaWMokGyA5lBCl2s Jul 2 07:52:43.101990 sshd[3562]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:52:43.108543 systemd-logind[1248]: New session 13 of user core. Jul 2 07:52:43.109042 systemd[1]: Started session-13.scope. Jul 2 07:52:43.388604 sshd[3562]: pam_unix(sshd:session): session closed for user core Jul 2 07:52:43.393016 systemd[1]: sshd@17-10.128.0.103:22-147.75.109.163:46184.service: Deactivated successfully. Jul 2 07:52:43.394222 systemd[1]: session-13.scope: Deactivated successfully. Jul 2 07:52:43.395248 systemd-logind[1248]: Session 13 logged out. Waiting for processes to exit. Jul 2 07:52:43.396602 systemd-logind[1248]: Removed session 13. Jul 2 07:52:47.248309 systemd[1]: sshd@8-10.128.0.103:22-182.43.235.218:55114.service: Deactivated successfully. Jul 2 07:52:48.435912 systemd[1]: Started sshd@18-10.128.0.103:22-147.75.109.163:46186.service. 
Jul 2 07:52:48.727280 sshd[3576]: Accepted publickey for core from 147.75.109.163 port 46186 ssh2: RSA SHA256:GSxC+U3gD/L2tgNRotlYHTLXvYsmaWMokGyA5lBCl2s Jul 2 07:52:48.729413 sshd[3576]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:52:48.736520 systemd-logind[1248]: New session 14 of user core. Jul 2 07:52:48.736566 systemd[1]: Started session-14.scope. Jul 2 07:52:49.011547 sshd[3576]: pam_unix(sshd:session): session closed for user core Jul 2 07:52:49.016237 systemd[1]: sshd@18-10.128.0.103:22-147.75.109.163:46186.service: Deactivated successfully. Jul 2 07:52:49.017471 systemd[1]: session-14.scope: Deactivated successfully. Jul 2 07:52:49.018364 systemd-logind[1248]: Session 14 logged out. Waiting for processes to exit. Jul 2 07:52:49.020003 systemd-logind[1248]: Removed session 14. Jul 2 07:52:54.060483 systemd[1]: Started sshd@19-10.128.0.103:22-147.75.109.163:47090.service. Jul 2 07:52:54.351886 sshd[3588]: Accepted publickey for core from 147.75.109.163 port 47090 ssh2: RSA SHA256:GSxC+U3gD/L2tgNRotlYHTLXvYsmaWMokGyA5lBCl2s Jul 2 07:52:54.353953 sshd[3588]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:52:54.361048 systemd[1]: Started session-15.scope. Jul 2 07:52:54.361691 systemd-logind[1248]: New session 15 of user core. Jul 2 07:52:54.644999 sshd[3588]: pam_unix(sshd:session): session closed for user core Jul 2 07:52:54.649491 systemd[1]: sshd@19-10.128.0.103:22-147.75.109.163:47090.service: Deactivated successfully. Jul 2 07:52:54.650698 systemd[1]: session-15.scope: Deactivated successfully. Jul 2 07:52:54.651717 systemd-logind[1248]: Session 15 logged out. Waiting for processes to exit. Jul 2 07:52:54.653148 systemd-logind[1248]: Removed session 15. Jul 2 07:52:54.692016 systemd[1]: Started sshd@20-10.128.0.103:22-147.75.109.163:47096.service. 
Jul 2 07:52:54.984060 sshd[3600]: Accepted publickey for core from 147.75.109.163 port 47096 ssh2: RSA SHA256:GSxC+U3gD/L2tgNRotlYHTLXvYsmaWMokGyA5lBCl2s Jul 2 07:52:54.985587 sshd[3600]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:52:54.993465 systemd[1]: Started session-16.scope. Jul 2 07:52:54.994309 systemd-logind[1248]: New session 16 of user core. Jul 2 07:52:55.342072 sshd[3600]: pam_unix(sshd:session): session closed for user core Jul 2 07:52:55.346807 systemd-logind[1248]: Session 16 logged out. Waiting for processes to exit. Jul 2 07:52:55.347089 systemd[1]: sshd@20-10.128.0.103:22-147.75.109.163:47096.service: Deactivated successfully. Jul 2 07:52:55.348296 systemd[1]: session-16.scope: Deactivated successfully. Jul 2 07:52:55.349558 systemd-logind[1248]: Removed session 16. Jul 2 07:52:55.388268 systemd[1]: Started sshd@21-10.128.0.103:22-147.75.109.163:47112.service. Jul 2 07:52:55.679550 sshd[3609]: Accepted publickey for core from 147.75.109.163 port 47112 ssh2: RSA SHA256:GSxC+U3gD/L2tgNRotlYHTLXvYsmaWMokGyA5lBCl2s Jul 2 07:52:55.681524 sshd[3609]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:52:55.688611 systemd[1]: Started session-17.scope. Jul 2 07:52:55.689445 systemd-logind[1248]: New session 17 of user core. Jul 2 07:52:56.655996 sshd[3609]: pam_unix(sshd:session): session closed for user core Jul 2 07:52:56.661232 systemd-logind[1248]: Session 17 logged out. Waiting for processes to exit. Jul 2 07:52:56.661528 systemd[1]: sshd@21-10.128.0.103:22-147.75.109.163:47112.service: Deactivated successfully. Jul 2 07:52:56.662816 systemd[1]: session-17.scope: Deactivated successfully. Jul 2 07:52:56.664103 systemd-logind[1248]: Removed session 17. Jul 2 07:52:56.702894 systemd[1]: Started sshd@22-10.128.0.103:22-147.75.109.163:47124.service. 
Jul 2 07:52:56.993345 sshd[3627]: Accepted publickey for core from 147.75.109.163 port 47124 ssh2: RSA SHA256:GSxC+U3gD/L2tgNRotlYHTLXvYsmaWMokGyA5lBCl2s Jul 2 07:52:56.995619 sshd[3627]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:52:57.003198 systemd-logind[1248]: New session 18 of user core. Jul 2 07:52:57.004004 systemd[1]: Started session-18.scope. Jul 2 07:52:57.515890 sshd[3627]: pam_unix(sshd:session): session closed for user core Jul 2 07:52:57.519780 systemd[1]: sshd@22-10.128.0.103:22-147.75.109.163:47124.service: Deactivated successfully. Jul 2 07:52:57.521362 systemd[1]: session-18.scope: Deactivated successfully. Jul 2 07:52:57.521481 systemd-logind[1248]: Session 18 logged out. Waiting for processes to exit. Jul 2 07:52:57.523260 systemd-logind[1248]: Removed session 18. Jul 2 07:52:57.563024 systemd[1]: Started sshd@23-10.128.0.103:22-147.75.109.163:47132.service. Jul 2 07:52:57.866250 sshd[3637]: Accepted publickey for core from 147.75.109.163 port 47132 ssh2: RSA SHA256:GSxC+U3gD/L2tgNRotlYHTLXvYsmaWMokGyA5lBCl2s Jul 2 07:52:57.868229 sshd[3637]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:52:57.875009 systemd[1]: Started session-19.scope. Jul 2 07:52:57.875666 systemd-logind[1248]: New session 19 of user core. Jul 2 07:52:58.154651 sshd[3637]: pam_unix(sshd:session): session closed for user core Jul 2 07:52:58.159832 systemd[1]: sshd@23-10.128.0.103:22-147.75.109.163:47132.service: Deactivated successfully. Jul 2 07:52:58.161049 systemd[1]: session-19.scope: Deactivated successfully. Jul 2 07:52:58.162150 systemd-logind[1248]: Session 19 logged out. Waiting for processes to exit. Jul 2 07:52:58.163537 systemd-logind[1248]: Removed session 19. Jul 2 07:53:03.200606 systemd[1]: Started sshd@24-10.128.0.103:22-147.75.109.163:43940.service. 
Jul 2 07:53:03.496322 sshd[3652]: Accepted publickey for core from 147.75.109.163 port 43940 ssh2: RSA SHA256:GSxC+U3gD/L2tgNRotlYHTLXvYsmaWMokGyA5lBCl2s Jul 2 07:53:03.498314 sshd[3652]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:53:03.505466 systemd[1]: Started session-20.scope. Jul 2 07:53:03.506067 systemd-logind[1248]: New session 20 of user core. Jul 2 07:53:03.776162 sshd[3652]: pam_unix(sshd:session): session closed for user core Jul 2 07:53:03.781146 systemd[1]: sshd@24-10.128.0.103:22-147.75.109.163:43940.service: Deactivated successfully. Jul 2 07:53:03.782290 systemd[1]: session-20.scope: Deactivated successfully. Jul 2 07:53:03.783241 systemd-logind[1248]: Session 20 logged out. Waiting for processes to exit. Jul 2 07:53:03.785155 systemd-logind[1248]: Removed session 20. Jul 2 07:53:08.823883 systemd[1]: Started sshd@25-10.128.0.103:22-147.75.109.163:43944.service. Jul 2 07:53:09.116713 sshd[3666]: Accepted publickey for core from 147.75.109.163 port 43944 ssh2: RSA SHA256:GSxC+U3gD/L2tgNRotlYHTLXvYsmaWMokGyA5lBCl2s Jul 2 07:53:09.118921 sshd[3666]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:53:09.126702 systemd[1]: Started session-21.scope. Jul 2 07:53:09.127449 systemd-logind[1248]: New session 21 of user core. Jul 2 07:53:09.399409 sshd[3666]: pam_unix(sshd:session): session closed for user core Jul 2 07:53:09.404130 systemd[1]: sshd@25-10.128.0.103:22-147.75.109.163:43944.service: Deactivated successfully. Jul 2 07:53:09.405353 systemd[1]: session-21.scope: Deactivated successfully. Jul 2 07:53:09.406279 systemd-logind[1248]: Session 21 logged out. Waiting for processes to exit. Jul 2 07:53:09.407760 systemd-logind[1248]: Removed session 21. Jul 2 07:53:14.446932 systemd[1]: Started sshd@26-10.128.0.103:22-147.75.109.163:52166.service. 
Jul 2 07:53:14.740274 sshd[3681]: Accepted publickey for core from 147.75.109.163 port 52166 ssh2: RSA SHA256:GSxC+U3gD/L2tgNRotlYHTLXvYsmaWMokGyA5lBCl2s Jul 2 07:53:14.742434 sshd[3681]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:53:14.749103 systemd-logind[1248]: New session 22 of user core. Jul 2 07:53:14.749626 systemd[1]: Started session-22.scope. Jul 2 07:53:15.031053 sshd[3681]: pam_unix(sshd:session): session closed for user core Jul 2 07:53:15.035887 systemd[1]: sshd@26-10.128.0.103:22-147.75.109.163:52166.service: Deactivated successfully. Jul 2 07:53:15.037152 systemd[1]: session-22.scope: Deactivated successfully. Jul 2 07:53:15.038243 systemd-logind[1248]: Session 22 logged out. Waiting for processes to exit. Jul 2 07:53:15.039551 systemd-logind[1248]: Removed session 22. Jul 2 07:53:15.078798 systemd[1]: Started sshd@27-10.128.0.103:22-147.75.109.163:52172.service. Jul 2 07:53:15.377750 sshd[3693]: Accepted publickey for core from 147.75.109.163 port 52172 ssh2: RSA SHA256:GSxC+U3gD/L2tgNRotlYHTLXvYsmaWMokGyA5lBCl2s Jul 2 07:53:15.380115 sshd[3693]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:53:15.387376 systemd[1]: Started session-23.scope. Jul 2 07:53:15.388019 systemd-logind[1248]: New session 23 of user core. Jul 2 07:53:17.550678 env[1228]: time="2024-07-02T07:53:17.550620091Z" level=info msg="StopContainer for \"e0736a60056a1a813017962d641f643a2a73620f39c5015dd1d9220a8ea5006c\" with timeout 30 (s)" Jul 2 07:53:17.553915 env[1228]: time="2024-07-02T07:53:17.553829740Z" level=info msg="Stop container \"e0736a60056a1a813017962d641f643a2a73620f39c5015dd1d9220a8ea5006c\" with signal terminated" Jul 2 07:53:17.565495 systemd[1]: run-containerd-runc-k8s.io-66276d93f2a81d823152c45b8521f430fdf87dd82cf9a4a836418eedd9177439-runc.CxN4x0.mount: Deactivated successfully. 
Jul 2 07:53:17.626196 env[1228]: time="2024-07-02T07:53:17.626120846Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 2 07:53:17.628730 systemd[1]: cri-containerd-e0736a60056a1a813017962d641f643a2a73620f39c5015dd1d9220a8ea5006c.scope: Deactivated successfully. Jul 2 07:53:17.645586 env[1228]: time="2024-07-02T07:53:17.645535447Z" level=info msg="StopContainer for \"66276d93f2a81d823152c45b8521f430fdf87dd82cf9a4a836418eedd9177439\" with timeout 2 (s)" Jul 2 07:53:17.646415 env[1228]: time="2024-07-02T07:53:17.646377544Z" level=info msg="Stop container \"66276d93f2a81d823152c45b8521f430fdf87dd82cf9a4a836418eedd9177439\" with signal terminated" Jul 2 07:53:17.656997 systemd-networkd[1031]: lxc_health: Link DOWN Jul 2 07:53:17.657009 systemd-networkd[1031]: lxc_health: Lost carrier Jul 2 07:53:17.666166 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e0736a60056a1a813017962d641f643a2a73620f39c5015dd1d9220a8ea5006c-rootfs.mount: Deactivated successfully. Jul 2 07:53:17.683893 systemd[1]: cri-containerd-66276d93f2a81d823152c45b8521f430fdf87dd82cf9a4a836418eedd9177439.scope: Deactivated successfully. Jul 2 07:53:17.687292 systemd[1]: cri-containerd-66276d93f2a81d823152c45b8521f430fdf87dd82cf9a4a836418eedd9177439.scope: Consumed 9.543s CPU time. 
Jul 2 07:53:17.701079 env[1228]: time="2024-07-02T07:53:17.701018459Z" level=info msg="shim disconnected" id=e0736a60056a1a813017962d641f643a2a73620f39c5015dd1d9220a8ea5006c Jul 2 07:53:17.701406 env[1228]: time="2024-07-02T07:53:17.701369764Z" level=warning msg="cleaning up after shim disconnected" id=e0736a60056a1a813017962d641f643a2a73620f39c5015dd1d9220a8ea5006c namespace=k8s.io Jul 2 07:53:17.701406 env[1228]: time="2024-07-02T07:53:17.701401313Z" level=info msg="cleaning up dead shim" Jul 2 07:53:17.718928 env[1228]: time="2024-07-02T07:53:17.718858925Z" level=warning msg="cleanup warnings time=\"2024-07-02T07:53:17Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3755 runtime=io.containerd.runc.v2\n" Jul 2 07:53:17.721305 env[1228]: time="2024-07-02T07:53:17.721262857Z" level=info msg="StopContainer for \"e0736a60056a1a813017962d641f643a2a73620f39c5015dd1d9220a8ea5006c\" returns successfully" Jul 2 07:53:17.722149 env[1228]: time="2024-07-02T07:53:17.722112395Z" level=info msg="StopPodSandbox for \"3f6285e0ade86905e65e89e4fe0943e0cda5d2c63b6c482181d3dea41340412b\"" Jul 2 07:53:17.722281 env[1228]: time="2024-07-02T07:53:17.722194604Z" level=info msg="Container to stop \"e0736a60056a1a813017962d641f643a2a73620f39c5015dd1d9220a8ea5006c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 2 07:53:17.725413 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3f6285e0ade86905e65e89e4fe0943e0cda5d2c63b6c482181d3dea41340412b-shm.mount: Deactivated successfully. Jul 2 07:53:17.731096 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-66276d93f2a81d823152c45b8521f430fdf87dd82cf9a4a836418eedd9177439-rootfs.mount: Deactivated successfully. Jul 2 07:53:17.742803 systemd[1]: cri-containerd-3f6285e0ade86905e65e89e4fe0943e0cda5d2c63b6c482181d3dea41340412b.scope: Deactivated successfully. 
Jul 2 07:53:17.744944 env[1228]: time="2024-07-02T07:53:17.744886443Z" level=info msg="shim disconnected" id=66276d93f2a81d823152c45b8521f430fdf87dd82cf9a4a836418eedd9177439 Jul 2 07:53:17.745103 env[1228]: time="2024-07-02T07:53:17.744947969Z" level=warning msg="cleaning up after shim disconnected" id=66276d93f2a81d823152c45b8521f430fdf87dd82cf9a4a836418eedd9177439 namespace=k8s.io Jul 2 07:53:17.745103 env[1228]: time="2024-07-02T07:53:17.744963412Z" level=info msg="cleaning up dead shim" Jul 2 07:53:17.760712 env[1228]: time="2024-07-02T07:53:17.760647061Z" level=warning msg="cleanup warnings time=\"2024-07-02T07:53:17Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3782 runtime=io.containerd.runc.v2\n" Jul 2 07:53:17.763168 env[1228]: time="2024-07-02T07:53:17.763113823Z" level=info msg="StopContainer for \"66276d93f2a81d823152c45b8521f430fdf87dd82cf9a4a836418eedd9177439\" returns successfully" Jul 2 07:53:17.763732 env[1228]: time="2024-07-02T07:53:17.763692788Z" level=info msg="StopPodSandbox for \"bccec4029e688fae299f2e66dfc1ae00e878c7d33ea560aa71e7c9988c996d4d\"" Jul 2 07:53:17.763841 env[1228]: time="2024-07-02T07:53:17.763779502Z" level=info msg="Container to stop \"8a3c2f326dfcaa3ac9faa042ac3ed33d2f4cc735b5ff075f93c6a3e2a2871208\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 2 07:53:17.763909 env[1228]: time="2024-07-02T07:53:17.763804999Z" level=info msg="Container to stop \"64d1269806203dd5f030853131dc5a3ecb419436513ae906028d2af602c78859\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 2 07:53:17.763972 env[1228]: time="2024-07-02T07:53:17.763913291Z" level=info msg="Container to stop \"195a6a0e58e4943f2b73b209f9bbb6143800badafc5ce4c2f7f4bed7f91b15bf\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 2 07:53:17.763972 env[1228]: time="2024-07-02T07:53:17.763951705Z" level=info msg="Container to stop 
\"454e8c5fa2f92af41481e123ba0defc42fa1997fefa8e95207500fc159b2ada9\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 2 07:53:17.764243 env[1228]: time="2024-07-02T07:53:17.763975559Z" level=info msg="Container to stop \"66276d93f2a81d823152c45b8521f430fdf87dd82cf9a4a836418eedd9177439\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 2 07:53:17.776396 systemd[1]: cri-containerd-bccec4029e688fae299f2e66dfc1ae00e878c7d33ea560aa71e7c9988c996d4d.scope: Deactivated successfully. Jul 2 07:53:17.801023 env[1228]: time="2024-07-02T07:53:17.800858134Z" level=info msg="shim disconnected" id=3f6285e0ade86905e65e89e4fe0943e0cda5d2c63b6c482181d3dea41340412b Jul 2 07:53:17.801404 env[1228]: time="2024-07-02T07:53:17.801372851Z" level=warning msg="cleaning up after shim disconnected" id=3f6285e0ade86905e65e89e4fe0943e0cda5d2c63b6c482181d3dea41340412b namespace=k8s.io Jul 2 07:53:17.802062 env[1228]: time="2024-07-02T07:53:17.802035519Z" level=info msg="cleaning up dead shim" Jul 2 07:53:17.818195 env[1228]: time="2024-07-02T07:53:17.818108373Z" level=info msg="shim disconnected" id=bccec4029e688fae299f2e66dfc1ae00e878c7d33ea560aa71e7c9988c996d4d Jul 2 07:53:17.819189 env[1228]: time="2024-07-02T07:53:17.819156030Z" level=warning msg="cleaning up after shim disconnected" id=bccec4029e688fae299f2e66dfc1ae00e878c7d33ea560aa71e7c9988c996d4d namespace=k8s.io Jul 2 07:53:17.819395 env[1228]: time="2024-07-02T07:53:17.819370346Z" level=info msg="cleaning up dead shim" Jul 2 07:53:17.819724 env[1228]: time="2024-07-02T07:53:17.819578870Z" level=warning msg="cleanup warnings time=\"2024-07-02T07:53:17Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3822 runtime=io.containerd.runc.v2\n" Jul 2 07:53:17.820143 env[1228]: time="2024-07-02T07:53:17.820102897Z" level=info msg="TearDown network for sandbox \"3f6285e0ade86905e65e89e4fe0943e0cda5d2c63b6c482181d3dea41340412b\" successfully" Jul 2 07:53:17.820242 env[1228]: 
time="2024-07-02T07:53:17.820142351Z" level=info msg="StopPodSandbox for \"3f6285e0ade86905e65e89e4fe0943e0cda5d2c63b6c482181d3dea41340412b\" returns successfully" Jul 2 07:53:17.837659 env[1228]: time="2024-07-02T07:53:17.837619103Z" level=warning msg="cleanup warnings time=\"2024-07-02T07:53:17Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3839 runtime=io.containerd.runc.v2\n" Jul 2 07:53:17.840219 env[1228]: time="2024-07-02T07:53:17.840179270Z" level=info msg="TearDown network for sandbox \"bccec4029e688fae299f2e66dfc1ae00e878c7d33ea560aa71e7c9988c996d4d\" successfully" Jul 2 07:53:17.840359 env[1228]: time="2024-07-02T07:53:17.840217927Z" level=info msg="StopPodSandbox for \"bccec4029e688fae299f2e66dfc1ae00e878c7d33ea560aa71e7c9988c996d4d\" returns successfully" Jul 2 07:53:17.882993 kubelet[2153]: I0702 07:53:17.882949 2153 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4df866f6-ac33-4935-b6ea-7f3926fb754d-hostproc\") pod \"4df866f6-ac33-4935-b6ea-7f3926fb754d\" (UID: \"4df866f6-ac33-4935-b6ea-7f3926fb754d\") " Jul 2 07:53:17.883798 kubelet[2153]: I0702 07:53:17.883762 2153 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4df866f6-ac33-4935-b6ea-7f3926fb754d-cilium-run\") pod \"4df866f6-ac33-4935-b6ea-7f3926fb754d\" (UID: \"4df866f6-ac33-4935-b6ea-7f3926fb754d\") " Jul 2 07:53:17.883927 kubelet[2153]: I0702 07:53:17.883820 2153 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4df866f6-ac33-4935-b6ea-7f3926fb754d-xtables-lock\") pod \"4df866f6-ac33-4935-b6ea-7f3926fb754d\" (UID: \"4df866f6-ac33-4935-b6ea-7f3926fb754d\") " Jul 2 07:53:17.883927 kubelet[2153]: I0702 07:53:17.883868 2153 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/f6cb6dc5-21af-476d-a64b-55e4c7bb9dbc-cilium-config-path\") pod \"f6cb6dc5-21af-476d-a64b-55e4c7bb9dbc\" (UID: \"f6cb6dc5-21af-476d-a64b-55e4c7bb9dbc\") " Jul 2 07:53:17.883927 kubelet[2153]: I0702 07:53:17.883909 2153 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4df866f6-ac33-4935-b6ea-7f3926fb754d-clustermesh-secrets\") pod \"4df866f6-ac33-4935-b6ea-7f3926fb754d\" (UID: \"4df866f6-ac33-4935-b6ea-7f3926fb754d\") " Jul 2 07:53:17.884130 kubelet[2153]: I0702 07:53:17.883941 2153 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4df866f6-ac33-4935-b6ea-7f3926fb754d-cni-path\") pod \"4df866f6-ac33-4935-b6ea-7f3926fb754d\" (UID: \"4df866f6-ac33-4935-b6ea-7f3926fb754d\") " Jul 2 07:53:17.884130 kubelet[2153]: I0702 07:53:17.883977 2153 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4df866f6-ac33-4935-b6ea-7f3926fb754d-host-proc-sys-net\") pod \"4df866f6-ac33-4935-b6ea-7f3926fb754d\" (UID: \"4df866f6-ac33-4935-b6ea-7f3926fb754d\") " Jul 2 07:53:17.884130 kubelet[2153]: I0702 07:53:17.884014 2153 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4df866f6-ac33-4935-b6ea-7f3926fb754d-hubble-tls\") pod \"4df866f6-ac33-4935-b6ea-7f3926fb754d\" (UID: \"4df866f6-ac33-4935-b6ea-7f3926fb754d\") " Jul 2 07:53:17.884130 kubelet[2153]: I0702 07:53:17.884049 2153 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4df866f6-ac33-4935-b6ea-7f3926fb754d-cilium-cgroup\") pod \"4df866f6-ac33-4935-b6ea-7f3926fb754d\" (UID: \"4df866f6-ac33-4935-b6ea-7f3926fb754d\") " Jul 2 07:53:17.884130 kubelet[2153]: I0702 07:53:17.884087 2153 
reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7kb55\" (UniqueName: \"kubernetes.io/projected/f6cb6dc5-21af-476d-a64b-55e4c7bb9dbc-kube-api-access-7kb55\") pod \"f6cb6dc5-21af-476d-a64b-55e4c7bb9dbc\" (UID: \"f6cb6dc5-21af-476d-a64b-55e4c7bb9dbc\") " Jul 2 07:53:17.884130 kubelet[2153]: I0702 07:53:17.884123 2153 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4df866f6-ac33-4935-b6ea-7f3926fb754d-lib-modules\") pod \"4df866f6-ac33-4935-b6ea-7f3926fb754d\" (UID: \"4df866f6-ac33-4935-b6ea-7f3926fb754d\") " Jul 2 07:53:17.884565 kubelet[2153]: I0702 07:53:17.884158 2153 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4df866f6-ac33-4935-b6ea-7f3926fb754d-host-proc-sys-kernel\") pod \"4df866f6-ac33-4935-b6ea-7f3926fb754d\" (UID: \"4df866f6-ac33-4935-b6ea-7f3926fb754d\") " Jul 2 07:53:17.884565 kubelet[2153]: I0702 07:53:17.884199 2153 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4df866f6-ac33-4935-b6ea-7f3926fb754d-cilium-config-path\") pod \"4df866f6-ac33-4935-b6ea-7f3926fb754d\" (UID: \"4df866f6-ac33-4935-b6ea-7f3926fb754d\") " Jul 2 07:53:17.884565 kubelet[2153]: I0702 07:53:17.884232 2153 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4df866f6-ac33-4935-b6ea-7f3926fb754d-etc-cni-netd\") pod \"4df866f6-ac33-4935-b6ea-7f3926fb754d\" (UID: \"4df866f6-ac33-4935-b6ea-7f3926fb754d\") " Jul 2 07:53:17.884565 kubelet[2153]: I0702 07:53:17.884264 2153 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4df866f6-ac33-4935-b6ea-7f3926fb754d-bpf-maps\") pod \"4df866f6-ac33-4935-b6ea-7f3926fb754d\" 
(UID: \"4df866f6-ac33-4935-b6ea-7f3926fb754d\") " Jul 2 07:53:17.884565 kubelet[2153]: I0702 07:53:17.884302 2153 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gmzpq\" (UniqueName: \"kubernetes.io/projected/4df866f6-ac33-4935-b6ea-7f3926fb754d-kube-api-access-gmzpq\") pod \"4df866f6-ac33-4935-b6ea-7f3926fb754d\" (UID: \"4df866f6-ac33-4935-b6ea-7f3926fb754d\") " Jul 2 07:53:17.885495 kubelet[2153]: I0702 07:53:17.885454 2153 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4df866f6-ac33-4935-b6ea-7f3926fb754d-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "4df866f6-ac33-4935-b6ea-7f3926fb754d" (UID: "4df866f6-ac33-4935-b6ea-7f3926fb754d"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 07:53:17.886032 kubelet[2153]: I0702 07:53:17.883029 2153 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4df866f6-ac33-4935-b6ea-7f3926fb754d-hostproc" (OuterVolumeSpecName: "hostproc") pod "4df866f6-ac33-4935-b6ea-7f3926fb754d" (UID: "4df866f6-ac33-4935-b6ea-7f3926fb754d"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 07:53:17.886290 kubelet[2153]: I0702 07:53:17.886243 2153 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4df866f6-ac33-4935-b6ea-7f3926fb754d-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "4df866f6-ac33-4935-b6ea-7f3926fb754d" (UID: "4df866f6-ac33-4935-b6ea-7f3926fb754d"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 07:53:17.893871 kubelet[2153]: I0702 07:53:17.886469 2153 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4df866f6-ac33-4935-b6ea-7f3926fb754d-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "4df866f6-ac33-4935-b6ea-7f3926fb754d" (UID: "4df866f6-ac33-4935-b6ea-7f3926fb754d"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 07:53:17.893871 kubelet[2153]: I0702 07:53:17.889920 2153 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f6cb6dc5-21af-476d-a64b-55e4c7bb9dbc-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "f6cb6dc5-21af-476d-a64b-55e4c7bb9dbc" (UID: "f6cb6dc5-21af-476d-a64b-55e4c7bb9dbc"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 2 07:53:17.894066 kubelet[2153]: I0702 07:53:17.893802 2153 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4df866f6-ac33-4935-b6ea-7f3926fb754d-cni-path" (OuterVolumeSpecName: "cni-path") pod "4df866f6-ac33-4935-b6ea-7f3926fb754d" (UID: "4df866f6-ac33-4935-b6ea-7f3926fb754d"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 07:53:17.894066 kubelet[2153]: I0702 07:53:17.893827 2153 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4df866f6-ac33-4935-b6ea-7f3926fb754d-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "4df866f6-ac33-4935-b6ea-7f3926fb754d" (UID: "4df866f6-ac33-4935-b6ea-7f3926fb754d"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 07:53:17.894066 kubelet[2153]: I0702 07:53:17.893994 2153 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4df866f6-ac33-4935-b6ea-7f3926fb754d-kube-api-access-gmzpq" (OuterVolumeSpecName: "kube-api-access-gmzpq") pod "4df866f6-ac33-4935-b6ea-7f3926fb754d" (UID: "4df866f6-ac33-4935-b6ea-7f3926fb754d"). InnerVolumeSpecName "kube-api-access-gmzpq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 2 07:53:17.896589 kubelet[2153]: I0702 07:53:17.896555 2153 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4df866f6-ac33-4935-b6ea-7f3926fb754d-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "4df866f6-ac33-4935-b6ea-7f3926fb754d" (UID: "4df866f6-ac33-4935-b6ea-7f3926fb754d"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 07:53:17.896752 kubelet[2153]: I0702 07:53:17.896602 2153 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4df866f6-ac33-4935-b6ea-7f3926fb754d-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "4df866f6-ac33-4935-b6ea-7f3926fb754d" (UID: "4df866f6-ac33-4935-b6ea-7f3926fb754d"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 07:53:17.896752 kubelet[2153]: I0702 07:53:17.896714 2153 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4df866f6-ac33-4935-b6ea-7f3926fb754d-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "4df866f6-ac33-4935-b6ea-7f3926fb754d" (UID: "4df866f6-ac33-4935-b6ea-7f3926fb754d"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 2 07:53:17.896908 kubelet[2153]: I0702 07:53:17.896760 2153 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4df866f6-ac33-4935-b6ea-7f3926fb754d-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "4df866f6-ac33-4935-b6ea-7f3926fb754d" (UID: "4df866f6-ac33-4935-b6ea-7f3926fb754d"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 07:53:17.896908 kubelet[2153]: I0702 07:53:17.896788 2153 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4df866f6-ac33-4935-b6ea-7f3926fb754d-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "4df866f6-ac33-4935-b6ea-7f3926fb754d" (UID: "4df866f6-ac33-4935-b6ea-7f3926fb754d"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 07:53:17.897410 kubelet[2153]: I0702 07:53:17.897382 2153 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4df866f6-ac33-4935-b6ea-7f3926fb754d-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "4df866f6-ac33-4935-b6ea-7f3926fb754d" (UID: "4df866f6-ac33-4935-b6ea-7f3926fb754d"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 2 07:53:17.897739 kubelet[2153]: I0702 07:53:17.897681 2153 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4df866f6-ac33-4935-b6ea-7f3926fb754d-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "4df866f6-ac33-4935-b6ea-7f3926fb754d" (UID: "4df866f6-ac33-4935-b6ea-7f3926fb754d"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 2 07:53:17.900946 kubelet[2153]: I0702 07:53:17.900897 2153 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f6cb6dc5-21af-476d-a64b-55e4c7bb9dbc-kube-api-access-7kb55" (OuterVolumeSpecName: "kube-api-access-7kb55") pod "f6cb6dc5-21af-476d-a64b-55e4c7bb9dbc" (UID: "f6cb6dc5-21af-476d-a64b-55e4c7bb9dbc"). InnerVolumeSpecName "kube-api-access-7kb55". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 2 07:53:17.985364 kubelet[2153]: I0702 07:53:17.985307 2153 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4df866f6-ac33-4935-b6ea-7f3926fb754d-host-proc-sys-kernel\") on node \"ci-3510-3-5-2480589a916679c70820.c.flatcar-212911.internal\" DevicePath \"\"" Jul 2 07:53:17.985364 kubelet[2153]: I0702 07:53:17.985355 2153 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4df866f6-ac33-4935-b6ea-7f3926fb754d-cilium-config-path\") on node \"ci-3510-3-5-2480589a916679c70820.c.flatcar-212911.internal\" DevicePath \"\"" Jul 2 07:53:17.985364 kubelet[2153]: I0702 07:53:17.985377 2153 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4df866f6-ac33-4935-b6ea-7f3926fb754d-etc-cni-netd\") on node \"ci-3510-3-5-2480589a916679c70820.c.flatcar-212911.internal\" DevicePath \"\"" Jul 2 07:53:17.985739 kubelet[2153]: I0702 07:53:17.985397 2153 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4df866f6-ac33-4935-b6ea-7f3926fb754d-bpf-maps\") on node \"ci-3510-3-5-2480589a916679c70820.c.flatcar-212911.internal\" DevicePath \"\"" Jul 2 07:53:17.985739 kubelet[2153]: I0702 07:53:17.985432 2153 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-gmzpq\" (UniqueName: 
\"kubernetes.io/projected/4df866f6-ac33-4935-b6ea-7f3926fb754d-kube-api-access-gmzpq\") on node \"ci-3510-3-5-2480589a916679c70820.c.flatcar-212911.internal\" DevicePath \"\"" Jul 2 07:53:17.985739 kubelet[2153]: I0702 07:53:17.985451 2153 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4df866f6-ac33-4935-b6ea-7f3926fb754d-hostproc\") on node \"ci-3510-3-5-2480589a916679c70820.c.flatcar-212911.internal\" DevicePath \"\"" Jul 2 07:53:17.985739 kubelet[2153]: I0702 07:53:17.985468 2153 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4df866f6-ac33-4935-b6ea-7f3926fb754d-cilium-run\") on node \"ci-3510-3-5-2480589a916679c70820.c.flatcar-212911.internal\" DevicePath \"\"" Jul 2 07:53:17.985739 kubelet[2153]: I0702 07:53:17.985487 2153 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4df866f6-ac33-4935-b6ea-7f3926fb754d-xtables-lock\") on node \"ci-3510-3-5-2480589a916679c70820.c.flatcar-212911.internal\" DevicePath \"\"" Jul 2 07:53:17.985739 kubelet[2153]: I0702 07:53:17.985518 2153 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f6cb6dc5-21af-476d-a64b-55e4c7bb9dbc-cilium-config-path\") on node \"ci-3510-3-5-2480589a916679c70820.c.flatcar-212911.internal\" DevicePath \"\"" Jul 2 07:53:17.985739 kubelet[2153]: I0702 07:53:17.985537 2153 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4df866f6-ac33-4935-b6ea-7f3926fb754d-host-proc-sys-net\") on node \"ci-3510-3-5-2480589a916679c70820.c.flatcar-212911.internal\" DevicePath \"\"" Jul 2 07:53:17.986018 kubelet[2153]: I0702 07:53:17.985554 2153 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4df866f6-ac33-4935-b6ea-7f3926fb754d-clustermesh-secrets\") 
on node \"ci-3510-3-5-2480589a916679c70820.c.flatcar-212911.internal\" DevicePath \"\"" Jul 2 07:53:17.986018 kubelet[2153]: I0702 07:53:17.985572 2153 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4df866f6-ac33-4935-b6ea-7f3926fb754d-cni-path\") on node \"ci-3510-3-5-2480589a916679c70820.c.flatcar-212911.internal\" DevicePath \"\"" Jul 2 07:53:17.986018 kubelet[2153]: I0702 07:53:17.985589 2153 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4df866f6-ac33-4935-b6ea-7f3926fb754d-hubble-tls\") on node \"ci-3510-3-5-2480589a916679c70820.c.flatcar-212911.internal\" DevicePath \"\"" Jul 2 07:53:17.986018 kubelet[2153]: I0702 07:53:17.985605 2153 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4df866f6-ac33-4935-b6ea-7f3926fb754d-cilium-cgroup\") on node \"ci-3510-3-5-2480589a916679c70820.c.flatcar-212911.internal\" DevicePath \"\"" Jul 2 07:53:17.986018 kubelet[2153]: I0702 07:53:17.985624 2153 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-7kb55\" (UniqueName: \"kubernetes.io/projected/f6cb6dc5-21af-476d-a64b-55e4c7bb9dbc-kube-api-access-7kb55\") on node \"ci-3510-3-5-2480589a916679c70820.c.flatcar-212911.internal\" DevicePath \"\"" Jul 2 07:53:17.986018 kubelet[2153]: I0702 07:53:17.985642 2153 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4df866f6-ac33-4935-b6ea-7f3926fb754d-lib-modules\") on node \"ci-3510-3-5-2480589a916679c70820.c.flatcar-212911.internal\" DevicePath \"\"" Jul 2 07:53:18.525329 kubelet[2153]: I0702 07:53:18.525276 2153 scope.go:117] "RemoveContainer" containerID="66276d93f2a81d823152c45b8521f430fdf87dd82cf9a4a836418eedd9177439" Jul 2 07:53:18.529041 env[1228]: time="2024-07-02T07:53:18.528991735Z" level=info msg="RemoveContainer for 
\"66276d93f2a81d823152c45b8521f430fdf87dd82cf9a4a836418eedd9177439\"" Jul 2 07:53:18.533605 systemd[1]: Removed slice kubepods-burstable-pod4df866f6_ac33_4935_b6ea_7f3926fb754d.slice. Jul 2 07:53:18.533855 systemd[1]: kubepods-burstable-pod4df866f6_ac33_4935_b6ea_7f3926fb754d.slice: Consumed 9.706s CPU time. Jul 2 07:53:18.539067 systemd[1]: Removed slice kubepods-besteffort-podf6cb6dc5_21af_476d_a64b_55e4c7bb9dbc.slice. Jul 2 07:53:18.542201 env[1228]: time="2024-07-02T07:53:18.542141390Z" level=info msg="RemoveContainer for \"66276d93f2a81d823152c45b8521f430fdf87dd82cf9a4a836418eedd9177439\" returns successfully" Jul 2 07:53:18.543535 kubelet[2153]: I0702 07:53:18.543354 2153 scope.go:117] "RemoveContainer" containerID="195a6a0e58e4943f2b73b209f9bbb6143800badafc5ce4c2f7f4bed7f91b15bf" Jul 2 07:53:18.546681 env[1228]: time="2024-07-02T07:53:18.546642243Z" level=info msg="RemoveContainer for \"195a6a0e58e4943f2b73b209f9bbb6143800badafc5ce4c2f7f4bed7f91b15bf\"" Jul 2 07:53:18.552926 env[1228]: time="2024-07-02T07:53:18.552701886Z" level=info msg="RemoveContainer for \"195a6a0e58e4943f2b73b209f9bbb6143800badafc5ce4c2f7f4bed7f91b15bf\" returns successfully" Jul 2 07:53:18.555489 kubelet[2153]: I0702 07:53:18.554491 2153 scope.go:117] "RemoveContainer" containerID="454e8c5fa2f92af41481e123ba0defc42fa1997fefa8e95207500fc159b2ada9" Jul 2 07:53:18.558974 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3f6285e0ade86905e65e89e4fe0943e0cda5d2c63b6c482181d3dea41340412b-rootfs.mount: Deactivated successfully. Jul 2 07:53:18.559120 systemd[1]: var-lib-kubelet-pods-f6cb6dc5\x2d21af\x2d476d\x2da64b\x2d55e4c7bb9dbc-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d7kb55.mount: Deactivated successfully. Jul 2 07:53:18.559236 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bccec4029e688fae299f2e66dfc1ae00e878c7d33ea560aa71e7c9988c996d4d-rootfs.mount: Deactivated successfully. 
Jul 2 07:53:18.559332 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-bccec4029e688fae299f2e66dfc1ae00e878c7d33ea560aa71e7c9988c996d4d-shm.mount: Deactivated successfully. Jul 2 07:53:18.559460 systemd[1]: var-lib-kubelet-pods-4df866f6\x2dac33\x2d4935\x2db6ea\x2d7f3926fb754d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dgmzpq.mount: Deactivated successfully. Jul 2 07:53:18.559571 systemd[1]: var-lib-kubelet-pods-4df866f6\x2dac33\x2d4935\x2db6ea\x2d7f3926fb754d-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jul 2 07:53:18.559674 systemd[1]: var-lib-kubelet-pods-4df866f6\x2dac33\x2d4935\x2db6ea\x2d7f3926fb754d-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jul 2 07:53:18.574040 env[1228]: time="2024-07-02T07:53:18.573994948Z" level=info msg="RemoveContainer for \"454e8c5fa2f92af41481e123ba0defc42fa1997fefa8e95207500fc159b2ada9\"" Jul 2 07:53:18.578535 env[1228]: time="2024-07-02T07:53:18.578493817Z" level=info msg="RemoveContainer for \"454e8c5fa2f92af41481e123ba0defc42fa1997fefa8e95207500fc159b2ada9\" returns successfully" Jul 2 07:53:18.578725 kubelet[2153]: I0702 07:53:18.578691 2153 scope.go:117] "RemoveContainer" containerID="64d1269806203dd5f030853131dc5a3ecb419436513ae906028d2af602c78859" Jul 2 07:53:18.579992 env[1228]: time="2024-07-02T07:53:18.579957458Z" level=info msg="RemoveContainer for \"64d1269806203dd5f030853131dc5a3ecb419436513ae906028d2af602c78859\"" Jul 2 07:53:18.583368 env[1228]: time="2024-07-02T07:53:18.583317028Z" level=info msg="RemoveContainer for \"64d1269806203dd5f030853131dc5a3ecb419436513ae906028d2af602c78859\" returns successfully" Jul 2 07:53:18.583605 kubelet[2153]: I0702 07:53:18.583557 2153 scope.go:117] "RemoveContainer" containerID="8a3c2f326dfcaa3ac9faa042ac3ed33d2f4cc735b5ff075f93c6a3e2a2871208" Jul 2 07:53:18.585049 env[1228]: time="2024-07-02T07:53:18.585000532Z" level=info msg="RemoveContainer for 
\"8a3c2f326dfcaa3ac9faa042ac3ed33d2f4cc735b5ff075f93c6a3e2a2871208\"" Jul 2 07:53:18.588917 env[1228]: time="2024-07-02T07:53:18.588879676Z" level=info msg="RemoveContainer for \"8a3c2f326dfcaa3ac9faa042ac3ed33d2f4cc735b5ff075f93c6a3e2a2871208\" returns successfully" Jul 2 07:53:18.589081 kubelet[2153]: I0702 07:53:18.589058 2153 scope.go:117] "RemoveContainer" containerID="66276d93f2a81d823152c45b8521f430fdf87dd82cf9a4a836418eedd9177439" Jul 2 07:53:18.589457 env[1228]: time="2024-07-02T07:53:18.589349436Z" level=error msg="ContainerStatus for \"66276d93f2a81d823152c45b8521f430fdf87dd82cf9a4a836418eedd9177439\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"66276d93f2a81d823152c45b8521f430fdf87dd82cf9a4a836418eedd9177439\": not found" Jul 2 07:53:18.589614 kubelet[2153]: E0702 07:53:18.589592 2153 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"66276d93f2a81d823152c45b8521f430fdf87dd82cf9a4a836418eedd9177439\": not found" containerID="66276d93f2a81d823152c45b8521f430fdf87dd82cf9a4a836418eedd9177439" Jul 2 07:53:18.589735 kubelet[2153]: I0702 07:53:18.589716 2153 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"66276d93f2a81d823152c45b8521f430fdf87dd82cf9a4a836418eedd9177439"} err="failed to get container status \"66276d93f2a81d823152c45b8521f430fdf87dd82cf9a4a836418eedd9177439\": rpc error: code = NotFound desc = an error occurred when try to find container \"66276d93f2a81d823152c45b8521f430fdf87dd82cf9a4a836418eedd9177439\": not found" Jul 2 07:53:18.589837 kubelet[2153]: I0702 07:53:18.589742 2153 scope.go:117] "RemoveContainer" containerID="195a6a0e58e4943f2b73b209f9bbb6143800badafc5ce4c2f7f4bed7f91b15bf" Jul 2 07:53:18.590154 env[1228]: time="2024-07-02T07:53:18.590008233Z" level=error msg="ContainerStatus for 
\"195a6a0e58e4943f2b73b209f9bbb6143800badafc5ce4c2f7f4bed7f91b15bf\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"195a6a0e58e4943f2b73b209f9bbb6143800badafc5ce4c2f7f4bed7f91b15bf\": not found" Jul 2 07:53:18.590308 kubelet[2153]: E0702 07:53:18.590291 2153 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"195a6a0e58e4943f2b73b209f9bbb6143800badafc5ce4c2f7f4bed7f91b15bf\": not found" containerID="195a6a0e58e4943f2b73b209f9bbb6143800badafc5ce4c2f7f4bed7f91b15bf" Jul 2 07:53:18.590409 kubelet[2153]: I0702 07:53:18.590338 2153 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"195a6a0e58e4943f2b73b209f9bbb6143800badafc5ce4c2f7f4bed7f91b15bf"} err="failed to get container status \"195a6a0e58e4943f2b73b209f9bbb6143800badafc5ce4c2f7f4bed7f91b15bf\": rpc error: code = NotFound desc = an error occurred when try to find container \"195a6a0e58e4943f2b73b209f9bbb6143800badafc5ce4c2f7f4bed7f91b15bf\": not found" Jul 2 07:53:18.590409 kubelet[2153]: I0702 07:53:18.590355 2153 scope.go:117] "RemoveContainer" containerID="454e8c5fa2f92af41481e123ba0defc42fa1997fefa8e95207500fc159b2ada9" Jul 2 07:53:18.590729 env[1228]: time="2024-07-02T07:53:18.590642393Z" level=error msg="ContainerStatus for \"454e8c5fa2f92af41481e123ba0defc42fa1997fefa8e95207500fc159b2ada9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"454e8c5fa2f92af41481e123ba0defc42fa1997fefa8e95207500fc159b2ada9\": not found" Jul 2 07:53:18.590884 kubelet[2153]: E0702 07:53:18.590855 2153 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"454e8c5fa2f92af41481e123ba0defc42fa1997fefa8e95207500fc159b2ada9\": not found" 
containerID="454e8c5fa2f92af41481e123ba0defc42fa1997fefa8e95207500fc159b2ada9" Jul 2 07:53:18.591060 kubelet[2153]: I0702 07:53:18.590907 2153 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"454e8c5fa2f92af41481e123ba0defc42fa1997fefa8e95207500fc159b2ada9"} err="failed to get container status \"454e8c5fa2f92af41481e123ba0defc42fa1997fefa8e95207500fc159b2ada9\": rpc error: code = NotFound desc = an error occurred when try to find container \"454e8c5fa2f92af41481e123ba0defc42fa1997fefa8e95207500fc159b2ada9\": not found" Jul 2 07:53:18.591060 kubelet[2153]: I0702 07:53:18.590924 2153 scope.go:117] "RemoveContainer" containerID="64d1269806203dd5f030853131dc5a3ecb419436513ae906028d2af602c78859" Jul 2 07:53:18.591267 env[1228]: time="2024-07-02T07:53:18.591174865Z" level=error msg="ContainerStatus for \"64d1269806203dd5f030853131dc5a3ecb419436513ae906028d2af602c78859\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"64d1269806203dd5f030853131dc5a3ecb419436513ae906028d2af602c78859\": not found" Jul 2 07:53:18.591491 kubelet[2153]: E0702 07:53:18.591469 2153 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"64d1269806203dd5f030853131dc5a3ecb419436513ae906028d2af602c78859\": not found" containerID="64d1269806203dd5f030853131dc5a3ecb419436513ae906028d2af602c78859" Jul 2 07:53:18.591651 kubelet[2153]: I0702 07:53:18.591513 2153 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"64d1269806203dd5f030853131dc5a3ecb419436513ae906028d2af602c78859"} err="failed to get container status \"64d1269806203dd5f030853131dc5a3ecb419436513ae906028d2af602c78859\": rpc error: code = NotFound desc = an error occurred when try to find container \"64d1269806203dd5f030853131dc5a3ecb419436513ae906028d2af602c78859\": not found" Jul 2 
07:53:18.591651 kubelet[2153]: I0702 07:53:18.591529 2153 scope.go:117] "RemoveContainer" containerID="8a3c2f326dfcaa3ac9faa042ac3ed33d2f4cc735b5ff075f93c6a3e2a2871208" Jul 2 07:53:18.591900 env[1228]: time="2024-07-02T07:53:18.591810684Z" level=error msg="ContainerStatus for \"8a3c2f326dfcaa3ac9faa042ac3ed33d2f4cc735b5ff075f93c6a3e2a2871208\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8a3c2f326dfcaa3ac9faa042ac3ed33d2f4cc735b5ff075f93c6a3e2a2871208\": not found" Jul 2 07:53:18.592045 kubelet[2153]: E0702 07:53:18.592016 2153 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8a3c2f326dfcaa3ac9faa042ac3ed33d2f4cc735b5ff075f93c6a3e2a2871208\": not found" containerID="8a3c2f326dfcaa3ac9faa042ac3ed33d2f4cc735b5ff075f93c6a3e2a2871208" Jul 2 07:53:18.592137 kubelet[2153]: I0702 07:53:18.592089 2153 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8a3c2f326dfcaa3ac9faa042ac3ed33d2f4cc735b5ff075f93c6a3e2a2871208"} err="failed to get container status \"8a3c2f326dfcaa3ac9faa042ac3ed33d2f4cc735b5ff075f93c6a3e2a2871208\": rpc error: code = NotFound desc = an error occurred when try to find container \"8a3c2f326dfcaa3ac9faa042ac3ed33d2f4cc735b5ff075f93c6a3e2a2871208\": not found" Jul 2 07:53:18.592137 kubelet[2153]: I0702 07:53:18.592109 2153 scope.go:117] "RemoveContainer" containerID="e0736a60056a1a813017962d641f643a2a73620f39c5015dd1d9220a8ea5006c" Jul 2 07:53:18.593957 env[1228]: time="2024-07-02T07:53:18.593909751Z" level=info msg="RemoveContainer for \"e0736a60056a1a813017962d641f643a2a73620f39c5015dd1d9220a8ea5006c\"" Jul 2 07:53:18.597924 env[1228]: time="2024-07-02T07:53:18.597876755Z" level=info msg="RemoveContainer for \"e0736a60056a1a813017962d641f643a2a73620f39c5015dd1d9220a8ea5006c\" returns successfully" Jul 2 07:53:18.598092 kubelet[2153]: I0702 
07:53:18.598044 2153 scope.go:117] "RemoveContainer" containerID="e0736a60056a1a813017962d641f643a2a73620f39c5015dd1d9220a8ea5006c" Jul 2 07:53:18.598330 env[1228]: time="2024-07-02T07:53:18.598264582Z" level=error msg="ContainerStatus for \"e0736a60056a1a813017962d641f643a2a73620f39c5015dd1d9220a8ea5006c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e0736a60056a1a813017962d641f643a2a73620f39c5015dd1d9220a8ea5006c\": not found" Jul 2 07:53:18.598488 kubelet[2153]: E0702 07:53:18.598466 2153 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e0736a60056a1a813017962d641f643a2a73620f39c5015dd1d9220a8ea5006c\": not found" containerID="e0736a60056a1a813017962d641f643a2a73620f39c5015dd1d9220a8ea5006c" Jul 2 07:53:18.598627 kubelet[2153]: I0702 07:53:18.598506 2153 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e0736a60056a1a813017962d641f643a2a73620f39c5015dd1d9220a8ea5006c"} err="failed to get container status \"e0736a60056a1a813017962d641f643a2a73620f39c5015dd1d9220a8ea5006c\": rpc error: code = NotFound desc = an error occurred when try to find container \"e0736a60056a1a813017962d641f643a2a73620f39c5015dd1d9220a8ea5006c\": not found" Jul 2 07:53:19.141865 kubelet[2153]: I0702 07:53:19.141825 2153 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="4df866f6-ac33-4935-b6ea-7f3926fb754d" path="/var/lib/kubelet/pods/4df866f6-ac33-4935-b6ea-7f3926fb754d/volumes" Jul 2 07:53:19.142959 kubelet[2153]: I0702 07:53:19.142929 2153 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="f6cb6dc5-21af-476d-a64b-55e4c7bb9dbc" path="/var/lib/kubelet/pods/f6cb6dc5-21af-476d-a64b-55e4c7bb9dbc/volumes" Jul 2 07:53:19.527108 sshd[3693]: pam_unix(sshd:session): session closed for user core Jul 2 07:53:19.532319 systemd-logind[1248]: Session 23 logged 
out. Waiting for processes to exit. Jul 2 07:53:19.532738 systemd[1]: sshd@27-10.128.0.103:22-147.75.109.163:52172.service: Deactivated successfully. Jul 2 07:53:19.533878 systemd[1]: session-23.scope: Deactivated successfully. Jul 2 07:53:19.534064 systemd[1]: session-23.scope: Consumed 1.388s CPU time. Jul 2 07:53:19.535775 systemd-logind[1248]: Removed session 23. Jul 2 07:53:19.574060 systemd[1]: Started sshd@28-10.128.0.103:22-147.75.109.163:52178.service. Jul 2 07:53:19.868192 sshd[3859]: Accepted publickey for core from 147.75.109.163 port 52178 ssh2: RSA SHA256:GSxC+U3gD/L2tgNRotlYHTLXvYsmaWMokGyA5lBCl2s Jul 2 07:53:19.870138 sshd[3859]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:53:19.876991 systemd[1]: Started session-24.scope. Jul 2 07:53:19.878251 systemd-logind[1248]: New session 24 of user core. Jul 2 07:53:20.900097 sshd[3859]: pam_unix(sshd:session): session closed for user core Jul 2 07:53:20.905708 systemd-logind[1248]: Session 24 logged out. Waiting for processes to exit. Jul 2 07:53:20.906920 systemd[1]: sshd@28-10.128.0.103:22-147.75.109.163:52178.service: Deactivated successfully. Jul 2 07:53:20.908105 systemd[1]: session-24.scope: Deactivated successfully. Jul 2 07:53:20.909477 systemd-logind[1248]: Removed session 24. 
Jul 2 07:53:20.917217 kubelet[2153]: I0702 07:53:20.917191 2153 topology_manager.go:215] "Topology Admit Handler" podUID="6474aced-10f3-4dc0-a777-3e5807d064b9" podNamespace="kube-system" podName="cilium-km6m8" Jul 2 07:53:20.917920 kubelet[2153]: E0702 07:53:20.917880 2153 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4df866f6-ac33-4935-b6ea-7f3926fb754d" containerName="mount-cgroup" Jul 2 07:53:20.918105 kubelet[2153]: E0702 07:53:20.918086 2153 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f6cb6dc5-21af-476d-a64b-55e4c7bb9dbc" containerName="cilium-operator" Jul 2 07:53:20.918256 kubelet[2153]: E0702 07:53:20.918239 2153 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4df866f6-ac33-4935-b6ea-7f3926fb754d" containerName="clean-cilium-state" Jul 2 07:53:20.918395 kubelet[2153]: E0702 07:53:20.918377 2153 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4df866f6-ac33-4935-b6ea-7f3926fb754d" containerName="cilium-agent" Jul 2 07:53:20.918595 kubelet[2153]: E0702 07:53:20.918578 2153 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4df866f6-ac33-4935-b6ea-7f3926fb754d" containerName="apply-sysctl-overwrites" Jul 2 07:53:20.918746 kubelet[2153]: E0702 07:53:20.918729 2153 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4df866f6-ac33-4935-b6ea-7f3926fb754d" containerName="mount-bpf-fs" Jul 2 07:53:20.918932 kubelet[2153]: I0702 07:53:20.918907 2153 memory_manager.go:346] "RemoveStaleState removing state" podUID="f6cb6dc5-21af-476d-a64b-55e4c7bb9dbc" containerName="cilium-operator" Jul 2 07:53:20.919079 kubelet[2153]: I0702 07:53:20.919064 2153 memory_manager.go:346] "RemoveStaleState removing state" podUID="4df866f6-ac33-4935-b6ea-7f3926fb754d" containerName="cilium-agent" Jul 2 07:53:20.929110 systemd[1]: Created slice kubepods-burstable-pod6474aced_10f3_4dc0_a777_3e5807d064b9.slice. 
Jul 2 07:53:20.946664 systemd[1]: Started sshd@29-10.128.0.103:22-147.75.109.163:52190.service. Jul 2 07:53:20.970459 kubelet[2153]: W0702 07:53:20.970401 2153 reflector.go:535] object-"kube-system"/"cilium-ipsec-keys": failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:ci-3510-3-5-2480589a916679c70820.c.flatcar-212911.internal" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510-3-5-2480589a916679c70820.c.flatcar-212911.internal' and this object Jul 2 07:53:20.970795 kubelet[2153]: E0702 07:53:20.970772 2153 reflector.go:147] object-"kube-system"/"cilium-ipsec-keys": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:ci-3510-3-5-2480589a916679c70820.c.flatcar-212911.internal" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510-3-5-2480589a916679c70820.c.flatcar-212911.internal' and this object Jul 2 07:53:20.971730 kubelet[2153]: W0702 07:53:20.971700 2153 reflector.go:535] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ci-3510-3-5-2480589a916679c70820.c.flatcar-212911.internal" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510-3-5-2480589a916679c70820.c.flatcar-212911.internal' and this object Jul 2 07:53:20.971934 kubelet[2153]: E0702 07:53:20.971909 2153 reflector.go:147] object-"kube-system"/"cilium-clustermesh": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ci-3510-3-5-2480589a916679c70820.c.flatcar-212911.internal" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510-3-5-2480589a916679c70820.c.flatcar-212911.internal' and this 
object Jul 2 07:53:20.976298 kubelet[2153]: W0702 07:53:20.976266 2153 reflector.go:535] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ci-3510-3-5-2480589a916679c70820.c.flatcar-212911.internal" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510-3-5-2480589a916679c70820.c.flatcar-212911.internal' and this object Jul 2 07:53:20.976521 kubelet[2153]: E0702 07:53:20.976497 2153 reflector.go:147] object-"kube-system"/"hubble-server-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ci-3510-3-5-2480589a916679c70820.c.flatcar-212911.internal" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510-3-5-2480589a916679c70820.c.flatcar-212911.internal' and this object Jul 2 07:53:20.976841 kubelet[2153]: W0702 07:53:20.976815 2153 reflector.go:535] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ci-3510-3-5-2480589a916679c70820.c.flatcar-212911.internal" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510-3-5-2480589a916679c70820.c.flatcar-212911.internal' and this object Jul 2 07:53:20.977004 kubelet[2153]: E0702 07:53:20.976984 2153 reflector.go:147] object-"kube-system"/"cilium-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ci-3510-3-5-2480589a916679c70820.c.flatcar-212911.internal" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510-3-5-2480589a916679c70820.c.flatcar-212911.internal' and this object Jul 2 07:53:21.005132 kubelet[2153]: I0702 07:53:21.005090 2153 
reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6474aced-10f3-4dc0-a777-3e5807d064b9-hostproc\") pod \"cilium-km6m8\" (UID: \"6474aced-10f3-4dc0-a777-3e5807d064b9\") " pod="kube-system/cilium-km6m8" Jul 2 07:53:21.005523 kubelet[2153]: I0702 07:53:21.005497 2153 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6474aced-10f3-4dc0-a777-3e5807d064b9-etc-cni-netd\") pod \"cilium-km6m8\" (UID: \"6474aced-10f3-4dc0-a777-3e5807d064b9\") " pod="kube-system/cilium-km6m8" Jul 2 07:53:21.005814 kubelet[2153]: I0702 07:53:21.005791 2153 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6474aced-10f3-4dc0-a777-3e5807d064b9-lib-modules\") pod \"cilium-km6m8\" (UID: \"6474aced-10f3-4dc0-a777-3e5807d064b9\") " pod="kube-system/cilium-km6m8" Jul 2 07:53:21.006041 kubelet[2153]: I0702 07:53:21.006020 2153 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6474aced-10f3-4dc0-a777-3e5807d064b9-cilium-run\") pod \"cilium-km6m8\" (UID: \"6474aced-10f3-4dc0-a777-3e5807d064b9\") " pod="kube-system/cilium-km6m8" Jul 2 07:53:21.006268 kubelet[2153]: I0702 07:53:21.006236 2153 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6474aced-10f3-4dc0-a777-3e5807d064b9-xtables-lock\") pod \"cilium-km6m8\" (UID: \"6474aced-10f3-4dc0-a777-3e5807d064b9\") " pod="kube-system/cilium-km6m8" Jul 2 07:53:21.006505 kubelet[2153]: I0702 07:53:21.006486 2153 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: 
\"kubernetes.io/host-path/6474aced-10f3-4dc0-a777-3e5807d064b9-cni-path\") pod \"cilium-km6m8\" (UID: \"6474aced-10f3-4dc0-a777-3e5807d064b9\") " pod="kube-system/cilium-km6m8" Jul 2 07:53:21.006743 kubelet[2153]: I0702 07:53:21.006722 2153 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6474aced-10f3-4dc0-a777-3e5807d064b9-clustermesh-secrets\") pod \"cilium-km6m8\" (UID: \"6474aced-10f3-4dc0-a777-3e5807d064b9\") " pod="kube-system/cilium-km6m8" Jul 2 07:53:21.007006 kubelet[2153]: I0702 07:53:21.006986 2153 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6474aced-10f3-4dc0-a777-3e5807d064b9-hubble-tls\") pod \"cilium-km6m8\" (UID: \"6474aced-10f3-4dc0-a777-3e5807d064b9\") " pod="kube-system/cilium-km6m8" Jul 2 07:53:21.007265 kubelet[2153]: I0702 07:53:21.007232 2153 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qh425\" (UniqueName: \"kubernetes.io/projected/6474aced-10f3-4dc0-a777-3e5807d064b9-kube-api-access-qh425\") pod \"cilium-km6m8\" (UID: \"6474aced-10f3-4dc0-a777-3e5807d064b9\") " pod="kube-system/cilium-km6m8" Jul 2 07:53:21.007509 kubelet[2153]: I0702 07:53:21.007489 2153 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/6474aced-10f3-4dc0-a777-3e5807d064b9-cilium-ipsec-secrets\") pod \"cilium-km6m8\" (UID: \"6474aced-10f3-4dc0-a777-3e5807d064b9\") " pod="kube-system/cilium-km6m8" Jul 2 07:53:21.007762 kubelet[2153]: I0702 07:53:21.007742 2153 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6474aced-10f3-4dc0-a777-3e5807d064b9-bpf-maps\") pod \"cilium-km6m8\" (UID: 
\"6474aced-10f3-4dc0-a777-3e5807d064b9\") " pod="kube-system/cilium-km6m8" Jul 2 07:53:21.007983 kubelet[2153]: I0702 07:53:21.007963 2153 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6474aced-10f3-4dc0-a777-3e5807d064b9-host-proc-sys-net\") pod \"cilium-km6m8\" (UID: \"6474aced-10f3-4dc0-a777-3e5807d064b9\") " pod="kube-system/cilium-km6m8" Jul 2 07:53:21.008224 kubelet[2153]: I0702 07:53:21.008204 2153 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6474aced-10f3-4dc0-a777-3e5807d064b9-cilium-cgroup\") pod \"cilium-km6m8\" (UID: \"6474aced-10f3-4dc0-a777-3e5807d064b9\") " pod="kube-system/cilium-km6m8" Jul 2 07:53:21.008489 kubelet[2153]: I0702 07:53:21.008470 2153 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6474aced-10f3-4dc0-a777-3e5807d064b9-cilium-config-path\") pod \"cilium-km6m8\" (UID: \"6474aced-10f3-4dc0-a777-3e5807d064b9\") " pod="kube-system/cilium-km6m8" Jul 2 07:53:21.008679 kubelet[2153]: I0702 07:53:21.008661 2153 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6474aced-10f3-4dc0-a777-3e5807d064b9-host-proc-sys-kernel\") pod \"cilium-km6m8\" (UID: \"6474aced-10f3-4dc0-a777-3e5807d064b9\") " pod="kube-system/cilium-km6m8" Jul 2 07:53:21.246528 sshd[3871]: Accepted publickey for core from 147.75.109.163 port 52190 ssh2: RSA SHA256:GSxC+U3gD/L2tgNRotlYHTLXvYsmaWMokGyA5lBCl2s Jul 2 07:53:21.249798 sshd[3871]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:53:21.256772 systemd[1]: Started session-25.scope. Jul 2 07:53:21.258679 systemd-logind[1248]: New session 25 of user core. 
Jul 2 07:53:21.525627 kubelet[2153]: E0702 07:53:21.525510 2153 pod_workers.go:1300] "Error syncing pod, skipping" err="unmounted volumes=[cilium-config-path cilium-ipsec-secrets clustermesh-secrets hubble-tls], unattached volumes=[], failed to process volumes=[]: context canceled" pod="kube-system/cilium-km6m8" podUID="6474aced-10f3-4dc0-a777-3e5807d064b9" Jul 2 07:53:21.555921 sshd[3871]: pam_unix(sshd:session): session closed for user core Jul 2 07:53:21.560547 systemd[1]: sshd@29-10.128.0.103:22-147.75.109.163:52190.service: Deactivated successfully. Jul 2 07:53:21.561695 systemd[1]: session-25.scope: Deactivated successfully. Jul 2 07:53:21.562657 systemd-logind[1248]: Session 25 logged out. Waiting for processes to exit. Jul 2 07:53:21.563972 systemd-logind[1248]: Removed session 25. Jul 2 07:53:21.604727 systemd[1]: Started sshd@30-10.128.0.103:22-147.75.109.163:52192.service. Jul 2 07:53:21.612939 kubelet[2153]: I0702 07:53:21.612892 2153 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6474aced-10f3-4dc0-a777-3e5807d064b9-hostproc" (OuterVolumeSpecName: "hostproc") pod "6474aced-10f3-4dc0-a777-3e5807d064b9" (UID: "6474aced-10f3-4dc0-a777-3e5807d064b9"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 07:53:21.613122 kubelet[2153]: I0702 07:53:21.612964 2153 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6474aced-10f3-4dc0-a777-3e5807d064b9-hostproc\") pod \"6474aced-10f3-4dc0-a777-3e5807d064b9\" (UID: \"6474aced-10f3-4dc0-a777-3e5807d064b9\") " Jul 2 07:53:21.614074 kubelet[2153]: I0702 07:53:21.613500 2153 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qh425\" (UniqueName: \"kubernetes.io/projected/6474aced-10f3-4dc0-a777-3e5807d064b9-kube-api-access-qh425\") pod \"6474aced-10f3-4dc0-a777-3e5807d064b9\" (UID: \"6474aced-10f3-4dc0-a777-3e5807d064b9\") " Jul 2 07:53:21.614074 kubelet[2153]: I0702 07:53:21.613591 2153 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6474aced-10f3-4dc0-a777-3e5807d064b9-cni-path\") pod \"6474aced-10f3-4dc0-a777-3e5807d064b9\" (UID: \"6474aced-10f3-4dc0-a777-3e5807d064b9\") " Jul 2 07:53:21.614074 kubelet[2153]: I0702 07:53:21.613689 2153 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6474aced-10f3-4dc0-a777-3e5807d064b9-host-proc-sys-kernel\") pod \"6474aced-10f3-4dc0-a777-3e5807d064b9\" (UID: \"6474aced-10f3-4dc0-a777-3e5807d064b9\") " Jul 2 07:53:21.614074 kubelet[2153]: I0702 07:53:21.613741 2153 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6474aced-10f3-4dc0-a777-3e5807d064b9-etc-cni-netd\") pod \"6474aced-10f3-4dc0-a777-3e5807d064b9\" (UID: \"6474aced-10f3-4dc0-a777-3e5807d064b9\") " Jul 2 07:53:21.614074 kubelet[2153]: I0702 07:53:21.613773 2153 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/6474aced-10f3-4dc0-a777-3e5807d064b9-xtables-lock\") pod \"6474aced-10f3-4dc0-a777-3e5807d064b9\" (UID: \"6474aced-10f3-4dc0-a777-3e5807d064b9\") " Jul 2 07:53:21.614074 kubelet[2153]: I0702 07:53:21.613829 2153 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6474aced-10f3-4dc0-a777-3e5807d064b9-lib-modules\") pod \"6474aced-10f3-4dc0-a777-3e5807d064b9\" (UID: \"6474aced-10f3-4dc0-a777-3e5807d064b9\") " Jul 2 07:53:21.614512 kubelet[2153]: I0702 07:53:21.613859 2153 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6474aced-10f3-4dc0-a777-3e5807d064b9-cilium-run\") pod \"6474aced-10f3-4dc0-a777-3e5807d064b9\" (UID: \"6474aced-10f3-4dc0-a777-3e5807d064b9\") " Jul 2 07:53:21.614512 kubelet[2153]: I0702 07:53:21.613908 2153 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6474aced-10f3-4dc0-a777-3e5807d064b9-bpf-maps\") pod \"6474aced-10f3-4dc0-a777-3e5807d064b9\" (UID: \"6474aced-10f3-4dc0-a777-3e5807d064b9\") " Jul 2 07:53:21.614512 kubelet[2153]: I0702 07:53:21.613941 2153 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6474aced-10f3-4dc0-a777-3e5807d064b9-cilium-cgroup\") pod \"6474aced-10f3-4dc0-a777-3e5807d064b9\" (UID: \"6474aced-10f3-4dc0-a777-3e5807d064b9\") " Jul 2 07:53:21.614512 kubelet[2153]: I0702 07:53:21.614006 2153 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6474aced-10f3-4dc0-a777-3e5807d064b9-host-proc-sys-net\") pod \"6474aced-10f3-4dc0-a777-3e5807d064b9\" (UID: \"6474aced-10f3-4dc0-a777-3e5807d064b9\") " Jul 2 07:53:21.614512 kubelet[2153]: I0702 07:53:21.614160 2153 reconciler_common.go:300] "Volume 
detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6474aced-10f3-4dc0-a777-3e5807d064b9-hostproc\") on node \"ci-3510-3-5-2480589a916679c70820.c.flatcar-212911.internal\" DevicePath \"\"" Jul 2 07:53:21.614512 kubelet[2153]: I0702 07:53:21.614215 2153 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6474aced-10f3-4dc0-a777-3e5807d064b9-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "6474aced-10f3-4dc0-a777-3e5807d064b9" (UID: "6474aced-10f3-4dc0-a777-3e5807d064b9"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 07:53:21.614849 kubelet[2153]: I0702 07:53:21.614252 2153 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6474aced-10f3-4dc0-a777-3e5807d064b9-cni-path" (OuterVolumeSpecName: "cni-path") pod "6474aced-10f3-4dc0-a777-3e5807d064b9" (UID: "6474aced-10f3-4dc0-a777-3e5807d064b9"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 07:53:21.614849 kubelet[2153]: I0702 07:53:21.614297 2153 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6474aced-10f3-4dc0-a777-3e5807d064b9-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "6474aced-10f3-4dc0-a777-3e5807d064b9" (UID: "6474aced-10f3-4dc0-a777-3e5807d064b9"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 07:53:21.614849 kubelet[2153]: I0702 07:53:21.614326 2153 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6474aced-10f3-4dc0-a777-3e5807d064b9-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "6474aced-10f3-4dc0-a777-3e5807d064b9" (UID: "6474aced-10f3-4dc0-a777-3e5807d064b9"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 07:53:21.614849 kubelet[2153]: I0702 07:53:21.614354 2153 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6474aced-10f3-4dc0-a777-3e5807d064b9-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "6474aced-10f3-4dc0-a777-3e5807d064b9" (UID: "6474aced-10f3-4dc0-a777-3e5807d064b9"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 07:53:21.614849 kubelet[2153]: I0702 07:53:21.614398 2153 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6474aced-10f3-4dc0-a777-3e5807d064b9-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "6474aced-10f3-4dc0-a777-3e5807d064b9" (UID: "6474aced-10f3-4dc0-a777-3e5807d064b9"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 07:53:21.615107 kubelet[2153]: I0702 07:53:21.614447 2153 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6474aced-10f3-4dc0-a777-3e5807d064b9-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "6474aced-10f3-4dc0-a777-3e5807d064b9" (UID: "6474aced-10f3-4dc0-a777-3e5807d064b9"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 07:53:21.615107 kubelet[2153]: I0702 07:53:21.614476 2153 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6474aced-10f3-4dc0-a777-3e5807d064b9-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "6474aced-10f3-4dc0-a777-3e5807d064b9" (UID: "6474aced-10f3-4dc0-a777-3e5807d064b9"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 07:53:21.615107 kubelet[2153]: I0702 07:53:21.614502 2153 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6474aced-10f3-4dc0-a777-3e5807d064b9-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "6474aced-10f3-4dc0-a777-3e5807d064b9" (UID: "6474aced-10f3-4dc0-a777-3e5807d064b9"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 07:53:21.618412 kubelet[2153]: I0702 07:53:21.618379 2153 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6474aced-10f3-4dc0-a777-3e5807d064b9-kube-api-access-qh425" (OuterVolumeSpecName: "kube-api-access-qh425") pod "6474aced-10f3-4dc0-a777-3e5807d064b9" (UID: "6474aced-10f3-4dc0-a777-3e5807d064b9"). InnerVolumeSpecName "kube-api-access-qh425". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 2 07:53:21.620483 systemd[1]: var-lib-kubelet-pods-6474aced\x2d10f3\x2d4dc0\x2da777\x2d3e5807d064b9-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dqh425.mount: Deactivated successfully. 
Jul 2 07:53:21.715325 kubelet[2153]: I0702 07:53:21.715277 2153 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6474aced-10f3-4dc0-a777-3e5807d064b9-cni-path\") on node \"ci-3510-3-5-2480589a916679c70820.c.flatcar-212911.internal\" DevicePath \"\"" Jul 2 07:53:21.715325 kubelet[2153]: I0702 07:53:21.715328 2153 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6474aced-10f3-4dc0-a777-3e5807d064b9-host-proc-sys-kernel\") on node \"ci-3510-3-5-2480589a916679c70820.c.flatcar-212911.internal\" DevicePath \"\"" Jul 2 07:53:21.715609 kubelet[2153]: I0702 07:53:21.715347 2153 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6474aced-10f3-4dc0-a777-3e5807d064b9-etc-cni-netd\") on node \"ci-3510-3-5-2480589a916679c70820.c.flatcar-212911.internal\" DevicePath \"\"" Jul 2 07:53:21.715609 kubelet[2153]: I0702 07:53:21.715364 2153 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6474aced-10f3-4dc0-a777-3e5807d064b9-xtables-lock\") on node \"ci-3510-3-5-2480589a916679c70820.c.flatcar-212911.internal\" DevicePath \"\"" Jul 2 07:53:21.715609 kubelet[2153]: I0702 07:53:21.715383 2153 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6474aced-10f3-4dc0-a777-3e5807d064b9-cilium-run\") on node \"ci-3510-3-5-2480589a916679c70820.c.flatcar-212911.internal\" DevicePath \"\"" Jul 2 07:53:21.715609 kubelet[2153]: I0702 07:53:21.715415 2153 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6474aced-10f3-4dc0-a777-3e5807d064b9-bpf-maps\") on node \"ci-3510-3-5-2480589a916679c70820.c.flatcar-212911.internal\" DevicePath \"\"" Jul 2 07:53:21.715609 kubelet[2153]: I0702 07:53:21.715444 2153 reconciler_common.go:300] "Volume detached for 
volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6474aced-10f3-4dc0-a777-3e5807d064b9-cilium-cgroup\") on node \"ci-3510-3-5-2480589a916679c70820.c.flatcar-212911.internal\" DevicePath \"\"" Jul 2 07:53:21.715609 kubelet[2153]: I0702 07:53:21.715462 2153 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6474aced-10f3-4dc0-a777-3e5807d064b9-lib-modules\") on node \"ci-3510-3-5-2480589a916679c70820.c.flatcar-212911.internal\" DevicePath \"\"" Jul 2 07:53:21.715609 kubelet[2153]: I0702 07:53:21.715482 2153 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6474aced-10f3-4dc0-a777-3e5807d064b9-host-proc-sys-net\") on node \"ci-3510-3-5-2480589a916679c70820.c.flatcar-212911.internal\" DevicePath \"\"" Jul 2 07:53:21.716016 kubelet[2153]: I0702 07:53:21.715503 2153 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-qh425\" (UniqueName: \"kubernetes.io/projected/6474aced-10f3-4dc0-a777-3e5807d064b9-kube-api-access-qh425\") on node \"ci-3510-3-5-2480589a916679c70820.c.flatcar-212911.internal\" DevicePath \"\"" Jul 2 07:53:21.911608 sshd[3884]: Accepted publickey for core from 147.75.109.163 port 52192 ssh2: RSA SHA256:GSxC+U3gD/L2tgNRotlYHTLXvYsmaWMokGyA5lBCl2s Jul 2 07:53:21.913610 sshd[3884]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:53:21.922298 systemd[1]: Started session-26.scope. Jul 2 07:53:21.924501 systemd-logind[1248]: New session 26 of user core. 
Jul 2 07:53:22.110413 kubelet[2153]: E0702 07:53:22.109924 2153 configmap.go:199] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition Jul 2 07:53:22.110413 kubelet[2153]: E0702 07:53:22.110070 2153 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6474aced-10f3-4dc0-a777-3e5807d064b9-cilium-config-path podName:6474aced-10f3-4dc0-a777-3e5807d064b9 nodeName:}" failed. No retries permitted until 2024-07-02 07:53:22.610023026 +0000 UTC m=+115.753217994 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/6474aced-10f3-4dc0-a777-3e5807d064b9-cilium-config-path") pod "cilium-km6m8" (UID: "6474aced-10f3-4dc0-a777-3e5807d064b9") : failed to sync configmap cache: timed out waiting for the condition Jul 2 07:53:22.110413 kubelet[2153]: E0702 07:53:22.110136 2153 projected.go:267] Couldn't get secret kube-system/hubble-server-certs: failed to sync secret cache: timed out waiting for the condition Jul 2 07:53:22.110413 kubelet[2153]: E0702 07:53:22.110160 2153 projected.go:198] Error preparing data for projected volume hubble-tls for pod kube-system/cilium-km6m8: failed to sync secret cache: timed out waiting for the condition Jul 2 07:53:22.110413 kubelet[2153]: E0702 07:53:22.110217 2153 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6474aced-10f3-4dc0-a777-3e5807d064b9-hubble-tls podName:6474aced-10f3-4dc0-a777-3e5807d064b9 nodeName:}" failed. No retries permitted until 2024-07-02 07:53:22.610199864 +0000 UTC m=+115.753394822 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "hubble-tls" (UniqueName: "kubernetes.io/projected/6474aced-10f3-4dc0-a777-3e5807d064b9-hubble-tls") pod "cilium-km6m8" (UID: "6474aced-10f3-4dc0-a777-3e5807d064b9") : failed to sync secret cache: timed out waiting for the condition Jul 2 07:53:22.110413 kubelet[2153]: E0702 07:53:22.110242 2153 secret.go:194] Couldn't get secret kube-system/cilium-ipsec-keys: failed to sync secret cache: timed out waiting for the condition Jul 2 07:53:22.111161 kubelet[2153]: E0702 07:53:22.110280 2153 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6474aced-10f3-4dc0-a777-3e5807d064b9-cilium-ipsec-secrets podName:6474aced-10f3-4dc0-a777-3e5807d064b9 nodeName:}" failed. No retries permitted until 2024-07-02 07:53:22.610269955 +0000 UTC m=+115.753464896 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-ipsec-secrets" (UniqueName: "kubernetes.io/secret/6474aced-10f3-4dc0-a777-3e5807d064b9-cilium-ipsec-secrets") pod "cilium-km6m8" (UID: "6474aced-10f3-4dc0-a777-3e5807d064b9") : failed to sync secret cache: timed out waiting for the condition Jul 2 07:53:22.111161 kubelet[2153]: E0702 07:53:22.110301 2153 secret.go:194] Couldn't get secret kube-system/cilium-clustermesh: failed to sync secret cache: timed out waiting for the condition Jul 2 07:53:22.111161 kubelet[2153]: E0702 07:53:22.110328 2153 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6474aced-10f3-4dc0-a777-3e5807d064b9-clustermesh-secrets podName:6474aced-10f3-4dc0-a777-3e5807d064b9 nodeName:}" failed. No retries permitted until 2024-07-02 07:53:22.610319315 +0000 UTC m=+115.753514272 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "clustermesh-secrets" (UniqueName: "kubernetes.io/secret/6474aced-10f3-4dc0-a777-3e5807d064b9-clustermesh-secrets") pod "cilium-km6m8" (UID: "6474aced-10f3-4dc0-a777-3e5807d064b9") : failed to sync secret cache: timed out waiting for the condition Jul 2 07:53:22.304021 kubelet[2153]: E0702 07:53:22.303875 2153 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 2 07:53:22.553903 systemd[1]: Removed slice kubepods-burstable-pod6474aced_10f3_4dc0_a777_3e5807d064b9.slice. Jul 2 07:53:22.587268 kubelet[2153]: I0702 07:53:22.587127 2153 topology_manager.go:215] "Topology Admit Handler" podUID="039717d2-eafe-4664-8a63-1be07bc18896" podNamespace="kube-system" podName="cilium-7bk6c" Jul 2 07:53:22.598148 systemd[1]: Created slice kubepods-burstable-pod039717d2_eafe_4664_8a63_1be07bc18896.slice. Jul 2 07:53:22.622974 kubelet[2153]: I0702 07:53:22.622937 2153 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/039717d2-eafe-4664-8a63-1be07bc18896-host-proc-sys-net\") pod \"cilium-7bk6c\" (UID: \"039717d2-eafe-4664-8a63-1be07bc18896\") " pod="kube-system/cilium-7bk6c" Jul 2 07:53:22.623277 kubelet[2153]: I0702 07:53:22.623012 2153 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s966m\" (UniqueName: \"kubernetes.io/projected/039717d2-eafe-4664-8a63-1be07bc18896-kube-api-access-s966m\") pod \"cilium-7bk6c\" (UID: \"039717d2-eafe-4664-8a63-1be07bc18896\") " pod="kube-system/cilium-7bk6c" Jul 2 07:53:22.623277 kubelet[2153]: I0702 07:53:22.623066 2153 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: 
\"kubernetes.io/host-path/039717d2-eafe-4664-8a63-1be07bc18896-host-proc-sys-kernel\") pod \"cilium-7bk6c\" (UID: \"039717d2-eafe-4664-8a63-1be07bc18896\") " pod="kube-system/cilium-7bk6c" Jul 2 07:53:22.623277 kubelet[2153]: I0702 07:53:22.623103 2153 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/039717d2-eafe-4664-8a63-1be07bc18896-bpf-maps\") pod \"cilium-7bk6c\" (UID: \"039717d2-eafe-4664-8a63-1be07bc18896\") " pod="kube-system/cilium-7bk6c" Jul 2 07:53:22.623277 kubelet[2153]: I0702 07:53:22.623158 2153 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/039717d2-eafe-4664-8a63-1be07bc18896-etc-cni-netd\") pod \"cilium-7bk6c\" (UID: \"039717d2-eafe-4664-8a63-1be07bc18896\") " pod="kube-system/cilium-7bk6c" Jul 2 07:53:22.623277 kubelet[2153]: I0702 07:53:22.623195 2153 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/039717d2-eafe-4664-8a63-1be07bc18896-cilium-run\") pod \"cilium-7bk6c\" (UID: \"039717d2-eafe-4664-8a63-1be07bc18896\") " pod="kube-system/cilium-7bk6c" Jul 2 07:53:22.623277 kubelet[2153]: I0702 07:53:22.623247 2153 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/039717d2-eafe-4664-8a63-1be07bc18896-hostproc\") pod \"cilium-7bk6c\" (UID: \"039717d2-eafe-4664-8a63-1be07bc18896\") " pod="kube-system/cilium-7bk6c" Jul 2 07:53:22.623679 kubelet[2153]: I0702 07:53:22.623279 2153 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/039717d2-eafe-4664-8a63-1be07bc18896-lib-modules\") pod \"cilium-7bk6c\" (UID: \"039717d2-eafe-4664-8a63-1be07bc18896\") " 
pod="kube-system/cilium-7bk6c"
Jul 2 07:53:22.623679 kubelet[2153]: I0702 07:53:22.623331 2153 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/039717d2-eafe-4664-8a63-1be07bc18896-cilium-ipsec-secrets\") pod \"cilium-7bk6c\" (UID: \"039717d2-eafe-4664-8a63-1be07bc18896\") " pod="kube-system/cilium-7bk6c"
Jul 2 07:53:22.623679 kubelet[2153]: I0702 07:53:22.623387 2153 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/039717d2-eafe-4664-8a63-1be07bc18896-xtables-lock\") pod \"cilium-7bk6c\" (UID: \"039717d2-eafe-4664-8a63-1be07bc18896\") " pod="kube-system/cilium-7bk6c"
Jul 2 07:53:22.623679 kubelet[2153]: I0702 07:53:22.623447 2153 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/039717d2-eafe-4664-8a63-1be07bc18896-cilium-config-path\") pod \"cilium-7bk6c\" (UID: \"039717d2-eafe-4664-8a63-1be07bc18896\") " pod="kube-system/cilium-7bk6c"
Jul 2 07:53:22.623679 kubelet[2153]: I0702 07:53:22.623483 2153 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/039717d2-eafe-4664-8a63-1be07bc18896-hubble-tls\") pod \"cilium-7bk6c\" (UID: \"039717d2-eafe-4664-8a63-1be07bc18896\") " pod="kube-system/cilium-7bk6c"
Jul 2 07:53:22.623679 kubelet[2153]: I0702 07:53:22.623543 2153 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/039717d2-eafe-4664-8a63-1be07bc18896-clustermesh-secrets\") pod \"cilium-7bk6c\" (UID: \"039717d2-eafe-4664-8a63-1be07bc18896\") " pod="kube-system/cilium-7bk6c"
Jul 2 07:53:22.624008 kubelet[2153]: I0702 07:53:22.623579 2153 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/039717d2-eafe-4664-8a63-1be07bc18896-cni-path\") pod \"cilium-7bk6c\" (UID: \"039717d2-eafe-4664-8a63-1be07bc18896\") " pod="kube-system/cilium-7bk6c"
Jul 2 07:53:22.624008 kubelet[2153]: I0702 07:53:22.623641 2153 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/039717d2-eafe-4664-8a63-1be07bc18896-cilium-cgroup\") pod \"cilium-7bk6c\" (UID: \"039717d2-eafe-4664-8a63-1be07bc18896\") " pod="kube-system/cilium-7bk6c"
Jul 2 07:53:22.624008 kubelet[2153]: I0702 07:53:22.623698 2153 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6474aced-10f3-4dc0-a777-3e5807d064b9-clustermesh-secrets\") on node \"ci-3510-3-5-2480589a916679c70820.c.flatcar-212911.internal\" DevicePath \"\""
Jul 2 07:53:22.624008 kubelet[2153]: I0702 07:53:22.623720 2153 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6474aced-10f3-4dc0-a777-3e5807d064b9-cilium-config-path\") on node \"ci-3510-3-5-2480589a916679c70820.c.flatcar-212911.internal\" DevicePath \"\""
Jul 2 07:53:22.624008 kubelet[2153]: I0702 07:53:22.623742 2153 reconciler_common.go:300] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/6474aced-10f3-4dc0-a777-3e5807d064b9-cilium-ipsec-secrets\") on node \"ci-3510-3-5-2480589a916679c70820.c.flatcar-212911.internal\" DevicePath \"\""
Jul 2 07:53:22.624008 kubelet[2153]: I0702 07:53:22.623783 2153 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6474aced-10f3-4dc0-a777-3e5807d064b9-hubble-tls\") on node \"ci-3510-3-5-2480589a916679c70820.c.flatcar-212911.internal\" DevicePath \"\""
Jul 2 07:53:22.907658 env[1228]: time="2024-07-02T07:53:22.907575992Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-7bk6c,Uid:039717d2-eafe-4664-8a63-1be07bc18896,Namespace:kube-system,Attempt:0,}"
Jul 2 07:53:22.935001 env[1228]: time="2024-07-02T07:53:22.934913725Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 2 07:53:22.935247 env[1228]: time="2024-07-02T07:53:22.934974629Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 2 07:53:22.935247 env[1228]: time="2024-07-02T07:53:22.934992599Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 07:53:22.936068 env[1228]: time="2024-07-02T07:53:22.935261560Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/19e075718f12391a73e25b0990080e3c6722e184da96b19dd9b01e57d40c46bf pid=3909 runtime=io.containerd.runc.v2
Jul 2 07:53:22.951230 systemd[1]: Started cri-containerd-19e075718f12391a73e25b0990080e3c6722e184da96b19dd9b01e57d40c46bf.scope.
Jul 2 07:53:22.989461 env[1228]: time="2024-07-02T07:53:22.989366917Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-7bk6c,Uid:039717d2-eafe-4664-8a63-1be07bc18896,Namespace:kube-system,Attempt:0,} returns sandbox id \"19e075718f12391a73e25b0990080e3c6722e184da96b19dd9b01e57d40c46bf\""
Jul 2 07:53:22.993873 env[1228]: time="2024-07-02T07:53:22.993340284Z" level=info msg="CreateContainer within sandbox \"19e075718f12391a73e25b0990080e3c6722e184da96b19dd9b01e57d40c46bf\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jul 2 07:53:23.009525 env[1228]: time="2024-07-02T07:53:23.009482179Z" level=info msg="CreateContainer within sandbox \"19e075718f12391a73e25b0990080e3c6722e184da96b19dd9b01e57d40c46bf\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"40b18b22ab1938b4a48dcc5b15573fc7b7979797df51784ea5058d3a58059933\""
Jul 2 07:53:23.012296 env[1228]: time="2024-07-02T07:53:23.011323056Z" level=info msg="StartContainer for \"40b18b22ab1938b4a48dcc5b15573fc7b7979797df51784ea5058d3a58059933\""
Jul 2 07:53:23.035385 systemd[1]: Started cri-containerd-40b18b22ab1938b4a48dcc5b15573fc7b7979797df51784ea5058d3a58059933.scope.
Jul 2 07:53:23.074069 env[1228]: time="2024-07-02T07:53:23.074012503Z" level=info msg="StartContainer for \"40b18b22ab1938b4a48dcc5b15573fc7b7979797df51784ea5058d3a58059933\" returns successfully"
Jul 2 07:53:23.086360 systemd[1]: cri-containerd-40b18b22ab1938b4a48dcc5b15573fc7b7979797df51784ea5058d3a58059933.scope: Deactivated successfully.
Jul 2 07:53:23.124780 env[1228]: time="2024-07-02T07:53:23.124716010Z" level=info msg="shim disconnected" id=40b18b22ab1938b4a48dcc5b15573fc7b7979797df51784ea5058d3a58059933
Jul 2 07:53:23.125076 env[1228]: time="2024-07-02T07:53:23.124784757Z" level=warning msg="cleaning up after shim disconnected" id=40b18b22ab1938b4a48dcc5b15573fc7b7979797df51784ea5058d3a58059933 namespace=k8s.io
Jul 2 07:53:23.125076 env[1228]: time="2024-07-02T07:53:23.124799762Z" level=info msg="cleaning up dead shim"
Jul 2 07:53:23.136831 env[1228]: time="2024-07-02T07:53:23.136765722Z" level=warning msg="cleanup warnings time=\"2024-07-02T07:53:23Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3998 runtime=io.containerd.runc.v2\n"
Jul 2 07:53:23.142863 kubelet[2153]: I0702 07:53:23.142790 2153 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="6474aced-10f3-4dc0-a777-3e5807d064b9" path="/var/lib/kubelet/pods/6474aced-10f3-4dc0-a777-3e5807d064b9/volumes"
Jul 2 07:53:23.560133 env[1228]: time="2024-07-02T07:53:23.560068726Z" level=info msg="CreateContainer within sandbox \"19e075718f12391a73e25b0990080e3c6722e184da96b19dd9b01e57d40c46bf\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jul 2 07:53:23.610177 env[1228]: time="2024-07-02T07:53:23.610119350Z" level=info msg="CreateContainer within sandbox \"19e075718f12391a73e25b0990080e3c6722e184da96b19dd9b01e57d40c46bf\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"2560e92b0caad445026c29ee7f4f8e5a6ece165c9328a1bbfb9b80aea6894f31\""
Jul 2 07:53:23.618629 env[1228]: time="2024-07-02T07:53:23.618572260Z" level=info msg="StartContainer for \"2560e92b0caad445026c29ee7f4f8e5a6ece165c9328a1bbfb9b80aea6894f31\""
Jul 2 07:53:23.662476 systemd[1]: Started cri-containerd-2560e92b0caad445026c29ee7f4f8e5a6ece165c9328a1bbfb9b80aea6894f31.scope.
Jul 2 07:53:23.727047 env[1228]: time="2024-07-02T07:53:23.726984996Z" level=info msg="StartContainer for \"2560e92b0caad445026c29ee7f4f8e5a6ece165c9328a1bbfb9b80aea6894f31\" returns successfully"
Jul 2 07:53:23.742323 systemd[1]: cri-containerd-2560e92b0caad445026c29ee7f4f8e5a6ece165c9328a1bbfb9b80aea6894f31.scope: Deactivated successfully.
Jul 2 07:53:23.776064 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2560e92b0caad445026c29ee7f4f8e5a6ece165c9328a1bbfb9b80aea6894f31-rootfs.mount: Deactivated successfully.
Jul 2 07:53:23.781990 env[1228]: time="2024-07-02T07:53:23.781919584Z" level=info msg="shim disconnected" id=2560e92b0caad445026c29ee7f4f8e5a6ece165c9328a1bbfb9b80aea6894f31
Jul 2 07:53:23.781990 env[1228]: time="2024-07-02T07:53:23.781990624Z" level=warning msg="cleaning up after shim disconnected" id=2560e92b0caad445026c29ee7f4f8e5a6ece165c9328a1bbfb9b80aea6894f31 namespace=k8s.io
Jul 2 07:53:23.782335 env[1228]: time="2024-07-02T07:53:23.782005498Z" level=info msg="cleaning up dead shim"
Jul 2 07:53:23.793737 env[1228]: time="2024-07-02T07:53:23.793684202Z" level=warning msg="cleanup warnings time=\"2024-07-02T07:53:23Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4060 runtime=io.containerd.runc.v2\n"
Jul 2 07:53:24.560373 env[1228]: time="2024-07-02T07:53:24.560313508Z" level=info msg="CreateContainer within sandbox \"19e075718f12391a73e25b0990080e3c6722e184da96b19dd9b01e57d40c46bf\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jul 2 07:53:24.589724 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3082115386.mount: Deactivated successfully.
Jul 2 07:53:24.591254 env[1228]: time="2024-07-02T07:53:24.591204399Z" level=info msg="CreateContainer within sandbox \"19e075718f12391a73e25b0990080e3c6722e184da96b19dd9b01e57d40c46bf\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"9e507f78242cddb3c823f89ab896d1699f79d02e1419348060fc93ecbf7b59f4\""
Jul 2 07:53:24.592749 env[1228]: time="2024-07-02T07:53:24.592712203Z" level=info msg="StartContainer for \"9e507f78242cddb3c823f89ab896d1699f79d02e1419348060fc93ecbf7b59f4\""
Jul 2 07:53:24.628258 systemd[1]: Started cri-containerd-9e507f78242cddb3c823f89ab896d1699f79d02e1419348060fc93ecbf7b59f4.scope.
Jul 2 07:53:24.680262 systemd[1]: cri-containerd-9e507f78242cddb3c823f89ab896d1699f79d02e1419348060fc93ecbf7b59f4.scope: Deactivated successfully.
Jul 2 07:53:24.681729 env[1228]: time="2024-07-02T07:53:24.681481648Z" level=info msg="StartContainer for \"9e507f78242cddb3c823f89ab896d1699f79d02e1419348060fc93ecbf7b59f4\" returns successfully"
Jul 2 07:53:24.718050 env[1228]: time="2024-07-02T07:53:24.717986023Z" level=info msg="shim disconnected" id=9e507f78242cddb3c823f89ab896d1699f79d02e1419348060fc93ecbf7b59f4
Jul 2 07:53:24.718346 env[1228]: time="2024-07-02T07:53:24.718054154Z" level=warning msg="cleaning up after shim disconnected" id=9e507f78242cddb3c823f89ab896d1699f79d02e1419348060fc93ecbf7b59f4 namespace=k8s.io
Jul 2 07:53:24.718346 env[1228]: time="2024-07-02T07:53:24.718068998Z" level=info msg="cleaning up dead shim"
Jul 2 07:53:24.731627 env[1228]: time="2024-07-02T07:53:24.731579713Z" level=warning msg="cleanup warnings time=\"2024-07-02T07:53:24Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4117 runtime=io.containerd.runc.v2\n"
Jul 2 07:53:24.736016 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9e507f78242cddb3c823f89ab896d1699f79d02e1419348060fc93ecbf7b59f4-rootfs.mount: Deactivated successfully.
Jul 2 07:53:25.565729 env[1228]: time="2024-07-02T07:53:25.565396310Z" level=info msg="CreateContainer within sandbox \"19e075718f12391a73e25b0990080e3c6722e184da96b19dd9b01e57d40c46bf\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jul 2 07:53:25.600652 env[1228]: time="2024-07-02T07:53:25.600592240Z" level=info msg="CreateContainer within sandbox \"19e075718f12391a73e25b0990080e3c6722e184da96b19dd9b01e57d40c46bf\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"3fa3a9e500c68e542f651be0343250085199e3043f72c735292510fffce27179\""
Jul 2 07:53:25.601637 env[1228]: time="2024-07-02T07:53:25.601596673Z" level=info msg="StartContainer for \"3fa3a9e500c68e542f651be0343250085199e3043f72c735292510fffce27179\""
Jul 2 07:53:25.635321 systemd[1]: Started cri-containerd-3fa3a9e500c68e542f651be0343250085199e3043f72c735292510fffce27179.scope.
Jul 2 07:53:25.685179 systemd[1]: cri-containerd-3fa3a9e500c68e542f651be0343250085199e3043f72c735292510fffce27179.scope: Deactivated successfully.
Jul 2 07:53:25.686717 env[1228]: time="2024-07-02T07:53:25.686665165Z" level=info msg="StartContainer for \"3fa3a9e500c68e542f651be0343250085199e3043f72c735292510fffce27179\" returns successfully"
Jul 2 07:53:25.715898 env[1228]: time="2024-07-02T07:53:25.715808142Z" level=info msg="shim disconnected" id=3fa3a9e500c68e542f651be0343250085199e3043f72c735292510fffce27179
Jul 2 07:53:25.715898 env[1228]: time="2024-07-02T07:53:25.715885764Z" level=warning msg="cleaning up after shim disconnected" id=3fa3a9e500c68e542f651be0343250085199e3043f72c735292510fffce27179 namespace=k8s.io
Jul 2 07:53:25.715898 env[1228]: time="2024-07-02T07:53:25.715900730Z" level=info msg="cleaning up dead shim"
Jul 2 07:53:25.726491 env[1228]: time="2024-07-02T07:53:25.726430569Z" level=warning msg="cleanup warnings time=\"2024-07-02T07:53:25Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4173 runtime=io.containerd.runc.v2\ntime=\"2024-07-02T07:53:25Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n"
Jul 2 07:53:25.735797 systemd[1]: run-containerd-runc-k8s.io-3fa3a9e500c68e542f651be0343250085199e3043f72c735292510fffce27179-runc.DlBNrs.mount: Deactivated successfully.
Jul 2 07:53:25.735944 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3fa3a9e500c68e542f651be0343250085199e3043f72c735292510fffce27179-rootfs.mount: Deactivated successfully.
Jul 2 07:53:26.139081 kubelet[2153]: E0702 07:53:26.139033 2153 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-5dd5756b68-wqrbm" podUID="093686ea-bbf5-45d6-a6f6-df501bc39fb5"
Jul 2 07:53:26.571597 env[1228]: time="2024-07-02T07:53:26.571468448Z" level=info msg="CreateContainer within sandbox \"19e075718f12391a73e25b0990080e3c6722e184da96b19dd9b01e57d40c46bf\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jul 2 07:53:26.600598 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1693504587.mount: Deactivated successfully.
Jul 2 07:53:26.604829 env[1228]: time="2024-07-02T07:53:26.604747760Z" level=info msg="CreateContainer within sandbox \"19e075718f12391a73e25b0990080e3c6722e184da96b19dd9b01e57d40c46bf\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"51a5342fa6d967c04bf144ed31939e64bdc213a4530bb85b327288c848e5039f\""
Jul 2 07:53:26.606014 env[1228]: time="2024-07-02T07:53:26.605972308Z" level=info msg="StartContainer for \"51a5342fa6d967c04bf144ed31939e64bdc213a4530bb85b327288c848e5039f\""
Jul 2 07:53:26.649246 systemd[1]: Started cri-containerd-51a5342fa6d967c04bf144ed31939e64bdc213a4530bb85b327288c848e5039f.scope.
Jul 2 07:53:26.695537 env[1228]: time="2024-07-02T07:53:26.695394977Z" level=info msg="StartContainer for \"51a5342fa6d967c04bf144ed31939e64bdc213a4530bb85b327288c848e5039f\" returns successfully"
Jul 2 07:53:26.737294 systemd[1]: run-containerd-runc-k8s.io-51a5342fa6d967c04bf144ed31939e64bdc213a4530bb85b327288c848e5039f-runc.6Gk22h.mount: Deactivated successfully.
Jul 2 07:53:27.097882 env[1228]: time="2024-07-02T07:53:27.097821538Z" level=info msg="StopPodSandbox for \"bccec4029e688fae299f2e66dfc1ae00e878c7d33ea560aa71e7c9988c996d4d\""
Jul 2 07:53:27.098095 env[1228]: time="2024-07-02T07:53:27.097949923Z" level=info msg="TearDown network for sandbox \"bccec4029e688fae299f2e66dfc1ae00e878c7d33ea560aa71e7c9988c996d4d\" successfully"
Jul 2 07:53:27.098095 env[1228]: time="2024-07-02T07:53:27.097998431Z" level=info msg="StopPodSandbox for \"bccec4029e688fae299f2e66dfc1ae00e878c7d33ea560aa71e7c9988c996d4d\" returns successfully"
Jul 2 07:53:27.098558 env[1228]: time="2024-07-02T07:53:27.098522545Z" level=info msg="RemovePodSandbox for \"bccec4029e688fae299f2e66dfc1ae00e878c7d33ea560aa71e7c9988c996d4d\""
Jul 2 07:53:27.098713 env[1228]: time="2024-07-02T07:53:27.098566209Z" level=info msg="Forcibly stopping sandbox \"bccec4029e688fae299f2e66dfc1ae00e878c7d33ea560aa71e7c9988c996d4d\""
Jul 2 07:53:27.098713 env[1228]: time="2024-07-02T07:53:27.098672847Z" level=info msg="TearDown network for sandbox \"bccec4029e688fae299f2e66dfc1ae00e878c7d33ea560aa71e7c9988c996d4d\" successfully"
Jul 2 07:53:27.103356 env[1228]: time="2024-07-02T07:53:27.103314547Z" level=info msg="RemovePodSandbox \"bccec4029e688fae299f2e66dfc1ae00e878c7d33ea560aa71e7c9988c996d4d\" returns successfully"
Jul 2 07:53:27.103884 env[1228]: time="2024-07-02T07:53:27.103837982Z" level=info msg="StopPodSandbox for \"3f6285e0ade86905e65e89e4fe0943e0cda5d2c63b6c482181d3dea41340412b\""
Jul 2 07:53:27.103998 env[1228]: time="2024-07-02T07:53:27.103941842Z" level=info msg="TearDown network for sandbox \"3f6285e0ade86905e65e89e4fe0943e0cda5d2c63b6c482181d3dea41340412b\" successfully"
Jul 2 07:53:27.103998 env[1228]: time="2024-07-02T07:53:27.103988240Z" level=info msg="StopPodSandbox for \"3f6285e0ade86905e65e89e4fe0943e0cda5d2c63b6c482181d3dea41340412b\" returns successfully"
Jul 2 07:53:27.104383 env[1228]: time="2024-07-02T07:53:27.104337814Z" level=info msg="RemovePodSandbox for \"3f6285e0ade86905e65e89e4fe0943e0cda5d2c63b6c482181d3dea41340412b\""
Jul 2 07:53:27.104516 env[1228]: time="2024-07-02T07:53:27.104380347Z" level=info msg="Forcibly stopping sandbox \"3f6285e0ade86905e65e89e4fe0943e0cda5d2c63b6c482181d3dea41340412b\""
Jul 2 07:53:27.104516 env[1228]: time="2024-07-02T07:53:27.104494652Z" level=info msg="TearDown network for sandbox \"3f6285e0ade86905e65e89e4fe0943e0cda5d2c63b6c482181d3dea41340412b\" successfully"
Jul 2 07:53:27.108510 env[1228]: time="2024-07-02T07:53:27.108472987Z" level=info msg="RemovePodSandbox \"3f6285e0ade86905e65e89e4fe0943e0cda5d2c63b6c482181d3dea41340412b\" returns successfully"
Jul 2 07:53:27.144595 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Jul 2 07:53:27.597924 kubelet[2153]: I0702 07:53:27.597877 2153 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-7bk6c" podStartSLOduration=5.597821763 podCreationTimestamp="2024-07-02 07:53:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 07:53:27.594081109 +0000 UTC m=+120.737276088" watchObservedRunningTime="2024-07-02 07:53:27.597821763 +0000 UTC m=+120.741016732"
Jul 2 07:53:30.094195 systemd-networkd[1031]: lxc_health: Link UP
Jul 2 07:53:30.139454 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Jul 2 07:53:30.139883 systemd-networkd[1031]: lxc_health: Gained carrier
Jul 2 07:53:30.537069 systemd[1]: run-containerd-runc-k8s.io-51a5342fa6d967c04bf144ed31939e64bdc213a4530bb85b327288c848e5039f-runc.CA16gT.mount: Deactivated successfully.
Jul 2 07:53:31.323275 systemd-networkd[1031]: lxc_health: Gained IPv6LL
Jul 2 07:53:32.321630 systemd[1]: Started sshd@31-10.128.0.103:22-156.255.1.88:38766.service.
Jul 2 07:53:33.121857 sshd[4789]: Failed password for root from 156.255.1.88 port 38766 ssh2
Jul 2 07:53:33.269952 sshd[4789]: Received disconnect from 156.255.1.88 port 38766:11: Bye Bye [preauth]
Jul 2 07:53:33.270210 sshd[4789]: Disconnected from authenticating user root 156.255.1.88 port 38766 [preauth]
Jul 2 07:53:33.272241 systemd[1]: sshd@31-10.128.0.103:22-156.255.1.88:38766.service: Deactivated successfully.
Jul 2 07:53:35.059266 systemd[1]: run-containerd-runc-k8s.io-51a5342fa6d967c04bf144ed31939e64bdc213a4530bb85b327288c848e5039f-runc.t7HnSX.mount: Deactivated successfully.
Jul 2 07:53:37.354028 systemd[1]: run-containerd-runc-k8s.io-51a5342fa6d967c04bf144ed31939e64bdc213a4530bb85b327288c848e5039f-runc.x8aTd9.mount: Deactivated successfully.
Jul 2 07:53:37.471232 sshd[3884]: pam_unix(sshd:session): session closed for user core
Jul 2 07:53:37.475775 systemd[1]: sshd@30-10.128.0.103:22-147.75.109.163:52192.service: Deactivated successfully.
Jul 2 07:53:37.476889 systemd[1]: session-26.scope: Deactivated successfully.
Jul 2 07:53:37.477877 systemd-logind[1248]: Session 26 logged out. Waiting for processes to exit.
Jul 2 07:53:37.479173 systemd-logind[1248]: Removed session 26.