Jul 2 07:53:21.024430 kernel: Linux version 5.15.161-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Mon Jul 1 23:45:21 -00 2024
Jul 2 07:53:21.024461 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=d29251fe942de56b08103b03939b6e5af4108e76dc6080fe2498c5db43f16e82
Jul 2 07:53:21.024474 kernel: BIOS-provided physical RAM map:
Jul 2 07:53:21.024484 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Jul 2 07:53:21.024493 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved
Jul 2 07:53:21.024503 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003ff40fff] usable
Jul 2 07:53:21.024517 kernel: BIOS-e820: [mem 0x000000003ff41000-0x000000003ffc8fff] reserved
Jul 2 07:53:21.024528 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data
Jul 2 07:53:21.024538 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS
Jul 2 07:53:21.024548 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable
Jul 2 07:53:21.024559 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable
Jul 2 07:53:21.024569 kernel: printk: bootconsole [earlyser0] enabled
Jul 2 07:53:21.024579 kernel: NX (Execute Disable) protection: active
Jul 2 07:53:21.024589 kernel: efi: EFI v2.70 by Microsoft
Jul 2 07:53:21.024605 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff85000 SMBIOS 3.0=0x3ff83000 MEMATTR=0x3f5c7a98 RNG=0x3ffd1018
Jul 2 07:53:21.024617 kernel: random: crng init done
Jul 2 07:53:21.024628 kernel: SMBIOS 3.1.0 present.
Jul 2 07:53:21.024639 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 03/08/2024
Jul 2 07:53:21.024650 kernel: Hypervisor detected: Microsoft Hyper-V
Jul 2 07:53:21.024661 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x64e24, misc 0xbed7b2
Jul 2 07:53:21.024672 kernel: Hyper-V Host Build:20348-10.0-1-0.1633
Jul 2 07:53:21.024683 kernel: Hyper-V: Nested features: 0x1e0101
Jul 2 07:53:21.024696 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40
Jul 2 07:53:21.024707 kernel: Hyper-V: Using hypercall for remote TLB flush
Jul 2 07:53:21.024718 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Jul 2 07:53:21.024729 kernel: tsc: Marking TSC unstable due to running on Hyper-V
Jul 2 07:53:21.024741 kernel: tsc: Detected 2593.906 MHz processor
Jul 2 07:53:21.024753 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jul 2 07:53:21.024765 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jul 2 07:53:21.024776 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x400000000
Jul 2 07:53:21.024787 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jul 2 07:53:21.024798 kernel: e820: update [mem 0x40000000-0xffffffff] usable ==> reserved
Jul 2 07:53:21.024812 kernel: last_pfn = 0x40000 max_arch_pfn = 0x400000000
Jul 2 07:53:21.024823 kernel: Using GB pages for direct mapping
Jul 2 07:53:21.024835 kernel: Secure boot disabled
Jul 2 07:53:21.024846 kernel: ACPI: Early table checksum verification disabled
Jul 2 07:53:21.024857 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL)
Jul 2 07:53:21.024869 kernel: ACPI: XSDT 0x000000003FFF90E8 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 2 07:53:21.024880 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 2 07:53:21.024892 kernel: ACPI: DSDT 0x000000003FFD6000 01E184 (v02 MSFTVM DSDT01 00000001 MSFT 05000000)
Jul 2 07:53:21.024910 kernel: ACPI: FACS 0x000000003FFFE000 000040
Jul 2 07:53:21.024923 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 2 07:53:21.024935 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 2 07:53:21.024947 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 2 07:53:21.024959 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 2 07:53:21.024971 kernel: ACPI: SRAT 0x000000003FFD4000 0002D0 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 2 07:53:21.024986 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 2 07:53:21.024998 kernel: ACPI: FPDT 0x000000003FFD2000 000034 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 2 07:53:21.025010 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113]
Jul 2 07:53:21.025022 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4183]
Jul 2 07:53:21.025034 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f]
Jul 2 07:53:21.025046 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063]
Jul 2 07:53:21.025059 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f]
Jul 2 07:53:21.025071 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027]
Jul 2 07:53:21.025085 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057]
Jul 2 07:53:21.025097 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd42cf]
Jul 2 07:53:21.025110 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037]
Jul 2 07:53:21.025122 kernel: ACPI: Reserving FPDT table memory at [mem 0x3ffd2000-0x3ffd2033]
Jul 2 07:53:21.025134 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Jul 2 07:53:21.025146 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Jul 2 07:53:21.025158 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug
Jul 2 07:53:21.025179 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug
Jul 2 07:53:21.025192 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug
Jul 2 07:53:21.025207 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug
Jul 2 07:53:21.025219 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug
Jul 2 07:53:21.025232 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug
Jul 2 07:53:21.025244 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug
Jul 2 07:53:21.025256 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug
Jul 2 07:53:21.025268 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug
Jul 2 07:53:21.025280 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug
Jul 2 07:53:21.025293 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug
Jul 2 07:53:21.025305 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug
Jul 2 07:53:21.025320 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000000-0x1ffffffffffff] hotplug
Jul 2 07:53:21.025332 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2000000000000-0x3ffffffffffff] hotplug
Jul 2 07:53:21.025344 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x4000000000000-0x7ffffffffffff] hotplug
Jul 2 07:53:21.025356 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x8000000000000-0xfffffffffffff] hotplug
Jul 2 07:53:21.025369 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff]
Jul 2 07:53:21.025381 kernel: NODE_DATA(0) allocated [mem 0x2bfffa000-0x2bfffffff]
Jul 2 07:53:21.025393 kernel: Zone ranges:
Jul 2 07:53:21.025406 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jul 2 07:53:21.025418 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Jul 2 07:53:21.025432 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff]
Jul 2 07:53:21.025445 kernel: Movable zone start for each node
Jul 2 07:53:21.025457 kernel: Early memory node ranges
Jul 2 07:53:21.025469 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Jul 2 07:53:21.025497 kernel: node 0: [mem 0x0000000000100000-0x000000003ff40fff]
Jul 2 07:53:21.025510 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff]
Jul 2 07:53:21.025522 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff]
Jul 2 07:53:21.025535 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff]
Jul 2 07:53:21.025548 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jul 2 07:53:21.025563 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Jul 2 07:53:21.025575 kernel: On node 0, zone DMA32: 190 pages in unavailable ranges
Jul 2 07:53:21.025588 kernel: ACPI: PM-Timer IO Port: 0x408
Jul 2 07:53:21.025600 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1])
Jul 2 07:53:21.025613 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23
Jul 2 07:53:21.025625 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jul 2 07:53:21.025638 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jul 2 07:53:21.025650 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200
Jul 2 07:53:21.025662 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Jul 2 07:53:21.025678 kernel: [mem 0x40000000-0xffffffff] available for PCI devices
Jul 2 07:53:21.025690 kernel: Booting paravirtualized kernel on Hyper-V
Jul 2 07:53:21.025703 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jul 2 07:53:21.025716 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:2 nr_node_ids:1
Jul 2 07:53:21.025728 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u1048576
Jul 2 07:53:21.025741 kernel: pcpu-alloc: s188696 r8192 d32488 u1048576 alloc=1*2097152
Jul 2 07:53:21.025754 kernel: pcpu-alloc: [0] 0 1
Jul 2 07:53:21.025766 kernel: Hyper-V: PV spinlocks enabled
Jul 2 07:53:21.025779 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jul 2 07:53:21.025794 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2062618
Jul 2 07:53:21.025806 kernel: Policy zone: Normal
Jul 2 07:53:21.025820 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=d29251fe942de56b08103b03939b6e5af4108e76dc6080fe2498c5db43f16e82
Jul 2 07:53:21.025833 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 2 07:53:21.025846 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Jul 2 07:53:21.025859 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 2 07:53:21.025871 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 2 07:53:21.025884 kernel: Memory: 8079144K/8387460K available (12294K kernel code, 2276K rwdata, 13712K rodata, 47444K init, 4144K bss, 308056K reserved, 0K cma-reserved)
Jul 2 07:53:21.025900 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jul 2 07:53:21.025913 kernel: ftrace: allocating 34514 entries in 135 pages
Jul 2 07:53:21.025934 kernel: ftrace: allocated 135 pages with 4 groups
Jul 2 07:53:21.025950 kernel: rcu: Hierarchical RCU implementation.
Jul 2 07:53:21.025964 kernel: rcu: RCU event tracing is enabled.
Jul 2 07:53:21.025977 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jul 2 07:53:21.025991 kernel: Rude variant of Tasks RCU enabled.
Jul 2 07:53:21.026004 kernel: Tracing variant of Tasks RCU enabled.
Jul 2 07:53:21.026017 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 2 07:53:21.026031 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jul 2 07:53:21.026044 kernel: Using NULL legacy PIC
Jul 2 07:53:21.026060 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0
Jul 2 07:53:21.026073 kernel: Console: colour dummy device 80x25
Jul 2 07:53:21.026086 kernel: printk: console [tty1] enabled
Jul 2 07:53:21.026099 kernel: printk: console [ttyS0] enabled
Jul 2 07:53:21.026112 kernel: printk: bootconsole [earlyser0] disabled
Jul 2 07:53:21.026128 kernel: ACPI: Core revision 20210730
Jul 2 07:53:21.026141 kernel: Failed to register legacy timer interrupt
Jul 2 07:53:21.026155 kernel: APIC: Switch to symmetric I/O mode setup
Jul 2 07:53:21.026174 kernel: Hyper-V: Using IPI hypercalls
Jul 2 07:53:21.026187 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5187.81 BogoMIPS (lpj=2593906)
Jul 2 07:53:21.026200 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Jul 2 07:53:21.026213 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4
Jul 2 07:53:21.026226 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jul 2 07:53:21.026239 kernel: Spectre V2 : Mitigation: Retpolines
Jul 2 07:53:21.026252 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jul 2 07:53:21.026267 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Jul 2 07:53:21.026282 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Jul 2 07:53:21.026295 kernel: RETBleed: Vulnerable
Jul 2 07:53:21.026308 kernel: Speculative Store Bypass: Vulnerable
Jul 2 07:53:21.026321 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode
Jul 2 07:53:21.026334 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Jul 2 07:53:21.026347 kernel: GDS: Unknown: Dependent on hypervisor status
Jul 2 07:53:21.026360 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jul 2 07:53:21.026373 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jul 2 07:53:21.026387 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jul 2 07:53:21.026403 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Jul 2 07:53:21.026416 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Jul 2 07:53:21.026429 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Jul 2 07:53:21.026442 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jul 2 07:53:21.026455 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Jul 2 07:53:21.026469 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Jul 2 07:53:21.026481 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Jul 2 07:53:21.026495 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format.
Jul 2 07:53:21.026508 kernel: Freeing SMP alternatives memory: 32K
Jul 2 07:53:21.026521 kernel: pid_max: default: 32768 minimum: 301
Jul 2 07:53:21.026534 kernel: LSM: Security Framework initializing
Jul 2 07:53:21.026547 kernel: SELinux: Initializing.
Jul 2 07:53:21.026562 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jul 2 07:53:21.026575 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jul 2 07:53:21.026588 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7)
Jul 2 07:53:21.026602 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Jul 2 07:53:21.026615 kernel: signal: max sigframe size: 3632
Jul 2 07:53:21.026628 kernel: rcu: Hierarchical SRCU implementation.
Jul 2 07:53:21.026642 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jul 2 07:53:21.026655 kernel: smp: Bringing up secondary CPUs ...
Jul 2 07:53:21.026668 kernel: x86: Booting SMP configuration:
Jul 2 07:53:21.026682 kernel: .... node #0, CPUs: #1
Jul 2 07:53:21.026698 kernel: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
Jul 2 07:53:21.026711 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Jul 2 07:53:21.026725 kernel: smp: Brought up 1 node, 2 CPUs
Jul 2 07:53:21.026738 kernel: smpboot: Max logical packages: 1
Jul 2 07:53:21.026752 kernel: smpboot: Total of 2 processors activated (10375.62 BogoMIPS)
Jul 2 07:53:21.026765 kernel: devtmpfs: initialized
Jul 2 07:53:21.026778 kernel: x86/mm: Memory block size: 128MB
Jul 2 07:53:21.026791 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes)
Jul 2 07:53:21.026807 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 2 07:53:21.026821 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jul 2 07:53:21.026835 kernel: pinctrl core: initialized pinctrl subsystem
Jul 2 07:53:21.026848 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 2 07:53:21.026861 kernel: audit: initializing netlink subsys (disabled)
Jul 2 07:53:21.026875 kernel: audit: type=2000 audit(1719906799.023:1): state=initialized audit_enabled=0 res=1
Jul 2 07:53:21.026888 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 2 07:53:21.026902 kernel: thermal_sys: Registered thermal governor 'user_space'
Jul 2 07:53:21.026915 kernel: cpuidle: using governor menu
Jul 2 07:53:21.026930 kernel: ACPI: bus type PCI registered
Jul 2 07:53:21.026944 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 2 07:53:21.026957 kernel: dca service started, version 1.12.1
Jul 2 07:53:21.026970 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jul 2 07:53:21.026983 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Jul 2 07:53:21.026997 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Jul 2 07:53:21.027010 kernel: ACPI: Added _OSI(Module Device)
Jul 2 07:53:21.027023 kernel: ACPI: Added _OSI(Processor Device)
Jul 2 07:53:21.027036 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jul 2 07:53:21.027052 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 2 07:53:21.027065 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Jul 2 07:53:21.027078 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Jul 2 07:53:21.027091 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Jul 2 07:53:21.027105 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 2 07:53:21.027118 kernel: ACPI: Interpreter enabled
Jul 2 07:53:21.027131 kernel: ACPI: PM: (supports S0 S5)
Jul 2 07:53:21.027144 kernel: ACPI: Using IOAPIC for interrupt routing
Jul 2 07:53:21.027158 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jul 2 07:53:21.036202 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F
Jul 2 07:53:21.036221 kernel: iommu: Default domain type: Translated
Jul 2 07:53:21.036233 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jul 2 07:53:21.036247 kernel: vgaarb: loaded
Jul 2 07:53:21.036261 kernel: pps_core: LinuxPPS API ver. 1 registered
Jul 2 07:53:21.036276 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Jul 2 07:53:21.036289 kernel: PTP clock support registered
Jul 2 07:53:21.036302 kernel: Registered efivars operations
Jul 2 07:53:21.036316 kernel: PCI: Using ACPI for IRQ routing
Jul 2 07:53:21.036330 kernel: PCI: System does not support PCI
Jul 2 07:53:21.036348 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page
Jul 2 07:53:21.036361 kernel: VFS: Disk quotas dquot_6.6.0
Jul 2 07:53:21.036376 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 2 07:53:21.036390 kernel: pnp: PnP ACPI init
Jul 2 07:53:21.036404 kernel: pnp: PnP ACPI: found 3 devices
Jul 2 07:53:21.036417 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jul 2 07:53:21.036432 kernel: NET: Registered PF_INET protocol family
Jul 2 07:53:21.036446 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jul 2 07:53:21.036463 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Jul 2 07:53:21.036477 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 2 07:53:21.036492 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 2 07:53:21.036504 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Jul 2 07:53:21.036516 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Jul 2 07:53:21.036528 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jul 2 07:53:21.036538 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jul 2 07:53:21.036550 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 2 07:53:21.036562 kernel: NET: Registered PF_XDP protocol family
Jul 2 07:53:21.036578 kernel: PCI: CLS 0 bytes, default 64
Jul 2 07:53:21.036591 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Jul 2 07:53:21.036604 kernel: software IO TLB: mapped [mem 0x000000003a8ad000-0x000000003e8ad000] (64MB)
Jul 2 07:53:21.036618 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Jul 2 07:53:21.036632 kernel: Initialise system trusted keyrings
Jul 2 07:53:21.036645 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Jul 2 07:53:21.036659 kernel: Key type asymmetric registered
Jul 2 07:53:21.036672 kernel: Asymmetric key parser 'x509' registered
Jul 2 07:53:21.036686 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Jul 2 07:53:21.036702 kernel: io scheduler mq-deadline registered
Jul 2 07:53:21.036714 kernel: io scheduler kyber registered
Jul 2 07:53:21.036727 kernel: io scheduler bfq registered
Jul 2 07:53:21.036740 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jul 2 07:53:21.036754 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 2 07:53:21.036767 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jul 2 07:53:21.036781 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Jul 2 07:53:21.036792 kernel: i8042: PNP: No PS/2 controller found.
Jul 2 07:53:21.036962 kernel: rtc_cmos 00:02: registered as rtc0
Jul 2 07:53:21.037078 kernel: rtc_cmos 00:02: setting system clock to 2024-07-02T07:53:20 UTC (1719906800)
Jul 2 07:53:21.037200 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram
Jul 2 07:53:21.037218 kernel: fail to initialize ptp_kvm
Jul 2 07:53:21.037232 kernel: intel_pstate: CPU model not supported
Jul 2 07:53:21.037245 kernel: efifb: probing for efifb
Jul 2 07:53:21.037269 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Jul 2 07:53:21.037282 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Jul 2 07:53:21.037295 kernel: efifb: scrolling: redraw
Jul 2 07:53:21.037312 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Jul 2 07:53:21.037324 kernel: Console: switching to colour frame buffer device 128x48
Jul 2 07:53:21.037338 kernel: fb0: EFI VGA frame buffer device
Jul 2 07:53:21.037351 kernel: pstore: Registered efi as persistent store backend
Jul 2 07:53:21.037363 kernel: NET: Registered PF_INET6 protocol family
Jul 2 07:53:21.037376 kernel: Segment Routing with IPv6
Jul 2 07:53:21.037389 kernel: In-situ OAM (IOAM) with IPv6
Jul 2 07:53:21.037401 kernel: NET: Registered PF_PACKET protocol family
Jul 2 07:53:21.037414 kernel: Key type dns_resolver registered
Jul 2 07:53:21.037429 kernel: IPI shorthand broadcast: enabled
Jul 2 07:53:21.037442 kernel: sched_clock: Marking stable (715951100, 24111700)->(916978500, -176915700)
Jul 2 07:53:21.037455 kernel: registered taskstats version 1
Jul 2 07:53:21.037467 kernel: Loading compiled-in X.509 certificates
Jul 2 07:53:21.037481 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.161-flatcar: a1ce693884775675566f1ed116e36d15950b9a42'
Jul 2 07:53:21.037493 kernel: Key type .fscrypt registered
Jul 2 07:53:21.037506 kernel: Key type fscrypt-provisioning registered
Jul 2 07:53:21.037519 kernel: pstore: Using crash dump compression: deflate
Jul 2 07:53:21.037534 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 2 07:53:21.037547 kernel: ima: Allocated hash algorithm: sha1
Jul 2 07:53:21.037560 kernel: ima: No architecture policies found
Jul 2 07:53:21.037572 kernel: clk: Disabling unused clocks
Jul 2 07:53:21.037585 kernel: Freeing unused kernel image (initmem) memory: 47444K
Jul 2 07:53:21.037598 kernel: Write protecting the kernel read-only data: 28672k
Jul 2 07:53:21.037611 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
Jul 2 07:53:21.037625 kernel: Freeing unused kernel image (rodata/data gap) memory: 624K
Jul 2 07:53:21.037637 kernel: Run /init as init process
Jul 2 07:53:21.037650 kernel: with arguments:
Jul 2 07:53:21.037665 kernel: /init
Jul 2 07:53:21.037678 kernel: with environment:
Jul 2 07:53:21.037691 kernel: HOME=/
Jul 2 07:53:21.037703 kernel: TERM=linux
Jul 2 07:53:21.037716 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 2 07:53:21.037732 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Jul 2 07:53:21.037748 systemd[1]: Detected virtualization microsoft.
Jul 2 07:53:21.037765 systemd[1]: Detected architecture x86-64.
Jul 2 07:53:21.037778 systemd[1]: Running in initrd.
Jul 2 07:53:21.037792 systemd[1]: No hostname configured, using default hostname.
Jul 2 07:53:21.037805 systemd[1]: Hostname set to .
Jul 2 07:53:21.037819 systemd[1]: Initializing machine ID from random generator.
Jul 2 07:53:21.037833 systemd[1]: Queued start job for default target initrd.target.
Jul 2 07:53:21.037847 systemd[1]: Started systemd-ask-password-console.path.
Jul 2 07:53:21.037860 systemd[1]: Reached target cryptsetup.target.
Jul 2 07:53:21.037874 systemd[1]: Reached target paths.target.
Jul 2 07:53:21.037889 systemd[1]: Reached target slices.target.
Jul 2 07:53:21.037903 systemd[1]: Reached target swap.target.
Jul 2 07:53:21.037916 systemd[1]: Reached target timers.target.
Jul 2 07:53:21.037931 systemd[1]: Listening on iscsid.socket.
Jul 2 07:53:21.037945 systemd[1]: Listening on iscsiuio.socket.
Jul 2 07:53:21.037958 systemd[1]: Listening on systemd-journald-audit.socket.
Jul 2 07:53:21.037972 systemd[1]: Listening on systemd-journald-dev-log.socket.
Jul 2 07:53:21.037988 systemd[1]: Listening on systemd-journald.socket.
Jul 2 07:53:21.038002 systemd[1]: Listening on systemd-networkd.socket.
Jul 2 07:53:21.038016 systemd[1]: Listening on systemd-udevd-control.socket.
Jul 2 07:53:21.038029 systemd[1]: Listening on systemd-udevd-kernel.socket.
Jul 2 07:53:21.038043 systemd[1]: Reached target sockets.target.
Jul 2 07:53:21.038057 systemd[1]: Starting kmod-static-nodes.service...
Jul 2 07:53:21.038070 systemd[1]: Finished network-cleanup.service.
Jul 2 07:53:21.038084 systemd[1]: Starting systemd-fsck-usr.service...
Jul 2 07:53:21.038098 systemd[1]: Starting systemd-journald.service...
Jul 2 07:53:21.038114 systemd[1]: Starting systemd-modules-load.service...
Jul 2 07:53:21.038127 systemd[1]: Starting systemd-resolved.service...
Jul 2 07:53:21.038141 systemd[1]: Starting systemd-vconsole-setup.service...
Jul 2 07:53:21.038158 systemd-journald[183]: Journal started
Jul 2 07:53:21.038232 systemd-journald[183]: Runtime Journal (/run/log/journal/df71819768054876a207564af9413011) is 8.0M, max 159.0M, 151.0M free.
Jul 2 07:53:21.008058 systemd-modules-load[184]: Inserted module 'overlay'
Jul 2 07:53:21.050184 systemd[1]: Started systemd-journald.service.
Jul 2 07:53:21.062135 systemd-resolved[185]: Positive Trust Anchors:
Jul 2 07:53:21.062154 systemd-resolved[185]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 2 07:53:21.062199 systemd-resolved[185]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Jul 2 07:53:21.162220 kernel: audit: type=1130 audit(1719906801.065:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:53:21.162256 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 2 07:53:21.162273 kernel: Bridge firewalling registered
Jul 2 07:53:21.162289 kernel: audit: type=1130 audit(1719906801.093:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:53:21.162305 kernel: audit: type=1130 audit(1719906801.096:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:53:21.162321 kernel: audit: type=1130 audit(1719906801.098:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:53:21.162336 kernel: audit: type=1130 audit(1719906801.102:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:53:21.065000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:53:21.093000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:53:21.096000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:53:21.098000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:53:21.102000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:53:21.064911 systemd-resolved[185]: Defaulting to hostname 'linux'.
Jul 2 07:53:21.166520 kernel: SCSI subsystem initialized
Jul 2 07:53:21.066104 systemd[1]: Started systemd-resolved.service.
Jul 2 07:53:21.093633 systemd[1]: Finished kmod-static-nodes.service.
Jul 2 07:53:21.096385 systemd[1]: Finished systemd-fsck-usr.service.
Jul 2 07:53:21.098709 systemd[1]: Finished systemd-vconsole-setup.service.
Jul 2 07:53:21.102272 systemd-modules-load[184]: Inserted module 'br_netfilter'
Jul 2 07:53:21.102709 systemd[1]: Reached target nss-lookup.target.
Jul 2 07:53:21.109151 systemd[1]: Starting dracut-cmdline-ask.service...
Jul 2 07:53:21.195478 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jul 2 07:53:21.195499 kernel: device-mapper: uevent: version 1.0.3
Jul 2 07:53:21.195518 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Jul 2 07:53:21.199022 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Jul 2 07:53:21.204704 systemd-modules-load[184]: Inserted module 'dm_multipath'
Jul 2 07:53:21.205497 systemd[1]: Finished systemd-modules-load.service.
Jul 2 07:53:21.212286 systemd[1]: Starting systemd-sysctl.service...
Jul 2 07:53:21.211000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:53:21.224131 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Jul 2 07:53:21.229502 kernel: audit: type=1130 audit(1719906801.211:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:53:21.231943 systemd[1]: Finished systemd-sysctl.service.
Jul 2 07:53:21.231000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:53:21.233000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:53:21.257137 systemd[1]: Finished dracut-cmdline-ask.service.
Jul 2 07:53:21.263735 kernel: audit: type=1130 audit(1719906801.231:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:53:21.263764 kernel: audit: type=1130 audit(1719906801.233:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:53:21.259398 systemd[1]: Starting dracut-cmdline.service...
Jul 2 07:53:21.258000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:53:21.277303 dracut-cmdline[205]: dracut-dracut-053
Jul 2 07:53:21.277303 dracut-cmdline[205]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=d29251fe942de56b08103b03939b6e5af4108e76dc6080fe2498c5db43f16e82
Jul 2 07:53:21.292600 kernel: audit: type=1130 audit(1719906801.258:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:53:21.332189 kernel: Loading iSCSI transport class v2.0-870.
Jul 2 07:53:21.351198 kernel: iscsi: registered transport (tcp)
Jul 2 07:53:21.377394 kernel: iscsi: registered transport (qla4xxx)
Jul 2 07:53:21.377450 kernel: QLogic iSCSI HBA Driver
Jul 2 07:53:21.407252 systemd[1]: Finished dracut-cmdline.service.
Jul 2 07:53:21.410436 systemd[1]: Starting dracut-pre-udev.service...
Jul 2 07:53:21.409000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=?
res=success' Jul 2 07:53:21.465192 kernel: raid6: avx512x4 gen() 18303 MB/s Jul 2 07:53:21.484187 kernel: raid6: avx512x4 xor() 7229 MB/s Jul 2 07:53:21.504183 kernel: raid6: avx512x2 gen() 18366 MB/s Jul 2 07:53:21.524189 kernel: raid6: avx512x2 xor() 29808 MB/s Jul 2 07:53:21.544182 kernel: raid6: avx512x1 gen() 18213 MB/s Jul 2 07:53:21.564181 kernel: raid6: avx512x1 xor() 27198 MB/s Jul 2 07:53:21.585184 kernel: raid6: avx2x4 gen() 18303 MB/s Jul 2 07:53:21.605183 kernel: raid6: avx2x4 xor() 6788 MB/s Jul 2 07:53:21.625181 kernel: raid6: avx2x2 gen() 18149 MB/s Jul 2 07:53:21.646186 kernel: raid6: avx2x2 xor() 22259 MB/s Jul 2 07:53:21.666182 kernel: raid6: avx2x1 gen() 13847 MB/s Jul 2 07:53:21.686182 kernel: raid6: avx2x1 xor() 19484 MB/s Jul 2 07:53:21.706183 kernel: raid6: sse2x4 gen() 11709 MB/s Jul 2 07:53:21.726182 kernel: raid6: sse2x4 xor() 6059 MB/s Jul 2 07:53:21.745194 kernel: raid6: sse2x2 gen() 12977 MB/s Jul 2 07:53:21.765183 kernel: raid6: sse2x2 xor() 7563 MB/s Jul 2 07:53:21.785181 kernel: raid6: sse2x1 gen() 11680 MB/s Jul 2 07:53:21.807732 kernel: raid6: sse2x1 xor() 5925 MB/s Jul 2 07:53:21.807761 kernel: raid6: using algorithm avx512x2 gen() 18366 MB/s Jul 2 07:53:21.807773 kernel: raid6: .... xor() 29808 MB/s, rmw enabled Jul 2 07:53:21.810816 kernel: raid6: using avx512x2 recovery algorithm Jul 2 07:53:21.830196 kernel: xor: automatically using best checksumming function avx Jul 2 07:53:21.925197 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Jul 2 07:53:21.933467 systemd[1]: Finished dracut-pre-udev.service. Jul 2 07:53:21.937000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:21.937000 audit: BPF prog-id=7 op=LOAD Jul 2 07:53:21.937000 audit: BPF prog-id=8 op=LOAD Jul 2 07:53:21.938433 systemd[1]: Starting systemd-udevd.service... 
Jul 2 07:53:21.953084 systemd-udevd[383]: Using default interface naming scheme 'v252'. Jul 2 07:53:21.959954 systemd[1]: Started systemd-udevd.service. Jul 2 07:53:21.962000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:21.963085 systemd[1]: Starting dracut-pre-trigger.service... Jul 2 07:53:21.981855 dracut-pre-trigger[387]: rd.md=0: removing MD RAID activation Jul 2 07:53:22.011387 systemd[1]: Finished dracut-pre-trigger.service. Jul 2 07:53:22.013000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:22.014662 systemd[1]: Starting systemd-udev-trigger.service... Jul 2 07:53:22.051562 systemd[1]: Finished systemd-udev-trigger.service. Jul 2 07:53:22.057000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:22.101189 kernel: cryptd: max_cpu_qlen set to 1000 Jul 2 07:53:22.125641 kernel: hv_vmbus: Vmbus version:5.2 Jul 2 07:53:22.145188 kernel: hid: raw HID events driver (C) Jiri Kosina Jul 2 07:53:22.157718 kernel: hv_vmbus: registering driver hid_hyperv Jul 2 07:53:22.157773 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0 Jul 2 07:53:22.159182 kernel: hid 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Jul 2 07:53:22.174187 kernel: AVX2 version of gcm_enc/dec engaged. 
Jul 2 07:53:22.174227 kernel: AES CTR mode by8 optimization enabled Jul 2 07:53:22.180106 kernel: hv_vmbus: registering driver hv_storvsc Jul 2 07:53:22.189801 kernel: hv_vmbus: registering driver hyperv_keyboard Jul 2 07:53:22.189837 kernel: hv_vmbus: registering driver hv_netvsc Jul 2 07:53:22.189851 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1 Jul 2 07:53:22.206704 kernel: scsi host1: storvsc_host_t Jul 2 07:53:22.206953 kernel: scsi host0: storvsc_host_t Jul 2 07:53:22.214189 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Jul 2 07:53:22.219183 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0 Jul 2 07:53:22.249763 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Jul 2 07:53:22.250071 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jul 2 07:53:22.257899 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Jul 2 07:53:22.258060 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Jul 2 07:53:22.258193 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Jul 2 07:53:22.258316 kernel: sd 0:0:0:0: [sda] Write Protect is off Jul 2 07:53:22.265739 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Jul 2 07:53:22.265902 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Jul 2 07:53:22.275185 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 2 07:53:22.281107 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Jul 2 07:53:22.418351 kernel: hv_netvsc 000d3add-8a54-000d-3add-8a54000d3add eth0: VF slot 1 added Jul 2 07:53:22.428195 kernel: hv_vmbus: registering driver hv_pci Jul 2 07:53:22.428248 kernel: hv_pci f959c523-5d54-4b95-8fb8-4c2224d1fac4: PCI VMBus probing: Using version 0x10004 Jul 2 07:53:22.443955 kernel: hv_pci f959c523-5d54-4b95-8fb8-4c2224d1fac4: PCI host bridge to bus 5d54:00 Jul 2 07:53:22.444139 kernel: pci_bus 5d54:00: root bus resource [mem 
0xfe0000000-0xfe00fffff window] Jul 2 07:53:22.444285 kernel: pci_bus 5d54:00: No busn resource found for root bus, will use [bus 00-ff] Jul 2 07:53:22.453293 kernel: pci 5d54:00:02.0: [15b3:1016] type 00 class 0x020000 Jul 2 07:53:22.461825 kernel: pci 5d54:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref] Jul 2 07:53:22.478188 kernel: pci 5d54:00:02.0: enabling Extended Tags Jul 2 07:53:22.496683 kernel: pci 5d54:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 5d54:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link) Jul 2 07:53:22.496915 kernel: pci_bus 5d54:00: busn_res: [bus 00-ff] end is updated to 00 Jul 2 07:53:22.497038 kernel: pci 5d54:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref] Jul 2 07:53:22.597199 kernel: mlx5_core 5d54:00:02.0: firmware version: 14.30.1284 Jul 2 07:53:22.691640 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Jul 2 07:53:22.757195 kernel: mlx5_core 5d54:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0) Jul 2 07:53:22.757441 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (436) Jul 2 07:53:22.776336 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Jul 2 07:53:22.857372 kernel: mlx5_core 5d54:00:02.0: Supported tc offload range - chains: 1, prios: 1 Jul 2 07:53:22.857600 kernel: mlx5_core 5d54:00:02.0: mlx5e_tc_post_act_init:40:(pid 189): firmware level support is missing Jul 2 07:53:22.867450 kernel: hv_netvsc 000d3add-8a54-000d-3add-8a54000d3add eth0: VF registering: eth1 Jul 2 07:53:22.867625 kernel: mlx5_core 5d54:00:02.0 eth1: joined to eth0 Jul 2 07:53:22.877191 kernel: mlx5_core 5d54:00:02.0 enP23892s1: renamed from eth1 Jul 2 07:53:22.928071 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Jul 2 07:53:22.933477 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. 
Jul 2 07:53:22.945222 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Jul 2 07:53:22.953838 systemd[1]: Starting disk-uuid.service... Jul 2 07:53:22.969197 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 2 07:53:22.986187 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 2 07:53:23.980696 disk-uuid[559]: The operation has completed successfully. Jul 2 07:53:23.983324 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 2 07:53:24.053247 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 2 07:53:24.053367 systemd[1]: Finished disk-uuid.service. Jul 2 07:53:24.057000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:24.057000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:24.068113 systemd[1]: Starting verity-setup.service... Jul 2 07:53:24.105466 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Jul 2 07:53:24.368256 systemd[1]: Found device dev-mapper-usr.device. Jul 2 07:53:24.375089 systemd[1]: Finished verity-setup.service. Jul 2 07:53:24.378000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:24.379642 systemd[1]: Mounting sysusr-usr.mount... Jul 2 07:53:24.454976 systemd[1]: Mounted sysusr-usr.mount. Jul 2 07:53:24.458206 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Jul 2 07:53:24.458323 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Jul 2 07:53:24.462296 systemd[1]: Starting ignition-setup.service... 
Jul 2 07:53:24.467015 systemd[1]: Starting parse-ip-for-networkd.service... Jul 2 07:53:24.486987 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jul 2 07:53:24.487031 kernel: BTRFS info (device sda6): using free space tree Jul 2 07:53:24.487050 kernel: BTRFS info (device sda6): has skinny extents Jul 2 07:53:24.536145 systemd[1]: Finished parse-ip-for-networkd.service. Jul 2 07:53:24.539000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:24.540000 audit: BPF prog-id=9 op=LOAD Jul 2 07:53:24.541077 systemd[1]: Starting systemd-networkd.service... Jul 2 07:53:24.554003 systemd[1]: mnt-oem.mount: Deactivated successfully. Jul 2 07:53:24.569070 systemd-networkd[802]: lo: Link UP Jul 2 07:53:24.569212 systemd-networkd[802]: lo: Gained carrier Jul 2 07:53:24.572999 systemd-networkd[802]: Enumeration completed Jul 2 07:53:24.575266 systemd[1]: Started systemd-networkd.service. Jul 2 07:53:24.578000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:24.579022 systemd[1]: Reached target network.target. Jul 2 07:53:24.583257 systemd[1]: Starting iscsiuio.service... Jul 2 07:53:24.587915 systemd-networkd[802]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 2 07:53:24.592040 systemd[1]: Started iscsiuio.service. Jul 2 07:53:24.595000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:24.596419 systemd[1]: Starting iscsid.service... 
Jul 2 07:53:24.601432 iscsid[809]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Jul 2 07:53:24.601432 iscsid[809]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Jul 2 07:53:24.601432 iscsid[809]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Jul 2 07:53:24.601432 iscsid[809]: If using hardware iscsi like qla4xxx this message can be ignored. Jul 2 07:53:24.601432 iscsid[809]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Jul 2 07:53:24.620000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:24.632126 iscsid[809]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Jul 2 07:53:24.606392 systemd[1]: Started iscsid.service. Jul 2 07:53:24.622998 systemd[1]: Starting dracut-initqueue.service... Jul 2 07:53:24.635595 systemd[1]: Finished ignition-setup.service. Jul 2 07:53:24.642000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:24.643894 systemd[1]: Starting ignition-fetch-offline.service... Jul 2 07:53:24.660724 kernel: mlx5_core 5d54:00:02.0 enP23892s1: Link up Jul 2 07:53:24.654000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=?
res=success' Jul 2 07:53:24.649315 systemd[1]: Finished dracut-initqueue.service. Jul 2 07:53:24.654290 systemd[1]: Reached target remote-fs-pre.target. Jul 2 07:53:24.656528 systemd[1]: Reached target remote-cryptsetup.target. Jul 2 07:53:24.660700 systemd[1]: Reached target remote-fs.target. Jul 2 07:53:24.670514 systemd[1]: Starting dracut-pre-mount.service... Jul 2 07:53:24.679921 systemd[1]: Finished dracut-pre-mount.service. Jul 2 07:53:24.683000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:24.694028 kernel: hv_netvsc 000d3add-8a54-000d-3add-8a54000d3add eth0: Data path switched to VF: enP23892s1 Jul 2 07:53:24.694366 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jul 2 07:53:24.694577 systemd-networkd[802]: enP23892s1: Link UP Jul 2 07:53:24.696774 systemd-networkd[802]: eth0: Link UP Jul 2 07:53:24.698701 systemd-networkd[802]: eth0: Gained carrier Jul 2 07:53:24.704746 systemd-networkd[802]: enP23892s1: Gained carrier Jul 2 07:53:24.733260 systemd-networkd[802]: eth0: DHCPv4 address 10.200.8.39/24, gateway 10.200.8.1 acquired from 168.63.129.16 Jul 2 07:53:26.576438 systemd-networkd[802]: eth0: Gained IPv6LL Jul 2 07:53:28.029734 ignition[817]: Ignition 2.14.0 Jul 2 07:53:28.029753 ignition[817]: Stage: fetch-offline Jul 2 07:53:28.029856 ignition[817]: reading system config file "/usr/lib/ignition/base.d/base.ign" Jul 2 07:53:28.029912 ignition[817]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Jul 2 07:53:28.118710 ignition[817]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 2 07:53:28.118923 ignition[817]: parsed url from cmdline: "" Jul 2 07:53:28.120338 systemd[1]: Finished ignition-fetch-offline.service. 
Jul 2 07:53:28.123000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:28.118928 ignition[817]: no config URL provided Jul 2 07:53:28.152541 kernel: kauditd_printk_skb: 18 callbacks suppressed Jul 2 07:53:28.152597 kernel: audit: type=1130 audit(1719906808.123:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:28.124846 systemd[1]: Starting ignition-fetch.service... Jul 2 07:53:28.118936 ignition[817]: reading system config file "/usr/lib/ignition/user.ign" Jul 2 07:53:28.118946 ignition[817]: no config at "/usr/lib/ignition/user.ign" Jul 2 07:53:28.118952 ignition[817]: failed to fetch config: resource requires networking Jul 2 07:53:28.119370 ignition[817]: Ignition finished successfully Jul 2 07:53:28.151579 ignition[830]: Ignition 2.14.0 Jul 2 07:53:28.151585 ignition[830]: Stage: fetch Jul 2 07:53:28.151689 ignition[830]: reading system config file "/usr/lib/ignition/base.d/base.ign" Jul 2 07:53:28.151711 ignition[830]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Jul 2 07:53:28.155334 ignition[830]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 2 07:53:28.156448 ignition[830]: parsed url from cmdline: "" Jul 2 07:53:28.156452 ignition[830]: no config URL provided Jul 2 07:53:28.156460 ignition[830]: reading system config file "/usr/lib/ignition/user.ign" Jul 2 07:53:28.156475 ignition[830]: no config at "/usr/lib/ignition/user.ign" Jul 2 07:53:28.156516 ignition[830]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Jul 2 07:53:28.244241 ignition[830]: GET result: OK Jul 2 
07:53:28.244397 ignition[830]: config has been read from IMDS userdata Jul 2 07:53:28.244438 ignition[830]: parsing config with SHA512: 5aca04313eb92088e8588e17273316382bd40d0d9eeab0189380a86dda6428d882bbf475a77f3fac25d708fb7e78994e541417f4391a7a03c22db0c13c4bbade Jul 2 07:53:28.248760 unknown[830]: fetched base config from "system" Jul 2 07:53:28.249512 ignition[830]: fetch: fetch complete Jul 2 07:53:28.255000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:28.248771 unknown[830]: fetched base config from "system" Jul 2 07:53:28.269396 kernel: audit: type=1130 audit(1719906808.255:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:28.249520 ignition[830]: fetch: fetch passed Jul 2 07:53:28.248778 unknown[830]: fetched user config from "azure" Jul 2 07:53:28.249573 ignition[830]: Ignition finished successfully Jul 2 07:53:28.251365 systemd[1]: Finished ignition-fetch.service. Jul 2 07:53:28.282916 ignition[836]: Ignition 2.14.0 Jul 2 07:53:28.272595 systemd[1]: Starting ignition-kargs.service... Jul 2 07:53:28.282923 ignition[836]: Stage: kargs Jul 2 07:53:28.283037 ignition[836]: reading system config file "/usr/lib/ignition/base.d/base.ign" Jul 2 07:53:28.283060 ignition[836]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Jul 2 07:53:28.291078 systemd[1]: Finished ignition-kargs.service. Jul 2 07:53:28.293000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 07:53:28.287819 ignition[836]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 2 07:53:28.312673 kernel: audit: type=1130 audit(1719906808.293:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:28.294625 systemd[1]: Starting ignition-disks.service... Jul 2 07:53:28.289368 ignition[836]: kargs: kargs passed Jul 2 07:53:28.289427 ignition[836]: Ignition finished successfully Jul 2 07:53:28.315367 ignition[842]: Ignition 2.14.0 Jul 2 07:53:28.315374 ignition[842]: Stage: disks Jul 2 07:53:28.315496 ignition[842]: reading system config file "/usr/lib/ignition/base.d/base.ign" Jul 2 07:53:28.315521 ignition[842]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Jul 2 07:53:28.325088 systemd[1]: Finished ignition-disks.service. Jul 2 07:53:28.326000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:28.320846 ignition[842]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 2 07:53:28.344141 kernel: audit: type=1130 audit(1719906808.326:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:28.327042 systemd[1]: Reached target initrd-root-device.target. Jul 2 07:53:28.322683 ignition[842]: disks: disks passed Jul 2 07:53:28.339518 systemd[1]: Reached target local-fs-pre.target. Jul 2 07:53:28.322732 ignition[842]: Ignition finished successfully Jul 2 07:53:28.344175 systemd[1]: Reached target local-fs.target. Jul 2 07:53:28.347880 systemd[1]: Reached target sysinit.target. 
Jul 2 07:53:28.360112 systemd[1]: Reached target basic.target. Jul 2 07:53:28.364618 systemd[1]: Starting systemd-fsck-root.service... Jul 2 07:53:28.427185 systemd-fsck[850]: ROOT: clean, 614/7326000 files, 481076/7359488 blocks Jul 2 07:53:28.452352 kernel: audit: type=1130 audit(1719906808.435:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:28.435000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:28.433519 systemd[1]: Finished systemd-fsck-root.service. Jul 2 07:53:28.450240 systemd[1]: Mounting sysroot.mount... Jul 2 07:53:28.471195 kernel: EXT4-fs (sda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Jul 2 07:53:28.471554 systemd[1]: Mounted sysroot.mount. Jul 2 07:53:28.475162 systemd[1]: Reached target initrd-root-fs.target. Jul 2 07:53:28.510257 systemd[1]: Mounting sysroot-usr.mount... Jul 2 07:53:28.516732 systemd[1]: Starting flatcar-metadata-hostname.service... Jul 2 07:53:28.521518 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 2 07:53:28.521558 systemd[1]: Reached target ignition-diskful.target. Jul 2 07:53:28.528547 systemd[1]: Mounted sysroot-usr.mount. Jul 2 07:53:28.592212 systemd[1]: Mounting sysroot-usr-share-oem.mount... Jul 2 07:53:28.598197 systemd[1]: Starting initrd-setup-root.service... 
Jul 2 07:53:28.609198 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (861) Jul 2 07:53:28.617798 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jul 2 07:53:28.617844 kernel: BTRFS info (device sda6): using free space tree Jul 2 07:53:28.617856 kernel: BTRFS info (device sda6): has skinny extents Jul 2 07:53:28.625252 initrd-setup-root[866]: cut: /sysroot/etc/passwd: No such file or directory Jul 2 07:53:28.629753 systemd[1]: Mounted sysroot-usr-share-oem.mount. Jul 2 07:53:28.661962 initrd-setup-root[892]: cut: /sysroot/etc/group: No such file or directory Jul 2 07:53:28.687849 initrd-setup-root[900]: cut: /sysroot/etc/shadow: No such file or directory Jul 2 07:53:28.695055 initrd-setup-root[908]: cut: /sysroot/etc/gshadow: No such file or directory Jul 2 07:53:29.187814 systemd[1]: Finished initrd-setup-root.service. Jul 2 07:53:29.192000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:29.193081 systemd[1]: Starting ignition-mount.service... Jul 2 07:53:29.210556 kernel: audit: type=1130 audit(1719906809.192:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:29.207751 systemd[1]: Starting sysroot-boot.service... Jul 2 07:53:29.211804 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. Jul 2 07:53:29.211898 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. Jul 2 07:53:29.233836 systemd[1]: Finished sysroot-boot.service. Jul 2 07:53:29.235000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 07:53:29.247376 kernel: audit: type=1130 audit(1719906809.235:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:29.292878 ignition[929]: INFO : Ignition 2.14.0 Jul 2 07:53:29.292878 ignition[929]: INFO : Stage: mount Jul 2 07:53:29.296632 ignition[929]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Jul 2 07:53:29.296632 ignition[929]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Jul 2 07:53:29.308106 ignition[929]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 2 07:53:29.311140 ignition[929]: INFO : mount: mount passed Jul 2 07:53:29.311140 ignition[929]: INFO : Ignition finished successfully Jul 2 07:53:29.314000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:29.311952 systemd[1]: Finished ignition-mount.service. Jul 2 07:53:29.328872 kernel: audit: type=1130 audit(1719906809.314:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 07:53:30.099331 coreos-metadata[860]: Jul 02 07:53:30.099 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jul 2 07:53:30.118730 coreos-metadata[860]: Jul 02 07:53:30.118 INFO Fetch successful Jul 2 07:53:30.152560 coreos-metadata[860]: Jul 02 07:53:30.152 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Jul 2 07:53:30.170305 coreos-metadata[860]: Jul 02 07:53:30.170 INFO Fetch successful Jul 2 07:53:30.188162 coreos-metadata[860]: Jul 02 07:53:30.188 INFO wrote hostname ci-3510.3.5-a-61dd50c322 to /sysroot/etc/hostname Jul 2 07:53:30.195000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:30.190473 systemd[1]: Finished flatcar-metadata-hostname.service. Jul 2 07:53:30.209035 kernel: audit: type=1130 audit(1719906810.195:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:30.196417 systemd[1]: Starting ignition-files.service... Jul 2 07:53:30.215594 systemd[1]: Mounting sysroot-usr-share-oem.mount... Jul 2 07:53:30.233193 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (939) Jul 2 07:53:30.233230 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jul 2 07:53:30.240273 kernel: BTRFS info (device sda6): using free space tree Jul 2 07:53:30.240293 kernel: BTRFS info (device sda6): has skinny extents Jul 2 07:53:30.250459 systemd[1]: Mounted sysroot-usr-share-oem.mount. 
Jul 2 07:53:30.262930 ignition[958]: INFO : Ignition 2.14.0
Jul 2 07:53:30.262930 ignition[958]: INFO : Stage: files
Jul 2 07:53:30.266771 ignition[958]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Jul 2 07:53:30.266771 ignition[958]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Jul 2 07:53:30.278261 ignition[958]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 2 07:53:30.299230 ignition[958]: DEBUG : files: compiled without relabeling support, skipping
Jul 2 07:53:30.302562 ignition[958]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jul 2 07:53:30.302562 ignition[958]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jul 2 07:53:30.360313 ignition[958]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jul 2 07:53:30.364596 ignition[958]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jul 2 07:53:30.383833 unknown[958]: wrote ssh authorized keys file for user: core
Jul 2 07:53:30.386567 ignition[958]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jul 2 07:53:30.407402 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jul 2 07:53:30.412969 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Jul 2 07:53:30.495249 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jul 2 07:53:30.641817 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jul 2 07:53:30.646984 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jul 2 07:53:30.650929 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Jul 2 07:53:31.261297 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jul 2 07:53:31.406187 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jul 2 07:53:31.411432 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jul 2 07:53:31.411432 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jul 2 07:53:31.411432 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jul 2 07:53:31.411432 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jul 2 07:53:31.411432 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 2 07:53:31.411432 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 2 07:53:31.411432 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 2 07:53:31.411432 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 2 07:53:31.411432 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jul 2 07:53:31.411432 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jul 2 07:53:31.411432 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw"
Jul 2 07:53:31.411432 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw"
Jul 2 07:53:31.411432 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/etc/systemd/system/waagent.service"
Jul 2 07:53:31.411432 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(b): oem config not found in "/usr/share/oem", looking on oem partition
Jul 2 07:53:31.477295 kernel: BTRFS info: devid 1 device path /dev/sda6 changed to /dev/disk/by-label/OEM scanned by ignition (961)
Jul 2 07:53:31.477332 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(c): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem4203383556"
Jul 2 07:53:31.477332 ignition[958]: CRITICAL : files: createFilesystemsFiles: createFiles: op(b): op(c): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem4203383556": device or resource busy
Jul 2 07:53:31.477332 ignition[958]: ERROR : files: createFilesystemsFiles: createFiles: op(b): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem4203383556", trying btrfs: device or resource busy
Jul 2 07:53:31.477332 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(d): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem4203383556"
Jul 2 07:53:31.497280 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(d): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem4203383556"
Jul 2 07:53:31.497280 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(e): [started] unmounting "/mnt/oem4203383556"
Jul 2 07:53:31.497280 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(e): [finished] unmounting "/mnt/oem4203383556"
Jul 2 07:53:31.497280 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/etc/systemd/system/waagent.service"
Jul 2 07:53:31.497280 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/etc/systemd/system/nvidia.service"
Jul 2 07:53:31.497280 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(f): oem config not found in "/usr/share/oem", looking on oem partition
Jul 2 07:53:31.488260 systemd[1]: mnt-oem4203383556.mount: Deactivated successfully.
Jul 2 07:53:31.524489 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(10): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1531912631"
Jul 2 07:53:31.524489 ignition[958]: CRITICAL : files: createFilesystemsFiles: createFiles: op(f): op(10): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1531912631": device or resource busy
Jul 2 07:53:31.524489 ignition[958]: ERROR : files: createFilesystemsFiles: createFiles: op(f): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem1531912631", trying btrfs: device or resource busy
Jul 2 07:53:31.524489 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(11): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1531912631"
Jul 2 07:53:31.524489 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(11): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1531912631"
Jul 2 07:53:31.524489 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(12): [started] unmounting "/mnt/oem1531912631"
Jul 2 07:53:31.524489 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(12): [finished] unmounting "/mnt/oem1531912631"
Jul 2 07:53:31.524489 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/etc/systemd/system/nvidia.service"
Jul 2 07:53:31.524489 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(13): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw"
Jul 2 07:53:31.524489 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(13): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.28.7-x86-64.raw: attempt #1
Jul 2 07:53:31.509624 systemd[1]: mnt-oem1531912631.mount: Deactivated successfully.
Jul 2 07:53:32.010755 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(13): GET result: OK
Jul 2 07:53:32.378894 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(13): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw"
Jul 2 07:53:32.378894 ignition[958]: INFO : files: op(14): [started] processing unit "waagent.service"
Jul 2 07:53:32.378894 ignition[958]: INFO : files: op(14): [finished] processing unit "waagent.service"
Jul 2 07:53:32.378894 ignition[958]: INFO : files: op(15): [started] processing unit "nvidia.service"
Jul 2 07:53:32.378894 ignition[958]: INFO : files: op(15): [finished] processing unit "nvidia.service"
Jul 2 07:53:32.378894 ignition[958]: INFO : files: op(16): [started] processing unit "prepare-helm.service"
Jul 2 07:53:32.411575 kernel: audit: type=1130 audit(1719906812.391:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:53:32.391000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:53:32.387622 systemd[1]: Finished ignition-files.service.
Jul 2 07:53:32.413678 ignition[958]: INFO : files: op(16): op(17): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 2 07:53:32.413678 ignition[958]: INFO : files: op(16): op(17): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 2 07:53:32.413678 ignition[958]: INFO : files: op(16): [finished] processing unit "prepare-helm.service"
Jul 2 07:53:32.413678 ignition[958]: INFO : files: op(18): [started] setting preset to enabled for "prepare-helm.service"
Jul 2 07:53:32.413678 ignition[958]: INFO : files: op(18): [finished] setting preset to enabled for "prepare-helm.service"
Jul 2 07:53:32.413678 ignition[958]: INFO : files: op(19): [started] setting preset to enabled for "waagent.service"
Jul 2 07:53:32.413678 ignition[958]: INFO : files: op(19): [finished] setting preset to enabled for "waagent.service"
Jul 2 07:53:32.413678 ignition[958]: INFO : files: op(1a): [started] setting preset to enabled for "nvidia.service"
Jul 2 07:53:32.413678 ignition[958]: INFO : files: op(1a): [finished] setting preset to enabled for "nvidia.service"
Jul 2 07:53:32.413678 ignition[958]: INFO : files: createResultFile: createFiles: op(1b): [started] writing file "/sysroot/etc/.ignition-result.json"
Jul 2 07:53:32.413678 ignition[958]: INFO : files: createResultFile: createFiles: op(1b): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jul 2 07:53:32.413678 ignition[958]: INFO : files: files passed
Jul 2 07:53:32.413678 ignition[958]: INFO : Ignition finished successfully
Jul 2 07:53:32.404147 systemd[1]: Starting initrd-setup-root-after-ignition.service...
Jul 2 07:53:32.408305 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
Jul 2 07:53:32.465872 initrd-setup-root-after-ignition[981]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 2 07:53:32.431971 systemd[1]: Starting ignition-quench.service...
Jul 2 07:53:32.471481 systemd[1]: Finished initrd-setup-root-after-ignition.service.
Jul 2 07:53:32.475000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:53:32.475621 systemd[1]: ignition-quench.service: Deactivated successfully.
Jul 2 07:53:32.479000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:53:32.479000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:53:32.475711 systemd[1]: Finished ignition-quench.service.
Jul 2 07:53:32.479730 systemd[1]: Reached target ignition-complete.target.
Jul 2 07:53:32.484683 systemd[1]: Starting initrd-parse-etc.service...
Jul 2 07:53:32.500525 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jul 2 07:53:32.500610 systemd[1]: Finished initrd-parse-etc.service.
Jul 2 07:53:32.504000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:53:32.504000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:53:32.506709 systemd[1]: Reached target initrd-fs.target.
Jul 2 07:53:32.510354 systemd[1]: Reached target initrd.target.
Jul 2 07:53:32.513895 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
Jul 2 07:53:32.517562 systemd[1]: Starting dracut-pre-pivot.service...
Jul 2 07:53:32.528305 systemd[1]: Finished dracut-pre-pivot.service.
Jul 2 07:53:32.532000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:53:32.533353 systemd[1]: Starting initrd-cleanup.service...
Jul 2 07:53:32.543046 systemd[1]: Stopped target nss-lookup.target.
Jul 2 07:53:32.545019 systemd[1]: Stopped target remote-cryptsetup.target.
Jul 2 07:53:32.550765 systemd[1]: Stopped target timers.target.
Jul 2 07:53:32.552608 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jul 2 07:53:32.562000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:53:32.552742 systemd[1]: Stopped dracut-pre-pivot.service.
Jul 2 07:53:32.562445 systemd[1]: Stopped target initrd.target.
Jul 2 07:53:32.569459 systemd[1]: Stopped target basic.target.
Jul 2 07:53:32.573062 systemd[1]: Stopped target ignition-complete.target.
Jul 2 07:53:32.577340 systemd[1]: Stopped target ignition-diskful.target.
Jul 2 07:53:32.581330 systemd[1]: Stopped target initrd-root-device.target.
Jul 2 07:53:32.585528 systemd[1]: Stopped target remote-fs.target.
Jul 2 07:53:32.589020 systemd[1]: Stopped target remote-fs-pre.target.
Jul 2 07:53:32.592726 systemd[1]: Stopped target sysinit.target.
Jul 2 07:53:32.596425 systemd[1]: Stopped target local-fs.target.
Jul 2 07:53:32.599934 systemd[1]: Stopped target local-fs-pre.target.
Jul 2 07:53:32.603727 systemd[1]: Stopped target swap.target.
Jul 2 07:53:32.607159 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jul 2 07:53:32.609371 systemd[1]: Stopped dracut-pre-mount.service.
Jul 2 07:53:32.612000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:53:32.613096 systemd[1]: Stopped target cryptsetup.target.
Jul 2 07:53:32.616744 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jul 2 07:53:32.618862 systemd[1]: Stopped dracut-initqueue.service.
Jul 2 07:53:32.622000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:53:32.622494 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jul 2 07:53:32.625128 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
Jul 2 07:53:32.629000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:53:32.629533 systemd[1]: ignition-files.service: Deactivated successfully.
Jul 2 07:53:32.631677 systemd[1]: Stopped ignition-files.service.
Jul 2 07:53:32.635000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:53:32.635242 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Jul 2 07:53:32.637068 systemd[1]: Stopped flatcar-metadata-hostname.service.
Jul 2 07:53:32.641000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:53:32.643115 systemd[1]: Stopping ignition-mount.service...
Jul 2 07:53:32.646146 systemd[1]: Stopping iscsiuio.service...
Jul 2 07:53:32.650852 systemd[1]: Stopping sysroot-boot.service...
Jul 2 07:53:32.656000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:53:32.658245 ignition[996]: INFO : Ignition 2.14.0
Jul 2 07:53:32.658245 ignition[996]: INFO : Stage: umount
Jul 2 07:53:32.658245 ignition[996]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Jul 2 07:53:32.658245 ignition[996]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Jul 2 07:53:32.665000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:53:32.652611 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jul 2 07:53:32.676016 ignition[996]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 2 07:53:32.676016 ignition[996]: INFO : umount: umount passed
Jul 2 07:53:32.676016 ignition[996]: INFO : Ignition finished successfully
Jul 2 07:53:32.652795 systemd[1]: Stopped systemd-udev-trigger.service.
Jul 2 07:53:32.658209 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jul 2 07:53:32.658361 systemd[1]: Stopped dracut-pre-trigger.service.
Jul 2 07:53:32.689334 systemd[1]: iscsiuio.service: Deactivated successfully.
Jul 2 07:53:32.691493 systemd[1]: Stopped iscsiuio.service.
Jul 2 07:53:32.694000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:53:32.695280 systemd[1]: ignition-mount.service: Deactivated successfully.
Jul 2 07:53:32.697291 systemd[1]: Stopped ignition-mount.service.
Jul 2 07:53:32.700000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:53:32.701108 systemd[1]: ignition-disks.service: Deactivated successfully.
Jul 2 07:53:32.705000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:53:32.701278 systemd[1]: Stopped ignition-disks.service.
Jul 2 07:53:32.705411 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jul 2 07:53:32.710000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:53:32.705464 systemd[1]: Stopped ignition-kargs.service.
Jul 2 07:53:32.710973 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jul 2 07:53:32.716000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:53:32.711025 systemd[1]: Stopped ignition-fetch.service.
Jul 2 07:53:32.722000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:53:32.716451 systemd[1]: Stopped target network.target.
Jul 2 07:53:32.718600 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jul 2 07:53:32.718653 systemd[1]: Stopped ignition-fetch-offline.service.
Jul 2 07:53:32.722439 systemd[1]: Stopped target paths.target.
Jul 2 07:53:32.724216 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jul 2 07:53:32.729222 systemd[1]: Stopped systemd-ask-password-console.path.
Jul 2 07:53:32.747000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:53:32.733260 systemd[1]: Stopped target slices.target.
Jul 2 07:53:32.734971 systemd[1]: Stopped target sockets.target.
Jul 2 07:53:32.738293 systemd[1]: iscsid.socket: Deactivated successfully.
Jul 2 07:53:32.738320 systemd[1]: Closed iscsid.socket.
Jul 2 07:53:32.739827 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jul 2 07:53:32.760000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:53:32.739868 systemd[1]: Closed iscsiuio.socket.
Jul 2 07:53:32.765000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:53:32.743847 systemd[1]: ignition-setup.service: Deactivated successfully.
Jul 2 07:53:32.743900 systemd[1]: Stopped ignition-setup.service.
Jul 2 07:53:32.770000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:53:32.770000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:53:32.748043 systemd[1]: Stopping systemd-networkd.service...
Jul 2 07:53:32.775000 audit: BPF prog-id=6 op=UNLOAD
Jul 2 07:53:32.751263 systemd[1]: Stopping systemd-resolved.service...
Jul 2 07:53:32.755232 systemd-networkd[802]: eth0: DHCPv6 lease lost
Jul 2 07:53:32.781000 audit: BPF prog-id=9 op=UNLOAD
Jul 2 07:53:32.785000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:53:32.757133 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jul 2 07:53:32.789000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:53:32.757705 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jul 2 07:53:32.794000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:53:32.757791 systemd[1]: Stopped systemd-networkd.service.
Jul 2 07:53:32.762918 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jul 2 07:53:32.763016 systemd[1]: Stopped systemd-resolved.service.
Jul 2 07:53:32.804000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:53:32.768746 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jul 2 07:53:32.768825 systemd[1]: Finished initrd-cleanup.service.
Jul 2 07:53:32.774369 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jul 2 07:53:32.774397 systemd[1]: Closed systemd-networkd.socket.
Jul 2 07:53:32.818000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:53:32.779716 systemd[1]: Stopping network-cleanup.service...
Jul 2 07:53:32.823000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:53:32.781893 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jul 2 07:53:32.827000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:53:32.781948 systemd[1]: Stopped parse-ip-for-networkd.service.
Jul 2 07:53:32.785617 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jul 2 07:53:32.834000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:53:32.836000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:53:32.841000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:53:32.785672 systemd[1]: Stopped systemd-sysctl.service.
Jul 2 07:53:32.789923 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jul 2 07:53:32.789972 systemd[1]: Stopped systemd-modules-load.service.
Jul 2 07:53:32.851000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:53:32.851000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:53:32.794352 systemd[1]: Stopping systemd-udevd.service...
Jul 2 07:53:32.800828 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jul 2 07:53:32.861380 kernel: hv_netvsc 000d3add-8a54-000d-3add-8a54000d3add eth0: Data path switched from VF: enP23892s1
Jul 2 07:53:32.801365 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jul 2 07:53:32.801491 systemd[1]: Stopped systemd-udevd.service.
Jul 2 07:53:32.807816 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jul 2 07:53:32.807868 systemd[1]: Closed systemd-udevd-control.socket.
Jul 2 07:53:32.813337 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jul 2 07:53:32.813383 systemd[1]: Closed systemd-udevd-kernel.socket.
Jul 2 07:53:32.817179 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jul 2 07:53:32.817249 systemd[1]: Stopped dracut-pre-udev.service.
Jul 2 07:53:32.819079 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jul 2 07:53:32.884000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:53:32.819128 systemd[1]: Stopped dracut-cmdline.service.
Jul 2 07:53:32.823115 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 2 07:53:32.823165 systemd[1]: Stopped dracut-cmdline-ask.service.
Jul 2 07:53:32.828312 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
Jul 2 07:53:32.832255 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jul 2 07:53:32.832316 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service.
Jul 2 07:53:32.834774 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jul 2 07:53:32.834827 systemd[1]: Stopped kmod-static-nodes.service.
Jul 2 07:53:32.837069 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 2 07:53:32.837104 systemd[1]: Stopped systemd-vconsole-setup.service.
Jul 2 07:53:32.846344 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Jul 2 07:53:32.846980 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jul 2 07:53:32.847079 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
Jul 2 07:53:32.880100 systemd[1]: network-cleanup.service: Deactivated successfully.
Jul 2 07:53:32.880223 systemd[1]: Stopped network-cleanup.service.
Jul 2 07:53:33.583927 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jul 2 07:53:33.584082 systemd[1]: Stopped sysroot-boot.service.
Jul 2 07:53:33.588000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:53:33.588460 systemd[1]: Reached target initrd-switch-root.target.
Jul 2 07:53:33.607777 kernel: kauditd_printk_skb: 40 callbacks suppressed
Jul 2 07:53:33.607806 kernel: audit: type=1131 audit(1719906813.588:79): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:53:33.607713 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jul 2 07:53:33.607788 systemd[1]: Stopped initrd-setup-root.service.
Jul 2 07:53:33.610574 systemd[1]: Starting initrd-switch-root.service...
Jul 2 07:53:33.609000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:53:33.625356 systemd[1]: Switching root.
Jul 2 07:53:33.633485 kernel: audit: type=1131 audit(1719906813.609:80): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:53:33.651070 systemd-journald[183]: Received SIGTERM from PID 1 (systemd).
Jul 2 07:53:33.651143 iscsid[809]: iscsid shutting down.
Jul 2 07:53:33.653085 systemd-journald[183]: Journal stopped
Jul 2 07:53:49.201494 kernel: SELinux: Class mctp_socket not defined in policy.
Jul 2 07:53:49.201537 kernel: SELinux: Class anon_inode not defined in policy.
Jul 2 07:53:49.201555 kernel: SELinux: the above unknown classes and permissions will be allowed
Jul 2 07:53:49.201572 kernel: SELinux: policy capability network_peer_controls=1
Jul 2 07:53:49.201586 kernel: SELinux: policy capability open_perms=1
Jul 2 07:53:49.201605 kernel: SELinux: policy capability extended_socket_class=1
Jul 2 07:53:49.201623 kernel: SELinux: policy capability always_check_network=0
Jul 2 07:53:49.201647 kernel: SELinux: policy capability cgroup_seclabel=1
Jul 2 07:53:49.201660 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jul 2 07:53:49.201673 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jul 2 07:53:49.201686 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jul 2 07:53:49.201698 kernel: audit: type=1403 audit(1719906816.129:81): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jul 2 07:53:49.201715 systemd[1]: Successfully loaded SELinux policy in 350.820ms.
Jul 2 07:53:49.201729 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 25.271ms.
Jul 2 07:53:49.206339 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Jul 2 07:53:49.206362 systemd[1]: Detected virtualization microsoft.
Jul 2 07:53:49.206379 systemd[1]: Detected architecture x86-64.
Jul 2 07:53:49.206395 systemd[1]: Detected first boot.
Jul 2 07:53:49.206415 systemd[1]: Hostname set to .
Jul 2 07:53:49.206430 systemd[1]: Initializing machine ID from random generator.
Jul 2 07:53:49.206442 kernel: audit: type=1400 audit(1719906816.895:82): avc: denied { integrity } for pid=1 comm="systemd" lockdown_reason="/dev/mem,kmem,port" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1
Jul 2 07:53:49.206455 kernel: audit: type=1400 audit(1719906816.912:83): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Jul 2 07:53:49.206468 kernel: audit: type=1400 audit(1719906816.912:84): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Jul 2 07:53:49.206479 kernel: audit: type=1334 audit(1719906816.935:85): prog-id=10 op=LOAD
Jul 2 07:53:49.206488 kernel: audit: type=1334 audit(1719906816.935:86): prog-id=10 op=UNLOAD
Jul 2 07:53:49.206503 kernel: audit: type=1334 audit(1719906816.944:87): prog-id=11 op=LOAD
Jul 2 07:53:49.206514 kernel: audit: type=1334 audit(1719906816.944:88): prog-id=11 op=UNLOAD
Jul 2 07:53:49.206524 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
Jul 2 07:53:49.206536 kernel: audit: type=1400 audit(1719906818.644:89): avc: denied { associate } for pid=1031 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Jul 2 07:53:49.206547 kernel: audit: type=1300 audit(1719906818.644:89): arch=c000003e syscall=188 success=yes exit=0 a0=c00014d8a2 a1=c0000cedf8 a2=c0000d70c0 a3=32 items=0 ppid=1014 pid=1031 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:53:49.206558 kernel: audit: type=1327 audit(1719906818.644:89): proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Jul 2 07:53:49.206570 kernel: audit: type=1400 audit(1719906818.652:90): avc: denied { associate } for pid=1031 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Jul 2 07:53:49.206585 kernel: audit: type=1300 audit(1719906818.652:90): arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c00014d979 a2=1ed a3=0 items=2 ppid=1014 pid=1031 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:53:49.206595 kernel: audit: type=1307 audit(1719906818.652:90): cwd="/" Jul 2 07:53:49.206607 kernel: audit: type=1302 audit(1719906818.652:90): item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:53:49.206617 kernel: audit: type=1302 audit(1719906818.652:90): item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:53:49.206628 kernel: audit: type=1327 audit(1719906818.652:90): proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Jul 2 07:53:49.206638 systemd[1]: Populated /etc with preset unit settings. Jul 2 07:53:49.206653 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 2 07:53:49.206665 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Jul 2 07:53:49.206681 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Jul 2 07:53:49.206694 kernel: audit: type=1334 audit(1719906828.712:91): prog-id=12 op=LOAD Jul 2 07:53:49.206704 kernel: audit: type=1334 audit(1719906828.712:92): prog-id=3 op=UNLOAD Jul 2 07:53:49.206715 kernel: audit: type=1334 audit(1719906828.722:93): prog-id=13 op=LOAD Jul 2 07:53:49.206726 kernel: audit: type=1334 audit(1719906828.722:94): prog-id=14 op=LOAD Jul 2 07:53:49.206737 kernel: audit: type=1334 audit(1719906828.722:95): prog-id=4 op=UNLOAD Jul 2 07:53:49.206752 kernel: audit: type=1334 audit(1719906828.722:96): prog-id=5 op=UNLOAD Jul 2 07:53:49.206763 kernel: audit: type=1334 audit(1719906828.727:97): prog-id=15 op=LOAD Jul 2 07:53:49.206772 kernel: audit: type=1334 audit(1719906828.727:98): prog-id=12 op=UNLOAD Jul 2 07:53:49.206783 kernel: audit: type=1334 audit(1719906828.732:99): prog-id=16 op=LOAD Jul 2 07:53:49.206793 kernel: audit: type=1334 audit(1719906828.737:100): prog-id=17 op=LOAD Jul 2 07:53:49.206805 systemd[1]: iscsid.service: Deactivated successfully. Jul 2 07:53:49.206815 systemd[1]: Stopped iscsid.service. Jul 2 07:53:49.206827 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jul 2 07:53:49.206842 systemd[1]: Stopped initrd-switch-root.service. Jul 2 07:53:49.206851 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jul 2 07:53:49.206861 systemd[1]: Created slice system-addon\x2dconfig.slice. Jul 2 07:53:49.206871 systemd[1]: Created slice system-addon\x2drun.slice. Jul 2 07:53:49.206884 systemd[1]: Created slice system-getty.slice. Jul 2 07:53:49.206895 systemd[1]: Created slice system-modprobe.slice. Jul 2 07:53:49.206906 systemd[1]: Created slice system-serial\x2dgetty.slice. Jul 2 07:53:49.206917 systemd[1]: Created slice system-system\x2dcloudinit.slice. Jul 2 07:53:49.206930 systemd[1]: Created slice system-systemd\x2dfsck.slice. Jul 2 07:53:49.206942 systemd[1]: Created slice user.slice. Jul 2 07:53:49.206953 systemd[1]: Started systemd-ask-password-console.path. 
Jul 2 07:53:49.206964 systemd[1]: Started systemd-ask-password-wall.path. Jul 2 07:53:49.206977 systemd[1]: Set up automount boot.automount. Jul 2 07:53:49.206987 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Jul 2 07:53:49.206999 systemd[1]: Stopped target initrd-switch-root.target. Jul 2 07:53:49.207011 systemd[1]: Stopped target initrd-fs.target. Jul 2 07:53:49.207022 systemd[1]: Stopped target initrd-root-fs.target. Jul 2 07:53:49.207039 systemd[1]: Reached target integritysetup.target. Jul 2 07:53:49.207050 systemd[1]: Reached target remote-cryptsetup.target. Jul 2 07:53:49.207062 systemd[1]: Reached target remote-fs.target. Jul 2 07:53:49.207074 systemd[1]: Reached target slices.target. Jul 2 07:53:49.207085 systemd[1]: Reached target swap.target. Jul 2 07:53:49.207098 systemd[1]: Reached target torcx.target. Jul 2 07:53:49.207110 systemd[1]: Reached target veritysetup.target. Jul 2 07:53:49.207120 systemd[1]: Listening on systemd-coredump.socket. Jul 2 07:53:49.207135 systemd[1]: Listening on systemd-initctl.socket. Jul 2 07:53:49.207149 systemd[1]: Listening on systemd-networkd.socket. Jul 2 07:53:49.207158 systemd[1]: Listening on systemd-udevd-control.socket. Jul 2 07:53:49.207191 systemd[1]: Listening on systemd-udevd-kernel.socket. Jul 2 07:53:49.207207 systemd[1]: Listening on systemd-userdbd.socket. Jul 2 07:53:49.207220 systemd[1]: Mounting dev-hugepages.mount... Jul 2 07:53:49.207230 systemd[1]: Mounting dev-mqueue.mount... Jul 2 07:53:49.207242 systemd[1]: Mounting media.mount... Jul 2 07:53:49.207255 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 07:53:49.207266 systemd[1]: Mounting sys-kernel-debug.mount... Jul 2 07:53:49.207278 systemd[1]: Mounting sys-kernel-tracing.mount... Jul 2 07:53:49.207290 systemd[1]: Mounting tmp.mount... Jul 2 07:53:49.207300 systemd[1]: Starting flatcar-tmpfiles.service... 
Jul 2 07:53:49.207317 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 2 07:53:49.207328 systemd[1]: Starting kmod-static-nodes.service... Jul 2 07:53:49.207340 systemd[1]: Starting modprobe@configfs.service... Jul 2 07:53:49.207351 systemd[1]: Starting modprobe@dm_mod.service... Jul 2 07:53:49.207363 systemd[1]: Starting modprobe@drm.service... Jul 2 07:53:49.207375 systemd[1]: Starting modprobe@efi_pstore.service... Jul 2 07:53:49.207386 systemd[1]: Starting modprobe@fuse.service... Jul 2 07:53:49.207398 systemd[1]: Starting modprobe@loop.service... Jul 2 07:53:49.207410 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 2 07:53:49.207424 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jul 2 07:53:49.207436 systemd[1]: Stopped systemd-fsck-root.service. Jul 2 07:53:49.207448 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jul 2 07:53:49.207460 systemd[1]: Stopped systemd-fsck-usr.service. Jul 2 07:53:49.207471 systemd[1]: Stopped systemd-journald.service. Jul 2 07:53:49.207483 systemd[1]: Starting systemd-journald.service... Jul 2 07:53:49.207495 kernel: loop: module loaded Jul 2 07:53:49.207506 systemd[1]: Starting systemd-modules-load.service... Jul 2 07:53:49.207520 systemd[1]: Starting systemd-network-generator.service... Jul 2 07:53:49.207533 systemd[1]: Starting systemd-remount-fs.service... Jul 2 07:53:49.207543 systemd[1]: Starting systemd-udev-trigger.service... Jul 2 07:53:49.207555 systemd[1]: verity-setup.service: Deactivated successfully. Jul 2 07:53:49.207570 systemd[1]: Stopped verity-setup.service. Jul 2 07:53:49.207580 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 07:53:49.207592 kernel: fuse: init (API version 7.34) Jul 2 07:53:49.207606 systemd[1]: Mounted dev-hugepages.mount. 
Jul 2 07:53:49.207618 systemd[1]: Mounted dev-mqueue.mount. Jul 2 07:53:49.207638 systemd[1]: Mounted media.mount. Jul 2 07:53:49.207650 systemd[1]: Mounted sys-kernel-debug.mount. Jul 2 07:53:49.207663 systemd[1]: Mounted sys-kernel-tracing.mount. Jul 2 07:53:49.207681 systemd-journald[1140]: Journal started Jul 2 07:53:49.207740 systemd-journald[1140]: Runtime Journal (/run/log/journal/3015e951674c4b8486ab0d4c9c12888a) is 8.0M, max 159.0M, 151.0M free. Jul 2 07:53:36.129000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 2 07:53:36.895000 audit[1]: AVC avc: denied { integrity } for pid=1 comm="systemd" lockdown_reason="/dev/mem,kmem,port" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Jul 2 07:53:36.912000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Jul 2 07:53:36.912000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Jul 2 07:53:36.935000 audit: BPF prog-id=10 op=LOAD Jul 2 07:53:36.935000 audit: BPF prog-id=10 op=UNLOAD Jul 2 07:53:36.944000 audit: BPF prog-id=11 op=LOAD Jul 2 07:53:36.944000 audit: BPF prog-id=11 op=UNLOAD Jul 2 07:53:38.644000 audit[1031]: AVC avc: denied { associate } for pid=1031 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Jul 2 07:53:38.644000 audit[1031]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c00014d8a2 a1=c0000cedf8 a2=c0000d70c0 a3=32 items=0 ppid=1014 pid=1031 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" 
exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:53:38.644000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Jul 2 07:53:38.652000 audit[1031]: AVC avc: denied { associate } for pid=1031 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Jul 2 07:53:38.652000 audit[1031]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c00014d979 a2=1ed a3=0 items=2 ppid=1014 pid=1031 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:53:38.652000 audit: CWD cwd="/" Jul 2 07:53:38.652000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:53:38.652000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:53:38.652000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Jul 2 07:53:48.712000 audit: BPF prog-id=12 op=LOAD Jul 2 07:53:48.712000 audit: BPF prog-id=3 op=UNLOAD Jul 2 07:53:48.722000 audit: BPF prog-id=13 op=LOAD Jul 2 07:53:48.722000 audit: BPF prog-id=14 op=LOAD 
Jul 2 07:53:48.722000 audit: BPF prog-id=4 op=UNLOAD Jul 2 07:53:48.722000 audit: BPF prog-id=5 op=UNLOAD Jul 2 07:53:48.727000 audit: BPF prog-id=15 op=LOAD Jul 2 07:53:48.727000 audit: BPF prog-id=12 op=UNLOAD Jul 2 07:53:48.732000 audit: BPF prog-id=16 op=LOAD Jul 2 07:53:48.737000 audit: BPF prog-id=17 op=LOAD Jul 2 07:53:48.737000 audit: BPF prog-id=13 op=UNLOAD Jul 2 07:53:48.737000 audit: BPF prog-id=14 op=UNLOAD Jul 2 07:53:48.746000 audit: BPF prog-id=18 op=LOAD Jul 2 07:53:48.746000 audit: BPF prog-id=15 op=UNLOAD Jul 2 07:53:48.759000 audit: BPF prog-id=19 op=LOAD Jul 2 07:53:48.759000 audit: BPF prog-id=20 op=LOAD Jul 2 07:53:48.759000 audit: BPF prog-id=16 op=UNLOAD Jul 2 07:53:48.759000 audit: BPF prog-id=17 op=UNLOAD Jul 2 07:53:48.760000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:48.771000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:48.775000 audit: BPF prog-id=18 op=UNLOAD Jul 2 07:53:48.783000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:48.783000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:49.079000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 07:53:49.089000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:49.102000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:49.102000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:49.103000 audit: BPF prog-id=21 op=LOAD Jul 2 07:53:49.103000 audit: BPF prog-id=22 op=LOAD Jul 2 07:53:49.103000 audit: BPF prog-id=23 op=LOAD Jul 2 07:53:49.103000 audit: BPF prog-id=19 op=UNLOAD Jul 2 07:53:49.103000 audit: BPF prog-id=20 op=UNLOAD Jul 2 07:53:49.172000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:49.198000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jul 2 07:53:49.198000 audit[1140]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7ffc557e5f10 a2=4000 a3=7ffc557e5fac items=0 ppid=1 pid=1140 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:53:49.198000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Jul 2 07:53:48.710226 systemd[1]: Queued start job for default target multi-user.target. 
Jul 2 07:53:38.611348 /usr/lib/systemd/system-generators/torcx-generator[1031]: time="2024-07-02T07:53:38Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.5 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.5 /var/lib/torcx/store]" Jul 2 07:53:48.760624 systemd[1]: systemd-journald.service: Deactivated successfully. Jul 2 07:53:38.612021 /usr/lib/systemd/system-generators/torcx-generator[1031]: time="2024-07-02T07:53:38Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Jul 2 07:53:38.612043 /usr/lib/systemd/system-generators/torcx-generator[1031]: time="2024-07-02T07:53:38Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Jul 2 07:53:38.612086 /usr/lib/systemd/system-generators/torcx-generator[1031]: time="2024-07-02T07:53:38Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Jul 2 07:53:38.612098 /usr/lib/systemd/system-generators/torcx-generator[1031]: time="2024-07-02T07:53:38Z" level=debug msg="skipped missing lower profile" missing profile=oem Jul 2 07:53:38.612150 /usr/lib/systemd/system-generators/torcx-generator[1031]: time="2024-07-02T07:53:38Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Jul 2 07:53:38.612166 /usr/lib/systemd/system-generators/torcx-generator[1031]: time="2024-07-02T07:53:38Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Jul 2 07:53:38.612428 /usr/lib/systemd/system-generators/torcx-generator[1031]: time="2024-07-02T07:53:38Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Jul 2 07:53:38.612485 /usr/lib/systemd/system-generators/torcx-generator[1031]: time="2024-07-02T07:53:38Z" level=debug msg="profile found" name=docker-1.12-no 
path=/usr/share/torcx/profiles/docker-1.12-no.json Jul 2 07:53:38.612503 /usr/lib/systemd/system-generators/torcx-generator[1031]: time="2024-07-02T07:53:38Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Jul 2 07:53:38.628812 /usr/lib/systemd/system-generators/torcx-generator[1031]: time="2024-07-02T07:53:38Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Jul 2 07:53:38.628856 /usr/lib/systemd/system-generators/torcx-generator[1031]: time="2024-07-02T07:53:38Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Jul 2 07:53:38.628889 /usr/lib/systemd/system-generators/torcx-generator[1031]: time="2024-07-02T07:53:38Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.5: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.5 Jul 2 07:53:38.628903 /usr/lib/systemd/system-generators/torcx-generator[1031]: time="2024-07-02T07:53:38Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Jul 2 07:53:38.628927 /usr/lib/systemd/system-generators/torcx-generator[1031]: time="2024-07-02T07:53:38Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.5: no such file or directory" path=/var/lib/torcx/store/3510.3.5 Jul 2 07:53:38.628940 /usr/lib/systemd/system-generators/torcx-generator[1031]: time="2024-07-02T07:53:38Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Jul 2 07:53:47.508855 /usr/lib/systemd/system-generators/torcx-generator[1031]: time="2024-07-02T07:53:47Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Jul 2 07:53:47.509108 
/usr/lib/systemd/system-generators/torcx-generator[1031]: time="2024-07-02T07:53:47Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Jul 2 07:53:47.509239 /usr/lib/systemd/system-generators/torcx-generator[1031]: time="2024-07-02T07:53:47Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Jul 2 07:53:47.509418 /usr/lib/systemd/system-generators/torcx-generator[1031]: time="2024-07-02T07:53:47Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Jul 2 07:53:47.509466 /usr/lib/systemd/system-generators/torcx-generator[1031]: time="2024-07-02T07:53:47Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Jul 2 07:53:47.509524 /usr/lib/systemd/system-generators/torcx-generator[1031]: time="2024-07-02T07:53:47Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Jul 2 07:53:49.216189 systemd[1]: Started systemd-journald.service. Jul 2 07:53:49.218164 systemd[1]: Mounted tmp.mount. 
Jul 2 07:53:49.217000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:49.220360 systemd[1]: Finished flatcar-tmpfiles.service. Jul 2 07:53:49.222000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:49.222720 systemd[1]: Finished kmod-static-nodes.service. Jul 2 07:53:49.224000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:49.225255 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 2 07:53:49.225424 systemd[1]: Finished modprobe@configfs.service. Jul 2 07:53:49.227000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:49.227000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:49.227805 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 2 07:53:49.228039 systemd[1]: Finished modprobe@dm_mod.service. Jul 2 07:53:49.229000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 07:53:49.229000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:49.230440 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 2 07:53:49.230598 systemd[1]: Finished modprobe@drm.service. Jul 2 07:53:49.232000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:49.232000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:49.232734 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 2 07:53:49.232879 systemd[1]: Finished modprobe@efi_pstore.service. Jul 2 07:53:49.235000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:49.235000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:49.235673 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 2 07:53:49.235815 systemd[1]: Finished modprobe@fuse.service. Jul 2 07:53:49.237000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 07:53:49.237000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:49.238106 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 2 07:53:49.238397 systemd[1]: Finished modprobe@loop.service. Jul 2 07:53:49.240000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:49.240000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:49.240777 systemd[1]: Finished systemd-network-generator.service. Jul 2 07:53:49.243000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:49.243587 systemd[1]: Finished systemd-remount-fs.service. Jul 2 07:53:49.245000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:49.246600 systemd[1]: Reached target network-pre.target. Jul 2 07:53:49.249990 systemd[1]: Mounting sys-fs-fuse-connections.mount... Jul 2 07:53:49.253522 systemd[1]: Mounting sys-kernel-config.mount... Jul 2 07:53:49.255567 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 2 07:53:49.271990 systemd[1]: Starting systemd-hwdb-update.service... 
Jul 2 07:53:49.275960 systemd[1]: Starting systemd-journal-flush.service... Jul 2 07:53:49.278215 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 2 07:53:49.279961 systemd[1]: Starting systemd-random-seed.service... Jul 2 07:53:49.282827 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 2 07:53:49.284402 systemd[1]: Starting systemd-sysusers.service... Jul 2 07:53:49.290791 systemd[1]: Finished systemd-modules-load.service. Jul 2 07:53:49.293000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:49.293611 systemd[1]: Mounted sys-fs-fuse-connections.mount. Jul 2 07:53:49.296157 systemd[1]: Mounted sys-kernel-config.mount. Jul 2 07:53:49.300632 systemd[1]: Starting systemd-sysctl.service... Jul 2 07:53:49.319093 systemd-journald[1140]: Time spent on flushing to /var/log/journal/3015e951674c4b8486ab0d4c9c12888a is 25.589ms for 1175 entries. Jul 2 07:53:49.319093 systemd-journald[1140]: System Journal (/var/log/journal/3015e951674c4b8486ab0d4c9c12888a) is 8.0M, max 2.6G, 2.6G free. Jul 2 07:53:49.478612 systemd-journald[1140]: Received client request to flush runtime journal. Jul 2 07:53:49.330000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:49.341000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 07:53:49.381000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:49.328713 systemd[1]: Finished systemd-random-seed.service. Jul 2 07:53:49.479285 udevadm[1155]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jul 2 07:53:49.331290 systemd[1]: Reached target first-boot-complete.target. Jul 2 07:53:49.339316 systemd[1]: Finished systemd-udev-trigger.service. Jul 2 07:53:49.342647 systemd[1]: Starting systemd-udev-settle.service... Jul 2 07:53:49.379331 systemd[1]: Finished systemd-sysctl.service. Jul 2 07:53:49.479907 systemd[1]: Finished systemd-journal-flush.service. Jul 2 07:53:49.484000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:50.046000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:50.043733 systemd[1]: Finished systemd-sysusers.service. Jul 2 07:53:50.048419 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Jul 2 07:53:50.412886 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Jul 2 07:53:50.415000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:50.943200 systemd[1]: Finished systemd-hwdb-update.service. 
Jul 2 07:53:50.945000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:53:50.946000 audit: BPF prog-id=24 op=LOAD
Jul 2 07:53:50.946000 audit: BPF prog-id=25 op=LOAD
Jul 2 07:53:50.946000 audit: BPF prog-id=7 op=UNLOAD
Jul 2 07:53:50.946000 audit: BPF prog-id=8 op=UNLOAD
Jul 2 07:53:50.947611 systemd[1]: Starting systemd-udevd.service...
Jul 2 07:53:50.965530 systemd-udevd[1159]: Using default interface naming scheme 'v252'.
Jul 2 07:53:51.332000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:53:51.333000 audit: BPF prog-id=26 op=LOAD
Jul 2 07:53:51.329846 systemd[1]: Started systemd-udevd.service.
Jul 2 07:53:51.335323 systemd[1]: Starting systemd-networkd.service...
Jul 2 07:53:51.369833 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped.
Jul 2 07:53:51.421361 kernel: mousedev: PS/2 mouse device common for all mice
Jul 2 07:53:51.431000 audit[1161]: AVC avc: denied { confidentiality } for pid=1161 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1
Jul 2 07:53:51.438202 kernel: hv_vmbus: registering driver hv_balloon
Jul 2 07:53:51.442911 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0
Jul 2 07:53:51.431000 audit[1161]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=55fcb460ba80 a1=f884 a2=7f2635f06bc5 a3=5 items=12 ppid=1159 pid=1161 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 2 07:53:51.431000 audit: CWD cwd="/"
Jul 2 07:53:51.431000 audit: PATH item=0 name=(null) inode=1237 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul 2 07:53:51.431000 audit: PATH item=1 name=(null) inode=13138 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul 2 07:53:51.431000 audit: PATH item=2 name=(null) inode=13138 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul 2 07:53:51.431000 audit: PATH item=3 name=(null) inode=13139 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul 2 07:53:51.431000 audit: PATH item=4 name=(null) inode=13138 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul 2 07:53:51.431000 audit: PATH item=5 name=(null) inode=13140 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul 2 07:53:51.431000 audit: PATH item=6 name=(null) inode=13138 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul 2 07:53:51.431000 audit: PATH item=7 name=(null) inode=13141 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul 2 07:53:51.431000 audit: PATH item=8 name=(null) inode=13138 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul 2 07:53:51.431000 audit: PATH item=9 name=(null) inode=13142 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul 2 07:53:51.431000 audit: PATH item=10 name=(null) inode=13138 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul 2 07:53:51.431000 audit: PATH item=11 name=(null) inode=13143 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul 2 07:53:51.431000 audit: PROCTITLE proctitle="(udev-worker)"
Jul 2 07:53:51.451000 audit: BPF prog-id=27 op=LOAD
Jul 2 07:53:51.451000 audit: BPF prog-id=28 op=LOAD
Jul 2 07:53:51.451000 audit: BPF prog-id=29 op=LOAD
Jul 2 07:53:51.453006 systemd[1]: Starting systemd-userdbd.service...
Jul 2 07:53:51.478423 kernel: hv_utils: Registering HyperV Utility Driver
Jul 2 07:53:51.478494 kernel: hv_vmbus: registering driver hv_utils
Jul 2 07:53:51.513187 kernel: hv_vmbus: registering driver hyperv_fb
Jul 2 07:53:51.536551 kernel: hyperv_fb: Synthvid Version major 3, minor 5
Jul 2 07:53:51.536660 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608
Jul 2 07:53:51.543889 kernel: Console: switching to colour dummy device 80x25
Jul 2 07:53:51.542835 systemd[1]: Started systemd-userdbd.service.
Jul 2 07:53:51.550929 kernel: Console: switching to colour frame buffer device 128x48
Jul 2 07:53:51.551032 kernel: hv_utils: Shutdown IC version 3.2
Jul 2 07:53:51.545000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:53:51.555100 kernel: hv_utils: Heartbeat IC version 3.0
Jul 2 07:53:51.555161 kernel: hv_utils: TimeSync IC version 4.0
Jul 2 07:53:52.364573 kernel: BTRFS info: devid 1 device path /dev/disk/by-label/OEM changed to /dev/sda6 scanned by (udev-worker) (1174)
Jul 2 07:53:52.431630 kernel: KVM: vmx: using Hyper-V Enlightened VMCS
Jul 2 07:53:52.433517 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Jul 2 07:53:52.510242 systemd-networkd[1170]: lo: Link UP
Jul 2 07:53:52.510260 systemd-networkd[1170]: lo: Gained carrier
Jul 2 07:53:52.510977 systemd-networkd[1170]: Enumeration completed
Jul 2 07:53:52.511109 systemd[1]: Started systemd-networkd.service.
Jul 2 07:53:52.511000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:53:52.514665 systemd[1]: Starting systemd-networkd-wait-online.service...
Jul 2 07:53:52.517279 systemd[1]: Finished systemd-udev-settle.service.
Jul 2 07:53:52.518000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:53:52.521296 systemd[1]: Starting lvm2-activation-early.service...
Jul 2 07:53:52.536820 systemd-networkd[1170]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 2 07:53:52.590575 kernel: mlx5_core 5d54:00:02.0 enP23892s1: Link up
Jul 2 07:53:52.614569 kernel: hv_netvsc 000d3add-8a54-000d-3add-8a54000d3add eth0: Data path switched to VF: enP23892s1
Jul 2 07:53:52.614881 systemd-networkd[1170]: enP23892s1: Link UP
Jul 2 07:53:52.615042 systemd-networkd[1170]: eth0: Link UP
Jul 2 07:53:52.615047 systemd-networkd[1170]: eth0: Gained carrier
Jul 2 07:53:52.620880 systemd-networkd[1170]: enP23892s1: Gained carrier
Jul 2 07:53:52.646668 systemd-networkd[1170]: eth0: DHCPv4 address 10.200.8.39/24, gateway 10.200.8.1 acquired from 168.63.129.16
Jul 2 07:53:52.914750 lvm[1237]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jul 2 07:53:52.940755 systemd[1]: Finished lvm2-activation-early.service.
Jul 2 07:53:52.942000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:53:52.943433 systemd[1]: Reached target cryptsetup.target.
Jul 2 07:53:52.946851 systemd[1]: Starting lvm2-activation.service...
Jul 2 07:53:52.951629 lvm[1238]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jul 2 07:53:52.970578 systemd[1]: Finished lvm2-activation.service.
Jul 2 07:53:52.972000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:53:52.973494 systemd[1]: Reached target local-fs-pre.target.
Jul 2 07:53:52.976329 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jul 2 07:53:52.976385 systemd[1]: Reached target local-fs.target.
Jul 2 07:53:52.980822 systemd[1]: Reached target machines.target.
Jul 2 07:53:52.984226 systemd[1]: Starting ldconfig.service...
Jul 2 07:53:52.986253 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Jul 2 07:53:52.986357 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jul 2 07:53:52.987515 systemd[1]: Starting systemd-boot-update.service...
Jul 2 07:53:52.990708 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service...
Jul 2 07:53:52.994296 systemd[1]: Starting systemd-machine-id-commit.service...
Jul 2 07:53:52.997447 systemd[1]: Starting systemd-sysext.service...
Jul 2 07:53:53.221807 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1240 (bootctl)
Jul 2 07:53:53.223451 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service...
Jul 2 07:53:53.357345 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service.
Jul 2 07:53:53.362000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:53:53.468106 systemd[1]: Unmounting usr-share-oem.mount...
Jul 2 07:53:53.561208 systemd[1]: usr-share-oem.mount: Deactivated successfully.
Jul 2 07:53:53.561449 systemd[1]: Unmounted usr-share-oem.mount.
Jul 2 07:53:53.723579 kernel: loop0: detected capacity change from 0 to 209816
Jul 2 07:53:54.501009 systemd-networkd[1170]: eth0: Gained IPv6LL
Jul 2 07:53:54.506524 systemd[1]: Finished systemd-networkd-wait-online.service.
Jul 2 07:53:54.508000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:53:54.511866 kernel: kauditd_printk_skb: 82 callbacks suppressed
Jul 2 07:53:54.511928 kernel: audit: type=1130 audit(1719906834.508:166): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:53:55.277571 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jul 2 07:53:55.293602 kernel: loop1: detected capacity change from 0 to 209816
Jul 2 07:53:55.304725 (sd-sysext)[1252]: Using extensions 'kubernetes'.
Jul 2 07:53:55.305167 (sd-sysext)[1252]: Merged extensions into '/usr'.
Jul 2 07:53:55.318031 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jul 2 07:53:55.318664 systemd[1]: Finished systemd-machine-id-commit.service.
Jul 2 07:53:55.317000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:53:55.322207 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 2 07:53:55.327821 systemd[1]: Mounting usr-share-oem.mount...
Jul 2 07:53:55.334734 kernel: audit: type=1130 audit(1719906835.317:167): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:53:55.333097 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Jul 2 07:53:55.335158 systemd[1]: Starting modprobe@dm_mod.service...
Jul 2 07:53:55.338105 systemd[1]: Starting modprobe@efi_pstore.service...
Jul 2 07:53:55.341174 systemd[1]: Starting modprobe@loop.service...
Jul 2 07:53:55.343216 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Jul 2 07:53:55.343388 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jul 2 07:53:55.343561 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 2 07:53:55.346168 systemd[1]: Mounted usr-share-oem.mount.
Jul 2 07:53:55.348599 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 2 07:53:55.348746 systemd[1]: Finished modprobe@dm_mod.service.
Jul 2 07:53:55.349000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:53:55.351425 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 2 07:53:55.351541 systemd[1]: Finished modprobe@efi_pstore.service.
Jul 2 07:53:55.360798 kernel: audit: type=1130 audit(1719906835.349:168): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:53:55.360868 kernel: audit: type=1131 audit(1719906835.349:169): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:53:55.349000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:53:55.371298 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 2 07:53:55.371414 systemd[1]: Finished modprobe@loop.service.
Jul 2 07:53:55.372031 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 2 07:53:55.372137 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Jul 2 07:53:55.373861 systemd[1]: Finished systemd-sysext.service.
Jul 2 07:53:55.376294 systemd[1]: Starting ensure-sysext.service...
Jul 2 07:53:55.377739 systemd[1]: Starting systemd-tmpfiles-setup.service...
Jul 2 07:53:55.369000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:53:55.369000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:53:55.400829 systemd[1]: Reloading.
Jul 2 07:53:55.404880 kernel: audit: type=1130 audit(1719906835.369:170): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:53:55.404958 kernel: audit: type=1131 audit(1719906835.369:171): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:53:55.370000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:53:55.430602 kernel: audit: type=1130 audit(1719906835.370:172): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:53:55.430692 kernel: audit: type=1131 audit(1719906835.370:173): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:53:55.370000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:53:55.430373 systemd-tmpfiles[1259]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring.
Jul 2 07:53:55.431537 systemd-tmpfiles[1259]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jul 2 07:53:55.450615 kernel: audit: type=1130 audit(1719906835.370:174): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:53:55.370000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:53:55.454587 systemd-tmpfiles[1259]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jul 2 07:53:55.482988 systemd-fsck[1247]: fsck.fat 4.2 (2021-01-31)
Jul 2 07:53:55.482988 systemd-fsck[1247]: /dev/sda1: 789 files, 119238/258078 clusters
Jul 2 07:53:55.491235 /usr/lib/systemd/system-generators/torcx-generator[1278]: time="2024-07-02T07:53:55Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.5 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.5 /var/lib/torcx/store]"
Jul 2 07:53:55.491275 /usr/lib/systemd/system-generators/torcx-generator[1278]: time="2024-07-02T07:53:55Z" level=info msg="torcx already run"
Jul 2 07:53:55.588321 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Jul 2 07:53:55.588344 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Jul 2 07:53:55.604577 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 2 07:53:55.668000 audit: BPF prog-id=30 op=LOAD
Jul 2 07:53:55.674643 kernel: audit: type=1334 audit(1719906835.668:175): prog-id=30 op=LOAD
Jul 2 07:53:55.668000 audit: BPF prog-id=26 op=UNLOAD
Jul 2 07:53:55.672000 audit: BPF prog-id=31 op=LOAD
Jul 2 07:53:55.672000 audit: BPF prog-id=27 op=UNLOAD
Jul 2 07:53:55.673000 audit: BPF prog-id=32 op=LOAD
Jul 2 07:53:55.673000 audit: BPF prog-id=33 op=LOAD
Jul 2 07:53:55.673000 audit: BPF prog-id=28 op=UNLOAD
Jul 2 07:53:55.673000 audit: BPF prog-id=29 op=UNLOAD
Jul 2 07:53:55.674000 audit: BPF prog-id=34 op=LOAD
Jul 2 07:53:55.674000 audit: BPF prog-id=21 op=UNLOAD
Jul 2 07:53:55.674000 audit: BPF prog-id=35 op=LOAD
Jul 2 07:53:55.674000 audit: BPF prog-id=36 op=LOAD
Jul 2 07:53:55.674000 audit: BPF prog-id=22 op=UNLOAD
Jul 2 07:53:55.674000 audit: BPF prog-id=23 op=UNLOAD
Jul 2 07:53:55.676000 audit: BPF prog-id=37 op=LOAD
Jul 2 07:53:55.676000 audit: BPF prog-id=38 op=LOAD
Jul 2 07:53:55.676000 audit: BPF prog-id=24 op=UNLOAD
Jul 2 07:53:55.676000 audit: BPF prog-id=25 op=UNLOAD
Jul 2 07:53:55.681522 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service.
Jul 2 07:53:55.682000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:53:55.691763 systemd[1]: Mounting boot.mount...
Jul 2 07:53:55.698723 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 2 07:53:55.699074 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Jul 2 07:53:55.700683 systemd[1]: Starting modprobe@dm_mod.service...
Jul 2 07:53:55.703974 systemd[1]: Starting modprobe@efi_pstore.service...
Jul 2 07:53:55.707977 systemd[1]: Starting modprobe@loop.service...
Jul 2 07:53:55.710120 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Jul 2 07:53:55.710451 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jul 2 07:53:55.710856 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 2 07:53:55.713874 systemd[1]: Mounted boot.mount.
Jul 2 07:53:55.715956 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 2 07:53:55.716109 systemd[1]: Finished modprobe@dm_mod.service.
Jul 2 07:53:55.716000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:53:55.716000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:53:55.718629 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 2 07:53:55.718776 systemd[1]: Finished modprobe@efi_pstore.service.
Jul 2 07:53:55.722000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:53:55.722000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:53:55.724699 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 2 07:53:55.724855 systemd[1]: Finished modprobe@loop.service.
Jul 2 07:53:55.725000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:53:55.725000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:53:55.734478 systemd[1]: Finished ensure-sysext.service.
Jul 2 07:53:55.734000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:53:55.737789 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 2 07:53:55.738119 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Jul 2 07:53:55.739681 systemd[1]: Starting modprobe@dm_mod.service...
Jul 2 07:53:55.744287 systemd[1]: Starting modprobe@drm.service...
Jul 2 07:53:55.747977 systemd[1]: Starting modprobe@efi_pstore.service...
Jul 2 07:53:55.751187 systemd[1]: Starting modprobe@loop.service...
Jul 2 07:53:55.753658 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Jul 2 07:53:55.753719 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jul 2 07:53:55.753830 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 2 07:53:55.754325 systemd[1]: Finished systemd-boot-update.service.
Jul 2 07:53:55.755000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:53:55.756980 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 2 07:53:55.757130 systemd[1]: Finished modprobe@dm_mod.service.
Jul 2 07:53:55.757000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:53:55.757000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:53:55.759491 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 2 07:53:55.759642 systemd[1]: Finished modprobe@drm.service.
Jul 2 07:53:55.760000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:53:55.760000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:53:55.762024 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 2 07:53:55.762164 systemd[1]: Finished modprobe@efi_pstore.service.
Jul 2 07:53:55.762000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:53:55.762000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:53:55.764610 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 2 07:53:55.764750 systemd[1]: Finished modprobe@loop.service.
Jul 2 07:53:55.765000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:53:55.765000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:53:55.766968 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 2 07:53:55.767015 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Jul 2 07:53:56.064839 systemd[1]: Finished systemd-tmpfiles-setup.service.
Jul 2 07:53:56.065000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:53:56.068704 systemd[1]: Starting audit-rules.service...
Jul 2 07:53:56.071969 systemd[1]: Starting clean-ca-certificates.service...
Jul 2 07:53:56.075468 systemd[1]: Starting systemd-journal-catalog-update.service...
Jul 2 07:53:56.079979 systemd[1]: Starting systemd-resolved.service...
Jul 2 07:53:56.077000 audit: BPF prog-id=39 op=LOAD
Jul 2 07:53:56.081000 audit: BPF prog-id=40 op=LOAD
Jul 2 07:53:56.084047 systemd[1]: Starting systemd-timesyncd.service...
Jul 2 07:53:56.087986 systemd[1]: Starting systemd-update-utmp.service...
Jul 2 07:53:56.109000 audit[1359]: SYSTEM_BOOT pid=1359 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
Jul 2 07:53:56.117000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:53:56.116279 systemd[1]: Finished systemd-update-utmp.service.
Jul 2 07:53:56.135000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:53:56.134711 systemd[1]: Finished clean-ca-certificates.service.
Jul 2 07:53:56.137317 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jul 2 07:53:56.222921 systemd[1]: Finished systemd-journal-catalog-update.service.
Jul 2 07:53:56.225000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:53:56.231425 systemd[1]: Started systemd-timesyncd.service.
Jul 2 07:53:56.232000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:53:56.233633 systemd[1]: Reached target time-set.target.
Jul 2 07:53:56.268314 systemd-resolved[1356]: Positive Trust Anchors:
Jul 2 07:53:56.268330 systemd-resolved[1356]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 2 07:53:56.268369 systemd-resolved[1356]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Jul 2 07:53:56.390829 systemd-resolved[1356]: Using system hostname 'ci-3510.3.5-a-61dd50c322'.
Jul 2 07:53:56.392597 systemd[1]: Started systemd-resolved.service.
Jul 2 07:53:56.393000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:53:56.395239 systemd[1]: Reached target network.target.
Jul 2 07:53:56.397294 systemd[1]: Reached target network-online.target.
Jul 2 07:53:56.399676 systemd[1]: Reached target nss-lookup.target.
Jul 2 07:53:56.403000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Jul 2 07:53:56.403000 audit[1374]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fff35c2b260 a2=420 a3=0 items=0 ppid=1353 pid=1374 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 2 07:53:56.405127 augenrules[1374]: No rules
Jul 2 07:53:56.403000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Jul 2 07:53:56.405743 systemd[1]: Finished audit-rules.service.
Jul 2 07:53:56.434771 systemd-timesyncd[1357]: Contacted time server 188.125.64.7:123 (0.flatcar.pool.ntp.org).
Jul 2 07:53:56.434869 systemd-timesyncd[1357]: Initial clock synchronization to Tue 2024-07-02 07:53:56.431377 UTC.
Jul 2 07:54:02.172111 ldconfig[1239]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jul 2 07:54:02.181980 systemd[1]: Finished ldconfig.service.
Jul 2 07:54:02.186575 systemd[1]: Starting systemd-update-done.service...
Jul 2 07:54:02.193667 systemd[1]: Finished systemd-update-done.service.
Jul 2 07:54:02.196171 systemd[1]: Reached target sysinit.target.
Jul 2 07:54:02.198276 systemd[1]: Started motdgen.path.
Jul 2 07:54:02.200061 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path.
Jul 2 07:54:02.202761 systemd[1]: Started logrotate.timer.
Jul 2 07:54:02.204465 systemd[1]: Started mdadm.timer.
Jul 2 07:54:02.206181 systemd[1]: Started systemd-tmpfiles-clean.timer.
Jul 2 07:54:02.208242 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jul 2 07:54:02.208277 systemd[1]: Reached target paths.target.
Jul 2 07:54:02.210151 systemd[1]: Reached target timers.target.
Jul 2 07:54:02.212272 systemd[1]: Listening on dbus.socket.
Jul 2 07:54:02.217425 systemd[1]: Starting docker.socket...
Jul 2 07:54:02.222186 systemd[1]: Listening on sshd.socket.
Jul 2 07:54:02.224354 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jul 2 07:54:02.224857 systemd[1]: Listening on docker.socket.
Jul 2 07:54:02.226695 systemd[1]: Reached target sockets.target.
Jul 2 07:54:02.228530 systemd[1]: Reached target basic.target.
Jul 2 07:54:02.230673 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met.
Jul 2 07:54:02.230706 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met.
Jul 2 07:54:02.231737 systemd[1]: Starting containerd.service...
Jul 2 07:54:02.234990 systemd[1]: Starting dbus.service...
Jul 2 07:54:02.237700 systemd[1]: Starting enable-oem-cloudinit.service...
Jul 2 07:54:02.241003 systemd[1]: Starting extend-filesystems.service...
Jul 2 07:54:02.242819 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment).
Jul 2 07:54:02.244520 systemd[1]: Starting kubelet.service...
Jul 2 07:54:02.249285 systemd[1]: Starting motdgen.service...
Jul 2 07:54:02.252838 systemd[1]: Started nvidia.service.
Jul 2 07:54:02.256393 systemd[1]: Starting prepare-helm.service...
Jul 2 07:54:02.259510 systemd[1]: Starting ssh-key-proc-cmdline.service...
Jul 2 07:54:02.262920 systemd[1]: Starting sshd-keygen.service...
Jul 2 07:54:02.268283 systemd[1]: Starting systemd-logind.service...
Jul 2 07:54:02.272644 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jul 2 07:54:02.272738 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jul 2 07:54:02.273261 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jul 2 07:54:02.274129 systemd[1]: Starting update-engine.service...
Jul 2 07:54:02.277184 systemd[1]: Starting update-ssh-keys-after-ignition.service...
Jul 2 07:54:02.283503 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jul 2 07:54:02.283769 systemd[1]: Finished ssh-key-proc-cmdline.service.
Jul 2 07:54:02.329856 systemd[1]: motdgen.service: Deactivated successfully.
Jul 2 07:54:02.330070 systemd[1]: Finished motdgen.service.
Jul 2 07:54:02.363955 jq[1384]: false
Jul 2 07:54:02.364253 jq[1400]: true
Jul 2 07:54:02.365116 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jul 2 07:54:02.365323 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped.
Jul 2 07:54:02.390120 extend-filesystems[1385]: Found loop1
Jul 2 07:54:02.392590 extend-filesystems[1385]: Found sda
Jul 2 07:54:02.392590 extend-filesystems[1385]: Found sda1
Jul 2 07:54:02.392590 extend-filesystems[1385]: Found sda2
Jul 2 07:54:02.392590 extend-filesystems[1385]: Found sda3
Jul 2 07:54:02.392590 extend-filesystems[1385]: Found usr
Jul 2 07:54:02.392590 extend-filesystems[1385]: Found sda4
Jul 2 07:54:02.392590 extend-filesystems[1385]: Found sda6
Jul 2 07:54:02.392590 extend-filesystems[1385]: Found sda7
Jul 2 07:54:02.392590 extend-filesystems[1385]: Found sda9
Jul 2 07:54:02.392590 extend-filesystems[1385]: Checking size of /dev/sda9
Jul 2 07:54:02.410038 jq[1412]: true
Jul 2 07:54:02.419658 tar[1403]: linux-amd64/helm
Jul 2 07:54:02.434052 env[1407]: time="2024-07-02T07:54:02.433957550Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16
Jul 2 07:54:02.442943 systemd-logind[1395]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Jul 2 07:54:02.447704 systemd-logind[1395]: New seat seat0.
Jul 2 07:54:02.478002 extend-filesystems[1385]: Old size kept for /dev/sda9
Jul 2 07:54:02.489482 extend-filesystems[1385]: Found sr0
Jul 2 07:54:02.478790 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jul 2 07:54:02.478954 systemd[1]: Finished extend-filesystems.service.
Jul 2 07:54:02.548606 dbus-daemon[1383]: [system] SELinux support is enabled
Jul 2 07:54:02.548833 systemd[1]: Started dbus.service.
Jul 2 07:54:02.553804 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jul 2 07:54:02.553839 systemd[1]: Reached target system-config.target.
Jul 2 07:54:02.555975 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jul 2 07:54:02.555997 systemd[1]: Reached target user-config.target.
Jul 2 07:54:02.569329 systemd[1]: Started systemd-logind.service.
Jul 2 07:54:02.571578 dbus-daemon[1383]: [system] Successfully activated service 'org.freedesktop.systemd1'
Jul 2 07:54:02.573514 env[1407]: time="2024-07-02T07:54:02.573462268Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jul 2 07:54:02.573696 env[1407]: time="2024-07-02T07:54:02.573667538Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jul 2 07:54:02.582377 env[1407]: time="2024-07-02T07:54:02.581025147Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.161-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jul 2 07:54:02.582377 env[1407]: time="2024-07-02T07:54:02.581065441Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jul 2 07:54:02.582377 env[1407]: time="2024-07-02T07:54:02.581376395Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jul 2 07:54:02.582377 env[1407]: time="2024-07-02T07:54:02.581406591Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jul 2 07:54:02.582377 env[1407]: time="2024-07-02T07:54:02.581431687Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Jul 2 07:54:02.582377 env[1407]: time="2024-07-02T07:54:02.581450984Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jul 2 07:54:02.582377 env[1407]: time="2024-07-02T07:54:02.581559468Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jul 2 07:54:02.586914 env[1407]: time="2024-07-02T07:54:02.586880279Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jul 2 07:54:02.587543 env[1407]: time="2024-07-02T07:54:02.587134941Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jul 2 07:54:02.587543 env[1407]: time="2024-07-02T07:54:02.587169536Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jul 2 07:54:02.587543 env[1407]: time="2024-07-02T07:54:02.587242725Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Jul 2 07:54:02.587543 env[1407]: time="2024-07-02T07:54:02.587258923Z" level=info msg="metadata content store policy set" policy=shared
Jul 2 07:54:02.611311 env[1407]: time="2024-07-02T07:54:02.611273363Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jul 2 07:54:02.611401 env[1407]: time="2024-07-02T07:54:02.611318956Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jul 2 07:54:02.611401 env[1407]: time="2024-07-02T07:54:02.611338753Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jul 2 07:54:02.611401 env[1407]: time="2024-07-02T07:54:02.611385746Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jul 2 07:54:02.611519 env[1407]: time="2024-07-02T07:54:02.611406543Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jul 2 07:54:02.611519 env[1407]: time="2024-07-02T07:54:02.611425440Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jul 2 07:54:02.611519 env[1407]: time="2024-07-02T07:54:02.611445837Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jul 2 07:54:02.611519 env[1407]: time="2024-07-02T07:54:02.611463935Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jul 2 07:54:02.611519 env[1407]: time="2024-07-02T07:54:02.611483832Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
Jul 2 07:54:02.611519 env[1407]: time="2024-07-02T07:54:02.611503629Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jul 2 07:54:02.611749 env[1407]: time="2024-07-02T07:54:02.611522626Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jul 2 07:54:02.611749 env[1407]: time="2024-07-02T07:54:02.611540823Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jul 2 07:54:02.611749 env[1407]: time="2024-07-02T07:54:02.611676603Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jul 2 07:54:02.611911 env[1407]: time="2024-07-02T07:54:02.611772489Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jul 2 07:54:02.612517 env[1407]: time="2024-07-02T07:54:02.612095641Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jul 2 07:54:02.612517 env[1407]: time="2024-07-02T07:54:02.612143934Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jul 2 07:54:02.612517 env[1407]: time="2024-07-02T07:54:02.612163631Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Jul 2 07:54:02.612517 env[1407]: time="2024-07-02T07:54:02.612224122Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jul 2 07:54:02.612517 env[1407]: time="2024-07-02T07:54:02.612242119Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jul 2 07:54:02.612517 env[1407]: time="2024-07-02T07:54:02.612260916Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jul 2 07:54:02.612517 env[1407]: time="2024-07-02T07:54:02.612277614Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jul 2 07:54:02.612517 env[1407]: time="2024-07-02T07:54:02.612294311Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Jul 2 07:54:02.612517 env[1407]: time="2024-07-02T07:54:02.612311809Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jul 2 07:54:02.612517 env[1407]: time="2024-07-02T07:54:02.612328007Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jul 2 07:54:02.612517 env[1407]: time="2024-07-02T07:54:02.612344004Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Jul 2 07:54:02.612517 env[1407]: time="2024-07-02T07:54:02.612365601Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jul 2 07:54:02.612517 env[1407]: time="2024-07-02T07:54:02.612508180Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Jul 2 07:54:02.613022 env[1407]: time="2024-07-02T07:54:02.612528277Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Jul 2 07:54:02.613022 env[1407]: time="2024-07-02T07:54:02.612557372Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Jul 2 07:54:02.613022 env[1407]: time="2024-07-02T07:54:02.612585068Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Jul 2 07:54:02.613022 env[1407]: time="2024-07-02T07:54:02.612606965Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
Jul 2 07:54:02.613022 env[1407]: time="2024-07-02T07:54:02.612621963Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Jul 2 07:54:02.613022 env[1407]: time="2024-07-02T07:54:02.612648459Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin"
Jul 2 07:54:02.613022 env[1407]: time="2024-07-02T07:54:02.612690953Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Jul 2 07:54:02.613262 env[1407]: time="2024-07-02T07:54:02.612967512Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Jul 2 07:54:02.613262 env[1407]: time="2024-07-02T07:54:02.613042701Z" level=info msg="Connect containerd service"
Jul 2 07:54:02.613262 env[1407]: time="2024-07-02T07:54:02.613087094Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Jul 2 07:54:02.645858 env[1407]: time="2024-07-02T07:54:02.613898974Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jul 2 07:54:02.645858 env[1407]: time="2024-07-02T07:54:02.614903525Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jul 2 07:54:02.645858 env[1407]: time="2024-07-02T07:54:02.614966415Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jul 2 07:54:02.645858 env[1407]: time="2024-07-02T07:54:02.615285468Z" level=info msg="Start subscribing containerd event"
Jul 2 07:54:02.645858 env[1407]: time="2024-07-02T07:54:02.615334061Z" level=info msg="Start recovering state"
Jul 2 07:54:02.645858 env[1407]: time="2024-07-02T07:54:02.615389153Z" level=info msg="Start event monitor"
Jul 2 07:54:02.645858 env[1407]: time="2024-07-02T07:54:02.615404050Z" level=info msg="Start snapshots syncer"
Jul 2 07:54:02.645858 env[1407]: time="2024-07-02T07:54:02.615414449Z" level=info msg="Start cni network conf syncer for default"
Jul 2 07:54:02.645858 env[1407]: time="2024-07-02T07:54:02.615422248Z" level=info msg="Start streaming server"
Jul 2 07:54:02.645858 env[1407]: time="2024-07-02T07:54:02.634212762Z" level=info msg="containerd successfully booted in 0.224299s"
Jul 2 07:54:02.615114 systemd[1]: Started containerd.service.
Jul 2 07:54:02.654988 systemd[1]: nvidia.service: Deactivated successfully.
Jul 2 07:54:02.754535 bash[1440]: Updated "/home/core/.ssh/authorized_keys"
Jul 2 07:54:02.755143 systemd[1]: Finished update-ssh-keys-after-ignition.service.
Jul 2 07:54:03.175683 tar[1403]: linux-amd64/LICENSE
Jul 2 07:54:03.175912 tar[1403]: linux-amd64/README.md
Jul 2 07:54:03.184779 systemd[1]: Finished prepare-helm.service.
Jul 2 07:54:03.270331 update_engine[1398]: I0702 07:54:03.269927 1398 main.cc:92] Flatcar Update Engine starting
Jul 2 07:54:03.340507 systemd[1]: Started update-engine.service.
Jul 2 07:54:03.348435 update_engine[1398]: I0702 07:54:03.340587 1398 update_check_scheduler.cc:74] Next update check in 7m59s
Jul 2 07:54:03.345606 systemd[1]: Started locksmithd.service.
Jul 2 07:54:03.573162 systemd[1]: Started kubelet.service.
Jul 2 07:54:04.174359 sshd_keygen[1399]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jul 2 07:54:04.205013 systemd[1]: Finished sshd-keygen.service.
Jul 2 07:54:04.209375 systemd[1]: Starting issuegen.service...
Jul 2 07:54:04.212612 systemd[1]: Started waagent.service.
Jul 2 07:54:04.221076 systemd[1]: issuegen.service: Deactivated successfully.
Jul 2 07:54:04.221258 systemd[1]: Finished issuegen.service.
Jul 2 07:54:04.225480 systemd[1]: Starting systemd-user-sessions.service...
Jul 2 07:54:04.236264 systemd[1]: Finished systemd-user-sessions.service.
Jul 2 07:54:04.240727 systemd[1]: Started getty@tty1.service.
Jul 2 07:54:04.244638 systemd[1]: Started serial-getty@ttyS0.service.
Jul 2 07:54:04.247290 systemd[1]: Reached target getty.target.
Jul 2 07:54:04.249402 systemd[1]: Reached target multi-user.target.
Jul 2 07:54:04.255587 systemd[1]: Starting systemd-update-utmp-runlevel.service...
Jul 2 07:54:04.267148 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Jul 2 07:54:04.267330 systemd[1]: Finished systemd-update-utmp-runlevel.service.
Jul 2 07:54:04.270315 systemd[1]: Startup finished in 915ms (firmware) + 30.423s (loader) + 880ms (kernel) + 14.865s (initrd) + 28.030s (userspace) = 1min 15.116s.
Jul 2 07:54:04.342723 kubelet[1485]: E0702 07:54:04.342657 1485 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 2 07:54:04.344418 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 2 07:54:04.344593 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 2 07:54:04.344886 systemd[1]: kubelet.service: Consumed 1.146s CPU time.
Jul 2 07:54:04.637527 login[1508]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Jul 2 07:54:04.639248 login[1509]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Jul 2 07:54:04.664360 systemd[1]: Created slice user-500.slice.
Jul 2 07:54:04.665896 systemd[1]: Starting user-runtime-dir@500.service...
Jul 2 07:54:04.668805 systemd-logind[1395]: New session 2 of user core.
Jul 2 07:54:04.677429 systemd-logind[1395]: New session 1 of user core.
Jul 2 07:54:04.681295 systemd[1]: Finished user-runtime-dir@500.service.
Jul 2 07:54:04.683648 systemd[1]: Starting user@500.service...
Jul 2 07:54:04.703387 (systemd)[1512]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jul 2 07:54:04.833109 systemd[1512]: Queued start job for default target default.target.
Jul 2 07:54:04.833736 systemd[1512]: Reached target paths.target.
Jul 2 07:54:04.833764 systemd[1512]: Reached target sockets.target.
Jul 2 07:54:04.833781 systemd[1512]: Reached target timers.target.
Jul 2 07:54:04.833796 systemd[1512]: Reached target basic.target.
Jul 2 07:54:04.833929 systemd[1]: Started user@500.service.
Jul 2 07:54:04.835217 systemd[1]: Started session-1.scope.
Jul 2 07:54:04.836097 systemd[1]: Started session-2.scope.
Jul 2 07:54:04.837084 systemd[1512]: Reached target default.target.
Jul 2 07:54:04.837275 systemd[1512]: Startup finished in 127ms.
Jul 2 07:54:04.899126 locksmithd[1482]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jul 2 07:54:11.502046 waagent[1503]: 2024-07-02T07:54:11.501920Z INFO Daemon Daemon Azure Linux Agent Version:2.6.0.2
Jul 2 07:54:11.505936 waagent[1503]: 2024-07-02T07:54:11.505855Z INFO Daemon Daemon OS: flatcar 3510.3.5
Jul 2 07:54:11.512714 waagent[1503]: 2024-07-02T07:54:11.507366Z INFO Daemon Daemon Python: 3.9.16
Jul 2 07:54:11.512714 waagent[1503]: 2024-07-02T07:54:11.508996Z INFO Daemon Daemon Run daemon
Jul 2 07:54:11.512714 waagent[1503]: 2024-07-02T07:54:11.510520Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='3510.3.5'
Jul 2 07:54:11.522557 waagent[1503]: 2024-07-02T07:54:11.522421Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1.
Jul 2 07:54:11.529257 waagent[1503]: 2024-07-02T07:54:11.529153Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service'
Jul 2 07:54:11.568393 waagent[1503]: 2024-07-02T07:54:11.530330Z INFO Daemon Daemon cloud-init is enabled: False
Jul 2 07:54:11.568393 waagent[1503]: 2024-07-02T07:54:11.531023Z INFO Daemon Daemon Using waagent for provisioning
Jul 2 07:54:11.568393 waagent[1503]: 2024-07-02T07:54:11.532388Z INFO Daemon Daemon Activate resource disk
Jul 2 07:54:11.568393 waagent[1503]: 2024-07-02T07:54:11.532663Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb
Jul 2 07:54:11.568393 waagent[1503]: 2024-07-02T07:54:11.540407Z INFO Daemon Daemon Found device: None
Jul 2 07:54:11.568393 waagent[1503]: 2024-07-02T07:54:11.541188Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology
Jul 2 07:54:11.568393 waagent[1503]: 2024-07-02T07:54:11.542044Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0
Jul 2 07:54:11.568393 waagent[1503]: 2024-07-02T07:54:11.543715Z INFO Daemon Daemon Clean protocol and wireserver endpoint
Jul 2 07:54:11.568393 waagent[1503]: 2024-07-02T07:54:11.544817Z INFO Daemon Daemon Running default provisioning handler
Jul 2 07:54:11.568393 waagent[1503]: 2024-07-02T07:54:11.554289Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1.
Jul 2 07:54:11.568393 waagent[1503]: 2024-07-02T07:54:11.556817Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service'
Jul 2 07:54:11.568393 waagent[1503]: 2024-07-02T07:54:11.557335Z INFO Daemon Daemon cloud-init is enabled: False
Jul 2 07:54:11.568393 waagent[1503]: 2024-07-02T07:54:11.558115Z INFO Daemon Daemon Copying ovf-env.xml
Jul 2 07:54:11.620310 waagent[1503]: 2024-07-02T07:54:11.619309Z INFO Daemon Daemon Successfully mounted dvd
Jul 2 07:54:11.758296 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully.
Jul 2 07:54:11.781207 waagent[1503]: 2024-07-02T07:54:11.781068Z INFO Daemon Daemon Detect protocol endpoint
Jul 2 07:54:11.795742 waagent[1503]: 2024-07-02T07:54:11.782522Z INFO Daemon Daemon Clean protocol and wireserver endpoint
Jul 2 07:54:11.795742 waagent[1503]: 2024-07-02T07:54:11.783519Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler
Jul 2 07:54:11.795742 waagent[1503]: 2024-07-02T07:54:11.784462Z INFO Daemon Daemon Test for route to 168.63.129.16
Jul 2 07:54:11.795742 waagent[1503]: 2024-07-02T07:54:11.785592Z INFO Daemon Daemon Route to 168.63.129.16 exists
Jul 2 07:54:11.795742 waagent[1503]: 2024-07-02T07:54:11.786341Z INFO Daemon Daemon Wire server endpoint:168.63.129.16
Jul 2 07:54:11.919276 waagent[1503]: 2024-07-02T07:54:11.919195Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05
Jul 2 07:54:11.926502 waagent[1503]: 2024-07-02T07:54:11.920958Z INFO Daemon Daemon Wire protocol version:2012-11-30
Jul 2 07:54:11.926502 waagent[1503]: 2024-07-02T07:54:11.921536Z INFO Daemon Daemon Server preferred version:2015-04-05
Jul 2 07:54:12.645262 waagent[1503]: 2024-07-02T07:54:12.645089Z INFO Daemon Daemon Initializing goal state during protocol detection
Jul 2 07:54:12.657284 waagent[1503]: 2024-07-02T07:54:12.657201Z INFO Daemon Daemon Forcing an update of the goal state..
Jul 2 07:54:12.660288 waagent[1503]: 2024-07-02T07:54:12.660216Z INFO Daemon Daemon Fetching goal state [incarnation 1]
Jul 2 07:54:12.747438 waagent[1503]: 2024-07-02T07:54:12.747290Z INFO Daemon Daemon Found private key matching thumbprint F8DDC07CD4C1A7B2B106457B40CA7538E6D2A7CA
Jul 2 07:54:12.757048 waagent[1503]: 2024-07-02T07:54:12.748813Z INFO Daemon Daemon Certificate with thumbprint FE9204E387CD9E69DA467DF210F8821ABCC54446 has no matching private key.
Jul 2 07:54:12.757048 waagent[1503]: 2024-07-02T07:54:12.749829Z INFO Daemon Daemon Fetch goal state completed
Jul 2 07:54:12.796210 waagent[1503]: 2024-07-02T07:54:12.796114Z INFO Daemon Daemon Fetched new vmSettings [correlation ID: 58705628-7cc4-4126-9840-f366974c6732 New eTag: 11551044705065629214]
Jul 2 07:54:12.803357 waagent[1503]: 2024-07-02T07:54:12.798206Z INFO Daemon Daemon Status Blob type 'None' is not valid, assuming BlockBlob
Jul 2 07:54:12.808514 waagent[1503]: 2024-07-02T07:54:12.808451Z INFO Daemon Daemon Starting provisioning
Jul 2 07:54:12.814734 waagent[1503]: 2024-07-02T07:54:12.809705Z INFO Daemon Daemon Handle ovf-env.xml.
Jul 2 07:54:12.814734 waagent[1503]: 2024-07-02T07:54:12.810610Z INFO Daemon Daemon Set hostname [ci-3510.3.5-a-61dd50c322]
Jul 2 07:54:12.829519 waagent[1503]: 2024-07-02T07:54:12.829405Z INFO Daemon Daemon Publish hostname [ci-3510.3.5-a-61dd50c322]
Jul 2 07:54:12.836463 waagent[1503]: 2024-07-02T07:54:12.831030Z INFO Daemon Daemon Examine /proc/net/route for primary interface
Jul 2 07:54:12.836463 waagent[1503]: 2024-07-02T07:54:12.832255Z INFO Daemon Daemon Primary interface is [eth0]
Jul 2 07:54:12.845824 systemd[1]: systemd-networkd-wait-online.service: Deactivated successfully.
Jul 2 07:54:12.846090 systemd[1]: Stopped systemd-networkd-wait-online.service.
Jul 2 07:54:12.846175 systemd[1]: Stopping systemd-networkd-wait-online.service...
Jul 2 07:54:12.846539 systemd[1]: Stopping systemd-networkd.service...
Jul 2 07:54:12.850605 systemd-networkd[1170]: eth0: DHCPv6 lease lost
Jul 2 07:54:12.852206 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jul 2 07:54:12.852415 systemd[1]: Stopped systemd-networkd.service.
Jul 2 07:54:12.855019 systemd[1]: Starting systemd-networkd.service...
Jul 2 07:54:12.887008 systemd-networkd[1558]: enP23892s1: Link UP
Jul 2 07:54:12.887018 systemd-networkd[1558]: enP23892s1: Gained carrier
Jul 2 07:54:12.888356 systemd-networkd[1558]: eth0: Link UP
Jul 2 07:54:12.888365 systemd-networkd[1558]: eth0: Gained carrier
Jul 2 07:54:12.888830 systemd-networkd[1558]: lo: Link UP
Jul 2 07:54:12.888838 systemd-networkd[1558]: lo: Gained carrier
Jul 2 07:54:12.889160 systemd-networkd[1558]: eth0: Gained IPv6LL
Jul 2 07:54:12.889446 systemd-networkd[1558]: Enumeration completed
Jul 2 07:54:12.889568 systemd[1]: Started systemd-networkd.service.
Jul 2 07:54:12.892950 waagent[1503]: 2024-07-02T07:54:12.891584Z INFO Daemon Daemon Create user account if not exists
Jul 2 07:54:12.891894 systemd[1]: Starting systemd-networkd-wait-online.service...
Jul 2 07:54:12.895301 waagent[1503]: 2024-07-02T07:54:12.893715Z INFO Daemon Daemon User core already exists, skip useradd
Jul 2 07:54:12.895301 waagent[1503]: 2024-07-02T07:54:12.894361Z INFO Daemon Daemon Configure sudoer
Jul 2 07:54:12.895671 waagent[1503]: 2024-07-02T07:54:12.895614Z INFO Daemon Daemon Configure sshd
Jul 2 07:54:12.896329 waagent[1503]: 2024-07-02T07:54:12.896278Z INFO Daemon Daemon Deploy ssh public key.
Jul 2 07:54:12.907063 systemd-networkd[1558]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 2 07:54:12.953741 systemd-networkd[1558]: eth0: DHCPv4 address 10.200.8.39/24, gateway 10.200.8.1 acquired from 168.63.129.16
Jul 2 07:54:12.957691 systemd[1]: Finished systemd-networkd-wait-online.service.
Jul 2 07:54:14.304960 waagent[1503]: 2024-07-02T07:54:14.304843Z INFO Daemon Daemon Provisioning complete
Jul 2 07:54:14.322684 waagent[1503]: 2024-07-02T07:54:14.322588Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping
Jul 2 07:54:14.325929 waagent[1503]: 2024-07-02T07:54:14.325844Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions.
Jul 2 07:54:14.331452 waagent[1503]: 2024-07-02T07:54:14.331385Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.6.0.2 is the most current agent Jul 2 07:54:14.503057 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 2 07:54:14.503340 systemd[1]: Stopped kubelet.service. Jul 2 07:54:14.503399 systemd[1]: kubelet.service: Consumed 1.146s CPU time. Jul 2 07:54:14.505384 systemd[1]: Starting kubelet.service... Jul 2 07:54:14.626124 systemd[1]: Started kubelet.service. Jul 2 07:54:14.643999 waagent[1567]: 2024-07-02T07:54:14.643909Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 is running as the goal state agent Jul 2 07:54:14.644809 waagent[1567]: 2024-07-02T07:54:14.644746Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jul 2 07:54:14.644964 waagent[1567]: 2024-07-02T07:54:14.644910Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Jul 2 07:54:14.655979 waagent[1567]: 2024-07-02T07:54:14.655904Z INFO ExtHandler ExtHandler Forcing an update of the goal state.. Jul 2 07:54:14.656132 waagent[1567]: 2024-07-02T07:54:14.656082Z INFO ExtHandler ExtHandler Fetching goal state [incarnation 1] Jul 2 07:54:15.955585 waagent[1567]: 2024-07-02T07:54:15.955366Z INFO ExtHandler ExtHandler Found private key matching thumbprint F8DDC07CD4C1A7B2B106457B40CA7538E6D2A7CA Jul 2 07:54:15.956152 waagent[1567]: 2024-07-02T07:54:15.955921Z INFO ExtHandler ExtHandler Certificate with thumbprint FE9204E387CD9E69DA467DF210F8821ABCC54446 has no matching private key. 
Jul 2 07:54:15.956371 waagent[1567]: 2024-07-02T07:54:15.956296Z INFO ExtHandler ExtHandler Fetch goal state completed Jul 2 07:54:15.970424 waagent[1567]: 2024-07-02T07:54:15.970364Z INFO ExtHandler ExtHandler Fetched new vmSettings [correlation ID: da322845-c68a-4fdd-9aea-6604faded938 New eTag: 11551044705065629214] Jul 2 07:54:15.970980 waagent[1567]: 2024-07-02T07:54:15.970921Z INFO ExtHandler ExtHandler Status Blob type 'None' is not valid, assuming BlockBlob Jul 2 07:54:16.041288 kubelet[1574]: E0702 07:54:16.041228 1574 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 07:54:16.044699 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 07:54:16.044879 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jul 2 07:54:16.207249 waagent[1567]: 2024-07-02T07:54:16.206992Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.5; OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Jul 2 07:54:16.236783 waagent[1567]: 2024-07-02T07:54:16.236665Z INFO ExtHandler ExtHandler WALinuxAgent-2.6.0.2 running as process 1567 Jul 2 07:54:16.240646 waagent[1567]: 2024-07-02T07:54:16.240571Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '3510.3.5', '', 'Flatcar Container Linux by Kinvolk'] Jul 2 07:54:16.241896 waagent[1567]: 2024-07-02T07:54:16.241836Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Jul 2 07:54:16.337452 waagent[1567]: 2024-07-02T07:54:16.337369Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Jul 2 07:54:16.338005 waagent[1567]: 2024-07-02T07:54:16.337928Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Jul 2 07:54:16.346787 waagent[1567]: 2024-07-02T07:54:16.346730Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Jul 2 07:54:16.347268 waagent[1567]: 2024-07-02T07:54:16.347207Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service' Jul 2 07:54:16.348357 waagent[1567]: 2024-07-02T07:54:16.348288Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [False], cgroups enabled [False], python supported: [True] Jul 2 07:54:16.349667 waagent[1567]: 2024-07-02T07:54:16.349607Z INFO ExtHandler ExtHandler Starting env monitor service. Jul 2 07:54:16.350426 waagent[1567]: 2024-07-02T07:54:16.350369Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. 
Jul 2 07:54:16.350537 waagent[1567]: 2024-07-02T07:54:16.350474Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jul 2 07:54:16.350996 waagent[1567]: 2024-07-02T07:54:16.350942Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jul 2 07:54:16.351106 waagent[1567]: 2024-07-02T07:54:16.351050Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Jul 2 07:54:16.352005 waagent[1567]: 2024-07-02T07:54:16.351949Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Jul 2 07:54:16.352172 waagent[1567]: 2024-07-02T07:54:16.352121Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Jul 2 07:54:16.352716 waagent[1567]: 2024-07-02T07:54:16.352655Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Jul 2 07:54:16.353303 waagent[1567]: 2024-07-02T07:54:16.353245Z INFO EnvHandler ExtHandler Configure routes Jul 2 07:54:16.353448 waagent[1567]: 2024-07-02T07:54:16.353393Z INFO ExtHandler ExtHandler Start Extension Telemetry service. 
Jul 2 07:54:16.353558 waagent[1567]: 2024-07-02T07:54:16.353485Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Jul 2 07:54:16.353558 waagent[1567]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Jul 2 07:54:16.353558 waagent[1567]: eth0 00000000 0108C80A 0003 0 0 1024 00000000 0 0 0 Jul 2 07:54:16.353558 waagent[1567]: eth0 0008C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Jul 2 07:54:16.353558 waagent[1567]: eth0 0108C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Jul 2 07:54:16.353558 waagent[1567]: eth0 10813FA8 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jul 2 07:54:16.353558 waagent[1567]: eth0 FEA9FEA9 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jul 2 07:54:16.354333 waagent[1567]: 2024-07-02T07:54:16.354280Z INFO EnvHandler ExtHandler Gateway:None Jul 2 07:54:16.356446 waagent[1567]: 2024-07-02T07:54:16.356250Z INFO EnvHandler ExtHandler Routes:None Jul 2 07:54:16.358503 waagent[1567]: 2024-07-02T07:54:16.358438Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Jul 2 07:54:16.358650 waagent[1567]: 2024-07-02T07:54:16.358574Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Jul 2 07:54:16.359174 waagent[1567]: 2024-07-02T07:54:16.359121Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Jul 2 07:54:16.371311 waagent[1567]: 2024-07-02T07:54:16.371264Z INFO ExtHandler ExtHandler Checking for agent updates (family: Prod) Jul 2 07:54:16.372329 waagent[1567]: 2024-07-02T07:54:16.372277Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required Jul 2 07:54:16.373252 waagent[1567]: 2024-07-02T07:54:16.373205Z INFO ExtHandler ExtHandler [PERIODIC] Request failed using the direct channel. 
Error: 'NoneType' object has no attribute 'getheaders' Jul 2 07:54:16.400767 waagent[1567]: 2024-07-02T07:54:16.400660Z ERROR EnvHandler ExtHandler Failed to get the PID of the DHCP client: invalid literal for int() with base 10: 'MainPID=1558' Jul 2 07:54:16.423704 waagent[1567]: 2024-07-02T07:54:16.423639Z INFO ExtHandler ExtHandler Default channel changed to HostGA channel. Jul 2 07:54:16.499977 waagent[1567]: 2024-07-02T07:54:16.499785Z INFO MonitorHandler ExtHandler Network interfaces: Jul 2 07:54:16.499977 waagent[1567]: Executing ['ip', '-a', '-o', 'link']: Jul 2 07:54:16.499977 waagent[1567]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Jul 2 07:54:16.499977 waagent[1567]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:dd:8a:54 brd ff:ff:ff:ff:ff:ff Jul 2 07:54:16.499977 waagent[1567]: 3: enP23892s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:dd:8a:54 brd ff:ff:ff:ff:ff:ff\ altname enP23892p0s2 Jul 2 07:54:16.499977 waagent[1567]: Executing ['ip', '-4', '-a', '-o', 'address']: Jul 2 07:54:16.499977 waagent[1567]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Jul 2 07:54:16.499977 waagent[1567]: 2: eth0 inet 10.200.8.39/24 metric 1024 brd 10.200.8.255 scope global eth0\ valid_lft forever preferred_lft forever Jul 2 07:54:16.499977 waagent[1567]: Executing ['ip', '-6', '-a', '-o', 'address']: Jul 2 07:54:16.499977 waagent[1567]: 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever Jul 2 07:54:16.499977 waagent[1567]: 2: eth0 inet6 fe80::20d:3aff:fedd:8a54/64 scope link \ valid_lft forever preferred_lft forever Jul 2 07:54:16.754090 waagent[1567]: 2024-07-02T07:54:16.754023Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 discovered update WALinuxAgent-2.11.1.4 -- exiting
Jul 2 07:54:17.337168 waagent[1503]: 2024-07-02T07:54:17.336957Z INFO Daemon Daemon Agent WALinuxAgent-2.6.0.2 launched with command '/usr/share/oem/python/bin/python -u /usr/share/oem/bin/waagent -run-exthandlers' is successfully running Jul 2 07:54:17.341938 waagent[1503]: 2024-07-02T07:54:17.341876Z INFO Daemon Daemon Determined Agent WALinuxAgent-2.11.1.4 to be the latest agent Jul 2 07:54:18.377130 waagent[1616]: 2024-07-02T07:54:18.377011Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.11.1.4) Jul 2 07:54:18.490665 waagent[1616]: 2024-07-02T07:54:18.490489Z INFO ExtHandler ExtHandler OS: flatcar 3510.3.5 Jul 2 07:54:18.490935 waagent[1616]: 2024-07-02T07:54:18.490863Z INFO ExtHandler ExtHandler Python: 3.9.16 Jul 2 07:54:18.491128 waagent[1616]: 2024-07-02T07:54:18.491069Z INFO ExtHandler ExtHandler CPU Arch: x86_64 Jul 2 07:54:18.502988 waagent[1616]: 2024-07-02T07:54:18.502886Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.5; OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; Arch: x86_64; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Jul 2 07:54:18.503376 waagent[1616]: 2024-07-02T07:54:18.503318Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jul 2 07:54:18.503562 waagent[1616]: 2024-07-02T07:54:18.503493Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Jul 2 07:54:18.515309 waagent[1616]: 2024-07-02T07:54:18.515236Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Jul 2 07:54:18.528589 waagent[1616]: 2024-07-02T07:54:18.528518Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.151 Jul 2 07:54:18.529528 waagent[1616]: 2024-07-02T07:54:18.529466Z INFO ExtHandler Jul 2 07:54:18.529698 waagent[1616]: 2024-07-02T07:54:18.529643Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 63a454c6-a947-4851-8cfa-e7199ba83832 eTag: 11551044705065629214 source: Fabric]
Jul 2 07:54:18.530388 waagent[1616]: 2024-07-02T07:54:18.530329Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. Jul 2 07:54:18.531481 waagent[1616]: 2024-07-02T07:54:18.531419Z INFO ExtHandler Jul 2 07:54:18.531634 waagent[1616]: 2024-07-02T07:54:18.531580Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Jul 2 07:54:18.538301 waagent[1616]: 2024-07-02T07:54:18.538248Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Jul 2 07:54:18.538757 waagent[1616]: 2024-07-02T07:54:18.538706Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required Jul 2 07:54:18.559484 waagent[1616]: 2024-07-02T07:54:18.559408Z INFO ExtHandler ExtHandler Default channel changed to HostGAPlugin channel. Jul 2 07:54:18.624566 waagent[1616]: 2024-07-02T07:54:18.624420Z INFO ExtHandler Downloaded certificate {'thumbprint': 'FE9204E387CD9E69DA467DF210F8821ABCC54446', 'hasPrivateKey': False} Jul 2 07:54:18.625527 waagent[1616]: 2024-07-02T07:54:18.625456Z INFO ExtHandler Downloaded certificate {'thumbprint': 'F8DDC07CD4C1A7B2B106457B40CA7538E6D2A7CA', 'hasPrivateKey': True} Jul 2 07:54:18.626584 waagent[1616]: 2024-07-02T07:54:18.626493Z INFO ExtHandler Fetch goal state completed Jul 2 07:54:18.647458 waagent[1616]: 2024-07-02T07:54:18.647303Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.0.7 1 Nov 2022 (Library: OpenSSL 3.0.7 1 Nov 2022) Jul 2 07:54:18.658768 waagent[1616]: 2024-07-02T07:54:18.658686Z INFO ExtHandler ExtHandler WALinuxAgent-2.11.1.4 running as process 1616 Jul 2 07:54:18.662067 waagent[1616]: 2024-07-02T07:54:18.662003Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '3510.3.5', '', 'Flatcar Container Linux by Kinvolk'] Jul 2 07:54:18.663440 waagent[1616]: 2024-07-02T07:54:18.663380Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Jul 2 07:54:18.668020 waagent[1616]: 2024-07-02T07:54:18.667965Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service
Jul 2 07:54:18.668391 waagent[1616]: 2024-07-02T07:54:18.668335Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Jul 2 07:54:18.676318 waagent[1616]: 2024-07-02T07:54:18.676263Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Jul 2 07:54:18.676769 waagent[1616]: 2024-07-02T07:54:18.676713Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service' Jul 2 07:54:18.682672 waagent[1616]: 2024-07-02T07:54:18.682579Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Jul 2 07:54:18.683605 waagent[1616]: 2024-07-02T07:54:18.683528Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] Jul 2 07:54:18.685055 waagent[1616]: 2024-07-02T07:54:18.684993Z INFO ExtHandler ExtHandler Starting env monitor service. Jul 2 07:54:18.685468 waagent[1616]: 2024-07-02T07:54:18.685411Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jul 2 07:54:18.685643 waagent[1616]: 2024-07-02T07:54:18.685591Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Jul 2 07:54:18.686212 waagent[1616]: 2024-07-02T07:54:18.686137Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled.
Jul 2 07:54:18.686504 waagent[1616]: 2024-07-02T07:54:18.686449Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Jul 2 07:54:18.686504 waagent[1616]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Jul 2 07:54:18.686504 waagent[1616]: eth0 00000000 0108C80A 0003 0 0 1024 00000000 0 0 0 Jul 2 07:54:18.686504 waagent[1616]: eth0 0008C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Jul 2 07:54:18.686504 waagent[1616]: eth0 0108C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Jul 2 07:54:18.686504 waagent[1616]: eth0 10813FA8 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jul 2 07:54:18.686504 waagent[1616]: eth0 FEA9FEA9 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jul 2 07:54:18.688709 waagent[1616]: 2024-07-02T07:54:18.688622Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Jul 2 07:54:18.689477 waagent[1616]: 2024-07-02T07:54:18.689413Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jul 2 07:54:18.689795 waagent[1616]: 2024-07-02T07:54:18.689726Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Jul 2 07:54:18.689998 waagent[1616]: 2024-07-02T07:54:18.689928Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Jul 2 07:54:18.692474 waagent[1616]: 2024-07-02T07:54:18.692377Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Jul 2 07:54:18.693558 waagent[1616]: 2024-07-02T07:54:18.693462Z INFO EnvHandler ExtHandler Configure routes Jul 2 07:54:18.694001 waagent[1616]: 2024-07-02T07:54:18.693932Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Jul 2 07:54:18.694377 waagent[1616]: 2024-07-02T07:54:18.694321Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. 
Jul 2 07:54:18.694617 waagent[1616]: 2024-07-02T07:54:18.694565Z INFO EnvHandler ExtHandler Gateway:None Jul 2 07:54:18.696830 waagent[1616]: 2024-07-02T07:54:18.696684Z INFO EnvHandler ExtHandler Routes:None Jul 2 07:54:18.697766 waagent[1616]: 2024-07-02T07:54:18.697696Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Jul 2 07:54:18.698512 waagent[1616]: 2024-07-02T07:54:18.698456Z INFO MonitorHandler ExtHandler Network interfaces: Jul 2 07:54:18.698512 waagent[1616]: Executing ['ip', '-a', '-o', 'link']: Jul 2 07:54:18.698512 waagent[1616]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Jul 2 07:54:18.698512 waagent[1616]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:dd:8a:54 brd ff:ff:ff:ff:ff:ff Jul 2 07:54:18.698512 waagent[1616]: 3: enP23892s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:dd:8a:54 brd ff:ff:ff:ff:ff:ff\ altname enP23892p0s2 Jul 2 07:54:18.698512 waagent[1616]: Executing ['ip', '-4', '-a', '-o', 'address']: Jul 2 07:54:18.698512 waagent[1616]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Jul 2 07:54:18.698512 waagent[1616]: 2: eth0 inet 10.200.8.39/24 metric 1024 brd 10.200.8.255 scope global eth0\ valid_lft forever preferred_lft forever Jul 2 07:54:18.698512 waagent[1616]: Executing ['ip', '-6', '-a', '-o', 'address']: Jul 2 07:54:18.698512 waagent[1616]: 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever Jul 2 07:54:18.698512 waagent[1616]: 2: eth0 inet6 fe80::20d:3aff:fedd:8a54/64 scope link \ valid_lft forever preferred_lft forever Jul 2 07:54:18.722108 waagent[1616]: 2024-07-02T07:54:18.722019Z INFO ExtHandler ExtHandler Downloading agent manifest Jul 2 07:54:18.760203 waagent[1616]: 2024-07-02T07:54:18.760124Z INFO ExtHandler ExtHandler
Jul 2 07:54:18.760790 waagent[1616]: 2024-07-02T07:54:18.760730Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: cabe275b-a720-4652-80b1-8fac71020a5b correlation 5636eb75-ef1b-427c-9964-d5486a5a3e05 created: 2024-07-02T07:52:39.108604Z] Jul 2 07:54:18.763337 waagent[1616]: 2024-07-02T07:54:18.763282Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Jul 2 07:54:18.765153 waagent[1616]: 2024-07-02T07:54:18.765101Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 4 ms] Jul 2 07:54:18.791348 waagent[1616]: 2024-07-02T07:54:18.791255Z INFO ExtHandler ExtHandler Looking for existing remote access users. Jul 2 07:54:18.810249 waagent[1616]: 2024-07-02T07:54:18.810112Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.11.1.4 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: D8B6DC16-1128-4093-8B52-F425CA691F95;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 1] Jul 2 07:54:18.825679 waagent[1616]: 2024-07-02T07:54:18.825571Z INFO EnvHandler ExtHandler Created firewall rules for the Azure Fabric: Jul 2 07:54:18.825679 waagent[1616]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jul 2 07:54:18.825679 waagent[1616]: pkts bytes target prot opt in out source destination Jul 2 07:54:18.825679 waagent[1616]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jul 2 07:54:18.825679 waagent[1616]: pkts bytes target prot opt in out source destination Jul 2 07:54:18.825679 waagent[1616]: Chain OUTPUT (policy ACCEPT 1 packets, 52 bytes) Jul 2 07:54:18.825679 waagent[1616]: pkts bytes target prot opt in out source destination Jul 2 07:54:18.825679 waagent[1616]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jul 2 07:54:18.825679 waagent[1616]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jul 2 07:54:18.825679 waagent[1616]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW
Jul 2 07:54:18.832777 waagent[1616]: 2024-07-02T07:54:18.832678Z INFO EnvHandler ExtHandler Current Firewall rules: Jul 2 07:54:18.832777 waagent[1616]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jul 2 07:54:18.832777 waagent[1616]: pkts bytes target prot opt in out source destination Jul 2 07:54:18.832777 waagent[1616]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jul 2 07:54:18.832777 waagent[1616]: pkts bytes target prot opt in out source destination Jul 2 07:54:18.832777 waagent[1616]: Chain OUTPUT (policy ACCEPT 1 packets, 52 bytes) Jul 2 07:54:18.832777 waagent[1616]: pkts bytes target prot opt in out source destination Jul 2 07:54:18.832777 waagent[1616]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jul 2 07:54:18.832777 waagent[1616]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jul 2 07:54:18.832777 waagent[1616]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jul 2 07:54:18.833342 waagent[1616]: 2024-07-02T07:54:18.833288Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Jul 2 07:54:26.253067 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jul 2 07:54:26.253398 systemd[1]: Stopped kubelet.service. Jul 2 07:54:26.255455 systemd[1]: Starting kubelet.service... Jul 2 07:54:26.336499 systemd[1]: Started kubelet.service. Jul 2 07:54:26.877644 kubelet[1668]: E0702 07:54:26.877582 1668 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 07:54:26.879668 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 07:54:26.879832 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 2 07:54:37.003316 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jul 2 07:54:37.003660 systemd[1]: Stopped kubelet.service. Jul 2 07:54:37.005880 systemd[1]: Starting kubelet.service... Jul 2 07:54:37.086822 systemd[1]: Started kubelet.service. Jul 2 07:54:37.129360 kubelet[1678]: E0702 07:54:37.129305 1678 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 07:54:37.131032 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 07:54:37.131195 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 2 07:54:40.238251 kernel: hv_balloon: Max. dynamic memory size: 8192 MB Jul 2 07:54:46.230581 systemd[1]: Created slice system-sshd.slice. Jul 2 07:54:46.232916 systemd[1]: Started sshd@0-10.200.8.39:22-10.200.16.10:49388.service. Jul 2 07:54:47.146928 sshd[1686]: Accepted publickey for core from 10.200.16.10 port 49388 ssh2: RSA SHA256:rMFzF1f+VHcPwzXfxcw29Fm3hFOpXl45tnQNe1IK4iE Jul 2 07:54:47.148722 sshd[1686]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:54:47.149811 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jul 2 07:54:47.150130 systemd[1]: Stopped kubelet.service. Jul 2 07:54:47.152144 systemd[1]: Starting kubelet.service... Jul 2 07:54:47.155984 systemd-logind[1395]: New session 3 of user core. Jul 2 07:54:47.157912 systemd[1]: Started session-3.scope. Jul 2 07:54:47.411484 systemd[1]: Started kubelet.service. Jul 2 07:54:47.707233 systemd[1]: Started sshd@1-10.200.8.39:22-10.200.16.10:49396.service. 
Jul 2 07:54:47.810465 kubelet[1693]: E0702 07:54:47.810402 1693 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 07:54:47.812240 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 07:54:47.812404 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 2 07:54:48.745308 sshd[1702]: Accepted publickey for core from 10.200.16.10 port 49396 ssh2: RSA SHA256:rMFzF1f+VHcPwzXfxcw29Fm3hFOpXl45tnQNe1IK4iE Jul 2 07:54:48.745702 update_engine[1398]: I0702 07:54:48.494751 1398 update_attempter.cc:509] Updating boot flags... Jul 2 07:54:48.792690 sshd[1702]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:54:48.800629 systemd-logind[1395]: New session 4 of user core. Jul 2 07:54:48.801482 systemd[1]: Started session-4.scope. Jul 2 07:54:49.166750 sshd[1702]: pam_unix(sshd:session): session closed for user core Jul 2 07:54:49.170166 systemd[1]: sshd@1-10.200.8.39:22-10.200.16.10:49396.service: Deactivated successfully. Jul 2 07:54:49.171217 systemd[1]: session-4.scope: Deactivated successfully. Jul 2 07:54:49.171882 systemd-logind[1395]: Session 4 logged out. Waiting for processes to exit. Jul 2 07:54:49.172661 systemd-logind[1395]: Removed session 4. Jul 2 07:54:49.275522 systemd[1]: Started sshd@2-10.200.8.39:22-10.200.16.10:45908.service. Jul 2 07:54:49.949340 sshd[1774]: Accepted publickey for core from 10.200.16.10 port 45908 ssh2: RSA SHA256:rMFzF1f+VHcPwzXfxcw29Fm3hFOpXl45tnQNe1IK4iE Jul 2 07:54:49.951081 sshd[1774]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:54:49.957079 systemd[1]: Started session-5.scope. Jul 2 07:54:49.957672 systemd-logind[1395]: New session 5 of user core. 
Jul 2 07:54:50.403541 sshd[1774]: pam_unix(sshd:session): session closed for user core Jul 2 07:54:50.407064 systemd[1]: sshd@2-10.200.8.39:22-10.200.16.10:45908.service: Deactivated successfully. Jul 2 07:54:50.408203 systemd-logind[1395]: Session 5 logged out. Waiting for processes to exit. Jul 2 07:54:50.408304 systemd[1]: session-5.scope: Deactivated successfully. Jul 2 07:54:50.409408 systemd-logind[1395]: Removed session 5. Jul 2 07:54:50.514984 systemd[1]: Started sshd@3-10.200.8.39:22-10.200.16.10:45924.service. Jul 2 07:54:51.160849 sshd[1780]: Accepted publickey for core from 10.200.16.10 port 45924 ssh2: RSA SHA256:rMFzF1f+VHcPwzXfxcw29Fm3hFOpXl45tnQNe1IK4iE Jul 2 07:54:51.162640 sshd[1780]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:54:51.168439 systemd[1]: Started session-6.scope. Jul 2 07:54:51.169316 systemd-logind[1395]: New session 6 of user core. Jul 2 07:54:51.620945 sshd[1780]: pam_unix(sshd:session): session closed for user core Jul 2 07:54:51.624389 systemd[1]: sshd@3-10.200.8.39:22-10.200.16.10:45924.service: Deactivated successfully. Jul 2 07:54:51.625394 systemd[1]: session-6.scope: Deactivated successfully. Jul 2 07:54:51.626177 systemd-logind[1395]: Session 6 logged out. Waiting for processes to exit. Jul 2 07:54:51.627101 systemd-logind[1395]: Removed session 6. Jul 2 07:54:51.728336 systemd[1]: Started sshd@4-10.200.8.39:22-10.200.16.10:45938.service. Jul 2 07:54:52.371446 sshd[1786]: Accepted publickey for core from 10.200.16.10 port 45938 ssh2: RSA SHA256:rMFzF1f+VHcPwzXfxcw29Fm3hFOpXl45tnQNe1IK4iE Jul 2 07:54:52.373233 sshd[1786]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:54:52.378582 systemd[1]: Started session-7.scope. Jul 2 07:54:52.379334 systemd-logind[1395]: New session 7 of user core. 
Jul 2 07:54:52.984368 sudo[1792]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 2 07:54:52.984762 sudo[1792]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jul 2 07:54:53.009555 systemd[1]: Starting docker.service... Jul 2 07:54:53.061124 env[1802]: time="2024-07-02T07:54:53.061059086Z" level=info msg="Starting up" Jul 2 07:54:53.062394 env[1802]: time="2024-07-02T07:54:53.062361978Z" level=info msg="parsed scheme: \"unix\"" module=grpc Jul 2 07:54:53.062501 env[1802]: time="2024-07-02T07:54:53.062490578Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Jul 2 07:54:53.062615 env[1802]: time="2024-07-02T07:54:53.062593477Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Jul 2 07:54:53.062698 env[1802]: time="2024-07-02T07:54:53.062686277Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Jul 2 07:54:53.064574 env[1802]: time="2024-07-02T07:54:53.064532366Z" level=info msg="parsed scheme: \"unix\"" module=grpc Jul 2 07:54:53.064574 env[1802]: time="2024-07-02T07:54:53.064569266Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Jul 2 07:54:53.064705 env[1802]: time="2024-07-02T07:54:53.064586766Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Jul 2 07:54:53.064705 env[1802]: time="2024-07-02T07:54:53.064599466Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Jul 2 07:54:53.071164 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1775708784-merged.mount: Deactivated successfully. Jul 2 07:54:53.122396 env[1802]: time="2024-07-02T07:54:53.122346448Z" level=info msg="Loading containers: start." 
Jul 2 07:54:53.311571 kernel: Initializing XFRM netlink socket Jul 2 07:54:53.341047 env[1802]: time="2024-07-02T07:54:53.341000142Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Jul 2 07:54:53.577956 systemd-networkd[1558]: docker0: Link UP Jul 2 07:54:53.596411 env[1802]: time="2024-07-02T07:54:53.596370534Z" level=info msg="Loading containers: done." Jul 2 07:54:53.608407 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck561174648-merged.mount: Deactivated successfully. Jul 2 07:54:53.642122 env[1802]: time="2024-07-02T07:54:53.642060182Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 2 07:54:53.642344 env[1802]: time="2024-07-02T07:54:53.642306881Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Jul 2 07:54:53.642462 env[1802]: time="2024-07-02T07:54:53.642438780Z" level=info msg="Daemon has completed initialization" Jul 2 07:54:53.681942 systemd[1]: Started docker.service. Jul 2 07:54:53.692563 env[1802]: time="2024-07-02T07:54:53.692402904Z" level=info msg="API listen on /run/docker.sock" Jul 2 07:54:58.003028 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Jul 2 07:54:58.003357 systemd[1]: Stopped kubelet.service. Jul 2 07:54:58.005887 systemd[1]: Starting kubelet.service... Jul 2 07:54:59.619606 systemd[1]: Started kubelet.service. 
Jul 2 07:54:59.662112 kubelet[1923]: E0702 07:54:59.662063 1923 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 07:54:59.663860 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 07:54:59.663978 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 2 07:54:59.706879 env[1407]: time="2024-07-02T07:54:59.706803844Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.11\"" Jul 2 07:55:00.562203 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2515345314.mount: Deactivated successfully. Jul 2 07:55:03.314176 env[1407]: time="2024-07-02T07:55:03.314106545Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.28.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:55:03.321882 env[1407]: time="2024-07-02T07:55:03.321829068Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b2de212bf8c1b7b0d1b2703356ac7ddcfccaadfcdcd32c1ae914b6078d11e524,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:55:03.332242 env[1407]: time="2024-07-02T07:55:03.332200522Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.28.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:55:03.337353 env[1407]: time="2024-07-02T07:55:03.337317609Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:aec9d1701c304eee8607d728a39baaa511d65bef6dd9861010618f63fbadeb10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 07:55:03.338196 env[1407]: time="2024-07-02T07:55:03.338138994Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.11\" returns image reference \"sha256:b2de212bf8c1b7b0d1b2703356ac7ddcfccaadfcdcd32c1ae914b6078d11e524\"" Jul 2 07:55:03.348879 env[1407]: time="2024-07-02T07:55:03.348845901Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.11\"" Jul 2 07:55:06.016194 env[1407]: time="2024-07-02T07:55:06.016128334Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.28.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:55:06.023082 env[1407]: time="2024-07-02T07:55:06.023038248Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:20145ae80ad309fd0c963e2539f6ef0be795ace696539514894b290892c1884b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:55:06.027060 env[1407]: time="2024-07-02T07:55:06.027022737Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.28.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:55:06.033144 env[1407]: time="2024-07-02T07:55:06.033113056Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:6014c3572ec683841bbb16f87b94da28ee0254b95e2dba2d1850d62bd0111f09,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:55:06.033776 env[1407]: time="2024-07-02T07:55:06.033744275Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.11\" returns image reference \"sha256:20145ae80ad309fd0c963e2539f6ef0be795ace696539514894b290892c1884b\"" Jul 2 07:55:06.045038 env[1407]: time="2024-07-02T07:55:06.045005830Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.11\""
Jul 2 07:55:07.564464 env[1407]: time="2024-07-02T07:55:07.564400009Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.28.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:55:07.569924 env[1407]: time="2024-07-02T07:55:07.569881025Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:12c62a5a0745d200eb8333ea6244f6d6328e64c5c3b645a4ade456cc645399b9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:55:07.574449 env[1407]: time="2024-07-02T07:55:07.574417059Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.28.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:55:07.578623 env[1407]: time="2024-07-02T07:55:07.578591338Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:46cf7475c8daffb743c856a1aea0ddea35e5acd2418be18b1e22cf98d9c9b445,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:55:07.579255 env[1407]: time="2024-07-02T07:55:07.579220859Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.11\" returns image reference \"sha256:12c62a5a0745d200eb8333ea6244f6d6328e64c5c3b645a4ade456cc645399b9\"" Jul 2 07:55:07.589576 env[1407]: time="2024-07-02T07:55:07.589531873Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.11\"" Jul 2 07:55:09.175799 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount301487579.mount: Deactivated successfully.
Jul 2 07:55:09.733707 env[1407]: time="2024-07-02T07:55:09.733642722Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.28.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:55:09.743350 env[1407]: time="2024-07-02T07:55:09.743299381Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:a3eea76ce409e136fe98838847fda217ce169eb7d1ceef544671d75f68e5a29c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:55:09.748179 env[1407]: time="2024-07-02T07:55:09.748125511Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.28.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:55:09.751954 env[1407]: time="2024-07-02T07:55:09.751920463Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:ae4b671d4cfc23dd75030bb4490207cd939b3b11a799bcb4119698cd712eb5b4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:55:09.752392 env[1407]: time="2024-07-02T07:55:09.752361211Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.11\" returns image reference \"sha256:a3eea76ce409e136fe98838847fda217ce169eb7d1ceef544671d75f68e5a29c\"" Jul 2 07:55:09.753038 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Jul 2 07:55:09.753312 systemd[1]: Stopped kubelet.service. Jul 2 07:55:09.755523 systemd[1]: Starting kubelet.service... Jul 2 07:55:09.769065 env[1407]: time="2024-07-02T07:55:09.769033042Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jul 2 07:55:09.842248 systemd[1]: Started kubelet.service. 
Jul 2 07:55:10.349886 kubelet[1957]: E0702 07:55:10.349812 1957 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 07:55:10.351793 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 07:55:10.351972 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 2 07:55:11.808228 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2351200184.mount: Deactivated successfully. Jul 2 07:55:11.831525 env[1407]: time="2024-07-02T07:55:11.831470923Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:55:11.841212 env[1407]: time="2024-07-02T07:55:11.841157840Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:55:11.846800 env[1407]: time="2024-07-02T07:55:11.846755414Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:55:11.851805 env[1407]: time="2024-07-02T07:55:11.851770053Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:55:11.852209 env[1407]: time="2024-07-02T07:55:11.852176507Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Jul 2 07:55:11.862325 env[1407]: 
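The kubelet restart loop above keeps failing on the same missing file, /var/lib/kubelet/config.yaml, once per restart attempt. A minimal sketch (Python; illustrative only, using a truncated fragment of the journal entry above) of pulling the failing path out of such an entry:

```python
import re

# Truncated fragment of the kubelet "command failed" journal entry above.
entry = ('run.go:74] "command failed" err="failed to load kubelet config file, '
         'path: /var/lib/kubelet/config.yaml, error: ..."')

# The failing path sits between "path: " and the next comma.
match = re.search(r'path: (\S+?),', entry)
print(match.group(1) if match else "no path found")
```

On kubeadm-managed nodes that file is normally written during `kubeadm init`/`kubeadm join`, which is consistent with the loop resolving later in the log once bootstrap proceeds.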
time="2024-07-02T07:55:11.862295476Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Jul 2 07:55:12.420982 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount835216447.mount: Deactivated successfully. Jul 2 07:55:16.060674 env[1407]: time="2024-07-02T07:55:16.060605207Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.10-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:55:16.066809 env[1407]: time="2024-07-02T07:55:16.066764104Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:55:16.070081 env[1407]: time="2024-07-02T07:55:16.070045283Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.10-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:55:16.073254 env[1407]: time="2024-07-02T07:55:16.073219572Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:55:16.073987 env[1407]: time="2024-07-02T07:55:16.073953401Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\"" Jul 2 07:55:16.084175 env[1407]: time="2024-07-02T07:55:16.084144904Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\"" Jul 2 07:55:16.595142 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4218964443.mount: Deactivated successfully. 
Jul 2 07:55:17.457852 env[1407]: time="2024-07-02T07:55:17.457787679Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.10.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:55:17.467834 env[1407]: time="2024-07-02T07:55:17.467784726Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:55:17.473250 env[1407]: time="2024-07-02T07:55:17.473212009Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.10.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:55:17.479366 env[1407]: time="2024-07-02T07:55:17.479327227Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:55:17.479939 env[1407]: time="2024-07-02T07:55:17.479904172Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\" returns image reference \"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc\"" Jul 2 07:55:20.172339 systemd[1]: Stopped kubelet.service. Jul 2 07:55:20.175959 systemd[1]: Starting kubelet.service... Jul 2 07:55:20.201126 systemd[1]: Reloading. 
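With the coredns pull above, the control-plane image set (kube-apiserver, kube-controller-manager, kube-scheduler, kube-proxy, pause, etcd, coredns) has been fetched. A small sketch (Python; the fragments below are excerpted verbatim from the containerd PullImage entries above) for collecting the pulled image references from such messages:

```python
import re

# Message fragments copied from the containerd log entries above.
entries = [
    r'msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.11\""',
    r'msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\""',
    r'msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\""',
]

# Image references sit between the escaped quotes after PullImage.
images = [re.search(r'PullImage \\"([^\\]+)\\"', e).group(1) for e in entries]
print(images)
```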
Jul 2 07:55:20.292055 /usr/lib/systemd/system-generators/torcx-generator[2060]: time="2024-07-02T07:55:20Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.5 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.5 /var/lib/torcx/store]" Jul 2 07:55:20.292097 /usr/lib/systemd/system-generators/torcx-generator[2060]: time="2024-07-02T07:55:20Z" level=info msg="torcx already run" Jul 2 07:55:20.384856 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 2 07:55:20.384876 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Jul 2 07:55:20.405009 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 2 07:55:20.598992 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jul 2 07:55:20.599129 systemd[1]: kubelet.service: Failed with result 'signal'. Jul 2 07:55:20.599616 systemd[1]: Stopped kubelet.service. Jul 2 07:55:20.602268 systemd[1]: Starting kubelet.service... Jul 2 07:55:21.762098 systemd[1]: Started kubelet.service. Jul 2 07:55:21.806480 kubelet[2127]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 2 07:55:21.806872 kubelet[2127]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Jul 2 07:55:21.806872 kubelet[2127]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 2 07:55:21.806872 kubelet[2127]: I0702 07:55:21.806646 2127 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 2 07:55:22.529361 kubelet[2127]: I0702 07:55:22.529317 2127 server.go:467] "Kubelet version" kubeletVersion="v1.28.7" Jul 2 07:55:22.529361 kubelet[2127]: I0702 07:55:22.529348 2127 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 2 07:55:23.292220 kubelet[2127]: I0702 07:55:22.529631 2127 server.go:895] "Client rotation is on, will bootstrap in background" Jul 2 07:55:23.354171 kubelet[2127]: E0702 07:55:23.354130 2127 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.8.39:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.8.39:6443: connect: connection refused Jul 2 07:55:23.354367 kubelet[2127]: I0702 07:55:23.354209 2127 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 2 07:55:23.363685 kubelet[2127]: I0702 07:55:23.363658 2127 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 2 07:55:23.364960 kubelet[2127]: I0702 07:55:23.364926 2127 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 2 07:55:23.365184 kubelet[2127]: I0702 07:55:23.365145 2127 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jul 2 07:55:23.365352 kubelet[2127]: I0702 07:55:23.365193 2127 topology_manager.go:138] "Creating topology manager with none policy" Jul 2 07:55:23.365352 kubelet[2127]: I0702 07:55:23.365206 2127 container_manager_linux.go:301] "Creating device plugin manager" Jul 2 07:55:23.366028 kubelet[2127]: I0702 
07:55:23.366000 2127 state_mem.go:36] "Initialized new in-memory state store" Jul 2 07:55:23.367308 kubelet[2127]: I0702 07:55:23.367287 2127 kubelet.go:393] "Attempting to sync node with API server" Jul 2 07:55:23.367415 kubelet[2127]: I0702 07:55:23.367312 2127 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 2 07:55:23.367415 kubelet[2127]: I0702 07:55:23.367344 2127 kubelet.go:309] "Adding apiserver pod source" Jul 2 07:55:23.367415 kubelet[2127]: I0702 07:55:23.367362 2127 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 2 07:55:23.371356 kubelet[2127]: W0702 07:55:23.371302 2127 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.200.8.39:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.5-a-61dd50c322&limit=500&resourceVersion=0": dial tcp 10.200.8.39:6443: connect: connection refused Jul 2 07:55:23.371506 kubelet[2127]: E0702 07:55:23.371493 2127 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.8.39:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.5-a-61dd50c322&limit=500&resourceVersion=0": dial tcp 10.200.8.39:6443: connect: connection refused Jul 2 07:55:23.372352 kubelet[2127]: I0702 07:55:23.372334 2127 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Jul 2 07:55:23.380101 kubelet[2127]: W0702 07:55:23.380080 2127 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
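The container-manager config logged earlier (container_manager_linux.go:270) carries the kubelet's HardEvictionThresholds: memory.available below 100Mi, nodefs.available below 10%, nodefs.inodesFree below 5%, imagefs.available below 15%. A sketch of how those thresholds read (Python; `should_evict` is a hypothetical illustration, not the kubelet's eviction_manager logic):

```python
# Hard-eviction thresholds as logged in the Node Config above.
# Floats are fractions of filesystem capacity; the Mi value is absolute.
hard_eviction = {
    "memory.available": "100Mi",
    "nodefs.available": 0.10,
    "nodefs.inodesFree": 0.05,
    "imagefs.available": 0.15,
}

def should_evict(signal, observed, capacity=None):
    """Return True when an observed value falls below its threshold.
    Illustrative only -- not the kubelet's actual eviction logic."""
    threshold = hard_eviction[signal]
    if isinstance(threshold, float):
        return observed < threshold * capacity
    # Crude parsing for the single "<n>Mi" quantity-style threshold.
    return observed < int(threshold.rstrip("Mi")) * 1024 * 1024

# e.g. 4 GB free on a 100 GB node filesystem is under the 10% floor.
print(should_evict("nodefs.available", 4_000_000_000, capacity=100_000_000_000))
```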
Jul 2 07:55:23.383743 kubelet[2127]: I0702 07:55:23.383723 2127 server.go:1232] "Started kubelet" Jul 2 07:55:23.389616 kubelet[2127]: W0702 07:55:23.389570 2127 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.200.8.39:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.39:6443: connect: connection refused Jul 2 07:55:23.389776 kubelet[2127]: E0702 07:55:23.389763 2127 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.8.39:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.39:6443: connect: connection refused Jul 2 07:55:23.390080 kubelet[2127]: E0702 07:55:23.389964 2127 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.5-a-61dd50c322.17de563bff0512f4", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.5-a-61dd50c322", UID:"ci-3510.3.5-a-61dd50c322", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.5-a-61dd50c322"}, FirstTimestamp:time.Date(2024, time.July, 2, 7, 55, 23, 383689972, time.Local), LastTimestamp:time.Date(2024, time.July, 2, 7, 55, 23, 383689972, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), 
ReportingController:"kubelet", ReportingInstance:"ci-3510.3.5-a-61dd50c322"}': 'Post "https://10.200.8.39:6443/api/v1/namespaces/default/events": dial tcp 10.200.8.39:6443: connect: connection refused'(may retry after sleeping) Jul 2 07:55:23.390377 kubelet[2127]: E0702 07:55:23.390360 2127 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Jul 2 07:55:23.390486 kubelet[2127]: E0702 07:55:23.390475 2127 kubelet.go:1431] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 2 07:55:23.390820 kubelet[2127]: I0702 07:55:23.390806 2127 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Jul 2 07:55:23.391303 kubelet[2127]: I0702 07:55:23.391290 2127 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 2 07:55:23.391427 kubelet[2127]: I0702 07:55:23.391419 2127 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jul 2 07:55:23.392250 kubelet[2127]: I0702 07:55:23.392233 2127 server.go:462] "Adding debug handlers to kubelet server" Jul 2 07:55:23.395825 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). 
Jul 2 07:55:23.396257 kubelet[2127]: I0702 07:55:23.396236 2127 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 2 07:55:23.398537 kubelet[2127]: E0702 07:55:23.398519 2127 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510.3.5-a-61dd50c322\" not found" Jul 2 07:55:23.398710 kubelet[2127]: I0702 07:55:23.398697 2127 volume_manager.go:291] "Starting Kubelet Volume Manager" Jul 2 07:55:23.398920 kubelet[2127]: I0702 07:55:23.398904 2127 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jul 2 07:55:23.399086 kubelet[2127]: I0702 07:55:23.399066 2127 reconciler_new.go:29] "Reconciler: start to sync state" Jul 2 07:55:23.399572 kubelet[2127]: W0702 07:55:23.399513 2127 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.200.8.39:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.39:6443: connect: connection refused Jul 2 07:55:23.399708 kubelet[2127]: E0702 07:55:23.399696 2127 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.8.39:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.39:6443: connect: connection refused Jul 2 07:55:23.401411 kubelet[2127]: E0702 07:55:23.401393 2127 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.39:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.5-a-61dd50c322?timeout=10s\": dial tcp 10.200.8.39:6443: connect: connection refused" interval="200ms" Jul 2 07:55:23.447042 kubelet[2127]: I0702 07:55:23.447004 2127 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 2 07:55:23.447042 kubelet[2127]: I0702 07:55:23.447026 2127 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 2 07:55:23.447042 kubelet[2127]: I0702 07:55:23.447045 2127 
state_mem.go:36] "Initialized new in-memory state store" Jul 2 07:55:23.456976 kubelet[2127]: I0702 07:55:23.456940 2127 policy_none.go:49] "None policy: Start" Jul 2 07:55:23.457798 kubelet[2127]: I0702 07:55:23.457769 2127 memory_manager.go:169] "Starting memorymanager" policy="None" Jul 2 07:55:23.457925 kubelet[2127]: I0702 07:55:23.457797 2127 state_mem.go:35] "Initializing new in-memory state store" Jul 2 07:55:23.471110 systemd[1]: Created slice kubepods.slice. Jul 2 07:55:23.475759 systemd[1]: Created slice kubepods-burstable.slice. Jul 2 07:55:23.478853 systemd[1]: Created slice kubepods-besteffort.slice. Jul 2 07:55:23.486110 kubelet[2127]: I0702 07:55:23.486090 2127 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 2 07:55:23.486515 kubelet[2127]: I0702 07:55:23.486501 2127 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 2 07:55:23.488151 kubelet[2127]: E0702 07:55:23.488137 2127 eviction_manager.go:258] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3510.3.5-a-61dd50c322\" not found" Jul 2 07:55:23.493329 kubelet[2127]: I0702 07:55:23.493303 2127 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 2 07:55:23.494755 kubelet[2127]: I0702 07:55:23.494729 2127 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 2 07:55:23.494755 kubelet[2127]: I0702 07:55:23.494753 2127 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 2 07:55:23.494891 kubelet[2127]: I0702 07:55:23.494774 2127 kubelet.go:2303] "Starting kubelet main sync loop" Jul 2 07:55:23.494891 kubelet[2127]: E0702 07:55:23.494822 2127 kubelet.go:2327] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Jul 2 07:55:23.499033 kubelet[2127]: W0702 07:55:23.498986 2127 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.200.8.39:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.39:6443: connect: connection refused Jul 2 07:55:23.499118 kubelet[2127]: E0702 07:55:23.499045 2127 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.8.39:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.39:6443: connect: connection refused Jul 2 07:55:23.500396 kubelet[2127]: I0702 07:55:23.500376 2127 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.5-a-61dd50c322" Jul 2 07:55:23.500689 kubelet[2127]: E0702 07:55:23.500671 2127 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.200.8.39:6443/api/v1/nodes\": dial tcp 10.200.8.39:6443: connect: connection refused" node="ci-3510.3.5-a-61dd50c322" Jul 2 07:55:23.597694 kubelet[2127]: I0702 07:55:23.595124 2127 topology_manager.go:215] "Topology Admit Handler" podUID="41212e9ae6af1b95f30138e9b506cc1e" podNamespace="kube-system" podName="kube-apiserver-ci-3510.3.5-a-61dd50c322" Jul 2 07:55:23.597884 kubelet[2127]: I0702 07:55:23.597848 2127 topology_manager.go:215] "Topology Admit Handler" podUID="d3c6599c950bf86a73161e97e720d54c" podNamespace="kube-system" podName="kube-controller-manager-ci-3510.3.5-a-61dd50c322" 
Jul 2 07:55:23.599799 kubelet[2127]: I0702 07:55:23.599776 2127 topology_manager.go:215] "Topology Admit Handler" podUID="ef6916c38195a09be4e70075b7e76be4" podNamespace="kube-system" podName="kube-scheduler-ci-3510.3.5-a-61dd50c322" Jul 2 07:55:23.600111 kubelet[2127]: I0702 07:55:23.600085 2127 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d3c6599c950bf86a73161e97e720d54c-ca-certs\") pod \"kube-controller-manager-ci-3510.3.5-a-61dd50c322\" (UID: \"d3c6599c950bf86a73161e97e720d54c\") " pod="kube-system/kube-controller-manager-ci-3510.3.5-a-61dd50c322" Jul 2 07:55:23.600248 kubelet[2127]: I0702 07:55:23.600225 2127 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d3c6599c950bf86a73161e97e720d54c-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.5-a-61dd50c322\" (UID: \"d3c6599c950bf86a73161e97e720d54c\") " pod="kube-system/kube-controller-manager-ci-3510.3.5-a-61dd50c322" Jul 2 07:55:23.600330 kubelet[2127]: I0702 07:55:23.600276 2127 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d3c6599c950bf86a73161e97e720d54c-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.5-a-61dd50c322\" (UID: \"d3c6599c950bf86a73161e97e720d54c\") " pod="kube-system/kube-controller-manager-ci-3510.3.5-a-61dd50c322" Jul 2 07:55:23.602219 kubelet[2127]: E0702 07:55:23.601943 2127 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.39:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.5-a-61dd50c322?timeout=10s\": dial tcp 10.200.8.39:6443: connect: connection refused" interval="400ms" Jul 2 07:55:23.606144 systemd[1]: Created slice kubepods-burstable-pod41212e9ae6af1b95f30138e9b506cc1e.slice. 
Jul 2 07:55:23.613685 systemd[1]: Created slice kubepods-burstable-podef6916c38195a09be4e70075b7e76be4.slice. Jul 2 07:55:23.618207 systemd[1]: Created slice kubepods-burstable-podd3c6599c950bf86a73161e97e720d54c.slice. Jul 2 07:55:23.700508 kubelet[2127]: I0702 07:55:23.700444 2127 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d3c6599c950bf86a73161e97e720d54c-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.5-a-61dd50c322\" (UID: \"d3c6599c950bf86a73161e97e720d54c\") " pod="kube-system/kube-controller-manager-ci-3510.3.5-a-61dd50c322" Jul 2 07:55:23.700508 kubelet[2127]: I0702 07:55:23.700519 2127 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/41212e9ae6af1b95f30138e9b506cc1e-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.5-a-61dd50c322\" (UID: \"41212e9ae6af1b95f30138e9b506cc1e\") " pod="kube-system/kube-apiserver-ci-3510.3.5-a-61dd50c322" Jul 2 07:55:23.700827 kubelet[2127]: I0702 07:55:23.700652 2127 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d3c6599c950bf86a73161e97e720d54c-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.5-a-61dd50c322\" (UID: \"d3c6599c950bf86a73161e97e720d54c\") " pod="kube-system/kube-controller-manager-ci-3510.3.5-a-61dd50c322" Jul 2 07:55:23.700827 kubelet[2127]: I0702 07:55:23.700689 2127 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/41212e9ae6af1b95f30138e9b506cc1e-ca-certs\") pod \"kube-apiserver-ci-3510.3.5-a-61dd50c322\" (UID: \"41212e9ae6af1b95f30138e9b506cc1e\") " pod="kube-system/kube-apiserver-ci-3510.3.5-a-61dd50c322" Jul 2 07:55:23.700827 kubelet[2127]: 
I0702 07:55:23.700722 2127 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/41212e9ae6af1b95f30138e9b506cc1e-k8s-certs\") pod \"kube-apiserver-ci-3510.3.5-a-61dd50c322\" (UID: \"41212e9ae6af1b95f30138e9b506cc1e\") " pod="kube-system/kube-apiserver-ci-3510.3.5-a-61dd50c322" Jul 2 07:55:23.700827 kubelet[2127]: I0702 07:55:23.700754 2127 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ef6916c38195a09be4e70075b7e76be4-kubeconfig\") pod \"kube-scheduler-ci-3510.3.5-a-61dd50c322\" (UID: \"ef6916c38195a09be4e70075b7e76be4\") " pod="kube-system/kube-scheduler-ci-3510.3.5-a-61dd50c322" Jul 2 07:55:23.702758 kubelet[2127]: I0702 07:55:23.702726 2127 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.5-a-61dd50c322" Jul 2 07:55:23.703100 kubelet[2127]: E0702 07:55:23.703079 2127 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.200.8.39:6443/api/v1/nodes\": dial tcp 10.200.8.39:6443: connect: connection refused" node="ci-3510.3.5-a-61dd50c322" Jul 2 07:55:23.913248 env[1407]: time="2024-07-02T07:55:23.913093073Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.5-a-61dd50c322,Uid:41212e9ae6af1b95f30138e9b506cc1e,Namespace:kube-system,Attempt:0,}" Jul 2 07:55:23.917612 env[1407]: time="2024-07-02T07:55:23.917570507Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.5-a-61dd50c322,Uid:ef6916c38195a09be4e70075b7e76be4,Namespace:kube-system,Attempt:0,}" Jul 2 07:55:23.922362 env[1407]: time="2024-07-02T07:55:23.922324619Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.5-a-61dd50c322,Uid:d3c6599c950bf86a73161e97e720d54c,Namespace:kube-system,Attempt:0,}" Jul 2 07:55:24.003256 kubelet[2127]: E0702 
07:55:24.003217 2127 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.39:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.5-a-61dd50c322?timeout=10s\": dial tcp 10.200.8.39:6443: connect: connection refused" interval="800ms" Jul 2 07:55:24.105080 kubelet[2127]: I0702 07:55:24.105051 2127 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.5-a-61dd50c322" Jul 2 07:55:24.105603 kubelet[2127]: E0702 07:55:24.105578 2127 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.200.8.39:6443/api/v1/nodes\": dial tcp 10.200.8.39:6443: connect: connection refused" node="ci-3510.3.5-a-61dd50c322" Jul 2 07:55:24.506513 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3417320059.mount: Deactivated successfully. Jul 2 07:55:24.537713 env[1407]: time="2024-07-02T07:55:24.537653200Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:55:24.541210 env[1407]: time="2024-07-02T07:55:24.541169320Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:55:24.595388 kubelet[2127]: W0702 07:55:24.595313 2127 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.200.8.39:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.5-a-61dd50c322&limit=500&resourceVersion=0": dial tcp 10.200.8.39:6443: connect: connection refused Jul 2 07:55:24.595388 kubelet[2127]: E0702 07:55:24.595391 2127 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.8.39:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.5-a-61dd50c322&limit=500&resourceVersion=0": dial tcp 10.200.8.39:6443: connect: 
connection refused Jul 2 07:55:24.614283 kubelet[2127]: W0702 07:55:24.614215 2127 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.200.8.39:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.39:6443: connect: connection refused Jul 2 07:55:24.614283 kubelet[2127]: E0702 07:55:24.614288 2127 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.8.39:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.39:6443: connect: connection refused Jul 2 07:55:24.685699 env[1407]: time="2024-07-02T07:55:24.685624527Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:55:24.689370 env[1407]: time="2024-07-02T07:55:24.689310934Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:55:24.696504 env[1407]: time="2024-07-02T07:55:24.696457265Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:55:24.701822 env[1407]: time="2024-07-02T07:55:24.701785141Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:55:24.704417 env[1407]: time="2024-07-02T07:55:24.704385734Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:55:24.707967 env[1407]: 
time="2024-07-02T07:55:24.707936252Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:55:24.711135 env[1407]: time="2024-07-02T07:55:24.711104300Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:55:24.712384 kubelet[2127]: W0702 07:55:24.712318 2127 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.200.8.39:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.39:6443: connect: connection refused Jul 2 07:55:24.712477 kubelet[2127]: E0702 07:55:24.712392 2127 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.8.39:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.39:6443: connect: connection refused Jul 2 07:55:24.714044 env[1407]: time="2024-07-02T07:55:24.714009369Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:55:24.720243 env[1407]: time="2024-07-02T07:55:24.720213675Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:55:24.734343 env[1407]: time="2024-07-02T07:55:24.734303954Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 
07:55:24.760087 kubelet[2127]: W0702 07:55:24.759962 2127 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.200.8.39:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.39:6443: connect: connection refused Jul 2 07:55:24.760087 kubelet[2127]: E0702 07:55:24.760025 2127 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.8.39:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.39:6443: connect: connection refused Jul 2 07:55:24.788718 env[1407]: time="2024-07-02T07:55:24.788642131Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 07:55:24.788945 env[1407]: time="2024-07-02T07:55:24.788682827Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 07:55:24.788945 env[1407]: time="2024-07-02T07:55:24.788696426Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 07:55:24.788945 env[1407]: time="2024-07-02T07:55:24.788856014Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6decc463702cff6169682730ac3ec3921d5865d3f43333ac21990f97b240f90f pid=2165 runtime=io.containerd.runc.v2 Jul 2 07:55:24.805352 kubelet[2127]: E0702 07:55:24.805313 2127 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.39:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.5-a-61dd50c322?timeout=10s\": dial tcp 10.200.8.39:6443: connect: connection refused" interval="1.6s" Jul 2 07:55:24.822371 systemd[1]: Started cri-containerd-6decc463702cff6169682730ac3ec3921d5865d3f43333ac21990f97b240f90f.scope. Jul 2 07:55:24.835492 env[1407]: time="2024-07-02T07:55:24.833000201Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 07:55:24.835492 env[1407]: time="2024-07-02T07:55:24.833042798Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 07:55:24.835492 env[1407]: time="2024-07-02T07:55:24.833058397Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 07:55:24.838018 env[1407]: time="2024-07-02T07:55:24.837969506Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2a3301ad8a1ae5ce38d1abfd37dd592db1327c306d9eabfefce5766193299ddf pid=2199 runtime=io.containerd.runc.v2 Jul 2 07:55:24.846372 env[1407]: time="2024-07-02T07:55:24.846290944Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 07:55:24.846372 env[1407]: time="2024-07-02T07:55:24.846343140Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 07:55:24.846631 env[1407]: time="2024-07-02T07:55:24.846357739Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 07:55:24.846906 env[1407]: time="2024-07-02T07:55:24.846863798Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a29029659b0d807ba18a269daba1cb9c97219fc8216cf10dcd828677de6bb913 pid=2217 runtime=io.containerd.runc.v2 Jul 2 07:55:24.857091 systemd[1]: Started cri-containerd-2a3301ad8a1ae5ce38d1abfd37dd592db1327c306d9eabfefce5766193299ddf.scope. Jul 2 07:55:24.883375 systemd[1]: Started cri-containerd-a29029659b0d807ba18a269daba1cb9c97219fc8216cf10dcd828677de6bb913.scope. 
Jul 2 07:55:24.910269 kubelet[2127]: I0702 07:55:24.909889 2127 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.5-a-61dd50c322" Jul 2 07:55:24.910269 kubelet[2127]: E0702 07:55:24.910237 2127 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.200.8.39:6443/api/v1/nodes\": dial tcp 10.200.8.39:6443: connect: connection refused" node="ci-3510.3.5-a-61dd50c322" Jul 2 07:55:24.914498 env[1407]: time="2024-07-02T07:55:24.914451221Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.5-a-61dd50c322,Uid:41212e9ae6af1b95f30138e9b506cc1e,Namespace:kube-system,Attempt:0,} returns sandbox id \"6decc463702cff6169682730ac3ec3921d5865d3f43333ac21990f97b240f90f\"" Jul 2 07:55:24.922451 env[1407]: time="2024-07-02T07:55:24.922411088Z" level=info msg="CreateContainer within sandbox \"6decc463702cff6169682730ac3ec3921d5865d3f43333ac21990f97b240f90f\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 2 07:55:24.954146 env[1407]: time="2024-07-02T07:55:24.954077668Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.5-a-61dd50c322,Uid:d3c6599c950bf86a73161e97e720d54c,Namespace:kube-system,Attempt:0,} returns sandbox id \"2a3301ad8a1ae5ce38d1abfd37dd592db1327c306d9eabfefce5766193299ddf\"" Jul 2 07:55:24.957439 env[1407]: time="2024-07-02T07:55:24.957402304Z" level=info msg="CreateContainer within sandbox \"2a3301ad8a1ae5ce38d1abfd37dd592db1327c306d9eabfefce5766193299ddf\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 2 07:55:24.968716 env[1407]: time="2024-07-02T07:55:24.968670507Z" level=info msg="CreateContainer within sandbox \"6decc463702cff6169682730ac3ec3921d5865d3f43333ac21990f97b240f90f\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"8fca34a64b69c3a63667385421148ffa877805a116ebab8c349dc3cbb0277530\"" Jul 2 07:55:24.969406 env[1407]: 
time="2024-07-02T07:55:24.969372451Z" level=info msg="StartContainer for \"8fca34a64b69c3a63667385421148ffa877805a116ebab8c349dc3cbb0277530\"" Jul 2 07:55:24.972816 env[1407]: time="2024-07-02T07:55:24.972773281Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.5-a-61dd50c322,Uid:ef6916c38195a09be4e70075b7e76be4,Namespace:kube-system,Attempt:0,} returns sandbox id \"a29029659b0d807ba18a269daba1cb9c97219fc8216cf10dcd828677de6bb913\"" Jul 2 07:55:24.976725 env[1407]: time="2024-07-02T07:55:24.976694669Z" level=info msg="CreateContainer within sandbox \"a29029659b0d807ba18a269daba1cb9c97219fc8216cf10dcd828677de6bb913\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 2 07:55:24.991460 systemd[1]: Started cri-containerd-8fca34a64b69c3a63667385421148ffa877805a116ebab8c349dc3cbb0277530.scope. Jul 2 07:55:25.030774 env[1407]: time="2024-07-02T07:55:25.030661232Z" level=info msg="CreateContainer within sandbox \"a29029659b0d807ba18a269daba1cb9c97219fc8216cf10dcd828677de6bb913\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"1c75e8ed6dbcdf90ccc2d5c522e3effce7bbce742bb50bce60676f664fb86a91\"" Jul 2 07:55:25.032287 env[1407]: time="2024-07-02T07:55:25.032241210Z" level=info msg="StartContainer for \"1c75e8ed6dbcdf90ccc2d5c522e3effce7bbce742bb50bce60676f664fb86a91\"" Jul 2 07:55:25.033674 env[1407]: time="2024-07-02T07:55:25.033627002Z" level=info msg="CreateContainer within sandbox \"2a3301ad8a1ae5ce38d1abfd37dd592db1327c306d9eabfefce5766193299ddf\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"077997c3793bdcc4c1901bf724101866fe83c54f076d39581bf9842b75d6a501\"" Jul 2 07:55:25.034097 env[1407]: time="2024-07-02T07:55:25.034068568Z" level=info msg="StartContainer for \"077997c3793bdcc4c1901bf724101866fe83c54f076d39581bf9842b75d6a501\"" Jul 2 07:55:25.053473 env[1407]: time="2024-07-02T07:55:25.053400368Z" level=info msg="StartContainer for 
\"8fca34a64b69c3a63667385421148ffa877805a116ebab8c349dc3cbb0277530\" returns successfully" Jul 2 07:55:25.071348 systemd[1]: Started cri-containerd-1c75e8ed6dbcdf90ccc2d5c522e3effce7bbce742bb50bce60676f664fb86a91.scope. Jul 2 07:55:25.082096 systemd[1]: Started cri-containerd-077997c3793bdcc4c1901bf724101866fe83c54f076d39581bf9842b75d6a501.scope. Jul 2 07:55:25.157643 env[1407]: time="2024-07-02T07:55:25.157538988Z" level=info msg="StartContainer for \"077997c3793bdcc4c1901bf724101866fe83c54f076d39581bf9842b75d6a501\" returns successfully" Jul 2 07:55:25.237212 env[1407]: time="2024-07-02T07:55:25.237155111Z" level=info msg="StartContainer for \"1c75e8ed6dbcdf90ccc2d5c522e3effce7bbce742bb50bce60676f664fb86a91\" returns successfully" Jul 2 07:55:26.513375 kubelet[2127]: I0702 07:55:26.513338 2127 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.5-a-61dd50c322" Jul 2 07:55:27.950491 kubelet[2127]: E0702 07:55:27.950444 2127 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-3510.3.5-a-61dd50c322\" not found" node="ci-3510.3.5-a-61dd50c322" Jul 2 07:55:28.142367 kubelet[2127]: E0702 07:55:28.142264 2127 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.5-a-61dd50c322.17de563bff0512f4", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.5-a-61dd50c322", UID:"ci-3510.3.5-a-61dd50c322", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", 
Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.5-a-61dd50c322"}, FirstTimestamp:time.Date(2024, time.July, 2, 7, 55, 23, 383689972, time.Local), LastTimestamp:time.Date(2024, time.July, 2, 7, 55, 23, 383689972, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"ci-3510.3.5-a-61dd50c322"}': 'namespaces "default" not found' (will not retry!) Jul 2 07:55:28.144695 kubelet[2127]: I0702 07:55:28.144658 2127 kubelet_node_status.go:73] "Successfully registered node" node="ci-3510.3.5-a-61dd50c322" Jul 2 07:55:28.371729 kubelet[2127]: I0702 07:55:28.371687 2127 apiserver.go:52] "Watching apiserver" Jul 2 07:55:28.399827 kubelet[2127]: I0702 07:55:28.399766 2127 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jul 2 07:55:28.409724 kubelet[2127]: E0702 07:55:28.409691 2127 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3510.3.5-a-61dd50c322\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-3510.3.5-a-61dd50c322" Jul 2 07:55:30.999820 kubelet[2127]: W0702 07:55:30.999789 2127 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 2 07:55:32.776283 systemd[1]: Reloading. 
Jul 2 07:55:32.856739 /usr/lib/systemd/system-generators/torcx-generator[2416]: time="2024-07-02T07:55:32Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.5 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.5 /var/lib/torcx/store]" Jul 2 07:55:32.856777 /usr/lib/systemd/system-generators/torcx-generator[2416]: time="2024-07-02T07:55:32Z" level=info msg="torcx already run" Jul 2 07:55:32.959474 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 2 07:55:32.959494 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Jul 2 07:55:32.978419 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 2 07:55:33.093054 kubelet[2127]: I0702 07:55:33.092815 2127 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 2 07:55:33.093717 systemd[1]: Stopping kubelet.service... Jul 2 07:55:33.106029 systemd[1]: kubelet.service: Deactivated successfully. Jul 2 07:55:33.106316 systemd[1]: Stopped kubelet.service. Jul 2 07:55:33.108404 systemd[1]: Starting kubelet.service... Jul 2 07:55:35.920131 systemd[1]: Started kubelet.service. Jul 2 07:55:35.967470 kubelet[2483]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jul 2 07:55:35.967859 kubelet[2483]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 2 07:55:35.967904 kubelet[2483]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 2 07:55:35.968042 kubelet[2483]: I0702 07:55:35.968002 2483 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 2 07:55:35.972930 kubelet[2483]: I0702 07:55:35.972888 2483 server.go:467] "Kubelet version" kubeletVersion="v1.28.7" Jul 2 07:55:35.972930 kubelet[2483]: I0702 07:55:35.972919 2483 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 2 07:55:35.973590 kubelet[2483]: I0702 07:55:35.973567 2483 server.go:895] "Client rotation is on, will bootstrap in background" Jul 2 07:55:35.978481 kubelet[2483]: I0702 07:55:35.978459 2483 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jul 2 07:55:35.979567 kubelet[2483]: I0702 07:55:35.979533 2483 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 2 07:55:35.985502 kubelet[2483]: I0702 07:55:35.985482 2483 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 2 07:55:35.985734 kubelet[2483]: I0702 07:55:35.985711 2483 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 2 07:55:35.985919 kubelet[2483]: I0702 07:55:35.985890 2483 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jul 2 07:55:35.986063 kubelet[2483]: I0702 07:55:35.985928 2483 topology_manager.go:138] "Creating topology manager with none policy" Jul 2 07:55:35.986063 kubelet[2483]: I0702 07:55:35.985941 2483 container_manager_linux.go:301] "Creating device plugin manager" Jul 2 07:55:35.986063 kubelet[2483]: I0702 
07:55:35.985988 2483 state_mem.go:36] "Initialized new in-memory state store" Jul 2 07:55:35.986324 kubelet[2483]: I0702 07:55:35.986124 2483 kubelet.go:393] "Attempting to sync node with API server" Jul 2 07:55:35.986738 kubelet[2483]: I0702 07:55:35.986148 2483 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 2 07:55:35.989601 kubelet[2483]: I0702 07:55:35.987668 2483 kubelet.go:309] "Adding apiserver pod source" Jul 2 07:55:35.989724 kubelet[2483]: I0702 07:55:35.989713 2483 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 2 07:55:35.991672 kubelet[2483]: I0702 07:55:35.991657 2483 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Jul 2 07:55:35.992496 kubelet[2483]: I0702 07:55:35.992478 2483 server.go:1232] "Started kubelet" Jul 2 07:55:36.002634 kubelet[2483]: I0702 07:55:35.999206 2483 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 2 07:55:36.005562 kubelet[2483]: I0702 07:55:36.005527 2483 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jul 2 07:55:38.139941 kubelet[2483]: I0702 07:55:36.006899 2483 server.go:462] "Adding debug handlers to kubelet server" Jul 2 07:55:38.139941 kubelet[2483]: I0702 07:55:36.007340 2483 volume_manager.go:291] "Starting Kubelet Volume Manager" Jul 2 07:55:38.139941 kubelet[2483]: I0702 07:55:36.009209 2483 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jul 2 07:55:38.139941 kubelet[2483]: I0702 07:55:36.009357 2483 reconciler_new.go:29] "Reconciler: start to sync state" Jul 2 07:55:38.139941 kubelet[2483]: I0702 07:55:36.011290 2483 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 2 07:55:38.139941 kubelet[2483]: I0702 07:55:36.012471 2483 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 2 07:55:38.139941 kubelet[2483]: I0702 07:55:36.012487 2483 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 2 07:55:38.139941 kubelet[2483]: I0702 07:55:36.012509 2483 kubelet.go:2303] "Starting kubelet main sync loop" Jul 2 07:55:38.139941 kubelet[2483]: E0702 07:55:36.012612 2483 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 2 07:55:38.139941 kubelet[2483]: I0702 07:55:36.013506 2483 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Jul 2 07:55:38.139941 kubelet[2483]: I0702 07:55:36.013729 2483 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 2 07:55:38.139941 kubelet[2483]: E0702 07:55:36.028335 2483 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Jul 2 07:55:38.139941 kubelet[2483]: E0702 07:55:36.028364 2483 kubelet.go:1431] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 2 07:55:38.139941 kubelet[2483]: I0702 07:55:36.076820 2483 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 2 07:55:38.139941 kubelet[2483]: I0702 07:55:36.076840 2483 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 2 07:55:38.139941 kubelet[2483]: I0702 07:55:36.076857 2483 state_mem.go:36] "Initialized new in-memory state store" Jul 2 07:55:38.139941 kubelet[2483]: I0702 07:55:36.077036 2483 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 2 07:55:38.141034 kubelet[2483]: I0702 07:55:36.077055 2483 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 2 07:55:38.141034 kubelet[2483]: I0702 07:55:36.077061 2483 policy_none.go:49] "None policy: Start" Jul 2 07:55:38.141034 kubelet[2483]: I0702 07:55:36.077642 2483 memory_manager.go:169] "Starting memorymanager" policy="None" Jul 2 07:55:38.141034 kubelet[2483]: I0702 07:55:36.077662 2483 state_mem.go:35] "Initializing new in-memory state store" Jul 2 07:55:38.141034 kubelet[2483]: I0702 07:55:36.077813 2483 state_mem.go:75] "Updated machine memory state" Jul 2 07:55:38.141034 kubelet[2483]: I0702 07:55:36.081841 2483 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 2 07:55:38.141034 kubelet[2483]: I0702 07:55:36.083031 2483 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 2 07:55:38.141034 kubelet[2483]: I0702 07:55:36.112968 2483 topology_manager.go:215] "Topology Admit Handler" podUID="41212e9ae6af1b95f30138e9b506cc1e" podNamespace="kube-system" podName="kube-apiserver-ci-3510.3.5-a-61dd50c322" Jul 2 07:55:38.141034 kubelet[2483]: I0702 07:55:36.113094 2483 topology_manager.go:215] "Topology Admit Handler" podUID="d3c6599c950bf86a73161e97e720d54c" podNamespace="kube-system" podName="kube-controller-manager-ci-3510.3.5-a-61dd50c322" Jul 2 07:55:38.141034 kubelet[2483]: I0702 
07:55:36.113139 2483 topology_manager.go:215] "Topology Admit Handler" podUID="ef6916c38195a09be4e70075b7e76be4" podNamespace="kube-system" podName="kube-scheduler-ci-3510.3.5-a-61dd50c322" Jul 2 07:55:38.141034 kubelet[2483]: I0702 07:55:36.119122 2483 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.5-a-61dd50c322" Jul 2 07:55:38.141034 kubelet[2483]: W0702 07:55:36.124156 2483 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 2 07:55:38.141034 kubelet[2483]: W0702 07:55:36.124356 2483 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 2 07:55:38.141034 kubelet[2483]: W0702 07:55:36.133148 2483 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 2 07:55:38.141607 kubelet[2483]: E0702 07:55:36.133233 2483 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-3510.3.5-a-61dd50c322\" already exists" pod="kube-system/kube-controller-manager-ci-3510.3.5-a-61dd50c322" Jul 2 07:55:38.141607 kubelet[2483]: I0702 07:55:36.138807 2483 kubelet_node_status.go:108] "Node was previously registered" node="ci-3510.3.5-a-61dd50c322" Jul 2 07:55:38.141607 kubelet[2483]: I0702 07:55:36.138882 2483 kubelet_node_status.go:73] "Successfully registered node" node="ci-3510.3.5-a-61dd50c322" Jul 2 07:55:38.141607 kubelet[2483]: I0702 07:55:36.310448 2483 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d3c6599c950bf86a73161e97e720d54c-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.5-a-61dd50c322\" (UID: \"d3c6599c950bf86a73161e97e720d54c\") " 
pod="kube-system/kube-controller-manager-ci-3510.3.5-a-61dd50c322" Jul 2 07:55:38.141607 kubelet[2483]: I0702 07:55:36.310558 2483 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d3c6599c950bf86a73161e97e720d54c-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.5-a-61dd50c322\" (UID: \"d3c6599c950bf86a73161e97e720d54c\") " pod="kube-system/kube-controller-manager-ci-3510.3.5-a-61dd50c322" Jul 2 07:55:38.141607 kubelet[2483]: I0702 07:55:36.310607 2483 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ef6916c38195a09be4e70075b7e76be4-kubeconfig\") pod \"kube-scheduler-ci-3510.3.5-a-61dd50c322\" (UID: \"ef6916c38195a09be4e70075b7e76be4\") " pod="kube-system/kube-scheduler-ci-3510.3.5-a-61dd50c322" Jul 2 07:55:38.141607 kubelet[2483]: I0702 07:55:36.310646 2483 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/41212e9ae6af1b95f30138e9b506cc1e-ca-certs\") pod \"kube-apiserver-ci-3510.3.5-a-61dd50c322\" (UID: \"41212e9ae6af1b95f30138e9b506cc1e\") " pod="kube-system/kube-apiserver-ci-3510.3.5-a-61dd50c322" Jul 2 07:55:38.141892 kubelet[2483]: I0702 07:55:36.310681 2483 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/41212e9ae6af1b95f30138e9b506cc1e-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.5-a-61dd50c322\" (UID: \"41212e9ae6af1b95f30138e9b506cc1e\") " pod="kube-system/kube-apiserver-ci-3510.3.5-a-61dd50c322" Jul 2 07:55:38.141892 kubelet[2483]: I0702 07:55:36.310708 2483 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: 
\"kubernetes.io/host-path/d3c6599c950bf86a73161e97e720d54c-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.5-a-61dd50c322\" (UID: \"d3c6599c950bf86a73161e97e720d54c\") " pod="kube-system/kube-controller-manager-ci-3510.3.5-a-61dd50c322" Jul 2 07:55:38.141892 kubelet[2483]: I0702 07:55:36.310740 2483 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d3c6599c950bf86a73161e97e720d54c-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.5-a-61dd50c322\" (UID: \"d3c6599c950bf86a73161e97e720d54c\") " pod="kube-system/kube-controller-manager-ci-3510.3.5-a-61dd50c322" Jul 2 07:55:38.141892 kubelet[2483]: I0702 07:55:36.310762 2483 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/41212e9ae6af1b95f30138e9b506cc1e-k8s-certs\") pod \"kube-apiserver-ci-3510.3.5-a-61dd50c322\" (UID: \"41212e9ae6af1b95f30138e9b506cc1e\") " pod="kube-system/kube-apiserver-ci-3510.3.5-a-61dd50c322" Jul 2 07:55:38.141892 kubelet[2483]: I0702 07:55:36.310782 2483 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d3c6599c950bf86a73161e97e720d54c-ca-certs\") pod \"kube-controller-manager-ci-3510.3.5-a-61dd50c322\" (UID: \"d3c6599c950bf86a73161e97e720d54c\") " pod="kube-system/kube-controller-manager-ci-3510.3.5-a-61dd50c322" Jul 2 07:55:38.141892 kubelet[2483]: I0702 07:55:36.990774 2483 apiserver.go:52] "Watching apiserver" Jul 2 07:55:38.142145 kubelet[2483]: I0702 07:55:37.009901 2483 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jul 2 07:55:38.142145 kubelet[2483]: W0702 07:55:37.059922 2483 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 
2 07:55:38.142145 kubelet[2483]: E0702 07:55:37.060038 2483 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3510.3.5-a-61dd50c322\" already exists" pod="kube-system/kube-apiserver-ci-3510.3.5-a-61dd50c322" Jul 2 07:55:38.142145 kubelet[2483]: I0702 07:55:37.070107 2483 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3510.3.5-a-61dd50c322" podStartSLOduration=1.070052011 podCreationTimestamp="2024-07-02 07:55:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 07:55:37.069671633 +0000 UTC m=+1.144016365" watchObservedRunningTime="2024-07-02 07:55:37.070052011 +0000 UTC m=+1.144396743" Jul 2 07:55:38.142145 kubelet[2483]: I0702 07:55:37.084514 2483 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3510.3.5-a-61dd50c322" podStartSLOduration=1.084458073 podCreationTimestamp="2024-07-02 07:55:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 07:55:37.077909454 +0000 UTC m=+1.152254186" watchObservedRunningTime="2024-07-02 07:55:37.084458073 +0000 UTC m=+1.158802905" Jul 2 07:55:38.142145 kubelet[2483]: I0702 07:55:37.084665 2483 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3510.3.5-a-61dd50c322" podStartSLOduration=7.084616964 podCreationTimestamp="2024-07-02 07:55:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 07:55:37.084010199 +0000 UTC m=+1.158354931" watchObservedRunningTime="2024-07-02 07:55:37.084616964 +0000 UTC m=+1.158961696" Jul 2 07:55:38.162508 sudo[2513]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jul 2 
07:55:38.162829 sudo[2513]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Jul 2 07:55:38.684543 sudo[2513]: pam_unix(sudo:session): session closed for user root Jul 2 07:55:40.553717 sudo[1792]: pam_unix(sudo:session): session closed for user root Jul 2 07:55:40.656922 sshd[1786]: pam_unix(sshd:session): session closed for user core Jul 2 07:55:40.660485 systemd[1]: sshd@4-10.200.8.39:22-10.200.16.10:45938.service: Deactivated successfully. Jul 2 07:55:40.661711 systemd[1]: session-7.scope: Deactivated successfully. Jul 2 07:55:40.661973 systemd[1]: session-7.scope: Consumed 4.312s CPU time. Jul 2 07:55:40.662697 systemd-logind[1395]: Session 7 logged out. Waiting for processes to exit. Jul 2 07:55:40.663766 systemd-logind[1395]: Removed session 7. Jul 2 07:55:44.814522 kubelet[2483]: I0702 07:55:44.814485 2483 kuberuntime_manager.go:1528] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 2 07:55:44.815319 env[1407]: time="2024-07-02T07:55:44.815271382Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 2 07:55:44.815731 kubelet[2483]: I0702 07:55:44.815501 2483 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 2 07:55:45.400638 kubelet[2483]: I0702 07:55:45.400593 2483 topology_manager.go:215] "Topology Admit Handler" podUID="d1bfff31-f462-4ddb-a583-9e723763553b" podNamespace="kube-system" podName="kube-proxy-7sgr7" Jul 2 07:55:45.420506 systemd[1]: Created slice kubepods-besteffort-podd1bfff31_f462_4ddb_a583_9e723763553b.slice. Jul 2 07:55:45.427597 kubelet[2483]: I0702 07:55:45.427570 2483 topology_manager.go:215] "Topology Admit Handler" podUID="b00c368e-cad5-4e58-9d95-3e2ae0bef877" podNamespace="kube-system" podName="cilium-r49lb" Jul 2 07:55:45.434128 systemd[1]: Created slice kubepods-burstable-podb00c368e_cad5_4e58_9d95_3e2ae0bef877.slice. 
Jul 2 07:55:45.468966 kubelet[2483]: I0702 07:55:45.468931 2483 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b00c368e-cad5-4e58-9d95-3e2ae0bef877-hostproc\") pod \"cilium-r49lb\" (UID: \"b00c368e-cad5-4e58-9d95-3e2ae0bef877\") " pod="kube-system/cilium-r49lb" Jul 2 07:55:45.469155 kubelet[2483]: I0702 07:55:45.468996 2483 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b00c368e-cad5-4e58-9d95-3e2ae0bef877-bpf-maps\") pod \"cilium-r49lb\" (UID: \"b00c368e-cad5-4e58-9d95-3e2ae0bef877\") " pod="kube-system/cilium-r49lb" Jul 2 07:55:45.469155 kubelet[2483]: I0702 07:55:45.469025 2483 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b00c368e-cad5-4e58-9d95-3e2ae0bef877-cni-path\") pod \"cilium-r49lb\" (UID: \"b00c368e-cad5-4e58-9d95-3e2ae0bef877\") " pod="kube-system/cilium-r49lb" Jul 2 07:55:45.469155 kubelet[2483]: I0702 07:55:45.469065 2483 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b00c368e-cad5-4e58-9d95-3e2ae0bef877-etc-cni-netd\") pod \"cilium-r49lb\" (UID: \"b00c368e-cad5-4e58-9d95-3e2ae0bef877\") " pod="kube-system/cilium-r49lb" Jul 2 07:55:45.469155 kubelet[2483]: I0702 07:55:45.469091 2483 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b00c368e-cad5-4e58-9d95-3e2ae0bef877-lib-modules\") pod \"cilium-r49lb\" (UID: \"b00c368e-cad5-4e58-9d95-3e2ae0bef877\") " pod="kube-system/cilium-r49lb" Jul 2 07:55:45.469155 kubelet[2483]: I0702 07:55:45.469131 2483 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" 
(UniqueName: \"kubernetes.io/configmap/d1bfff31-f462-4ddb-a583-9e723763553b-kube-proxy\") pod \"kube-proxy-7sgr7\" (UID: \"d1bfff31-f462-4ddb-a583-9e723763553b\") " pod="kube-system/kube-proxy-7sgr7" Jul 2 07:55:45.469389 kubelet[2483]: I0702 07:55:45.469162 2483 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d1bfff31-f462-4ddb-a583-9e723763553b-xtables-lock\") pod \"kube-proxy-7sgr7\" (UID: \"d1bfff31-f462-4ddb-a583-9e723763553b\") " pod="kube-system/kube-proxy-7sgr7" Jul 2 07:55:45.469389 kubelet[2483]: I0702 07:55:45.469205 2483 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mmtlc\" (UniqueName: \"kubernetes.io/projected/d1bfff31-f462-4ddb-a583-9e723763553b-kube-api-access-mmtlc\") pod \"kube-proxy-7sgr7\" (UID: \"d1bfff31-f462-4ddb-a583-9e723763553b\") " pod="kube-system/kube-proxy-7sgr7" Jul 2 07:55:45.469389 kubelet[2483]: I0702 07:55:45.469240 2483 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d1bfff31-f462-4ddb-a583-9e723763553b-lib-modules\") pod \"kube-proxy-7sgr7\" (UID: \"d1bfff31-f462-4ddb-a583-9e723763553b\") " pod="kube-system/kube-proxy-7sgr7" Jul 2 07:55:45.469389 kubelet[2483]: I0702 07:55:45.469281 2483 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b00c368e-cad5-4e58-9d95-3e2ae0bef877-cilium-run\") pod \"cilium-r49lb\" (UID: \"b00c368e-cad5-4e58-9d95-3e2ae0bef877\") " pod="kube-system/cilium-r49lb" Jul 2 07:55:45.469389 kubelet[2483]: I0702 07:55:45.469319 2483 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b00c368e-cad5-4e58-9d95-3e2ae0bef877-cilium-cgroup\") pod 
\"cilium-r49lb\" (UID: \"b00c368e-cad5-4e58-9d95-3e2ae0bef877\") " pod="kube-system/cilium-r49lb" Jul 2 07:55:45.469389 kubelet[2483]: I0702 07:55:45.469362 2483 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b00c368e-cad5-4e58-9d95-3e2ae0bef877-xtables-lock\") pod \"cilium-r49lb\" (UID: \"b00c368e-cad5-4e58-9d95-3e2ae0bef877\") " pod="kube-system/cilium-r49lb" Jul 2 07:55:45.469662 kubelet[2483]: I0702 07:55:45.469392 2483 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b00c368e-cad5-4e58-9d95-3e2ae0bef877-host-proc-sys-kernel\") pod \"cilium-r49lb\" (UID: \"b00c368e-cad5-4e58-9d95-3e2ae0bef877\") " pod="kube-system/cilium-r49lb" Jul 2 07:55:45.469662 kubelet[2483]: I0702 07:55:45.469424 2483 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b00c368e-cad5-4e58-9d95-3e2ae0bef877-host-proc-sys-net\") pod \"cilium-r49lb\" (UID: \"b00c368e-cad5-4e58-9d95-3e2ae0bef877\") " pod="kube-system/cilium-r49lb" Jul 2 07:55:45.469662 kubelet[2483]: I0702 07:55:45.469476 2483 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b00c368e-cad5-4e58-9d95-3e2ae0bef877-hubble-tls\") pod \"cilium-r49lb\" (UID: \"b00c368e-cad5-4e58-9d95-3e2ae0bef877\") " pod="kube-system/cilium-r49lb" Jul 2 07:55:45.469662 kubelet[2483]: I0702 07:55:45.469522 2483 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b00c368e-cad5-4e58-9d95-3e2ae0bef877-clustermesh-secrets\") pod \"cilium-r49lb\" (UID: \"b00c368e-cad5-4e58-9d95-3e2ae0bef877\") " pod="kube-system/cilium-r49lb" Jul 2 
07:55:45.469662 kubelet[2483]: I0702 07:55:45.469566 2483 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b00c368e-cad5-4e58-9d95-3e2ae0bef877-cilium-config-path\") pod \"cilium-r49lb\" (UID: \"b00c368e-cad5-4e58-9d95-3e2ae0bef877\") " pod="kube-system/cilium-r49lb" Jul 2 07:55:45.469870 kubelet[2483]: I0702 07:55:45.469600 2483 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vfmlh\" (UniqueName: \"kubernetes.io/projected/b00c368e-cad5-4e58-9d95-3e2ae0bef877-kube-api-access-vfmlh\") pod \"cilium-r49lb\" (UID: \"b00c368e-cad5-4e58-9d95-3e2ae0bef877\") " pod="kube-system/cilium-r49lb" Jul 2 07:55:45.597622 kubelet[2483]: E0702 07:55:45.597583 2483 projected.go:292] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Jul 2 07:55:45.598327 kubelet[2483]: E0702 07:55:45.598315 2483 projected.go:198] Error preparing data for projected volume kube-api-access-vfmlh for pod kube-system/cilium-r49lb: configmap "kube-root-ca.crt" not found Jul 2 07:55:45.598503 kubelet[2483]: E0702 07:55:45.598263 2483 projected.go:292] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Jul 2 07:55:45.598610 kubelet[2483]: E0702 07:55:45.598598 2483 projected.go:198] Error preparing data for projected volume kube-api-access-mmtlc for pod kube-system/kube-proxy-7sgr7: configmap "kube-root-ca.crt" not found Jul 2 07:55:45.598746 kubelet[2483]: E0702 07:55:45.598730 2483 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b00c368e-cad5-4e58-9d95-3e2ae0bef877-kube-api-access-vfmlh podName:b00c368e-cad5-4e58-9d95-3e2ae0bef877 nodeName:}" failed. No retries permitted until 2024-07-02 07:55:46.098477151 +0000 UTC m=+10.172821983 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-vfmlh" (UniqueName: "kubernetes.io/projected/b00c368e-cad5-4e58-9d95-3e2ae0bef877-kube-api-access-vfmlh") pod "cilium-r49lb" (UID: "b00c368e-cad5-4e58-9d95-3e2ae0bef877") : configmap "kube-root-ca.crt" not found Jul 2 07:55:45.599131 kubelet[2483]: E0702 07:55:45.599118 2483 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d1bfff31-f462-4ddb-a583-9e723763553b-kube-api-access-mmtlc podName:d1bfff31-f462-4ddb-a583-9e723763553b nodeName:}" failed. No retries permitted until 2024-07-02 07:55:46.09910022 +0000 UTC m=+10.173445052 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-mmtlc" (UniqueName: "kubernetes.io/projected/d1bfff31-f462-4ddb-a583-9e723763553b-kube-api-access-mmtlc") pod "kube-proxy-7sgr7" (UID: "d1bfff31-f462-4ddb-a583-9e723763553b") : configmap "kube-root-ca.crt" not found Jul 2 07:55:45.826157 kubelet[2483]: I0702 07:55:45.826089 2483 topology_manager.go:215] "Topology Admit Handler" podUID="392973ba-576b-44e9-bf74-03e9e01db007" podNamespace="kube-system" podName="cilium-operator-6bc8ccdb58-7z6pn" Jul 2 07:55:45.832554 systemd[1]: Created slice kubepods-besteffort-pod392973ba_576b_44e9_bf74_03e9e01db007.slice. 
Jul 2 07:55:45.882205 kubelet[2483]: I0702 07:55:45.882167 2483 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/392973ba-576b-44e9-bf74-03e9e01db007-cilium-config-path\") pod \"cilium-operator-6bc8ccdb58-7z6pn\" (UID: \"392973ba-576b-44e9-bf74-03e9e01db007\") " pod="kube-system/cilium-operator-6bc8ccdb58-7z6pn" Jul 2 07:55:45.882487 kubelet[2483]: I0702 07:55:45.882463 2483 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c9522\" (UniqueName: \"kubernetes.io/projected/392973ba-576b-44e9-bf74-03e9e01db007-kube-api-access-c9522\") pod \"cilium-operator-6bc8ccdb58-7z6pn\" (UID: \"392973ba-576b-44e9-bf74-03e9e01db007\") " pod="kube-system/cilium-operator-6bc8ccdb58-7z6pn" Jul 2 07:55:46.137086 env[1407]: time="2024-07-02T07:55:46.136643814Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6bc8ccdb58-7z6pn,Uid:392973ba-576b-44e9-bf74-03e9e01db007,Namespace:kube-system,Attempt:0,}" Jul 2 07:55:46.173036 env[1407]: time="2024-07-02T07:55:46.172963878Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 07:55:46.173254 env[1407]: time="2024-07-02T07:55:46.173000976Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 07:55:46.173254 env[1407]: time="2024-07-02T07:55:46.173015475Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 07:55:46.173254 env[1407]: time="2024-07-02T07:55:46.173165768Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a25643010d576c833434f4f56019fc563a38ab594356424ec09a08427da695ba pid=2562 runtime=io.containerd.runc.v2 Jul 2 07:55:46.193976 systemd[1]: Started cri-containerd-a25643010d576c833434f4f56019fc563a38ab594356424ec09a08427da695ba.scope. Jul 2 07:55:46.236813 env[1407]: time="2024-07-02T07:55:46.235989765Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6bc8ccdb58-7z6pn,Uid:392973ba-576b-44e9-bf74-03e9e01db007,Namespace:kube-system,Attempt:0,} returns sandbox id \"a25643010d576c833434f4f56019fc563a38ab594356424ec09a08427da695ba\"" Jul 2 07:55:46.238595 env[1407]: time="2024-07-02T07:55:46.238125562Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jul 2 07:55:46.328394 env[1407]: time="2024-07-02T07:55:46.328335250Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-7sgr7,Uid:d1bfff31-f462-4ddb-a583-9e723763553b,Namespace:kube-system,Attempt:0,}" Jul 2 07:55:46.346136 env[1407]: time="2024-07-02T07:55:46.346081001Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-r49lb,Uid:b00c368e-cad5-4e58-9d95-3e2ae0bef877,Namespace:kube-system,Attempt:0,}" Jul 2 07:55:46.374738 env[1407]: time="2024-07-02T07:55:46.373569887Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 07:55:46.374738 env[1407]: time="2024-07-02T07:55:46.373615985Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 07:55:46.374738 env[1407]: time="2024-07-02T07:55:46.373629084Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 07:55:46.374738 env[1407]: time="2024-07-02T07:55:46.373809876Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0fb7fb13606c960a02a18daa4413c2b207c3694f21a80e653b6e22899a5f847d pid=2604 runtime=io.containerd.runc.v2 Jul 2 07:55:46.388924 systemd[1]: Started cri-containerd-0fb7fb13606c960a02a18daa4413c2b207c3694f21a80e653b6e22899a5f847d.scope. Jul 2 07:55:46.404207 env[1407]: time="2024-07-02T07:55:46.404137626Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 07:55:46.404391 env[1407]: time="2024-07-02T07:55:46.404206723Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 07:55:46.404391 env[1407]: time="2024-07-02T07:55:46.404235621Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 07:55:46.404391 env[1407]: time="2024-07-02T07:55:46.404376115Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ef3c9cf22d1b2bfe9313e9e1c8f106a3fc238ee5af60773bcb21260f391bc939 pid=2636 runtime=io.containerd.runc.v2 Jul 2 07:55:46.421187 systemd[1]: Started cri-containerd-ef3c9cf22d1b2bfe9313e9e1c8f106a3fc238ee5af60773bcb21260f391bc939.scope. 
Jul 2 07:55:46.442031 env[1407]: time="2024-07-02T07:55:46.441970717Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-7sgr7,Uid:d1bfff31-f462-4ddb-a583-9e723763553b,Namespace:kube-system,Attempt:0,} returns sandbox id \"0fb7fb13606c960a02a18daa4413c2b207c3694f21a80e653b6e22899a5f847d\"" Jul 2 07:55:46.447696 env[1407]: time="2024-07-02T07:55:46.447648046Z" level=info msg="CreateContainer within sandbox \"0fb7fb13606c960a02a18daa4413c2b207c3694f21a80e653b6e22899a5f847d\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 2 07:55:46.466798 env[1407]: time="2024-07-02T07:55:46.466760832Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-r49lb,Uid:b00c368e-cad5-4e58-9d95-3e2ae0bef877,Namespace:kube-system,Attempt:0,} returns sandbox id \"ef3c9cf22d1b2bfe9313e9e1c8f106a3fc238ee5af60773bcb21260f391bc939\"" Jul 2 07:55:46.487722 env[1407]: time="2024-07-02T07:55:46.487680332Z" level=info msg="CreateContainer within sandbox \"0fb7fb13606c960a02a18daa4413c2b207c3694f21a80e653b6e22899a5f847d\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"db967a6b2ff53cb02948229a19912a19a341a1191076b49b8a6237414a59e80a\"" Jul 2 07:55:46.489521 env[1407]: time="2024-07-02T07:55:46.488315202Z" level=info msg="StartContainer for \"db967a6b2ff53cb02948229a19912a19a341a1191076b49b8a6237414a59e80a\"" Jul 2 07:55:46.507310 systemd[1]: Started cri-containerd-db967a6b2ff53cb02948229a19912a19a341a1191076b49b8a6237414a59e80a.scope. 
Jul 2 07:55:46.541687 env[1407]: time="2024-07-02T07:55:46.541623553Z" level=info msg="StartContainer for \"db967a6b2ff53cb02948229a19912a19a341a1191076b49b8a6237414a59e80a\" returns successfully" Jul 2 07:55:47.082578 kubelet[2483]: I0702 07:55:47.080682 2483 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-7sgr7" podStartSLOduration=2.080635663 podCreationTimestamp="2024-07-02 07:55:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 07:55:47.080345177 +0000 UTC m=+11.154690009" watchObservedRunningTime="2024-07-02 07:55:47.080635663 +0000 UTC m=+11.154980395" Jul 2 07:55:48.168021 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3980517337.mount: Deactivated successfully. Jul 2 07:55:48.892063 env[1407]: time="2024-07-02T07:55:48.892009293Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:55:48.899523 env[1407]: time="2024-07-02T07:55:48.899475550Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:55:48.903431 env[1407]: time="2024-07-02T07:55:48.903392371Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:55:48.903918 env[1407]: time="2024-07-02T07:55:48.903877148Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference 
\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jul 2 07:55:48.906568 env[1407]: time="2024-07-02T07:55:48.905893856Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jul 2 07:55:48.907602 env[1407]: time="2024-07-02T07:55:48.907568379Z" level=info msg="CreateContainer within sandbox \"a25643010d576c833434f4f56019fc563a38ab594356424ec09a08427da695ba\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jul 2 07:55:48.939864 env[1407]: time="2024-07-02T07:55:48.939826399Z" level=info msg="CreateContainer within sandbox \"a25643010d576c833434f4f56019fc563a38ab594356424ec09a08427da695ba\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"a8517e8faf8cdea143b217dc4b3a8530094504d738839209166de0a6343ccef2\"" Jul 2 07:55:48.940416 env[1407]: time="2024-07-02T07:55:48.940390273Z" level=info msg="StartContainer for \"a8517e8faf8cdea143b217dc4b3a8530094504d738839209166de0a6343ccef2\"" Jul 2 07:55:48.971164 systemd[1]: Started cri-containerd-a8517e8faf8cdea143b217dc4b3a8530094504d738839209166de0a6343ccef2.scope. Jul 2 07:55:49.002133 env[1407]: time="2024-07-02T07:55:49.002045946Z" level=info msg="StartContainer for \"a8517e8faf8cdea143b217dc4b3a8530094504d738839209166de0a6343ccef2\" returns successfully" Jul 2 07:55:59.388129 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount267829705.mount: Deactivated successfully. 
Jul 2 07:56:02.151861 env[1407]: time="2024-07-02T07:56:02.151805926Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:56:02.158332 env[1407]: time="2024-07-02T07:56:02.158292397Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:56:02.166640 env[1407]: time="2024-07-02T07:56:02.166586805Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:56:02.167241 env[1407]: time="2024-07-02T07:56:02.167209783Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jul 2 07:56:02.169895 env[1407]: time="2024-07-02T07:56:02.169577299Z" level=info msg="CreateContainer within sandbox \"ef3c9cf22d1b2bfe9313e9e1c8f106a3fc238ee5af60773bcb21260f391bc939\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 2 07:56:02.198059 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount248819266.mount: Deactivated successfully. 
Jul 2 07:56:02.210845 env[1407]: time="2024-07-02T07:56:02.210800944Z" level=info msg="CreateContainer within sandbox \"ef3c9cf22d1b2bfe9313e9e1c8f106a3fc238ee5af60773bcb21260f391bc939\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"f97b3a562e3349d7ef3c2a269a55f16f1cd33d8db53fc17ce04c5a70e8e59419\"" Jul 2 07:56:02.213007 env[1407]: time="2024-07-02T07:56:02.211661314Z" level=info msg="StartContainer for \"f97b3a562e3349d7ef3c2a269a55f16f1cd33d8db53fc17ce04c5a70e8e59419\"" Jul 2 07:56:02.231790 systemd[1]: Started cri-containerd-f97b3a562e3349d7ef3c2a269a55f16f1cd33d8db53fc17ce04c5a70e8e59419.scope. Jul 2 07:56:02.273588 env[1407]: time="2024-07-02T07:56:02.273516031Z" level=info msg="StartContainer for \"f97b3a562e3349d7ef3c2a269a55f16f1cd33d8db53fc17ce04c5a70e8e59419\" returns successfully" Jul 2 07:56:02.277820 systemd[1]: cri-containerd-f97b3a562e3349d7ef3c2a269a55f16f1cd33d8db53fc17ce04c5a70e8e59419.scope: Deactivated successfully. Jul 2 07:56:03.439129 kubelet[2483]: I0702 07:56:03.126342 2483 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-6bc8ccdb58-7z6pn" podStartSLOduration=15.459396276 podCreationTimestamp="2024-07-02 07:55:45 +0000 UTC" firstStartedPulling="2024-07-02 07:55:46.237507792 +0000 UTC m=+10.311852624" lastFinishedPulling="2024-07-02 07:55:48.904420823 +0000 UTC m=+12.978765555" observedRunningTime="2024-07-02 07:55:49.090565167 +0000 UTC m=+13.164909999" watchObservedRunningTime="2024-07-02 07:56:03.126309207 +0000 UTC m=+27.200653939" Jul 2 07:56:03.194917 systemd[1]: run-containerd-runc-k8s.io-f97b3a562e3349d7ef3c2a269a55f16f1cd33d8db53fc17ce04c5a70e8e59419-runc.Szd7F7.mount: Deactivated successfully. Jul 2 07:56:03.195022 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f97b3a562e3349d7ef3c2a269a55f16f1cd33d8db53fc17ce04c5a70e8e59419-rootfs.mount: Deactivated successfully. 
Jul 2 07:56:12.281666 env[1407]: time="2024-07-02T07:56:12.281592940Z" level=error msg="failed to handle container TaskExit event &TaskExit{ContainerID:f97b3a562e3349d7ef3c2a269a55f16f1cd33d8db53fc17ce04c5a70e8e59419,ID:f97b3a562e3349d7ef3c2a269a55f16f1cd33d8db53fc17ce04c5a70e8e59419,Pid:2895,ExitStatus:0,ExitedAt:2024-07-02 07:56:02.279674713 +0000 UTC,XXX_unrecognized:[],}" error="failed to stop container: failed to delete task: context deadline exceeded: unknown" Jul 2 07:56:13.615903 env[1407]: time="2024-07-02T07:56:13.615836576Z" level=info msg="TaskExit event &TaskExit{ContainerID:f97b3a562e3349d7ef3c2a269a55f16f1cd33d8db53fc17ce04c5a70e8e59419,ID:f97b3a562e3349d7ef3c2a269a55f16f1cd33d8db53fc17ce04c5a70e8e59419,Pid:2895,ExitStatus:0,ExitedAt:2024-07-02 07:56:02.279674713 +0000 UTC,XXX_unrecognized:[],}" Jul 2 07:56:14.139669 env[1407]: time="2024-07-02T07:56:14.139611666Z" level=info msg="CreateContainer within sandbox \"ef3c9cf22d1b2bfe9313e9e1c8f106a3fc238ee5af60773bcb21260f391bc939\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 2 07:56:14.179626 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1942350497.mount: Deactivated successfully. Jul 2 07:56:14.187003 env[1407]: time="2024-07-02T07:56:14.186535991Z" level=info msg="CreateContainer within sandbox \"ef3c9cf22d1b2bfe9313e9e1c8f106a3fc238ee5af60773bcb21260f391bc939\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"897c49934e1c98c6cb09ce6c8f0271f1da0794975c18cc5abe63ffaf8323a3d6\"" Jul 2 07:56:14.187371 env[1407]: time="2024-07-02T07:56:14.187331868Z" level=info msg="StartContainer for \"897c49934e1c98c6cb09ce6c8f0271f1da0794975c18cc5abe63ffaf8323a3d6\"" Jul 2 07:56:14.217447 systemd[1]: Started cri-containerd-897c49934e1c98c6cb09ce6c8f0271f1da0794975c18cc5abe63ffaf8323a3d6.scope. 
Jul 2 07:56:14.251446 env[1407]: time="2024-07-02T07:56:14.249704440Z" level=info msg="StartContainer for \"897c49934e1c98c6cb09ce6c8f0271f1da0794975c18cc5abe63ffaf8323a3d6\" returns successfully"
Jul 2 07:56:14.259299 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jul 2 07:56:14.260015 systemd[1]: Stopped systemd-sysctl.service.
Jul 2 07:56:14.260292 systemd[1]: Stopping systemd-sysctl.service...
Jul 2 07:56:14.264243 systemd[1]: Starting systemd-sysctl.service...
Jul 2 07:56:14.268056 systemd[1]: cri-containerd-897c49934e1c98c6cb09ce6c8f0271f1da0794975c18cc5abe63ffaf8323a3d6.scope: Deactivated successfully.
Jul 2 07:56:14.278733 systemd[1]: Finished systemd-sysctl.service.
Jul 2 07:56:14.298382 env[1407]: time="2024-07-02T07:56:14.298335915Z" level=info msg="shim disconnected" id=897c49934e1c98c6cb09ce6c8f0271f1da0794975c18cc5abe63ffaf8323a3d6
Jul 2 07:56:14.298382 env[1407]: time="2024-07-02T07:56:14.298382214Z" level=warning msg="cleaning up after shim disconnected" id=897c49934e1c98c6cb09ce6c8f0271f1da0794975c18cc5abe63ffaf8323a3d6 namespace=k8s.io
Jul 2 07:56:14.298754 env[1407]: time="2024-07-02T07:56:14.298393514Z" level=info msg="cleaning up dead shim"
Jul 2 07:56:14.306632 env[1407]: time="2024-07-02T07:56:14.306598073Z" level=warning msg="cleanup warnings time=\"2024-07-02T07:56:14Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2979 runtime=io.containerd.runc.v2\n"
Jul 2 07:56:15.144448 env[1407]: time="2024-07-02T07:56:15.144387780Z" level=info msg="CreateContainer within sandbox \"ef3c9cf22d1b2bfe9313e9e1c8f106a3fc238ee5af60773bcb21260f391bc939\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jul 2 07:56:15.174853 systemd[1]: run-containerd-runc-k8s.io-897c49934e1c98c6cb09ce6c8f0271f1da0794975c18cc5abe63ffaf8323a3d6-runc.5XnyGF.mount: Deactivated successfully.
Jul 2 07:56:15.174981 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-897c49934e1c98c6cb09ce6c8f0271f1da0794975c18cc5abe63ffaf8323a3d6-rootfs.mount: Deactivated successfully.
Jul 2 07:56:15.281116 env[1407]: time="2024-07-02T07:56:15.281060131Z" level=info msg="CreateContainer within sandbox \"ef3c9cf22d1b2bfe9313e9e1c8f106a3fc238ee5af60773bcb21260f391bc939\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"a3e018b2820f7fcaf8e13d9917a3e6dfa6a3da9a86b997d4baf85bcf00f68945\""
Jul 2 07:56:15.283229 env[1407]: time="2024-07-02T07:56:15.281841608Z" level=info msg="StartContainer for \"a3e018b2820f7fcaf8e13d9917a3e6dfa6a3da9a86b997d4baf85bcf00f68945\""
Jul 2 07:56:15.310509 systemd[1]: Started cri-containerd-a3e018b2820f7fcaf8e13d9917a3e6dfa6a3da9a86b997d4baf85bcf00f68945.scope.
Jul 2 07:56:15.352889 systemd[1]: cri-containerd-a3e018b2820f7fcaf8e13d9917a3e6dfa6a3da9a86b997d4baf85bcf00f68945.scope: Deactivated successfully.
Jul 2 07:56:15.355806 env[1407]: time="2024-07-02T07:56:15.355757972Z" level=info msg="StartContainer for \"a3e018b2820f7fcaf8e13d9917a3e6dfa6a3da9a86b997d4baf85bcf00f68945\" returns successfully"
Jul 2 07:56:15.388031 env[1407]: time="2024-07-02T07:56:15.387971441Z" level=info msg="shim disconnected" id=a3e018b2820f7fcaf8e13d9917a3e6dfa6a3da9a86b997d4baf85bcf00f68945
Jul 2 07:56:15.388031 env[1407]: time="2024-07-02T07:56:15.388032239Z" level=warning msg="cleaning up after shim disconnected" id=a3e018b2820f7fcaf8e13d9917a3e6dfa6a3da9a86b997d4baf85bcf00f68945 namespace=k8s.io
Jul 2 07:56:15.388340 env[1407]: time="2024-07-02T07:56:15.388045039Z" level=info msg="cleaning up dead shim"
Jul 2 07:56:15.395946 env[1407]: time="2024-07-02T07:56:15.395426925Z" level=warning msg="cleanup warnings time=\"2024-07-02T07:56:15Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3038 runtime=io.containerd.runc.v2\n"
Jul 2 07:56:16.153602 env[1407]: time="2024-07-02T07:56:16.153502876Z" level=info msg="CreateContainer within sandbox \"ef3c9cf22d1b2bfe9313e9e1c8f106a3fc238ee5af60773bcb21260f391bc939\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jul 2 07:56:16.177129 systemd[1]: run-containerd-runc-k8s.io-a3e018b2820f7fcaf8e13d9917a3e6dfa6a3da9a86b997d4baf85bcf00f68945-runc.xledTo.mount: Deactivated successfully.
Jul 2 07:56:16.177252 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a3e018b2820f7fcaf8e13d9917a3e6dfa6a3da9a86b997d4baf85bcf00f68945-rootfs.mount: Deactivated successfully.
Jul 2 07:56:16.200204 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1170831769.mount: Deactivated successfully.
Jul 2 07:56:16.215960 env[1407]: time="2024-07-02T07:56:16.215918897Z" level=info msg="CreateContainer within sandbox \"ef3c9cf22d1b2bfe9313e9e1c8f106a3fc238ee5af60773bcb21260f391bc939\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"2d8f3a32cec34c119b5c5a7653510b5abed79834183c94b1acb71f41fcdf8527\""
Jul 2 07:56:16.218008 env[1407]: time="2024-07-02T07:56:16.216617077Z" level=info msg="StartContainer for \"2d8f3a32cec34c119b5c5a7653510b5abed79834183c94b1acb71f41fcdf8527\""
Jul 2 07:56:16.233482 systemd[1]: Started cri-containerd-2d8f3a32cec34c119b5c5a7653510b5abed79834183c94b1acb71f41fcdf8527.scope.
Jul 2 07:56:16.262826 systemd[1]: cri-containerd-2d8f3a32cec34c119b5c5a7653510b5abed79834183c94b1acb71f41fcdf8527.scope: Deactivated successfully.
Jul 2 07:56:16.264719 env[1407]: time="2024-07-02T07:56:16.264643508Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb00c368e_cad5_4e58_9d95_3e2ae0bef877.slice/cri-containerd-2d8f3a32cec34c119b5c5a7653510b5abed79834183c94b1acb71f41fcdf8527.scope/memory.events\": no such file or directory"
Jul 2 07:56:16.393185 env[1407]: time="2024-07-02T07:56:16.393104545Z" level=info msg="StartContainer for \"2d8f3a32cec34c119b5c5a7653510b5abed79834183c94b1acb71f41fcdf8527\" returns successfully"
Jul 2 07:56:16.429679 env[1407]: time="2024-07-02T07:56:16.428930824Z" level=info msg="shim disconnected" id=2d8f3a32cec34c119b5c5a7653510b5abed79834183c94b1acb71f41fcdf8527
Jul 2 07:56:16.429679 env[1407]: time="2024-07-02T07:56:16.428987122Z" level=warning msg="cleaning up after shim disconnected" id=2d8f3a32cec34c119b5c5a7653510b5abed79834183c94b1acb71f41fcdf8527 namespace=k8s.io
Jul 2 07:56:16.429679 env[1407]: time="2024-07-02T07:56:16.428999422Z" level=info msg="cleaning up dead shim"
Jul 2 07:56:16.436708 env[1407]: time="2024-07-02T07:56:16.436666403Z" level=warning msg="cleanup warnings time=\"2024-07-02T07:56:16Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3095 runtime=io.containerd.runc.v2\n"
Jul 2 07:56:17.156408 env[1407]: time="2024-07-02T07:56:17.156338445Z" level=info msg="CreateContainer within sandbox \"ef3c9cf22d1b2bfe9313e9e1c8f106a3fc238ee5af60773bcb21260f391bc939\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jul 2 07:56:17.203021 env[1407]: time="2024-07-02T07:56:17.202974633Z" level=info msg="CreateContainer within sandbox \"ef3c9cf22d1b2bfe9313e9e1c8f106a3fc238ee5af60773bcb21260f391bc939\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"652fdec76f39b6bc8254aaca9a8609fdd092de7ae3ac4a9854c658abe184ab98\""
Jul 2 07:56:17.204913 env[1407]: time="2024-07-02T07:56:17.203532617Z" level=info msg="StartContainer for \"652fdec76f39b6bc8254aaca9a8609fdd092de7ae3ac4a9854c658abe184ab98\""
Jul 2 07:56:17.229388 systemd[1]: Started cri-containerd-652fdec76f39b6bc8254aaca9a8609fdd092de7ae3ac4a9854c658abe184ab98.scope.
Jul 2 07:56:17.265744 env[1407]: time="2024-07-02T07:56:17.265608271Z" level=info msg="StartContainer for \"652fdec76f39b6bc8254aaca9a8609fdd092de7ae3ac4a9854c658abe184ab98\" returns successfully"
Jul 2 07:56:17.386223 kubelet[2483]: I0702 07:56:17.386186 2483 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
Jul 2 07:56:17.412949 kubelet[2483]: I0702 07:56:17.412831 2483 topology_manager.go:215] "Topology Admit Handler" podUID="3cd90657-e385-4525-b374-37e4c091e954" podNamespace="kube-system" podName="coredns-5dd5756b68-gd7pr"
Jul 2 07:56:17.417260 kubelet[2483]: I0702 07:56:17.417143 2483 topology_manager.go:215] "Topology Admit Handler" podUID="c4fa4abd-0ab3-4d02-a7cc-92dd3bed288f" podNamespace="kube-system" podName="coredns-5dd5756b68-rrpgn"
Jul 2 07:56:17.420448 systemd[1]: Created slice kubepods-burstable-pod3cd90657_e385_4525_b374_37e4c091e954.slice.
Jul 2 07:56:17.426615 kubelet[2483]: W0702 07:56:17.426299 2483 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:ci-3510.3.5-a-61dd50c322" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.5-a-61dd50c322' and this object
Jul 2 07:56:17.426615 kubelet[2483]: E0702 07:56:17.426364 2483 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:ci-3510.3.5-a-61dd50c322" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.5-a-61dd50c322' and this object
Jul 2 07:56:17.426911 systemd[1]: Created slice kubepods-burstable-podc4fa4abd_0ab3_4d02_a7cc_92dd3bed288f.slice.
Jul 2 07:56:17.495898 kubelet[2483]: I0702 07:56:17.495852 2483 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3cd90657-e385-4525-b374-37e4c091e954-config-volume\") pod \"coredns-5dd5756b68-gd7pr\" (UID: \"3cd90657-e385-4525-b374-37e4c091e954\") " pod="kube-system/coredns-5dd5756b68-gd7pr"
Jul 2 07:56:17.496279 kubelet[2483]: I0702 07:56:17.496261 2483 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-59s9t\" (UniqueName: \"kubernetes.io/projected/3cd90657-e385-4525-b374-37e4c091e954-kube-api-access-59s9t\") pod \"coredns-5dd5756b68-gd7pr\" (UID: \"3cd90657-e385-4525-b374-37e4c091e954\") " pod="kube-system/coredns-5dd5756b68-gd7pr"
Jul 2 07:56:17.496422 kubelet[2483]: I0702 07:56:17.496410 2483 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c4fa4abd-0ab3-4d02-a7cc-92dd3bed288f-config-volume\") pod \"coredns-5dd5756b68-rrpgn\" (UID: \"c4fa4abd-0ab3-4d02-a7cc-92dd3bed288f\") " pod="kube-system/coredns-5dd5756b68-rrpgn"
Jul 2 07:56:17.597278 kubelet[2483]: I0702 07:56:17.597239 2483 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bzl4w\" (UniqueName: \"kubernetes.io/projected/c4fa4abd-0ab3-4d02-a7cc-92dd3bed288f-kube-api-access-bzl4w\") pod \"coredns-5dd5756b68-rrpgn\" (UID: \"c4fa4abd-0ab3-4d02-a7cc-92dd3bed288f\") " pod="kube-system/coredns-5dd5756b68-rrpgn"
Jul 2 07:56:18.598098 kubelet[2483]: E0702 07:56:18.598056 2483 configmap.go:199] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition
Jul 2 07:56:18.598598 kubelet[2483]: E0702 07:56:18.598056 2483 configmap.go:199] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition
Jul 2 07:56:18.598598 kubelet[2483]: E0702 07:56:18.598516 2483 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c4fa4abd-0ab3-4d02-a7cc-92dd3bed288f-config-volume podName:c4fa4abd-0ab3-4d02-a7cc-92dd3bed288f nodeName:}" failed. No retries permitted until 2024-07-02 07:56:19.098231903 +0000 UTC m=+43.172576635 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/c4fa4abd-0ab3-4d02-a7cc-92dd3bed288f-config-volume") pod "coredns-5dd5756b68-rrpgn" (UID: "c4fa4abd-0ab3-4d02-a7cc-92dd3bed288f") : failed to sync configmap cache: timed out waiting for the condition
Jul 2 07:56:18.598598 kubelet[2483]: E0702 07:56:18.598573 2483 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/3cd90657-e385-4525-b374-37e4c091e954-config-volume podName:3cd90657-e385-4525-b374-37e4c091e954 nodeName:}" failed. No retries permitted until 2024-07-02 07:56:19.098535995 +0000 UTC m=+43.172880727 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/3cd90657-e385-4525-b374-37e4c091e954-config-volume") pod "coredns-5dd5756b68-gd7pr" (UID: "3cd90657-e385-4525-b374-37e4c091e954") : failed to sync configmap cache: timed out waiting for the condition
Jul 2 07:56:19.225498 env[1407]: time="2024-07-02T07:56:19.225432569Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-gd7pr,Uid:3cd90657-e385-4525-b374-37e4c091e954,Namespace:kube-system,Attempt:0,}"
Jul 2 07:56:19.230535 env[1407]: time="2024-07-02T07:56:19.230239437Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-rrpgn,Uid:c4fa4abd-0ab3-4d02-a7cc-92dd3bed288f,Namespace:kube-system,Attempt:0,}"
Jul 2 07:56:20.174240 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready
Jul 2 07:56:20.174388 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready
Jul 2 07:56:20.174526 systemd-networkd[1558]: cilium_host: Link UP
Jul 2 07:56:20.176521 systemd-networkd[1558]: cilium_net: Link UP
Jul 2 07:56:20.179438 systemd-networkd[1558]: cilium_net: Gained carrier
Jul 2 07:56:20.179831 systemd-networkd[1558]: cilium_host: Gained carrier
Jul 2 07:56:20.232687 systemd-networkd[1558]: cilium_host: Gained IPv6LL
Jul 2 07:56:20.333390 systemd-networkd[1558]: cilium_vxlan: Link UP
Jul 2 07:56:20.333649 systemd-networkd[1558]: cilium_vxlan: Gained carrier
Jul 2 07:56:20.683573 kernel: NET: Registered PF_ALG protocol family
Jul 2 07:56:20.932826 systemd-networkd[1558]: cilium_net: Gained IPv6LL
Jul 2 07:56:21.814630 systemd-networkd[1558]: lxc_health: Link UP
Jul 2 07:56:21.832725 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Jul 2 07:56:21.832184 systemd-networkd[1558]: lxc_health: Gained carrier
Jul 2 07:56:22.212794 systemd-networkd[1558]: cilium_vxlan: Gained IPv6LL
Jul 2 07:56:22.311130 systemd-networkd[1558]: lxccb36d159f8a4: Link UP
Jul 2 07:56:22.320652 kernel: eth0: renamed from tmpa0aed
Jul 2 07:56:22.334719 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxccb36d159f8a4: link becomes ready
Jul 2 07:56:22.334198 systemd-networkd[1558]: lxccb36d159f8a4: Gained carrier
Jul 2 07:56:22.364033 systemd-networkd[1558]: lxc9632423da365: Link UP
Jul 2 07:56:22.372566 kernel: eth0: renamed from tmp71da9
Jul 2 07:56:22.376355 kubelet[2483]: I0702 07:56:22.376319 2483 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-r49lb" podStartSLOduration=21.676792705 podCreationTimestamp="2024-07-02 07:55:45 +0000 UTC" firstStartedPulling="2024-07-02 07:55:46.468230562 +0000 UTC m=+10.542575294" lastFinishedPulling="2024-07-02 07:56:02.167696966 +0000 UTC m=+26.242041698" observedRunningTime="2024-07-02 07:56:18.170147888 +0000 UTC m=+42.244492720" watchObservedRunningTime="2024-07-02 07:56:22.376259109 +0000 UTC m=+46.450603941"
Jul 2 07:56:22.381879 systemd-networkd[1558]: lxc9632423da365: Gained carrier
Jul 2 07:56:22.382603 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc9632423da365: link becomes ready
Jul 2 07:56:23.300770 systemd-networkd[1558]: lxc_health: Gained IPv6LL
Jul 2 07:56:23.428948 systemd-networkd[1558]: lxccb36d159f8a4: Gained IPv6LL
Jul 2 07:56:23.620849 systemd-networkd[1558]: lxc9632423da365: Gained IPv6LL
Jul 2 07:56:26.125528 env[1407]: time="2024-07-02T07:56:26.125448792Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 2 07:56:26.126109 env[1407]: time="2024-07-02T07:56:26.126063377Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 2 07:56:26.126263 env[1407]: time="2024-07-02T07:56:26.126235373Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 07:56:26.126659 env[1407]: time="2024-07-02T07:56:26.126616863Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/71da9e55bb74a3eedb5ddb038b6a0fb4857c0f857231ffb4db93056b9d0a2614 pid=3650 runtime=io.containerd.runc.v2
Jul 2 07:56:26.146027 env[1407]: time="2024-07-02T07:56:26.128375919Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 2 07:56:26.146027 env[1407]: time="2024-07-02T07:56:26.128410118Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 2 07:56:26.146027 env[1407]: time="2024-07-02T07:56:26.128424517Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 07:56:26.146027 env[1407]: time="2024-07-02T07:56:26.128569214Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a0aed5faae4302709ad4946e4cfa220b7271f095c75a11019f781fb45a83a78d pid=3655 runtime=io.containerd.runc.v2
Jul 2 07:56:26.172410 systemd[1]: Started cri-containerd-a0aed5faae4302709ad4946e4cfa220b7271f095c75a11019f781fb45a83a78d.scope.
Jul 2 07:56:26.194987 systemd[1]: run-containerd-runc-k8s.io-71da9e55bb74a3eedb5ddb038b6a0fb4857c0f857231ffb4db93056b9d0a2614-runc.Tux6jb.mount: Deactivated successfully.
Jul 2 07:56:26.202004 systemd[1]: Started cri-containerd-71da9e55bb74a3eedb5ddb038b6a0fb4857c0f857231ffb4db93056b9d0a2614.scope.
Jul 2 07:56:26.282043 env[1407]: time="2024-07-02T07:56:26.281986546Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-gd7pr,Uid:3cd90657-e385-4525-b374-37e4c091e954,Namespace:kube-system,Attempt:0,} returns sandbox id \"a0aed5faae4302709ad4946e4cfa220b7271f095c75a11019f781fb45a83a78d\""
Jul 2 07:56:26.286608 env[1407]: time="2024-07-02T07:56:26.286527531Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-rrpgn,Uid:c4fa4abd-0ab3-4d02-a7cc-92dd3bed288f,Namespace:kube-system,Attempt:0,} returns sandbox id \"71da9e55bb74a3eedb5ddb038b6a0fb4857c0f857231ffb4db93056b9d0a2614\""
Jul 2 07:56:26.297168 env[1407]: time="2024-07-02T07:56:26.297110064Z" level=info msg="CreateContainer within sandbox \"a0aed5faae4302709ad4946e4cfa220b7271f095c75a11019f781fb45a83a78d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jul 2 07:56:26.310448 env[1407]: time="2024-07-02T07:56:26.310411929Z" level=info msg="CreateContainer within sandbox \"71da9e55bb74a3eedb5ddb038b6a0fb4857c0f857231ffb4db93056b9d0a2614\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jul 2 07:56:26.403945 env[1407]: time="2024-07-02T07:56:26.403795675Z" level=info msg="CreateContainer within sandbox \"71da9e55bb74a3eedb5ddb038b6a0fb4857c0f857231ffb4db93056b9d0a2614\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7ab8150cd2d9d5a6e4d0284f698b98bec92dd0c5c5976624a865389851632a50\""
Jul 2 07:56:26.404810 env[1407]: time="2024-07-02T07:56:26.404768350Z" level=info msg="StartContainer for \"7ab8150cd2d9d5a6e4d0284f698b98bec92dd0c5c5976624a865389851632a50\""
Jul 2 07:56:26.409944 env[1407]: time="2024-07-02T07:56:26.409873221Z" level=info msg="CreateContainer within sandbox \"a0aed5faae4302709ad4946e4cfa220b7271f095c75a11019f781fb45a83a78d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4bef2fba98e26c5bfaea1ed1f44beeef40bd821c515cb262ad874cbfd73a0e70\""
Jul 2 07:56:26.412949 env[1407]: time="2024-07-02T07:56:26.411120390Z" level=info msg="StartContainer for \"4bef2fba98e26c5bfaea1ed1f44beeef40bd821c515cb262ad874cbfd73a0e70\""
Jul 2 07:56:26.435296 systemd[1]: Started cri-containerd-7ab8150cd2d9d5a6e4d0284f698b98bec92dd0c5c5976624a865389851632a50.scope.
Jul 2 07:56:26.449431 systemd[1]: Started cri-containerd-4bef2fba98e26c5bfaea1ed1f44beeef40bd821c515cb262ad874cbfd73a0e70.scope.
Jul 2 07:56:26.503312 env[1407]: time="2024-07-02T07:56:26.503259667Z" level=info msg="StartContainer for \"7ab8150cd2d9d5a6e4d0284f698b98bec92dd0c5c5976624a865389851632a50\" returns successfully"
Jul 2 07:56:26.514322 env[1407]: time="2024-07-02T07:56:26.514251590Z" level=info msg="StartContainer for \"4bef2fba98e26c5bfaea1ed1f44beeef40bd821c515cb262ad874cbfd73a0e70\" returns successfully"
Jul 2 07:56:27.207542 kubelet[2483]: I0702 07:56:27.207490 2483 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-rrpgn" podStartSLOduration=42.20743907 podCreationTimestamp="2024-07-02 07:55:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 07:56:27.193233824 +0000 UTC m=+51.267578556" watchObservedRunningTime="2024-07-02 07:56:27.20743907 +0000 UTC m=+51.281783902"
Jul 2 07:57:14.568619 systemd[1]: Started sshd@5-10.200.8.39:22-10.200.16.10:45770.service.
Jul 2 07:57:15.222706 sshd[3809]: Accepted publickey for core from 10.200.16.10 port 45770 ssh2: RSA SHA256:rMFzF1f+VHcPwzXfxcw29Fm3hFOpXl45tnQNe1IK4iE
Jul 2 07:57:15.224358 sshd[3809]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 07:57:15.228449 systemd-logind[1395]: New session 8 of user core.
Jul 2 07:57:15.230699 systemd[1]: Started session-8.scope.
Jul 2 07:57:15.750165 sshd[3809]: pam_unix(sshd:session): session closed for user core
Jul 2 07:57:15.753660 systemd-logind[1395]: Session 8 logged out. Waiting for processes to exit.
Jul 2 07:57:15.754027 systemd[1]: sshd@5-10.200.8.39:22-10.200.16.10:45770.service: Deactivated successfully.
Jul 2 07:57:15.755078 systemd[1]: session-8.scope: Deactivated successfully.
Jul 2 07:57:15.756014 systemd-logind[1395]: Removed session 8.
Jul 2 07:57:20.860870 systemd[1]: Started sshd@6-10.200.8.39:22-10.200.16.10:40484.service.
Jul 2 07:57:21.502543 sshd[3825]: Accepted publickey for core from 10.200.16.10 port 40484 ssh2: RSA SHA256:rMFzF1f+VHcPwzXfxcw29Fm3hFOpXl45tnQNe1IK4iE
Jul 2 07:57:21.504088 sshd[3825]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 07:57:21.509140 systemd-logind[1395]: New session 9 of user core.
Jul 2 07:57:21.509714 systemd[1]: Started session-9.scope.
Jul 2 07:57:22.014224 sshd[3825]: pam_unix(sshd:session): session closed for user core
Jul 2 07:57:22.019154 systemd[1]: sshd@6-10.200.8.39:22-10.200.16.10:40484.service: Deactivated successfully.
Jul 2 07:57:22.020326 systemd[1]: session-9.scope: Deactivated successfully.
Jul 2 07:57:22.021231 systemd-logind[1395]: Session 9 logged out. Waiting for processes to exit.
Jul 2 07:57:22.022285 systemd-logind[1395]: Removed session 9.
Jul 2 07:57:27.141899 systemd[1]: Started sshd@7-10.200.8.39:22-10.200.16.10:40496.service.
Jul 2 07:57:27.786069 sshd[3838]: Accepted publickey for core from 10.200.16.10 port 40496 ssh2: RSA SHA256:rMFzF1f+VHcPwzXfxcw29Fm3hFOpXl45tnQNe1IK4iE
Jul 2 07:57:27.787947 sshd[3838]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 07:57:27.793727 systemd[1]: Started session-10.scope.
Jul 2 07:57:27.794189 systemd-logind[1395]: New session 10 of user core.
Jul 2 07:57:28.300766 sshd[3838]: pam_unix(sshd:session): session closed for user core
Jul 2 07:57:28.304064 systemd[1]: sshd@7-10.200.8.39:22-10.200.16.10:40496.service: Deactivated successfully.
Jul 2 07:57:28.305029 systemd[1]: session-10.scope: Deactivated successfully.
Jul 2 07:57:28.305797 systemd-logind[1395]: Session 10 logged out. Waiting for processes to exit.
Jul 2 07:57:28.306666 systemd-logind[1395]: Removed session 10.
Jul 2 07:57:33.413833 systemd[1]: Started sshd@8-10.200.8.39:22-10.200.16.10:49346.service.
Jul 2 07:57:34.063972 sshd[3852]: Accepted publickey for core from 10.200.16.10 port 49346 ssh2: RSA SHA256:rMFzF1f+VHcPwzXfxcw29Fm3hFOpXl45tnQNe1IK4iE
Jul 2 07:57:34.065907 sshd[3852]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 07:57:34.071280 systemd[1]: Started session-11.scope.
Jul 2 07:57:34.072361 systemd-logind[1395]: New session 11 of user core.
Jul 2 07:57:34.584607 sshd[3852]: pam_unix(sshd:session): session closed for user core
Jul 2 07:57:34.588210 systemd[1]: sshd@8-10.200.8.39:22-10.200.16.10:49346.service: Deactivated successfully.
Jul 2 07:57:34.589365 systemd[1]: session-11.scope: Deactivated successfully.
Jul 2 07:57:34.590217 systemd-logind[1395]: Session 11 logged out. Waiting for processes to exit.
Jul 2 07:57:34.591109 systemd-logind[1395]: Removed session 11.
Jul 2 07:57:39.693892 systemd[1]: Started sshd@9-10.200.8.39:22-10.200.16.10:43184.service.
Jul 2 07:57:40.338957 sshd[3866]: Accepted publickey for core from 10.200.16.10 port 43184 ssh2: RSA SHA256:rMFzF1f+VHcPwzXfxcw29Fm3hFOpXl45tnQNe1IK4iE
Jul 2 07:57:40.340622 sshd[3866]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 07:57:40.346500 systemd[1]: Started session-12.scope.
Jul 2 07:57:40.346921 systemd-logind[1395]: New session 12 of user core.
Jul 2 07:57:40.847204 sshd[3866]: pam_unix(sshd:session): session closed for user core
Jul 2 07:57:40.850923 systemd-logind[1395]: Session 12 logged out. Waiting for processes to exit.
Jul 2 07:57:40.851214 systemd[1]: sshd@9-10.200.8.39:22-10.200.16.10:43184.service: Deactivated successfully.
Jul 2 07:57:40.852340 systemd[1]: session-12.scope: Deactivated successfully.
Jul 2 07:57:40.853468 systemd-logind[1395]: Removed session 12.
Jul 2 07:57:45.963741 systemd[1]: Started sshd@10-10.200.8.39:22-10.200.16.10:43190.service.
Jul 2 07:57:46.605830 sshd[3878]: Accepted publickey for core from 10.200.16.10 port 43190 ssh2: RSA SHA256:rMFzF1f+VHcPwzXfxcw29Fm3hFOpXl45tnQNe1IK4iE
Jul 2 07:57:46.607761 sshd[3878]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 07:57:46.612913 systemd[1]: Started session-13.scope.
Jul 2 07:57:46.613532 systemd-logind[1395]: New session 13 of user core.
Jul 2 07:57:47.112733 sshd[3878]: pam_unix(sshd:session): session closed for user core
Jul 2 07:57:47.116666 systemd[1]: sshd@10-10.200.8.39:22-10.200.16.10:43190.service: Deactivated successfully.
Jul 2 07:57:47.117924 systemd[1]: session-13.scope: Deactivated successfully.
Jul 2 07:57:47.118809 systemd-logind[1395]: Session 13 logged out. Waiting for processes to exit.
Jul 2 07:57:47.119720 systemd-logind[1395]: Removed session 13.
Jul 2 07:57:52.221517 systemd[1]: Started sshd@11-10.200.8.39:22-10.200.16.10:55124.service.
Jul 2 07:57:52.865767 sshd[3892]: Accepted publickey for core from 10.200.16.10 port 55124 ssh2: RSA SHA256:rMFzF1f+VHcPwzXfxcw29Fm3hFOpXl45tnQNe1IK4iE
Jul 2 07:57:52.867565 sshd[3892]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 07:57:52.872875 systemd-logind[1395]: New session 14 of user core.
Jul 2 07:57:52.873645 systemd[1]: Started session-14.scope.
Jul 2 07:57:53.383641 sshd[3892]: pam_unix(sshd:session): session closed for user core
Jul 2 07:57:53.386601 systemd[1]: sshd@11-10.200.8.39:22-10.200.16.10:55124.service: Deactivated successfully.
Jul 2 07:57:53.387586 systemd[1]: session-14.scope: Deactivated successfully.
Jul 2 07:57:53.388337 systemd-logind[1395]: Session 14 logged out. Waiting for processes to exit.
Jul 2 07:57:53.389175 systemd-logind[1395]: Removed session 14.
Jul 2 07:57:53.493110 systemd[1]: Started sshd@12-10.200.8.39:22-10.200.16.10:55130.service.
Jul 2 07:57:54.138207 sshd[3905]: Accepted publickey for core from 10.200.16.10 port 55130 ssh2: RSA SHA256:rMFzF1f+VHcPwzXfxcw29Fm3hFOpXl45tnQNe1IK4iE
Jul 2 07:57:54.140024 sshd[3905]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 07:57:54.144524 systemd-logind[1395]: New session 15 of user core.
Jul 2 07:57:54.146853 systemd[1]: Started session-15.scope.
Jul 2 07:57:55.355705 sshd[3905]: pam_unix(sshd:session): session closed for user core
Jul 2 07:57:55.359707 systemd[1]: sshd@12-10.200.8.39:22-10.200.16.10:55130.service: Deactivated successfully.
Jul 2 07:57:55.360695 systemd[1]: session-15.scope: Deactivated successfully.
Jul 2 07:57:55.361427 systemd-logind[1395]: Session 15 logged out. Waiting for processes to exit.
Jul 2 07:57:55.362310 systemd-logind[1395]: Removed session 15.
Jul 2 07:57:55.465630 systemd[1]: Started sshd@13-10.200.8.39:22-10.200.16.10:55142.service.
Jul 2 07:57:56.116542 sshd[3915]: Accepted publickey for core from 10.200.16.10 port 55142 ssh2: RSA SHA256:rMFzF1f+VHcPwzXfxcw29Fm3hFOpXl45tnQNe1IK4iE
Jul 2 07:57:56.118376 sshd[3915]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 07:57:56.124144 systemd[1]: Started session-16.scope.
Jul 2 07:57:56.124685 systemd-logind[1395]: New session 16 of user core.
Jul 2 07:57:56.650439 sshd[3915]: pam_unix(sshd:session): session closed for user core
Jul 2 07:57:56.654040 systemd[1]: sshd@13-10.200.8.39:22-10.200.16.10:55142.service: Deactivated successfully.
Jul 2 07:57:56.655302 systemd[1]: session-16.scope: Deactivated successfully.
Jul 2 07:57:56.656203 systemd-logind[1395]: Session 16 logged out. Waiting for processes to exit.
Jul 2 07:57:56.657265 systemd-logind[1395]: Removed session 16.
Jul 2 07:58:01.760609 systemd[1]: Started sshd@14-10.200.8.39:22-10.200.16.10:35288.service.
Jul 2 07:58:02.405322 sshd[3927]: Accepted publickey for core from 10.200.16.10 port 35288 ssh2: RSA SHA256:rMFzF1f+VHcPwzXfxcw29Fm3hFOpXl45tnQNe1IK4iE
Jul 2 07:58:02.406939 sshd[3927]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 07:58:02.412343 systemd[1]: Started session-17.scope.
Jul 2 07:58:02.413072 systemd-logind[1395]: New session 17 of user core.
Jul 2 07:58:02.918707 sshd[3927]: pam_unix(sshd:session): session closed for user core
Jul 2 07:58:02.922403 systemd[1]: sshd@14-10.200.8.39:22-10.200.16.10:35288.service: Deactivated successfully.
Jul 2 07:58:02.923638 systemd[1]: session-17.scope: Deactivated successfully.
Jul 2 07:58:02.924495 systemd-logind[1395]: Session 17 logged out. Waiting for processes to exit.
Jul 2 07:58:02.925499 systemd-logind[1395]: Removed session 17.
Jul 2 07:58:08.029354 systemd[1]: Started sshd@15-10.200.8.39:22-10.200.16.10:35302.service.
Jul 2 07:58:08.683317 sshd[3939]: Accepted publickey for core from 10.200.16.10 port 35302 ssh2: RSA SHA256:rMFzF1f+VHcPwzXfxcw29Fm3hFOpXl45tnQNe1IK4iE
Jul 2 07:58:08.684881 sshd[3939]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 07:58:08.690196 systemd[1]: Started session-18.scope.
Jul 2 07:58:08.690861 systemd-logind[1395]: New session 18 of user core.
Jul 2 07:58:09.198794 sshd[3939]: pam_unix(sshd:session): session closed for user core
Jul 2 07:58:09.202371 systemd[1]: sshd@15-10.200.8.39:22-10.200.16.10:35302.service: Deactivated successfully.
Jul 2 07:58:09.203489 systemd[1]: session-18.scope: Deactivated successfully.
Jul 2 07:58:09.204242 systemd-logind[1395]: Session 18 logged out. Waiting for processes to exit.
Jul 2 07:58:09.205048 systemd-logind[1395]: Removed session 18.
Jul 2 07:58:09.307786 systemd[1]: Started sshd@16-10.200.8.39:22-10.200.16.10:48468.service.
Jul 2 07:58:09.953482 sshd[3951]: Accepted publickey for core from 10.200.16.10 port 48468 ssh2: RSA SHA256:rMFzF1f+VHcPwzXfxcw29Fm3hFOpXl45tnQNe1IK4iE
Jul 2 07:58:09.955189 sshd[3951]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 07:58:09.960817 systemd-logind[1395]: New session 19 of user core.
Jul 2 07:58:09.961814 systemd[1]: Started session-19.scope.
Jul 2 07:58:10.522252 sshd[3951]: pam_unix(sshd:session): session closed for user core
Jul 2 07:58:10.525831 systemd[1]: sshd@16-10.200.8.39:22-10.200.16.10:48468.service: Deactivated successfully.
Jul 2 07:58:10.527039 systemd[1]: session-19.scope: Deactivated successfully.
Jul 2 07:58:10.528074 systemd-logind[1395]: Session 19 logged out. Waiting for processes to exit.
Jul 2 07:58:10.529013 systemd-logind[1395]: Removed session 19.
Jul 2 07:58:10.633122 systemd[1]: Started sshd@17-10.200.8.39:22-10.200.16.10:48478.service.
Jul 2 07:58:11.282909 sshd[3960]: Accepted publickey for core from 10.200.16.10 port 48478 ssh2: RSA SHA256:rMFzF1f+VHcPwzXfxcw29Fm3hFOpXl45tnQNe1IK4iE
Jul 2 07:58:11.284707 sshd[3960]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 07:58:11.289981 systemd[1]: Started session-20.scope.
Jul 2 07:58:11.291648 systemd-logind[1395]: New session 20 of user core.
Jul 2 07:58:12.707315 sshd[3960]: pam_unix(sshd:session): session closed for user core
Jul 2 07:58:12.711338 systemd[1]: sshd@17-10.200.8.39:22-10.200.16.10:48478.service: Deactivated successfully.
Jul 2 07:58:12.712762 systemd[1]: session-20.scope: Deactivated successfully.
Jul 2 07:58:12.713884 systemd-logind[1395]: Session 20 logged out. Waiting for processes to exit.
Jul 2 07:58:12.714878 systemd-logind[1395]: Removed session 20.
Jul 2 07:58:12.817614 systemd[1]: Started sshd@18-10.200.8.39:22-10.200.16.10:48482.service.
Jul 2 07:58:13.466818 sshd[3977]: Accepted publickey for core from 10.200.16.10 port 48482 ssh2: RSA SHA256:rMFzF1f+VHcPwzXfxcw29Fm3hFOpXl45tnQNe1IK4iE
Jul 2 07:58:13.468859 sshd[3977]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 07:58:13.474679 systemd-logind[1395]: New session 21 of user core.
Jul 2 07:58:13.475195 systemd[1]: Started session-21.scope.
Jul 2 07:58:14.164096 sshd[3977]: pam_unix(sshd:session): session closed for user core
Jul 2 07:58:14.167309 systemd[1]: sshd@18-10.200.8.39:22-10.200.16.10:48482.service: Deactivated successfully.
Jul 2 07:58:14.168840 systemd-logind[1395]: Session 21 logged out. Waiting for processes to exit.
Jul 2 07:58:14.168910 systemd[1]: session-21.scope: Deactivated successfully.
Jul 2 07:58:14.170165 systemd-logind[1395]: Removed session 21.
Jul 2 07:58:14.272977 systemd[1]: Started sshd@19-10.200.8.39:22-10.200.16.10:48486.service.
Jul 2 07:58:14.918100 sshd[3987]: Accepted publickey for core from 10.200.16.10 port 48486 ssh2: RSA SHA256:rMFzF1f+VHcPwzXfxcw29Fm3hFOpXl45tnQNe1IK4iE
Jul 2 07:58:14.919797 sshd[3987]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 07:58:14.924825 systemd-logind[1395]: New session 22 of user core.
Jul 2 07:58:14.925932 systemd[1]: Started session-22.scope.
Jul 2 07:58:15.424592 sshd[3987]: pam_unix(sshd:session): session closed for user core
Jul 2 07:58:15.429024 systemd[1]: sshd@19-10.200.8.39:22-10.200.16.10:48486.service: Deactivated successfully.
Jul 2 07:58:15.430050 systemd[1]: session-22.scope: Deactivated successfully.
Jul 2 07:58:15.430806 systemd-logind[1395]: Session 22 logged out. Waiting for processes to exit.
Jul 2 07:58:15.431722 systemd-logind[1395]: Removed session 22.
Jul 2 07:58:20.536195 systemd[1]: Started sshd@20-10.200.8.39:22-10.200.16.10:42154.service.
Jul 2 07:58:21.182034 sshd[4001]: Accepted publickey for core from 10.200.16.10 port 42154 ssh2: RSA SHA256:rMFzF1f+VHcPwzXfxcw29Fm3hFOpXl45tnQNe1IK4iE
Jul 2 07:58:21.183983 sshd[4001]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 07:58:21.189781 systemd[1]: Started session-23.scope.
Jul 2 07:58:21.190260 systemd-logind[1395]: New session 23 of user core.
Jul 2 07:58:21.695125 sshd[4001]: pam_unix(sshd:session): session closed for user core
Jul 2 07:58:21.698246 systemd[1]: sshd@20-10.200.8.39:22-10.200.16.10:42154.service: Deactivated successfully.
Jul 2 07:58:21.699254 systemd[1]: session-23.scope: Deactivated successfully.
Jul 2 07:58:21.700062 systemd-logind[1395]: Session 23 logged out. Waiting for processes to exit.
Jul 2 07:58:21.700958 systemd-logind[1395]: Removed session 23.
Jul 2 07:58:26.806816 systemd[1]: Started sshd@21-10.200.8.39:22-10.200.16.10:42170.service.
Jul 2 07:58:27.479918 sshd[4017]: Accepted publickey for core from 10.200.16.10 port 42170 ssh2: RSA SHA256:rMFzF1f+VHcPwzXfxcw29Fm3hFOpXl45tnQNe1IK4iE
Jul 2 07:58:27.481679 sshd[4017]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 07:58:27.487473 systemd[1]: Started session-24.scope.
Jul 2 07:58:27.488103 systemd-logind[1395]: New session 24 of user core.
Jul 2 07:58:27.990801 sshd[4017]: pam_unix(sshd:session): session closed for user core
Jul 2 07:58:27.994822 systemd[1]: sshd@21-10.200.8.39:22-10.200.16.10:42170.service: Deactivated successfully.
Jul 2 07:58:27.995755 systemd[1]: session-24.scope: Deactivated successfully.
Jul 2 07:58:27.996797 systemd-logind[1395]: Session 24 logged out. Waiting for processes to exit.
Jul 2 07:58:27.998079 systemd-logind[1395]: Removed session 24.
Jul 2 07:58:33.101739 systemd[1]: Started sshd@22-10.200.8.39:22-10.200.16.10:33586.service.
Jul 2 07:58:33.745695 sshd[4029]: Accepted publickey for core from 10.200.16.10 port 33586 ssh2: RSA SHA256:rMFzF1f+VHcPwzXfxcw29Fm3hFOpXl45tnQNe1IK4iE
Jul 2 07:58:33.747433 sshd[4029]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 07:58:33.753295 systemd[1]: Started session-25.scope.
Jul 2 07:58:33.754054 systemd-logind[1395]: New session 25 of user core.
Jul 2 07:58:34.268776 sshd[4029]: pam_unix(sshd:session): session closed for user core
Jul 2 07:58:34.272472 systemd[1]: sshd@22-10.200.8.39:22-10.200.16.10:33586.service: Deactivated successfully.
Jul 2 07:58:34.273684 systemd[1]: session-25.scope: Deactivated successfully.
Jul 2 07:58:34.274536 systemd-logind[1395]: Session 25 logged out. Waiting for processes to exit.
Jul 2 07:58:34.275665 systemd-logind[1395]: Removed session 25.
Jul 2 07:58:34.379116 systemd[1]: Started sshd@23-10.200.8.39:22-10.200.16.10:33600.service.
Jul 2 07:58:35.023314 sshd[4041]: Accepted publickey for core from 10.200.16.10 port 33600 ssh2: RSA SHA256:rMFzF1f+VHcPwzXfxcw29Fm3hFOpXl45tnQNe1IK4iE
Jul 2 07:58:35.024963 sshd[4041]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 07:58:35.030523 systemd[1]: Started session-26.scope.
Jul 2 07:58:35.031235 systemd-logind[1395]: New session 26 of user core.
Jul 2 07:58:37.208093 kubelet[2483]: I0702 07:58:37.208037 2483 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-gd7pr" podStartSLOduration=172.207980569 podCreationTimestamp="2024-07-02 07:55:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 07:56:27.222401797 +0000 UTC m=+51.296746629" watchObservedRunningTime="2024-07-02 07:58:37.207980569 +0000 UTC m=+181.282325401"
Jul 2 07:58:37.216852 env[1407]: time="2024-07-02T07:58:37.216788837Z" level=info msg="StopContainer for \"a8517e8faf8cdea143b217dc4b3a8530094504d738839209166de0a6343ccef2\" with timeout 30 (s)"
Jul 2 07:58:37.217623 env[1407]: time="2024-07-02T07:58:37.217516642Z" level=info msg="Stop container \"a8517e8faf8cdea143b217dc4b3a8530094504d738839209166de0a6343ccef2\" with signal terminated"
Jul 2 07:58:37.231982 systemd[1]: run-containerd-runc-k8s.io-652fdec76f39b6bc8254aaca9a8609fdd092de7ae3ac4a9854c658abe184ab98-runc.CqQ9RL.mount: Deactivated successfully.
Jul 2 07:58:37.249165 systemd[1]: cri-containerd-a8517e8faf8cdea143b217dc4b3a8530094504d738839209166de0a6343ccef2.scope: Deactivated successfully.
Jul 2 07:58:37.262760 env[1407]: time="2024-07-02T07:58:37.262684788Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 2 07:58:37.273516 env[1407]: time="2024-07-02T07:58:37.273480270Z" level=info msg="StopContainer for \"652fdec76f39b6bc8254aaca9a8609fdd092de7ae3ac4a9854c658abe184ab98\" with timeout 2 (s)" Jul 2 07:58:37.273970 env[1407]: time="2024-07-02T07:58:37.273935674Z" level=info msg="Stop container \"652fdec76f39b6bc8254aaca9a8609fdd092de7ae3ac4a9854c658abe184ab98\" with signal terminated" Jul 2 07:58:37.280158 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a8517e8faf8cdea143b217dc4b3a8530094504d738839209166de0a6343ccef2-rootfs.mount: Deactivated successfully. Jul 2 07:58:37.289454 systemd-networkd[1558]: lxc_health: Link DOWN Jul 2 07:58:37.289463 systemd-networkd[1558]: lxc_health: Lost carrier Jul 2 07:58:37.311019 systemd[1]: cri-containerd-652fdec76f39b6bc8254aaca9a8609fdd092de7ae3ac4a9854c658abe184ab98.scope: Deactivated successfully. Jul 2 07:58:37.311306 systemd[1]: cri-containerd-652fdec76f39b6bc8254aaca9a8609fdd092de7ae3ac4a9854c658abe184ab98.scope: Consumed 7.216s CPU time. 
Jul 2 07:58:37.322065 env[1407]: time="2024-07-02T07:58:37.322019142Z" level=info msg="shim disconnected" id=a8517e8faf8cdea143b217dc4b3a8530094504d738839209166de0a6343ccef2 Jul 2 07:58:37.322211 env[1407]: time="2024-07-02T07:58:37.322066642Z" level=warning msg="cleaning up after shim disconnected" id=a8517e8faf8cdea143b217dc4b3a8530094504d738839209166de0a6343ccef2 namespace=k8s.io Jul 2 07:58:37.322211 env[1407]: time="2024-07-02T07:58:37.322079442Z" level=info msg="cleaning up dead shim" Jul 2 07:58:37.332848 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-652fdec76f39b6bc8254aaca9a8609fdd092de7ae3ac4a9854c658abe184ab98-rootfs.mount: Deactivated successfully. Jul 2 07:58:37.338502 env[1407]: time="2024-07-02T07:58:37.338472168Z" level=warning msg="cleanup warnings time=\"2024-07-02T07:58:37Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4102 runtime=io.containerd.runc.v2\n" Jul 2 07:58:37.346403 env[1407]: time="2024-07-02T07:58:37.346370028Z" level=info msg="StopContainer for \"a8517e8faf8cdea143b217dc4b3a8530094504d738839209166de0a6343ccef2\" returns successfully" Jul 2 07:58:37.350449 env[1407]: time="2024-07-02T07:58:37.347161434Z" level=info msg="StopPodSandbox for \"a25643010d576c833434f4f56019fc563a38ab594356424ec09a08427da695ba\"" Jul 2 07:58:37.350449 env[1407]: time="2024-07-02T07:58:37.347237935Z" level=info msg="Container to stop \"a8517e8faf8cdea143b217dc4b3a8530094504d738839209166de0a6343ccef2\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 2 07:58:37.349309 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a25643010d576c833434f4f56019fc563a38ab594356424ec09a08427da695ba-shm.mount: Deactivated successfully. Jul 2 07:58:37.357484 systemd[1]: cri-containerd-a25643010d576c833434f4f56019fc563a38ab594356424ec09a08427da695ba.scope: Deactivated successfully. 
Jul 2 07:58:37.360728 env[1407]: time="2024-07-02T07:58:37.360682938Z" level=info msg="shim disconnected" id=652fdec76f39b6bc8254aaca9a8609fdd092de7ae3ac4a9854c658abe184ab98 Jul 2 07:58:37.360953 env[1407]: time="2024-07-02T07:58:37.360928540Z" level=warning msg="cleaning up after shim disconnected" id=652fdec76f39b6bc8254aaca9a8609fdd092de7ae3ac4a9854c658abe184ab98 namespace=k8s.io Jul 2 07:58:37.361050 env[1407]: time="2024-07-02T07:58:37.361034640Z" level=info msg="cleaning up dead shim" Jul 2 07:58:37.377772 env[1407]: time="2024-07-02T07:58:37.377717968Z" level=warning msg="cleanup warnings time=\"2024-07-02T07:58:37Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4129 runtime=io.containerd.runc.v2\n" Jul 2 07:58:37.383470 env[1407]: time="2024-07-02T07:58:37.383424712Z" level=info msg="StopContainer for \"652fdec76f39b6bc8254aaca9a8609fdd092de7ae3ac4a9854c658abe184ab98\" returns successfully" Jul 2 07:58:37.384115 env[1407]: time="2024-07-02T07:58:37.384081917Z" level=info msg="StopPodSandbox for \"ef3c9cf22d1b2bfe9313e9e1c8f106a3fc238ee5af60773bcb21260f391bc939\"" Jul 2 07:58:37.384249 env[1407]: time="2024-07-02T07:58:37.384159717Z" level=info msg="Container to stop \"a3e018b2820f7fcaf8e13d9917a3e6dfa6a3da9a86b997d4baf85bcf00f68945\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 2 07:58:37.384249 env[1407]: time="2024-07-02T07:58:37.384196618Z" level=info msg="Container to stop \"897c49934e1c98c6cb09ce6c8f0271f1da0794975c18cc5abe63ffaf8323a3d6\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 2 07:58:37.384249 env[1407]: time="2024-07-02T07:58:37.384212618Z" level=info msg="Container to stop \"2d8f3a32cec34c119b5c5a7653510b5abed79834183c94b1acb71f41fcdf8527\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 2 07:58:37.384249 env[1407]: time="2024-07-02T07:58:37.384226918Z" level=info msg="Container to stop 
\"652fdec76f39b6bc8254aaca9a8609fdd092de7ae3ac4a9854c658abe184ab98\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 2 07:58:37.384493 env[1407]: time="2024-07-02T07:58:37.384258618Z" level=info msg="Container to stop \"f97b3a562e3349d7ef3c2a269a55f16f1cd33d8db53fc17ce04c5a70e8e59419\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 2 07:58:37.392000 systemd[1]: cri-containerd-ef3c9cf22d1b2bfe9313e9e1c8f106a3fc238ee5af60773bcb21260f391bc939.scope: Deactivated successfully. Jul 2 07:58:37.408661 env[1407]: time="2024-07-02T07:58:37.408538104Z" level=info msg="shim disconnected" id=a25643010d576c833434f4f56019fc563a38ab594356424ec09a08427da695ba Jul 2 07:58:37.408902 env[1407]: time="2024-07-02T07:58:37.408878606Z" level=warning msg="cleaning up after shim disconnected" id=a25643010d576c833434f4f56019fc563a38ab594356424ec09a08427da695ba namespace=k8s.io Jul 2 07:58:37.408991 env[1407]: time="2024-07-02T07:58:37.408973607Z" level=info msg="cleaning up dead shim" Jul 2 07:58:37.418378 env[1407]: time="2024-07-02T07:58:37.418343679Z" level=warning msg="cleanup warnings time=\"2024-07-02T07:58:37Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4173 runtime=io.containerd.runc.v2\n" Jul 2 07:58:37.418982 env[1407]: time="2024-07-02T07:58:37.418672781Z" level=info msg="TearDown network for sandbox \"a25643010d576c833434f4f56019fc563a38ab594356424ec09a08427da695ba\" successfully" Jul 2 07:58:37.418982 env[1407]: time="2024-07-02T07:58:37.418696982Z" level=info msg="StopPodSandbox for \"a25643010d576c833434f4f56019fc563a38ab594356424ec09a08427da695ba\" returns successfully" Jul 2 07:58:37.420280 env[1407]: time="2024-07-02T07:58:37.420055592Z" level=info msg="shim disconnected" id=ef3c9cf22d1b2bfe9313e9e1c8f106a3fc238ee5af60773bcb21260f391bc939 Jul 2 07:58:37.420280 env[1407]: time="2024-07-02T07:58:37.420101292Z" level=warning msg="cleaning up after shim disconnected" 
id=ef3c9cf22d1b2bfe9313e9e1c8f106a3fc238ee5af60773bcb21260f391bc939 namespace=k8s.io Jul 2 07:58:37.420280 env[1407]: time="2024-07-02T07:58:37.420114392Z" level=info msg="cleaning up dead shim" Jul 2 07:58:37.423744 env[1407]: time="2024-07-02T07:58:37.423658020Z" level=info msg="shim disconnected" id=f97b3a562e3349d7ef3c2a269a55f16f1cd33d8db53fc17ce04c5a70e8e59419 Jul 2 07:58:37.423744 env[1407]: time="2024-07-02T07:58:37.423701020Z" level=warning msg="cleaning up after shim disconnected" id=f97b3a562e3349d7ef3c2a269a55f16f1cd33d8db53fc17ce04c5a70e8e59419 namespace=k8s.io Jul 2 07:58:37.423744 env[1407]: time="2024-07-02T07:58:37.423711520Z" level=info msg="cleaning up dead shim" Jul 2 07:58:37.440165 env[1407]: time="2024-07-02T07:58:37.440126246Z" level=warning msg="cleanup warnings time=\"2024-07-02T07:58:37Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4185 runtime=io.containerd.runc.v2\n" Jul 2 07:58:37.440504 env[1407]: time="2024-07-02T07:58:37.440468948Z" level=info msg="TearDown network for sandbox \"ef3c9cf22d1b2bfe9313e9e1c8f106a3fc238ee5af60773bcb21260f391bc939\" successfully" Jul 2 07:58:37.440616 env[1407]: time="2024-07-02T07:58:37.440505249Z" level=info msg="StopPodSandbox for \"ef3c9cf22d1b2bfe9313e9e1c8f106a3fc238ee5af60773bcb21260f391bc939\" returns successfully" Jul 2 07:58:37.443916 env[1407]: time="2024-07-02T07:58:37.443887574Z" level=warning msg="cleanup warnings time=\"2024-07-02T07:58:37Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4191 runtime=io.containerd.runc.v2\n" Jul 2 07:58:37.478648 kubelet[2483]: I0702 07:58:37.478324 2483 scope.go:117] "RemoveContainer" containerID="652fdec76f39b6bc8254aaca9a8609fdd092de7ae3ac4a9854c658abe184ab98" Jul 2 07:58:37.480701 env[1407]: time="2024-07-02T07:58:37.480658556Z" level=info msg="RemoveContainer for \"652fdec76f39b6bc8254aaca9a8609fdd092de7ae3ac4a9854c658abe184ab98\"" Jul 2 07:58:37.487167 env[1407]: time="2024-07-02T07:58:37.487128305Z" level=info 
msg="RemoveContainer for \"652fdec76f39b6bc8254aaca9a8609fdd092de7ae3ac4a9854c658abe184ab98\" returns successfully" Jul 2 07:58:37.487385 kubelet[2483]: I0702 07:58:37.487362 2483 scope.go:117] "RemoveContainer" containerID="2d8f3a32cec34c119b5c5a7653510b5abed79834183c94b1acb71f41fcdf8527" Jul 2 07:58:37.488535 env[1407]: time="2024-07-02T07:58:37.488508416Z" level=info msg="RemoveContainer for \"2d8f3a32cec34c119b5c5a7653510b5abed79834183c94b1acb71f41fcdf8527\"" Jul 2 07:58:37.496480 env[1407]: time="2024-07-02T07:58:37.496442777Z" level=info msg="RemoveContainer for \"2d8f3a32cec34c119b5c5a7653510b5abed79834183c94b1acb71f41fcdf8527\" returns successfully" Jul 2 07:58:37.496686 kubelet[2483]: I0702 07:58:37.496658 2483 scope.go:117] "RemoveContainer" containerID="a3e018b2820f7fcaf8e13d9917a3e6dfa6a3da9a86b997d4baf85bcf00f68945" Jul 2 07:58:37.497834 env[1407]: time="2024-07-02T07:58:37.497805287Z" level=info msg="RemoveContainer for \"a3e018b2820f7fcaf8e13d9917a3e6dfa6a3da9a86b997d4baf85bcf00f68945\"" Jul 2 07:58:37.506327 env[1407]: time="2024-07-02T07:58:37.506296152Z" level=info msg="RemoveContainer for \"a3e018b2820f7fcaf8e13d9917a3e6dfa6a3da9a86b997d4baf85bcf00f68945\" returns successfully" Jul 2 07:58:37.506453 kubelet[2483]: I0702 07:58:37.506435 2483 scope.go:117] "RemoveContainer" containerID="897c49934e1c98c6cb09ce6c8f0271f1da0794975c18cc5abe63ffaf8323a3d6" Jul 2 07:58:37.507373 env[1407]: time="2024-07-02T07:58:37.507338460Z" level=info msg="RemoveContainer for \"897c49934e1c98c6cb09ce6c8f0271f1da0794975c18cc5abe63ffaf8323a3d6\"" Jul 2 07:58:37.514838 env[1407]: time="2024-07-02T07:58:37.514806017Z" level=info msg="RemoveContainer for \"897c49934e1c98c6cb09ce6c8f0271f1da0794975c18cc5abe63ffaf8323a3d6\" returns successfully" Jul 2 07:58:37.515014 kubelet[2483]: I0702 07:58:37.514995 2483 scope.go:117] "RemoveContainer" containerID="f97b3a562e3349d7ef3c2a269a55f16f1cd33d8db53fc17ce04c5a70e8e59419" Jul 2 07:58:37.516099 env[1407]: 
time="2024-07-02T07:58:37.516072127Z" level=info msg="RemoveContainer for \"f97b3a562e3349d7ef3c2a269a55f16f1cd33d8db53fc17ce04c5a70e8e59419\"" Jul 2 07:58:37.522660 env[1407]: time="2024-07-02T07:58:37.522631477Z" level=info msg="RemoveContainer for \"f97b3a562e3349d7ef3c2a269a55f16f1cd33d8db53fc17ce04c5a70e8e59419\" returns successfully" Jul 2 07:58:37.522838 kubelet[2483]: I0702 07:58:37.522820 2483 scope.go:117] "RemoveContainer" containerID="652fdec76f39b6bc8254aaca9a8609fdd092de7ae3ac4a9854c658abe184ab98" Jul 2 07:58:37.523140 env[1407]: time="2024-07-02T07:58:37.523074380Z" level=error msg="ContainerStatus for \"652fdec76f39b6bc8254aaca9a8609fdd092de7ae3ac4a9854c658abe184ab98\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"652fdec76f39b6bc8254aaca9a8609fdd092de7ae3ac4a9854c658abe184ab98\": not found" Jul 2 07:58:37.523310 kubelet[2483]: E0702 07:58:37.523287 2483 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"652fdec76f39b6bc8254aaca9a8609fdd092de7ae3ac4a9854c658abe184ab98\": not found" containerID="652fdec76f39b6bc8254aaca9a8609fdd092de7ae3ac4a9854c658abe184ab98" Jul 2 07:58:37.523413 kubelet[2483]: I0702 07:58:37.523396 2483 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"652fdec76f39b6bc8254aaca9a8609fdd092de7ae3ac4a9854c658abe184ab98"} err="failed to get container status \"652fdec76f39b6bc8254aaca9a8609fdd092de7ae3ac4a9854c658abe184ab98\": rpc error: code = NotFound desc = an error occurred when try to find container \"652fdec76f39b6bc8254aaca9a8609fdd092de7ae3ac4a9854c658abe184ab98\": not found" Jul 2 07:58:37.523489 kubelet[2483]: I0702 07:58:37.523417 2483 scope.go:117] "RemoveContainer" containerID="2d8f3a32cec34c119b5c5a7653510b5abed79834183c94b1acb71f41fcdf8527" Jul 2 07:58:37.523656 env[1407]: time="2024-07-02T07:58:37.523611684Z" 
level=error msg="ContainerStatus for \"2d8f3a32cec34c119b5c5a7653510b5abed79834183c94b1acb71f41fcdf8527\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2d8f3a32cec34c119b5c5a7653510b5abed79834183c94b1acb71f41fcdf8527\": not found" Jul 2 07:58:37.523790 kubelet[2483]: E0702 07:58:37.523771 2483 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2d8f3a32cec34c119b5c5a7653510b5abed79834183c94b1acb71f41fcdf8527\": not found" containerID="2d8f3a32cec34c119b5c5a7653510b5abed79834183c94b1acb71f41fcdf8527" Jul 2 07:58:37.523870 kubelet[2483]: I0702 07:58:37.523808 2483 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2d8f3a32cec34c119b5c5a7653510b5abed79834183c94b1acb71f41fcdf8527"} err="failed to get container status \"2d8f3a32cec34c119b5c5a7653510b5abed79834183c94b1acb71f41fcdf8527\": rpc error: code = NotFound desc = an error occurred when try to find container \"2d8f3a32cec34c119b5c5a7653510b5abed79834183c94b1acb71f41fcdf8527\": not found" Jul 2 07:58:37.523870 kubelet[2483]: I0702 07:58:37.523827 2483 scope.go:117] "RemoveContainer" containerID="a3e018b2820f7fcaf8e13d9917a3e6dfa6a3da9a86b997d4baf85bcf00f68945" Jul 2 07:58:37.524042 env[1407]: time="2024-07-02T07:58:37.523996787Z" level=error msg="ContainerStatus for \"a3e018b2820f7fcaf8e13d9917a3e6dfa6a3da9a86b997d4baf85bcf00f68945\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a3e018b2820f7fcaf8e13d9917a3e6dfa6a3da9a86b997d4baf85bcf00f68945\": not found" Jul 2 07:58:37.524191 kubelet[2483]: E0702 07:58:37.524173 2483 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a3e018b2820f7fcaf8e13d9917a3e6dfa6a3da9a86b997d4baf85bcf00f68945\": not found" 
containerID="a3e018b2820f7fcaf8e13d9917a3e6dfa6a3da9a86b997d4baf85bcf00f68945" Jul 2 07:58:37.524275 kubelet[2483]: I0702 07:58:37.524202 2483 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a3e018b2820f7fcaf8e13d9917a3e6dfa6a3da9a86b997d4baf85bcf00f68945"} err="failed to get container status \"a3e018b2820f7fcaf8e13d9917a3e6dfa6a3da9a86b997d4baf85bcf00f68945\": rpc error: code = NotFound desc = an error occurred when try to find container \"a3e018b2820f7fcaf8e13d9917a3e6dfa6a3da9a86b997d4baf85bcf00f68945\": not found" Jul 2 07:58:37.524275 kubelet[2483]: I0702 07:58:37.524216 2483 scope.go:117] "RemoveContainer" containerID="897c49934e1c98c6cb09ce6c8f0271f1da0794975c18cc5abe63ffaf8323a3d6" Jul 2 07:58:37.524456 env[1407]: time="2024-07-02T07:58:37.524398690Z" level=error msg="ContainerStatus for \"897c49934e1c98c6cb09ce6c8f0271f1da0794975c18cc5abe63ffaf8323a3d6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"897c49934e1c98c6cb09ce6c8f0271f1da0794975c18cc5abe63ffaf8323a3d6\": not found" Jul 2 07:58:37.524657 kubelet[2483]: E0702 07:58:37.524639 2483 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"897c49934e1c98c6cb09ce6c8f0271f1da0794975c18cc5abe63ffaf8323a3d6\": not found" containerID="897c49934e1c98c6cb09ce6c8f0271f1da0794975c18cc5abe63ffaf8323a3d6" Jul 2 07:58:37.524748 kubelet[2483]: I0702 07:58:37.524683 2483 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"897c49934e1c98c6cb09ce6c8f0271f1da0794975c18cc5abe63ffaf8323a3d6"} err="failed to get container status \"897c49934e1c98c6cb09ce6c8f0271f1da0794975c18cc5abe63ffaf8323a3d6\": rpc error: code = NotFound desc = an error occurred when try to find container \"897c49934e1c98c6cb09ce6c8f0271f1da0794975c18cc5abe63ffaf8323a3d6\": not found" Jul 2 
07:58:37.524748 kubelet[2483]: I0702 07:58:37.524698 2483 scope.go:117] "RemoveContainer" containerID="f97b3a562e3349d7ef3c2a269a55f16f1cd33d8db53fc17ce04c5a70e8e59419" Jul 2 07:58:37.524942 env[1407]: time="2024-07-02T07:58:37.524896594Z" level=error msg="ContainerStatus for \"f97b3a562e3349d7ef3c2a269a55f16f1cd33d8db53fc17ce04c5a70e8e59419\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f97b3a562e3349d7ef3c2a269a55f16f1cd33d8db53fc17ce04c5a70e8e59419\": not found" Jul 2 07:58:37.525083 kubelet[2483]: E0702 07:58:37.525053 2483 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f97b3a562e3349d7ef3c2a269a55f16f1cd33d8db53fc17ce04c5a70e8e59419\": not found" containerID="f97b3a562e3349d7ef3c2a269a55f16f1cd33d8db53fc17ce04c5a70e8e59419" Jul 2 07:58:37.525160 kubelet[2483]: I0702 07:58:37.525093 2483 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f97b3a562e3349d7ef3c2a269a55f16f1cd33d8db53fc17ce04c5a70e8e59419"} err="failed to get container status \"f97b3a562e3349d7ef3c2a269a55f16f1cd33d8db53fc17ce04c5a70e8e59419\": rpc error: code = NotFound desc = an error occurred when try to find container \"f97b3a562e3349d7ef3c2a269a55f16f1cd33d8db53fc17ce04c5a70e8e59419\": not found" Jul 2 07:58:37.525160 kubelet[2483]: I0702 07:58:37.525105 2483 scope.go:117] "RemoveContainer" containerID="a8517e8faf8cdea143b217dc4b3a8530094504d738839209166de0a6343ccef2" Jul 2 07:58:37.526196 env[1407]: time="2024-07-02T07:58:37.526167004Z" level=info msg="RemoveContainer for \"a8517e8faf8cdea143b217dc4b3a8530094504d738839209166de0a6343ccef2\"" Jul 2 07:58:37.535582 env[1407]: time="2024-07-02T07:58:37.535533576Z" level=info msg="RemoveContainer for \"a8517e8faf8cdea143b217dc4b3a8530094504d738839209166de0a6343ccef2\" returns successfully" Jul 2 07:58:37.535794 kubelet[2483]: I0702 
07:58:37.535774 2483 scope.go:117] "RemoveContainer" containerID="a8517e8faf8cdea143b217dc4b3a8530094504d738839209166de0a6343ccef2" Jul 2 07:58:37.536060 env[1407]: time="2024-07-02T07:58:37.535991579Z" level=error msg="ContainerStatus for \"a8517e8faf8cdea143b217dc4b3a8530094504d738839209166de0a6343ccef2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a8517e8faf8cdea143b217dc4b3a8530094504d738839209166de0a6343ccef2\": not found" Jul 2 07:58:37.536227 kubelet[2483]: E0702 07:58:37.536208 2483 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a8517e8faf8cdea143b217dc4b3a8530094504d738839209166de0a6343ccef2\": not found" containerID="a8517e8faf8cdea143b217dc4b3a8530094504d738839209166de0a6343ccef2" Jul 2 07:58:37.536299 kubelet[2483]: I0702 07:58:37.536241 2483 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a8517e8faf8cdea143b217dc4b3a8530094504d738839209166de0a6343ccef2"} err="failed to get container status \"a8517e8faf8cdea143b217dc4b3a8530094504d738839209166de0a6343ccef2\": rpc error: code = NotFound desc = an error occurred when try to find container \"a8517e8faf8cdea143b217dc4b3a8530094504d738839209166de0a6343ccef2\": not found" Jul 2 07:58:37.576567 kubelet[2483]: I0702 07:58:37.576518 2483 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b00c368e-cad5-4e58-9d95-3e2ae0bef877-hubble-tls\") pod \"b00c368e-cad5-4e58-9d95-3e2ae0bef877\" (UID: \"b00c368e-cad5-4e58-9d95-3e2ae0bef877\") " Jul 2 07:58:37.576751 kubelet[2483]: I0702 07:58:37.576595 2483 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b00c368e-cad5-4e58-9d95-3e2ae0bef877-clustermesh-secrets\") pod 
\"b00c368e-cad5-4e58-9d95-3e2ae0bef877\" (UID: \"b00c368e-cad5-4e58-9d95-3e2ae0bef877\") " Jul 2 07:58:37.576751 kubelet[2483]: I0702 07:58:37.576622 2483 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b00c368e-cad5-4e58-9d95-3e2ae0bef877-etc-cni-netd\") pod \"b00c368e-cad5-4e58-9d95-3e2ae0bef877\" (UID: \"b00c368e-cad5-4e58-9d95-3e2ae0bef877\") " Jul 2 07:58:37.576751 kubelet[2483]: I0702 07:58:37.576661 2483 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b00c368e-cad5-4e58-9d95-3e2ae0bef877-cilium-run\") pod \"b00c368e-cad5-4e58-9d95-3e2ae0bef877\" (UID: \"b00c368e-cad5-4e58-9d95-3e2ae0bef877\") " Jul 2 07:58:37.576751 kubelet[2483]: I0702 07:58:37.576690 2483 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/392973ba-576b-44e9-bf74-03e9e01db007-cilium-config-path\") pod \"392973ba-576b-44e9-bf74-03e9e01db007\" (UID: \"392973ba-576b-44e9-bf74-03e9e01db007\") " Jul 2 07:58:37.576751 kubelet[2483]: I0702 07:58:37.576731 2483 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vfmlh\" (UniqueName: \"kubernetes.io/projected/b00c368e-cad5-4e58-9d95-3e2ae0bef877-kube-api-access-vfmlh\") pod \"b00c368e-cad5-4e58-9d95-3e2ae0bef877\" (UID: \"b00c368e-cad5-4e58-9d95-3e2ae0bef877\") " Jul 2 07:58:37.576984 kubelet[2483]: I0702 07:58:37.576759 2483 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b00c368e-cad5-4e58-9d95-3e2ae0bef877-host-proc-sys-kernel\") pod \"b00c368e-cad5-4e58-9d95-3e2ae0bef877\" (UID: \"b00c368e-cad5-4e58-9d95-3e2ae0bef877\") " Jul 2 07:58:37.576984 kubelet[2483]: I0702 07:58:37.576784 2483 reconciler_common.go:172] "operationExecutor.UnmountVolume started for 
volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b00c368e-cad5-4e58-9d95-3e2ae0bef877-cni-path\") pod \"b00c368e-cad5-4e58-9d95-3e2ae0bef877\" (UID: \"b00c368e-cad5-4e58-9d95-3e2ae0bef877\") " Jul 2 07:58:37.576984 kubelet[2483]: I0702 07:58:37.576882 2483 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b00c368e-cad5-4e58-9d95-3e2ae0bef877-xtables-lock\") pod \"b00c368e-cad5-4e58-9d95-3e2ae0bef877\" (UID: \"b00c368e-cad5-4e58-9d95-3e2ae0bef877\") " Jul 2 07:58:37.576984 kubelet[2483]: I0702 07:58:37.576933 2483 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c9522\" (UniqueName: \"kubernetes.io/projected/392973ba-576b-44e9-bf74-03e9e01db007-kube-api-access-c9522\") pod \"392973ba-576b-44e9-bf74-03e9e01db007\" (UID: \"392973ba-576b-44e9-bf74-03e9e01db007\") " Jul 2 07:58:37.576984 kubelet[2483]: I0702 07:58:37.576962 2483 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b00c368e-cad5-4e58-9d95-3e2ae0bef877-hostproc\") pod \"b00c368e-cad5-4e58-9d95-3e2ae0bef877\" (UID: \"b00c368e-cad5-4e58-9d95-3e2ae0bef877\") " Jul 2 07:58:37.579852 kubelet[2483]: I0702 07:58:37.576990 2483 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b00c368e-cad5-4e58-9d95-3e2ae0bef877-bpf-maps\") pod \"b00c368e-cad5-4e58-9d95-3e2ae0bef877\" (UID: \"b00c368e-cad5-4e58-9d95-3e2ae0bef877\") " Jul 2 07:58:37.579852 kubelet[2483]: I0702 07:58:37.577031 2483 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b00c368e-cad5-4e58-9d95-3e2ae0bef877-host-proc-sys-net\") pod \"b00c368e-cad5-4e58-9d95-3e2ae0bef877\" (UID: \"b00c368e-cad5-4e58-9d95-3e2ae0bef877\") " Jul 2 07:58:37.579852 kubelet[2483]: I0702 
07:58:37.577061 2483 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b00c368e-cad5-4e58-9d95-3e2ae0bef877-cilium-config-path\") pod \"b00c368e-cad5-4e58-9d95-3e2ae0bef877\" (UID: \"b00c368e-cad5-4e58-9d95-3e2ae0bef877\") " Jul 2 07:58:37.579852 kubelet[2483]: I0702 07:58:37.577102 2483 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b00c368e-cad5-4e58-9d95-3e2ae0bef877-lib-modules\") pod \"b00c368e-cad5-4e58-9d95-3e2ae0bef877\" (UID: \"b00c368e-cad5-4e58-9d95-3e2ae0bef877\") " Jul 2 07:58:37.579852 kubelet[2483]: I0702 07:58:37.577130 2483 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b00c368e-cad5-4e58-9d95-3e2ae0bef877-cilium-cgroup\") pod \"b00c368e-cad5-4e58-9d95-3e2ae0bef877\" (UID: \"b00c368e-cad5-4e58-9d95-3e2ae0bef877\") " Jul 2 07:58:37.579852 kubelet[2483]: I0702 07:58:37.577209 2483 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b00c368e-cad5-4e58-9d95-3e2ae0bef877-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "b00c368e-cad5-4e58-9d95-3e2ae0bef877" (UID: "b00c368e-cad5-4e58-9d95-3e2ae0bef877"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 07:58:37.580104 kubelet[2483]: I0702 07:58:37.577268 2483 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b00c368e-cad5-4e58-9d95-3e2ae0bef877-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "b00c368e-cad5-4e58-9d95-3e2ae0bef877" (UID: "b00c368e-cad5-4e58-9d95-3e2ae0bef877"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 07:58:37.580104 kubelet[2483]: I0702 07:58:37.577293 2483 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b00c368e-cad5-4e58-9d95-3e2ae0bef877-cni-path" (OuterVolumeSpecName: "cni-path") pod "b00c368e-cad5-4e58-9d95-3e2ae0bef877" (UID: "b00c368e-cad5-4e58-9d95-3e2ae0bef877"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 07:58:37.580104 kubelet[2483]: I0702 07:58:37.577313 2483 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b00c368e-cad5-4e58-9d95-3e2ae0bef877-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "b00c368e-cad5-4e58-9d95-3e2ae0bef877" (UID: "b00c368e-cad5-4e58-9d95-3e2ae0bef877"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 07:58:37.580104 kubelet[2483]: I0702 07:58:37.578056 2483 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b00c368e-cad5-4e58-9d95-3e2ae0bef877-hostproc" (OuterVolumeSpecName: "hostproc") pod "b00c368e-cad5-4e58-9d95-3e2ae0bef877" (UID: "b00c368e-cad5-4e58-9d95-3e2ae0bef877"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 07:58:37.580104 kubelet[2483]: I0702 07:58:37.578107 2483 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b00c368e-cad5-4e58-9d95-3e2ae0bef877-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "b00c368e-cad5-4e58-9d95-3e2ae0bef877" (UID: "b00c368e-cad5-4e58-9d95-3e2ae0bef877"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 07:58:37.580324 kubelet[2483]: I0702 07:58:37.578129 2483 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b00c368e-cad5-4e58-9d95-3e2ae0bef877-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "b00c368e-cad5-4e58-9d95-3e2ae0bef877" (UID: "b00c368e-cad5-4e58-9d95-3e2ae0bef877"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 07:58:37.581108 kubelet[2483]: I0702 07:58:37.581067 2483 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b00c368e-cad5-4e58-9d95-3e2ae0bef877-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "b00c368e-cad5-4e58-9d95-3e2ae0bef877" (UID: "b00c368e-cad5-4e58-9d95-3e2ae0bef877"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 2 07:58:37.581445 kubelet[2483]: I0702 07:58:37.581411 2483 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b00c368e-cad5-4e58-9d95-3e2ae0bef877-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "b00c368e-cad5-4e58-9d95-3e2ae0bef877" (UID: "b00c368e-cad5-4e58-9d95-3e2ae0bef877"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 07:58:37.581606 kubelet[2483]: I0702 07:58:37.581587 2483 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b00c368e-cad5-4e58-9d95-3e2ae0bef877-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "b00c368e-cad5-4e58-9d95-3e2ae0bef877" (UID: "b00c368e-cad5-4e58-9d95-3e2ae0bef877"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 07:58:37.582041 kubelet[2483]: I0702 07:58:37.582015 2483 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b00c368e-cad5-4e58-9d95-3e2ae0bef877-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "b00c368e-cad5-4e58-9d95-3e2ae0bef877" (UID: "b00c368e-cad5-4e58-9d95-3e2ae0bef877"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 07:58:37.583734 kubelet[2483]: I0702 07:58:37.583693 2483 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b00c368e-cad5-4e58-9d95-3e2ae0bef877-kube-api-access-vfmlh" (OuterVolumeSpecName: "kube-api-access-vfmlh") pod "b00c368e-cad5-4e58-9d95-3e2ae0bef877" (UID: "b00c368e-cad5-4e58-9d95-3e2ae0bef877"). InnerVolumeSpecName "kube-api-access-vfmlh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 2 07:58:37.585268 kubelet[2483]: I0702 07:58:37.585223 2483 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b00c368e-cad5-4e58-9d95-3e2ae0bef877-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "b00c368e-cad5-4e58-9d95-3e2ae0bef877" (UID: "b00c368e-cad5-4e58-9d95-3e2ae0bef877"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 2 07:58:37.585625 kubelet[2483]: I0702 07:58:37.585596 2483 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/392973ba-576b-44e9-bf74-03e9e01db007-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "392973ba-576b-44e9-bf74-03e9e01db007" (UID: "392973ba-576b-44e9-bf74-03e9e01db007"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 2 07:58:37.588645 kubelet[2483]: I0702 07:58:37.588616 2483 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b00c368e-cad5-4e58-9d95-3e2ae0bef877-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "b00c368e-cad5-4e58-9d95-3e2ae0bef877" (UID: "b00c368e-cad5-4e58-9d95-3e2ae0bef877"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 2 07:58:37.588735 kubelet[2483]: I0702 07:58:37.588615 2483 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/392973ba-576b-44e9-bf74-03e9e01db007-kube-api-access-c9522" (OuterVolumeSpecName: "kube-api-access-c9522") pod "392973ba-576b-44e9-bf74-03e9e01db007" (UID: "392973ba-576b-44e9-bf74-03e9e01db007"). InnerVolumeSpecName "kube-api-access-c9522". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 2 07:58:37.678160 kubelet[2483]: I0702 07:58:37.678120 2483 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b00c368e-cad5-4e58-9d95-3e2ae0bef877-cni-path\") on node \"ci-3510.3.5-a-61dd50c322\" DevicePath \"\"" Jul 2 07:58:37.678160 kubelet[2483]: I0702 07:58:37.678156 2483 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b00c368e-cad5-4e58-9d95-3e2ae0bef877-xtables-lock\") on node \"ci-3510.3.5-a-61dd50c322\" DevicePath \"\"" Jul 2 07:58:37.678160 kubelet[2483]: I0702 07:58:37.678172 2483 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-c9522\" (UniqueName: \"kubernetes.io/projected/392973ba-576b-44e9-bf74-03e9e01db007-kube-api-access-c9522\") on node \"ci-3510.3.5-a-61dd50c322\" DevicePath \"\"" Jul 2 07:58:37.678449 kubelet[2483]: I0702 07:58:37.678187 2483 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: 
\"kubernetes.io/host-path/b00c368e-cad5-4e58-9d95-3e2ae0bef877-hostproc\") on node \"ci-3510.3.5-a-61dd50c322\" DevicePath \"\"" Jul 2 07:58:37.678449 kubelet[2483]: I0702 07:58:37.678201 2483 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b00c368e-cad5-4e58-9d95-3e2ae0bef877-bpf-maps\") on node \"ci-3510.3.5-a-61dd50c322\" DevicePath \"\"" Jul 2 07:58:37.678449 kubelet[2483]: I0702 07:58:37.678213 2483 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b00c368e-cad5-4e58-9d95-3e2ae0bef877-cilium-cgroup\") on node \"ci-3510.3.5-a-61dd50c322\" DevicePath \"\"" Jul 2 07:58:37.678449 kubelet[2483]: I0702 07:58:37.678226 2483 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b00c368e-cad5-4e58-9d95-3e2ae0bef877-host-proc-sys-net\") on node \"ci-3510.3.5-a-61dd50c322\" DevicePath \"\"" Jul 2 07:58:37.678449 kubelet[2483]: I0702 07:58:37.678239 2483 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b00c368e-cad5-4e58-9d95-3e2ae0bef877-cilium-config-path\") on node \"ci-3510.3.5-a-61dd50c322\" DevicePath \"\"" Jul 2 07:58:37.678449 kubelet[2483]: I0702 07:58:37.678251 2483 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b00c368e-cad5-4e58-9d95-3e2ae0bef877-lib-modules\") on node \"ci-3510.3.5-a-61dd50c322\" DevicePath \"\"" Jul 2 07:58:37.678449 kubelet[2483]: I0702 07:58:37.678266 2483 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b00c368e-cad5-4e58-9d95-3e2ae0bef877-cilium-run\") on node \"ci-3510.3.5-a-61dd50c322\" DevicePath \"\"" Jul 2 07:58:37.678449 kubelet[2483]: I0702 07:58:37.678277 2483 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: 
\"kubernetes.io/projected/b00c368e-cad5-4e58-9d95-3e2ae0bef877-hubble-tls\") on node \"ci-3510.3.5-a-61dd50c322\" DevicePath \"\"" Jul 2 07:58:37.678678 kubelet[2483]: I0702 07:58:37.678301 2483 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b00c368e-cad5-4e58-9d95-3e2ae0bef877-clustermesh-secrets\") on node \"ci-3510.3.5-a-61dd50c322\" DevicePath \"\"" Jul 2 07:58:37.678678 kubelet[2483]: I0702 07:58:37.678315 2483 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b00c368e-cad5-4e58-9d95-3e2ae0bef877-etc-cni-netd\") on node \"ci-3510.3.5-a-61dd50c322\" DevicePath \"\"" Jul 2 07:58:37.678678 kubelet[2483]: I0702 07:58:37.678330 2483 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b00c368e-cad5-4e58-9d95-3e2ae0bef877-host-proc-sys-kernel\") on node \"ci-3510.3.5-a-61dd50c322\" DevicePath \"\"" Jul 2 07:58:37.678678 kubelet[2483]: I0702 07:58:37.678345 2483 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/392973ba-576b-44e9-bf74-03e9e01db007-cilium-config-path\") on node \"ci-3510.3.5-a-61dd50c322\" DevicePath \"\"" Jul 2 07:58:37.678678 kubelet[2483]: I0702 07:58:37.678359 2483 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-vfmlh\" (UniqueName: \"kubernetes.io/projected/b00c368e-cad5-4e58-9d95-3e2ae0bef877-kube-api-access-vfmlh\") on node \"ci-3510.3.5-a-61dd50c322\" DevicePath \"\"" Jul 2 07:58:37.782203 systemd[1]: Removed slice kubepods-burstable-podb00c368e_cad5_4e58_9d95_3e2ae0bef877.slice. Jul 2 07:58:37.782338 systemd[1]: kubepods-burstable-podb00c368e_cad5_4e58_9d95_3e2ae0bef877.slice: Consumed 7.327s CPU time. Jul 2 07:58:37.789502 systemd[1]: Removed slice kubepods-besteffort-pod392973ba_576b_44e9_bf74_03e9e01db007.slice. 
Jul 2 07:58:38.016460 kubelet[2483]: I0702 07:58:38.016414 2483 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="392973ba-576b-44e9-bf74-03e9e01db007" path="/var/lib/kubelet/pods/392973ba-576b-44e9-bf74-03e9e01db007/volumes" Jul 2 07:58:38.017080 kubelet[2483]: I0702 07:58:38.017053 2483 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="b00c368e-cad5-4e58-9d95-3e2ae0bef877" path="/var/lib/kubelet/pods/b00c368e-cad5-4e58-9d95-3e2ae0bef877/volumes" Jul 2 07:58:38.228218 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ef3c9cf22d1b2bfe9313e9e1c8f106a3fc238ee5af60773bcb21260f391bc939-rootfs.mount: Deactivated successfully. Jul 2 07:58:38.228367 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ef3c9cf22d1b2bfe9313e9e1c8f106a3fc238ee5af60773bcb21260f391bc939-shm.mount: Deactivated successfully. Jul 2 07:58:38.228467 systemd[1]: var-lib-kubelet-pods-b00c368e\x2dcad5\x2d4e58\x2d9d95\x2d3e2ae0bef877-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dvfmlh.mount: Deactivated successfully. Jul 2 07:58:38.228579 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a25643010d576c833434f4f56019fc563a38ab594356424ec09a08427da695ba-rootfs.mount: Deactivated successfully. Jul 2 07:58:38.228693 systemd[1]: var-lib-kubelet-pods-392973ba\x2d576b\x2d44e9\x2dbf74\x2d03e9e01db007-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dc9522.mount: Deactivated successfully. Jul 2 07:58:38.228794 systemd[1]: var-lib-kubelet-pods-b00c368e\x2dcad5\x2d4e58\x2d9d95\x2d3e2ae0bef877-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jul 2 07:58:38.228893 systemd[1]: var-lib-kubelet-pods-b00c368e\x2dcad5\x2d4e58\x2d9d95\x2d3e2ae0bef877-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Jul 2 07:58:39.274191 sshd[4041]: pam_unix(sshd:session): session closed for user core Jul 2 07:58:39.278109 systemd[1]: sshd@23-10.200.8.39:22-10.200.16.10:33600.service: Deactivated successfully. Jul 2 07:58:39.279237 systemd[1]: session-26.scope: Deactivated successfully. Jul 2 07:58:39.279481 systemd[1]: session-26.scope: Consumed 1.255s CPU time. Jul 2 07:58:39.280242 systemd-logind[1395]: Session 26 logged out. Waiting for processes to exit. Jul 2 07:58:39.281356 systemd-logind[1395]: Removed session 26. Jul 2 07:58:39.385763 systemd[1]: Started sshd@24-10.200.8.39:22-10.200.16.10:48518.service. Jul 2 07:58:40.035976 sshd[4217]: Accepted publickey for core from 10.200.16.10 port 48518 ssh2: RSA SHA256:rMFzF1f+VHcPwzXfxcw29Fm3hFOpXl45tnQNe1IK4iE Jul 2 07:58:40.037787 sshd[4217]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:58:40.042604 systemd-logind[1395]: New session 27 of user core. Jul 2 07:58:40.043776 systemd[1]: Started session-27.scope. Jul 2 07:58:41.002760 kubelet[2483]: I0702 07:58:41.002711 2483 topology_manager.go:215] "Topology Admit Handler" podUID="1541cb55-6b28-4405-bc9d-b934ac4b0b57" podNamespace="kube-system" podName="cilium-72rxd" Jul 2 07:58:41.003274 kubelet[2483]: E0702 07:58:41.002802 2483 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b00c368e-cad5-4e58-9d95-3e2ae0bef877" containerName="mount-cgroup" Jul 2 07:58:41.003274 kubelet[2483]: E0702 07:58:41.002826 2483 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b00c368e-cad5-4e58-9d95-3e2ae0bef877" containerName="apply-sysctl-overwrites" Jul 2 07:58:41.003274 kubelet[2483]: E0702 07:58:41.002837 2483 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b00c368e-cad5-4e58-9d95-3e2ae0bef877" containerName="clean-cilium-state" Jul 2 07:58:41.003274 kubelet[2483]: E0702 07:58:41.002845 2483 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b00c368e-cad5-4e58-9d95-3e2ae0bef877" 
containerName="cilium-agent" Jul 2 07:58:41.003274 kubelet[2483]: E0702 07:58:41.002854 2483 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="392973ba-576b-44e9-bf74-03e9e01db007" containerName="cilium-operator" Jul 2 07:58:41.003274 kubelet[2483]: E0702 07:58:41.002865 2483 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b00c368e-cad5-4e58-9d95-3e2ae0bef877" containerName="mount-bpf-fs" Jul 2 07:58:41.003274 kubelet[2483]: I0702 07:58:41.002909 2483 memory_manager.go:346] "RemoveStaleState removing state" podUID="392973ba-576b-44e9-bf74-03e9e01db007" containerName="cilium-operator" Jul 2 07:58:41.003274 kubelet[2483]: I0702 07:58:41.002921 2483 memory_manager.go:346] "RemoveStaleState removing state" podUID="b00c368e-cad5-4e58-9d95-3e2ae0bef877" containerName="cilium-agent" Jul 2 07:58:41.009861 systemd[1]: Created slice kubepods-burstable-pod1541cb55_6b28_4405_bc9d_b934ac4b0b57.slice. Jul 2 07:58:41.097215 kubelet[2483]: I0702 07:58:41.097120 2483 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1541cb55-6b28-4405-bc9d-b934ac4b0b57-clustermesh-secrets\") pod \"cilium-72rxd\" (UID: \"1541cb55-6b28-4405-bc9d-b934ac4b0b57\") " pod="kube-system/cilium-72rxd" Jul 2 07:58:41.097454 kubelet[2483]: I0702 07:58:41.097232 2483 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1541cb55-6b28-4405-bc9d-b934ac4b0b57-cilium-config-path\") pod \"cilium-72rxd\" (UID: \"1541cb55-6b28-4405-bc9d-b934ac4b0b57\") " pod="kube-system/cilium-72rxd" Jul 2 07:58:41.097454 kubelet[2483]: I0702 07:58:41.097259 2483 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqlhd\" (UniqueName: \"kubernetes.io/projected/1541cb55-6b28-4405-bc9d-b934ac4b0b57-kube-api-access-cqlhd\") pod 
\"cilium-72rxd\" (UID: \"1541cb55-6b28-4405-bc9d-b934ac4b0b57\") " pod="kube-system/cilium-72rxd" Jul 2 07:58:41.097454 kubelet[2483]: I0702 07:58:41.097405 2483 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1541cb55-6b28-4405-bc9d-b934ac4b0b57-bpf-maps\") pod \"cilium-72rxd\" (UID: \"1541cb55-6b28-4405-bc9d-b934ac4b0b57\") " pod="kube-system/cilium-72rxd" Jul 2 07:58:41.097454 kubelet[2483]: I0702 07:58:41.097431 2483 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1541cb55-6b28-4405-bc9d-b934ac4b0b57-xtables-lock\") pod \"cilium-72rxd\" (UID: \"1541cb55-6b28-4405-bc9d-b934ac4b0b57\") " pod="kube-system/cilium-72rxd" Jul 2 07:58:41.097690 kubelet[2483]: I0702 07:58:41.097480 2483 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1541cb55-6b28-4405-bc9d-b934ac4b0b57-cilium-cgroup\") pod \"cilium-72rxd\" (UID: \"1541cb55-6b28-4405-bc9d-b934ac4b0b57\") " pod="kube-system/cilium-72rxd" Jul 2 07:58:41.097690 kubelet[2483]: I0702 07:58:41.097508 2483 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1541cb55-6b28-4405-bc9d-b934ac4b0b57-etc-cni-netd\") pod \"cilium-72rxd\" (UID: \"1541cb55-6b28-4405-bc9d-b934ac4b0b57\") " pod="kube-system/cilium-72rxd" Jul 2 07:58:41.097690 kubelet[2483]: I0702 07:58:41.097601 2483 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1541cb55-6b28-4405-bc9d-b934ac4b0b57-host-proc-sys-kernel\") pod \"cilium-72rxd\" (UID: \"1541cb55-6b28-4405-bc9d-b934ac4b0b57\") " pod="kube-system/cilium-72rxd" Jul 2 07:58:41.097690 kubelet[2483]: 
I0702 07:58:41.097664 2483 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1541cb55-6b28-4405-bc9d-b934ac4b0b57-cilium-run\") pod \"cilium-72rxd\" (UID: \"1541cb55-6b28-4405-bc9d-b934ac4b0b57\") " pod="kube-system/cilium-72rxd" Jul 2 07:58:41.097868 kubelet[2483]: I0702 07:58:41.097764 2483 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1541cb55-6b28-4405-bc9d-b934ac4b0b57-hostproc\") pod \"cilium-72rxd\" (UID: \"1541cb55-6b28-4405-bc9d-b934ac4b0b57\") " pod="kube-system/cilium-72rxd" Jul 2 07:58:41.097868 kubelet[2483]: I0702 07:58:41.097800 2483 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1541cb55-6b28-4405-bc9d-b934ac4b0b57-cni-path\") pod \"cilium-72rxd\" (UID: \"1541cb55-6b28-4405-bc9d-b934ac4b0b57\") " pod="kube-system/cilium-72rxd" Jul 2 07:58:41.097868 kubelet[2483]: I0702 07:58:41.097855 2483 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1541cb55-6b28-4405-bc9d-b934ac4b0b57-host-proc-sys-net\") pod \"cilium-72rxd\" (UID: \"1541cb55-6b28-4405-bc9d-b934ac4b0b57\") " pod="kube-system/cilium-72rxd" Jul 2 07:58:41.097995 kubelet[2483]: I0702 07:58:41.097883 2483 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1541cb55-6b28-4405-bc9d-b934ac4b0b57-hubble-tls\") pod \"cilium-72rxd\" (UID: \"1541cb55-6b28-4405-bc9d-b934ac4b0b57\") " pod="kube-system/cilium-72rxd" Jul 2 07:58:41.097995 kubelet[2483]: I0702 07:58:41.097977 2483 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/1541cb55-6b28-4405-bc9d-b934ac4b0b57-lib-modules\") pod \"cilium-72rxd\" (UID: \"1541cb55-6b28-4405-bc9d-b934ac4b0b57\") " pod="kube-system/cilium-72rxd" Jul 2 07:58:41.098085 kubelet[2483]: I0702 07:58:41.098008 2483 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/1541cb55-6b28-4405-bc9d-b934ac4b0b57-cilium-ipsec-secrets\") pod \"cilium-72rxd\" (UID: \"1541cb55-6b28-4405-bc9d-b934ac4b0b57\") " pod="kube-system/cilium-72rxd" Jul 2 07:58:41.126102 sshd[4217]: pam_unix(sshd:session): session closed for user core Jul 2 07:58:41.129931 systemd[1]: sshd@24-10.200.8.39:22-10.200.16.10:48518.service: Deactivated successfully. Jul 2 07:58:41.131124 systemd[1]: session-27.scope: Deactivated successfully. Jul 2 07:58:41.133088 kubelet[2483]: E0702 07:58:41.133061 2483 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 2 07:58:41.133830 systemd-logind[1395]: Session 27 logged out. Waiting for processes to exit. Jul 2 07:58:41.135251 systemd-logind[1395]: Removed session 27. Jul 2 07:58:41.238006 systemd[1]: Started sshd@25-10.200.8.39:22-10.200.16.10:48524.service. Jul 2 07:58:41.317010 env[1407]: time="2024-07-02T07:58:41.316489313Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-72rxd,Uid:1541cb55-6b28-4405-bc9d-b934ac4b0b57,Namespace:kube-system,Attempt:0,}" Jul 2 07:58:41.352395 env[1407]: time="2024-07-02T07:58:41.352320552Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 07:58:41.352395 env[1407]: time="2024-07-02T07:58:41.352356752Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 07:58:41.352694 env[1407]: time="2024-07-02T07:58:41.352648654Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 07:58:41.352964 env[1407]: time="2024-07-02T07:58:41.352919856Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/768165220cf3122cdd34456e80eb1ad0e665d02d79a312c1971253e32e5aa5cb pid=4241 runtime=io.containerd.runc.v2 Jul 2 07:58:41.366649 systemd[1]: Started cri-containerd-768165220cf3122cdd34456e80eb1ad0e665d02d79a312c1971253e32e5aa5cb.scope. Jul 2 07:58:41.392670 env[1407]: time="2024-07-02T07:58:41.392617221Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-72rxd,Uid:1541cb55-6b28-4405-bc9d-b934ac4b0b57,Namespace:kube-system,Attempt:0,} returns sandbox id \"768165220cf3122cdd34456e80eb1ad0e665d02d79a312c1971253e32e5aa5cb\"" Jul 2 07:58:41.397605 env[1407]: time="2024-07-02T07:58:41.397531654Z" level=info msg="CreateContainer within sandbox \"768165220cf3122cdd34456e80eb1ad0e665d02d79a312c1971253e32e5aa5cb\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 2 07:58:41.436975 env[1407]: time="2024-07-02T07:58:41.436909016Z" level=info msg="CreateContainer within sandbox \"768165220cf3122cdd34456e80eb1ad0e665d02d79a312c1971253e32e5aa5cb\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"3b59b547887bbcf2486b66164249ff8a08bd5a20c67cc6c2feb9fc3ad693e85b\"" Jul 2 07:58:41.438936 env[1407]: time="2024-07-02T07:58:41.437799322Z" level=info msg="StartContainer for \"3b59b547887bbcf2486b66164249ff8a08bd5a20c67cc6c2feb9fc3ad693e85b\"" Jul 2 07:58:41.455138 systemd[1]: Started cri-containerd-3b59b547887bbcf2486b66164249ff8a08bd5a20c67cc6c2feb9fc3ad693e85b.scope. 
Jul 2 07:58:41.471561 systemd[1]: cri-containerd-3b59b547887bbcf2486b66164249ff8a08bd5a20c67cc6c2feb9fc3ad693e85b.scope: Deactivated successfully. Jul 2 07:58:41.561305 env[1407]: time="2024-07-02T07:58:41.561232446Z" level=info msg="shim disconnected" id=3b59b547887bbcf2486b66164249ff8a08bd5a20c67cc6c2feb9fc3ad693e85b Jul 2 07:58:41.561305 env[1407]: time="2024-07-02T07:58:41.561301047Z" level=warning msg="cleaning up after shim disconnected" id=3b59b547887bbcf2486b66164249ff8a08bd5a20c67cc6c2feb9fc3ad693e85b namespace=k8s.io Jul 2 07:58:41.561305 env[1407]: time="2024-07-02T07:58:41.561313647Z" level=info msg="cleaning up dead shim" Jul 2 07:58:41.569749 env[1407]: time="2024-07-02T07:58:41.569628602Z" level=warning msg="cleanup warnings time=\"2024-07-02T07:58:41Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4299 runtime=io.containerd.runc.v2\ntime=\"2024-07-02T07:58:41Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/3b59b547887bbcf2486b66164249ff8a08bd5a20c67cc6c2feb9fc3ad693e85b/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Jul 2 07:58:41.570577 env[1407]: time="2024-07-02T07:58:41.570381307Z" level=error msg="copy shim log" error="read /proc/self/fd/44: file already closed" Jul 2 07:58:41.571654 env[1407]: time="2024-07-02T07:58:41.571604815Z" level=error msg="Failed to pipe stdout of container \"3b59b547887bbcf2486b66164249ff8a08bd5a20c67cc6c2feb9fc3ad693e85b\"" error="reading from a closed fifo" Jul 2 07:58:41.574719 env[1407]: time="2024-07-02T07:58:41.574668136Z" level=error msg="Failed to pipe stderr of container \"3b59b547887bbcf2486b66164249ff8a08bd5a20c67cc6c2feb9fc3ad693e85b\"" error="reading from a closed fifo" Jul 2 07:58:41.578663 env[1407]: time="2024-07-02T07:58:41.578598762Z" level=error msg="StartContainer for \"3b59b547887bbcf2486b66164249ff8a08bd5a20c67cc6c2feb9fc3ad693e85b\" failed" error="failed to create containerd task: 
failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Jul 2 07:58:41.579166 kubelet[2483]: E0702 07:58:41.578916 2483 remote_runtime.go:343] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="3b59b547887bbcf2486b66164249ff8a08bd5a20c67cc6c2feb9fc3ad693e85b" Jul 2 07:58:41.579166 kubelet[2483]: E0702 07:58:41.579077 2483 kuberuntime_manager.go:1261] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Jul 2 07:58:41.579166 kubelet[2483]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Jul 2 07:58:41.579166 kubelet[2483]: rm /hostbin/cilium-mount Jul 2 07:58:41.579422 kubelet[2483]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-cqlhd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-72rxd_kube-system(1541cb55-6b28-4405-bc9d-b934ac4b0b57): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Jul 2 07:58:41.579540 kubelet[2483]: E0702 07:58:41.579131 2483 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc 
create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-72rxd" podUID="1541cb55-6b28-4405-bc9d-b934ac4b0b57" Jul 2 07:58:41.891648 sshd[4231]: Accepted publickey for core from 10.200.16.10 port 48524 ssh2: RSA SHA256:rMFzF1f+VHcPwzXfxcw29Fm3hFOpXl45tnQNe1IK4iE Jul 2 07:58:41.893610 sshd[4231]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:58:41.898628 systemd-logind[1395]: New session 28 of user core. Jul 2 07:58:41.899425 systemd[1]: Started session-28.scope. Jul 2 07:58:42.425318 sshd[4231]: pam_unix(sshd:session): session closed for user core Jul 2 07:58:42.429462 systemd-logind[1395]: Session 28 logged out. Waiting for processes to exit. Jul 2 07:58:42.429874 systemd[1]: sshd@25-10.200.8.39:22-10.200.16.10:48524.service: Deactivated successfully. Jul 2 07:58:42.430834 systemd[1]: session-28.scope: Deactivated successfully. Jul 2 07:58:42.432143 systemd-logind[1395]: Removed session 28. Jul 2 07:58:42.503062 env[1407]: time="2024-07-02T07:58:42.503009813Z" level=info msg="CreateContainer within sandbox \"768165220cf3122cdd34456e80eb1ad0e665d02d79a312c1971253e32e5aa5cb\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:1,}" Jul 2 07:58:42.532719 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount120577037.mount: Deactivated successfully. Jul 2 07:58:42.542723 systemd[1]: Started sshd@26-10.200.8.39:22-10.200.16.10:48540.service. 
Jul 2 07:58:42.548472 env[1407]: time="2024-07-02T07:58:42.548407005Z" level=info msg="CreateContainer within sandbox \"768165220cf3122cdd34456e80eb1ad0e665d02d79a312c1971253e32e5aa5cb\" for &ContainerMetadata{Name:mount-cgroup,Attempt:1,} returns container id \"99a88480aee895972a5560497a377c71b3e06d83885e232ec58aa42bb469a135\"" Jul 2 07:58:42.550519 env[1407]: time="2024-07-02T07:58:42.550487219Z" level=info msg="StartContainer for \"99a88480aee895972a5560497a377c71b3e06d83885e232ec58aa42bb469a135\"" Jul 2 07:58:42.572882 systemd[1]: Started cri-containerd-99a88480aee895972a5560497a377c71b3e06d83885e232ec58aa42bb469a135.scope. Jul 2 07:58:42.585115 systemd[1]: cri-containerd-99a88480aee895972a5560497a377c71b3e06d83885e232ec58aa42bb469a135.scope: Deactivated successfully. Jul 2 07:58:42.604648 env[1407]: time="2024-07-02T07:58:42.604537367Z" level=info msg="shim disconnected" id=99a88480aee895972a5560497a377c71b3e06d83885e232ec58aa42bb469a135 Jul 2 07:58:42.604860 env[1407]: time="2024-07-02T07:58:42.604654368Z" level=warning msg="cleaning up after shim disconnected" id=99a88480aee895972a5560497a377c71b3e06d83885e232ec58aa42bb469a135 namespace=k8s.io Jul 2 07:58:42.604860 env[1407]: time="2024-07-02T07:58:42.604677168Z" level=info msg="cleaning up dead shim" Jul 2 07:58:42.614061 env[1407]: time="2024-07-02T07:58:42.614023128Z" level=warning msg="cleanup warnings time=\"2024-07-02T07:58:42Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4352 runtime=io.containerd.runc.v2\ntime=\"2024-07-02T07:58:42Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/99a88480aee895972a5560497a377c71b3e06d83885e232ec58aa42bb469a135/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Jul 2 07:58:42.614373 env[1407]: time="2024-07-02T07:58:42.614311630Z" level=error msg="copy shim log" error="read /proc/self/fd/44: file already closed" Jul 2 07:58:42.614664 env[1407]: 
time="2024-07-02T07:58:42.614615532Z" level=error msg="Failed to pipe stderr of container \"99a88480aee895972a5560497a377c71b3e06d83885e232ec58aa42bb469a135\"" error="reading from a closed fifo" Jul 2 07:58:42.614664 env[1407]: time="2024-07-02T07:58:42.614616432Z" level=error msg="Failed to pipe stdout of container \"99a88480aee895972a5560497a377c71b3e06d83885e232ec58aa42bb469a135\"" error="reading from a closed fifo" Jul 2 07:58:42.620416 env[1407]: time="2024-07-02T07:58:42.620355969Z" level=error msg="StartContainer for \"99a88480aee895972a5560497a377c71b3e06d83885e232ec58aa42bb469a135\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Jul 2 07:58:42.620709 kubelet[2483]: E0702 07:58:42.620687 2483 remote_runtime.go:343] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="99a88480aee895972a5560497a377c71b3e06d83885e232ec58aa42bb469a135" Jul 2 07:58:42.621064 kubelet[2483]: E0702 07:58:42.620824 2483 kuberuntime_manager.go:1261] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Jul 2 07:58:42.621064 kubelet[2483]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Jul 2 07:58:42.621064 kubelet[2483]: rm /hostbin/cilium-mount Jul 2 07:58:42.621175 kubelet[2483]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-cqlhd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-72rxd_kube-system(1541cb55-6b28-4405-bc9d-b934ac4b0b57): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Jul 2 07:58:42.621175 kubelet[2483]: E0702 07:58:42.620878 2483 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc 
create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-72rxd" podUID="1541cb55-6b28-4405-bc9d-b934ac4b0b57" Jul 2 07:58:43.197421 sshd[4323]: Accepted publickey for core from 10.200.16.10 port 48540 ssh2: RSA SHA256:rMFzF1f+VHcPwzXfxcw29Fm3hFOpXl45tnQNe1IK4iE Jul 2 07:58:43.199305 sshd[4323]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:58:43.210352 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-99a88480aee895972a5560497a377c71b3e06d83885e232ec58aa42bb469a135-rootfs.mount: Deactivated successfully. Jul 2 07:58:43.212129 systemd-logind[1395]: New session 29 of user core. Jul 2 07:58:43.212894 systemd[1]: Started session-29.scope. Jul 2 07:58:43.504654 kubelet[2483]: I0702 07:58:43.504622 2483 scope.go:117] "RemoveContainer" containerID="3b59b547887bbcf2486b66164249ff8a08bd5a20c67cc6c2feb9fc3ad693e85b" Jul 2 07:58:43.505358 env[1407]: time="2024-07-02T07:58:43.505301450Z" level=info msg="StopPodSandbox for \"768165220cf3122cdd34456e80eb1ad0e665d02d79a312c1971253e32e5aa5cb\"" Jul 2 07:58:43.505887 env[1407]: time="2024-07-02T07:58:43.505841153Z" level=info msg="Container to stop \"99a88480aee895972a5560497a377c71b3e06d83885e232ec58aa42bb469a135\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 2 07:58:43.506027 env[1407]: time="2024-07-02T07:58:43.506006554Z" level=info msg="Container to stop \"3b59b547887bbcf2486b66164249ff8a08bd5a20c67cc6c2feb9fc3ad693e85b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 2 07:58:43.509828 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-768165220cf3122cdd34456e80eb1ad0e665d02d79a312c1971253e32e5aa5cb-shm.mount: Deactivated successfully. 
Jul 2 07:58:43.511505 env[1407]: time="2024-07-02T07:58:43.511468388Z" level=info msg="RemoveContainer for \"3b59b547887bbcf2486b66164249ff8a08bd5a20c67cc6c2feb9fc3ad693e85b\"" Jul 2 07:58:43.528317 env[1407]: time="2024-07-02T07:58:43.528273292Z" level=info msg="RemoveContainer for \"3b59b547887bbcf2486b66164249ff8a08bd5a20c67cc6c2feb9fc3ad693e85b\" returns successfully" Jul 2 07:58:43.530428 systemd[1]: cri-containerd-768165220cf3122cdd34456e80eb1ad0e665d02d79a312c1971253e32e5aa5cb.scope: Deactivated successfully. Jul 2 07:58:43.550979 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-768165220cf3122cdd34456e80eb1ad0e665d02d79a312c1971253e32e5aa5cb-rootfs.mount: Deactivated successfully. Jul 2 07:58:43.567741 env[1407]: time="2024-07-02T07:58:43.567683137Z" level=info msg="shim disconnected" id=768165220cf3122cdd34456e80eb1ad0e665d02d79a312c1971253e32e5aa5cb Jul 2 07:58:43.568039 env[1407]: time="2024-07-02T07:58:43.568012339Z" level=warning msg="cleaning up after shim disconnected" id=768165220cf3122cdd34456e80eb1ad0e665d02d79a312c1971253e32e5aa5cb namespace=k8s.io Jul 2 07:58:43.568170 env[1407]: time="2024-07-02T07:58:43.568152340Z" level=info msg="cleaning up dead shim" Jul 2 07:58:43.577723 env[1407]: time="2024-07-02T07:58:43.577684399Z" level=warning msg="cleanup warnings time=\"2024-07-02T07:58:43Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4391 runtime=io.containerd.runc.v2\n" Jul 2 07:58:43.578093 env[1407]: time="2024-07-02T07:58:43.578058401Z" level=info msg="TearDown network for sandbox \"768165220cf3122cdd34456e80eb1ad0e665d02d79a312c1971253e32e5aa5cb\" successfully" Jul 2 07:58:43.578166 env[1407]: time="2024-07-02T07:58:43.578093501Z" level=info msg="StopPodSandbox for \"768165220cf3122cdd34456e80eb1ad0e665d02d79a312c1971253e32e5aa5cb\" returns successfully" Jul 2 07:58:43.614630 kubelet[2483]: I0702 07:58:43.614582 2483 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" 
(UniqueName: \"kubernetes.io/host-path/1541cb55-6b28-4405-bc9d-b934ac4b0b57-lib-modules\") pod \"1541cb55-6b28-4405-bc9d-b934ac4b0b57\" (UID: \"1541cb55-6b28-4405-bc9d-b934ac4b0b57\") " Jul 2 07:58:43.614857 kubelet[2483]: I0702 07:58:43.614652 2483 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1541cb55-6b28-4405-bc9d-b934ac4b0b57-host-proc-sys-net\") pod \"1541cb55-6b28-4405-bc9d-b934ac4b0b57\" (UID: \"1541cb55-6b28-4405-bc9d-b934ac4b0b57\") " Jul 2 07:58:43.614857 kubelet[2483]: I0702 07:58:43.614679 2483 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1541cb55-6b28-4405-bc9d-b934ac4b0b57-host-proc-sys-kernel\") pod \"1541cb55-6b28-4405-bc9d-b934ac4b0b57\" (UID: \"1541cb55-6b28-4405-bc9d-b934ac4b0b57\") " Jul 2 07:58:43.614857 kubelet[2483]: I0702 07:58:43.614699 2483 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1541cb55-6b28-4405-bc9d-b934ac4b0b57-hostproc\") pod \"1541cb55-6b28-4405-bc9d-b934ac4b0b57\" (UID: \"1541cb55-6b28-4405-bc9d-b934ac4b0b57\") " Jul 2 07:58:43.614857 kubelet[2483]: I0702 07:58:43.614732 2483 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1541cb55-6b28-4405-bc9d-b934ac4b0b57-cilium-run\") pod \"1541cb55-6b28-4405-bc9d-b934ac4b0b57\" (UID: \"1541cb55-6b28-4405-bc9d-b934ac4b0b57\") " Jul 2 07:58:43.614857 kubelet[2483]: I0702 07:58:43.614766 2483 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1541cb55-6b28-4405-bc9d-b934ac4b0b57-cilium-config-path\") pod \"1541cb55-6b28-4405-bc9d-b934ac4b0b57\" (UID: \"1541cb55-6b28-4405-bc9d-b934ac4b0b57\") " Jul 2 07:58:43.614857 kubelet[2483]: I0702 
07:58:43.614803 2483 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1541cb55-6b28-4405-bc9d-b934ac4b0b57-etc-cni-netd\") pod \"1541cb55-6b28-4405-bc9d-b934ac4b0b57\" (UID: \"1541cb55-6b28-4405-bc9d-b934ac4b0b57\") " Jul 2 07:58:43.614857 kubelet[2483]: I0702 07:58:43.614833 2483 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/1541cb55-6b28-4405-bc9d-b934ac4b0b57-cilium-ipsec-secrets\") pod \"1541cb55-6b28-4405-bc9d-b934ac4b0b57\" (UID: \"1541cb55-6b28-4405-bc9d-b934ac4b0b57\") " Jul 2 07:58:43.615227 kubelet[2483]: I0702 07:58:43.614873 2483 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cqlhd\" (UniqueName: \"kubernetes.io/projected/1541cb55-6b28-4405-bc9d-b934ac4b0b57-kube-api-access-cqlhd\") pod \"1541cb55-6b28-4405-bc9d-b934ac4b0b57\" (UID: \"1541cb55-6b28-4405-bc9d-b934ac4b0b57\") " Jul 2 07:58:43.615227 kubelet[2483]: I0702 07:58:43.614899 2483 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1541cb55-6b28-4405-bc9d-b934ac4b0b57-cni-path\") pod \"1541cb55-6b28-4405-bc9d-b934ac4b0b57\" (UID: \"1541cb55-6b28-4405-bc9d-b934ac4b0b57\") " Jul 2 07:58:43.615227 kubelet[2483]: I0702 07:58:43.614930 2483 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1541cb55-6b28-4405-bc9d-b934ac4b0b57-clustermesh-secrets\") pod \"1541cb55-6b28-4405-bc9d-b934ac4b0b57\" (UID: \"1541cb55-6b28-4405-bc9d-b934ac4b0b57\") " Jul 2 07:58:43.615227 kubelet[2483]: I0702 07:58:43.614969 2483 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1541cb55-6b28-4405-bc9d-b934ac4b0b57-xtables-lock\") pod 
\"1541cb55-6b28-4405-bc9d-b934ac4b0b57\" (UID: \"1541cb55-6b28-4405-bc9d-b934ac4b0b57\") " Jul 2 07:58:43.615227 kubelet[2483]: I0702 07:58:43.614995 2483 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1541cb55-6b28-4405-bc9d-b934ac4b0b57-cilium-cgroup\") pod \"1541cb55-6b28-4405-bc9d-b934ac4b0b57\" (UID: \"1541cb55-6b28-4405-bc9d-b934ac4b0b57\") " Jul 2 07:58:43.615227 kubelet[2483]: I0702 07:58:43.615036 2483 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1541cb55-6b28-4405-bc9d-b934ac4b0b57-hubble-tls\") pod \"1541cb55-6b28-4405-bc9d-b934ac4b0b57\" (UID: \"1541cb55-6b28-4405-bc9d-b934ac4b0b57\") " Jul 2 07:58:43.615227 kubelet[2483]: I0702 07:58:43.615067 2483 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1541cb55-6b28-4405-bc9d-b934ac4b0b57-bpf-maps\") pod \"1541cb55-6b28-4405-bc9d-b934ac4b0b57\" (UID: \"1541cb55-6b28-4405-bc9d-b934ac4b0b57\") " Jul 2 07:58:43.615227 kubelet[2483]: I0702 07:58:43.615157 2483 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1541cb55-6b28-4405-bc9d-b934ac4b0b57-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "1541cb55-6b28-4405-bc9d-b934ac4b0b57" (UID: "1541cb55-6b28-4405-bc9d-b934ac4b0b57"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 07:58:43.615227 kubelet[2483]: I0702 07:58:43.615209 2483 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1541cb55-6b28-4405-bc9d-b934ac4b0b57-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "1541cb55-6b28-4405-bc9d-b934ac4b0b57" (UID: "1541cb55-6b28-4405-bc9d-b934ac4b0b57"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 07:58:43.615227 kubelet[2483]: I0702 07:58:43.615230 2483 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1541cb55-6b28-4405-bc9d-b934ac4b0b57-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "1541cb55-6b28-4405-bc9d-b934ac4b0b57" (UID: "1541cb55-6b28-4405-bc9d-b934ac4b0b57"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 07:58:43.615701 kubelet[2483]: I0702 07:58:43.615251 2483 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1541cb55-6b28-4405-bc9d-b934ac4b0b57-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "1541cb55-6b28-4405-bc9d-b934ac4b0b57" (UID: "1541cb55-6b28-4405-bc9d-b934ac4b0b57"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 07:58:43.615701 kubelet[2483]: I0702 07:58:43.615290 2483 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1541cb55-6b28-4405-bc9d-b934ac4b0b57-hostproc" (OuterVolumeSpecName: "hostproc") pod "1541cb55-6b28-4405-bc9d-b934ac4b0b57" (UID: "1541cb55-6b28-4405-bc9d-b934ac4b0b57"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 07:58:43.615701 kubelet[2483]: I0702 07:58:43.615312 2483 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1541cb55-6b28-4405-bc9d-b934ac4b0b57-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "1541cb55-6b28-4405-bc9d-b934ac4b0b57" (UID: "1541cb55-6b28-4405-bc9d-b934ac4b0b57"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 07:58:43.615849 kubelet[2483]: I0702 07:58:43.615742 2483 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1541cb55-6b28-4405-bc9d-b934ac4b0b57-cni-path" (OuterVolumeSpecName: "cni-path") pod "1541cb55-6b28-4405-bc9d-b934ac4b0b57" (UID: "1541cb55-6b28-4405-bc9d-b934ac4b0b57"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 07:58:43.616253 kubelet[2483]: I0702 07:58:43.615780 2483 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1541cb55-6b28-4405-bc9d-b934ac4b0b57-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "1541cb55-6b28-4405-bc9d-b934ac4b0b57" (UID: "1541cb55-6b28-4405-bc9d-b934ac4b0b57"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 07:58:43.619187 kubelet[2483]: I0702 07:58:43.618820 2483 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1541cb55-6b28-4405-bc9d-b934ac4b0b57-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "1541cb55-6b28-4405-bc9d-b934ac4b0b57" (UID: "1541cb55-6b28-4405-bc9d-b934ac4b0b57"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 07:58:43.619187 kubelet[2483]: I0702 07:58:43.618863 2483 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1541cb55-6b28-4405-bc9d-b934ac4b0b57-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "1541cb55-6b28-4405-bc9d-b934ac4b0b57" (UID: "1541cb55-6b28-4405-bc9d-b934ac4b0b57"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 07:58:43.623981 kubelet[2483]: I0702 07:58:43.623935 2483 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1541cb55-6b28-4405-bc9d-b934ac4b0b57-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "1541cb55-6b28-4405-bc9d-b934ac4b0b57" (UID: "1541cb55-6b28-4405-bc9d-b934ac4b0b57"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 2 07:58:43.627568 systemd[1]: var-lib-kubelet-pods-1541cb55\x2d6b28\x2d4405\x2dbc9d\x2db934ac4b0b57-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jul 2 07:58:43.632087 kubelet[2483]: I0702 07:58:43.630189 2483 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1541cb55-6b28-4405-bc9d-b934ac4b0b57-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "1541cb55-6b28-4405-bc9d-b934ac4b0b57" (UID: "1541cb55-6b28-4405-bc9d-b934ac4b0b57"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 2 07:58:43.634876 systemd[1]: var-lib-kubelet-pods-1541cb55\x2d6b28\x2d4405\x2dbc9d\x2db934ac4b0b57-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dcqlhd.mount: Deactivated successfully. Jul 2 07:58:43.641626 kubelet[2483]: I0702 07:58:43.639635 2483 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1541cb55-6b28-4405-bc9d-b934ac4b0b57-kube-api-access-cqlhd" (OuterVolumeSpecName: "kube-api-access-cqlhd") pod "1541cb55-6b28-4405-bc9d-b934ac4b0b57" (UID: "1541cb55-6b28-4405-bc9d-b934ac4b0b57"). InnerVolumeSpecName "kube-api-access-cqlhd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 2 07:58:43.641779 kubelet[2483]: I0702 07:58:43.641720 2483 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1541cb55-6b28-4405-bc9d-b934ac4b0b57-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "1541cb55-6b28-4405-bc9d-b934ac4b0b57" (UID: "1541cb55-6b28-4405-bc9d-b934ac4b0b57"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 2 07:58:43.643588 kubelet[2483]: I0702 07:58:43.642735 2483 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1541cb55-6b28-4405-bc9d-b934ac4b0b57-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "1541cb55-6b28-4405-bc9d-b934ac4b0b57" (UID: "1541cb55-6b28-4405-bc9d-b934ac4b0b57"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 2 07:58:43.715419 kubelet[2483]: I0702 07:58:43.715377 2483 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1541cb55-6b28-4405-bc9d-b934ac4b0b57-hostproc\") on node \"ci-3510.3.5-a-61dd50c322\" DevicePath \"\"" Jul 2 07:58:43.715419 kubelet[2483]: I0702 07:58:43.715419 2483 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1541cb55-6b28-4405-bc9d-b934ac4b0b57-host-proc-sys-net\") on node \"ci-3510.3.5-a-61dd50c322\" DevicePath \"\"" Jul 2 07:58:43.715685 kubelet[2483]: I0702 07:58:43.715442 2483 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1541cb55-6b28-4405-bc9d-b934ac4b0b57-host-proc-sys-kernel\") on node \"ci-3510.3.5-a-61dd50c322\" DevicePath \"\"" Jul 2 07:58:43.715685 kubelet[2483]: I0702 07:58:43.715456 2483 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: 
\"kubernetes.io/host-path/1541cb55-6b28-4405-bc9d-b934ac4b0b57-cilium-run\") on node \"ci-3510.3.5-a-61dd50c322\" DevicePath \"\"" Jul 2 07:58:43.715685 kubelet[2483]: I0702 07:58:43.715471 2483 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1541cb55-6b28-4405-bc9d-b934ac4b0b57-cilium-config-path\") on node \"ci-3510.3.5-a-61dd50c322\" DevicePath \"\"" Jul 2 07:58:43.715685 kubelet[2483]: I0702 07:58:43.715483 2483 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1541cb55-6b28-4405-bc9d-b934ac4b0b57-etc-cni-netd\") on node \"ci-3510.3.5-a-61dd50c322\" DevicePath \"\"" Jul 2 07:58:43.715685 kubelet[2483]: I0702 07:58:43.715496 2483 reconciler_common.go:300] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/1541cb55-6b28-4405-bc9d-b934ac4b0b57-cilium-ipsec-secrets\") on node \"ci-3510.3.5-a-61dd50c322\" DevicePath \"\"" Jul 2 07:58:43.715685 kubelet[2483]: I0702 07:58:43.715510 2483 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-cqlhd\" (UniqueName: \"kubernetes.io/projected/1541cb55-6b28-4405-bc9d-b934ac4b0b57-kube-api-access-cqlhd\") on node \"ci-3510.3.5-a-61dd50c322\" DevicePath \"\"" Jul 2 07:58:43.715685 kubelet[2483]: I0702 07:58:43.715524 2483 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1541cb55-6b28-4405-bc9d-b934ac4b0b57-clustermesh-secrets\") on node \"ci-3510.3.5-a-61dd50c322\" DevicePath \"\"" Jul 2 07:58:43.715685 kubelet[2483]: I0702 07:58:43.715536 2483 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1541cb55-6b28-4405-bc9d-b934ac4b0b57-cni-path\") on node \"ci-3510.3.5-a-61dd50c322\" DevicePath \"\"" Jul 2 07:58:43.715685 kubelet[2483]: I0702 07:58:43.715570 2483 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" 
(UniqueName: \"kubernetes.io/host-path/1541cb55-6b28-4405-bc9d-b934ac4b0b57-xtables-lock\") on node \"ci-3510.3.5-a-61dd50c322\" DevicePath \"\"" Jul 2 07:58:43.715685 kubelet[2483]: I0702 07:58:43.715584 2483 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1541cb55-6b28-4405-bc9d-b934ac4b0b57-cilium-cgroup\") on node \"ci-3510.3.5-a-61dd50c322\" DevicePath \"\"" Jul 2 07:58:43.715685 kubelet[2483]: I0702 07:58:43.715597 2483 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1541cb55-6b28-4405-bc9d-b934ac4b0b57-hubble-tls\") on node \"ci-3510.3.5-a-61dd50c322\" DevicePath \"\"" Jul 2 07:58:43.715685 kubelet[2483]: I0702 07:58:43.715609 2483 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1541cb55-6b28-4405-bc9d-b934ac4b0b57-bpf-maps\") on node \"ci-3510.3.5-a-61dd50c322\" DevicePath \"\"" Jul 2 07:58:43.715685 kubelet[2483]: I0702 07:58:43.715621 2483 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1541cb55-6b28-4405-bc9d-b934ac4b0b57-lib-modules\") on node \"ci-3510.3.5-a-61dd50c322\" DevicePath \"\"" Jul 2 07:58:44.018374 systemd[1]: Removed slice kubepods-burstable-pod1541cb55_6b28_4405_bc9d_b934ac4b0b57.slice. Jul 2 07:58:44.209735 systemd[1]: var-lib-kubelet-pods-1541cb55\x2d6b28\x2d4405\x2dbc9d\x2db934ac4b0b57-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Jul 2 07:58:44.209863 systemd[1]: var-lib-kubelet-pods-1541cb55\x2d6b28\x2d4405\x2dbc9d\x2db934ac4b0b57-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Jul 2 07:58:44.508307 kubelet[2483]: I0702 07:58:44.508267 2483 scope.go:117] "RemoveContainer" containerID="99a88480aee895972a5560497a377c71b3e06d83885e232ec58aa42bb469a135" Jul 2 07:58:44.510758 env[1407]: time="2024-07-02T07:58:44.510427772Z" level=info msg="RemoveContainer for \"99a88480aee895972a5560497a377c71b3e06d83885e232ec58aa42bb469a135\"" Jul 2 07:58:44.520330 env[1407]: time="2024-07-02T07:58:44.520291031Z" level=info msg="RemoveContainer for \"99a88480aee895972a5560497a377c71b3e06d83885e232ec58aa42bb469a135\" returns successfully" Jul 2 07:58:44.548357 kubelet[2483]: I0702 07:58:44.548309 2483 topology_manager.go:215] "Topology Admit Handler" podUID="da3acc8e-f0ac-4b05-a6af-f4a7594760ac" podNamespace="kube-system" podName="cilium-hhs8j" Jul 2 07:58:44.548737 kubelet[2483]: E0702 07:58:44.548719 2483 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="1541cb55-6b28-4405-bc9d-b934ac4b0b57" containerName="mount-cgroup" Jul 2 07:58:44.548885 kubelet[2483]: E0702 07:58:44.548873 2483 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="1541cb55-6b28-4405-bc9d-b934ac4b0b57" containerName="mount-cgroup" Jul 2 07:58:44.549444 kubelet[2483]: I0702 07:58:44.549416 2483 memory_manager.go:346] "RemoveStaleState removing state" podUID="1541cb55-6b28-4405-bc9d-b934ac4b0b57" containerName="mount-cgroup" Jul 2 07:58:44.549541 kubelet[2483]: I0702 07:58:44.549450 2483 memory_manager.go:346] "RemoveStaleState removing state" podUID="1541cb55-6b28-4405-bc9d-b934ac4b0b57" containerName="mount-cgroup" Jul 2 07:58:44.558281 kubelet[2483]: W0702 07:58:44.558262 2483 reflector.go:535] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ci-3510.3.5-a-61dd50c322" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.5-a-61dd50c322' and this object Jul 2 07:58:44.558448 kubelet[2483]: E0702 
07:58:44.558434 2483 reflector.go:147] object-"kube-system"/"hubble-server-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ci-3510.3.5-a-61dd50c322" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.5-a-61dd50c322' and this object Jul 2 07:58:44.558596 kubelet[2483]: W0702 07:58:44.558446 2483 reflector.go:535] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ci-3510.3.5-a-61dd50c322" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.5-a-61dd50c322' and this object Jul 2 07:58:44.558699 kubelet[2483]: E0702 07:58:44.558691 2483 reflector.go:147] object-"kube-system"/"cilium-clustermesh": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ci-3510.3.5-a-61dd50c322" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.5-a-61dd50c322' and this object Jul 2 07:58:44.558843 kubelet[2483]: W0702 07:58:44.558493 2483 reflector.go:535] object-"kube-system"/"cilium-ipsec-keys": failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:ci-3510.3.5-a-61dd50c322" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.5-a-61dd50c322' and this object Jul 2 07:58:44.558937 kubelet[2483]: E0702 07:58:44.558929 2483 reflector.go:147] object-"kube-system"/"cilium-ipsec-keys": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:ci-3510.3.5-a-61dd50c322" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 
'ci-3510.3.5-a-61dd50c322' and this object Jul 2 07:58:44.567834 systemd[1]: Created slice kubepods-burstable-podda3acc8e_f0ac_4b05_a6af_f4a7594760ac.slice. Jul 2 07:58:44.666170 kubelet[2483]: W0702 07:58:44.666118 2483 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1541cb55_6b28_4405_bc9d_b934ac4b0b57.slice/cri-containerd-3b59b547887bbcf2486b66164249ff8a08bd5a20c67cc6c2feb9fc3ad693e85b.scope WatchSource:0}: container "3b59b547887bbcf2486b66164249ff8a08bd5a20c67cc6c2feb9fc3ad693e85b" in namespace "k8s.io": not found Jul 2 07:58:44.720777 kubelet[2483]: I0702 07:58:44.720725 2483 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/da3acc8e-f0ac-4b05-a6af-f4a7594760ac-host-proc-sys-net\") pod \"cilium-hhs8j\" (UID: \"da3acc8e-f0ac-4b05-a6af-f4a7594760ac\") " pod="kube-system/cilium-hhs8j" Jul 2 07:58:44.721017 kubelet[2483]: I0702 07:58:44.720835 2483 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/da3acc8e-f0ac-4b05-a6af-f4a7594760ac-cilium-config-path\") pod \"cilium-hhs8j\" (UID: \"da3acc8e-f0ac-4b05-a6af-f4a7594760ac\") " pod="kube-system/cilium-hhs8j" Jul 2 07:58:44.721017 kubelet[2483]: I0702 07:58:44.720871 2483 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/da3acc8e-f0ac-4b05-a6af-f4a7594760ac-cilium-ipsec-secrets\") pod \"cilium-hhs8j\" (UID: \"da3acc8e-f0ac-4b05-a6af-f4a7594760ac\") " pod="kube-system/cilium-hhs8j" Jul 2 07:58:44.721017 kubelet[2483]: I0702 07:58:44.720904 2483 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: 
\"kubernetes.io/host-path/da3acc8e-f0ac-4b05-a6af-f4a7594760ac-bpf-maps\") pod \"cilium-hhs8j\" (UID: \"da3acc8e-f0ac-4b05-a6af-f4a7594760ac\") " pod="kube-system/cilium-hhs8j" Jul 2 07:58:44.721017 kubelet[2483]: I0702 07:58:44.720932 2483 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/da3acc8e-f0ac-4b05-a6af-f4a7594760ac-clustermesh-secrets\") pod \"cilium-hhs8j\" (UID: \"da3acc8e-f0ac-4b05-a6af-f4a7594760ac\") " pod="kube-system/cilium-hhs8j" Jul 2 07:58:44.721017 kubelet[2483]: I0702 07:58:44.720967 2483 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/da3acc8e-f0ac-4b05-a6af-f4a7594760ac-cni-path\") pod \"cilium-hhs8j\" (UID: \"da3acc8e-f0ac-4b05-a6af-f4a7594760ac\") " pod="kube-system/cilium-hhs8j" Jul 2 07:58:44.721017 kubelet[2483]: I0702 07:58:44.721005 2483 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/da3acc8e-f0ac-4b05-a6af-f4a7594760ac-etc-cni-netd\") pod \"cilium-hhs8j\" (UID: \"da3acc8e-f0ac-4b05-a6af-f4a7594760ac\") " pod="kube-system/cilium-hhs8j" Jul 2 07:58:44.721369 kubelet[2483]: I0702 07:58:44.721035 2483 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/da3acc8e-f0ac-4b05-a6af-f4a7594760ac-xtables-lock\") pod \"cilium-hhs8j\" (UID: \"da3acc8e-f0ac-4b05-a6af-f4a7594760ac\") " pod="kube-system/cilium-hhs8j" Jul 2 07:58:44.721369 kubelet[2483]: I0702 07:58:44.721069 2483 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/da3acc8e-f0ac-4b05-a6af-f4a7594760ac-hubble-tls\") pod \"cilium-hhs8j\" (UID: \"da3acc8e-f0ac-4b05-a6af-f4a7594760ac\") " 
pod="kube-system/cilium-hhs8j" Jul 2 07:58:44.721369 kubelet[2483]: I0702 07:58:44.721107 2483 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/da3acc8e-f0ac-4b05-a6af-f4a7594760ac-lib-modules\") pod \"cilium-hhs8j\" (UID: \"da3acc8e-f0ac-4b05-a6af-f4a7594760ac\") " pod="kube-system/cilium-hhs8j" Jul 2 07:58:44.721369 kubelet[2483]: I0702 07:58:44.721148 2483 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/da3acc8e-f0ac-4b05-a6af-f4a7594760ac-hostproc\") pod \"cilium-hhs8j\" (UID: \"da3acc8e-f0ac-4b05-a6af-f4a7594760ac\") " pod="kube-system/cilium-hhs8j" Jul 2 07:58:44.721369 kubelet[2483]: I0702 07:58:44.721183 2483 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/da3acc8e-f0ac-4b05-a6af-f4a7594760ac-cilium-cgroup\") pod \"cilium-hhs8j\" (UID: \"da3acc8e-f0ac-4b05-a6af-f4a7594760ac\") " pod="kube-system/cilium-hhs8j" Jul 2 07:58:44.721369 kubelet[2483]: I0702 07:58:44.721222 2483 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xrql6\" (UniqueName: \"kubernetes.io/projected/da3acc8e-f0ac-4b05-a6af-f4a7594760ac-kube-api-access-xrql6\") pod \"cilium-hhs8j\" (UID: \"da3acc8e-f0ac-4b05-a6af-f4a7594760ac\") " pod="kube-system/cilium-hhs8j" Jul 2 07:58:44.721369 kubelet[2483]: I0702 07:58:44.721261 2483 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/da3acc8e-f0ac-4b05-a6af-f4a7594760ac-cilium-run\") pod \"cilium-hhs8j\" (UID: \"da3acc8e-f0ac-4b05-a6af-f4a7594760ac\") " pod="kube-system/cilium-hhs8j" Jul 2 07:58:44.721369 kubelet[2483]: I0702 07:58:44.721299 2483 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/da3acc8e-f0ac-4b05-a6af-f4a7594760ac-host-proc-sys-kernel\") pod \"cilium-hhs8j\" (UID: \"da3acc8e-f0ac-4b05-a6af-f4a7594760ac\") " pod="kube-system/cilium-hhs8j" Jul 2 07:58:45.823760 kubelet[2483]: E0702 07:58:45.823700 2483 projected.go:267] Couldn't get secret kube-system/hubble-server-certs: failed to sync secret cache: timed out waiting for the condition Jul 2 07:58:45.823760 kubelet[2483]: E0702 07:58:45.823750 2483 projected.go:198] Error preparing data for projected volume hubble-tls for pod kube-system/cilium-hhs8j: failed to sync secret cache: timed out waiting for the condition Jul 2 07:58:45.824420 kubelet[2483]: E0702 07:58:45.823845 2483 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/da3acc8e-f0ac-4b05-a6af-f4a7594760ac-hubble-tls podName:da3acc8e-f0ac-4b05-a6af-f4a7594760ac nodeName:}" failed. No retries permitted until 2024-07-02 07:58:46.323815941 +0000 UTC m=+190.398160773 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "hubble-tls" (UniqueName: "kubernetes.io/projected/da3acc8e-f0ac-4b05-a6af-f4a7594760ac-hubble-tls") pod "cilium-hhs8j" (UID: "da3acc8e-f0ac-4b05-a6af-f4a7594760ac") : failed to sync secret cache: timed out waiting for the condition Jul 2 07:58:45.824420 kubelet[2483]: E0702 07:58:45.823696 2483 secret.go:194] Couldn't get secret kube-system/cilium-ipsec-keys: failed to sync secret cache: timed out waiting for the condition Jul 2 07:58:45.824420 kubelet[2483]: E0702 07:58:45.824285 2483 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/da3acc8e-f0ac-4b05-a6af-f4a7594760ac-cilium-ipsec-secrets podName:da3acc8e-f0ac-4b05-a6af-f4a7594760ac nodeName:}" failed. No retries permitted until 2024-07-02 07:58:46.324262043 +0000 UTC m=+190.398606875 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cilium-ipsec-secrets" (UniqueName: "kubernetes.io/secret/da3acc8e-f0ac-4b05-a6af-f4a7594760ac-cilium-ipsec-secrets") pod "cilium-hhs8j" (UID: "da3acc8e-f0ac-4b05-a6af-f4a7594760ac") : failed to sync secret cache: timed out waiting for the condition Jul 2 07:58:46.015990 kubelet[2483]: I0702 07:58:46.015941 2483 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="1541cb55-6b28-4405-bc9d-b934ac4b0b57" path="/var/lib/kubelet/pods/1541cb55-6b28-4405-bc9d-b934ac4b0b57/volumes" Jul 2 07:58:46.135223 kubelet[2483]: E0702 07:58:46.135093 2483 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 2 07:58:46.373158 env[1407]: time="2024-07-02T07:58:46.373099620Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-hhs8j,Uid:da3acc8e-f0ac-4b05-a6af-f4a7594760ac,Namespace:kube-system,Attempt:0,}" Jul 2 07:58:46.411771 env[1407]: time="2024-07-02T07:58:46.411629033Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 07:58:46.411771 env[1407]: time="2024-07-02T07:58:46.411675733Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 07:58:46.412026 env[1407]: time="2024-07-02T07:58:46.411689033Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 07:58:46.412388 env[1407]: time="2024-07-02T07:58:46.412316037Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c06a7e635b27281908fac940c7b7420f9d7e538ed0589127dfa42d58e0d2e1ea pid=4420 runtime=io.containerd.runc.v2 Jul 2 07:58:46.428049 systemd[1]: Started cri-containerd-c06a7e635b27281908fac940c7b7420f9d7e538ed0589127dfa42d58e0d2e1ea.scope. Jul 2 07:58:46.457155 env[1407]: time="2024-07-02T07:58:46.457108485Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-hhs8j,Uid:da3acc8e-f0ac-4b05-a6af-f4a7594760ac,Namespace:kube-system,Attempt:0,} returns sandbox id \"c06a7e635b27281908fac940c7b7420f9d7e538ed0589127dfa42d58e0d2e1ea\"" Jul 2 07:58:46.462161 env[1407]: time="2024-07-02T07:58:46.462119612Z" level=info msg="CreateContainer within sandbox \"c06a7e635b27281908fac940c7b7420f9d7e538ed0589127dfa42d58e0d2e1ea\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 2 07:58:46.494354 env[1407]: time="2024-07-02T07:58:46.494310290Z" level=info msg="CreateContainer within sandbox \"c06a7e635b27281908fac940c7b7420f9d7e538ed0589127dfa42d58e0d2e1ea\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"b48be138540cabced85f5b0a8c09a38a3ca0db76c316eb693bdabc827ebac422\"" Jul 2 07:58:46.495083 env[1407]: time="2024-07-02T07:58:46.495050095Z" level=info msg="StartContainer for \"b48be138540cabced85f5b0a8c09a38a3ca0db76c316eb693bdabc827ebac422\"" Jul 2 07:58:46.517006 systemd[1]: Started cri-containerd-b48be138540cabced85f5b0a8c09a38a3ca0db76c316eb693bdabc827ebac422.scope. 
Jul 2 07:58:46.552440 env[1407]: time="2024-07-02T07:58:46.550110099Z" level=info msg="StartContainer for \"b48be138540cabced85f5b0a8c09a38a3ca0db76c316eb693bdabc827ebac422\" returns successfully" Jul 2 07:58:46.557583 systemd[1]: cri-containerd-b48be138540cabced85f5b0a8c09a38a3ca0db76c316eb693bdabc827ebac422.scope: Deactivated successfully. Jul 2 07:58:46.595211 env[1407]: time="2024-07-02T07:58:46.595146049Z" level=info msg="shim disconnected" id=b48be138540cabced85f5b0a8c09a38a3ca0db76c316eb693bdabc827ebac422 Jul 2 07:58:46.595211 env[1407]: time="2024-07-02T07:58:46.595205149Z" level=warning msg="cleaning up after shim disconnected" id=b48be138540cabced85f5b0a8c09a38a3ca0db76c316eb693bdabc827ebac422 namespace=k8s.io Jul 2 07:58:46.595211 env[1407]: time="2024-07-02T07:58:46.595217749Z" level=info msg="cleaning up dead shim" Jul 2 07:58:46.603085 env[1407]: time="2024-07-02T07:58:46.603043892Z" level=warning msg="cleanup warnings time=\"2024-07-02T07:58:46Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4504 runtime=io.containerd.runc.v2\n" Jul 2 07:58:47.525269 env[1407]: time="2024-07-02T07:58:47.525216481Z" level=info msg="CreateContainer within sandbox \"c06a7e635b27281908fac940c7b7420f9d7e538ed0589127dfa42d58e0d2e1ea\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 2 07:58:47.560927 env[1407]: time="2024-07-02T07:58:47.560873771Z" level=info msg="CreateContainer within sandbox \"c06a7e635b27281908fac940c7b7420f9d7e538ed0589127dfa42d58e0d2e1ea\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"1cb03ab206434fba3686f82a42bd2aee3134f84ba2c2eb3417e526c039e783f0\"" Jul 2 07:58:47.561696 env[1407]: time="2024-07-02T07:58:47.561655375Z" level=info msg="StartContainer for \"1cb03ab206434fba3686f82a42bd2aee3134f84ba2c2eb3417e526c039e783f0\"" Jul 2 07:58:47.587913 systemd[1]: Started cri-containerd-1cb03ab206434fba3686f82a42bd2aee3134f84ba2c2eb3417e526c039e783f0.scope. 
Jul 2 07:58:47.620878 env[1407]: time="2024-07-02T07:58:47.620825990Z" level=info msg="StartContainer for \"1cb03ab206434fba3686f82a42bd2aee3134f84ba2c2eb3417e526c039e783f0\" returns successfully" Jul 2 07:58:47.623537 systemd[1]: cri-containerd-1cb03ab206434fba3686f82a42bd2aee3134f84ba2c2eb3417e526c039e783f0.scope: Deactivated successfully. Jul 2 07:58:47.659171 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1cb03ab206434fba3686f82a42bd2aee3134f84ba2c2eb3417e526c039e783f0-rootfs.mount: Deactivated successfully. Jul 2 07:58:47.660647 env[1407]: time="2024-07-02T07:58:47.660582501Z" level=info msg="shim disconnected" id=1cb03ab206434fba3686f82a42bd2aee3134f84ba2c2eb3417e526c039e783f0 Jul 2 07:58:47.660816 env[1407]: time="2024-07-02T07:58:47.660654902Z" level=warning msg="cleaning up after shim disconnected" id=1cb03ab206434fba3686f82a42bd2aee3134f84ba2c2eb3417e526c039e783f0 namespace=k8s.io Jul 2 07:58:47.660816 env[1407]: time="2024-07-02T07:58:47.660667702Z" level=info msg="cleaning up dead shim" Jul 2 07:58:47.672600 env[1407]: time="2024-07-02T07:58:47.672537965Z" level=warning msg="cleanup warnings time=\"2024-07-02T07:58:47Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4570 runtime=io.containerd.runc.v2\n" Jul 2 07:58:48.526812 env[1407]: time="2024-07-02T07:58:48.526749894Z" level=info msg="CreateContainer within sandbox \"c06a7e635b27281908fac940c7b7420f9d7e538ed0589127dfa42d58e0d2e1ea\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 2 07:58:48.582592 env[1407]: time="2024-07-02T07:58:48.582485678Z" level=info msg="CreateContainer within sandbox \"c06a7e635b27281908fac940c7b7420f9d7e538ed0589127dfa42d58e0d2e1ea\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"84dd7032e6b88d93c49544f4dedd7c4d7c0ccbd47afdd6da2478ff56e0dedc40\"" Jul 2 07:58:48.584599 env[1407]: time="2024-07-02T07:58:48.584533789Z" level=info msg="StartContainer for 
\"84dd7032e6b88d93c49544f4dedd7c4d7c0ccbd47afdd6da2478ff56e0dedc40\"" Jul 2 07:58:48.612323 systemd[1]: Started cri-containerd-84dd7032e6b88d93c49544f4dedd7c4d7c0ccbd47afdd6da2478ff56e0dedc40.scope. Jul 2 07:58:48.648055 systemd[1]: cri-containerd-84dd7032e6b88d93c49544f4dedd7c4d7c0ccbd47afdd6da2478ff56e0dedc40.scope: Deactivated successfully. Jul 2 07:58:48.649679 env[1407]: time="2024-07-02T07:58:48.649617321Z" level=info msg="StartContainer for \"84dd7032e6b88d93c49544f4dedd7c4d7c0ccbd47afdd6da2478ff56e0dedc40\" returns successfully" Jul 2 07:58:48.659220 systemd[1]: run-containerd-runc-k8s.io-84dd7032e6b88d93c49544f4dedd7c4d7c0ccbd47afdd6da2478ff56e0dedc40-runc.MCzr6p.mount: Deactivated successfully. Jul 2 07:58:48.677484 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-84dd7032e6b88d93c49544f4dedd7c4d7c0ccbd47afdd6da2478ff56e0dedc40-rootfs.mount: Deactivated successfully. Jul 2 07:58:48.693041 env[1407]: time="2024-07-02T07:58:48.692978142Z" level=info msg="shim disconnected" id=84dd7032e6b88d93c49544f4dedd7c4d7c0ccbd47afdd6da2478ff56e0dedc40 Jul 2 07:58:48.693041 env[1407]: time="2024-07-02T07:58:48.693043242Z" level=warning msg="cleaning up after shim disconnected" id=84dd7032e6b88d93c49544f4dedd7c4d7c0ccbd47afdd6da2478ff56e0dedc40 namespace=k8s.io Jul 2 07:58:48.693041 env[1407]: time="2024-07-02T07:58:48.693055142Z" level=info msg="cleaning up dead shim" Jul 2 07:58:48.709642 env[1407]: time="2024-07-02T07:58:48.709570127Z" level=warning msg="cleanup warnings time=\"2024-07-02T07:58:48Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4627 runtime=io.containerd.runc.v2\n" Jul 2 07:58:49.536971 env[1407]: time="2024-07-02T07:58:49.534027421Z" level=info msg="CreateContainer within sandbox \"c06a7e635b27281908fac940c7b7420f9d7e538ed0589127dfa42d58e0d2e1ea\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 2 07:58:49.576848 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3575374306.mount: 
Deactivated successfully. Jul 2 07:58:49.590843 env[1407]: time="2024-07-02T07:58:49.590792599Z" level=info msg="CreateContainer within sandbox \"c06a7e635b27281908fac940c7b7420f9d7e538ed0589127dfa42d58e0d2e1ea\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"b5e4ac614c6094bd72911a4b452a7b0614faebaa13fdaa43a8a4551b8af776f3\"" Jul 2 07:58:49.591499 env[1407]: time="2024-07-02T07:58:49.591462402Z" level=info msg="StartContainer for \"b5e4ac614c6094bd72911a4b452a7b0614faebaa13fdaa43a8a4551b8af776f3\"" Jul 2 07:58:49.609792 systemd[1]: Started cri-containerd-b5e4ac614c6094bd72911a4b452a7b0614faebaa13fdaa43a8a4551b8af776f3.scope. Jul 2 07:58:49.638980 systemd[1]: cri-containerd-b5e4ac614c6094bd72911a4b452a7b0614faebaa13fdaa43a8a4551b8af776f3.scope: Deactivated successfully. Jul 2 07:58:49.645121 env[1407]: time="2024-07-02T07:58:49.645067265Z" level=info msg="StartContainer for \"b5e4ac614c6094bd72911a4b452a7b0614faebaa13fdaa43a8a4551b8af776f3\" returns successfully" Jul 2 07:58:49.675377 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b5e4ac614c6094bd72911a4b452a7b0614faebaa13fdaa43a8a4551b8af776f3-rootfs.mount: Deactivated successfully. 
Jul 2 07:58:49.688609 env[1407]: time="2024-07-02T07:58:49.688536777Z" level=info msg="shim disconnected" id=b5e4ac614c6094bd72911a4b452a7b0614faebaa13fdaa43a8a4551b8af776f3 Jul 2 07:58:49.688959 env[1407]: time="2024-07-02T07:58:49.688610378Z" level=warning msg="cleaning up after shim disconnected" id=b5e4ac614c6094bd72911a4b452a7b0614faebaa13fdaa43a8a4551b8af776f3 namespace=k8s.io Jul 2 07:58:49.688959 env[1407]: time="2024-07-02T07:58:49.688623578Z" level=info msg="cleaning up dead shim" Jul 2 07:58:49.696836 env[1407]: time="2024-07-02T07:58:49.696795718Z" level=warning msg="cleanup warnings time=\"2024-07-02T07:58:49Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4683 runtime=io.containerd.runc.v2\n" Jul 2 07:58:50.537383 env[1407]: time="2024-07-02T07:58:50.537329218Z" level=info msg="CreateContainer within sandbox \"c06a7e635b27281908fac940c7b7420f9d7e538ed0589127dfa42d58e0d2e1ea\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 2 07:58:50.576523 env[1407]: time="2024-07-02T07:58:50.576472502Z" level=info msg="CreateContainer within sandbox \"c06a7e635b27281908fac940c7b7420f9d7e538ed0589127dfa42d58e0d2e1ea\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"110929b17d98c36b422a49efc84e50cf989815806885c5038ecfe0b6ec780547\"" Jul 2 07:58:50.577141 env[1407]: time="2024-07-02T07:58:50.577103205Z" level=info msg="StartContainer for \"110929b17d98c36b422a49efc84e50cf989815806885c5038ecfe0b6ec780547\"" Jul 2 07:58:50.600029 systemd[1]: Started cri-containerd-110929b17d98c36b422a49efc84e50cf989815806885c5038ecfe0b6ec780547.scope. 
Jul 2 07:58:50.631017 kubelet[2483]: I0702 07:58:50.630960 2483 setters.go:552] "Node became not ready" node="ci-3510.3.5-a-61dd50c322" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-07-02T07:58:50Z","lastTransitionTime":"2024-07-02T07:58:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Jul 2 07:58:50.647757 env[1407]: time="2024-07-02T07:58:50.647699135Z" level=info msg="StartContainer for \"110929b17d98c36b422a49efc84e50cf989815806885c5038ecfe0b6ec780547\" returns successfully" Jul 2 07:58:51.013300 kubelet[2483]: E0702 07:58:51.013189 2483 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-5dd5756b68-rrpgn" podUID="c4fa4abd-0ab3-4d02-a7cc-92dd3bed288f" Jul 2 07:58:51.249595 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Jul 2 07:58:51.558270 kubelet[2483]: I0702 07:58:51.558224 2483 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-hhs8j" podStartSLOduration=7.558165087 podCreationTimestamp="2024-07-02 07:58:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 07:58:51.557873586 +0000 UTC m=+195.632218418" watchObservedRunningTime="2024-07-02 07:58:51.558165087 +0000 UTC m=+195.632509919" Jul 2 07:58:53.904949 systemd-networkd[1558]: lxc_health: Link UP Jul 2 07:58:53.916827 systemd-networkd[1558]: lxc_health: Gained carrier Jul 2 07:58:53.917679 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Jul 2 07:58:55.172944 systemd-networkd[1558]: lxc_health: Gained IPv6LL Jul 2 07:58:59.409105 systemd[1]: 
run-containerd-runc-k8s.io-110929b17d98c36b422a49efc84e50cf989815806885c5038ecfe0b6ec780547-runc.v5TeuK.mount: Deactivated successfully. Jul 2 07:59:01.556103 systemd[1]: run-containerd-runc-k8s.io-110929b17d98c36b422a49efc84e50cf989815806885c5038ecfe0b6ec780547-runc.8Tqtiu.mount: Deactivated successfully. Jul 2 07:59:01.723980 sshd[4323]: pam_unix(sshd:session): session closed for user core Jul 2 07:59:01.727731 systemd[1]: sshd@26-10.200.8.39:22-10.200.16.10:48540.service: Deactivated successfully. Jul 2 07:59:01.728716 systemd[1]: session-29.scope: Deactivated successfully. Jul 2 07:59:01.729423 systemd-logind[1395]: Session 29 logged out. Waiting for processes to exit. Jul 2 07:59:01.730344 systemd-logind[1395]: Removed session 29. Jul 2 07:59:16.548157 systemd[1]: cri-containerd-077997c3793bdcc4c1901bf724101866fe83c54f076d39581bf9842b75d6a501.scope: Deactivated successfully. Jul 2 07:59:16.548508 systemd[1]: cri-containerd-077997c3793bdcc4c1901bf724101866fe83c54f076d39581bf9842b75d6a501.scope: Consumed 4.138s CPU time. Jul 2 07:59:16.570606 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-077997c3793bdcc4c1901bf724101866fe83c54f076d39581bf9842b75d6a501-rootfs.mount: Deactivated successfully. 
Jul 2 07:59:16.611811 env[1407]: time="2024-07-02T07:59:16.611746755Z" level=info msg="shim disconnected" id=077997c3793bdcc4c1901bf724101866fe83c54f076d39581bf9842b75d6a501 Jul 2 07:59:16.611811 env[1407]: time="2024-07-02T07:59:16.611810855Z" level=warning msg="cleaning up after shim disconnected" id=077997c3793bdcc4c1901bf724101866fe83c54f076d39581bf9842b75d6a501 namespace=k8s.io Jul 2 07:59:16.611811 env[1407]: time="2024-07-02T07:59:16.611824255Z" level=info msg="cleaning up dead shim" Jul 2 07:59:16.620302 env[1407]: time="2024-07-02T07:59:16.620245057Z" level=warning msg="cleanup warnings time=\"2024-07-02T07:59:16Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5389 runtime=io.containerd.runc.v2\n" Jul 2 07:59:17.602779 kubelet[2483]: I0702 07:59:17.602741 2483 scope.go:117] "RemoveContainer" containerID="077997c3793bdcc4c1901bf724101866fe83c54f076d39581bf9842b75d6a501" Jul 2 07:59:17.605697 env[1407]: time="2024-07-02T07:59:17.605642767Z" level=info msg="CreateContainer within sandbox \"2a3301ad8a1ae5ce38d1abfd37dd592db1327c306d9eabfefce5766193299ddf\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Jul 2 07:59:17.631294 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3120081557.mount: Deactivated successfully. Jul 2 07:59:17.649086 env[1407]: time="2024-07-02T07:59:17.649044374Z" level=info msg="CreateContainer within sandbox \"2a3301ad8a1ae5ce38d1abfd37dd592db1327c306d9eabfefce5766193299ddf\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"052e6b2da1968f1c0cfba027cfe4c64fa2fa8dc07ff52d585a8f57799d938ada\"" Jul 2 07:59:17.649588 env[1407]: time="2024-07-02T07:59:17.649533174Z" level=info msg="StartContainer for \"052e6b2da1968f1c0cfba027cfe4c64fa2fa8dc07ff52d585a8f57799d938ada\"" Jul 2 07:59:17.673572 systemd[1]: Started cri-containerd-052e6b2da1968f1c0cfba027cfe4c64fa2fa8dc07ff52d585a8f57799d938ada.scope. 
Jul 2 07:59:17.729595 env[1407]: time="2024-07-02T07:59:17.728394387Z" level=info msg="StartContainer for \"052e6b2da1968f1c0cfba027cfe4c64fa2fa8dc07ff52d585a8f57799d938ada\" returns successfully" Jul 2 07:59:18.729143 kubelet[2483]: E0702 07:59:18.729088 2483 controller.go:193] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.200.8.39:36456->10.200.8.14:2379: read: connection timed out" Jul 2 07:59:18.734516 systemd[1]: cri-containerd-1c75e8ed6dbcdf90ccc2d5c522e3effce7bbce742bb50bce60676f664fb86a91.scope: Deactivated successfully. Jul 2 07:59:18.734876 systemd[1]: cri-containerd-1c75e8ed6dbcdf90ccc2d5c522e3effce7bbce742bb50bce60676f664fb86a91.scope: Consumed 1.845s CPU time. Jul 2 07:59:18.757137 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1c75e8ed6dbcdf90ccc2d5c522e3effce7bbce742bb50bce60676f664fb86a91-rootfs.mount: Deactivated successfully. Jul 2 07:59:18.779121 env[1407]: time="2024-07-02T07:59:18.779054652Z" level=info msg="shim disconnected" id=1c75e8ed6dbcdf90ccc2d5c522e3effce7bbce742bb50bce60676f664fb86a91 Jul 2 07:59:18.779121 env[1407]: time="2024-07-02T07:59:18.779121152Z" level=warning msg="cleaning up after shim disconnected" id=1c75e8ed6dbcdf90ccc2d5c522e3effce7bbce742bb50bce60676f664fb86a91 namespace=k8s.io Jul 2 07:59:18.779735 env[1407]: time="2024-07-02T07:59:18.779134152Z" level=info msg="cleaning up dead shim" Jul 2 07:59:18.787393 env[1407]: time="2024-07-02T07:59:18.787352052Z" level=warning msg="cleanup warnings time=\"2024-07-02T07:59:18Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5453 runtime=io.containerd.runc.v2\n" Jul 2 07:59:19.610294 kubelet[2483]: I0702 07:59:19.610255 2483 scope.go:117] "RemoveContainer" containerID="1c75e8ed6dbcdf90ccc2d5c522e3effce7bbce742bb50bce60676f664fb86a91" Jul 2 07:59:19.612589 env[1407]: time="2024-07-02T07:59:19.612522794Z" level=info msg="CreateContainer within sandbox 
\"a29029659b0d807ba18a269daba1cb9c97219fc8216cf10dcd828677de6bb913\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Jul 2 07:59:19.668073 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1027695777.mount: Deactivated successfully. Jul 2 07:59:19.683055 env[1407]: time="2024-07-02T07:59:19.683002687Z" level=info msg="CreateContainer within sandbox \"a29029659b0d807ba18a269daba1cb9c97219fc8216cf10dcd828677de6bb913\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"c2e082452ab3c91686f68150f402c7963328135cdc8c08354dc1d4e5c3810474\"" Jul 2 07:59:19.683712 env[1407]: time="2024-07-02T07:59:19.683677487Z" level=info msg="StartContainer for \"c2e082452ab3c91686f68150f402c7963328135cdc8c08354dc1d4e5c3810474\"" Jul 2 07:59:19.705268 systemd[1]: Started cri-containerd-c2e082452ab3c91686f68150f402c7963328135cdc8c08354dc1d4e5c3810474.scope. Jul 2 07:59:19.759910 env[1407]: time="2024-07-02T07:59:19.759849179Z" level=info msg="StartContainer for \"c2e082452ab3c91686f68150f402c7963328135cdc8c08354dc1d4e5c3810474\" returns successfully" Jul 2 07:59:21.157644 kubelet[2483]: E0702 07:59:21.157500 2483 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-apiserver-ci-3510.3.5-a-61dd50c322.17de5670ea8f22a2", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"kube-apiserver-ci-3510.3.5-a-61dd50c322", UID:"41212e9ae6af1b95f30138e9b506cc1e", APIVersion:"v1", ResourceVersion:"", FieldPath:"spec.containers{kube-apiserver}"}, 
Reason:"Unhealthy", Message:"Readiness probe failed: HTTP probe failed with statuscode: 500", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.5-a-61dd50c322"}, FirstTimestamp:time.Date(2024, time.July, 2, 7, 59, 10, 673683106, time.Local), LastTimestamp:time.Date(2024, time.July, 2, 7, 59, 10, 673683106, time.Local), Count:1, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"ci-3510.3.5-a-61dd50c322"}': 'rpc error: code = Unavailable desc = error reading from server: read tcp 10.200.8.39:36252->10.200.8.14:2379: read: connection timed out' (will not retry!) Jul 2 07:59:28.070087 kubelet[2483]: I0702 07:59:28.070028 2483 status_manager.go:853] "Failed to get status for pod" podUID="d3c6599c950bf86a73161e97e720d54c" pod="kube-system/kube-controller-manager-ci-3510.3.5-a-61dd50c322" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.200.8.39:36366->10.200.8.14:2379: read: connection timed out" Jul 2 07:59:28.729955 kubelet[2483]: E0702 07:59:28.729904 2483 controller.go:193] "Failed to update lease" err="Put \"https://10.200.8.39:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.5-a-61dd50c322?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jul 2 07:59:36.046154 env[1407]: time="2024-07-02T07:59:36.046079316Z" level=info msg="StopPodSandbox for \"ef3c9cf22d1b2bfe9313e9e1c8f106a3fc238ee5af60773bcb21260f391bc939\"" Jul 2 07:59:36.046713 env[1407]: time="2024-07-02T07:59:36.046189916Z" level=info msg="TearDown network for sandbox \"ef3c9cf22d1b2bfe9313e9e1c8f106a3fc238ee5af60773bcb21260f391bc939\" successfully" Jul 2 07:59:36.046713 env[1407]: time="2024-07-02T07:59:36.046234616Z" level=info msg="StopPodSandbox for \"ef3c9cf22d1b2bfe9313e9e1c8f106a3fc238ee5af60773bcb21260f391bc939\" returns successfully" 
Jul 2 07:59:36.047026 env[1407]: time="2024-07-02T07:59:36.046991314Z" level=info msg="RemovePodSandbox for \"ef3c9cf22d1b2bfe9313e9e1c8f106a3fc238ee5af60773bcb21260f391bc939\"" Jul 2 07:59:36.047147 env[1407]: time="2024-07-02T07:59:36.047028714Z" level=info msg="Forcibly stopping sandbox \"ef3c9cf22d1b2bfe9313e9e1c8f106a3fc238ee5af60773bcb21260f391bc939\"" Jul 2 07:59:36.047147 env[1407]: time="2024-07-02T07:59:36.047125414Z" level=info msg="TearDown network for sandbox \"ef3c9cf22d1b2bfe9313e9e1c8f106a3fc238ee5af60773bcb21260f391bc939\" successfully" Jul 2 07:59:36.056805 env[1407]: time="2024-07-02T07:59:36.056657494Z" level=info msg="RemovePodSandbox \"ef3c9cf22d1b2bfe9313e9e1c8f106a3fc238ee5af60773bcb21260f391bc939\" returns successfully" Jul 2 07:59:36.057359 env[1407]: time="2024-07-02T07:59:36.057321393Z" level=info msg="StopPodSandbox for \"a25643010d576c833434f4f56019fc563a38ab594356424ec09a08427da695ba\"" Jul 2 07:59:36.057483 env[1407]: time="2024-07-02T07:59:36.057427293Z" level=info msg="TearDown network for sandbox \"a25643010d576c833434f4f56019fc563a38ab594356424ec09a08427da695ba\" successfully" Jul 2 07:59:36.057483 env[1407]: time="2024-07-02T07:59:36.057474293Z" level=info msg="StopPodSandbox for \"a25643010d576c833434f4f56019fc563a38ab594356424ec09a08427da695ba\" returns successfully" Jul 2 07:59:36.057939 env[1407]: time="2024-07-02T07:59:36.057904992Z" level=info msg="RemovePodSandbox for \"a25643010d576c833434f4f56019fc563a38ab594356424ec09a08427da695ba\"" Jul 2 07:59:36.058045 env[1407]: time="2024-07-02T07:59:36.057944192Z" level=info msg="Forcibly stopping sandbox \"a25643010d576c833434f4f56019fc563a38ab594356424ec09a08427da695ba\"" Jul 2 07:59:36.058045 env[1407]: time="2024-07-02T07:59:36.058031991Z" level=info msg="TearDown network for sandbox \"a25643010d576c833434f4f56019fc563a38ab594356424ec09a08427da695ba\" successfully" Jul 2 07:59:36.069504 env[1407]: time="2024-07-02T07:59:36.069462268Z" level=info msg="RemovePodSandbox 
\"a25643010d576c833434f4f56019fc563a38ab594356424ec09a08427da695ba\" returns successfully" Jul 2 07:59:36.069930 env[1407]: time="2024-07-02T07:59:36.069900467Z" level=info msg="StopPodSandbox for \"768165220cf3122cdd34456e80eb1ad0e665d02d79a312c1971253e32e5aa5cb\"" Jul 2 07:59:36.070043 env[1407]: time="2024-07-02T07:59:36.069985067Z" level=info msg="TearDown network for sandbox \"768165220cf3122cdd34456e80eb1ad0e665d02d79a312c1971253e32e5aa5cb\" successfully" Jul 2 07:59:36.070043 env[1407]: time="2024-07-02T07:59:36.070028367Z" level=info msg="StopPodSandbox for \"768165220cf3122cdd34456e80eb1ad0e665d02d79a312c1971253e32e5aa5cb\" returns successfully" Jul 2 07:59:36.070407 env[1407]: time="2024-07-02T07:59:36.070373766Z" level=info msg="RemovePodSandbox for \"768165220cf3122cdd34456e80eb1ad0e665d02d79a312c1971253e32e5aa5cb\"" Jul 2 07:59:36.070507 env[1407]: time="2024-07-02T07:59:36.070411466Z" level=info msg="Forcibly stopping sandbox \"768165220cf3122cdd34456e80eb1ad0e665d02d79a312c1971253e32e5aa5cb\"" Jul 2 07:59:36.070587 env[1407]: time="2024-07-02T07:59:36.070500766Z" level=info msg="TearDown network for sandbox \"768165220cf3122cdd34456e80eb1ad0e665d02d79a312c1971253e32e5aa5cb\" successfully" Jul 2 07:59:36.078995 env[1407]: time="2024-07-02T07:59:36.078956349Z" level=info msg="RemovePodSandbox \"768165220cf3122cdd34456e80eb1ad0e665d02d79a312c1971253e32e5aa5cb\" returns successfully" Jul 2 07:59:38.730810 kubelet[2483]: E0702 07:59:38.730756 2483 controller.go:193] "Failed to update lease" err="Put \"https://10.200.8.39:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.5-a-61dd50c322?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"