Feb 9 19:36:13.056892 kernel: Linux version 5.15.148-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Fri Feb 9 17:23:38 -00 2024
Feb 9 19:36:13.056924 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=4dbf910aaff679d18007a871aba359cc2cf6cb85992bb7598afad40271debbd6
Feb 9 19:36:13.056938 kernel: BIOS-provided physical RAM map:
Feb 9 19:36:13.056949 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Feb 9 19:36:13.056959 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved
Feb 9 19:36:13.056969 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003ff40fff] usable
Feb 9 19:36:13.056984 kernel: BIOS-e820: [mem 0x000000003ff41000-0x000000003ffc8fff] reserved
Feb 9 19:36:13.056995 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data
Feb 9 19:36:13.057005 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS
Feb 9 19:36:13.057016 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable
Feb 9 19:36:13.057027 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable
Feb 9 19:36:13.057037 kernel: printk: bootconsole [earlyser0] enabled
Feb 9 19:36:13.057048 kernel: NX (Execute Disable) protection: active
Feb 9 19:36:13.057059 kernel: efi: EFI v2.70 by Microsoft
Feb 9 19:36:13.057075 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff85000 SMBIOS 3.0=0x3ff83000 MEMATTR=0x3f5c8a98 RNG=0x3ffd1018
Feb 9 19:36:13.057088 kernel: random: crng init done
Feb 9 19:36:13.057099 kernel: SMBIOS 3.1.0 present.
Feb 9 19:36:13.057111 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 07/12/2023
Feb 9 19:36:13.057123 kernel: Hypervisor detected: Microsoft Hyper-V
Feb 9 19:36:13.057134 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x64e24, misc 0xbed7b2
Feb 9 19:36:13.057146 kernel: Hyper-V Host Build:20348-10.0-1-0.1544
Feb 9 19:36:13.057157 kernel: Hyper-V: Nested features: 0x1e0101
Feb 9 19:36:13.057170 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40
Feb 9 19:36:13.057182 kernel: Hyper-V: Using hypercall for remote TLB flush
Feb 9 19:36:13.057194 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Feb 9 19:36:13.057206 kernel: tsc: Marking TSC unstable due to running on Hyper-V
Feb 9 19:36:13.057218 kernel: tsc: Detected 2593.905 MHz processor
Feb 9 19:36:13.057229 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Feb 9 19:36:13.057242 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Feb 9 19:36:13.057253 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x400000000
Feb 9 19:36:13.057265 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Feb 9 19:36:13.057278 kernel: e820: update [mem 0x40000000-0xffffffff] usable ==> reserved
Feb 9 19:36:13.057291 kernel: last_pfn = 0x40000 max_arch_pfn = 0x400000000
Feb 9 19:36:13.057304 kernel: Using GB pages for direct mapping
Feb 9 19:36:13.057315 kernel: Secure boot disabled
Feb 9 19:36:13.057327 kernel: ACPI: Early table checksum verification disabled
Feb 9 19:36:13.057339 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL)
Feb 9 19:36:13.057351 kernel: ACPI: XSDT 0x000000003FFF90E8 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 9 19:36:13.058861 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 9 19:36:13.058879 kernel: ACPI: DSDT 0x000000003FFD6000 01E184 (v02 MSFTVM DSDT01 00000001 MSFT 05000000)
Feb 9 19:36:13.058902 kernel: ACPI: FACS 0x000000003FFFE000 000040
Feb 9 19:36:13.058915 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 9 19:36:13.058928 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 9 19:36:13.058941 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 9 19:36:13.058954 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 9 19:36:13.058967 kernel: ACPI: SRAT 0x000000003FFD4000 0002D0 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 9 19:36:13.058982 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 9 19:36:13.058995 kernel: ACPI: FPDT 0x000000003FFD2000 000034 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 9 19:36:13.059007 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113]
Feb 9 19:36:13.059020 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4183]
Feb 9 19:36:13.059033 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f]
Feb 9 19:36:13.059046 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063]
Feb 9 19:36:13.059058 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f]
Feb 9 19:36:13.059071 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027]
Feb 9 19:36:13.059087 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057]
Feb 9 19:36:13.059099 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd42cf]
Feb 9 19:36:13.059112 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037]
Feb 9 19:36:13.059125 kernel: ACPI: Reserving FPDT table memory at [mem 0x3ffd2000-0x3ffd2033]
Feb 9 19:36:13.059138 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Feb 9 19:36:13.059151 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Feb 9 19:36:13.059163 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug
Feb 9 19:36:13.059176 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug
Feb 9 19:36:13.059189 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug
Feb 9 19:36:13.059204 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug
Feb 9 19:36:13.059217 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug
Feb 9 19:36:13.059230 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug
Feb 9 19:36:13.059243 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug
Feb 9 19:36:13.059256 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug
Feb 9 19:36:13.059269 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug
Feb 9 19:36:13.059282 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug
Feb 9 19:36:13.059294 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug
Feb 9 19:36:13.059307 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug
Feb 9 19:36:13.059323 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000000-0x1ffffffffffff] hotplug
Feb 9 19:36:13.059336 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2000000000000-0x3ffffffffffff] hotplug
Feb 9 19:36:13.059348 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x4000000000000-0x7ffffffffffff] hotplug
Feb 9 19:36:13.059371 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x8000000000000-0xfffffffffffff] hotplug
Feb 9 19:36:13.059385 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff]
Feb 9 19:36:13.059398 kernel: NODE_DATA(0) allocated [mem 0x2bfffa000-0x2bfffffff]
Feb 9 19:36:13.059411 kernel: Zone ranges:
Feb 9 19:36:13.059424 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Feb 9 19:36:13.059437 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Feb 9 19:36:13.059453 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff]
Feb 9 19:36:13.059466 kernel: Movable zone start for each node
Feb 9 19:36:13.059478 kernel: Early memory node ranges
Feb 9 19:36:13.059491 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Feb 9 19:36:13.059504 kernel: node 0: [mem 0x0000000000100000-0x000000003ff40fff]
Feb 9 19:36:13.059517 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff]
Feb 9 19:36:13.059529 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff]
Feb 9 19:36:13.059542 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff]
Feb 9 19:36:13.059555 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Feb 9 19:36:13.059570 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Feb 9 19:36:13.059583 kernel: On node 0, zone DMA32: 190 pages in unavailable ranges
Feb 9 19:36:13.059596 kernel: ACPI: PM-Timer IO Port: 0x408
Feb 9 19:36:13.059608 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1])
Feb 9 19:36:13.059621 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23
Feb 9 19:36:13.059634 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Feb 9 19:36:13.059647 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Feb 9 19:36:13.059659 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200
Feb 9 19:36:13.059672 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Feb 9 19:36:13.059687 kernel: [mem 0x40000000-0xffffffff] available for PCI devices
Feb 9 19:36:13.059700 kernel: Booting paravirtualized kernel on Hyper-V
Feb 9 19:36:13.059713 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Feb 9 19:36:13.059726 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:2 nr_node_ids:1
Feb 9 19:36:13.059739 kernel: percpu: Embedded 55 pages/cpu s185624 r8192 d31464 u1048576
Feb 9 19:36:13.059752 kernel: pcpu-alloc: s185624 r8192 d31464 u1048576 alloc=1*2097152
Feb 9 19:36:13.059764 kernel: pcpu-alloc: [0] 0 1
Feb 9 19:36:13.059776 kernel: Hyper-V: PV spinlocks enabled
Feb 9 19:36:13.059789 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Feb 9 19:36:13.059804 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2062618
Feb 9 19:36:13.059818 kernel: Policy zone: Normal
Feb 9 19:36:13.059832 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=4dbf910aaff679d18007a871aba359cc2cf6cb85992bb7598afad40271debbd6
Feb 9 19:36:13.059846 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 9 19:36:13.059859 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Feb 9 19:36:13.059872 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 9 19:36:13.059885 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 9 19:36:13.059898 kernel: Memory: 8073736K/8387460K available (12294K kernel code, 2275K rwdata, 13700K rodata, 45496K init, 4048K bss, 313464K reserved, 0K cma-reserved)
Feb 9 19:36:13.059914 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Feb 9 19:36:13.059927 kernel: ftrace: allocating 34475 entries in 135 pages
Feb 9 19:36:13.059949 kernel: ftrace: allocated 135 pages with 4 groups
Feb 9 19:36:13.059965 kernel: rcu: Hierarchical RCU implementation.
Feb 9 19:36:13.059979 kernel: rcu: RCU event tracing is enabled.
Feb 9 19:36:13.059993 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Feb 9 19:36:13.060006 kernel: Rude variant of Tasks RCU enabled.
Feb 9 19:36:13.060020 kernel: Tracing variant of Tasks RCU enabled.
Feb 9 19:36:13.060034 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 9 19:36:13.060047 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Feb 9 19:36:13.060061 kernel: Using NULL legacy PIC
Feb 9 19:36:13.060077 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0
Feb 9 19:36:13.060090 kernel: Console: colour dummy device 80x25
Feb 9 19:36:13.060104 kernel: printk: console [tty1] enabled
Feb 9 19:36:13.060117 kernel: printk: console [ttyS0] enabled
Feb 9 19:36:13.060131 kernel: printk: bootconsole [earlyser0] disabled
Feb 9 19:36:13.060147 kernel: ACPI: Core revision 20210730
Feb 9 19:36:13.060160 kernel: Failed to register legacy timer interrupt
Feb 9 19:36:13.060174 kernel: APIC: Switch to symmetric I/O mode setup
Feb 9 19:36:13.060188 kernel: Hyper-V: Using IPI hypercalls
Feb 9 19:36:13.060201 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5187.81 BogoMIPS (lpj=2593905)
Feb 9 19:36:13.060214 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Feb 9 19:36:13.060229 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4
Feb 9 19:36:13.060242 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Feb 9 19:36:13.060256 kernel: Spectre V2 : Mitigation: Retpolines
Feb 9 19:36:13.060269 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Feb 9 19:36:13.060285 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Feb 9 19:36:13.060298 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Feb 9 19:36:13.060312 kernel: RETBleed: Vulnerable
Feb 9 19:36:13.060326 kernel: Speculative Store Bypass: Vulnerable
Feb 9 19:36:13.060339 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode
Feb 9 19:36:13.060352 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Feb 9 19:36:13.060373 kernel: GDS: Unknown: Dependent on hypervisor status
Feb 9 19:36:13.060387 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Feb 9 19:36:13.060400 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Feb 9 19:36:13.060414 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Feb 9 19:36:13.060430 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Feb 9 19:36:13.060443 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Feb 9 19:36:13.060456 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Feb 9 19:36:13.060469 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Feb 9 19:36:13.060483 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Feb 9 19:36:13.060495 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Feb 9 19:36:13.060509 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Feb 9 19:36:13.060522 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format.
Feb 9 19:36:13.060535 kernel: Freeing SMP alternatives memory: 32K
Feb 9 19:36:13.060548 kernel: pid_max: default: 32768 minimum: 301
Feb 9 19:36:13.060561 kernel: LSM: Security Framework initializing
Feb 9 19:36:13.060574 kernel: SELinux: Initializing.
Feb 9 19:36:13.060590 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Feb 9 19:36:13.060603 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Feb 9 19:36:13.060616 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7)
Feb 9 19:36:13.060630 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Feb 9 19:36:13.060643 kernel: signal: max sigframe size: 3632
Feb 9 19:36:13.060657 kernel: rcu: Hierarchical SRCU implementation.
Feb 9 19:36:13.060670 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Feb 9 19:36:13.060684 kernel: smp: Bringing up secondary CPUs ...
Feb 9 19:36:13.060697 kernel: x86: Booting SMP configuration:
Feb 9 19:36:13.060710 kernel: .... node #0, CPUs: #1
Feb 9 19:36:13.060727 kernel: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
Feb 9 19:36:13.060742 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Feb 9 19:36:13.060756 kernel: smp: Brought up 1 node, 2 CPUs
Feb 9 19:36:13.060769 kernel: smpboot: Max logical packages: 1
Feb 9 19:36:13.060783 kernel: smpboot: Total of 2 processors activated (10375.62 BogoMIPS)
Feb 9 19:36:13.060797 kernel: devtmpfs: initialized
Feb 9 19:36:13.060810 kernel: x86/mm: Memory block size: 128MB
Feb 9 19:36:13.060824 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes)
Feb 9 19:36:13.060840 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 9 19:36:13.060854 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Feb 9 19:36:13.060867 kernel: pinctrl core: initialized pinctrl subsystem
Feb 9 19:36:13.060881 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 9 19:36:13.060894 kernel: audit: initializing netlink subsys (disabled)
Feb 9 19:36:13.060908 kernel: audit: type=2000 audit(1707507371.024:1): state=initialized audit_enabled=0 res=1
Feb 9 19:36:13.060921 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 9 19:36:13.060934 kernel: thermal_sys: Registered thermal governor 'user_space'
Feb 9 19:36:13.060948 kernel: cpuidle: using governor menu
Feb 9 19:36:13.060964 kernel: ACPI: bus type PCI registered
Feb 9 19:36:13.060978 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 9 19:36:13.060991 kernel: dca service started, version 1.12.1
Feb 9 19:36:13.061005 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Feb 9 19:36:13.061018 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Feb 9 19:36:13.061032 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Feb 9 19:36:13.061045 kernel: ACPI: Added _OSI(Module Device)
Feb 9 19:36:13.061060 kernel: ACPI: Added _OSI(Processor Device)
Feb 9 19:36:13.061073 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 9 19:36:13.061089 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 9 19:36:13.061102 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Feb 9 19:36:13.061116 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Feb 9 19:36:13.061129 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Feb 9 19:36:13.061143 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 9 19:36:13.061156 kernel: ACPI: Interpreter enabled
Feb 9 19:36:13.061170 kernel: ACPI: PM: (supports S0 S5)
Feb 9 19:36:13.061183 kernel: ACPI: Using IOAPIC for interrupt routing
Feb 9 19:36:13.061197 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Feb 9 19:36:13.061213 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F
Feb 9 19:36:13.061226 kernel: iommu: Default domain type: Translated
Feb 9 19:36:13.061240 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Feb 9 19:36:13.061253 kernel: vgaarb: loaded
Feb 9 19:36:13.061266 kernel: pps_core: LinuxPPS API ver. 1 registered
Feb 9 19:36:13.061280 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Feb 9 19:36:13.061294 kernel: PTP clock support registered
Feb 9 19:36:13.061308 kernel: Registered efivars operations
Feb 9 19:36:13.061321 kernel: PCI: Using ACPI for IRQ routing
Feb 9 19:36:13.061334 kernel: PCI: System does not support PCI
Feb 9 19:36:13.061350 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page
Feb 9 19:36:13.061371 kernel: VFS: Disk quotas dquot_6.6.0
Feb 9 19:36:13.061385 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 9 19:36:13.061398 kernel: pnp: PnP ACPI init
Feb 9 19:36:13.061412 kernel: pnp: PnP ACPI: found 3 devices
Feb 9 19:36:13.061425 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Feb 9 19:36:13.061439 kernel: NET: Registered PF_INET protocol family
Feb 9 19:36:13.061453 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Feb 9 19:36:13.061470 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Feb 9 19:36:13.061483 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 9 19:36:13.061497 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 9 19:36:13.061511 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Feb 9 19:36:13.061525 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Feb 9 19:36:13.061538 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Feb 9 19:36:13.061552 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Feb 9 19:36:13.061566 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 9 19:36:13.061579 kernel: NET: Registered PF_XDP protocol family
Feb 9 19:36:13.061595 kernel: PCI: CLS 0 bytes, default 64
Feb 9 19:36:13.061609 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Feb 9 19:36:13.061623 kernel: software IO TLB: mapped [mem 0x000000003a8ad000-0x000000003e8ad000] (64MB)
Feb 9 19:36:13.061637 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Feb 9 19:36:13.061651 kernel: Initialise system trusted keyrings
Feb 9 19:36:13.061664 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Feb 9 19:36:13.061677 kernel: Key type asymmetric registered
Feb 9 19:36:13.061690 kernel: Asymmetric key parser 'x509' registered
Feb 9 19:36:13.061703 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Feb 9 19:36:13.061719 kernel: io scheduler mq-deadline registered
Feb 9 19:36:13.061733 kernel: io scheduler kyber registered
Feb 9 19:36:13.061746 kernel: io scheduler bfq registered
Feb 9 19:36:13.061760 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Feb 9 19:36:13.061774 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 9 19:36:13.061787 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Feb 9 19:36:13.061801 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Feb 9 19:36:13.061814 kernel: i8042: PNP: No PS/2 controller found.
Feb 9 19:36:13.061976 kernel: rtc_cmos 00:02: registered as rtc0
Feb 9 19:36:13.062091 kernel: rtc_cmos 00:02: setting system clock to 2024-02-09T19:36:12 UTC (1707507372)
Feb 9 19:36:13.062200 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram
Feb 9 19:36:13.062217 kernel: fail to initialize ptp_kvm
Feb 9 19:36:13.062231 kernel: intel_pstate: CPU model not supported
Feb 9 19:36:13.062245 kernel: efifb: probing for efifb
Feb 9 19:36:13.062259 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Feb 9 19:36:13.062272 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Feb 9 19:36:13.062286 kernel: efifb: scrolling: redraw
Feb 9 19:36:13.062302 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Feb 9 19:36:13.062316 kernel: Console: switching to colour frame buffer device 128x48
Feb 9 19:36:13.062329 kernel: fb0: EFI VGA frame buffer device
Feb 9 19:36:13.062343 kernel: pstore: Registered efi as persistent store backend
Feb 9 19:36:13.062369 kernel: NET: Registered PF_INET6 protocol family
Feb 9 19:36:13.062381 kernel: Segment Routing with IPv6
Feb 9 19:36:13.062391 kernel: In-situ OAM (IOAM) with IPv6
Feb 9 19:36:13.062401 kernel: NET: Registered PF_PACKET protocol family
Feb 9 19:36:13.062413 kernel: Key type dns_resolver registered
Feb 9 19:36:13.062428 kernel: IPI shorthand broadcast: enabled
Feb 9 19:36:13.062440 kernel: sched_clock: Marking stable (819522300, 23524600)->(1062350000, -219303100)
Feb 9 19:36:13.062451 kernel: registered taskstats version 1
Feb 9 19:36:13.062462 kernel: Loading compiled-in X.509 certificates
Feb 9 19:36:13.062473 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.148-flatcar: 56154408a02b3bd349a9e9180c9bd837fd1d636a'
Feb 9 19:36:13.062483 kernel: Key type .fscrypt registered
Feb 9 19:36:13.062490 kernel: Key type fscrypt-provisioning registered
Feb 9 19:36:13.062498 kernel: pstore: Using crash dump compression: deflate
Feb 9 19:36:13.062511 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 9 19:36:13.062518 kernel: ima: Allocated hash algorithm: sha1
Feb 9 19:36:13.062525 kernel: ima: No architecture policies found
Feb 9 19:36:13.062536 kernel: Freeing unused kernel image (initmem) memory: 45496K
Feb 9 19:36:13.062545 kernel: Write protecting the kernel read-only data: 28672k
Feb 9 19:36:13.062553 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
Feb 9 19:36:13.062561 kernel: Freeing unused kernel image (rodata/data gap) memory: 636K
Feb 9 19:36:13.062568 kernel: Run /init as init process
Feb 9 19:36:13.062575 kernel: with arguments:
Feb 9 19:36:13.062585 kernel: /init
Feb 9 19:36:13.062595 kernel: with environment:
Feb 9 19:36:13.062604 kernel: HOME=/
Feb 9 19:36:13.062611 kernel: TERM=linux
Feb 9 19:36:13.062618 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 9 19:36:13.062631 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Feb 9 19:36:13.062640 systemd[1]: Detected virtualization microsoft.
Feb 9 19:36:13.062648 systemd[1]: Detected architecture x86-64.
Feb 9 19:36:13.062657 systemd[1]: Running in initrd.
Feb 9 19:36:13.062665 systemd[1]: No hostname configured, using default hostname.
Feb 9 19:36:13.062672 systemd[1]: Hostname set to <localhost>.
Feb 9 19:36:13.062680 systemd[1]: Initializing machine ID from random generator.
Feb 9 19:36:13.062688 systemd[1]: Queued start job for default target initrd.target.
Feb 9 19:36:13.062698 systemd[1]: Started systemd-ask-password-console.path.
Feb 9 19:36:13.062705 systemd[1]: Reached target cryptsetup.target.
Feb 9 19:36:13.062712 systemd[1]: Reached target paths.target.
Feb 9 19:36:13.062720 systemd[1]: Reached target slices.target.
Feb 9 19:36:13.062729 systemd[1]: Reached target swap.target.
Feb 9 19:36:13.062739 systemd[1]: Reached target timers.target.
Feb 9 19:36:13.062748 systemd[1]: Listening on iscsid.socket.
Feb 9 19:36:13.062755 systemd[1]: Listening on iscsiuio.socket.
Feb 9 19:36:13.062762 systemd[1]: Listening on systemd-journald-audit.socket.
Feb 9 19:36:13.062770 systemd[1]: Listening on systemd-journald-dev-log.socket.
Feb 9 19:36:13.062778 systemd[1]: Listening on systemd-journald.socket.
Feb 9 19:36:13.062787 systemd[1]: Listening on systemd-networkd.socket.
Feb 9 19:36:13.062795 systemd[1]: Listening on systemd-udevd-control.socket.
Feb 9 19:36:13.062802 systemd[1]: Listening on systemd-udevd-kernel.socket.
Feb 9 19:36:13.062810 systemd[1]: Reached target sockets.target.
Feb 9 19:36:13.062817 systemd[1]: Starting kmod-static-nodes.service...
Feb 9 19:36:13.062824 systemd[1]: Finished network-cleanup.service.
Feb 9 19:36:13.062832 systemd[1]: Starting systemd-fsck-usr.service...
Feb 9 19:36:13.062839 systemd[1]: Starting systemd-journald.service...
Feb 9 19:36:13.062847 systemd[1]: Starting systemd-modules-load.service...
Feb 9 19:36:13.062856 systemd[1]: Starting systemd-resolved.service...
Feb 9 19:36:13.062864 systemd[1]: Starting systemd-vconsole-setup.service...
Feb 9 19:36:13.062871 systemd[1]: Finished kmod-static-nodes.service.
Feb 9 19:36:13.062879 kernel: audit: type=1130 audit(1707507373.062:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:36:13.062891 systemd-journald[183]: Journal started
Feb 9 19:36:13.062939 systemd-journald[183]: Runtime Journal (/run/log/journal/594eb27b221c425594875610fe94cab6) is 8.0M, max 159.0M, 151.0M free.
Feb 9 19:36:13.062000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:36:13.043404 systemd-modules-load[184]: Inserted module 'overlay'
Feb 9 19:36:13.100716 systemd[1]: Started systemd-journald.service.
Feb 9 19:36:13.091071 systemd[1]: Finished systemd-fsck-usr.service.
Feb 9 19:36:13.093418 systemd[1]: Finished systemd-vconsole-setup.service.
Feb 9 19:36:13.112376 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 9 19:36:13.096830 systemd[1]: Starting dracut-cmdline-ask.service...
Feb 9 19:36:13.090000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:36:13.125795 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Feb 9 19:36:13.128919 kernel: audit: type=1130 audit(1707507373.090:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:36:13.092000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:36:13.143150 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Feb 9 19:36:13.165732 kernel: audit: type=1130 audit(1707507373.092:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:36:13.165763 kernel: audit: type=1130 audit(1707507373.095:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:36:13.165779 kernel: Bridge firewalling registered
Feb 9 19:36:13.095000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:36:13.166036 systemd[1]: Finished dracut-cmdline-ask.service.
Feb 9 19:36:13.169388 systemd[1]: Starting dracut-cmdline.service...
Feb 9 19:36:13.213626 kernel: audit: type=1130 audit(1707507373.165:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:36:13.213661 kernel: audit: type=1130 audit(1707507373.168:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:36:13.165000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:36:13.168000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:36:13.185100 systemd-modules-load[184]: Inserted module 'br_netfilter'
Feb 9 19:36:13.216155 dracut-cmdline[200]: dracut-dracut-053
Feb 9 19:36:13.216155 dracut-cmdline[200]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LA
Feb 9 19:36:13.216155 dracut-cmdline[200]: BEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=4dbf910aaff679d18007a871aba359cc2cf6cb85992bb7598afad40271debbd6
Feb 9 19:36:13.243588 kernel: SCSI subsystem initialized
Feb 9 19:36:13.243619 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 9 19:36:13.192838 systemd-resolved[185]: Positive Trust Anchors:
Feb 9 19:36:13.250053 kernel: device-mapper: uevent: version 1.0.3
Feb 9 19:36:13.192856 systemd-resolved[185]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 9 19:36:13.260136 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Feb 9 19:36:13.192905 systemd-resolved[185]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Feb 9 19:36:13.196581 systemd-resolved[185]: Defaulting to hostname 'linux'.
Feb 9 19:36:13.245048 systemd[1]: Started systemd-resolved.service.
Feb 9 19:36:13.276434 systemd[1]: Reached target nss-lookup.target.
Feb 9 19:36:13.296508 kernel: audit: type=1130 audit(1707507373.260:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:36:13.260000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:36:13.296717 systemd-modules-load[184]: Inserted module 'dm_multipath'
Feb 9 19:36:13.302000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:36:13.300001 systemd[1]: Finished systemd-modules-load.service.
Feb 9 19:36:13.317872 kernel: audit: type=1130 audit(1707507373.302:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:36:13.314652 systemd[1]: Starting systemd-sysctl.service...
Feb 9 19:36:13.327013 systemd[1]: Finished systemd-sysctl.service.
Feb 9 19:36:13.342486 kernel: audit: type=1130 audit(1707507373.328:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:36:13.328000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:36:13.359383 kernel: Loading iSCSI transport class v2.0-870.
Feb 9 19:36:13.372382 kernel: iscsi: registered transport (tcp)
Feb 9 19:36:13.397302 kernel: iscsi: registered transport (qla4xxx)
Feb 9 19:36:13.397383 kernel: QLogic iSCSI HBA Driver
Feb 9 19:36:13.426920 systemd[1]: Finished dracut-cmdline.service.
Feb 9 19:36:13.428000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:36:13.429724 systemd[1]: Starting dracut-pre-udev.service...
Feb 9 19:36:13.481393 kernel: raid6: avx512x4 gen() 18011 MB/s
Feb 9 19:36:13.501375 kernel: raid6: avx512x4 xor() 7390 MB/s
Feb 9 19:36:13.520374 kernel: raid6: avx512x2 gen() 18190 MB/s
Feb 9 19:36:13.540377 kernel: raid6: avx512x2 xor() 29129 MB/s
Feb 9 19:36:13.560390 kernel: raid6: avx512x1 gen() 18019 MB/s
Feb 9 19:36:13.580370 kernel: raid6: avx512x1 xor() 26507 MB/s
Feb 9 19:36:13.600378 kernel: raid6: avx2x4 gen() 18164 MB/s
Feb 9 19:36:13.620371 kernel: raid6: avx2x4 xor() 6663 MB/s
Feb 9 19:36:13.640370 kernel: raid6: avx2x2 gen() 18224 MB/s
Feb 9 19:36:13.660375 kernel: raid6: avx2x2 xor() 21986 MB/s
Feb 9 19:36:13.680369 kernel: raid6: avx2x1 gen() 13602 MB/s
Feb 9 19:36:13.700371 kernel: raid6: avx2x1 xor() 19501 MB/s
Feb 9 19:36:13.720372 kernel: raid6: sse2x4 gen() 11729 MB/s
Feb 9 19:36:13.740371 kernel: raid6: sse2x4 xor() 6085 MB/s
Feb 9 19:36:13.760371 kernel: raid6: sse2x2 gen() 12716 MB/s
Feb 9 19:36:13.780371 kernel: raid6: sse2x2 xor() 7411 MB/s
Feb 9 19:36:13.800371 kernel: raid6: sse2x1 gen() 11447 MB/s
Feb 9 19:36:13.823623 kernel: raid6: sse2x1 xor() 5930 MB/s
Feb 9 19:36:13.823640 kernel: raid6: using algorithm avx2x2 gen() 18224 MB/s
Feb 9 19:36:13.823650 kernel: raid6: .... xor() 21986 MB/s, rmw enabled
Feb 9 19:36:13.827403 kernel: raid6: using avx512x2 recovery algorithm
Feb 9 19:36:13.847387 kernel: xor: automatically using best checksumming function avx
Feb 9 19:36:13.946389 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no
Feb 9 19:36:13.955206 systemd[1]: Finished dracut-pre-udev.service.
Feb 9 19:36:13.958000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:36:13.959000 audit: BPF prog-id=7 op=LOAD
Feb 9 19:36:13.959000 audit: BPF prog-id=8 op=LOAD
Feb 9 19:36:13.960316 systemd[1]: Starting systemd-udevd.service...
Feb 9 19:36:13.975176 systemd-udevd[383]: Using default interface naming scheme 'v252'.
Feb 9 19:36:13.979958 systemd[1]: Started systemd-udevd.service.
Feb 9 19:36:13.981000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:36:13.984436 systemd[1]: Starting dracut-pre-trigger.service...
Feb 9 19:36:13.999822 dracut-pre-trigger[387]: rd.md=0: removing MD RAID activation
Feb 9 19:36:14.035429 systemd[1]: Finished dracut-pre-trigger.service.
Feb 9 19:36:14.036000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:36:14.037499 systemd[1]: Starting systemd-udev-trigger.service...
Feb 9 19:36:14.074721 systemd[1]: Finished systemd-udev-trigger.service.
Feb 9 19:36:14.076000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:36:14.122382 kernel: cryptd: max_cpu_qlen set to 1000
Feb 9 19:36:14.139384 kernel: hv_vmbus: Vmbus version:5.2
Feb 9 19:36:14.175387 kernel: hv_vmbus: registering driver hyperv_keyboard
Feb 9 19:36:14.192382 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0
Feb 9 19:36:14.204881 kernel: AVX2 version of gcm_enc/dec engaged.
Feb 9 19:36:14.204938 kernel: hv_vmbus: registering driver hv_storvsc
Feb 9 19:36:14.204961 kernel: AES CTR mode by8 optimization enabled
Feb 9 19:36:14.211774 kernel: hid: raw HID events driver (C) Jiri Kosina
Feb 9 19:36:14.212380 kernel: hv_vmbus: registering driver hv_netvsc
Feb 9 19:36:14.221379 kernel: scsi host0: storvsc_host_t
Feb 9 19:36:14.221588 kernel: hv_vmbus: registering driver hid_hyperv
Feb 9 19:36:14.228706 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Feb 9 19:36:14.228753 kernel: scsi host1: storvsc_host_t
Feb 9 19:36:14.231657 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0
Feb 9 19:36:14.241348 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1
Feb 9 19:36:14.250389 kernel: hid-generic 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Feb 9 19:36:14.286086 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Feb 9 19:36:14.286348 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Feb 9 19:36:14.286486 kernel: sd 0:0:0:0: [sda] Write Protect is off
Feb 9 19:36:14.295071 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Feb 9 19:36:14.295430 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Feb 9 19:36:14.300378 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Feb 9 19:36:14.305641 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Feb 9 19:36:14.315970 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Feb 9 19:36:14.316179 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Feb 9 19:36:14.317387 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Feb 9 19:36:14.420374 kernel: hv_netvsc 000d3ab9-11fe-000d-3ab9-11fe000d3ab9 eth0: VF slot 1 added
Feb 9 19:36:14.430378 kernel: hv_vmbus: registering driver hv_pci
Feb 9 19:36:14.430430 kernel: hv_pci 6fa0f890-adef-433a-b15a-67aaf5f48c14: PCI VMBus probing: Using version 0x10004
Feb 9 19:36:14.450014 kernel: hv_pci 6fa0f890-adef-433a-b15a-67aaf5f48c14: PCI host bridge to bus adef:00
Feb 9 19:36:14.450274 kernel: pci_bus adef:00: root bus resource [mem 0xfe0000000-0xfe00fffff window]
Feb 9 19:36:14.450418 kernel: pci_bus adef:00: No busn resource found for root bus, will use [bus 00-ff]
Feb 9 19:36:14.461431 kernel: pci adef:00:02.0: [15b3:1016] type 00 class 0x020000
Feb 9 19:36:14.472429 kernel: pci adef:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref]
Feb 9 19:36:14.490547 kernel: pci adef:00:02.0: enabling Extended Tags
Feb 9 19:36:14.507588 kernel: pci adef:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at adef:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link)
Feb 9 19:36:14.516818 kernel: pci_bus adef:00: busn_res: [bus 00-ff] end is updated to 00
Feb 9 19:36:14.517019 kernel: pci adef:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref]
Feb 9 19:36:14.611389 kernel: mlx5_core adef:00:02.0: firmware version: 14.30.1350
Feb 9 19:36:14.641865 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
Feb 9 19:36:14.718383 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (446)
Feb 9 19:36:14.743806 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Feb 9 19:36:14.775432 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
Feb 9 19:36:14.780833 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device.
Feb 9 19:36:14.810115 systemd[1]: Starting disk-uuid.service...
Feb 9 19:36:14.827916 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
Feb 9 19:36:14.833281 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Feb 9 19:36:14.868382 kernel: mlx5_core adef:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0)
Feb 9 19:36:15.065876 kernel: mlx5_core adef:00:02.0: Supported tc offload range - chains: 1, prios: 1
Feb 9 19:36:15.066093 kernel: mlx5_core adef:00:02.0: mlx5e_tc_post_act_init:40:(pid 188): firmware level support is missing
Feb 9 19:36:15.075382 kernel: hv_netvsc 000d3ab9-11fe-000d-3ab9-11fe000d3ab9 eth0: VF registering: eth1
Feb 9 19:36:15.075588 kernel: mlx5_core adef:00:02.0 eth1: joined to eth0
Feb 9 19:36:15.092380 kernel: mlx5_core adef:00:02.0 enP44527s1: renamed from eth1
Feb 9 19:36:15.850382 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Feb 9 19:36:15.852062 disk-uuid[554]: The operation has completed successfully.
Feb 9 19:36:15.938833 systemd[1]: disk-uuid.service: Deactivated successfully.
Feb 9 19:36:15.942000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:36:15.942000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:36:15.938952 systemd[1]: Finished disk-uuid.service.
Feb 9 19:36:15.946021 systemd[1]: Starting verity-setup.service...
Feb 9 19:36:15.979382 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Feb 9 19:36:16.155819 systemd[1]: Found device dev-mapper-usr.device.
Feb 9 19:36:16.160395 systemd[1]: Mounting sysusr-usr.mount...
Feb 9 19:36:16.165000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:36:16.162602 systemd[1]: Finished verity-setup.service.
Feb 9 19:36:16.236408 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none.
Feb 9 19:36:16.236134 systemd[1]: Mounted sysusr-usr.mount.
Feb 9 19:36:16.239952 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met.
Feb 9 19:36:16.244310 systemd[1]: Starting ignition-setup.service...
Feb 9 19:36:16.247349 systemd[1]: Starting parse-ip-for-networkd.service...
Feb 9 19:36:16.273482 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Feb 9 19:36:16.273568 kernel: BTRFS info (device sda6): using free space tree
Feb 9 19:36:16.273589 kernel: BTRFS info (device sda6): has skinny extents
Feb 9 19:36:16.318676 systemd[1]: Finished parse-ip-for-networkd.service.
Feb 9 19:36:16.323000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:36:16.323000 audit: BPF prog-id=9 op=LOAD
Feb 9 19:36:16.324808 systemd[1]: Starting systemd-networkd.service...
Feb 9 19:36:16.349995 systemd-networkd[828]: lo: Link UP
Feb 9 19:36:16.352334 systemd-networkd[828]: lo: Gained carrier
Feb 9 19:36:16.353435 systemd[1]: mnt-oem.mount: Deactivated successfully.
Feb 9 19:36:16.355042 systemd-networkd[828]: Enumeration completed
Feb 9 19:36:16.359000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:36:16.355631 systemd[1]: Started systemd-networkd.service.
Feb 9 19:36:16.358499 systemd-networkd[828]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 9 19:36:16.359840 systemd[1]: Reached target network.target.
Feb 9 19:36:16.365838 systemd[1]: Starting iscsiuio.service...
Feb 9 19:36:16.375842 systemd[1]: Started iscsiuio.service.
Feb 9 19:36:16.379000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:36:16.380548 systemd[1]: Starting iscsid.service...
Feb 9 19:36:16.387091 iscsid[837]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
Feb 9 19:36:16.387091 iscsid[837]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier].
Feb 9 19:36:16.387091 iscsid[837]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
Feb 9 19:36:16.387091 iscsid[837]: If using hardware iscsi like qla4xxx this message can be ignored.
Feb 9 19:36:16.387091 iscsid[837]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
Feb 9 19:36:16.387091 iscsid[837]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf
Feb 9 19:36:16.388000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:36:16.418000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:36:16.387135 systemd[1]: Started iscsid.service.
Feb 9 19:36:16.438457 kernel: mlx5_core adef:00:02.0 enP44527s1: Link up
Feb 9 19:36:16.389858 systemd[1]: Starting dracut-initqueue.service...
Feb 9 19:36:16.415918 systemd[1]: Finished dracut-initqueue.service.
Feb 9 19:36:16.418915 systemd[1]: Reached target remote-fs-pre.target.
Feb 9 19:36:16.431015 systemd[1]: Reached target remote-cryptsetup.target.
Feb 9 19:36:16.435023 systemd[1]: Reached target remote-fs.target.
Feb 9 19:36:16.456000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:36:16.451814 systemd[1]: Starting dracut-pre-mount.service...
Feb 9 19:36:16.454400 systemd[1]: Finished ignition-setup.service.
Feb 9 19:36:16.458087 systemd[1]: Starting ignition-fetch-offline.service...
Feb 9 19:36:16.468410 systemd[1]: Finished dracut-pre-mount.service.
Feb 9 19:36:16.472000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:36:16.517465 kernel: hv_netvsc 000d3ab9-11fe-000d-3ab9-11fe000d3ab9 eth0: Data path switched to VF: enP44527s1
Feb 9 19:36:16.517747 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Feb 9 19:36:16.518134 systemd-networkd[828]: enP44527s1: Link UP
Feb 9 19:36:16.518279 systemd-networkd[828]: eth0: Link UP
Feb 9 19:36:16.518506 systemd-networkd[828]: eth0: Gained carrier
Feb 9 19:36:16.526517 systemd-networkd[828]: enP44527s1: Gained carrier
Feb 9 19:36:16.545460 systemd-networkd[828]: eth0: DHCPv4 address 10.200.8.13/24, gateway 10.200.8.1 acquired from 168.63.129.16
Feb 9 19:36:18.552513 systemd-networkd[828]: eth0: Gained IPv6LL
Feb 9 19:36:19.175281 ignition[849]: Ignition 2.14.0
Feb 9 19:36:19.175300 ignition[849]: Stage: fetch-offline
Feb 9 19:36:19.175440 ignition[849]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 9 19:36:19.175507 ignition[849]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Feb 9 19:36:19.256812 ignition[849]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Feb 9 19:36:19.257050 ignition[849]: parsed url from cmdline: ""
Feb 9 19:36:19.257054 ignition[849]: no config URL provided
Feb 9 19:36:19.257060 ignition[849]: reading system config file "/usr/lib/ignition/user.ign"
Feb 9 19:36:19.286769 kernel: kauditd_printk_skb: 18 callbacks suppressed
Feb 9 19:36:19.286806 kernel: audit: type=1130 audit(1707507379.265:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:36:19.265000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:36:19.262558 systemd[1]: Finished ignition-fetch-offline.service.
Feb 9 19:36:19.257070 ignition[849]: no config at "/usr/lib/ignition/user.ign"
Feb 9 19:36:19.267308 systemd[1]: Starting ignition-fetch.service...
Feb 9 19:36:19.257078 ignition[849]: failed to fetch config: resource requires networking
Feb 9 19:36:19.257572 ignition[849]: Ignition finished successfully
Feb 9 19:36:19.276209 ignition[858]: Ignition 2.14.0
Feb 9 19:36:19.276218 ignition[858]: Stage: fetch
Feb 9 19:36:19.276329 ignition[858]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 9 19:36:19.276367 ignition[858]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Feb 9 19:36:19.305867 ignition[858]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Feb 9 19:36:19.308330 ignition[858]: parsed url from cmdline: ""
Feb 9 19:36:19.308338 ignition[858]: no config URL provided
Feb 9 19:36:19.308347 ignition[858]: reading system config file "/usr/lib/ignition/user.ign"
Feb 9 19:36:19.308374 ignition[858]: no config at "/usr/lib/ignition/user.ign"
Feb 9 19:36:19.308423 ignition[858]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Feb 9 19:36:19.380157 ignition[858]: GET result: OK
Feb 9 19:36:19.380189 ignition[858]: failed to retrieve userdata from IMDS, falling back to custom data: not a config (empty)
Feb 9 19:36:19.524254 ignition[858]: opening config device: "/dev/sr0"
Feb 9 19:36:19.524622 ignition[858]: getting drive status for "/dev/sr0"
Feb 9 19:36:19.524667 ignition[858]: drive status: OK
Feb 9 19:36:19.524710 ignition[858]: mounting config device
Feb 9 19:36:19.524734 ignition[858]: op(1): [started] mounting "/dev/sr0" at "/tmp/ignition-azure3121036302"
Feb 9 19:36:19.553017 ignition[858]: op(1): [finished] mounting "/dev/sr0" at "/tmp/ignition-azure3121036302"
Feb 9 19:36:19.557765 kernel: UDF-fs: INFO Mounting volume 'UDF Volume', timestamp 2024/02/10 00:00 (1000)
Feb 9 19:36:19.556535 systemd[1]: tmp-ignition\x2dazure3121036302.mount: Deactivated successfully.
Feb 9 19:36:19.554071 ignition[858]: checking for config drive
Feb 9 19:36:19.554520 ignition[858]: reading config
Feb 9 19:36:19.556113 ignition[858]: op(2): [started] unmounting "/dev/sr0" at "/tmp/ignition-azure3121036302"
Feb 9 19:36:19.557602 ignition[858]: op(2): [finished] unmounting "/dev/sr0" at "/tmp/ignition-azure3121036302"
Feb 9 19:36:19.557620 ignition[858]: config has been read from custom data
Feb 9 19:36:19.557709 ignition[858]: parsing config with SHA512: dd6841775dddd70c346aef7e108f54cc3c813258f0670981f02e4ff357949952b8bb854b45f65b441be55b7f536b688c7a00c1c2bc81fb0d77490a2125b8e198
Feb 9 19:36:19.598024 unknown[858]: fetched base config from "system"
Feb 9 19:36:19.598037 unknown[858]: fetched base config from "system"
Feb 9 19:36:19.598783 ignition[858]: fetch: fetch complete
Feb 9 19:36:19.606000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:36:19.598046 unknown[858]: fetched user config from "azure"
Feb 9 19:36:19.623912 kernel: audit: type=1130 audit(1707507379.606:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:36:19.598789 ignition[858]: fetch: fetch passed
Feb 9 19:36:19.602976 systemd[1]: Finished ignition-fetch.service.
Feb 9 19:36:19.598838 ignition[858]: Ignition finished successfully
Feb 9 19:36:19.608485 systemd[1]: Starting ignition-kargs.service...
Feb 9 19:36:19.628642 ignition[866]: Ignition 2.14.0
Feb 9 19:36:19.628649 ignition[866]: Stage: kargs
Feb 9 19:36:19.628764 ignition[866]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 9 19:36:19.628786 ignition[866]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Feb 9 19:36:19.631788 ignition[866]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Feb 9 19:36:19.641398 ignition[866]: kargs: kargs passed
Feb 9 19:36:19.641459 ignition[866]: Ignition finished successfully
Feb 9 19:36:19.645671 systemd[1]: Finished ignition-kargs.service.
Feb 9 19:36:19.665976 kernel: audit: type=1130 audit(1707507379.650:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:36:19.650000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:36:19.660413 ignition[872]: Ignition 2.14.0
Feb 9 19:36:19.651769 systemd[1]: Starting ignition-disks.service...
Feb 9 19:36:19.660423 ignition[872]: Stage: disks
Feb 9 19:36:19.660576 ignition[872]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 9 19:36:19.660599 ignition[872]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Feb 9 19:36:19.677693 ignition[872]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Feb 9 19:36:19.681944 ignition[872]: disks: disks passed
Feb 9 19:36:19.682018 ignition[872]: Ignition finished successfully
Feb 9 19:36:19.685000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:36:19.682964 systemd[1]: Finished ignition-disks.service.
Feb 9 19:36:19.701470 kernel: audit: type=1130 audit(1707507379.685:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:36:19.685779 systemd[1]: Reached target initrd-root-device.target.
Feb 9 19:36:19.701449 systemd[1]: Reached target local-fs-pre.target.
Feb 9 19:36:19.705594 systemd[1]: Reached target local-fs.target.
Feb 9 19:36:19.711486 systemd[1]: Reached target sysinit.target.
Feb 9 19:36:19.713469 systemd[1]: Reached target basic.target.
Feb 9 19:36:19.718083 systemd[1]: Starting systemd-fsck-root.service...
Feb 9 19:36:19.759742 systemd-fsck[880]: ROOT: clean, 602/7326000 files, 481070/7359488 blocks
Feb 9 19:36:19.764765 systemd[1]: Finished systemd-fsck-root.service.
Feb 9 19:36:19.766000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:36:19.783398 kernel: audit: type=1130 audit(1707507379.766:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:36:19.781524 systemd[1]: Mounting sysroot.mount...
Feb 9 19:36:19.799397 kernel: EXT4-fs (sda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
Quota mode: none. Feb 9 19:36:19.800244 systemd[1]: Mounted sysroot.mount. Feb 9 19:36:19.803918 systemd[1]: Reached target initrd-root-fs.target. Feb 9 19:36:19.828467 systemd[1]: Mounting sysroot-usr.mount... Feb 9 19:36:19.834159 systemd[1]: Starting flatcar-metadata-hostname.service... Feb 9 19:36:19.838909 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Feb 9 19:36:19.838985 systemd[1]: Reached target ignition-diskful.target. Feb 9 19:36:19.848777 systemd[1]: Mounted sysroot-usr.mount. Feb 9 19:36:19.882249 systemd[1]: Mounting sysroot-usr-share-oem.mount... Feb 9 19:36:19.885593 systemd[1]: Starting initrd-setup-root.service... Feb 9 19:36:19.903177 initrd-setup-root[895]: cut: /sysroot/etc/passwd: No such file or directory Feb 9 19:36:19.914814 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (890) Feb 9 19:36:19.914855 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Feb 9 19:36:19.914886 kernel: BTRFS info (device sda6): using free space tree Feb 9 19:36:19.919058 kernel: BTRFS info (device sda6): has skinny extents Feb 9 19:36:19.922609 systemd[1]: Mounted sysroot-usr-share-oem.mount. Feb 9 19:36:19.929166 initrd-setup-root[921]: cut: /sysroot/etc/group: No such file or directory Feb 9 19:36:19.935722 initrd-setup-root[929]: cut: /sysroot/etc/shadow: No such file or directory Feb 9 19:36:19.942321 initrd-setup-root[937]: cut: /sysroot/etc/gshadow: No such file or directory Feb 9 19:36:20.304977 systemd[1]: Finished initrd-setup-root.service. Feb 9 19:36:20.309000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:36:20.311034 systemd[1]: Starting ignition-mount.service... Feb 9 19:36:20.329818 kernel: audit: type=1130 audit(1707507380.309:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:36:20.329652 systemd[1]: Starting sysroot-boot.service... Feb 9 19:36:20.353409 ignition[956]: INFO : Ignition 2.14.0 Feb 9 19:36:20.353409 ignition[956]: INFO : Stage: mount Feb 9 19:36:20.360944 ignition[956]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 19:36:20.360944 ignition[956]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Feb 9 19:36:20.391892 kernel: audit: type=1130 audit(1707507380.359:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:36:20.359000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:36:20.391000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:36:20.355346 systemd[1]: Finished sysroot-boot.service. 
Feb 9 19:36:20.407480 kernel: audit: type=1130 audit(1707507380.391:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:36:20.407503 ignition[956]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 9 19:36:20.407503 ignition[956]: INFO : mount: mount passed Feb 9 19:36:20.407503 ignition[956]: INFO : Ignition finished successfully Feb 9 19:36:20.380763 systemd[1]: Finished ignition-mount.service. Feb 9 19:36:20.554596 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. Feb 9 19:36:20.554717 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. Feb 9 19:36:21.220623 coreos-metadata[889]: Feb 09 19:36:21.220 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Feb 9 19:36:21.233334 coreos-metadata[889]: Feb 09 19:36:21.233 INFO Fetch successful Feb 9 19:36:21.266090 coreos-metadata[889]: Feb 09 19:36:21.266 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Feb 9 19:36:21.277095 coreos-metadata[889]: Feb 09 19:36:21.276 INFO Fetch successful Feb 9 19:36:21.291874 coreos-metadata[889]: Feb 09 19:36:21.291 INFO wrote hostname ci-3510.3.2-a-4c52a92a5f to /sysroot/etc/hostname Feb 9 19:36:21.297696 systemd[1]: Finished flatcar-metadata-hostname.service. Feb 9 19:36:21.299000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:36:21.315933 kernel: audit: type=1130 audit(1707507381.299:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:36:21.313800 systemd[1]: Starting ignition-files.service... Feb 9 19:36:21.321961 systemd[1]: Mounting sysroot-usr-share-oem.mount... Feb 9 19:36:21.335391 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (968) Feb 9 19:36:21.344758 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Feb 9 19:36:21.344813 kernel: BTRFS info (device sda6): using free space tree Feb 9 19:36:21.344825 kernel: BTRFS info (device sda6): has skinny extents Feb 9 19:36:21.352985 systemd[1]: Mounted sysroot-usr-share-oem.mount. 
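Annotation: flatcar-metadata-hostname above fetches the instance name from IMDS and writes it into the not-yet-pivoted root at /sysroot/etc/hostname. A hedged sketch of that step, with the endpoint and target path copied from the coreos-metadata entries; the actual service is a compiled binary, this is only an equivalent outline:

```python
import urllib.request

# Endpoint copied from the coreos-metadata log entry above.
NAME_URL = ("http://169.254.169.254/metadata/instance/compute/name"
            "?api-version=2017-08-01&format=text")

req = urllib.request.Request(NAME_URL, headers={"Metadata": "true"})
with urllib.request.urlopen(req, timeout=10) as resp:
    name = resp.read().decode().strip()

# Written under /sysroot because the real root has not been pivoted yet.
with open("/sysroot/etc/hostname", "w") as f:
    f.write(name + "\n")
```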
Feb 9 19:36:21.368394 ignition[987]: INFO : Ignition 2.14.0 Feb 9 19:36:21.371176 ignition[987]: INFO : Stage: files Feb 9 19:36:21.371176 ignition[987]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 19:36:21.371176 ignition[987]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Feb 9 19:36:21.385713 ignition[987]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 9 19:36:21.401877 ignition[987]: DEBUG : files: compiled without relabeling support, skipping Feb 9 19:36:21.417334 ignition[987]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Feb 9 19:36:21.417334 ignition[987]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Feb 9 19:36:21.465819 ignition[987]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Feb 9 19:36:21.469712 ignition[987]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Feb 9 19:36:21.469712 ignition[987]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Feb 9 19:36:21.469712 ignition[987]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/crictl-v1.27.0-linux-amd64.tar.gz" Feb 9 19:36:21.469712 ignition[987]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.27.0/crictl-v1.27.0-linux-amd64.tar.gz: attempt #1 Feb 9 19:36:21.466421 unknown[987]: wrote ssh authorized keys file for user: core Feb 9 19:36:21.961867 ignition[987]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Feb 9 19:36:22.095984 ignition[987]: DEBUG : files: createFilesystemsFiles: createFiles: op(3): file matches expected sum of: aa622325bf05520939f9e020d7a28ab48ac23e2fae6f47d5a4e52174c88c1ebc31b464853e4fd65bd8f5331f330a6ca96fd370d247d3eeaed042da4ee2d1219a Feb 9 19:36:22.103828 ignition[987]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/crictl-v1.27.0-linux-amd64.tar.gz" Feb 9 19:36:22.103828 ignition[987]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Feb 9 19:36:22.103828 ignition[987]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Feb 9 19:36:22.928024 ignition[987]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Feb 9 19:36:23.063827 ignition[987]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Feb 9 19:36:23.069256 ignition[987]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.3.0.tgz" Feb 9 19:36:23.069256 ignition[987]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/containernetworking/plugins/releases/download/v1.3.0/cni-plugins-linux-amd64-v1.3.0.tgz: attempt #1 Feb 9 19:36:23.564876 ignition[987]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK Feb 9 19:36:23.735382 ignition[987]: DEBUG : files: createFilesystemsFiles: createFiles: op(5): file matches expected sum of: 
5d0324ca8a3c90c680b6e1fddb245a2255582fa15949ba1f3c6bb7323df9d3af754dae98d6e40ac9ccafb2999c932df2c4288d418949a4915d928eb23c090540 Feb 9 19:36:23.743428 ignition[987]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.3.0.tgz" Feb 9 19:36:23.743428 ignition[987]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/bin/kubectl" Feb 9 19:36:23.743428 ignition[987]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://dl.k8s.io/release/v1.28.1/bin/linux/amd64/kubectl: attempt #1 Feb 9 19:36:23.955145 ignition[987]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK Feb 9 19:36:24.272479 ignition[987]: DEBUG : files: createFilesystemsFiles: createFiles: op(6): file matches expected sum of: 33cf3f6e37bcee4dff7ce14ab933c605d07353d4e31446dd2b52c3f05e0b150b60e531f6069f112d8a76331322a72b593537531e62104cfc7c70cb03d46f76b3 Feb 9 19:36:24.280923 ignition[987]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/bin/kubectl" Feb 9 19:36:24.280923 ignition[987]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/opt/bin/kubeadm" Feb 9 19:36:24.280923 ignition[987]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET https://dl.k8s.io/release/v1.28.1/bin/linux/amd64/kubeadm: attempt #1 Feb 9 19:36:24.410224 ignition[987]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET result: OK Feb 9 19:36:24.715776 ignition[987]: DEBUG : files: createFilesystemsFiles: createFiles: op(7): file matches expected sum of: f4daad200c8378dfdc6cb69af28eaca4215f2b4a2dbdf75f29f9210171cb5683bc873fc000319022e6b3ad61175475d77190734713ba9136644394e8a8faafa1 Feb 9 19:36:24.722376 ignition[987]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/bin/kubeadm" Feb 9 19:36:24.722376 ignition[987]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/opt/bin/kubelet" Feb 9 19:36:24.722376 ignition[987]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET https://dl.k8s.io/release/v1.28.1/bin/linux/amd64/kubelet: attempt #1 Feb 9 19:36:24.845390 ignition[987]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET result: OK Feb 9 19:36:25.339958 ignition[987]: DEBUG : files: createFilesystemsFiles: createFiles: op(8): file matches expected sum of: ce6ba764274162d38ac1c44e1fb1f0f835346f3afc5b508bb755b1b7d7170910f5812b0a1941b32e29d950e905bbd08ae761c87befad921db4d44969c8562e75 Feb 9 19:36:25.348446 ignition[987]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/opt/bin/kubelet" Feb 9 19:36:25.348446 ignition[987]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/docker/daemon.json" Feb 9 19:36:25.348446 ignition[987]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/docker/daemon.json" Feb 9 19:36:25.348446 ignition[987]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Feb 9 19:36:25.348446 ignition[987]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Feb 9 19:36:25.878838 ignition[987]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Feb 9 
19:36:25.978978 ignition[987]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Feb 9 19:36:25.985098 ignition[987]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/home/core/install.sh" Feb 9 19:36:25.985098 ignition[987]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/home/core/install.sh" Feb 9 19:36:25.985098 ignition[987]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/home/core/nginx.yaml" Feb 9 19:36:25.985098 ignition[987]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/home/core/nginx.yaml" Feb 9 19:36:25.985098 ignition[987]: INFO : files: createFilesystemsFiles: createFiles: op(d): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 9 19:36:25.985098 ignition[987]: INFO : files: createFilesystemsFiles: createFiles: op(d): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 9 19:36:25.985098 ignition[987]: INFO : files: createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 9 19:36:25.985098 ignition[987]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 9 19:36:26.421617 ignition[987]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/etc/flatcar/update.conf" Feb 9 19:36:26.430432 ignition[987]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/etc/flatcar/update.conf" Feb 9 19:36:26.430432 ignition[987]: INFO : files: createFilesystemsFiles: createFiles: op(10): [started] writing file "/sysroot/etc/systemd/system/waagent.service" Feb 9 19:36:26.430432 ignition[987]: INFO : files: createFilesystemsFiles: createFiles: op(10): oem config not found in "/usr/share/oem", looking on oem partition Feb 9 19:36:26.451434 kernel: BTRFS info: devid 1 device path /dev/sda6 changed to /dev/disk/by-label/OEM scanned by ignition (992) Feb 9 19:36:26.451468 ignition[987]: INFO : files: createFilesystemsFiles: createFiles: op(10): op(11): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3471212470" Feb 9 19:36:26.451468 ignition[987]: CRITICAL : files: createFilesystemsFiles: createFiles: op(10): op(11): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3471212470": device or resource busy Feb 9 19:36:26.451468 ignition[987]: ERROR : files: createFilesystemsFiles: createFiles: op(10): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem3471212470", trying btrfs: device or resource busy Feb 9 19:36:26.451468 ignition[987]: INFO : files: createFilesystemsFiles: createFiles: op(10): op(12): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3471212470" Feb 9 19:36:26.477600 ignition[987]: INFO : files: createFilesystemsFiles: createFiles: op(10): op(12): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3471212470" Feb 9 19:36:26.477600 ignition[987]: INFO : files: createFilesystemsFiles: createFiles: op(10): op(13): [started] unmounting "/mnt/oem3471212470" Feb 9 19:36:26.477600 ignition[987]: INFO : files: createFilesystemsFiles: createFiles: op(10): op(13): [finished] unmounting "/mnt/oem3471212470" Feb 9 19:36:26.477600 ignition[987]: INFO : files: createFilesystemsFiles: createFiles: op(10): [finished] writing file "/sysroot/etc/systemd/system/waagent.service" Feb 9 19:36:26.477600 ignition[987]: 
INFO : files: createFilesystemsFiles: createFiles: op(14): [started] writing file "/sysroot/etc/systemd/system/nvidia.service" Feb 9 19:36:26.477600 ignition[987]: INFO : files: createFilesystemsFiles: createFiles: op(14): oem config not found in "/usr/share/oem", looking on oem partition Feb 9 19:36:26.456081 systemd[1]: mnt-oem3471212470.mount: Deactivated successfully. Feb 9 19:36:26.513351 ignition[987]: INFO : files: createFilesystemsFiles: createFiles: op(14): op(15): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem491086642" Feb 9 19:36:26.513351 ignition[987]: CRITICAL : files: createFilesystemsFiles: createFiles: op(14): op(15): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem491086642": device or resource busy Feb 9 19:36:26.513351 ignition[987]: ERROR : files: createFilesystemsFiles: createFiles: op(14): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem491086642", trying btrfs: device or resource busy Feb 9 19:36:26.513351 ignition[987]: INFO : files: createFilesystemsFiles: createFiles: op(14): op(16): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem491086642" Feb 9 19:36:26.513351 ignition[987]: INFO : files: createFilesystemsFiles: createFiles: op(14): op(16): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem491086642" Feb 9 19:36:26.513351 ignition[987]: INFO : files: createFilesystemsFiles: createFiles: op(14): op(17): [started] unmounting "/mnt/oem491086642" Feb 9 19:36:26.513351 ignition[987]: INFO : files: createFilesystemsFiles: createFiles: op(14): op(17): [finished] unmounting "/mnt/oem491086642" Feb 9 19:36:26.513351 ignition[987]: INFO : files: createFilesystemsFiles: createFiles: op(14): [finished] writing file "/sysroot/etc/systemd/system/nvidia.service" Feb 9 19:36:26.513351 ignition[987]: INFO : files: op(18): [started] processing unit "waagent.service" Feb 9 19:36:26.513351 ignition[987]: INFO : files: op(18): [finished] processing unit "waagent.service" Feb 9 19:36:26.513351 ignition[987]: INFO : files: op(19): [started] processing unit "nvidia.service" Feb 9 19:36:26.513351 ignition[987]: INFO : files: op(19): [finished] processing unit "nvidia.service" Feb 9 19:36:26.513351 ignition[987]: INFO : files: op(1a): [started] processing unit "prepare-cni-plugins.service" Feb 9 19:36:26.513351 ignition[987]: INFO : files: op(1a): op(1b): [started] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Feb 9 19:36:26.513351 ignition[987]: INFO : files: op(1a): op(1b): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Feb 9 19:36:26.513351 ignition[987]: INFO : files: op(1a): [finished] processing unit "prepare-cni-plugins.service" Feb 9 19:36:26.513351 ignition[987]: INFO : files: op(1c): [started] processing unit "prepare-critools.service" Feb 9 19:36:26.605379 kernel: audit: type=1130 audit(1707507386.521:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:36:26.521000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:36:26.513634 systemd[1]: Finished ignition-files.service. 
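Annotation: the files stage that just finished downloads each artifact (crictl, helm, the CNI plugins, kubectl, kubeadm, kubelet), checks its digest against an expected sum ("file matches expected sum of: ..."), and only then writes it under /sysroot. A sketch of that verify-then-install pattern using hashlib, with the URL, digest, and destination copied from the crictl entries above; the function name is illustrative, not Ignition's code path:

```python
import hashlib
import urllib.request

URL = ("https://github.com/kubernetes-sigs/cri-tools/releases/download/"
       "v1.27.0/crictl-v1.27.0-linux-amd64.tar.gz")
# Expected SHA512 copied from the "file matches expected sum" log entry.
EXPECTED = ("aa622325bf05520939f9e020d7a28ab48ac23e2fae6f47d5a4e52174c88c1ebc"
            "31b464853e4fd65bd8f5331f330a6ca96fd370d247d3eeaed042da4ee2d1219a")

def fetch_verified(url: str, expected_sha512: str) -> bytes:
    with urllib.request.urlopen(url, timeout=60) as resp:
        data = resp.read()
    digest = hashlib.sha512(data).hexdigest()
    if digest != expected_sha512:
        raise ValueError(f"checksum mismatch: got {digest}")
    return data

blob = fetch_verified(URL, EXPECTED)
with open("/sysroot/opt/crictl-v1.27.0-linux-amd64.tar.gz", "wb") as f:
    f.write(blob)
```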
Feb 9 19:36:26.605000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:36:26.617147 ignition[987]: INFO : files: op(1c): op(1d): [started] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Feb 9 19:36:26.617147 ignition[987]: INFO : files: op(1c): op(1d): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Feb 9 19:36:26.617147 ignition[987]: INFO : files: op(1c): [finished] processing unit "prepare-critools.service" Feb 9 19:36:26.617147 ignition[987]: INFO : files: op(1e): [started] processing unit "prepare-helm.service" Feb 9 19:36:26.617147 ignition[987]: INFO : files: op(1e): op(1f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 9 19:36:26.617147 ignition[987]: INFO : files: op(1e): op(1f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 9 19:36:26.617147 ignition[987]: INFO : files: op(1e): [finished] processing unit "prepare-helm.service" Feb 9 19:36:26.617147 ignition[987]: INFO : files: op(20): [started] setting preset to enabled for "waagent.service" Feb 9 19:36:26.617147 ignition[987]: INFO : files: op(20): [finished] setting preset to enabled for "waagent.service" Feb 9 19:36:26.617147 ignition[987]: INFO : files: op(21): [started] setting preset to enabled for "nvidia.service" Feb 9 19:36:26.617147 ignition[987]: INFO : files: op(21): [finished] setting preset to enabled for "nvidia.service" Feb 9 19:36:26.617147 ignition[987]: INFO : files: op(22): [started] setting preset to enabled for "prepare-cni-plugins.service" Feb 9 19:36:26.617147 ignition[987]: INFO : files: op(22): [finished] setting preset to enabled for "prepare-cni-plugins.service" Feb 9 19:36:26.617147 ignition[987]: INFO : files: op(23): [started] setting preset to enabled for "prepare-critools.service" Feb 9 19:36:26.617147 ignition[987]: INFO : files: op(23): [finished] setting preset to enabled for "prepare-critools.service" Feb 9 19:36:26.617147 ignition[987]: INFO : files: op(24): [started] setting preset to enabled for "prepare-helm.service" Feb 9 19:36:26.617147 ignition[987]: INFO : files: op(24): [finished] setting preset to enabled for "prepare-helm.service" Feb 9 19:36:26.617147 ignition[987]: INFO : files: createResultFile: createFiles: op(25): [started] writing file "/sysroot/etc/.ignition-result.json" Feb 9 19:36:26.617147 ignition[987]: INFO : files: createResultFile: createFiles: op(25): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 9 19:36:26.617147 ignition[987]: INFO : files: files passed Feb 9 19:36:26.617147 ignition[987]: INFO : Ignition finished successfully Feb 9 19:36:26.764204 kernel: audit: type=1130 audit(1707507386.605:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:36:26.764235 kernel: audit: type=1131 audit(1707507386.616:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:36:26.764249 kernel: audit: type=1130 audit(1707507386.636:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:36:26.764263 kernel: audit: type=1130 audit(1707507386.679:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:36:26.764273 kernel: audit: type=1131 audit(1707507386.679:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:36:26.764288 kernel: audit: type=1130 audit(1707507386.727:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:36:26.616000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:36:26.636000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:36:26.679000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:36:26.679000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:36:26.727000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:36:26.539013 systemd[1]: Starting initrd-setup-root-after-ignition.service... Feb 9 19:36:26.553836 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Feb 9 19:36:26.765092 initrd-setup-root-after-ignition[1012]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 9 19:36:26.592136 systemd[1]: Starting ignition-quench.service... Feb 9 19:36:26.602445 systemd[1]: ignition-quench.service: Deactivated successfully. Feb 9 19:36:26.602560 systemd[1]: Finished ignition-quench.service. Feb 9 19:36:26.617615 systemd[1]: Finished initrd-setup-root-after-ignition.service. Feb 9 19:36:26.637096 systemd[1]: Reached target ignition-complete.target. Feb 9 19:36:26.655518 systemd[1]: Starting initrd-parse-etc.service... Feb 9 19:36:26.675488 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 9 19:36:26.675590 systemd[1]: Finished initrd-parse-etc.service. Feb 9 19:36:26.679771 systemd[1]: Reached target initrd-fs.target. Feb 9 19:36:26.786983 kernel: audit: type=1130 audit(1707507386.769:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:36:26.769000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:36:26.707634 systemd[1]: Reached target initrd.target. Feb 9 19:36:26.708622 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Feb 9 19:36:26.709538 systemd[1]: Starting dracut-pre-pivot.service... Feb 9 19:36:26.723136 systemd[1]: Finished dracut-pre-pivot.service. Feb 9 19:36:26.746484 systemd[1]: Starting initrd-cleanup.service... Feb 9 19:36:26.758544 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 9 19:36:26.758644 systemd[1]: Finished initrd-cleanup.service. Feb 9 19:36:26.783810 systemd[1]: Stopped target nss-lookup.target. Feb 9 19:36:26.783889 systemd[1]: Stopped target remote-cryptsetup.target. Feb 9 19:36:26.784468 systemd[1]: Stopped target timers.target. Feb 9 19:36:26.784999 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 9 19:36:26.785061 systemd[1]: Stopped dracut-pre-pivot.service. Feb 9 19:36:26.785561 systemd[1]: Stopped target initrd.target. Feb 9 19:36:26.785991 systemd[1]: Stopped target basic.target. Feb 9 19:36:26.790599 systemd[1]: Stopped target ignition-complete.target. Feb 9 19:36:26.791029 systemd[1]: Stopped target ignition-diskful.target. Feb 9 19:36:26.791918 systemd[1]: Stopped target initrd-root-device.target. Feb 9 19:36:26.913896 kernel: audit: type=1131 audit(1707507386.769:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:36:26.913938 kernel: audit: type=1131 audit(1707507386.769:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:36:26.769000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:36:26.769000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:36:26.791000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:36:26.791000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:36:26.801000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:36:26.801000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:36:26.801000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:36:26.792401 systemd[1]: Stopped target remote-fs.target. 
Feb 9 19:36:26.792873 systemd[1]: Stopped target remote-fs-pre.target. Feb 9 19:36:26.793325 systemd[1]: Stopped target sysinit.target. Feb 9 19:36:26.793714 systemd[1]: Stopped target local-fs.target. Feb 9 19:36:26.794133 systemd[1]: Stopped target local-fs-pre.target. Feb 9 19:36:26.794532 systemd[1]: Stopped target swap.target. Feb 9 19:36:26.794915 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 9 19:36:26.794972 systemd[1]: Stopped dracut-pre-mount.service. Feb 9 19:36:26.795380 systemd[1]: Stopped target cryptsetup.target. Feb 9 19:36:26.795764 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 9 19:36:26.939390 ignition[1025]: INFO : Ignition 2.14.0 Feb 9 19:36:26.939390 ignition[1025]: INFO : Stage: umount Feb 9 19:36:26.939390 ignition[1025]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 19:36:26.939390 ignition[1025]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Feb 9 19:36:26.952628 iscsid[837]: iscsid shutting down. Feb 9 19:36:26.795799 systemd[1]: Stopped dracut-initqueue.service. Feb 9 19:36:26.958000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:36:26.960913 ignition[1025]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 9 19:36:26.796253 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 9 19:36:26.967346 ignition[1025]: INFO : umount: umount passed Feb 9 19:36:26.967346 ignition[1025]: INFO : Ignition finished successfully Feb 9 19:36:26.969000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:36:26.973000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:36:26.977000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:36:26.796291 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Feb 9 19:36:26.815002 systemd[1]: ignition-files.service: Deactivated successfully. Feb 9 19:36:26.815062 systemd[1]: Stopped ignition-files.service. Feb 9 19:36:26.984000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:36:26.988000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:36:26.815491 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Feb 9 19:36:26.988000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:36:26.988000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:36:26.815525 systemd[1]: Stopped flatcar-metadata-hostname.service. Feb 9 19:36:26.902591 systemd[1]: Stopping ignition-mount.service... Feb 9 19:36:26.937546 systemd[1]: Stopping iscsid.service... Feb 9 19:36:26.940275 systemd[1]: Stopping sysroot-boot.service... Feb 9 19:36:26.954331 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 9 19:36:26.954449 systemd[1]: Stopped systemd-udev-trigger.service. Feb 9 19:36:27.020000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:36:26.958796 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 9 19:36:26.958854 systemd[1]: Stopped dracut-pre-trigger.service. Feb 9 19:36:26.970962 systemd[1]: iscsid.service: Deactivated successfully. Feb 9 19:36:26.971085 systemd[1]: Stopped iscsid.service. Feb 9 19:36:26.973892 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 9 19:36:26.973988 systemd[1]: Stopped ignition-mount.service. Feb 9 19:36:26.977984 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 9 19:36:26.978040 systemd[1]: Stopped ignition-disks.service. Feb 9 19:36:26.985138 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 9 19:36:26.985185 systemd[1]: Stopped ignition-kargs.service. Feb 9 19:36:26.989019 systemd[1]: ignition-fetch.service: Deactivated successfully. Feb 9 19:36:26.989072 systemd[1]: Stopped ignition-fetch.service. Feb 9 19:36:26.990106 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 9 19:36:26.990144 systemd[1]: Stopped ignition-fetch-offline.service. Feb 9 19:36:26.990506 systemd[1]: Stopped target paths.target. Feb 9 19:36:26.990910 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 9 19:36:26.998430 systemd[1]: Stopped systemd-ask-password-console.path. Feb 9 19:36:27.002301 systemd[1]: Stopped target slices.target. Feb 9 19:36:27.004203 systemd[1]: Stopped target sockets.target. Feb 9 19:36:27.008139 systemd[1]: iscsid.socket: Deactivated successfully. Feb 9 19:36:27.008196 systemd[1]: Closed iscsid.socket. Feb 9 19:36:27.016255 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 9 19:36:27.033000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:36:27.033000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:36:27.016318 systemd[1]: Stopped ignition-setup.service. Feb 9 19:36:27.021083 systemd[1]: Stopping iscsiuio.service... Feb 9 19:36:27.035036 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 9 19:36:27.035561 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 9 19:36:27.035666 systemd[1]: Stopped sysroot-boot.service. Feb 9 19:36:27.080000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:36:27.036127 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 9 19:36:27.036164 systemd[1]: Stopped initrd-setup-root.service. Feb 9 19:36:27.076880 systemd[1]: iscsiuio.service: Deactivated successfully. Feb 9 19:36:27.076981 systemd[1]: Stopped iscsiuio.service. Feb 9 19:36:27.080779 systemd[1]: Stopped target network.target. Feb 9 19:36:27.084345 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 9 19:36:27.084524 systemd[1]: Closed iscsiuio.socket. Feb 9 19:36:27.088776 systemd[1]: Stopping systemd-networkd.service... Feb 9 19:36:27.092238 systemd[1]: Stopping systemd-resolved.service... Feb 9 19:36:27.108000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:36:27.094800 systemd-networkd[828]: eth0: DHCPv6 lease lost Feb 9 19:36:27.104403 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 9 19:36:27.113000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:36:27.104529 systemd[1]: Stopped systemd-networkd.service. Feb 9 19:36:27.116000 audit: BPF prog-id=9 op=UNLOAD Feb 9 19:36:27.118000 audit: BPF prog-id=6 op=UNLOAD Feb 9 19:36:27.110193 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 9 19:36:27.110294 systemd[1]: Stopped systemd-resolved.service. Feb 9 19:36:27.116919 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 9 19:36:27.116960 systemd[1]: Closed systemd-networkd.socket. Feb 9 19:36:27.133000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:36:27.121489 systemd[1]: Stopping network-cleanup.service... Feb 9 19:36:27.124966 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 9 19:36:27.126804 systemd[1]: Stopped parse-ip-for-networkd.service. Feb 9 19:36:27.134151 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 9 19:36:27.134216 systemd[1]: Stopped systemd-sysctl.service. Feb 9 19:36:27.145000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:36:27.146392 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 9 19:36:27.146461 systemd[1]: Stopped systemd-modules-load.service. Feb 9 19:36:27.152000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:36:27.153221 systemd[1]: Stopping systemd-udevd.service... Feb 9 19:36:27.156367 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Feb 9 19:36:27.160573 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 9 19:36:27.160740 systemd[1]: Stopped systemd-udevd.service. Feb 9 19:36:27.164000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:36:27.166896 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. 
Feb 9 19:36:27.166952 systemd[1]: Closed systemd-udevd-control.socket. Feb 9 19:36:27.173546 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 9 19:36:27.173593 systemd[1]: Closed systemd-udevd-kernel.socket. Feb 9 19:36:27.182000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:36:27.178180 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 9 19:36:27.178233 systemd[1]: Stopped dracut-pre-udev.service. Feb 9 19:36:27.187000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:36:27.191000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:36:27.182179 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 9 19:36:27.182229 systemd[1]: Stopped dracut-cmdline.service. Feb 9 19:36:27.209000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:36:27.187493 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 9 19:36:27.214000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:36:27.214000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:36:27.187534 systemd[1]: Stopped dracut-cmdline-ask.service. Feb 9 19:36:27.218000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:36:27.218000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:36:27.192302 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Feb 9 19:36:27.205430 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Feb 9 19:36:27.205491 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. Feb 9 19:36:27.209886 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 9 19:36:27.209940 systemd[1]: Stopped kmod-static-nodes.service. Feb 9 19:36:27.239560 kernel: hv_netvsc 000d3ab9-11fe-000d-3ab9-11fe000d3ab9 eth0: Data path switched from VF: enP44527s1 Feb 9 19:36:27.214275 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 9 19:36:27.214324 systemd[1]: Stopped systemd-vconsole-setup.service. Feb 9 19:36:27.216812 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 9 19:36:27.216909 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Feb 9 19:36:27.258216 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 9 19:36:27.258344 systemd[1]: Stopped network-cleanup.service. 
Feb 9 19:36:27.264000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:36:27.264709 systemd[1]: Reached target initrd-switch-root.target. Feb 9 19:36:27.269932 systemd[1]: Starting initrd-switch-root.service... Feb 9 19:36:27.281574 systemd[1]: Switching root. Feb 9 19:36:27.306969 systemd-journald[183]: Journal stopped Feb 9 19:36:38.735952 systemd-journald[183]: Received SIGTERM from PID 1 (systemd). Feb 9 19:36:38.735981 kernel: SELinux: Class mctp_socket not defined in policy. Feb 9 19:36:38.735992 kernel: SELinux: Class anon_inode not defined in policy. Feb 9 19:36:38.736004 kernel: SELinux: the above unknown classes and permissions will be allowed Feb 9 19:36:38.736012 kernel: SELinux: policy capability network_peer_controls=1 Feb 9 19:36:38.736022 kernel: SELinux: policy capability open_perms=1 Feb 9 19:36:38.736033 kernel: SELinux: policy capability extended_socket_class=1 Feb 9 19:36:38.736045 kernel: SELinux: policy capability always_check_network=0 Feb 9 19:36:38.736053 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 9 19:36:38.736067 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 9 19:36:38.736075 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 9 19:36:38.736084 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 9 19:36:38.736094 systemd[1]: Successfully loaded SELinux policy in 228.190ms. Feb 9 19:36:38.736107 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 22.253ms. Feb 9 19:36:38.736120 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Feb 9 19:36:38.736132 systemd[1]: Detected virtualization microsoft. Feb 9 19:36:38.736142 systemd[1]: Detected architecture x86-64. Feb 9 19:36:38.736152 systemd[1]: Detected first boot. Feb 9 19:36:38.736165 systemd[1]: Hostname set to <ci-3510.3.2-a-4c52a92a5f>. Feb 9 19:36:38.736175 systemd[1]: Initializing machine ID from random generator. Feb 9 19:36:38.736187 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Feb 9 19:36:38.736197 systemd[1]: Populated /etc with preset unit settings. Feb 9 19:36:38.736209 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 19:36:38.736220 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 19:36:38.736232 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
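Annotation: the three warnings that close the entry above name their own fixes: CPUShares= becomes CPUWeight=, MemoryLimit= becomes MemoryMax=, and /var/run paths should point at /run. As a sketch of what such a cleanup amounts to, here is a hypothetical helper applying exactly those substitutions to a unit file's text; the mapping is taken from the warnings themselves, and note that CPUShares (default 1024) and CPUWeight (default 100) use different scales, so real migrations convert values rather than only renaming the key:

```python
import re

# Replacements named in the systemd warnings above.
RENAMES = {
    "CPUShares": "CPUWeight",
    "MemoryLimit": "MemoryMax",
}

def modernize_unit(text: str) -> str:
    for old, new in RENAMES.items():
        # Only rewrite keys at the start of a line, i.e. actual directives.
        text = re.sub(rf"^{old}=", f"{new}=", text, flags=re.MULTILINE)
    # /var/run is a legacy symlink to /run, per the docker.socket warning.
    return text.replace("ListenStream=/var/run/", "ListenStream=/run/")
```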
Feb 9 19:36:38.736245 kernel: kauditd_printk_skb: 51 callbacks suppressed Feb 9 19:36:38.736255 kernel: audit: type=1334 audit(1707507398.208:92): prog-id=12 op=LOAD Feb 9 19:36:38.736264 kernel: audit: type=1334 audit(1707507398.208:93): prog-id=3 op=UNLOAD Feb 9 19:36:38.736275 kernel: audit: type=1334 audit(1707507398.213:94): prog-id=13 op=LOAD Feb 9 19:36:38.736284 kernel: audit: type=1334 audit(1707507398.217:95): prog-id=14 op=LOAD Feb 9 19:36:38.736294 kernel: audit: type=1334 audit(1707507398.217:96): prog-id=4 op=UNLOAD Feb 9 19:36:38.736303 kernel: audit: type=1334 audit(1707507398.217:97): prog-id=5 op=UNLOAD Feb 9 19:36:38.736312 kernel: audit: type=1334 audit(1707507398.223:98): prog-id=15 op=LOAD Feb 9 19:36:38.736322 kernel: audit: type=1334 audit(1707507398.223:99): prog-id=12 op=UNLOAD Feb 9 19:36:38.736334 kernel: audit: type=1334 audit(1707507398.242:100): prog-id=16 op=LOAD Feb 9 19:36:38.736342 kernel: audit: type=1334 audit(1707507398.247:101): prog-id=17 op=LOAD Feb 9 19:36:38.736352 systemd[1]: initrd-switch-root.service: Deactivated successfully. Feb 9 19:36:38.736373 systemd[1]: Stopped initrd-switch-root.service. Feb 9 19:36:38.736383 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Feb 9 19:36:38.736396 systemd[1]: Created slice system-addon\x2dconfig.slice. Feb 9 19:36:38.736408 systemd[1]: Created slice system-addon\x2drun.slice. Feb 9 19:36:38.736422 systemd[1]: Created slice system-getty.slice. Feb 9 19:36:38.736434 systemd[1]: Created slice system-modprobe.slice. Feb 9 19:36:38.736446 systemd[1]: Created slice system-serial\x2dgetty.slice. Feb 9 19:36:38.736456 systemd[1]: Created slice system-system\x2dcloudinit.slice. Feb 9 19:36:38.736468 systemd[1]: Created slice system-systemd\x2dfsck.slice. Feb 9 19:36:38.736477 systemd[1]: Created slice user.slice. Feb 9 19:36:38.736490 systemd[1]: Started systemd-ask-password-console.path. Feb 9 19:36:38.736500 systemd[1]: Started systemd-ask-password-wall.path. Feb 9 19:36:38.736512 systemd[1]: Set up automount boot.automount. Feb 9 19:36:38.736524 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Feb 9 19:36:38.736535 systemd[1]: Stopped target initrd-switch-root.target. Feb 9 19:36:38.736545 systemd[1]: Stopped target initrd-fs.target. Feb 9 19:36:38.736558 systemd[1]: Stopped target initrd-root-fs.target. Feb 9 19:36:38.736567 systemd[1]: Reached target integritysetup.target. Feb 9 19:36:38.736580 systemd[1]: Reached target remote-cryptsetup.target. Feb 9 19:36:38.736590 systemd[1]: Reached target remote-fs.target. Feb 9 19:36:38.736603 systemd[1]: Reached target slices.target. Feb 9 19:36:38.736613 systemd[1]: Reached target swap.target. Feb 9 19:36:38.736625 systemd[1]: Reached target torcx.target. Feb 9 19:36:38.736638 systemd[1]: Reached target veritysetup.target. Feb 9 19:36:38.736647 systemd[1]: Listening on systemd-coredump.socket. Feb 9 19:36:38.736659 systemd[1]: Listening on systemd-initctl.socket. Feb 9 19:36:38.736671 systemd[1]: Listening on systemd-networkd.socket. Feb 9 19:36:38.736685 systemd[1]: Listening on systemd-udevd-control.socket. Feb 9 19:36:38.736695 systemd[1]: Listening on systemd-udevd-kernel.socket. Feb 9 19:36:38.736707 systemd[1]: Listening on systemd-userdbd.socket. Feb 9 19:36:38.736717 systemd[1]: Mounting dev-hugepages.mount... Feb 9 19:36:38.736729 systemd[1]: Mounting dev-mqueue.mount... Feb 9 19:36:38.736739 systemd[1]: Mounting media.mount... 
Feb 9 19:36:38.736749 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 9 19:36:38.736763 systemd[1]: Mounting sys-kernel-debug.mount... Feb 9 19:36:38.736776 systemd[1]: Mounting sys-kernel-tracing.mount... Feb 9 19:36:38.736786 systemd[1]: Mounting tmp.mount... Feb 9 19:36:38.736798 systemd[1]: Starting flatcar-tmpfiles.service... Feb 9 19:36:38.736811 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Feb 9 19:36:38.736821 systemd[1]: Starting kmod-static-nodes.service... Feb 9 19:36:38.736832 systemd[1]: Starting modprobe@configfs.service... Feb 9 19:36:38.736843 systemd[1]: Starting modprobe@dm_mod.service... Feb 9 19:36:38.736855 systemd[1]: Starting modprobe@drm.service... Feb 9 19:36:38.736869 systemd[1]: Starting modprobe@efi_pstore.service... Feb 9 19:36:38.736879 systemd[1]: Starting modprobe@fuse.service... Feb 9 19:36:38.736891 systemd[1]: Starting modprobe@loop.service... Feb 9 19:36:38.736901 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 9 19:36:38.736914 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Feb 9 19:36:38.736925 systemd[1]: Stopped systemd-fsck-root.service. Feb 9 19:36:38.736936 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Feb 9 19:36:38.736947 systemd[1]: Stopped systemd-fsck-usr.service. Feb 9 19:36:38.736957 systemd[1]: Stopped systemd-journald.service. Feb 9 19:36:38.736968 systemd[1]: Starting systemd-journald.service... Feb 9 19:36:38.736978 kernel: loop: module loaded Feb 9 19:36:38.736989 systemd[1]: Starting systemd-modules-load.service... Feb 9 19:36:38.736999 kernel: fuse: init (API version 7.34) Feb 9 19:36:38.737012 systemd[1]: Starting systemd-network-generator.service... Feb 9 19:36:38.737021 systemd[1]: Starting systemd-remount-fs.service... Feb 9 19:36:38.737030 systemd[1]: Starting systemd-udev-trigger.service... Feb 9 19:36:38.737043 systemd[1]: verity-setup.service: Deactivated successfully. Feb 9 19:36:38.737056 systemd[1]: Stopped verity-setup.service. Feb 9 19:36:38.737068 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 9 19:36:38.737080 systemd[1]: Mounted dev-hugepages.mount. Feb 9 19:36:38.737090 systemd[1]: Mounted dev-mqueue.mount. Feb 9 19:36:38.737102 systemd[1]: Mounted media.mount. Feb 9 19:36:38.737117 systemd-journald[1152]: Journal started Feb 9 19:36:38.737165 systemd-journald[1152]: Runtime Journal (/run/log/journal/11a21245c4bb40a7b8a96d0705061dac) is 8.0M, max 159.0M, 151.0M free. 
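Annotation: the journal sizing reported above is self-consistent; the free figure is the configured cap minus current usage, 159.0M - 8.0M = 151.0M.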
Feb 9 19:36:29.115000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Feb 9 19:36:29.863000 audit[1]: AVC avc: denied { integrity } for pid=1 comm="systemd" lockdown_reason="/dev/mem,kmem,port" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Feb 9 19:36:29.876000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 9 19:36:29.876000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 9 19:36:29.876000 audit: BPF prog-id=10 op=LOAD Feb 9 19:36:29.876000 audit: BPF prog-id=10 op=UNLOAD Feb 9 19:36:29.876000 audit: BPF prog-id=11 op=LOAD Feb 9 19:36:29.876000 audit: BPF prog-id=11 op=UNLOAD Feb 9 19:36:30.923000 audit[1058]: AVC avc: denied { associate } for pid=1058 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Feb 9 19:36:30.923000 audit[1058]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c0001058d2 a1=c00002ae58 a2=c000029100 a3=32 items=0 ppid=1041 pid=1058 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:36:30.923000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Feb 9 19:36:30.933000 audit[1058]: AVC avc: denied { associate } for pid=1058 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Feb 9 19:36:30.933000 audit[1058]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c0001059a9 a2=1ed a3=0 items=2 ppid=1041 pid=1058 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:36:30.933000 audit: CWD cwd="/" Feb 9 19:36:30.933000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:36:30.933000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:36:30.933000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Feb 9 19:36:38.208000 audit: BPF prog-id=12 op=LOAD Feb 9 19:36:38.208000 audit: BPF prog-id=3 op=UNLOAD Feb 9 19:36:38.213000 audit: BPF prog-id=13 op=LOAD Feb 9 19:36:38.217000 audit: BPF prog-id=14 op=LOAD Feb 9 
19:36:38.217000 audit: BPF prog-id=4 op=UNLOAD Feb 9 19:36:38.217000 audit: BPF prog-id=5 op=UNLOAD Feb 9 19:36:38.223000 audit: BPF prog-id=15 op=LOAD Feb 9 19:36:38.223000 audit: BPF prog-id=12 op=UNLOAD Feb 9 19:36:38.242000 audit: BPF prog-id=16 op=LOAD Feb 9 19:36:38.247000 audit: BPF prog-id=17 op=LOAD Feb 9 19:36:38.247000 audit: BPF prog-id=13 op=UNLOAD Feb 9 19:36:38.247000 audit: BPF prog-id=14 op=UNLOAD Feb 9 19:36:38.248000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:36:38.269000 audit: BPF prog-id=15 op=UNLOAD Feb 9 19:36:38.275000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:36:38.275000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:36:38.603000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:36:38.618000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:36:38.628000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:36:38.628000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:36:38.629000 audit: BPF prog-id=18 op=LOAD Feb 9 19:36:38.629000 audit: BPF prog-id=19 op=LOAD Feb 9 19:36:38.629000 audit: BPF prog-id=20 op=LOAD Feb 9 19:36:38.629000 audit: BPF prog-id=16 op=UNLOAD Feb 9 19:36:38.629000 audit: BPF prog-id=17 op=UNLOAD Feb 9 19:36:38.701000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:36:38.732000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Feb 9 19:36:38.732000 audit[1152]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7fff9c1c8230 a2=4000 a3=7fff9c1c82cc items=0 ppid=1 pid=1152 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:36:38.732000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Feb 9 19:36:30.888688 /usr/lib/systemd/system-generators/torcx-generator[1058]: time="2024-02-09T19:36:30Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 19:36:38.206937 systemd[1]: Queued start job for default target multi-user.target. Feb 9 19:36:30.889251 /usr/lib/systemd/system-generators/torcx-generator[1058]: time="2024-02-09T19:36:30Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Feb 9 19:36:38.248586 systemd[1]: systemd-journald.service: Deactivated successfully. Feb 9 19:36:30.889273 /usr/lib/systemd/system-generators/torcx-generator[1058]: time="2024-02-09T19:36:30Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Feb 9 19:36:30.889313 /usr/lib/systemd/system-generators/torcx-generator[1058]: time="2024-02-09T19:36:30Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Feb 9 19:36:30.889324 /usr/lib/systemd/system-generators/torcx-generator[1058]: time="2024-02-09T19:36:30Z" level=debug msg="skipped missing lower profile" missing profile=oem Feb 9 19:36:30.889386 /usr/lib/systemd/system-generators/torcx-generator[1058]: time="2024-02-09T19:36:30Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Feb 9 19:36:30.889406 /usr/lib/systemd/system-generators/torcx-generator[1058]: time="2024-02-09T19:36:30Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Feb 9 19:36:30.889638 /usr/lib/systemd/system-generators/torcx-generator[1058]: time="2024-02-09T19:36:30Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Feb 9 19:36:30.889707 /usr/lib/systemd/system-generators/torcx-generator[1058]: time="2024-02-09T19:36:30Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Feb 9 19:36:30.889732 /usr/lib/systemd/system-generators/torcx-generator[1058]: time="2024-02-09T19:36:30Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Feb 9 19:36:30.890290 /usr/lib/systemd/system-generators/torcx-generator[1058]: time="2024-02-09T19:36:30Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Feb 9 19:36:30.890355 /usr/lib/systemd/system-generators/torcx-generator[1058]: time="2024-02-09T19:36:30Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Feb 9 19:36:30.890417 /usr/lib/systemd/system-generators/torcx-generator[1058]: 
time="2024-02-09T19:36:30Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.2: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.2 Feb 9 19:36:30.890438 /usr/lib/systemd/system-generators/torcx-generator[1058]: time="2024-02-09T19:36:30Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Feb 9 19:36:30.890465 /usr/lib/systemd/system-generators/torcx-generator[1058]: time="2024-02-09T19:36:30Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.2: no such file or directory" path=/var/lib/torcx/store/3510.3.2 Feb 9 19:36:30.890486 /usr/lib/systemd/system-generators/torcx-generator[1058]: time="2024-02-09T19:36:30Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Feb 9 19:36:37.203319 /usr/lib/systemd/system-generators/torcx-generator[1058]: time="2024-02-09T19:36:37Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 9 19:36:37.203603 /usr/lib/systemd/system-generators/torcx-generator[1058]: time="2024-02-09T19:36:37Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 9 19:36:37.203716 /usr/lib/systemd/system-generators/torcx-generator[1058]: time="2024-02-09T19:36:37Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 9 19:36:37.203885 /usr/lib/systemd/system-generators/torcx-generator[1058]: time="2024-02-09T19:36:37Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 9 19:36:37.203934 /usr/lib/systemd/system-generators/torcx-generator[1058]: time="2024-02-09T19:36:37Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Feb 9 19:36:37.203989 /usr/lib/systemd/system-generators/torcx-generator[1058]: time="2024-02-09T19:36:37Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Feb 9 19:36:38.745397 systemd[1]: Started systemd-journald.service. Feb 9 19:36:38.745000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:36:38.746252 systemd[1]: Mounted sys-kernel-debug.mount. Feb 9 19:36:38.748400 systemd[1]: Mounted sys-kernel-tracing.mount. Feb 9 19:36:38.750906 systemd[1]: Mounted tmp.mount. Feb 9 19:36:38.753057 systemd[1]: Finished flatcar-tmpfiles.service. 
Feb 9 19:36:38.754000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:36:38.755547 systemd[1]: Finished kmod-static-nodes.service. Feb 9 19:36:38.758000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:36:38.759599 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 9 19:36:38.759756 systemd[1]: Finished modprobe@configfs.service. Feb 9 19:36:38.762000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:36:38.762000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:36:38.763239 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 9 19:36:38.763403 systemd[1]: Finished modprobe@dm_mod.service. Feb 9 19:36:38.765000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:36:38.765000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:36:38.765605 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 9 19:36:38.765751 systemd[1]: Finished modprobe@drm.service. Feb 9 19:36:38.767000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:36:38.767000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:36:38.767905 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 9 19:36:38.768050 systemd[1]: Finished modprobe@efi_pstore.service. Feb 9 19:36:38.770000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:36:38.770000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:36:38.770580 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 9 19:36:38.770727 systemd[1]: Finished modprobe@fuse.service. Feb 9 19:36:38.773000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:36:38.773000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:36:38.773903 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 9 19:36:38.774130 systemd[1]: Finished modprobe@loop.service. Feb 9 19:36:38.775000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:36:38.775000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:36:38.776738 systemd[1]: Finished systemd-network-generator.service. Feb 9 19:36:38.778000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:36:38.779576 systemd[1]: Finished systemd-remount-fs.service. Feb 9 19:36:38.782050 systemd[1]: Reached target network-pre.target. Feb 9 19:36:38.781000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:36:38.785130 systemd[1]: Mounting sys-fs-fuse-connections.mount... Feb 9 19:36:38.788398 systemd[1]: Mounting sys-kernel-config.mount... Feb 9 19:36:38.792796 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 9 19:36:38.804569 systemd[1]: Starting systemd-hwdb-update.service... Feb 9 19:36:38.807959 systemd[1]: Starting systemd-journal-flush.service... Feb 9 19:36:38.810203 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 9 19:36:38.811522 systemd[1]: Starting systemd-random-seed.service... Feb 9 19:36:38.813610 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Feb 9 19:36:38.815031 systemd[1]: Starting systemd-sysusers.service... Feb 9 19:36:38.821259 systemd[1]: Finished systemd-modules-load.service. Feb 9 19:36:38.824000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:36:38.824806 systemd[1]: Mounted sys-fs-fuse-connections.mount. Feb 9 19:36:38.827141 systemd[1]: Mounted sys-kernel-config.mount. Feb 9 19:36:38.830875 systemd[1]: Starting systemd-sysctl.service... Feb 9 19:36:38.849252 systemd[1]: Finished systemd-random-seed.service. Feb 9 19:36:38.868000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:36:38.868727 systemd[1]: Reached target first-boot-complete.target. Feb 9 19:36:38.879909 systemd-journald[1152]: Time spent on flushing to /var/log/journal/11a21245c4bb40a7b8a96d0705061dac is 34.057ms for 1206 entries. 
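A quick sanity check of the flush statistic just logged: 34.057 ms for 1206 entries works out to roughly 28 microseconds per entry. Trivial arithmetic, with the numbers copied from the log:

    flush_ms, entries = 34.057, 1206
    print(f"{flush_ms / entries * 1000:.1f} us per entry")  # -> 28.2 us per entry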
Feb 9 19:36:38.879909 systemd-journald[1152]: System Journal (/var/log/journal/11a21245c4bb40a7b8a96d0705061dac) is 8.0M, max 2.6G, 2.6G free. Feb 9 19:36:38.949223 systemd-journald[1152]: Received client request to flush runtime journal. Feb 9 19:36:38.905000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:36:38.933000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:36:38.903225 systemd[1]: Finished systemd-udev-trigger.service. Feb 9 19:36:38.949660 udevadm[1183]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Feb 9 19:36:38.907038 systemd[1]: Starting systemd-udev-settle.service... Feb 9 19:36:38.952000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:36:38.931422 systemd[1]: Finished systemd-sysctl.service. Feb 9 19:36:38.950404 systemd[1]: Finished systemd-journal-flush.service. Feb 9 19:36:39.510000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:36:39.508483 systemd[1]: Finished systemd-sysusers.service. Feb 9 19:36:39.512714 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Feb 9 19:36:39.728709 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Feb 9 19:36:39.731000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:36:39.858621 systemd[1]: Finished systemd-hwdb-update.service. Feb 9 19:36:39.861000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:36:39.862000 audit: BPF prog-id=21 op=LOAD Feb 9 19:36:39.862000 audit: BPF prog-id=22 op=LOAD Feb 9 19:36:39.862000 audit: BPF prog-id=7 op=UNLOAD Feb 9 19:36:39.862000 audit: BPF prog-id=8 op=UNLOAD Feb 9 19:36:39.863449 systemd[1]: Starting systemd-udevd.service... Feb 9 19:36:39.881595 systemd-udevd[1187]: Using default interface naming scheme 'v252'. Feb 9 19:36:40.105816 systemd[1]: Started systemd-udevd.service. Feb 9 19:36:40.108000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:36:40.109000 audit: BPF prog-id=23 op=LOAD Feb 9 19:36:40.110842 systemd[1]: Starting systemd-networkd.service... Feb 9 19:36:40.171000 audit: BPF prog-id=24 op=LOAD Feb 9 19:36:40.171000 audit: BPF prog-id=25 op=LOAD Feb 9 19:36:40.171000 audit: BPF prog-id=26 op=LOAD Feb 9 19:36:40.172764 systemd[1]: Starting systemd-userdbd.service... 
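The audit stream in this section is dotted with paired "BPF prog-id=N op=LOAD/UNLOAD" records, presumably systemd re-installing its cgroup BPF filters as units start and reload. A small sketch that replays such records to see which program ids remain installed afterwards; the sample events are copied from the four records just above:

    import re

    events = [
        "audit: BPF prog-id=21 op=LOAD",
        "audit: BPF prog-id=22 op=LOAD",
        "audit: BPF prog-id=7 op=UNLOAD",
        "audit: BPF prog-id=8 op=UNLOAD",
    ]
    live: set[int] = set()
    for ev in events:
        prog_id, op = re.search(r"prog-id=(\d+) op=(LOAD|UNLOAD)", ev).groups()
        (live.add if op == "LOAD" else live.discard)(int(prog_id))
    print(sorted(live))  # -> [21, 22]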
Feb 9 19:36:40.192516 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped. Feb 9 19:36:40.226318 systemd[1]: Started systemd-userdbd.service. Feb 9 19:36:40.229000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:36:40.251390 kernel: mousedev: PS/2 mouse device common for all mice Feb 9 19:36:40.266402 kernel: hv_vmbus: registering driver hyperv_fb Feb 9 19:36:40.271000 audit[1201]: AVC avc: denied { confidentiality } for pid=1201 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Feb 9 19:36:40.278392 kernel: hv_vmbus: registering driver hv_balloon Feb 9 19:36:40.301385 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Feb 9 19:36:40.271000 audit[1201]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=5650bf03f330 a1=f884 a2=7f2c1979bbc5 a3=5 items=12 ppid=1187 pid=1201 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:36:40.271000 audit: CWD cwd="/" Feb 9 19:36:40.271000 audit: PATH item=0 name=(null) inode=1237 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:36:40.271000 audit: PATH item=1 name=(null) inode=15628 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:36:40.271000 audit: PATH item=2 name=(null) inode=15628 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:36:40.271000 audit: PATH item=3 name=(null) inode=15629 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:36:40.271000 audit: PATH item=4 name=(null) inode=15628 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:36:40.271000 audit: PATH item=5 name=(null) inode=15630 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:36:40.271000 audit: PATH item=6 name=(null) inode=15628 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:36:40.271000 audit: PATH item=7 name=(null) inode=15631 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:36:40.271000 audit: PATH item=8 name=(null) inode=15628 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:36:40.271000 audit: PATH item=9 name=(null) inode=15632 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:36:40.271000 audit: PATH item=10 name=(null) inode=15628 
dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:36:40.271000 audit: PATH item=11 name=(null) inode=15633 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:36:40.271000 audit: PROCTITLE proctitle="(udev-worker)" Feb 9 19:36:40.331020 kernel: hv_utils: Registering HyperV Utility Driver Feb 9 19:36:40.331158 kernel: hv_vmbus: registering driver hv_utils Feb 9 19:36:40.340386 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Feb 9 19:36:40.340492 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Feb 9 19:36:40.346796 kernel: Console: switching to colour dummy device 80x25 Feb 9 19:36:40.355135 kernel: Console: switching to colour frame buffer device 128x48 Feb 9 19:36:40.357432 kernel: hv_utils: Heartbeat IC version 3.0 Feb 9 19:36:40.357504 kernel: hv_utils: Shutdown IC version 3.2 Feb 9 19:36:40.357532 kernel: hv_utils: TimeSync IC version 4.0 Feb 9 19:36:40.614579 kernel: BTRFS info: devid 1 device path /dev/disk/by-label/OEM changed to /dev/sda6 scanned by (udev-worker) (1189) Feb 9 19:36:40.692087 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Feb 9 19:36:40.753461 systemd-networkd[1193]: lo: Link UP Feb 9 19:36:40.753909 systemd-networkd[1193]: lo: Gained carrier Feb 9 19:36:40.754704 systemd-networkd[1193]: Enumeration completed Feb 9 19:36:40.754963 systemd[1]: Started systemd-networkd.service. Feb 9 19:36:40.756000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:36:40.759086 systemd[1]: Starting systemd-networkd-wait-online.service... Feb 9 19:36:40.783145 systemd-networkd[1193]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 9 19:36:40.794577 kernel: KVM: vmx: using Hyper-V Enlightened VMCS Feb 9 19:36:40.854585 kernel: mlx5_core adef:00:02.0 enP44527s1: Link up Feb 9 19:36:40.920082 kernel: hv_netvsc 000d3ab9-11fe-000d-3ab9-11fe000d3ab9 eth0: Data path switched to VF: enP44527s1 Feb 9 19:36:40.919961 systemd-networkd[1193]: enP44527s1: Link UP Feb 9 19:36:40.920104 systemd-networkd[1193]: eth0: Link UP Feb 9 19:36:40.920109 systemd-networkd[1193]: eth0: Gained carrier Feb 9 19:36:40.924955 systemd-networkd[1193]: enP44527s1: Gained carrier Feb 9 19:36:40.942940 systemd[1]: Finished systemd-udev-settle.service. Feb 9 19:36:40.944000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:36:40.946988 systemd[1]: Starting lvm2-activation-early.service... Feb 9 19:36:40.951711 systemd-networkd[1193]: eth0: DHCPv4 address 10.200.8.13/24, gateway 10.200.8.1 acquired from 168.63.129.16 Feb 9 19:36:41.176031 lvm[1265]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 9 19:36:41.205799 systemd[1]: Finished lvm2-activation-early.service. Feb 9 19:36:41.207000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:36:41.208875 systemd[1]: Reached target cryptsetup.target. Feb 9 19:36:41.212768 systemd[1]: Starting lvm2-activation.service... Feb 9 19:36:41.217736 lvm[1266]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 9 19:36:41.239714 systemd[1]: Finished lvm2-activation.service. Feb 9 19:36:41.240000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:36:41.242708 systemd[1]: Reached target local-fs-pre.target. Feb 9 19:36:41.245079 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 9 19:36:41.245117 systemd[1]: Reached target local-fs.target. Feb 9 19:36:41.247209 systemd[1]: Reached target machines.target. Feb 9 19:36:41.250691 systemd[1]: Starting ldconfig.service... Feb 9 19:36:41.252995 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Feb 9 19:36:41.253101 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 9 19:36:41.254393 systemd[1]: Starting systemd-boot-update.service... Feb 9 19:36:41.257981 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Feb 9 19:36:41.261950 systemd[1]: Starting systemd-machine-id-commit.service... Feb 9 19:36:41.264222 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met. Feb 9 19:36:41.264348 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met. Feb 9 19:36:41.265704 systemd[1]: Starting systemd-tmpfiles-setup.service... Feb 9 19:36:41.285684 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1268 (bootctl) Feb 9 19:36:41.287480 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Feb 9 19:36:41.294968 systemd-tmpfiles[1271]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Feb 9 19:36:42.040000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:36:42.035052 systemd-networkd[1193]: eth0: Gained IPv6LL Feb 9 19:36:42.038635 systemd[1]: Finished systemd-networkd-wait-online.service. Feb 9 19:36:42.410912 systemd-tmpfiles[1271]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 9 19:36:42.422166 systemd-tmpfiles[1271]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 9 19:36:42.426575 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Feb 9 19:36:42.425000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:36:42.597736 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 9 19:36:42.598435 systemd[1]: Finished systemd-machine-id-commit.service. 
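Unit names like dev-disk-by\x2dlabel-OEM.device above come from systemd's path escaping: the leading "/" is dropped, remaining "/" become "-", and other special bytes are hex-escaped, so the "-" inside "by-label" becomes \x2d. A simplified sketch of that mapping (the real rules, as implemented by systemd-escape, also special-case leading dots and empty components):

    def escape_path_to_unit(path: str, suffix: str) -> str:
        """Simplified sketch of `systemd-escape --path --suffix=<suffix>`."""
        escaped_parts = []
        for part in path.strip("/").split("/"):
            out = []
            for i, ch in enumerate(part):
                if ch.isalnum() or ch in ":_" or (ch == "." and i > 0):
                    out.append(ch)
                else:
                    out.append("".join(f"\\x{b:02x}" for b in ch.encode()))
            escaped_parts.append("".join(out))
        return "-".join(escaped_parts) + "." + suffix

    print(escape_path_to_unit("/dev/disk/by-label/OEM", "device"))
    # -> dev-disk-by\x2dlabel-OEM.device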
Feb 9 19:36:42.597000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:36:43.148021 systemd-fsck[1277]: fsck.fat 4.2 (2021-01-31) Feb 9 19:36:43.148021 systemd-fsck[1277]: /dev/sda1: 789 files, 115339/258078 clusters Feb 9 19:36:43.150428 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Feb 9 19:36:43.152000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:36:43.155320 systemd[1]: Mounting boot.mount... Feb 9 19:36:43.166973 systemd[1]: Mounted boot.mount. Feb 9 19:36:43.180958 systemd[1]: Finished systemd-boot-update.service. Feb 9 19:36:43.183000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:36:43.822371 systemd[1]: Finished systemd-tmpfiles-setup.service. Feb 9 19:36:43.823000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:36:43.826539 systemd[1]: Starting audit-rules.service... Feb 9 19:36:43.827509 kernel: kauditd_printk_skb: 79 callbacks suppressed Feb 9 19:36:43.827580 kernel: audit: type=1130 audit(1707507403.823:164): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:36:43.843609 systemd[1]: Starting clean-ca-certificates.service... Feb 9 19:36:43.847171 systemd[1]: Starting systemd-journal-catalog-update.service... Feb 9 19:36:43.855699 kernel: audit: type=1334 audit(1707507403.849:165): prog-id=27 op=LOAD Feb 9 19:36:43.849000 audit: BPF prog-id=27 op=LOAD Feb 9 19:36:43.852242 systemd[1]: Starting systemd-resolved.service... Feb 9 19:36:43.857000 audit: BPF prog-id=28 op=LOAD Feb 9 19:36:43.860252 systemd[1]: Starting systemd-timesyncd.service... Feb 9 19:36:43.862630 kernel: audit: type=1334 audit(1707507403.857:166): prog-id=28 op=LOAD Feb 9 19:36:43.866021 systemd[1]: Starting systemd-update-utmp.service... Feb 9 19:36:43.891000 audit[1293]: SYSTEM_BOOT pid=1293 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Feb 9 19:36:43.894647 systemd[1]: Finished systemd-update-utmp.service. Feb 9 19:36:43.906582 kernel: audit: type=1127 audit(1707507403.891:167): pid=1293 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Feb 9 19:36:43.906000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:36:43.920168 kernel: audit: type=1130 audit(1707507403.906:168): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:36:43.924100 systemd[1]: Finished clean-ca-certificates.service. Feb 9 19:36:43.925000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:36:43.926643 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 9 19:36:43.937598 kernel: audit: type=1130 audit(1707507403.925:169): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:36:43.971926 systemd[1]: Started systemd-timesyncd.service. Feb 9 19:36:43.972000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:36:43.974690 systemd[1]: Reached target time-set.target. Feb 9 19:36:43.988851 kernel: audit: type=1130 audit(1707507403.972:170): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:36:43.994426 systemd[1]: Finished systemd-journal-catalog-update.service. Feb 9 19:36:43.995000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:36:44.011592 kernel: audit: type=1130 audit(1707507403.995:171): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:36:44.022335 systemd-resolved[1287]: Positive Trust Anchors: Feb 9 19:36:44.022353 systemd-resolved[1287]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 9 19:36:44.022395 systemd-resolved[1287]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Feb 9 19:36:44.110000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Feb 9 19:36:44.112757 systemd[1]: Finished audit-rules.service. Feb 9 19:36:44.113872 augenrules[1304]: No rules Feb 9 19:36:44.117916 systemd-resolved[1287]: Using system hostname 'ci-3510.3.2-a-4c52a92a5f'. 
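The "Positive Trust Anchors" line above is the DNSSEC root trust anchor that systemd-resolved ships with. Its fields follow the DS record layout (owner, class, type, key tag, algorithm, digest type, digest): tag 20326 identifies the current root KSK, algorithm 8 is RSASHA256, digest type 2 is SHA-256. A sketch pulling the logged record apart:

    ds = (". IN DS 20326 8 2 "
          "e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d")
    owner, _cls, rtype, key_tag, alg, digest_type, digest = ds.split()
    assert (rtype, key_tag, alg, digest_type) == ("DS", "20326", "8", "2")
    print(f"root KSK tag {key_tag}, SHA-256 digest {digest[:16]}...")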
Feb 9 19:36:44.110000 audit[1304]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffc58f738b0 a2=420 a3=0 items=0 ppid=1283 pid=1304 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:36:44.138984 kernel: audit: type=1305 audit(1707507404.110:172): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Feb 9 19:36:44.139075 kernel: audit: type=1300 audit(1707507404.110:172): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffc58f738b0 a2=420 a3=0 items=0 ppid=1283 pid=1304 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:36:44.110000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Feb 9 19:36:44.140970 systemd[1]: Started systemd-resolved.service. Feb 9 19:36:44.143280 systemd[1]: Reached target network.target. Feb 9 19:36:44.145294 systemd[1]: Reached target network-online.target. Feb 9 19:36:44.147513 systemd[1]: Reached target nss-lookup.target. Feb 9 19:36:48.120709 ldconfig[1267]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 9 19:36:48.130527 systemd[1]: Finished ldconfig.service. Feb 9 19:36:48.134205 systemd[1]: Starting systemd-update-done.service... Feb 9 19:36:48.143077 systemd[1]: Finished systemd-update-done.service. Feb 9 19:36:48.145612 systemd[1]: Reached target sysinit.target. Feb 9 19:36:48.147858 systemd[1]: Started motdgen.path. Feb 9 19:36:48.149939 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Feb 9 19:36:48.152968 systemd[1]: Started logrotate.timer. Feb 9 19:36:48.155253 systemd[1]: Started mdadm.timer. Feb 9 19:36:48.157065 systemd[1]: Started systemd-tmpfiles-clean.timer. Feb 9 19:36:48.159701 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 9 19:36:48.159748 systemd[1]: Reached target paths.target. Feb 9 19:36:48.161565 systemd[1]: Reached target timers.target. Feb 9 19:36:48.163955 systemd[1]: Listening on dbus.socket. Feb 9 19:36:48.166506 systemd[1]: Starting docker.socket... Feb 9 19:36:48.180144 systemd[1]: Listening on sshd.socket. Feb 9 19:36:48.182290 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 9 19:36:48.182822 systemd[1]: Listening on docker.socket. Feb 9 19:36:48.184780 systemd[1]: Reached target sockets.target. Feb 9 19:36:48.186723 systemd[1]: Reached target basic.target. Feb 9 19:36:48.188756 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 9 19:36:48.188788 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 9 19:36:48.189994 systemd[1]: Starting containerd.service... Feb 9 19:36:48.193084 systemd[1]: Starting dbus.service... Feb 9 19:36:48.195954 systemd[1]: Starting enable-oem-cloudinit.service... Feb 9 19:36:48.199375 systemd[1]: Starting extend-filesystems.service... 
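The SYSCALL audit records in this section identify calls only by number, with arch=c000003e marking x86-64. A lookup sketch for the numbers that appear here; the table entries are standard x86-64 ABI values:

    X86_64 = {44: "sendto", 46: "sendmsg", 175: "init_module",
              188: "setxattr", 258: "mkdirat"}
    for n in (188, 258, 46, 175, 44):
        print(f"syscall={n} -> {X86_64[n]}")

That reading fits the surrounding records: setxattr and mkdirat from torcx-generator labelling its tmpfs, sendmsg from journald, init_module from the udev worker (hence its tracefs PATH items), and sendto for auditctl's netlink rule upload just above.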
Feb 9 19:36:48.201641 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Feb 9 19:36:48.203183 systemd[1]: Starting motdgen.service... Feb 9 19:36:48.210765 systemd[1]: Started nvidia.service. Feb 9 19:36:48.214064 systemd[1]: Starting prepare-cni-plugins.service... Feb 9 19:36:48.217419 systemd[1]: Starting prepare-critools.service... Feb 9 19:36:48.222424 systemd[1]: Starting prepare-helm.service... Feb 9 19:36:48.225974 systemd[1]: Starting ssh-key-proc-cmdline.service... Feb 9 19:36:48.229567 systemd[1]: Starting sshd-keygen.service... Feb 9 19:36:48.235179 systemd[1]: Starting systemd-logind.service... Feb 9 19:36:48.237399 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 9 19:36:48.237501 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 9 19:36:48.238089 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 9 19:36:48.239218 systemd[1]: Starting update-engine.service... Feb 9 19:36:48.242947 systemd[1]: Starting update-ssh-keys-after-ignition.service... Feb 9 19:36:48.254199 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 9 19:36:48.255189 systemd[1]: Finished ssh-key-proc-cmdline.service. Feb 9 19:36:48.275392 jq[1329]: true Feb 9 19:36:48.280024 jq[1314]: false Feb 9 19:36:48.280599 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 9 19:36:48.280827 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Feb 9 19:36:48.289717 extend-filesystems[1315]: Found sda Feb 9 19:36:48.289717 extend-filesystems[1315]: Found sda1 Feb 9 19:36:48.289717 extend-filesystems[1315]: Found sda2 Feb 9 19:36:48.289717 extend-filesystems[1315]: Found sda3 Feb 9 19:36:48.289717 extend-filesystems[1315]: Found usr Feb 9 19:36:48.289717 extend-filesystems[1315]: Found sda4 Feb 9 19:36:48.289717 extend-filesystems[1315]: Found sda6 Feb 9 19:36:48.289717 extend-filesystems[1315]: Found sda7 Feb 9 19:36:48.289717 extend-filesystems[1315]: Found sda9 Feb 9 19:36:48.289717 extend-filesystems[1315]: Checking size of /dev/sda9 Feb 9 19:36:48.327872 jq[1339]: true Feb 9 19:36:48.328740 systemd[1]: motdgen.service: Deactivated successfully. Feb 9 19:36:48.328968 systemd[1]: Finished motdgen.service. Feb 9 19:36:48.341663 systemd-logind[1327]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Feb 9 19:36:48.350027 systemd-logind[1327]: New seat seat0. Feb 9 19:36:48.363681 env[1342]: time="2024-02-09T19:36:48.363496700Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Feb 9 19:36:48.394793 extend-filesystems[1315]: Old size kept for /dev/sda9 Feb 9 19:36:48.397357 extend-filesystems[1315]: Found sr0 Feb 9 19:36:48.399011 env[1342]: time="2024-02-09T19:36:48.398750600Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 9 19:36:48.399356 env[1342]: time="2024-02-09T19:36:48.399134400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 9 19:36:48.399925 systemd[1]: extend-filesystems.service: Deactivated successfully. 
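The "Found sdaN" lines above are extend-filesystems enumerating the disk before deciding the root partition needs no growth ("Old size kept for /dev/sda9"). A rough equivalent of that enumeration, reading the kernel's partition list; a sketch only, since the real service also inspects filesystem sizes:

    from pathlib import Path

    # /proc/partitions: two header lines, then "major minor #blocks name".
    for line in Path("/proc/partitions").read_text().splitlines()[2:]:
        if not line.strip():
            continue
        major, minor, blocks, name = line.split()
        if name.startswith("sda"):
            print("Found", name)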
Feb 9 19:36:48.400135 systemd[1]: Finished extend-filesystems.service. Feb 9 19:36:48.411477 tar[1334]: ./ Feb 9 19:36:48.411477 tar[1334]: ./loopback Feb 9 19:36:48.417664 tar[1335]: crictl Feb 9 19:36:48.420041 tar[1336]: linux-amd64/helm Feb 9 19:36:48.420772 env[1342]: time="2024-02-09T19:36:48.420371300Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.148-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 9 19:36:48.420772 env[1342]: time="2024-02-09T19:36:48.420423900Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 9 19:36:48.420772 env[1342]: time="2024-02-09T19:36:48.420737400Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 9 19:36:48.420772 env[1342]: time="2024-02-09T19:36:48.420761900Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 9 19:36:48.420969 env[1342]: time="2024-02-09T19:36:48.420780200Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Feb 9 19:36:48.420969 env[1342]: time="2024-02-09T19:36:48.420799000Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 9 19:36:48.420969 env[1342]: time="2024-02-09T19:36:48.420897000Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 9 19:36:48.421492 env[1342]: time="2024-02-09T19:36:48.421197300Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 9 19:36:48.421492 env[1342]: time="2024-02-09T19:36:48.421391200Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 9 19:36:48.421492 env[1342]: time="2024-02-09T19:36:48.421413300Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 9 19:36:48.421492 env[1342]: time="2024-02-09T19:36:48.421475800Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Feb 9 19:36:48.421492 env[1342]: time="2024-02-09T19:36:48.421491700Z" level=info msg="metadata content store policy set" policy=shared Feb 9 19:36:48.446772 env[1342]: time="2024-02-09T19:36:48.446510500Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 9 19:36:48.446772 env[1342]: time="2024-02-09T19:36:48.446626600Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 9 19:36:48.446772 env[1342]: time="2024-02-09T19:36:48.446646700Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 9 19:36:48.446772 env[1342]: time="2024-02-09T19:36:48.446734100Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." 
type=io.containerd.service.v1 Feb 9 19:36:48.447047 env[1342]: time="2024-02-09T19:36:48.446819300Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 9 19:36:48.447047 env[1342]: time="2024-02-09T19:36:48.446853800Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 9 19:36:48.447047 env[1342]: time="2024-02-09T19:36:48.446870400Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 9 19:36:48.447047 env[1342]: time="2024-02-09T19:36:48.446888400Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 9 19:36:48.447047 env[1342]: time="2024-02-09T19:36:48.446904700Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Feb 9 19:36:48.447047 env[1342]: time="2024-02-09T19:36:48.446932200Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 9 19:36:48.447047 env[1342]: time="2024-02-09T19:36:48.446951500Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 9 19:36:48.447047 env[1342]: time="2024-02-09T19:36:48.446971500Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 9 19:36:48.447333 env[1342]: time="2024-02-09T19:36:48.447121800Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 9 19:36:48.447333 env[1342]: time="2024-02-09T19:36:48.447254400Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 9 19:36:48.448234 env[1342]: time="2024-02-09T19:36:48.447791200Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 9 19:36:48.448234 env[1342]: time="2024-02-09T19:36:48.447840700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 9 19:36:48.448234 env[1342]: time="2024-02-09T19:36:48.447881900Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 9 19:36:48.448234 env[1342]: time="2024-02-09T19:36:48.447970800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 9 19:36:48.448234 env[1342]: time="2024-02-09T19:36:48.447991700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 9 19:36:48.448234 env[1342]: time="2024-02-09T19:36:48.448079900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 9 19:36:48.448234 env[1342]: time="2024-02-09T19:36:48.448111100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 9 19:36:48.448234 env[1342]: time="2024-02-09T19:36:48.448130000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 9 19:36:48.448234 env[1342]: time="2024-02-09T19:36:48.448148400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 9 19:36:48.448234 env[1342]: time="2024-02-09T19:36:48.448164900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." 
type=io.containerd.grpc.v1 Feb 9 19:36:48.462784 env[1342]: time="2024-02-09T19:36:48.448253200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 9 19:36:48.462784 env[1342]: time="2024-02-09T19:36:48.448272200Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 9 19:36:48.462784 env[1342]: time="2024-02-09T19:36:48.459472100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 9 19:36:48.462784 env[1342]: time="2024-02-09T19:36:48.459515400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 9 19:36:48.462784 env[1342]: time="2024-02-09T19:36:48.459537100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 9 19:36:48.462784 env[1342]: time="2024-02-09T19:36:48.459567400Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 9 19:36:48.462784 env[1342]: time="2024-02-09T19:36:48.459592600Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Feb 9 19:36:48.462784 env[1342]: time="2024-02-09T19:36:48.459612700Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 9 19:36:48.462784 env[1342]: time="2024-02-09T19:36:48.459654600Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Feb 9 19:36:48.462784 env[1342]: time="2024-02-09T19:36:48.459707200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Feb 9 19:36:48.461470 systemd[1]: Started containerd.service. 
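Several containerd snapshotters above are skipped with reasons rather than errors: aufs because the module is absent, btrfs and zfs because their state directories sit on ext4. A sketch of that last check, resolving which filesystem a path lives on via /proc/mounts with a longest-prefix match (simplified; it ignores octal escapes in mount points):

    from pathlib import Path

    def fs_type(path: str) -> str:
        best, fstype = "", "unknown"
        for line in Path("/proc/mounts").read_text().splitlines():
            _dev, mnt, typ, *_ = line.split()
            if (path == mnt or path.startswith(mnt.rstrip("/") + "/")) \
                    and len(mnt) > len(best):
                best, fstype = mnt, typ
        return fstype

    # containerd skipped the btrfs snapshotter because this is not "btrfs":
    print(fs_type("/var/lib/containerd/io.containerd.snapshotter.v1.btrfs"))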
Feb 9 19:36:48.463145 env[1342]: time="2024-02-09T19:36:48.459988600Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 9 19:36:48.463145 env[1342]: time="2024-02-09T19:36:48.460070700Z" level=info msg="Connect containerd service" Feb 9 19:36:48.463145 env[1342]: time="2024-02-09T19:36:48.460119400Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 9 19:36:48.463145 env[1342]: time="2024-02-09T19:36:48.460931100Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 9 19:36:48.463145 env[1342]: time="2024-02-09T19:36:48.461276300Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 9 19:36:48.463145 env[1342]: time="2024-02-09T19:36:48.461328400Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Feb 9 19:36:48.492472 env[1342]: time="2024-02-09T19:36:48.474063800Z" level=info msg="Start subscribing containerd event" Feb 9 19:36:48.492472 env[1342]: time="2024-02-09T19:36:48.474414800Z" level=info msg="Start recovering state" Feb 9 19:36:48.492472 env[1342]: time="2024-02-09T19:36:48.474528800Z" level=info msg="Start event monitor" Feb 9 19:36:48.492472 env[1342]: time="2024-02-09T19:36:48.474592200Z" level=info msg="Start snapshots syncer" Feb 9 19:36:48.492472 env[1342]: time="2024-02-09T19:36:48.474613700Z" level=info msg="Start cni network conf syncer for default" Feb 9 19:36:48.492472 env[1342]: time="2024-02-09T19:36:48.474625000Z" level=info msg="Start streaming server" Feb 9 19:36:48.492472 env[1342]: time="2024-02-09T19:36:48.485640400Z" level=info msg="containerd successfully booted in 0.125358s" Feb 9 19:36:48.466921 systemd[1]: Finished update-ssh-keys-after-ignition.service. Feb 9 19:36:48.492838 bash[1364]: Updated "/home/core/.ssh/authorized_keys" Feb 9 19:36:48.540342 tar[1334]: ./bandwidth Feb 9 19:36:48.545974 dbus-daemon[1313]: [system] SELinux support is enabled Feb 9 19:36:48.546202 systemd[1]: Started dbus.service. Feb 9 19:36:48.551030 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 9 19:36:48.551067 systemd[1]: Reached target system-config.target. Feb 9 19:36:48.553647 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 9 19:36:48.553673 systemd[1]: Reached target user-config.target. Feb 9 19:36:48.557539 systemd[1]: Started systemd-logind.service. Feb 9 19:36:48.559900 dbus-daemon[1313]: [system] Successfully activated service 'org.freedesktop.systemd1' Feb 9 19:36:48.664938 systemd[1]: nvidia.service: Deactivated successfully. Feb 9 19:36:48.682623 tar[1334]: ./ptp Feb 9 19:36:48.802573 tar[1334]: ./vlan Feb 9 19:36:48.910863 tar[1334]: ./host-device Feb 9 19:36:49.000871 tar[1334]: ./tuning Feb 9 19:36:49.069161 tar[1334]: ./vrf Feb 9 19:36:49.118578 update_engine[1328]: I0209 19:36:49.118002 1328 main.cc:92] Flatcar Update Engine starting Feb 9 19:36:49.145583 tar[1334]: ./sbr Feb 9 19:36:49.161740 systemd[1]: Started update-engine.service. Feb 9 19:36:49.164282 update_engine[1328]: I0209 19:36:49.163040 1328 update_check_scheduler.cc:74] Next update check in 8m21s Feb 9 19:36:49.166751 systemd[1]: Started locksmithd.service. Feb 9 19:36:49.235352 tar[1334]: ./tap Feb 9 19:36:49.319259 tar[1334]: ./dhcp Feb 9 19:36:49.439124 tar[1336]: linux-amd64/LICENSE Feb 9 19:36:49.439668 tar[1336]: linux-amd64/README.md Feb 9 19:36:49.452417 systemd[1]: Finished prepare-helm.service. Feb 9 19:36:49.496128 tar[1334]: ./static Feb 9 19:36:49.525162 tar[1334]: ./firewall Feb 9 19:36:49.601066 tar[1334]: ./macvlan Feb 9 19:36:49.642967 systemd[1]: Finished prepare-critools.service. Feb 9 19:36:49.672060 tar[1334]: ./dummy Feb 9 19:36:49.715991 tar[1334]: ./bridge Feb 9 19:36:49.764869 tar[1334]: ./ipvlan Feb 9 19:36:49.809394 tar[1334]: ./portmap Feb 9 19:36:49.851403 tar[1334]: ./host-local Feb 9 19:36:49.941980 systemd[1]: Finished prepare-cni-plugins.service. Feb 9 19:36:50.004419 sshd_keygen[1337]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 9 19:36:50.025393 systemd[1]: Finished sshd-keygen.service. Feb 9 19:36:50.029803 systemd[1]: Starting issuegen.service... 
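The tar[1334] lines above are the member names of the CNI plugin archive as prepare-cni-plugins unpacks it, and the containerd config dump earlier sets NetworkPluginBinDir:/opt/cni/bin, where those binaries are consumed from. A sketch of the same unpacking step; the archive filename here is a hypothetical local copy, not a path from the log:

    import tarfile

    # Hypothetical local copy of the CNI plugin archive; the member list
    # mirrors the tar[1334] lines above (./loopback, ./bandwidth, ...).
    with tarfile.open("cni-plugins.tgz") as tar:
        for member in tar:
            print(member.name)
            tar.extract(member, path="/opt/cni/bin")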
Feb 9 19:36:50.036737 systemd[1]: Started waagent.service. Feb 9 19:36:50.039762 systemd[1]: issuegen.service: Deactivated successfully. Feb 9 19:36:50.039954 systemd[1]: Finished issuegen.service. Feb 9 19:36:50.043891 systemd[1]: Starting systemd-user-sessions.service... Feb 9 19:36:50.067489 systemd[1]: Finished systemd-user-sessions.service. Feb 9 19:36:50.071498 systemd[1]: Started getty@tty1.service. Feb 9 19:36:50.075103 systemd[1]: Started serial-getty@ttyS0.service. Feb 9 19:36:50.077943 systemd[1]: Reached target getty.target. Feb 9 19:36:50.080176 systemd[1]: Reached target multi-user.target. Feb 9 19:36:50.083963 systemd[1]: Starting systemd-update-utmp-runlevel.service... Feb 9 19:36:50.092241 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Feb 9 19:36:50.092427 systemd[1]: Finished systemd-update-utmp-runlevel.service. Feb 9 19:36:50.095219 systemd[1]: Startup finished in 876ms (firmware) + 19.997s (loader) + 999ms (kernel) + 15.998s (initrd) + 21.174s (userspace) = 59.046s. Feb 9 19:36:50.373520 login[1436]: pam_lastlog(login:session): file /var/log/lastlog is locked/write Feb 9 19:36:50.375315 login[1437]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Feb 9 19:36:50.395599 systemd[1]: Created slice user-500.slice. Feb 9 19:36:50.397250 systemd[1]: Starting user-runtime-dir@500.service... Feb 9 19:36:50.402825 systemd-logind[1327]: New session 2 of user core. Feb 9 19:36:50.407983 systemd[1]: Finished user-runtime-dir@500.service. Feb 9 19:36:50.411457 systemd[1]: Starting user@500.service... Feb 9 19:36:50.424239 (systemd)[1443]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:36:50.576846 systemd[1443]: Queued start job for default target default.target. Feb 9 19:36:50.577502 systemd[1443]: Reached target paths.target. Feb 9 19:36:50.577530 systemd[1443]: Reached target sockets.target. Feb 9 19:36:50.577564 systemd[1443]: Reached target timers.target. Feb 9 19:36:50.577582 systemd[1443]: Reached target basic.target. Feb 9 19:36:50.577640 systemd[1443]: Reached target default.target. Feb 9 19:36:50.577677 systemd[1443]: Startup finished in 146ms. Feb 9 19:36:50.578074 systemd[1]: Started user@500.service. Feb 9 19:36:50.579472 systemd[1]: Started session-2.scope. Feb 9 19:36:50.873032 locksmithd[1418]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 9 19:36:51.373850 login[1436]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Feb 9 19:36:51.378922 systemd[1]: Started session-1.scope. Feb 9 19:36:51.379445 systemd-logind[1327]: New session 1 of user core. Feb 9 19:36:54.481659 systemd-timesyncd[1289]: Timed out waiting for reply from 162.159.200.123:123 (0.flatcar.pool.ntp.org). Feb 9 19:36:54.485104 systemd-timesyncd[1289]: Contacted time server 85.91.1.164:123 (0.flatcar.pool.ntp.org). Feb 9 19:36:54.485177 systemd-timesyncd[1289]: Initial clock synchronization to Fri 2024-02-09 19:36:54.485772 UTC. 
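The "Startup finished" line above breaks the boot into phases, with the loader and userspace dominating. Summing the rounded components, values copied from the log:

    parts = {"firmware": 0.876, "loader": 19.997, "kernel": 0.999,
             "initrd": 15.998, "userspace": 21.174}
    print(f"{sum(parts.values()):.3f}s")  # -> 59.044s

The logged total of 59.046s differs by 2 ms, presumably because systemd sums the exact microsecond values and rounds each component only for display.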
Feb 9 19:36:55.822644 waagent[1434]: 2024-02-09T19:36:55.822498Z INFO Daemon Daemon Azure Linux Agent Version:2.6.0.2 Feb 9 19:36:55.826699 waagent[1434]: 2024-02-09T19:36:55.826617Z INFO Daemon Daemon OS: flatcar 3510.3.2 Feb 9 19:36:55.829140 waagent[1434]: 2024-02-09T19:36:55.829071Z INFO Daemon Daemon Python: 3.9.16 Feb 9 19:36:55.831536 waagent[1434]: 2024-02-09T19:36:55.831462Z INFO Daemon Daemon Run daemon Feb 9 19:36:55.834202 waagent[1434]: 2024-02-09T19:36:55.834138Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='3510.3.2' Feb 9 19:36:55.848077 waagent[1434]: 2024-02-09T19:36:55.847948Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1. Feb 9 19:36:55.855608 waagent[1434]: 2024-02-09T19:36:55.855472Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Feb 9 19:36:55.881116 waagent[1434]: 2024-02-09T19:36:55.855931Z INFO Daemon Daemon cloud-init is enabled: False Feb 9 19:36:55.881116 waagent[1434]: 2024-02-09T19:36:55.856764Z INFO Daemon Daemon Using waagent for provisioning Feb 9 19:36:55.881116 waagent[1434]: 2024-02-09T19:36:55.858193Z INFO Daemon Daemon Activate resource disk Feb 9 19:36:55.881116 waagent[1434]: 2024-02-09T19:36:55.859677Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Feb 9 19:36:55.881116 waagent[1434]: 2024-02-09T19:36:55.867476Z INFO Daemon Daemon Found device: None Feb 9 19:36:55.881116 waagent[1434]: 2024-02-09T19:36:55.868244Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Feb 9 19:36:55.881116 waagent[1434]: 2024-02-09T19:36:55.869132Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Feb 9 19:36:55.881116 waagent[1434]: 2024-02-09T19:36:55.870894Z INFO Daemon Daemon Clean protocol and wireserver endpoint Feb 9 19:36:55.881116 waagent[1434]: 2024-02-09T19:36:55.871801Z INFO Daemon Daemon Running default provisioning handler Feb 9 19:36:55.882894 waagent[1434]: 2024-02-09T19:36:55.882776Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1. Feb 9 19:36:55.885716 waagent[1434]: 2024-02-09T19:36:55.885611Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Feb 9 19:36:55.886723 waagent[1434]: 2024-02-09T19:36:55.886670Z INFO Daemon Daemon cloud-init is enabled: False Feb 9 19:36:55.887511 waagent[1434]: 2024-02-09T19:36:55.887463Z INFO Daemon Daemon Copying ovf-env.xml Feb 9 19:36:55.909106 waagent[1434]: 2024-02-09T19:36:55.907982Z INFO Daemon Daemon Successfully mounted dvd Feb 9 19:36:55.976276 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Feb 9 19:36:56.004025 waagent[1434]: 2024-02-09T19:36:56.003850Z INFO Daemon Daemon Detect protocol endpoint Feb 9 19:36:56.019378 waagent[1434]: 2024-02-09T19:36:56.004510Z INFO Daemon Daemon Clean protocol and wireserver endpoint Feb 9 19:36:56.019378 waagent[1434]: 2024-02-09T19:36:56.005638Z INFO Daemon Daemon WireServer endpoint is not found. 
Rerun dhcp handler Feb 9 19:36:56.019378 waagent[1434]: 2024-02-09T19:36:56.006429Z INFO Daemon Daemon Test for route to 168.63.129.16 Feb 9 19:36:56.019378 waagent[1434]: 2024-02-09T19:36:56.007617Z INFO Daemon Daemon Route to 168.63.129.16 exists Feb 9 19:36:56.019378 waagent[1434]: 2024-02-09T19:36:56.008347Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Feb 9 19:36:56.076476 waagent[1434]: 2024-02-09T19:36:56.076317Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Feb 9 19:36:56.080347 waagent[1434]: 2024-02-09T19:36:56.080293Z INFO Daemon Daemon Wire protocol version:2012-11-30 Feb 9 19:36:56.086511 waagent[1434]: 2024-02-09T19:36:56.086426Z INFO Daemon Daemon Server preferred version:2015-04-05 Feb 9 19:36:56.460426 waagent[1434]: 2024-02-09T19:36:56.460202Z INFO Daemon Daemon Initializing goal state during protocol detection Feb 9 19:36:56.472589 waagent[1434]: 2024-02-09T19:36:56.472487Z INFO Daemon Daemon Forcing an update of the goal state.. Feb 9 19:36:56.476145 waagent[1434]: 2024-02-09T19:36:56.476066Z INFO Daemon Daemon Fetching goal state [incarnation 1] Feb 9 19:36:56.557135 waagent[1434]: 2024-02-09T19:36:56.556989Z INFO Daemon Daemon Found private key matching thumbprint 8CF2FF359DE4338B08713FA0206C874E66CE1536 Feb 9 19:36:56.568346 waagent[1434]: 2024-02-09T19:36:56.557601Z INFO Daemon Daemon Certificate with thumbprint 1BCF02D96DB45F53111815FF299128C4660DA5BB has no matching private key. Feb 9 19:36:56.568346 waagent[1434]: 2024-02-09T19:36:56.558905Z INFO Daemon Daemon Fetch goal state completed Feb 9 19:36:56.583591 waagent[1434]: 2024-02-09T19:36:56.583501Z INFO Daemon Daemon Fetched new vmSettings [correlation ID: 2616b4c9-93d9-4f6b-8118-4851abd3dd2b New eTag: 17668696400325882315] Feb 9 19:36:56.591082 waagent[1434]: 2024-02-09T19:36:56.584441Z INFO Daemon Daemon Status Blob type 'None' is not valid, assuming BlockBlob Feb 9 19:36:56.596502 waagent[1434]: 2024-02-09T19:36:56.596435Z INFO Daemon Daemon Starting provisioning Feb 9 19:36:56.603597 waagent[1434]: 2024-02-09T19:36:56.596805Z INFO Daemon Daemon Handle ovf-env.xml. Feb 9 19:36:56.603597 waagent[1434]: 2024-02-09T19:36:56.597798Z INFO Daemon Daemon Set hostname [ci-3510.3.2-a-4c52a92a5f] Feb 9 19:36:56.615738 waagent[1434]: 2024-02-09T19:36:56.615611Z INFO Daemon Daemon Publish hostname [ci-3510.3.2-a-4c52a92a5f] Feb 9 19:36:56.623687 waagent[1434]: 2024-02-09T19:36:56.616345Z INFO Daemon Daemon Examine /proc/net/route for primary interface Feb 9 19:36:56.623687 waagent[1434]: 2024-02-09T19:36:56.617359Z INFO Daemon Daemon Primary interface is [eth0] Feb 9 19:36:56.631069 systemd[1]: systemd-networkd-wait-online.service: Deactivated successfully. Feb 9 19:36:56.631332 systemd[1]: Stopped systemd-networkd-wait-online.service. Feb 9 19:36:56.631412 systemd[1]: Stopping systemd-networkd-wait-online.service... Feb 9 19:36:56.631820 systemd[1]: Stopping systemd-networkd.service... Feb 9 19:36:56.636601 systemd-networkd[1193]: eth0: DHCPv6 lease lost Feb 9 19:36:56.638135 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 9 19:36:56.638349 systemd[1]: Stopped systemd-networkd.service. Feb 9 19:36:56.641046 systemd[1]: Starting systemd-networkd.service... 
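The endpoint detection above boils down to checking that 168.63.129.16 (the Azure WireServer) answers over plain HTTP. A hedged stdlib sketch of that probe; the ?comp=versions query is an assumption based on the "Fabric preferred wire protocol version" exchange in the log:

import urllib.request

WIRESERVER = "http://168.63.129.16/?comp=versions"

try:
    with urllib.request.urlopen(WIRESERVER, timeout=5) as resp:
        print("WireServer reachable, HTTP", resp.status)
except OSError as exc:
    print("WireServer unreachable:", exc)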
Feb 9 19:36:56.672156 systemd-networkd[1485]: enP44527s1: Link UP Feb 9 19:36:56.672168 systemd-networkd[1485]: enP44527s1: Gained carrier Feb 9 19:36:56.673570 systemd-networkd[1485]: eth0: Link UP Feb 9 19:36:56.673658 systemd-networkd[1485]: eth0: Gained carrier Feb 9 19:36:56.674098 systemd-networkd[1485]: lo: Link UP Feb 9 19:36:56.674107 systemd-networkd[1485]: lo: Gained carrier Feb 9 19:36:56.674427 systemd-networkd[1485]: eth0: Gained IPv6LL Feb 9 19:36:56.674963 systemd-networkd[1485]: Enumeration completed Feb 9 19:36:56.675105 systemd[1]: Started systemd-networkd.service. Feb 9 19:36:56.677856 systemd[1]: Starting systemd-networkd-wait-online.service... Feb 9 19:36:56.685313 waagent[1434]: 2024-02-09T19:36:56.678460Z INFO Daemon Daemon Create user account if not exists Feb 9 19:36:56.685313 waagent[1434]: 2024-02-09T19:36:56.682389Z INFO Daemon Daemon User core already exists, skip useradd Feb 9 19:36:56.686067 waagent[1434]: 2024-02-09T19:36:56.685974Z INFO Daemon Daemon Configure sudoer Feb 9 19:36:56.686566 systemd-networkd[1485]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 9 19:36:56.690031 waagent[1434]: 2024-02-09T19:36:56.689964Z INFO Daemon Daemon Configure sshd Feb 9 19:36:56.692284 waagent[1434]: 2024-02-09T19:36:56.692223Z INFO Daemon Daemon Deploy ssh public key. Feb 9 19:36:56.715679 systemd-networkd[1485]: eth0: DHCPv4 address 10.200.8.13/24, gateway 10.200.8.1 acquired from 168.63.129.16 Feb 9 19:36:56.719578 systemd[1]: Finished systemd-networkd-wait-online.service. Feb 9 19:36:56.721710 waagent[1434]: 2024-02-09T19:36:56.721582Z INFO Daemon Daemon Decode custom data Feb 9 19:36:56.726052 waagent[1434]: 2024-02-09T19:36:56.722268Z INFO Daemon Daemon Save custom data Feb 9 19:36:58.023970 waagent[1434]: 2024-02-09T19:36:58.023870Z INFO Daemon Daemon Provisioning complete Feb 9 19:36:58.039229 waagent[1434]: 2024-02-09T19:36:58.039145Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Feb 9 19:36:58.042413 waagent[1434]: 2024-02-09T19:36:58.042341Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Feb 9 19:36:58.049539 waagent[1434]: 2024-02-09T19:36:58.049461Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.6.0.2 is the most current agent Feb 9 19:36:58.323477 waagent[1494]: 2024-02-09T19:36:58.323274Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 is running as the goal state agent Feb 9 19:36:58.324266 waagent[1494]: 2024-02-09T19:36:58.324190Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 9 19:36:58.324415 waagent[1494]: 2024-02-09T19:36:58.324360Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 9 19:36:58.336584 waagent[1494]: 2024-02-09T19:36:58.336471Z INFO ExtHandler ExtHandler Forcing an update of the goal state.. Feb 9 19:36:58.336794 waagent[1494]: 2024-02-09T19:36:58.336729Z INFO ExtHandler ExtHandler Fetching goal state [incarnation 1] Feb 9 19:36:58.402408 waagent[1494]: 2024-02-09T19:36:58.402252Z INFO ExtHandler ExtHandler Found private key matching thumbprint 8CF2FF359DE4338B08713FA0206C874E66CE1536 Feb 9 19:36:58.402719 waagent[1494]: 2024-02-09T19:36:58.402643Z INFO ExtHandler ExtHandler Certificate with thumbprint 1BCF02D96DB45F53111815FF299128C4660DA5BB has no matching private key. 
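The "Decode custom data" step above refers to the base64-encoded CustomData that Azure hands the VM inside ovf-env.xml. An illustrative sketch only: the file path is an assumption, and since the XML element carries a namespace the tag is matched by suffix:

import base64
import xml.etree.ElementTree as ET

root = ET.parse("/var/lib/waagent/ovf-env.xml").getroot()  # assumed location
for elem in root.iter():
    if elem.tag.endswith("CustomData") and elem.text:
        print(base64.b64decode(elem.text).decode("utf-8", "replace"))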
Feb 9 19:36:58.402985 waagent[1494]: 2024-02-09T19:36:58.402933Z INFO ExtHandler ExtHandler Fetch goal state completed Feb 9 19:36:58.418239 waagent[1494]: 2024-02-09T19:36:58.418159Z INFO ExtHandler ExtHandler Fetched new vmSettings [correlation ID: 0465778d-6610-4d15-9b2c-1adeccfc3769 New eTag: 17668696400325882315] Feb 9 19:36:58.419120 waagent[1494]: 2024-02-09T19:36:58.419040Z INFO ExtHandler ExtHandler Status Blob type 'None' is not valid, assuming BlockBlob Feb 9 19:36:58.483952 waagent[1494]: 2024-02-09T19:36:58.483789Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.2; OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Feb 9 19:36:58.503568 waagent[1494]: 2024-02-09T19:36:58.503438Z INFO ExtHandler ExtHandler WALinuxAgent-2.6.0.2 running as process 1494 Feb 9 19:36:58.507215 waagent[1494]: 2024-02-09T19:36:58.507132Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '3510.3.2', '', 'Flatcar Container Linux by Kinvolk'] Feb 9 19:36:58.508530 waagent[1494]: 2024-02-09T19:36:58.508461Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Feb 9 19:36:58.634172 waagent[1494]: 2024-02-09T19:36:58.634017Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Feb 9 19:36:58.634701 waagent[1494]: 2024-02-09T19:36:58.634544Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Feb 9 19:36:58.643255 waagent[1494]: 2024-02-09T19:36:58.643188Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Feb 9 19:36:58.643846 waagent[1494]: 2024-02-09T19:36:58.643776Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service' Feb 9 19:36:58.645013 waagent[1494]: 2024-02-09T19:36:58.644945Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [False], cgroups enabled [False], python supported: [True] Feb 9 19:36:58.646385 waagent[1494]: 2024-02-09T19:36:58.646323Z INFO ExtHandler ExtHandler Starting env monitor service. Feb 9 19:36:58.647065 waagent[1494]: 2024-02-09T19:36:58.647003Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Feb 9 19:36:58.647465 waagent[1494]: 2024-02-09T19:36:58.647408Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 9 19:36:58.647846 waagent[1494]: 2024-02-09T19:36:58.647788Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 9 19:36:58.648217 waagent[1494]: 2024-02-09T19:36:58.648163Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 9 19:36:58.648451 waagent[1494]: 2024-02-09T19:36:58.648395Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Feb 9 19:36:58.648844 waagent[1494]: 2024-02-09T19:36:58.648789Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Feb 9 19:36:58.649428 waagent[1494]: 2024-02-09T19:36:58.649374Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 9 19:36:58.649957 waagent[1494]: 2024-02-09T19:36:58.649897Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. 
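The read-only-filesystem ERROR above is expected on Flatcar: /lib resolves into the verity-protected, read-only /usr image (see the mount.usrflags=ro boot arguments), so the agent cannot drop a unit file there; /etc/systemd/system is the writable location for local units. A two-line check of that split:

import os

for unit_dir in ("/lib/systemd/system", "/etc/systemd/system"):
    state = "writable" if os.access(unit_dir, os.W_OK) else "read-only"
    print(unit_dir, "->", state)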
Feb 9 19:36:58.650743 waagent[1494]: 2024-02-09T19:36:58.650683Z INFO EnvHandler ExtHandler Configure routes Feb 9 19:36:58.651091 waagent[1494]: 2024-02-09T19:36:58.651033Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Feb 9 19:36:58.651391 waagent[1494]: 2024-02-09T19:36:58.651337Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Feb 9 19:36:58.651684 waagent[1494]: 2024-02-09T19:36:58.651633Z INFO EnvHandler ExtHandler Gateway:None Feb 9 19:36:58.652140 waagent[1494]: 2024-02-09T19:36:58.652090Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Feb 9 19:36:58.652349 waagent[1494]: 2024-02-09T19:36:58.652298Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Feb 9 19:36:58.652349 waagent[1494]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Feb 9 19:36:58.652349 waagent[1494]: eth0 00000000 0108C80A 0003 0 0 1024 00000000 0 0 0 Feb 9 19:36:58.652349 waagent[1494]: eth0 0008C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Feb 9 19:36:58.652349 waagent[1494]: eth0 0108C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Feb 9 19:36:58.652349 waagent[1494]: eth0 10813FA8 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Feb 9 19:36:58.652349 waagent[1494]: eth0 FEA9FEA9 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Feb 9 19:36:58.652670 waagent[1494]: 2024-02-09T19:36:58.652447Z INFO EnvHandler ExtHandler Routes:None Feb 9 19:36:58.665246 waagent[1494]: 2024-02-09T19:36:58.665177Z INFO ExtHandler ExtHandler Checking for agent updates (family: Prod) Feb 9 19:36:58.666724 waagent[1494]: 2024-02-09T19:36:58.666666Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required Feb 9 19:36:58.668658 waagent[1494]: 2024-02-09T19:36:58.668601Z INFO ExtHandler ExtHandler [PERIODIC] Request failed using the direct channel. Error: 'NoneType' object has no attribute 'getheaders' Feb 9 19:36:58.691999 waagent[1494]: 2024-02-09T19:36:58.691862Z ERROR EnvHandler ExtHandler Failed to get the PID of the DHCP client: invalid literal for int() with base 10: 'MainPID=1485' Feb 9 19:36:58.711532 waagent[1494]: 2024-02-09T19:36:58.711437Z INFO ExtHandler ExtHandler Default channel changed to HostGA channel. 
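The routing table dump above comes straight from /proc/net/route, where Destination, Gateway and Mask are little-endian 32-bit hex. Decoding the values shows the gateway 10.200.8.1, the WireServer route to 168.63.129.16 and the IMDS route to 169.254.169.254:

import socket
import struct

def hex_to_ip(word: str) -> str:
    # /proc/net/route stores addresses as little-endian 32-bit hex
    return socket.inet_ntoa(struct.pack("<I", int(word, 16)))

for word in ("00000000", "0108C80A", "0008C80A", "10813FA8", "FEA9FEA9"):
    print(word, "->", hex_to_ip(word))
# 00000000 -> 0.0.0.0 (default), 0108C80A -> 10.200.8.1, 0008C80A -> 10.200.8.0
# 10813FA8 -> 168.63.129.16,     FEA9FEA9 -> 169.254.169.254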
Feb 9 19:36:58.760372 waagent[1494]: 2024-02-09T19:36:58.760234Z INFO MonitorHandler ExtHandler Network interfaces: Feb 9 19:36:58.760372 waagent[1494]: Executing ['ip', '-a', '-o', 'link']: Feb 9 19:36:58.760372 waagent[1494]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Feb 9 19:36:58.760372 waagent[1494]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:b9:11:fe brd ff:ff:ff:ff:ff:ff Feb 9 19:36:58.760372 waagent[1494]: 3: enP44527s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:b9:11:fe brd ff:ff:ff:ff:ff:ff\ altname enP44527p0s2 Feb 9 19:36:58.760372 waagent[1494]: Executing ['ip', '-4', '-a', '-o', 'address']: Feb 9 19:36:58.760372 waagent[1494]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Feb 9 19:36:58.760372 waagent[1494]: 2: eth0 inet 10.200.8.13/24 metric 1024 brd 10.200.8.255 scope global eth0\ valid_lft forever preferred_lft forever Feb 9 19:36:58.760372 waagent[1494]: Executing ['ip', '-6', '-a', '-o', 'address']: Feb 9 19:36:58.760372 waagent[1494]: 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever Feb 9 19:36:58.760372 waagent[1494]: 2: eth0 inet6 fe80::20d:3aff:feb9:11fe/64 scope link \ valid_lft forever preferred_lft forever Feb 9 19:36:59.002630 waagent[1494]: 2024-02-09T19:36:59.002533Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 discovered update WALinuxAgent-2.9.1.1 -- exiting Feb 9 19:36:59.052880 waagent[1434]: 2024-02-09T19:36:59.052735Z INFO Daemon Daemon Agent WALinuxAgent-2.6.0.2 launched with command '/usr/share/oem/python/bin/python -u /usr/share/oem/bin/waagent -run-exthandlers' is successfully running Feb 9 19:36:59.058038 waagent[1434]: 2024-02-09T19:36:59.057968Z INFO Daemon Daemon Determined Agent WALinuxAgent-2.9.1.1 to be the latest agent Feb 9 19:37:00.091419 waagent[1532]: 2024-02-09T19:37:00.091288Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1) Feb 9 19:37:00.092170 waagent[1532]: 2024-02-09T19:37:00.092099Z INFO ExtHandler ExtHandler OS: flatcar 3510.3.2 Feb 9 19:37:00.092317 waagent[1532]: 2024-02-09T19:37:00.092262Z INFO ExtHandler ExtHandler Python: 3.9.16 Feb 9 19:37:00.102167 waagent[1532]: 2024-02-09T19:37:00.102059Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.2; OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Feb 9 19:37:00.102564 waagent[1532]: 2024-02-09T19:37:00.102498Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 9 19:37:00.102753 waagent[1532]: 2024-02-09T19:37:00.102701Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 9 19:37:00.114454 waagent[1532]: 2024-02-09T19:37:00.114376Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Feb 9 19:37:00.122889 waagent[1532]: 2024-02-09T19:37:00.122827Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.143 Feb 9 19:37:00.123863 waagent[1532]: 2024-02-09T19:37:00.123799Z INFO ExtHandler Feb 9 19:37:00.124012 waagent[1532]: 2024-02-09T19:37:00.123960Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: c5b36299-9a90-4e00-a903-f1f240332b2a eTag: 17668696400325882315 source: Fabric] Feb 9 19:37:00.124728 waagent[1532]: 2024-02-09T19:37:00.124669Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
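The eth0 inet6 address in the interface dump is not random: fe80::20d:3aff:feb9:11fe is the EUI-64 link-local derived from the MAC 00:0d:3a:b9:11:fe (flip the universal/local bit of the first octet, insert ff:fe in the middle). A small function reproducing it:

def mac_to_link_local(mac: str) -> str:
    octets = bytearray(int(part, 16) for part in mac.split(":"))
    octets[0] ^= 0x02                          # flip the universal/local bit
    eui64 = bytes(octets[:3]) + b"\xff\xfe" + bytes(octets[3:])
    groups = [f"{(eui64[i] << 8) | eui64[i + 1]:x}" for i in range(0, 8, 2)]
    return "fe80::" + ":".join(groups)

print(mac_to_link_local("00:0d:3a:b9:11:fe"))  # fe80::20d:3aff:feb9:11fe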
Feb 9 19:37:00.125811 waagent[1532]: 2024-02-09T19:37:00.125749Z INFO ExtHandler Feb 9 19:37:00.125948 waagent[1532]: 2024-02-09T19:37:00.125895Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Feb 9 19:37:00.132556 waagent[1532]: 2024-02-09T19:37:00.132493Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Feb 9 19:37:00.132995 waagent[1532]: 2024-02-09T19:37:00.132946Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required Feb 9 19:37:00.150996 waagent[1532]: 2024-02-09T19:37:00.150933Z INFO ExtHandler ExtHandler Default channel changed to HostGAPlugin channel. Feb 9 19:37:00.215303 waagent[1532]: 2024-02-09T19:37:00.215157Z INFO ExtHandler Downloaded certificate {'thumbprint': '8CF2FF359DE4338B08713FA0206C874E66CE1536', 'hasPrivateKey': True} Feb 9 19:37:00.216346 waagent[1532]: 2024-02-09T19:37:00.216274Z INFO ExtHandler Downloaded certificate {'thumbprint': '1BCF02D96DB45F53111815FF299128C4660DA5BB', 'hasPrivateKey': False} Feb 9 19:37:00.217329 waagent[1532]: 2024-02-09T19:37:00.217264Z INFO ExtHandler Fetch goal state completed Feb 9 19:37:00.238141 waagent[1532]: 2024-02-09T19:37:00.238040Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1532 Feb 9 19:37:00.241568 waagent[1532]: 2024-02-09T19:37:00.241471Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '3510.3.2', '', 'Flatcar Container Linux by Kinvolk'] Feb 9 19:37:00.243077 waagent[1532]: 2024-02-09T19:37:00.243011Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Feb 9 19:37:00.248681 waagent[1532]: 2024-02-09T19:37:00.248617Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Feb 9 19:37:00.249068 waagent[1532]: 2024-02-09T19:37:00.249009Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Feb 9 19:37:00.257455 waagent[1532]: 2024-02-09T19:37:00.257398Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Feb 9 19:37:00.257962 waagent[1532]: 2024-02-09T19:37:00.257901Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service' Feb 9 19:37:00.264209 waagent[1532]: 2024-02-09T19:37:00.264110Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Feb 9 19:37:00.268916 waagent[1532]: 2024-02-09T19:37:00.268856Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] Feb 9 19:37:00.270364 waagent[1532]: 2024-02-09T19:37:00.270301Z INFO ExtHandler ExtHandler Starting env monitor service. Feb 9 19:37:00.270802 waagent[1532]: 2024-02-09T19:37:00.270743Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 9 19:37:00.271142 waagent[1532]: 2024-02-09T19:37:00.271086Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 9 19:37:00.271693 waagent[1532]: 2024-02-09T19:37:00.271632Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. 
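The thumbprints in these goal-state messages (8CF2FF35..., 1BCF02D9...) are the uppercase hex SHA-1 of each certificate's DER encoding. A sketch computing one; the PEM file path is an assumption for illustration:

import hashlib
import ssl

pem = open("/var/lib/waagent/example-cert.pem").read()  # assumed path
der = ssl.PEM_cert_to_DER_cert(pem)
print(hashlib.sha1(der).hexdigest().upper())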
Feb 9 19:37:00.271981 waagent[1532]: 2024-02-09T19:37:00.271923Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Feb 9 19:37:00.271981 waagent[1532]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Feb 9 19:37:00.271981 waagent[1532]: eth0 00000000 0108C80A 0003 0 0 1024 00000000 0 0 0 Feb 9 19:37:00.271981 waagent[1532]: eth0 0008C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Feb 9 19:37:00.271981 waagent[1532]: eth0 0108C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Feb 9 19:37:00.271981 waagent[1532]: eth0 10813FA8 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Feb 9 19:37:00.271981 waagent[1532]: eth0 FEA9FEA9 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Feb 9 19:37:00.274311 waagent[1532]: 2024-02-09T19:37:00.274217Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Feb 9 19:37:00.275204 waagent[1532]: 2024-02-09T19:37:00.275132Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Feb 9 19:37:00.275369 waagent[1532]: 2024-02-09T19:37:00.275291Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 9 19:37:00.275456 waagent[1532]: 2024-02-09T19:37:00.275399Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Feb 9 19:37:00.276961 waagent[1532]: 2024-02-09T19:37:00.276902Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Feb 9 19:37:00.277326 waagent[1532]: 2024-02-09T19:37:00.277270Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Feb 9 19:37:00.277698 waagent[1532]: 2024-02-09T19:37:00.277646Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Feb 9 19:37:00.282089 waagent[1532]: 2024-02-09T19:37:00.282030Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 9 19:37:00.283229 waagent[1532]: 2024-02-09T19:37:00.283166Z INFO EnvHandler ExtHandler Configure routes Feb 9 19:37:00.285863 waagent[1532]: 2024-02-09T19:37:00.285661Z INFO EnvHandler ExtHandler Gateway:None Feb 9 19:37:00.286323 waagent[1532]: 2024-02-09T19:37:00.286257Z INFO EnvHandler ExtHandler Routes:None Feb 9 19:37:00.287767 waagent[1532]: 2024-02-09T19:37:00.287709Z INFO MonitorHandler ExtHandler Network interfaces: Feb 9 19:37:00.287767 waagent[1532]: Executing ['ip', '-a', '-o', 'link']: Feb 9 19:37:00.287767 waagent[1532]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Feb 9 19:37:00.287767 waagent[1532]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:b9:11:fe brd ff:ff:ff:ff:ff:ff Feb 9 19:37:00.287767 waagent[1532]: 3: enP44527s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:b9:11:fe brd ff:ff:ff:ff:ff:ff\ altname enP44527p0s2 Feb 9 19:37:00.287767 waagent[1532]: Executing ['ip', '-4', '-a', '-o', 'address']: Feb 9 19:37:00.287767 waagent[1532]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Feb 9 19:37:00.287767 waagent[1532]: 2: eth0 inet 10.200.8.13/24 metric 1024 brd 10.200.8.255 scope global eth0\ valid_lft forever preferred_lft forever Feb 9 19:37:00.287767 waagent[1532]: Executing ['ip', '-6', '-a', '-o', 'address']: Feb 9 19:37:00.287767 waagent[1532]: 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever Feb 9 19:37:00.287767 waagent[1532]: 2: eth0 inet6 fe80::20d:3aff:feb9:11fe/64 scope 
link \ valid_lft forever preferred_lft forever Feb 9 19:37:00.307412 waagent[1532]: 2024-02-09T19:37:00.307287Z INFO ExtHandler ExtHandler No requested version specified, checking for all versions for agent update (family: Prod) Feb 9 19:37:00.308502 waagent[1532]: 2024-02-09T19:37:00.308431Z INFO ExtHandler ExtHandler Downloading manifest Feb 9 19:37:00.390636 waagent[1532]: 2024-02-09T19:37:00.390565Z INFO ExtHandler ExtHandler Feb 9 19:37:00.393642 waagent[1532]: 2024-02-09T19:37:00.393345Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: d388e0f6-f2be-43b0-8ba1-b3cdd9ed235c correlation 5fdb89b5-e094-445e-ad33-c9338a32198b created: 2024-02-09T19:35:39.357018Z] Feb 9 19:37:00.395270 waagent[1532]: 2024-02-09T19:37:00.395188Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Feb 9 19:37:00.397386 waagent[1532]: 2024-02-09T19:37:00.397324Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 6 ms] Feb 9 19:37:00.418724 waagent[1532]: 2024-02-09T19:37:00.418607Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules. Current Firewall rules: Feb 9 19:37:00.418724 waagent[1532]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Feb 9 19:37:00.418724 waagent[1532]: pkts bytes target prot opt in out source destination Feb 9 19:37:00.418724 waagent[1532]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Feb 9 19:37:00.418724 waagent[1532]: pkts bytes target prot opt in out source destination Feb 9 19:37:00.418724 waagent[1532]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Feb 9 19:37:00.418724 waagent[1532]: pkts bytes target prot opt in out source destination Feb 9 19:37:00.418724 waagent[1532]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Feb 9 19:37:00.418724 waagent[1532]: 9 3234 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Feb 9 19:37:00.418724 waagent[1532]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Feb 9 19:37:00.426282 waagent[1532]: 2024-02-09T19:37:00.426165Z INFO EnvHandler ExtHandler Current Firewall rules: Feb 9 19:37:00.426282 waagent[1532]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Feb 9 19:37:00.426282 waagent[1532]: pkts bytes target prot opt in out source destination Feb 9 19:37:00.426282 waagent[1532]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Feb 9 19:37:00.426282 waagent[1532]: pkts bytes target prot opt in out source destination Feb 9 19:37:00.426282 waagent[1532]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Feb 9 19:37:00.426282 waagent[1532]: pkts bytes target prot opt in out source destination Feb 9 19:37:00.426282 waagent[1532]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Feb 9 19:37:00.426282 waagent[1532]: 9 3234 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Feb 9 19:37:00.426282 waagent[1532]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Feb 9 19:37:00.426901 waagent[1532]: 2024-02-09T19:37:00.426842Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Feb 9 19:37:00.429856 waagent[1532]: 2024-02-09T19:37:00.429791Z INFO ExtHandler ExtHandler Looking for existing remote access users. 
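The three OUTPUT rules listed above are the agent's standard WireServer guard: allow DNS to 168.63.129.16, allow root-owned traffic there, and drop anything else trying to open a new connection. An illustrative reconstruction with iptables (the agent programs these internally; chain and table details here are assumptions):

import subprocess

WIRESERVER = "168.63.129.16"
rules = [
    ["-A", "OUTPUT", "-d", WIRESERVER, "-p", "tcp", "--dport", "53",
     "-j", "ACCEPT"],
    ["-A", "OUTPUT", "-d", WIRESERVER, "-p", "tcp",
     "-m", "owner", "--uid-owner", "0", "-j", "ACCEPT"],
    ["-A", "OUTPUT", "-d", WIRESERVER, "-p", "tcp",
     "-m", "conntrack", "--ctstate", "INVALID,NEW", "-j", "DROP"],
]
for rule in rules:
    subprocess.run(["iptables", "-w"] + rule, check=True)  # -w: wait for xtables lock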
Feb 9 19:37:00.439464 waagent[1532]: 2024-02-09T19:37:00.439379Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 589AE83F-FDB7-4DEB-A585-B87FCE364F59;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 1] Feb 9 19:37:28.606974 kernel: hv_balloon: Max. dynamic memory size: 8192 MB Feb 9 19:37:33.960881 update_engine[1328]: I0209 19:37:33.960783 1328 update_attempter.cc:509] Updating boot flags... Feb 9 19:37:46.890312 systemd[1]: Created slice system-sshd.slice. Feb 9 19:37:46.892341 systemd[1]: Started sshd@0-10.200.8.13:22-10.200.12.6:59914.service. Feb 9 19:37:47.683640 sshd[1646]: Accepted publickey for core from 10.200.12.6 port 59914 ssh2: RSA SHA256:DU+Yi2nD7nw8jYgdAj8DCdA8ysRsrSuDu1TpdDncLY8 Feb 9 19:37:47.685455 sshd[1646]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:37:47.691168 systemd[1]: Started session-3.scope. Feb 9 19:37:47.691797 systemd-logind[1327]: New session 3 of user core. Feb 9 19:37:48.231527 systemd[1]: Started sshd@1-10.200.8.13:22-10.200.12.6:54594.service. Feb 9 19:37:48.849080 sshd[1651]: Accepted publickey for core from 10.200.12.6 port 54594 ssh2: RSA SHA256:DU+Yi2nD7nw8jYgdAj8DCdA8ysRsrSuDu1TpdDncLY8 Feb 9 19:37:48.850946 sshd[1651]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:37:48.856662 systemd-logind[1327]: New session 4 of user core. Feb 9 19:37:48.856913 systemd[1]: Started session-4.scope. Feb 9 19:37:49.297852 sshd[1651]: pam_unix(sshd:session): session closed for user core Feb 9 19:37:49.301129 systemd[1]: sshd@1-10.200.8.13:22-10.200.12.6:54594.service: Deactivated successfully. Feb 9 19:37:49.302331 systemd[1]: session-4.scope: Deactivated successfully. Feb 9 19:37:49.302516 systemd-logind[1327]: Session 4 logged out. Waiting for processes to exit. Feb 9 19:37:49.303426 systemd-logind[1327]: Removed session 4. Feb 9 19:37:49.402987 systemd[1]: Started sshd@2-10.200.8.13:22-10.200.12.6:54598.service. Feb 9 19:37:50.019581 sshd[1657]: Accepted publickey for core from 10.200.12.6 port 54598 ssh2: RSA SHA256:DU+Yi2nD7nw8jYgdAj8DCdA8ysRsrSuDu1TpdDncLY8 Feb 9 19:37:50.021338 sshd[1657]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:37:50.027345 systemd[1]: Started session-5.scope. Feb 9 19:37:50.027953 systemd-logind[1327]: New session 5 of user core. Feb 9 19:37:50.456142 sshd[1657]: pam_unix(sshd:session): session closed for user core Feb 9 19:37:50.459655 systemd[1]: sshd@2-10.200.8.13:22-10.200.12.6:54598.service: Deactivated successfully. Feb 9 19:37:50.460706 systemd[1]: session-5.scope: Deactivated successfully. Feb 9 19:37:50.461392 systemd-logind[1327]: Session 5 logged out. Waiting for processes to exit. Feb 9 19:37:50.462164 systemd-logind[1327]: Removed session 5. Feb 9 19:37:50.561428 systemd[1]: Started sshd@3-10.200.8.13:22-10.200.12.6:54608.service. Feb 9 19:37:51.178954 sshd[1663]: Accepted publickey for core from 10.200.12.6 port 54608 ssh2: RSA SHA256:DU+Yi2nD7nw8jYgdAj8DCdA8ysRsrSuDu1TpdDncLY8 Feb 9 19:37:51.180742 sshd[1663]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:37:51.186684 systemd[1]: Started session-6.scope. Feb 9 19:37:51.187141 systemd-logind[1327]: New session 6 of user core. Feb 9 19:37:51.619447 sshd[1663]: pam_unix(sshd:session): session closed for user core Feb 9 19:37:51.623230 systemd[1]: sshd@3-10.200.8.13:22-10.200.12.6:54608.service: Deactivated successfully. 
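The "SHA256:DU+Yi2nD..." strings in the sshd lines above are OpenSSH key fingerprints: base64 (padding stripped) of the SHA-256 digest of the raw public-key blob. Recomputing one from the authorized_keys file mentioned earlier in the log:

import base64
import hashlib

def ssh_fingerprint(authorized_keys_line: str) -> str:
    key_blob = base64.b64decode(authorized_keys_line.split()[1])
    digest = hashlib.sha256(key_blob).digest()
    return "SHA256:" + base64.b64encode(digest).decode().rstrip("=")

with open("/home/core/.ssh/authorized_keys") as f:
    print(ssh_fingerprint(f.readline()))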
Feb 9 19:37:51.624299 systemd[1]: session-6.scope: Deactivated successfully. Feb 9 19:37:51.625126 systemd-logind[1327]: Session 6 logged out. Waiting for processes to exit. Feb 9 19:37:51.626043 systemd-logind[1327]: Removed session 6. Feb 9 19:37:51.722207 systemd[1]: Started sshd@4-10.200.8.13:22-10.200.12.6:54618.service. Feb 9 19:37:52.334315 sshd[1669]: Accepted publickey for core from 10.200.12.6 port 54618 ssh2: RSA SHA256:DU+Yi2nD7nw8jYgdAj8DCdA8ysRsrSuDu1TpdDncLY8 Feb 9 19:37:52.336064 sshd[1669]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:37:52.341666 systemd[1]: Started session-7.scope. Feb 9 19:37:52.342285 systemd-logind[1327]: New session 7 of user core. Feb 9 19:37:52.827250 sudo[1672]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 9 19:37:52.827618 sudo[1672]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Feb 9 19:37:54.002036 systemd[1]: Starting docker.service... Feb 9 19:37:54.054196 env[1687]: time="2024-02-09T19:37:54.054125847Z" level=info msg="Starting up" Feb 9 19:37:54.055497 env[1687]: time="2024-02-09T19:37:54.055457448Z" level=info msg="parsed scheme: \"unix\"" module=grpc Feb 9 19:37:54.055497 env[1687]: time="2024-02-09T19:37:54.055481648Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Feb 9 19:37:54.056263 env[1687]: time="2024-02-09T19:37:54.055504048Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Feb 9 19:37:54.056263 env[1687]: time="2024-02-09T19:37:54.055517648Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Feb 9 19:37:54.057677 env[1687]: time="2024-02-09T19:37:54.057648251Z" level=info msg="parsed scheme: \"unix\"" module=grpc Feb 9 19:37:54.057677 env[1687]: time="2024-02-09T19:37:54.057666751Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Feb 9 19:37:54.057828 env[1687]: time="2024-02-09T19:37:54.057685951Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Feb 9 19:37:54.057828 env[1687]: time="2024-02-09T19:37:54.057697751Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Feb 9 19:37:54.178113 env[1687]: time="2024-02-09T19:37:54.178064585Z" level=info msg="Loading containers: start." Feb 9 19:37:54.360654 kernel: Initializing XFRM netlink socket Feb 9 19:37:54.402367 env[1687]: time="2024-02-09T19:37:54.402305736Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Feb 9 19:37:54.511580 systemd-networkd[1485]: docker0: Link UP Feb 9 19:37:54.530238 env[1687]: time="2024-02-09T19:37:54.530188679Z" level=info msg="Loading containers: done." Feb 9 19:37:54.541799 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck569973662-merged.mount: Deactivated successfully. 
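Once the docker daemon finishes starting (the "Daemon has completed initialization" / "API listen" entries that follow), it answers a plain HTTP ping over its unix socket. A stdlib-only probe, assuming the default /run/docker.sock path:

import socket

s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
s.settimeout(2.0)
s.connect("/run/docker.sock")
s.sendall(b"GET /_ping HTTP/1.0\r\nHost: docker\r\n\r\n")
reply = s.recv(4096).decode(errors="replace")
print(reply.splitlines()[0])  # expect a 200 status line when the daemon is up
s.close()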
Feb 9 19:37:54.554311 env[1687]: time="2024-02-09T19:37:54.554260306Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 9 19:37:54.554522 env[1687]: time="2024-02-09T19:37:54.554492407Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Feb 9 19:37:54.554654 env[1687]: time="2024-02-09T19:37:54.554632407Z" level=info msg="Daemon has completed initialization" Feb 9 19:37:54.583610 systemd[1]: Started docker.service. Feb 9 19:37:54.589788 env[1687]: time="2024-02-09T19:37:54.589727846Z" level=info msg="API listen on /run/docker.sock" Feb 9 19:37:54.609796 systemd[1]: Reloading. Feb 9 19:37:54.704362 /usr/lib/systemd/system-generators/torcx-generator[1816]: time="2024-02-09T19:37:54Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 19:37:54.704402 /usr/lib/systemd/system-generators/torcx-generator[1816]: time="2024-02-09T19:37:54Z" level=info msg="torcx already run" Feb 9 19:37:54.791136 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 19:37:54.791156 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 19:37:54.809380 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 19:37:54.895758 systemd[1]: Started kubelet.service. Feb 9 19:37:54.971361 kubelet[1879]: E0209 19:37:54.970897 1879 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 9 19:37:54.973195 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 9 19:37:54.973373 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 9 19:37:58.435001 env[1342]: time="2024-02-09T19:37:58.434488937Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.6\"" Feb 9 19:37:59.125202 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2604345455.mount: Deactivated successfully. 
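Every kubelet failure from here on is the same one: /var/lib/kubelet/config.yaml does not exist yet (kubeadm normally writes it during init/join, which has not run at this point in the boot). A hedged sketch of a minimal stand-in config; the field values are illustrative assumptions, not the cluster's real settings:

import pathlib

MINIMAL_CONFIG = """\
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  anonymous:
    enabled: false
cgroupDriver: systemd
"""

path = pathlib.Path("/var/lib/kubelet/config.yaml")
path.parent.mkdir(parents=True, exist_ok=True)
path.write_text(MINIMAL_CONFIG)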
Feb 9 19:38:01.180826 env[1342]: time="2024-02-09T19:38:01.180754311Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.28.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:38:01.186322 env[1342]: time="2024-02-09T19:38:01.186264730Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:70e88c5e3a8e409ff4604a5fdb1dacb736ea02ba0b7a3da635f294e953906f47,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:38:01.190346 env[1342]: time="2024-02-09T19:38:01.190297716Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.28.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:38:01.195085 env[1342]: time="2024-02-09T19:38:01.195043719Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:98a686df810b9f1de8e3b2ae869e79c51a36e7434d33c53f011852618aec0a68,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:38:01.195704 env[1342]: time="2024-02-09T19:38:01.195664132Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.6\" returns image reference \"sha256:70e88c5e3a8e409ff4604a5fdb1dacb736ea02ba0b7a3da635f294e953906f47\"" Feb 9 19:38:01.206402 env[1342]: time="2024-02-09T19:38:01.206356162Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.6\"" Feb 9 19:38:03.249171 env[1342]: time="2024-02-09T19:38:03.249095055Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.28.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:38:03.256797 env[1342]: time="2024-02-09T19:38:03.256745110Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:18dbd2df3bb54036300d2af8b20ef60d479173946ff089a4d16e258b27faa55c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:38:03.261953 env[1342]: time="2024-02-09T19:38:03.261907515Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.28.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:38:03.264843 env[1342]: time="2024-02-09T19:38:03.264791374Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:80bdcd72cfe26028bb2fed75732fc2f511c35fa8d1edc03deae11f3490713c9e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:38:03.266440 env[1342]: time="2024-02-09T19:38:03.266388107Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.6\" returns image reference \"sha256:18dbd2df3bb54036300d2af8b20ef60d479173946ff089a4d16e258b27faa55c\"" Feb 9 19:38:03.283155 env[1342]: time="2024-02-09T19:38:03.283109747Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.6\"" Feb 9 19:38:04.569557 env[1342]: time="2024-02-09T19:38:04.569481028Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.28.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:38:04.574278 env[1342]: time="2024-02-09T19:38:04.574229422Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7597ecaaf12074e2980eee086736dbd01e566dc266351560001aa47dbbb0e5fe,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:38:04.578040 env[1342]: 
time="2024-02-09T19:38:04.578004797Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.28.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:38:04.582155 env[1342]: time="2024-02-09T19:38:04.582123378Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:a89db556c34d652d403d909882dbd97336f2e935b1c726b2e2b2c0400186ac39,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:38:04.582776 env[1342]: time="2024-02-09T19:38:04.582744991Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.6\" returns image reference \"sha256:7597ecaaf12074e2980eee086736dbd01e566dc266351560001aa47dbbb0e5fe\"" Feb 9 19:38:04.596883 env[1342]: time="2024-02-09T19:38:04.596840070Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.6\"" Feb 9 19:38:05.141020 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 9 19:38:05.141322 systemd[1]: Stopped kubelet.service. Feb 9 19:38:05.143621 systemd[1]: Started kubelet.service. Feb 9 19:38:05.238522 kubelet[1915]: E0209 19:38:05.238463 1915 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 9 19:38:05.245070 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 9 19:38:05.245231 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 9 19:38:05.606328 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3982052955.mount: Deactivated successfully. Feb 9 19:38:06.206521 env[1342]: time="2024-02-09T19:38:06.206455386Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.28.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:38:06.225290 env[1342]: time="2024-02-09T19:38:06.225222638Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:342a759d88156b4f56ba522a1aed0e3d32d72542545346b40877f6583bebe05f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:38:06.231540 env[1342]: time="2024-02-09T19:38:06.231486955Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.28.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:38:06.237402 env[1342]: time="2024-02-09T19:38:06.237349765Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:3898a1671ae42be1cd3c2e777549bc7b5b306b8da3a224b747365f6679fb902a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:38:06.237777 env[1342]: time="2024-02-09T19:38:06.237740873Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.6\" returns image reference \"sha256:342a759d88156b4f56ba522a1aed0e3d32d72542545346b40877f6583bebe05f\"" Feb 9 19:38:06.248107 env[1342]: time="2024-02-09T19:38:06.248059366Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Feb 9 19:38:06.713622 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4170504286.mount: Deactivated successfully. 
Feb 9 19:38:06.740123 env[1342]: time="2024-02-09T19:38:06.740068886Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:38:06.749435 env[1342]: time="2024-02-09T19:38:06.749373660Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:38:06.753675 env[1342]: time="2024-02-09T19:38:06.753623340Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:38:06.760149 env[1342]: time="2024-02-09T19:38:06.760100461Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:38:06.760545 env[1342]: time="2024-02-09T19:38:06.760510969Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Feb 9 19:38:06.771264 env[1342]: time="2024-02-09T19:38:06.771215869Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.9-0\"" Feb 9 19:38:07.286184 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2932809602.mount: Deactivated successfully. Feb 9 19:38:11.909865 env[1342]: time="2024-02-09T19:38:11.909601897Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.9-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:38:11.919199 env[1342]: time="2024-02-09T19:38:11.919144554Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:38:11.926726 env[1342]: time="2024-02-09T19:38:11.926676277Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.9-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:38:11.931945 env[1342]: time="2024-02-09T19:38:11.931895062Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:38:11.932663 env[1342]: time="2024-02-09T19:38:11.932624274Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.9-0\" returns image reference \"sha256:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9\"" Feb 9 19:38:11.943104 env[1342]: time="2024-02-09T19:38:11.943066045Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\"" Feb 9 19:38:12.498294 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2308379314.mount: Deactivated successfully. 
Feb 9 19:38:13.128806 env[1342]: time="2024-02-09T19:38:13.128736704Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.10.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:38:13.134544 env[1342]: time="2024-02-09T19:38:13.134497093Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:38:13.138984 env[1342]: time="2024-02-09T19:38:13.138944462Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.10.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:38:13.142716 env[1342]: time="2024-02-09T19:38:13.142686820Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:38:13.143154 env[1342]: time="2024-02-09T19:38:13.143125127Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\" returns image reference \"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc\"" Feb 9 19:38:15.390936 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Feb 9 19:38:15.391215 systemd[1]: Stopped kubelet.service. Feb 9 19:38:15.394052 systemd[1]: Started kubelet.service. Feb 9 19:38:15.471666 kubelet[1994]: E0209 19:38:15.471598 1994 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 9 19:38:15.473651 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 9 19:38:15.473811 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 9 19:38:16.588240 systemd[1]: Stopped kubelet.service. Feb 9 19:38:16.604727 systemd[1]: Reloading. Feb 9 19:38:16.700194 /usr/lib/systemd/system-generators/torcx-generator[2025]: time="2024-02-09T19:38:16Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 19:38:16.700238 /usr/lib/systemd/system-generators/torcx-generator[2025]: time="2024-02-09T19:38:16Z" level=info msg="torcx already run" Feb 9 19:38:16.777331 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 19:38:16.777353 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 19:38:16.795543 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 19:38:16.887647 systemd[1]: Started kubelet.service. 
Feb 9 19:38:16.942784 kubelet[2087]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 19:38:16.943169 kubelet[2087]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 9 19:38:16.943169 kubelet[2087]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 19:38:16.943296 kubelet[2087]: I0209 19:38:16.943235 2087 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 9 19:38:17.316223 kubelet[2087]: I0209 19:38:17.316182 2087 server.go:467] "Kubelet version" kubeletVersion="v1.28.1" Feb 9 19:38:17.316223 kubelet[2087]: I0209 19:38:17.316216 2087 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 9 19:38:17.316508 kubelet[2087]: I0209 19:38:17.316487 2087 server.go:895] "Client rotation is on, will bootstrap in background" Feb 9 19:38:17.321696 kubelet[2087]: I0209 19:38:17.321665 2087 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 9 19:38:17.322091 kubelet[2087]: E0209 19:38:17.322070 2087 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.8.13:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.8.13:6443: connect: connection refused Feb 9 19:38:17.327206 kubelet[2087]: I0209 19:38:17.327183 2087 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 9 19:38:17.327576 kubelet[2087]: I0209 19:38:17.327545 2087 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 9 19:38:17.327812 kubelet[2087]: I0209 19:38:17.327798 2087 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Feb 9 19:38:17.327963 kubelet[2087]: I0209 19:38:17.327955 2087 topology_manager.go:138] "Creating topology manager with none policy" Feb 9 19:38:17.328020 kubelet[2087]: I0209 19:38:17.328012 2087 container_manager_linux.go:301] "Creating device plugin manager" Feb 9 19:38:17.328160 kubelet[2087]: I0209 19:38:17.328151 2087 state_mem.go:36] "Initialized new in-memory state store" Feb 9 19:38:17.328299 kubelet[2087]: I0209 19:38:17.328289 2087 kubelet.go:393] "Attempting to sync node with API server" Feb 9 19:38:17.328381 kubelet[2087]: I0209 19:38:17.328372 2087 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 9 19:38:17.328465 kubelet[2087]: I0209 19:38:17.328456 2087 kubelet.go:309] "Adding apiserver pod source" Feb 9 19:38:17.328536 kubelet[2087]: I0209 19:38:17.328528 2087 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 9 19:38:17.329676 kubelet[2087]: I0209 19:38:17.329659 2087 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 9 19:38:17.330054 kubelet[2087]: W0209 19:38:17.330039 2087 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
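The HardEvictionThresholds in the nodeConfig dump above mix an absolute quantity (memory.available < 100Mi) with percentages of filesystem capacity (nodefs.available < 10%, nodefs.inodesFree < 5%, imagefs.available < 15%). A worked example of how a percentage signal becomes a byte limit; the 30 GiB capacity is an assumed figure:

thresholds = {
    "memory.available":  ("quantity", 100 * 1024**2),   # 100Mi, in bytes
    "nodefs.available":  ("percentage", 0.10),
    "nodefs.inodesFree": ("percentage", 0.05),
    "imagefs.available": ("percentage", 0.15),
}

nodefs_capacity = 30 * 1024**3          # assumed 30 GiB root filesystem
kind, value = thresholds["nodefs.available"]
limit = value * nodefs_capacity if kind == "percentage" else value
print(f"nodefs.available: evict below {limit / 1024**3:.1f} GiB free")  # 3.0 GiB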
Feb 9 19:38:17.330726 kubelet[2087]: I0209 19:38:17.330709 2087 server.go:1232] "Started kubelet" Feb 9 19:38:17.330962 kubelet[2087]: W0209 19:38:17.330924 2087 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.200.8.13:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.13:6443: connect: connection refused Feb 9 19:38:17.331067 kubelet[2087]: E0209 19:38:17.331057 2087 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.8.13:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.13:6443: connect: connection refused Feb 9 19:38:17.331233 kubelet[2087]: W0209 19:38:17.331200 2087 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.200.8.13:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.2-a-4c52a92a5f&limit=500&resourceVersion=0": dial tcp 10.200.8.13:6443: connect: connection refused Feb 9 19:38:17.331319 kubelet[2087]: E0209 19:38:17.331310 2087 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.8.13:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.2-a-4c52a92a5f&limit=500&resourceVersion=0": dial tcp 10.200.8.13:6443: connect: connection refused Feb 9 19:38:17.339569 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). Feb 9 19:38:17.339808 kubelet[2087]: I0209 19:38:17.339791 2087 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 9 19:38:17.341075 kubelet[2087]: E0209 19:38:17.341044 2087 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 9 19:38:17.341165 kubelet[2087]: E0209 19:38:17.341086 2087 kubelet.go:1431] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 9 19:38:17.344018 kubelet[2087]: I0209 19:38:17.342585 2087 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Feb 9 19:38:17.344718 kubelet[2087]: I0209 19:38:17.344701 2087 volume_manager.go:291] "Starting Kubelet Volume Manager" Feb 9 19:38:17.344914 kubelet[2087]: I0209 19:38:17.344895 2087 server.go:462] "Adding debug handlers to kubelet server" Feb 9 19:38:17.345325 kubelet[2087]: I0209 19:38:17.345305 2087 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 9 19:38:17.345477 kubelet[2087]: I0209 19:38:17.345465 2087 reconciler_new.go:29] "Reconciler: start to sync state" Feb 9 19:38:17.346172 kubelet[2087]: I0209 19:38:17.346147 2087 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Feb 9 19:38:17.346357 kubelet[2087]: I0209 19:38:17.346337 2087 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 9 19:38:17.347125 kubelet[2087]: W0209 19:38:17.347084 2087 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.200.8.13:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.13:6443: connect: connection refused Feb 9 19:38:17.347285 kubelet[2087]: E0209 19:38:17.347270 2087 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.8.13:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.13:6443: connect: connection refused Feb 9 19:38:17.347475 kubelet[2087]: E0209 19:38:17.347460 2087 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.13:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-a-4c52a92a5f?timeout=10s\": dial tcp 10.200.8.13:6443: connect: connection refused" interval="200ms" Feb 9 19:38:17.347856 kubelet[2087]: E0209 19:38:17.347753 2087 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-a-4c52a92a5f.17b24905a5b623cc", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-a-4c52a92a5f", UID:"ci-3510.3.2-a-4c52a92a5f", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-a-4c52a92a5f"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 38, 17, 330680780, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 38, 17, 330680780, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"ci-3510.3.2-a-4c52a92a5f"}': 'Post "https://10.200.8.13:6443/api/v1/namespaces/default/events": dial tcp 10.200.8.13:6443: connect: connection refused'(may retry after 
sleeping) Feb 9 19:38:17.389076 kubelet[2087]: I0209 19:38:17.389047 2087 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 9 19:38:17.389076 kubelet[2087]: I0209 19:38:17.389072 2087 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 9 19:38:17.389316 kubelet[2087]: I0209 19:38:17.389092 2087 state_mem.go:36] "Initialized new in-memory state store" Feb 9 19:38:17.396875 kubelet[2087]: I0209 19:38:17.396840 2087 policy_none.go:49] "None policy: Start" Feb 9 19:38:17.397502 kubelet[2087]: I0209 19:38:17.397476 2087 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 9 19:38:17.397502 kubelet[2087]: I0209 19:38:17.397502 2087 state_mem.go:35] "Initializing new in-memory state store" Feb 9 19:38:17.405069 systemd[1]: Created slice kubepods.slice. Feb 9 19:38:17.409853 systemd[1]: Created slice kubepods-besteffort.slice. Feb 9 19:38:17.419227 kubelet[2087]: I0209 19:38:17.419204 2087 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 9 19:38:17.421667 kubelet[2087]: I0209 19:38:17.421328 2087 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 9 19:38:17.421667 kubelet[2087]: I0209 19:38:17.421358 2087 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 9 19:38:17.421667 kubelet[2087]: I0209 19:38:17.421392 2087 kubelet.go:2303] "Starting kubelet main sync loop" Feb 9 19:38:17.421667 kubelet[2087]: E0209 19:38:17.421446 2087 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 9 19:38:17.421497 systemd[1]: Created slice kubepods-burstable.slice. Feb 9 19:38:17.424368 kubelet[2087]: W0209 19:38:17.424321 2087 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.200.8.13:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.13:6443: connect: connection refused Feb 9 19:38:17.424513 kubelet[2087]: E0209 19:38:17.424500 2087 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.8.13:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.13:6443: connect: connection refused Feb 9 19:38:17.424776 kubelet[2087]: I0209 19:38:17.424761 2087 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 9 19:38:17.425124 kubelet[2087]: I0209 19:38:17.425109 2087 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 9 19:38:17.427358 kubelet[2087]: E0209 19:38:17.427335 2087 eviction_manager.go:258] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3510.3.2-a-4c52a92a5f\" not found" Feb 9 19:38:17.446823 kubelet[2087]: I0209 19:38:17.446790 2087 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-4c52a92a5f" Feb 9 19:38:17.447155 kubelet[2087]: E0209 19:38:17.447133 2087 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.200.8.13:6443/api/v1/nodes\": dial tcp 10.200.8.13:6443: connect: connection refused" node="ci-3510.3.2-a-4c52a92a5f" Feb 9 19:38:17.521705 kubelet[2087]: I0209 19:38:17.521648 2087 topology_manager.go:215] "Topology Admit Handler" podUID="1d72a535e955621138d125e1fb678409" podNamespace="kube-system" podName="kube-scheduler-ci-3510.3.2-a-4c52a92a5f" Feb 9 
19:38:17.523704 kubelet[2087]: I0209 19:38:17.523678 2087 topology_manager.go:215] "Topology Admit Handler" podUID="ffe9a06f978f0b72085ec7397b9225d3" podNamespace="kube-system" podName="kube-apiserver-ci-3510.3.2-a-4c52a92a5f" Feb 9 19:38:17.524957 kubelet[2087]: I0209 19:38:17.524931 2087 topology_manager.go:215] "Topology Admit Handler" podUID="f13b63b48a3b536819b981c8780ce27e" podNamespace="kube-system" podName="kube-controller-manager-ci-3510.3.2-a-4c52a92a5f" Feb 9 19:38:17.531649 systemd[1]: Created slice kubepods-burstable-pod1d72a535e955621138d125e1fb678409.slice. Feb 9 19:38:17.541610 systemd[1]: Created slice kubepods-burstable-podffe9a06f978f0b72085ec7397b9225d3.slice. Feb 9 19:38:17.546653 kubelet[2087]: I0209 19:38:17.546628 2087 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f13b63b48a3b536819b981c8780ce27e-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.2-a-4c52a92a5f\" (UID: \"f13b63b48a3b536819b981c8780ce27e\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-4c52a92a5f" Feb 9 19:38:17.546899 kubelet[2087]: I0209 19:38:17.546879 2087 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ffe9a06f978f0b72085ec7397b9225d3-ca-certs\") pod \"kube-apiserver-ci-3510.3.2-a-4c52a92a5f\" (UID: \"ffe9a06f978f0b72085ec7397b9225d3\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-4c52a92a5f" Feb 9 19:38:17.546983 kubelet[2087]: I0209 19:38:17.546936 2087 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ffe9a06f978f0b72085ec7397b9225d3-k8s-certs\") pod \"kube-apiserver-ci-3510.3.2-a-4c52a92a5f\" (UID: \"ffe9a06f978f0b72085ec7397b9225d3\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-4c52a92a5f" Feb 9 19:38:17.546983 kubelet[2087]: I0209 19:38:17.546967 2087 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ffe9a06f978f0b72085ec7397b9225d3-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.2-a-4c52a92a5f\" (UID: \"ffe9a06f978f0b72085ec7397b9225d3\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-4c52a92a5f" Feb 9 19:38:17.547077 kubelet[2087]: I0209 19:38:17.547019 2087 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f13b63b48a3b536819b981c8780ce27e-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.2-a-4c52a92a5f\" (UID: \"f13b63b48a3b536819b981c8780ce27e\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-4c52a92a5f" Feb 9 19:38:17.547077 kubelet[2087]: I0209 19:38:17.547051 2087 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f13b63b48a3b536819b981c8780ce27e-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.2-a-4c52a92a5f\" (UID: \"f13b63b48a3b536819b981c8780ce27e\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-4c52a92a5f" Feb 9 19:38:17.547156 kubelet[2087]: I0209 19:38:17.547093 2087 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f13b63b48a3b536819b981c8780ce27e-ca-certs\") pod 
\"kube-controller-manager-ci-3510.3.2-a-4c52a92a5f\" (UID: \"f13b63b48a3b536819b981c8780ce27e\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-4c52a92a5f" Feb 9 19:38:17.547156 kubelet[2087]: I0209 19:38:17.547123 2087 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f13b63b48a3b536819b981c8780ce27e-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.2-a-4c52a92a5f\" (UID: \"f13b63b48a3b536819b981c8780ce27e\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-4c52a92a5f" Feb 9 19:38:17.547253 kubelet[2087]: I0209 19:38:17.547169 2087 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1d72a535e955621138d125e1fb678409-kubeconfig\") pod \"kube-scheduler-ci-3510.3.2-a-4c52a92a5f\" (UID: \"1d72a535e955621138d125e1fb678409\") " pod="kube-system/kube-scheduler-ci-3510.3.2-a-4c52a92a5f" Feb 9 19:38:17.549242 systemd[1]: Created slice kubepods-burstable-podf13b63b48a3b536819b981c8780ce27e.slice. Feb 9 19:38:17.549979 kubelet[2087]: E0209 19:38:17.549962 2087 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.13:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-a-4c52a92a5f?timeout=10s\": dial tcp 10.200.8.13:6443: connect: connection refused" interval="400ms" Feb 9 19:38:17.649451 kubelet[2087]: I0209 19:38:17.649418 2087 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-4c52a92a5f" Feb 9 19:38:17.649816 kubelet[2087]: E0209 19:38:17.649795 2087 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.200.8.13:6443/api/v1/nodes\": dial tcp 10.200.8.13:6443: connect: connection refused" node="ci-3510.3.2-a-4c52a92a5f" Feb 9 19:38:17.841006 env[1342]: time="2024-02-09T19:38:17.840951804Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.2-a-4c52a92a5f,Uid:1d72a535e955621138d125e1fb678409,Namespace:kube-system,Attempt:0,}" Feb 9 19:38:17.845341 env[1342]: time="2024-02-09T19:38:17.845298465Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.2-a-4c52a92a5f,Uid:ffe9a06f978f0b72085ec7397b9225d3,Namespace:kube-system,Attempt:0,}" Feb 9 19:38:17.852824 env[1342]: time="2024-02-09T19:38:17.852580566Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.2-a-4c52a92a5f,Uid:f13b63b48a3b536819b981c8780ce27e,Namespace:kube-system,Attempt:0,}" Feb 9 19:38:17.951241 kubelet[2087]: E0209 19:38:17.951099 2087 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.13:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-a-4c52a92a5f?timeout=10s\": dial tcp 10.200.8.13:6443: connect: connection refused" interval="800ms" Feb 9 19:38:18.051942 kubelet[2087]: I0209 19:38:18.051910 2087 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-4c52a92a5f" Feb 9 19:38:18.052306 kubelet[2087]: E0209 19:38:18.052277 2087 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.200.8.13:6443/api/v1/nodes\": dial tcp 10.200.8.13:6443: connect: connection refused" node="ci-3510.3.2-a-4c52a92a5f" Feb 9 19:38:18.393376 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount791578898.mount: Deactivated successfully. 
Feb 9 19:38:18.436024 env[1342]: time="2024-02-09T19:38:18.435964256Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:38:18.458118 kubelet[2087]: W0209 19:38:18.458073 2087 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.200.8.13:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.13:6443: connect: connection refused Feb 9 19:38:18.458118 kubelet[2087]: E0209 19:38:18.458116 2087 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.8.13:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.13:6443: connect: connection refused Feb 9 19:38:18.460230 env[1342]: time="2024-02-09T19:38:18.460176485Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:38:18.464069 env[1342]: time="2024-02-09T19:38:18.464028538Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:38:18.469022 env[1342]: time="2024-02-09T19:38:18.468977005Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:38:18.480084 env[1342]: time="2024-02-09T19:38:18.480030956Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:38:18.483833 env[1342]: time="2024-02-09T19:38:18.483777307Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:38:18.488341 env[1342]: time="2024-02-09T19:38:18.488304668Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:38:18.491043 env[1342]: time="2024-02-09T19:38:18.491004005Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:38:18.495067 env[1342]: time="2024-02-09T19:38:18.495031360Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:38:18.497874 env[1342]: time="2024-02-09T19:38:18.497841298Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:38:18.504431 env[1342]: time="2024-02-09T19:38:18.504398387Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:38:18.511353 env[1342]: time="2024-02-09T19:38:18.511315681Z" 
level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:38:18.516751 kubelet[2087]: W0209 19:38:18.516693 2087 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.200.8.13:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.13:6443: connect: connection refused Feb 9 19:38:18.516860 kubelet[2087]: E0209 19:38:18.516758 2087 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.8.13:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.13:6443: connect: connection refused Feb 9 19:38:18.555426 env[1342]: time="2024-02-09T19:38:18.555219678Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:38:18.555426 env[1342]: time="2024-02-09T19:38:18.555269679Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:38:18.555426 env[1342]: time="2024-02-09T19:38:18.555281079Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:38:18.555743 env[1342]: time="2024-02-09T19:38:18.555486582Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/5caa3a18d9f48c1a430c81b984eed040114685ba70df35e6dc52dc86b4e3f369 pid=2126 runtime=io.containerd.runc.v2 Feb 9 19:38:18.572733 systemd[1]: Started cri-containerd-5caa3a18d9f48c1a430c81b984eed040114685ba70df35e6dc52dc86b4e3f369.scope. Feb 9 19:38:18.596587 env[1342]: time="2024-02-09T19:38:18.596458539Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:38:18.596801 env[1342]: time="2024-02-09T19:38:18.596596141Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:38:18.596801 env[1342]: time="2024-02-09T19:38:18.596650142Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:38:18.596915 env[1342]: time="2024-02-09T19:38:18.596831345Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f8734efbf547d6e6d53dfff3973b9eb7566cd64410a61c4b839f794ad614c8df pid=2154 runtime=io.containerd.runc.v2 Feb 9 19:38:18.602354 env[1342]: time="2024-02-09T19:38:18.601671310Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:38:18.602354 env[1342]: time="2024-02-09T19:38:18.601712911Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:38:18.602354 env[1342]: time="2024-02-09T19:38:18.601726811Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:38:18.602354 env[1342]: time="2024-02-09T19:38:18.601850613Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/76d5373a7cf78d0409b34dab4461673e51e2fcc1c233146faea3337466e37658 pid=2163 runtime=io.containerd.runc.v2 Feb 9 19:38:18.624565 systemd[1]: Started cri-containerd-76d5373a7cf78d0409b34dab4461673e51e2fcc1c233146faea3337466e37658.scope. Feb 9 19:38:18.638705 systemd[1]: Started cri-containerd-f8734efbf547d6e6d53dfff3973b9eb7566cd64410a61c4b839f794ad614c8df.scope. Feb 9 19:38:18.665377 kubelet[2087]: W0209 19:38:18.665226 2087 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.200.8.13:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.2-a-4c52a92a5f&limit=500&resourceVersion=0": dial tcp 10.200.8.13:6443: connect: connection refused Feb 9 19:38:18.665619 kubelet[2087]: E0209 19:38:18.665598 2087 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.8.13:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.2-a-4c52a92a5f&limit=500&resourceVersion=0": dial tcp 10.200.8.13:6443: connect: connection refused Feb 9 19:38:18.691314 env[1342]: time="2024-02-09T19:38:18.691255329Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.2-a-4c52a92a5f,Uid:1d72a535e955621138d125e1fb678409,Namespace:kube-system,Attempt:0,} returns sandbox id \"5caa3a18d9f48c1a430c81b984eed040114685ba70df35e6dc52dc86b4e3f369\"" Feb 9 19:38:18.696416 env[1342]: time="2024-02-09T19:38:18.696360699Z" level=info msg="CreateContainer within sandbox \"5caa3a18d9f48c1a430c81b984eed040114685ba70df35e6dc52dc86b4e3f369\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 9 19:38:18.727630 env[1342]: time="2024-02-09T19:38:18.727520822Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.2-a-4c52a92a5f,Uid:ffe9a06f978f0b72085ec7397b9225d3,Namespace:kube-system,Attempt:0,} returns sandbox id \"f8734efbf547d6e6d53dfff3973b9eb7566cd64410a61c4b839f794ad614c8df\"" Feb 9 19:38:18.728379 env[1342]: time="2024-02-09T19:38:18.728249732Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.2-a-4c52a92a5f,Uid:f13b63b48a3b536819b981c8780ce27e,Namespace:kube-system,Attempt:0,} returns sandbox id \"76d5373a7cf78d0409b34dab4461673e51e2fcc1c233146faea3337466e37658\"" Feb 9 19:38:18.732003 env[1342]: time="2024-02-09T19:38:18.731609078Z" level=info msg="CreateContainer within sandbox \"f8734efbf547d6e6d53dfff3973b9eb7566cd64410a61c4b839f794ad614c8df\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 9 19:38:18.735360 env[1342]: time="2024-02-09T19:38:18.735335429Z" level=info msg="CreateContainer within sandbox \"76d5373a7cf78d0409b34dab4461673e51e2fcc1c233146faea3337466e37658\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 9 19:38:18.739427 env[1342]: time="2024-02-09T19:38:18.739400284Z" level=info msg="CreateContainer within sandbox \"5caa3a18d9f48c1a430c81b984eed040114685ba70df35e6dc52dc86b4e3f369\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"8970e403e6515c54b0f0c1d36aab779cad946c2a9e27e6f4524faad0439fc777\"" Feb 9 19:38:18.745925 env[1342]: time="2024-02-09T19:38:18.745876872Z" level=info msg="StartContainer for 
\"8970e403e6515c54b0f0c1d36aab779cad946c2a9e27e6f4524faad0439fc777\"" Feb 9 19:38:18.755184 kubelet[2087]: E0209 19:38:18.755114 2087 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.13:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-a-4c52a92a5f?timeout=10s\": dial tcp 10.200.8.13:6443: connect: connection refused" interval="1.6s" Feb 9 19:38:18.771784 systemd[1]: Started cri-containerd-8970e403e6515c54b0f0c1d36aab779cad946c2a9e27e6f4524faad0439fc777.scope. Feb 9 19:38:18.798204 env[1342]: time="2024-02-09T19:38:18.798158583Z" level=info msg="CreateContainer within sandbox \"f8734efbf547d6e6d53dfff3973b9eb7566cd64410a61c4b839f794ad614c8df\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"f874e745d6184fb9eb47c0f683b7b5ca54ebf7326a6aad465c32911ee0513161\"" Feb 9 19:38:18.803131 env[1342]: time="2024-02-09T19:38:18.803089551Z" level=info msg="StartContainer for \"f874e745d6184fb9eb47c0f683b7b5ca54ebf7326a6aad465c32911ee0513161\"" Feb 9 19:38:18.805383 env[1342]: time="2024-02-09T19:38:18.805348581Z" level=info msg="CreateContainer within sandbox \"76d5373a7cf78d0409b34dab4461673e51e2fcc1c233146faea3337466e37658\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"c8c47882f27331eb087f9382755fb6a2cac7f6ddebe03f75ad9bda8004fcf91a\"" Feb 9 19:38:18.806189 env[1342]: time="2024-02-09T19:38:18.806153292Z" level=info msg="StartContainer for \"c8c47882f27331eb087f9382755fb6a2cac7f6ddebe03f75ad9bda8004fcf91a\"" Feb 9 19:38:18.833078 systemd[1]: Started cri-containerd-f874e745d6184fb9eb47c0f683b7b5ca54ebf7326a6aad465c32911ee0513161.scope. Feb 9 19:38:18.848109 systemd[1]: Started cri-containerd-c8c47882f27331eb087f9382755fb6a2cac7f6ddebe03f75ad9bda8004fcf91a.scope. 
Feb 9 19:38:18.854797 kubelet[2087]: I0209 19:38:18.854755 2087 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-4c52a92a5f" Feb 9 19:38:18.855244 kubelet[2087]: E0209 19:38:18.855210 2087 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.200.8.13:6443/api/v1/nodes\": dial tcp 10.200.8.13:6443: connect: connection refused" node="ci-3510.3.2-a-4c52a92a5f" Feb 9 19:38:18.876107 env[1342]: time="2024-02-09T19:38:18.876041043Z" level=info msg="StartContainer for \"8970e403e6515c54b0f0c1d36aab779cad946c2a9e27e6f4524faad0439fc777\" returns successfully" Feb 9 19:38:18.920280 env[1342]: time="2024-02-09T19:38:18.920147343Z" level=info msg="StartContainer for \"f874e745d6184fb9eb47c0f683b7b5ca54ebf7326a6aad465c32911ee0513161\" returns successfully" Feb 9 19:38:18.971767 env[1342]: time="2024-02-09T19:38:18.971696744Z" level=info msg="StartContainer for \"c8c47882f27331eb087f9382755fb6a2cac7f6ddebe03f75ad9bda8004fcf91a\" returns successfully" Feb 9 19:38:19.007046 kubelet[2087]: W0209 19:38:19.006946 2087 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.200.8.13:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.13:6443: connect: connection refused Feb 9 19:38:19.007046 kubelet[2087]: E0209 19:38:19.007045 2087 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.8.13:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.13:6443: connect: connection refused Feb 9 19:38:20.457623 kubelet[2087]: I0209 19:38:20.457588 2087 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-4c52a92a5f" Feb 9 19:38:21.656788 kubelet[2087]: E0209 19:38:21.656750 2087 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-3510.3.2-a-4c52a92a5f\" not found" node="ci-3510.3.2-a-4c52a92a5f" Feb 9 19:38:21.662887 kubelet[2087]: I0209 19:38:21.662851 2087 kubelet_node_status.go:73] "Successfully registered node" node="ci-3510.3.2-a-4c52a92a5f" Feb 9 19:38:21.712856 kubelet[2087]: E0209 19:38:21.712663 2087 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-a-4c52a92a5f.17b24905a5b623cc", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-a-4c52a92a5f", UID:"ci-3510.3.2-a-4c52a92a5f", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-a-4c52a92a5f"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 38, 17, 330680780, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 38, 17, 330680780, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", 
ReportingInstance:"ci-3510.3.2-a-4c52a92a5f"}': 'namespaces "default" not found' (will not retry!) Feb 9 19:38:22.332474 kubelet[2087]: I0209 19:38:22.332407 2087 apiserver.go:52] "Watching apiserver" Feb 9 19:38:22.345933 kubelet[2087]: I0209 19:38:22.345877 2087 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 9 19:38:23.196023 kubelet[2087]: W0209 19:38:23.195986 2087 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 9 19:38:24.484283 systemd[1]: Reloading. Feb 9 19:38:24.595254 /usr/lib/systemd/system-generators/torcx-generator[2384]: time="2024-02-09T19:38:24Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 19:38:24.595297 /usr/lib/systemd/system-generators/torcx-generator[2384]: time="2024-02-09T19:38:24Z" level=info msg="torcx already run" Feb 9 19:38:24.698681 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 19:38:24.698705 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 19:38:24.729962 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 19:38:24.883139 kubelet[2087]: I0209 19:38:24.881193 2087 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 9 19:38:24.884182 systemd[1]: Stopping kubelet.service... Feb 9 19:38:24.894253 systemd[1]: kubelet.service: Deactivated successfully. Feb 9 19:38:24.894666 systemd[1]: Stopped kubelet.service. Feb 9 19:38:24.897093 systemd[1]: Started kubelet.service. Feb 9 19:38:24.985612 kubelet[2444]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 19:38:24.985612 kubelet[2444]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 9 19:38:24.985612 kubelet[2444]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 9 19:38:24.986106 kubelet[2444]: I0209 19:38:24.985648 2444 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 9 19:38:24.990258 kubelet[2444]: I0209 19:38:24.990216 2444 server.go:467] "Kubelet version" kubeletVersion="v1.28.1" Feb 9 19:38:24.990258 kubelet[2444]: I0209 19:38:24.990243 2444 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 9 19:38:24.990523 kubelet[2444]: I0209 19:38:24.990503 2444 server.go:895] "Client rotation is on, will bootstrap in background" Feb 9 19:38:24.992189 kubelet[2444]: I0209 19:38:24.992161 2444 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 9 19:38:24.993382 kubelet[2444]: I0209 19:38:24.993346 2444 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 9 19:38:25.000039 kubelet[2444]: I0209 19:38:25.000017 2444 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Feb 9 19:38:25.000819 kubelet[2444]: I0209 19:38:25.000804 2444 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 9 19:38:25.001054 kubelet[2444]: I0209 19:38:25.001042 2444 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Feb 9 19:38:25.001180 kubelet[2444]: I0209 19:38:25.001172 2444 topology_manager.go:138] "Creating topology manager with none policy" Feb 9 19:38:25.001227 kubelet[2444]: I0209 19:38:25.001221 2444 container_manager_linux.go:301] "Creating device plugin manager" Feb 9 19:38:25.001297 kubelet[2444]: I0209 19:38:25.001291 2444 state_mem.go:36] "Initialized new in-memory state store" Feb 9 19:38:25.001426 kubelet[2444]: I0209 19:38:25.001416 2444 kubelet.go:393] "Attempting to sync node with API server" Feb 9 19:38:25.001497 kubelet[2444]: I0209 19:38:25.001488 2444 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 9 19:38:25.001595 kubelet[2444]: I0209 19:38:25.001585 2444 kubelet.go:309] "Adding apiserver pod source" Feb 9 19:38:25.001683 
kubelet[2444]: I0209 19:38:25.001674 2444 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 9 19:38:25.011360 sudo[2456]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Feb 9 19:38:25.012048 sudo[2456]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Feb 9 19:38:25.014659 kubelet[2444]: I0209 19:38:25.014477 2444 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 9 19:38:25.015311 kubelet[2444]: I0209 19:38:25.015148 2444 server.go:1232] "Started kubelet" Feb 9 19:38:25.033021 kubelet[2444]: I0209 19:38:25.032987 2444 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Feb 9 19:38:25.038936 kubelet[2444]: E0209 19:38:25.038903 2444 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 9 19:38:25.038936 kubelet[2444]: E0209 19:38:25.038945 2444 kubelet.go:1431] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 9 19:38:25.040481 kubelet[2444]: I0209 19:38:25.040455 2444 server.go:462] "Adding debug handlers to kubelet server" Feb 9 19:38:25.047806 kubelet[2444]: I0209 19:38:25.045655 2444 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Feb 9 19:38:25.047806 kubelet[2444]: I0209 19:38:25.045880 2444 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 9 19:38:25.048136 kubelet[2444]: I0209 19:38:25.048123 2444 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 9 19:38:25.052429 kubelet[2444]: I0209 19:38:25.049784 2444 volume_manager.go:291] "Starting Kubelet Volume Manager" Feb 9 19:38:25.052429 kubelet[2444]: I0209 19:38:25.050053 2444 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 9 19:38:25.052429 kubelet[2444]: I0209 19:38:25.050193 2444 reconciler_new.go:29] "Reconciler: start to sync state" Feb 9 19:38:25.075967 kubelet[2444]: I0209 19:38:25.075929 2444 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 9 19:38:25.077124 kubelet[2444]: I0209 19:38:25.077097 2444 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 9 19:38:25.077262 kubelet[2444]: I0209 19:38:25.077135 2444 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 9 19:38:25.077262 kubelet[2444]: I0209 19:38:25.077158 2444 kubelet.go:2303] "Starting kubelet main sync loop" Feb 9 19:38:25.077262 kubelet[2444]: E0209 19:38:25.077219 2444 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 9 19:38:25.147250 kubelet[2444]: I0209 19:38:25.147219 2444 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 9 19:38:25.147250 kubelet[2444]: I0209 19:38:25.147248 2444 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 9 19:38:25.147250 kubelet[2444]: I0209 19:38:25.147267 2444 state_mem.go:36] "Initialized new in-memory state store" Feb 9 19:38:25.147541 kubelet[2444]: I0209 19:38:25.147511 2444 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 9 19:38:25.147541 kubelet[2444]: I0209 19:38:25.147545 2444 state_mem.go:96] "Updated CPUSet assignments" assignments={} Feb 9 19:38:25.147541 kubelet[2444]: I0209 19:38:25.147569 2444 policy_none.go:49] "None policy: Start" Feb 9 19:38:25.148316 kubelet[2444]: I0209 19:38:25.148292 2444 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 9 19:38:25.148316 kubelet[2444]: I0209 19:38:25.148320 2444 state_mem.go:35] "Initializing new in-memory state store" Feb 9 19:38:25.148514 kubelet[2444]: I0209 19:38:25.148500 2444 state_mem.go:75] "Updated machine memory state" Feb 9 19:38:25.153997 kubelet[2444]: I0209 19:38:25.153936 2444 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-4c52a92a5f" Feb 9 19:38:25.158522 kubelet[2444]: I0209 19:38:25.158500 2444 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 9 19:38:25.159588 kubelet[2444]: I0209 19:38:25.159568 2444 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 9 19:38:25.167446 kubelet[2444]: I0209 19:38:25.167416 2444 kubelet_node_status.go:108] "Node was previously registered" node="ci-3510.3.2-a-4c52a92a5f" Feb 9 19:38:25.167616 kubelet[2444]: I0209 19:38:25.167512 2444 kubelet_node_status.go:73] "Successfully registered node" node="ci-3510.3.2-a-4c52a92a5f" Feb 9 19:38:25.177323 kubelet[2444]: I0209 19:38:25.177291 2444 topology_manager.go:215] "Topology Admit Handler" podUID="ffe9a06f978f0b72085ec7397b9225d3" podNamespace="kube-system" podName="kube-apiserver-ci-3510.3.2-a-4c52a92a5f" Feb 9 19:38:25.177473 kubelet[2444]: I0209 19:38:25.177430 2444 topology_manager.go:215] "Topology Admit Handler" podUID="f13b63b48a3b536819b981c8780ce27e" podNamespace="kube-system" podName="kube-controller-manager-ci-3510.3.2-a-4c52a92a5f" Feb 9 19:38:25.177528 kubelet[2444]: I0209 19:38:25.177476 2444 topology_manager.go:215] "Topology Admit Handler" podUID="1d72a535e955621138d125e1fb678409" podNamespace="kube-system" podName="kube-scheduler-ci-3510.3.2-a-4c52a92a5f" Feb 9 19:38:25.193226 kubelet[2444]: W0209 19:38:25.188980 2444 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 9 19:38:25.198062 kubelet[2444]: W0209 19:38:25.198032 2444 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 9 19:38:25.202382 kubelet[2444]: W0209 19:38:25.202354 2444 
warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 9 19:38:25.202504 kubelet[2444]: E0209 19:38:25.202465 2444 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3510.3.2-a-4c52a92a5f\" already exists" pod="kube-system/kube-apiserver-ci-3510.3.2-a-4c52a92a5f" Feb 9 19:38:25.351615 kubelet[2444]: I0209 19:38:25.351571 2444 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f13b63b48a3b536819b981c8780ce27e-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.2-a-4c52a92a5f\" (UID: \"f13b63b48a3b536819b981c8780ce27e\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-4c52a92a5f" Feb 9 19:38:25.351890 kubelet[2444]: I0209 19:38:25.351878 2444 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f13b63b48a3b536819b981c8780ce27e-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.2-a-4c52a92a5f\" (UID: \"f13b63b48a3b536819b981c8780ce27e\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-4c52a92a5f" Feb 9 19:38:25.351999 kubelet[2444]: I0209 19:38:25.351987 2444 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f13b63b48a3b536819b981c8780ce27e-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.2-a-4c52a92a5f\" (UID: \"f13b63b48a3b536819b981c8780ce27e\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-4c52a92a5f" Feb 9 19:38:25.352087 kubelet[2444]: I0209 19:38:25.352078 2444 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1d72a535e955621138d125e1fb678409-kubeconfig\") pod \"kube-scheduler-ci-3510.3.2-a-4c52a92a5f\" (UID: \"1d72a535e955621138d125e1fb678409\") " pod="kube-system/kube-scheduler-ci-3510.3.2-a-4c52a92a5f" Feb 9 19:38:25.352179 kubelet[2444]: I0209 19:38:25.352167 2444 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ffe9a06f978f0b72085ec7397b9225d3-ca-certs\") pod \"kube-apiserver-ci-3510.3.2-a-4c52a92a5f\" (UID: \"ffe9a06f978f0b72085ec7397b9225d3\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-4c52a92a5f" Feb 9 19:38:25.352319 kubelet[2444]: I0209 19:38:25.352305 2444 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ffe9a06f978f0b72085ec7397b9225d3-k8s-certs\") pod \"kube-apiserver-ci-3510.3.2-a-4c52a92a5f\" (UID: \"ffe9a06f978f0b72085ec7397b9225d3\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-4c52a92a5f" Feb 9 19:38:25.352429 kubelet[2444]: I0209 19:38:25.352420 2444 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ffe9a06f978f0b72085ec7397b9225d3-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.2-a-4c52a92a5f\" (UID: \"ffe9a06f978f0b72085ec7397b9225d3\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-4c52a92a5f" Feb 9 19:38:25.352520 kubelet[2444]: I0209 19:38:25.352513 2444 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f13b63b48a3b536819b981c8780ce27e-ca-certs\") pod \"kube-controller-manager-ci-3510.3.2-a-4c52a92a5f\" (UID: \"f13b63b48a3b536819b981c8780ce27e\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-4c52a92a5f" Feb 9 19:38:25.352628 kubelet[2444]: I0209 19:38:25.352619 2444 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f13b63b48a3b536819b981c8780ce27e-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.2-a-4c52a92a5f\" (UID: \"f13b63b48a3b536819b981c8780ce27e\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-4c52a92a5f" Feb 9 19:38:25.613068 sudo[2456]: pam_unix(sudo:session): session closed for user root Feb 9 19:38:26.003573 kubelet[2444]: I0209 19:38:26.003499 2444 apiserver.go:52] "Watching apiserver" Feb 9 19:38:26.050486 kubelet[2444]: I0209 19:38:26.050427 2444 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 9 19:38:26.127719 kubelet[2444]: W0209 19:38:26.127676 2444 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 9 19:38:26.127994 kubelet[2444]: E0209 19:38:26.127968 2444 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-3510.3.2-a-4c52a92a5f\" already exists" pod="kube-system/kube-controller-manager-ci-3510.3.2-a-4c52a92a5f" Feb 9 19:38:26.131891 kubelet[2444]: W0209 19:38:26.131862 2444 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 9 19:38:26.132049 kubelet[2444]: E0209 19:38:26.131940 2444 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3510.3.2-a-4c52a92a5f\" already exists" pod="kube-system/kube-apiserver-ci-3510.3.2-a-4c52a92a5f" Feb 9 19:38:26.141093 kubelet[2444]: I0209 19:38:26.141062 2444 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3510.3.2-a-4c52a92a5f" podStartSLOduration=1.141004622 podCreationTimestamp="2024-02-09 19:38:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:38:26.140300414 +0000 UTC m=+1.234326957" watchObservedRunningTime="2024-02-09 19:38:26.141004622 +0000 UTC m=+1.235031165" Feb 9 19:38:26.157916 kubelet[2444]: I0209 19:38:26.157870 2444 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3510.3.2-a-4c52a92a5f" podStartSLOduration=3.157790509 podCreationTimestamp="2024-02-09 19:38:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:38:26.14983082 +0000 UTC m=+1.243857463" watchObservedRunningTime="2024-02-09 19:38:26.157790509 +0000 UTC m=+1.251817152" Feb 9 19:38:26.164801 kubelet[2444]: I0209 19:38:26.164766 2444 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3510.3.2-a-4c52a92a5f" podStartSLOduration=1.164725886 podCreationTimestamp="2024-02-09 19:38:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:38:26.158403216 +0000 UTC m=+1.252429859" 
watchObservedRunningTime="2024-02-09 19:38:26.164725886 +0000 UTC m=+1.258752429" Feb 9 19:38:26.650358 sudo[1672]: pam_unix(sudo:session): session closed for user root Feb 9 19:38:26.751665 sshd[1669]: pam_unix(sshd:session): session closed for user core Feb 9 19:38:26.755686 systemd[1]: sshd@4-10.200.8.13:22-10.200.12.6:54618.service: Deactivated successfully. Feb 9 19:38:26.757013 systemd[1]: session-7.scope: Deactivated successfully. Feb 9 19:38:26.757301 systemd[1]: session-7.scope: Consumed 4.524s CPU time. Feb 9 19:38:26.757992 systemd-logind[1327]: Session 7 logged out. Waiting for processes to exit. Feb 9 19:38:26.759193 systemd-logind[1327]: Removed session 7. Feb 9 19:38:37.900864 kubelet[2444]: I0209 19:38:37.900824 2444 kuberuntime_manager.go:1463] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 9 19:38:37.901428 env[1342]: time="2024-02-09T19:38:37.901309358Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Feb 9 19:38:37.901791 kubelet[2444]: I0209 19:38:37.901630 2444 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 9 19:38:38.795234 kubelet[2444]: I0209 19:38:38.795188 2444 topology_manager.go:215] "Topology Admit Handler" podUID="4095c976-ab93-4185-b876-f45fb5d13ed2" podNamespace="kube-system" podName="kube-proxy-rcpph" Feb 9 19:38:38.796614 kubelet[2444]: I0209 19:38:38.796586 2444 topology_manager.go:215] "Topology Admit Handler" podUID="26dce5ed-fc83-4035-a080-496f91ca8608" podNamespace="kube-system" podName="cilium-h5zxj" Feb 9 19:38:38.802487 systemd[1]: Created slice kubepods-besteffort-pod4095c976_ab93_4185_b876_f45fb5d13ed2.slice. Feb 9 19:38:38.817015 systemd[1]: Created slice kubepods-burstable-pod26dce5ed_fc83_4035_a080_496f91ca8608.slice. 
Feb 9 19:38:38.843340 kubelet[2444]: I0209 19:38:38.843304 2444 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/26dce5ed-fc83-4035-a080-496f91ca8608-cilium-config-path\") pod \"cilium-h5zxj\" (UID: \"26dce5ed-fc83-4035-a080-496f91ca8608\") " pod="kube-system/cilium-h5zxj" Feb 9 19:38:38.843616 kubelet[2444]: I0209 19:38:38.843599 2444 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/26dce5ed-fc83-4035-a080-496f91ca8608-xtables-lock\") pod \"cilium-h5zxj\" (UID: \"26dce5ed-fc83-4035-a080-496f91ca8608\") " pod="kube-system/cilium-h5zxj" Feb 9 19:38:38.843759 kubelet[2444]: I0209 19:38:38.843746 2444 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/26dce5ed-fc83-4035-a080-496f91ca8608-bpf-maps\") pod \"cilium-h5zxj\" (UID: \"26dce5ed-fc83-4035-a080-496f91ca8608\") " pod="kube-system/cilium-h5zxj" Feb 9 19:38:38.843887 kubelet[2444]: I0209 19:38:38.843875 2444 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/26dce5ed-fc83-4035-a080-496f91ca8608-etc-cni-netd\") pod \"cilium-h5zxj\" (UID: \"26dce5ed-fc83-4035-a080-496f91ca8608\") " pod="kube-system/cilium-h5zxj" Feb 9 19:38:38.844014 kubelet[2444]: I0209 19:38:38.844001 2444 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/26dce5ed-fc83-4035-a080-496f91ca8608-host-proc-sys-net\") pod \"cilium-h5zxj\" (UID: \"26dce5ed-fc83-4035-a080-496f91ca8608\") " pod="kube-system/cilium-h5zxj" Feb 9 19:38:38.844135 kubelet[2444]: I0209 19:38:38.844122 2444 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/26dce5ed-fc83-4035-a080-496f91ca8608-cni-path\") pod \"cilium-h5zxj\" (UID: \"26dce5ed-fc83-4035-a080-496f91ca8608\") " pod="kube-system/cilium-h5zxj" Feb 9 19:38:38.844275 kubelet[2444]: I0209 19:38:38.844244 2444 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gdrql\" (UniqueName: \"kubernetes.io/projected/26dce5ed-fc83-4035-a080-496f91ca8608-kube-api-access-gdrql\") pod \"cilium-h5zxj\" (UID: \"26dce5ed-fc83-4035-a080-496f91ca8608\") " pod="kube-system/cilium-h5zxj" Feb 9 19:38:38.844338 kubelet[2444]: I0209 19:38:38.844317 2444 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4095c976-ab93-4185-b876-f45fb5d13ed2-xtables-lock\") pod \"kube-proxy-rcpph\" (UID: \"4095c976-ab93-4185-b876-f45fb5d13ed2\") " pod="kube-system/kube-proxy-rcpph" Feb 9 19:38:38.844390 kubelet[2444]: I0209 19:38:38.844381 2444 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/26dce5ed-fc83-4035-a080-496f91ca8608-cilium-cgroup\") pod \"cilium-h5zxj\" (UID: \"26dce5ed-fc83-4035-a080-496f91ca8608\") " pod="kube-system/cilium-h5zxj" Feb 9 19:38:38.844443 kubelet[2444]: I0209 19:38:38.844418 2444 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/4095c976-ab93-4185-b876-f45fb5d13ed2-lib-modules\") pod \"kube-proxy-rcpph\" (UID: \"4095c976-ab93-4185-b876-f45fb5d13ed2\") " pod="kube-system/kube-proxy-rcpph" Feb 9 19:38:38.844491 kubelet[2444]: I0209 19:38:38.844471 2444 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h9rpq\" (UniqueName: \"kubernetes.io/projected/4095c976-ab93-4185-b876-f45fb5d13ed2-kube-api-access-h9rpq\") pod \"kube-proxy-rcpph\" (UID: \"4095c976-ab93-4185-b876-f45fb5d13ed2\") " pod="kube-system/kube-proxy-rcpph" Feb 9 19:38:38.844536 kubelet[2444]: I0209 19:38:38.844518 2444 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/26dce5ed-fc83-4035-a080-496f91ca8608-cilium-run\") pod \"cilium-h5zxj\" (UID: \"26dce5ed-fc83-4035-a080-496f91ca8608\") " pod="kube-system/cilium-h5zxj" Feb 9 19:38:38.844588 kubelet[2444]: I0209 19:38:38.844563 2444 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/26dce5ed-fc83-4035-a080-496f91ca8608-lib-modules\") pod \"cilium-h5zxj\" (UID: \"26dce5ed-fc83-4035-a080-496f91ca8608\") " pod="kube-system/cilium-h5zxj" Feb 9 19:38:38.844639 kubelet[2444]: I0209 19:38:38.844605 2444 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/4095c976-ab93-4185-b876-f45fb5d13ed2-kube-proxy\") pod \"kube-proxy-rcpph\" (UID: \"4095c976-ab93-4185-b876-f45fb5d13ed2\") " pod="kube-system/kube-proxy-rcpph" Feb 9 19:38:38.844687 kubelet[2444]: I0209 19:38:38.844651 2444 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/26dce5ed-fc83-4035-a080-496f91ca8608-hostproc\") pod \"cilium-h5zxj\" (UID: \"26dce5ed-fc83-4035-a080-496f91ca8608\") " pod="kube-system/cilium-h5zxj" Feb 9 19:38:38.844730 kubelet[2444]: I0209 19:38:38.844692 2444 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/26dce5ed-fc83-4035-a080-496f91ca8608-clustermesh-secrets\") pod \"cilium-h5zxj\" (UID: \"26dce5ed-fc83-4035-a080-496f91ca8608\") " pod="kube-system/cilium-h5zxj" Feb 9 19:38:38.844776 kubelet[2444]: I0209 19:38:38.844736 2444 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/26dce5ed-fc83-4035-a080-496f91ca8608-host-proc-sys-kernel\") pod \"cilium-h5zxj\" (UID: \"26dce5ed-fc83-4035-a080-496f91ca8608\") " pod="kube-system/cilium-h5zxj" Feb 9 19:38:38.844776 kubelet[2444]: I0209 19:38:38.844764 2444 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/26dce5ed-fc83-4035-a080-496f91ca8608-hubble-tls\") pod \"cilium-h5zxj\" (UID: \"26dce5ed-fc83-4035-a080-496f91ca8608\") " pod="kube-system/cilium-h5zxj" Feb 9 19:38:38.881796 kubelet[2444]: I0209 19:38:38.881752 2444 topology_manager.go:215] "Topology Admit Handler" podUID="bbb41251-cb97-44c7-8ba0-e945d7b32396" podNamespace="kube-system" podName="cilium-operator-6bc8ccdb58-2cqvf" Feb 9 19:38:38.888338 systemd[1]: Created slice kubepods-besteffort-podbbb41251_cb97_44c7_8ba0_e945d7b32396.slice. 
Feb 9 19:38:38.945485 kubelet[2444]: I0209 19:38:38.945442 2444 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bbb41251-cb97-44c7-8ba0-e945d7b32396-cilium-config-path\") pod \"cilium-operator-6bc8ccdb58-2cqvf\" (UID: \"bbb41251-cb97-44c7-8ba0-e945d7b32396\") " pod="kube-system/cilium-operator-6bc8ccdb58-2cqvf" Feb 9 19:38:38.945965 kubelet[2444]: I0209 19:38:38.945682 2444 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gmlmv\" (UniqueName: \"kubernetes.io/projected/bbb41251-cb97-44c7-8ba0-e945d7b32396-kube-api-access-gmlmv\") pod \"cilium-operator-6bc8ccdb58-2cqvf\" (UID: \"bbb41251-cb97-44c7-8ba0-e945d7b32396\") " pod="kube-system/cilium-operator-6bc8ccdb58-2cqvf" Feb 9 19:38:39.115690 env[1342]: time="2024-02-09T19:38:39.115486253Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rcpph,Uid:4095c976-ab93-4185-b876-f45fb5d13ed2,Namespace:kube-system,Attempt:0,}" Feb 9 19:38:39.122216 env[1342]: time="2024-02-09T19:38:39.122158908Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-h5zxj,Uid:26dce5ed-fc83-4035-a080-496f91ca8608,Namespace:kube-system,Attempt:0,}" Feb 9 19:38:39.172635 env[1342]: time="2024-02-09T19:38:39.172425521Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:38:39.172635 env[1342]: time="2024-02-09T19:38:39.172482622Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:38:39.172635 env[1342]: time="2024-02-09T19:38:39.172496722Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:38:39.172972 env[1342]: time="2024-02-09T19:38:39.172784424Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/16b91f597cd85eadfbf5e55dfcc3fbda36b26b0f32a607d1bb7ae84b930aa947 pid=2523 runtime=io.containerd.runc.v2 Feb 9 19:38:39.180312 env[1342]: time="2024-02-09T19:38:39.180206585Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:38:39.180649 env[1342]: time="2024-02-09T19:38:39.180611088Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:38:39.180887 env[1342]: time="2024-02-09T19:38:39.180855090Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:38:39.181528 env[1342]: time="2024-02-09T19:38:39.181475996Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/31e995dcabfbdc06514fe05cad1ad5c7b59f87a9cdbfe84510211960fdd7aa1f pid=2543 runtime=io.containerd.runc.v2 Feb 9 19:38:39.193724 env[1342]: time="2024-02-09T19:38:39.193670296Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6bc8ccdb58-2cqvf,Uid:bbb41251-cb97-44c7-8ba0-e945d7b32396,Namespace:kube-system,Attempt:0,}" Feb 9 19:38:39.198565 systemd[1]: Started cri-containerd-31e995dcabfbdc06514fe05cad1ad5c7b59f87a9cdbfe84510211960fdd7aa1f.scope. 
Feb 9 19:38:39.206595 systemd[1]: Started cri-containerd-16b91f597cd85eadfbf5e55dfcc3fbda36b26b0f32a607d1bb7ae84b930aa947.scope. Feb 9 19:38:39.241719 env[1342]: time="2024-02-09T19:38:39.241661590Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-h5zxj,Uid:26dce5ed-fc83-4035-a080-496f91ca8608,Namespace:kube-system,Attempt:0,} returns sandbox id \"31e995dcabfbdc06514fe05cad1ad5c7b59f87a9cdbfe84510211960fdd7aa1f\"" Feb 9 19:38:39.247665 env[1342]: time="2024-02-09T19:38:39.247516538Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Feb 9 19:38:39.266852 env[1342]: time="2024-02-09T19:38:39.266766796Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:38:39.267114 env[1342]: time="2024-02-09T19:38:39.266805997Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:38:39.267114 env[1342]: time="2024-02-09T19:38:39.266819897Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:38:39.267114 env[1342]: time="2024-02-09T19:38:39.266981698Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/dddca052de45d3b1c13bf3dda20eaa7e3940f9f91eb7e5cdd5dfab813bb65c06 pid=2602 runtime=io.containerd.runc.v2 Feb 9 19:38:39.270461 env[1342]: time="2024-02-09T19:38:39.270413826Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rcpph,Uid:4095c976-ab93-4185-b876-f45fb5d13ed2,Namespace:kube-system,Attempt:0,} returns sandbox id \"16b91f597cd85eadfbf5e55dfcc3fbda36b26b0f32a607d1bb7ae84b930aa947\"" Feb 9 19:38:39.274671 env[1342]: time="2024-02-09T19:38:39.273728153Z" level=info msg="CreateContainer within sandbox \"16b91f597cd85eadfbf5e55dfcc3fbda36b26b0f32a607d1bb7ae84b930aa947\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 9 19:38:39.290072 systemd[1]: Started cri-containerd-dddca052de45d3b1c13bf3dda20eaa7e3940f9f91eb7e5cdd5dfab813bb65c06.scope. Feb 9 19:38:39.318576 env[1342]: time="2024-02-09T19:38:39.317076210Z" level=info msg="CreateContainer within sandbox \"16b91f597cd85eadfbf5e55dfcc3fbda36b26b0f32a607d1bb7ae84b930aa947\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"7bbb86ee3b966ab5d0cca0545ae05cf1ca96d64c291259f1d9d12aaeabf19d56\"" Feb 9 19:38:39.318576 env[1342]: time="2024-02-09T19:38:39.318094418Z" level=info msg="StartContainer for \"7bbb86ee3b966ab5d0cca0545ae05cf1ca96d64c291259f1d9d12aaeabf19d56\"" Feb 9 19:38:39.341128 systemd[1]: Started cri-containerd-7bbb86ee3b966ab5d0cca0545ae05cf1ca96d64c291259f1d9d12aaeabf19d56.scope. Feb 9 19:38:39.370966 env[1342]: time="2024-02-09T19:38:39.369601641Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6bc8ccdb58-2cqvf,Uid:bbb41251-cb97-44c7-8ba0-e945d7b32396,Namespace:kube-system,Attempt:0,} returns sandbox id \"dddca052de45d3b1c13bf3dda20eaa7e3940f9f91eb7e5cdd5dfab813bb65c06\"" Feb 9 19:38:39.403113 env[1342]: time="2024-02-09T19:38:39.403051016Z" level=info msg="StartContainer for \"7bbb86ee3b966ab5d0cca0545ae05cf1ca96d64c291259f1d9d12aaeabf19d56\" returns successfully" Feb 9 19:38:44.344598 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3341141160.mount: Deactivated successfully. 
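Annotation: the containerd entries show the CRI flow for each pod: RunPodSandbox returns a sandbox ID, CreateContainer places a container inside that sandbox, and StartContainer launches it, with each runc shim getting its own "starting signal loop ... pid=..." task. A sketch reconstructing that timeline from the stream (the path containerd.log and the helper name cri_timeline are illustrative):

import re

# containerd logs carry their own RFC3339 stamp inside the message:
#   time="2024-02-09T19:38:39.241661590Z" level=info msg="RunPodSandbox ..."
EVENT = re.compile(
    r'time="(?P<ts>[^"]+)" level=info msg="'
    r'(?P<msg>(?:RunPodSandbox|CreateContainer|StartContainer)[^"\\]*)'
)

def cri_timeline(log_text: str):
    """Yield (stamp, message-prefix) pairs for the sandbox/container lifecycle."""
    for m in EVENT.finditer(log_text):
        yield m.group("ts"), m.group("msg").strip()

if __name__ == "__main__":
    with open("containerd.log") as fh:  # illustrative path
        for ts, msg in cri_timeline(fh.read()):
            print(ts, msg[:72])

The message prefixes are enough to read the ordering off this journal: for kube-proxy-rcpph, RunPodSandbox at 39.115, CreateContainer at 39.273, StartContainer at 39.318.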
Feb 9 19:38:45.089887 kubelet[2444]: I0209 19:38:45.089853 2444 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-rcpph" podStartSLOduration=7.089790291 podCreationTimestamp="2024-02-09 19:38:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:38:40.158042091 +0000 UTC m=+15.252068634" watchObservedRunningTime="2024-02-09 19:38:45.089790291 +0000 UTC m=+20.183816834" Feb 9 19:38:47.169832 env[1342]: time="2024-02-09T19:38:47.169770333Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:38:47.178676 env[1342]: time="2024-02-09T19:38:47.178624694Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:38:47.183475 env[1342]: time="2024-02-09T19:38:47.183430327Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:38:47.184153 env[1342]: time="2024-02-09T19:38:47.184112432Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Feb 9 19:38:47.187429 env[1342]: time="2024-02-09T19:38:47.185865344Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Feb 9 19:38:47.187429 env[1342]: time="2024-02-09T19:38:47.187168453Z" level=info msg="CreateContainer within sandbox \"31e995dcabfbdc06514fe05cad1ad5c7b59f87a9cdbfe84510211960fdd7aa1f\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 9 19:38:47.246984 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3778503728.mount: Deactivated successfully. Feb 9 19:38:47.273009 env[1342]: time="2024-02-09T19:38:47.272940849Z" level=info msg="CreateContainer within sandbox \"31e995dcabfbdc06514fe05cad1ad5c7b59f87a9cdbfe84510211960fdd7aa1f\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"7021152fb9ebf2e6590121be253445031fde73ea125832f410b636f88c96e5dd\"" Feb 9 19:38:47.275699 env[1342]: time="2024-02-09T19:38:47.273662154Z" level=info msg="StartContainer for \"7021152fb9ebf2e6590121be253445031fde73ea125832f410b636f88c96e5dd\"" Feb 9 19:38:47.296423 systemd[1]: Started cri-containerd-7021152fb9ebf2e6590121be253445031fde73ea125832f410b636f88c96e5dd.scope. Feb 9 19:38:47.338328 env[1342]: time="2024-02-09T19:38:47.338270902Z" level=info msg="StartContainer for \"7021152fb9ebf2e6590121be253445031fde73ea125832f410b636f88c96e5dd\" returns successfully" Feb 9 19:38:47.345533 systemd[1]: cri-containerd-7021152fb9ebf2e6590121be253445031fde73ea125832f410b636f88c96e5dd.scope: Deactivated successfully. Feb 9 19:38:48.244459 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7021152fb9ebf2e6590121be253445031fde73ea125832f410b636f88c96e5dd-rootfs.mount: Deactivated successfully. 
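Annotation: the pod_startup_latency_tracker line encodes its own arithmetic: podStartSLOduration=7.089790291 is exactly watchObservedRunningTime (19:38:45.089790291) minus podCreationTimestamp (19:38:38), and the zero-valued firstStartedPulling/lastFinishedPulling indicate no image pull counted against kube-proxy. A sketch re-deriving that delta, assuming Go's default time.Time formatting as printed here (parse_go_stamp is an illustrative helper; timedelta rounds the nanoseconds to microseconds):

import re
from datetime import datetime, timedelta, timezone

# kubelet prints Go time.Time values: "2024-02-09 19:38:45.089790291 +0000 UTC";
# strptime's %f cannot take nine fractional digits, so handle the fraction by hand.
STAMP = re.compile(r"(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2})(?:\.(\d+))? \+0000 UTC")

def parse_go_stamp(stamp: str) -> datetime:
    m = STAMP.fullmatch(stamp)
    base = datetime.strptime(m.group(1), "%Y-%m-%d %H:%M:%S").replace(tzinfo=timezone.utc)
    frac = m.group(2)
    return base + timedelta(seconds=int(frac) / 10 ** len(frac)) if frac else base

creation = parse_go_stamp("2024-02-09 19:38:38 +0000 UTC")            # podCreationTimestamp
observed = parse_go_stamp("2024-02-09 19:38:45.089790291 +0000 UTC")  # watchObservedRunningTime
print((observed - creation).total_seconds())  # ~7.08979, the reported podStartSLOduration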
Feb 9 19:38:51.082014 env[1342]: time="2024-02-09T19:38:51.081940332Z" level=info msg="shim disconnected" id=7021152fb9ebf2e6590121be253445031fde73ea125832f410b636f88c96e5dd Feb 9 19:38:51.082014 env[1342]: time="2024-02-09T19:38:51.082006432Z" level=warning msg="cleaning up after shim disconnected" id=7021152fb9ebf2e6590121be253445031fde73ea125832f410b636f88c96e5dd namespace=k8s.io Feb 9 19:38:51.082014 env[1342]: time="2024-02-09T19:38:51.082021633Z" level=info msg="cleaning up dead shim" Feb 9 19:38:51.091668 env[1342]: time="2024-02-09T19:38:51.091600894Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:38:51Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2854 runtime=io.containerd.runc.v2\n" Feb 9 19:38:51.175222 env[1342]: time="2024-02-09T19:38:51.175166130Z" level=info msg="CreateContainer within sandbox \"31e995dcabfbdc06514fe05cad1ad5c7b59f87a9cdbfe84510211960fdd7aa1f\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 9 19:38:51.224700 env[1342]: time="2024-02-09T19:38:51.224636747Z" level=info msg="CreateContainer within sandbox \"31e995dcabfbdc06514fe05cad1ad5c7b59f87a9cdbfe84510211960fdd7aa1f\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"dc632517a663093e87ab79aa0f7d5ce804524a798b76f575e7849b053297859a\"" Feb 9 19:38:51.225784 env[1342]: time="2024-02-09T19:38:51.225737854Z" level=info msg="StartContainer for \"dc632517a663093e87ab79aa0f7d5ce804524a798b76f575e7849b053297859a\"" Feb 9 19:38:51.248940 systemd[1]: Started cri-containerd-dc632517a663093e87ab79aa0f7d5ce804524a798b76f575e7849b053297859a.scope. Feb 9 19:38:51.290307 env[1342]: time="2024-02-09T19:38:51.290082367Z" level=info msg="StartContainer for \"dc632517a663093e87ab79aa0f7d5ce804524a798b76f575e7849b053297859a\" returns successfully" Feb 9 19:38:51.299217 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 9 19:38:51.299576 systemd[1]: Stopped systemd-sysctl.service. Feb 9 19:38:51.300719 systemd[1]: Stopping systemd-sysctl.service... Feb 9 19:38:51.302924 systemd[1]: Starting systemd-sysctl.service... Feb 9 19:38:51.306074 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Feb 9 19:38:51.308843 systemd[1]: cri-containerd-dc632517a663093e87ab79aa0f7d5ce804524a798b76f575e7849b053297859a.scope: Deactivated successfully. Feb 9 19:38:51.317502 systemd[1]: Finished systemd-sysctl.service. 
Feb 9 19:38:51.354391 env[1342]: time="2024-02-09T19:38:51.353213672Z" level=info msg="shim disconnected" id=dc632517a663093e87ab79aa0f7d5ce804524a798b76f575e7849b053297859a Feb 9 19:38:51.354391 env[1342]: time="2024-02-09T19:38:51.353267672Z" level=warning msg="cleaning up after shim disconnected" id=dc632517a663093e87ab79aa0f7d5ce804524a798b76f575e7849b053297859a namespace=k8s.io Feb 9 19:38:51.354391 env[1342]: time="2024-02-09T19:38:51.353281673Z" level=info msg="cleaning up dead shim" Feb 9 19:38:51.362902 env[1342]: time="2024-02-09T19:38:51.362857934Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:38:51Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2919 runtime=io.containerd.runc.v2\n" Feb 9 19:38:52.200879 env[1342]: time="2024-02-09T19:38:52.197844566Z" level=info msg="CreateContainer within sandbox \"31e995dcabfbdc06514fe05cad1ad5c7b59f87a9cdbfe84510211960fdd7aa1f\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 9 19:38:52.212068 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dc632517a663093e87ab79aa0f7d5ce804524a798b76f575e7849b053297859a-rootfs.mount: Deactivated successfully. Feb 9 19:38:52.239962 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4282051645.mount: Deactivated successfully. Feb 9 19:38:52.247594 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount579273588.mount: Deactivated successfully. Feb 9 19:38:52.262391 env[1342]: time="2024-02-09T19:38:52.262326072Z" level=info msg="CreateContainer within sandbox \"31e995dcabfbdc06514fe05cad1ad5c7b59f87a9cdbfe84510211960fdd7aa1f\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"194c8919537c3e655e6b4f5dd2d799a4fb061085ee8dd703e4331902a7e317b0\"" Feb 9 19:38:52.265819 env[1342]: time="2024-02-09T19:38:52.265766093Z" level=info msg="StartContainer for \"194c8919537c3e655e6b4f5dd2d799a4fb061085ee8dd703e4331902a7e317b0\"" Feb 9 19:38:52.292431 systemd[1]: Started cri-containerd-194c8919537c3e655e6b4f5dd2d799a4fb061085ee8dd703e4331902a7e317b0.scope. Feb 9 19:38:52.343976 systemd[1]: cri-containerd-194c8919537c3e655e6b4f5dd2d799a4fb061085ee8dd703e4331902a7e317b0.scope: Deactivated successfully. 
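Annotation: between 19:38:47 and 19:38:52 the journal walks Cilium's init containers in their fixed order: mount-cgroup, then apply-sysctl-overwrites (whose sysctl writes are what trigger the systemd-sysctl restart above), then mount-bpf-fs; each starts, exits, has its scope deactivated, and has its shim cleaned up. A sketch recovering that ordering from the &ContainerMetadata{Name:...} fields (init_sequence is an illustrative name):

import re

# Container names appear in CRI metadata: &ContainerMetadata{Name:mount-cgroup,Attempt:0,}
NAME = re.compile(r"&ContainerMetadata\{Name:([\w-]+),Attempt:\d+,\}")

def init_sequence(log_text: str) -> list:
    """Container names in the order their CreateContainer requests first appear."""
    seen, order = set(), []
    for name in NAME.findall(log_text):
        if name not in seen:
            seen.add(name)
            order.append(name)
    return order

# Over this journal the sequence reads:
# ['kube-proxy', 'mount-cgroup', 'apply-sysctl-overwrites', 'mount-bpf-fs',
#  'cilium-operator', 'clean-cilium-state', 'cilium-agent', 'coredns']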
Feb 9 19:38:52.345456 env[1342]: time="2024-02-09T19:38:52.345367694Z" level=info msg="StartContainer for \"194c8919537c3e655e6b4f5dd2d799a4fb061085ee8dd703e4331902a7e317b0\" returns successfully" Feb 9 19:38:52.470113 env[1342]: time="2024-02-09T19:38:52.469806777Z" level=info msg="shim disconnected" id=194c8919537c3e655e6b4f5dd2d799a4fb061085ee8dd703e4331902a7e317b0 Feb 9 19:38:52.471142 env[1342]: time="2024-02-09T19:38:52.471110486Z" level=warning msg="cleaning up after shim disconnected" id=194c8919537c3e655e6b4f5dd2d799a4fb061085ee8dd703e4331902a7e317b0 namespace=k8s.io Feb 9 19:38:52.471261 env[1342]: time="2024-02-09T19:38:52.471247087Z" level=info msg="cleaning up dead shim" Feb 9 19:38:52.488038 env[1342]: time="2024-02-09T19:38:52.487980292Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:38:52Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2977 runtime=io.containerd.runc.v2\n" Feb 9 19:38:52.939901 env[1342]: time="2024-02-09T19:38:52.939833035Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:38:52.945186 env[1342]: time="2024-02-09T19:38:52.945096468Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:38:52.948843 env[1342]: time="2024-02-09T19:38:52.948785592Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:38:52.949252 env[1342]: time="2024-02-09T19:38:52.949219194Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Feb 9 19:38:52.952206 env[1342]: time="2024-02-09T19:38:52.952097513Z" level=info msg="CreateContainer within sandbox \"dddca052de45d3b1c13bf3dda20eaa7e3940f9f91eb7e5cdd5dfab813bb65c06\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Feb 9 19:38:52.978183 env[1342]: time="2024-02-09T19:38:52.978121776Z" level=info msg="CreateContainer within sandbox \"dddca052de45d3b1c13bf3dda20eaa7e3940f9f91eb7e5cdd5dfab813bb65c06\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"ef6a829ee0ae241e9740bbd00a3b25c884b3c470d9e258fd77b24d09c227a61e\"" Feb 9 19:38:52.979066 env[1342]: time="2024-02-09T19:38:52.979022082Z" level=info msg="StartContainer for \"ef6a829ee0ae241e9740bbd00a3b25c884b3c470d9e258fd77b24d09c227a61e\"" Feb 9 19:38:52.999238 systemd[1]: Started cri-containerd-ef6a829ee0ae241e9740bbd00a3b25c884b3c470d9e258fd77b24d09c227a61e.scope. 
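Annotation: image pulls bracket cleanly in the stream. The PullImage request for operator-generic lands at 19:38:47.186 and its "returns image reference" at 19:38:52.949, roughly 5.8 s, while the larger cilium agent image took roughly 7.9 s (19:38:39.248 to 19:38:47.184). A sketch pairing requests with completions by image reference, assuming containerd's nanosecond time=... stamps as shown (pull_durations and parse_ts are illustrative names):

import re
from datetime import datetime

def parse_ts(stamp: str) -> datetime:
    """containerd stamps carry nanoseconds; trim to microseconds for strptime's %f."""
    head, frac = stamp.rstrip("Z").split(".")
    return datetime.strptime(f"{head}.{frac[:6]}", "%Y-%m-%dT%H:%M:%S.%f")

PULL = re.compile(
    r'time="([^"]+)" level=info msg="PullImage \\"([^\\]+)\\"( returns image reference)?'
)

def pull_durations(log_text: str):
    """Pair each PullImage request with its completion, keyed by image reference."""
    started = {}
    for ts, image, done in PULL.findall(log_text):
        if not done:
            started.setdefault(image, ts)
        elif image in started:
            yield image, (parse_ts(ts) - parse_ts(started.pop(image))).total_seconds()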
Feb 9 19:38:53.039164 env[1342]: time="2024-02-09T19:38:53.039101256Z" level=info msg="StartContainer for \"ef6a829ee0ae241e9740bbd00a3b25c884b3c470d9e258fd77b24d09c227a61e\" returns successfully" Feb 9 19:38:53.184142 env[1342]: time="2024-02-09T19:38:53.184085151Z" level=info msg="CreateContainer within sandbox \"31e995dcabfbdc06514fe05cad1ad5c7b59f87a9cdbfe84510211960fdd7aa1f\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 9 19:38:53.239938 env[1342]: time="2024-02-09T19:38:53.239793295Z" level=info msg="CreateContainer within sandbox \"31e995dcabfbdc06514fe05cad1ad5c7b59f87a9cdbfe84510211960fdd7aa1f\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"f3d01a47d2629dbc1b55cd334274809892dc2f097e50709871c960e3c33f4960\"" Feb 9 19:38:53.240823 env[1342]: time="2024-02-09T19:38:53.240705701Z" level=info msg="StartContainer for \"f3d01a47d2629dbc1b55cd334274809892dc2f097e50709871c960e3c33f4960\"" Feb 9 19:38:53.270480 systemd[1]: Started cri-containerd-f3d01a47d2629dbc1b55cd334274809892dc2f097e50709871c960e3c33f4960.scope. Feb 9 19:38:53.328458 systemd[1]: cri-containerd-f3d01a47d2629dbc1b55cd334274809892dc2f097e50709871c960e3c33f4960.scope: Deactivated successfully. Feb 9 19:38:53.342600 env[1342]: time="2024-02-09T19:38:53.341451823Z" level=info msg="StartContainer for \"f3d01a47d2629dbc1b55cd334274809892dc2f097e50709871c960e3c33f4960\" returns successfully" Feb 9 19:38:53.712749 env[1342]: time="2024-02-09T19:38:53.712679915Z" level=info msg="shim disconnected" id=f3d01a47d2629dbc1b55cd334274809892dc2f097e50709871c960e3c33f4960 Feb 9 19:38:53.712749 env[1342]: time="2024-02-09T19:38:53.712748616Z" level=warning msg="cleaning up after shim disconnected" id=f3d01a47d2629dbc1b55cd334274809892dc2f097e50709871c960e3c33f4960 namespace=k8s.io Feb 9 19:38:53.712749 env[1342]: time="2024-02-09T19:38:53.712761116Z" level=info msg="cleaning up dead shim" Feb 9 19:38:53.731063 env[1342]: time="2024-02-09T19:38:53.730996628Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:38:53Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3073 runtime=io.containerd.runc.v2\n" Feb 9 19:38:54.192626 env[1342]: time="2024-02-09T19:38:54.192570857Z" level=info msg="CreateContainer within sandbox \"31e995dcabfbdc06514fe05cad1ad5c7b59f87a9cdbfe84510211960fdd7aa1f\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 9 19:38:54.211465 kubelet[2444]: I0209 19:38:54.208887 2444 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-6bc8ccdb58-2cqvf" podStartSLOduration=2.631856625 podCreationTimestamp="2024-02-09 19:38:38 +0000 UTC" firstStartedPulling="2024-02-09 19:38:39.372870068 +0000 UTC m=+14.466896711" lastFinishedPulling="2024-02-09 19:38:52.949834498 +0000 UTC m=+28.043861141" observedRunningTime="2024-02-09 19:38:53.238409486 +0000 UTC m=+28.332436029" watchObservedRunningTime="2024-02-09 19:38:54.208821055 +0000 UTC m=+29.302847698" Feb 9 19:38:54.214877 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f3d01a47d2629dbc1b55cd334274809892dc2f097e50709871c960e3c33f4960-rootfs.mount: Deactivated successfully. 
Feb 9 19:38:54.235441 env[1342]: time="2024-02-09T19:38:54.235377916Z" level=info msg="CreateContainer within sandbox \"31e995dcabfbdc06514fe05cad1ad5c7b59f87a9cdbfe84510211960fdd7aa1f\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"dd2b658d708b3d74f30ce4ed42dd7bf41296c979691ab83d2ed32d4fe631ada0\"" Feb 9 19:38:54.236142 env[1342]: time="2024-02-09T19:38:54.236099421Z" level=info msg="StartContainer for \"dd2b658d708b3d74f30ce4ed42dd7bf41296c979691ab83d2ed32d4fe631ada0\"" Feb 9 19:38:54.264708 systemd[1]: Started cri-containerd-dd2b658d708b3d74f30ce4ed42dd7bf41296c979691ab83d2ed32d4fe631ada0.scope. Feb 9 19:38:54.310976 env[1342]: time="2024-02-09T19:38:54.310911174Z" level=info msg="StartContainer for \"dd2b658d708b3d74f30ce4ed42dd7bf41296c979691ab83d2ed32d4fe631ada0\" returns successfully" Feb 9 19:38:54.466271 kubelet[2444]: I0209 19:38:54.465218 2444 kubelet_node_status.go:493] "Fast updating node status as it just became ready" Feb 9 19:38:54.503244 kubelet[2444]: I0209 19:38:54.503197 2444 topology_manager.go:215] "Topology Admit Handler" podUID="be421715-0b92-41bf-b2d6-632ead8482b9" podNamespace="kube-system" podName="coredns-5dd5756b68-rkn92" Feb 9 19:38:54.510456 systemd[1]: Created slice kubepods-burstable-podbe421715_0b92_41bf_b2d6_632ead8482b9.slice. Feb 9 19:38:54.519519 kubelet[2444]: I0209 19:38:54.519479 2444 topology_manager.go:215] "Topology Admit Handler" podUID="6e1c969f-5201-4495-a04f-a4abd8c02ceb" podNamespace="kube-system" podName="coredns-5dd5756b68-c9cdc" Feb 9 19:38:54.527114 systemd[1]: Created slice kubepods-burstable-pod6e1c969f_5201_4495_a04f_a4abd8c02ceb.slice. Feb 9 19:38:54.536485 kubelet[2444]: W0209 19:38:54.536436 2444 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:ci-3510.3.2-a-4c52a92a5f" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.2-a-4c52a92a5f' and this object Feb 9 19:38:54.536743 kubelet[2444]: E0209 19:38:54.536731 2444 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:ci-3510.3.2-a-4c52a92a5f" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.2-a-4c52a92a5f' and this object Feb 9 19:38:54.650370 kubelet[2444]: I0209 19:38:54.650326 2444 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ft7jq\" (UniqueName: \"kubernetes.io/projected/be421715-0b92-41bf-b2d6-632ead8482b9-kube-api-access-ft7jq\") pod \"coredns-5dd5756b68-rkn92\" (UID: \"be421715-0b92-41bf-b2d6-632ead8482b9\") " pod="kube-system/coredns-5dd5756b68-rkn92" Feb 9 19:38:54.650786 kubelet[2444]: I0209 19:38:54.650767 2444 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qs694\" (UniqueName: \"kubernetes.io/projected/6e1c969f-5201-4495-a04f-a4abd8c02ceb-kube-api-access-qs694\") pod \"coredns-5dd5756b68-c9cdc\" (UID: \"6e1c969f-5201-4495-a04f-a4abd8c02ceb\") " pod="kube-system/coredns-5dd5756b68-c9cdc" Feb 9 19:38:54.650970 kubelet[2444]: I0209 19:38:54.650958 2444 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/be421715-0b92-41bf-b2d6-632ead8482b9-config-volume\") pod 
\"coredns-5dd5756b68-rkn92\" (UID: \"be421715-0b92-41bf-b2d6-632ead8482b9\") " pod="kube-system/coredns-5dd5756b68-rkn92" Feb 9 19:38:54.651094 kubelet[2444]: I0209 19:38:54.651084 2444 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6e1c969f-5201-4495-a04f-a4abd8c02ceb-config-volume\") pod \"coredns-5dd5756b68-c9cdc\" (UID: \"6e1c969f-5201-4495-a04f-a4abd8c02ceb\") " pod="kube-system/coredns-5dd5756b68-c9cdc" Feb 9 19:38:55.207328 kubelet[2444]: I0209 19:38:55.207276 2444 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-h5zxj" podStartSLOduration=9.266412957 podCreationTimestamp="2024-02-09 19:38:38 +0000 UTC" firstStartedPulling="2024-02-09 19:38:39.243843108 +0000 UTC m=+14.337869651" lastFinishedPulling="2024-02-09 19:38:47.184654436 +0000 UTC m=+22.278680979" observedRunningTime="2024-02-09 19:38:55.206179778 +0000 UTC m=+30.300206321" watchObservedRunningTime="2024-02-09 19:38:55.207224285 +0000 UTC m=+30.301250928" Feb 9 19:38:55.752787 kubelet[2444]: E0209 19:38:55.752732 2444 configmap.go:199] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition Feb 9 19:38:55.753358 kubelet[2444]: E0209 19:38:55.752880 2444 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6e1c969f-5201-4495-a04f-a4abd8c02ceb-config-volume podName:6e1c969f-5201-4495-a04f-a4abd8c02ceb nodeName:}" failed. No retries permitted until 2024-02-09 19:38:56.252848432 +0000 UTC m=+31.346874975 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/6e1c969f-5201-4495-a04f-a4abd8c02ceb-config-volume") pod "coredns-5dd5756b68-c9cdc" (UID: "6e1c969f-5201-4495-a04f-a4abd8c02ceb") : failed to sync configmap cache: timed out waiting for the condition Feb 9 19:38:55.753358 kubelet[2444]: E0209 19:38:55.752731 2444 configmap.go:199] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition Feb 9 19:38:55.753358 kubelet[2444]: E0209 19:38:55.753262 2444 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/be421715-0b92-41bf-b2d6-632ead8482b9-config-volume podName:be421715-0b92-41bf-b2d6-632ead8482b9 nodeName:}" failed. No retries permitted until 2024-02-09 19:38:56.253241334 +0000 UTC m=+31.347267877 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/be421715-0b92-41bf-b2d6-632ead8482b9-config-volume") pod "coredns-5dd5756b68-rkn92" (UID: "be421715-0b92-41bf-b2d6-632ead8482b9") : failed to sync configmap cache: timed out waiting for the condition Feb 9 19:38:56.316854 env[1342]: time="2024-02-09T19:38:56.316793354Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-rkn92,Uid:be421715-0b92-41bf-b2d6-632ead8482b9,Namespace:kube-system,Attempt:0,}" Feb 9 19:38:56.333689 env[1342]: time="2024-02-09T19:38:56.333637353Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-c9cdc,Uid:6e1c969f-5201-4495-a04f-a4abd8c02ceb,Namespace:kube-system,Attempt:0,}" Feb 9 19:38:56.631175 systemd-networkd[1485]: cilium_host: Link UP Feb 9 19:38:56.637570 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready Feb 9 19:38:56.637693 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Feb 9 19:38:56.636480 systemd-networkd[1485]: cilium_net: Link UP Feb 9 19:38:56.637882 systemd-networkd[1485]: cilium_net: Gained carrier Feb 9 19:38:56.642829 systemd-networkd[1485]: cilium_host: Gained carrier Feb 9 19:38:56.836442 systemd-networkd[1485]: cilium_vxlan: Link UP Feb 9 19:38:56.836453 systemd-networkd[1485]: cilium_vxlan: Gained carrier Feb 9 19:38:57.096589 kernel: NET: Registered PF_ALG protocol family Feb 9 19:38:57.106721 systemd-networkd[1485]: cilium_net: Gained IPv6LL Feb 9 19:38:57.522783 systemd-networkd[1485]: cilium_host: Gained IPv6LL Feb 9 19:38:57.861012 systemd-networkd[1485]: lxc_health: Link UP Feb 9 19:38:57.886685 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Feb 9 19:38:57.885908 systemd-networkd[1485]: lxc_health: Gained carrier Feb 9 19:38:58.394696 systemd-networkd[1485]: lxc3eac90d782d8: Link UP Feb 9 19:38:58.403633 kernel: eth0: renamed from tmp9d62c Feb 9 19:38:58.413659 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc3eac90d782d8: link becomes ready Feb 9 19:38:58.413366 systemd-networkd[1485]: lxc3eac90d782d8: Gained carrier Feb 9 19:38:58.432569 systemd-networkd[1485]: lxc8a08500eb438: Link UP Feb 9 19:38:58.439633 kernel: eth0: renamed from tmp8bb86 Feb 9 19:38:58.453750 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc8a08500eb438: link becomes ready Feb 9 19:38:58.453300 systemd-networkd[1485]: lxc8a08500eb438: Gained carrier Feb 9 19:38:58.483825 systemd-networkd[1485]: cilium_vxlan: Gained IPv6LL Feb 9 19:38:59.250837 systemd-networkd[1485]: lxc_health: Gained IPv6LL Feb 9 19:38:59.890801 systemd-networkd[1485]: lxc8a08500eb438: Gained IPv6LL Feb 9 19:38:59.954841 systemd-networkd[1485]: lxc3eac90d782d8: Gained IPv6LL Feb 9 19:39:02.362739 env[1342]: time="2024-02-09T19:39:02.362653190Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:39:02.363334 env[1342]: time="2024-02-09T19:39:02.363272493Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:39:02.363483 env[1342]: time="2024-02-09T19:39:02.363455594Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:39:02.363816 env[1342]: time="2024-02-09T19:39:02.363770996Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/9d62cbba35857cfc8817830d6bee729bbebdf106631504e3a03211a4d2c2436c pid=3641 runtime=io.containerd.runc.v2 Feb 9 19:39:02.381059 env[1342]: time="2024-02-09T19:39:02.380929986Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:39:02.381059 env[1342]: time="2024-02-09T19:39:02.381016887Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:39:02.381348 env[1342]: time="2024-02-09T19:39:02.381032187Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:39:02.381440 env[1342]: time="2024-02-09T19:39:02.381398889Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8bb86d6ea4daec6717ddbca34c2921e180bb747ac4184cb7915cd415ab93618e pid=3642 runtime=io.containerd.runc.v2 Feb 9 19:39:02.391313 systemd[1]: Started cri-containerd-9d62cbba35857cfc8817830d6bee729bbebdf106631504e3a03211a4d2c2436c.scope. Feb 9 19:39:02.429434 systemd[1]: Started cri-containerd-8bb86d6ea4daec6717ddbca34c2921e180bb747ac4184cb7915cd415ab93618e.scope. Feb 9 19:39:02.519578 env[1342]: time="2024-02-09T19:39:02.519503516Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-rkn92,Uid:be421715-0b92-41bf-b2d6-632ead8482b9,Namespace:kube-system,Attempt:0,} returns sandbox id \"9d62cbba35857cfc8817830d6bee729bbebdf106631504e3a03211a4d2c2436c\"" Feb 9 19:39:02.523473 env[1342]: time="2024-02-09T19:39:02.523426737Z" level=info msg="CreateContainer within sandbox \"9d62cbba35857cfc8817830d6bee729bbebdf106631504e3a03211a4d2c2436c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 9 19:39:02.546892 env[1342]: time="2024-02-09T19:39:02.546832960Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-c9cdc,Uid:6e1c969f-5201-4495-a04f-a4abd8c02ceb,Namespace:kube-system,Attempt:0,} returns sandbox id \"8bb86d6ea4daec6717ddbca34c2921e180bb747ac4184cb7915cd415ab93618e\"" Feb 9 19:39:02.551179 env[1342]: time="2024-02-09T19:39:02.551124683Z" level=info msg="CreateContainer within sandbox \"8bb86d6ea4daec6717ddbca34c2921e180bb747ac4184cb7915cd415ab93618e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 9 19:39:02.573603 env[1342]: time="2024-02-09T19:39:02.573492001Z" level=info msg="CreateContainer within sandbox \"9d62cbba35857cfc8817830d6bee729bbebdf106631504e3a03211a4d2c2436c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5d633a1cc1fe66df4b932150257070d0ae836af3f0152db009737d40da60bbe7\"" Feb 9 19:39:02.574473 env[1342]: time="2024-02-09T19:39:02.574416906Z" level=info msg="StartContainer for \"5d633a1cc1fe66df4b932150257070d0ae836af3f0152db009737d40da60bbe7\"" Feb 9 19:39:02.593720 env[1342]: time="2024-02-09T19:39:02.593642707Z" level=info msg="CreateContainer within sandbox \"8bb86d6ea4daec6717ddbca34c2921e180bb747ac4184cb7915cd415ab93618e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"044a17269d7f8755d7fd6215f8af25906f18c92d00d79968e782c458b3c9c361\"" Feb 9 19:39:02.596144 env[1342]: time="2024-02-09T19:39:02.596088920Z" level=info msg="StartContainer 
for \"044a17269d7f8755d7fd6215f8af25906f18c92d00d79968e782c458b3c9c361\"" Feb 9 19:39:02.609758 systemd[1]: Started cri-containerd-5d633a1cc1fe66df4b932150257070d0ae836af3f0152db009737d40da60bbe7.scope. Feb 9 19:39:02.636935 systemd[1]: Started cri-containerd-044a17269d7f8755d7fd6215f8af25906f18c92d00d79968e782c458b3c9c361.scope. Feb 9 19:39:02.695654 env[1342]: time="2024-02-09T19:39:02.694517139Z" level=info msg="StartContainer for \"5d633a1cc1fe66df4b932150257070d0ae836af3f0152db009737d40da60bbe7\" returns successfully" Feb 9 19:39:02.722164 env[1342]: time="2024-02-09T19:39:02.722101384Z" level=info msg="StartContainer for \"044a17269d7f8755d7fd6215f8af25906f18c92d00d79968e782c458b3c9c361\" returns successfully" Feb 9 19:39:03.226414 kubelet[2444]: I0209 19:39:03.226372 2444 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-rkn92" podStartSLOduration=25.226330122 podCreationTimestamp="2024-02-09 19:38:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:39:03.224757414 +0000 UTC m=+38.318783957" watchObservedRunningTime="2024-02-09 19:39:03.226330122 +0000 UTC m=+38.320356665" Feb 9 19:39:03.370896 systemd[1]: run-containerd-runc-k8s.io-8bb86d6ea4daec6717ddbca34c2921e180bb747ac4184cb7915cd415ab93618e-runc.LcHUKJ.mount: Deactivated successfully. Feb 9 19:41:21.377966 systemd[1]: Started sshd@5-10.200.8.13:22-10.200.12.6:46938.service. Feb 9 19:41:21.990678 sshd[3811]: Accepted publickey for core from 10.200.12.6 port 46938 ssh2: RSA SHA256:DU+Yi2nD7nw8jYgdAj8DCdA8ysRsrSuDu1TpdDncLY8 Feb 9 19:41:21.992343 sshd[3811]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:41:21.997535 systemd-logind[1327]: New session 8 of user core. Feb 9 19:41:21.998536 systemd[1]: Started session-8.scope. Feb 9 19:41:22.596981 sshd[3811]: pam_unix(sshd:session): session closed for user core Feb 9 19:41:22.600049 systemd[1]: sshd@5-10.200.8.13:22-10.200.12.6:46938.service: Deactivated successfully. Feb 9 19:41:22.601163 systemd[1]: session-8.scope: Deactivated successfully. Feb 9 19:41:22.601954 systemd-logind[1327]: Session 8 logged out. Waiting for processes to exit. Feb 9 19:41:22.602805 systemd-logind[1327]: Removed session 8. Feb 9 19:41:27.698759 systemd[1]: Started sshd@6-10.200.8.13:22-10.200.12.6:43648.service. Feb 9 19:41:28.327331 sshd[3828]: Accepted publickey for core from 10.200.12.6 port 43648 ssh2: RSA SHA256:DU+Yi2nD7nw8jYgdAj8DCdA8ysRsrSuDu1TpdDncLY8 Feb 9 19:41:28.329193 sshd[3828]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:41:28.334208 systemd-logind[1327]: New session 9 of user core. Feb 9 19:41:28.335342 systemd[1]: Started session-9.scope. Feb 9 19:41:28.826775 sshd[3828]: pam_unix(sshd:session): session closed for user core Feb 9 19:41:28.830414 systemd[1]: sshd@6-10.200.8.13:22-10.200.12.6:43648.service: Deactivated successfully. Feb 9 19:41:28.831677 systemd[1]: session-9.scope: Deactivated successfully. Feb 9 19:41:28.832597 systemd-logind[1327]: Session 9 logged out. Waiting for processes to exit. Feb 9 19:41:28.833605 systemd-logind[1327]: Removed session 9. Feb 9 19:41:33.931322 systemd[1]: Started sshd@7-10.200.8.13:22-10.200.12.6:43654.service. 
Feb 9 19:41:34.543182 sshd[3842]: Accepted publickey for core from 10.200.12.6 port 43654 ssh2: RSA SHA256:DU+Yi2nD7nw8jYgdAj8DCdA8ysRsrSuDu1TpdDncLY8 Feb 9 19:41:34.544861 sshd[3842]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:41:34.550364 systemd[1]: Started session-10.scope. Feb 9 19:41:34.551058 systemd-logind[1327]: New session 10 of user core. Feb 9 19:41:35.039750 sshd[3842]: pam_unix(sshd:session): session closed for user core Feb 9 19:41:35.042977 systemd[1]: sshd@7-10.200.8.13:22-10.200.12.6:43654.service: Deactivated successfully. Feb 9 19:41:35.044102 systemd[1]: session-10.scope: Deactivated successfully. Feb 9 19:41:35.044929 systemd-logind[1327]: Session 10 logged out. Waiting for processes to exit. Feb 9 19:41:35.045790 systemd-logind[1327]: Removed session 10. Feb 9 19:41:40.148432 systemd[1]: Started sshd@8-10.200.8.13:22-10.200.12.6:54220.service. Feb 9 19:41:40.770639 sshd[3857]: Accepted publickey for core from 10.200.12.6 port 54220 ssh2: RSA SHA256:DU+Yi2nD7nw8jYgdAj8DCdA8ysRsrSuDu1TpdDncLY8 Feb 9 19:41:40.772758 sshd[3857]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:41:40.779615 systemd-logind[1327]: New session 11 of user core. Feb 9 19:41:40.781454 systemd[1]: Started session-11.scope. Feb 9 19:41:41.275816 sshd[3857]: pam_unix(sshd:session): session closed for user core Feb 9 19:41:41.279016 systemd[1]: sshd@8-10.200.8.13:22-10.200.12.6:54220.service: Deactivated successfully. Feb 9 19:41:41.280129 systemd[1]: session-11.scope: Deactivated successfully. Feb 9 19:41:41.280891 systemd-logind[1327]: Session 11 logged out. Waiting for processes to exit. Feb 9 19:41:41.281794 systemd-logind[1327]: Removed session 11. Feb 9 19:41:41.384728 systemd[1]: Started sshd@9-10.200.8.13:22-10.200.12.6:54228.service. Feb 9 19:41:42.005306 sshd[3873]: Accepted publickey for core from 10.200.12.6 port 54228 ssh2: RSA SHA256:DU+Yi2nD7nw8jYgdAj8DCdA8ysRsrSuDu1TpdDncLY8 Feb 9 19:41:42.007067 sshd[3873]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:41:42.014142 systemd[1]: Started session-12.scope. Feb 9 19:41:42.014679 systemd-logind[1327]: New session 12 of user core. Feb 9 19:41:43.204345 sshd[3873]: pam_unix(sshd:session): session closed for user core Feb 9 19:41:43.208994 systemd-logind[1327]: Session 12 logged out. Waiting for processes to exit. Feb 9 19:41:43.209238 systemd[1]: sshd@9-10.200.8.13:22-10.200.12.6:54228.service: Deactivated successfully. Feb 9 19:41:43.210434 systemd[1]: session-12.scope: Deactivated successfully. Feb 9 19:41:43.211940 systemd-logind[1327]: Removed session 12. Feb 9 19:41:43.308458 systemd[1]: Started sshd@10-10.200.8.13:22-10.200.12.6:54242.service. Feb 9 19:41:43.919170 sshd[3883]: Accepted publickey for core from 10.200.12.6 port 54242 ssh2: RSA SHA256:DU+Yi2nD7nw8jYgdAj8DCdA8ysRsrSuDu1TpdDncLY8 Feb 9 19:41:43.920931 sshd[3883]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:41:43.926102 systemd-logind[1327]: New session 13 of user core. Feb 9 19:41:43.926943 systemd[1]: Started session-13.scope. Feb 9 19:41:44.413382 sshd[3883]: pam_unix(sshd:session): session closed for user core Feb 9 19:41:44.416490 systemd[1]: sshd@10-10.200.8.13:22-10.200.12.6:54242.service: Deactivated successfully. Feb 9 19:41:44.417805 systemd[1]: session-13.scope: Deactivated successfully. Feb 9 19:41:44.418361 systemd-logind[1327]: Session 13 logged out. Waiting for processes to exit. 
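Annotation: from 19:41:21 onward the journal settles into a steady rhythm of short SSH logins, each an Accepted publickey line, a logind session, and a close roughly half a second later (session 8 spans 19:41:21.990 to 19:41:22.596). A sketch pairing openings and closes by sshd PID, assuming the year-less syslog stamps shown here (session_durations is an illustrative name):

import re
from datetime import datetime

STAMP = r"(\w{3}\s+\d+ \d{2}:\d{2}:\d{2}\.\d+)"
OPENED = re.compile(STAMP + r" sshd\[(\d+)\]: Accepted publickey for (\w+) from \S+ port (\d+)")
CLOSED = re.compile(STAMP + r" sshd\[(\d+)\]: pam_unix\(sshd:session\): session closed for user (\w+)")

def parse(ts: str) -> datetime:
    return datetime.strptime(ts, "%b %d %H:%M:%S.%f")  # year-less syslog stamp

def session_durations(log_text: str):
    """Pair each accepted login with the close logged by the same sshd PID."""
    opened = {pid: (parse(ts), user, port)
              for ts, pid, user, port in OPENED.findall(log_text)}
    for ts, pid, user in CLOSED.findall(log_text):
        if pid in opened:
            start, _, port = opened.pop(pid)
            yield user, port, (parse(ts) - start).total_seconds()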
Feb 9 19:41:44.419355 systemd-logind[1327]: Removed session 13. Feb 9 19:41:49.520850 systemd[1]: Started sshd@11-10.200.8.13:22-10.200.12.6:49186.service. Feb 9 19:41:50.137930 sshd[3895]: Accepted publickey for core from 10.200.12.6 port 49186 ssh2: RSA SHA256:DU+Yi2nD7nw8jYgdAj8DCdA8ysRsrSuDu1TpdDncLY8 Feb 9 19:41:50.139670 sshd[3895]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:41:50.145653 systemd-logind[1327]: New session 14 of user core. Feb 9 19:41:50.146251 systemd[1]: Started session-14.scope. Feb 9 19:41:50.634409 sshd[3895]: pam_unix(sshd:session): session closed for user core Feb 9 19:41:50.638022 systemd[1]: sshd@11-10.200.8.13:22-10.200.12.6:49186.service: Deactivated successfully. Feb 9 19:41:50.639203 systemd[1]: session-14.scope: Deactivated successfully. Feb 9 19:41:50.639950 systemd-logind[1327]: Session 14 logged out. Waiting for processes to exit. Feb 9 19:41:50.640849 systemd-logind[1327]: Removed session 14. Feb 9 19:41:55.751867 systemd[1]: Started sshd@12-10.200.8.13:22-10.200.12.6:49196.service. Feb 9 19:41:56.398776 sshd[3907]: Accepted publickey for core from 10.200.12.6 port 49196 ssh2: RSA SHA256:DU+Yi2nD7nw8jYgdAj8DCdA8ysRsrSuDu1TpdDncLY8 Feb 9 19:41:56.400440 sshd[3907]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:41:56.405977 systemd-logind[1327]: New session 15 of user core. Feb 9 19:41:56.406955 systemd[1]: Started session-15.scope. Feb 9 19:41:56.897610 sshd[3907]: pam_unix(sshd:session): session closed for user core Feb 9 19:41:56.900879 systemd[1]: sshd@12-10.200.8.13:22-10.200.12.6:49196.service: Deactivated successfully. Feb 9 19:41:56.901977 systemd[1]: session-15.scope: Deactivated successfully. Feb 9 19:41:56.902829 systemd-logind[1327]: Session 15 logged out. Waiting for processes to exit. Feb 9 19:41:56.903666 systemd-logind[1327]: Removed session 15. Feb 9 19:41:57.017598 systemd[1]: Started sshd@13-10.200.8.13:22-10.200.12.6:49212.service. Feb 9 19:41:57.647373 sshd[3919]: Accepted publickey for core from 10.200.12.6 port 49212 ssh2: RSA SHA256:DU+Yi2nD7nw8jYgdAj8DCdA8ysRsrSuDu1TpdDncLY8 Feb 9 19:41:57.648951 sshd[3919]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:41:57.654599 systemd[1]: Started session-16.scope. Feb 9 19:41:57.655063 systemd-logind[1327]: New session 16 of user core. Feb 9 19:41:58.332351 sshd[3919]: pam_unix(sshd:session): session closed for user core Feb 9 19:41:58.336441 systemd[1]: sshd@13-10.200.8.13:22-10.200.12.6:49212.service: Deactivated successfully. Feb 9 19:41:58.337678 systemd[1]: session-16.scope: Deactivated successfully. Feb 9 19:41:58.337807 systemd-logind[1327]: Session 16 logged out. Waiting for processes to exit. Feb 9 19:41:58.339138 systemd-logind[1327]: Removed session 16. Feb 9 19:41:58.444077 systemd[1]: Started sshd@14-10.200.8.13:22-10.200.12.6:39844.service. Feb 9 19:41:59.067683 sshd[3928]: Accepted publickey for core from 10.200.12.6 port 39844 ssh2: RSA SHA256:DU+Yi2nD7nw8jYgdAj8DCdA8ysRsrSuDu1TpdDncLY8 Feb 9 19:41:59.069623 sshd[3928]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:41:59.075150 systemd-logind[1327]: New session 17 of user core. Feb 9 19:41:59.076442 systemd[1]: Started session-17.scope. Feb 9 19:42:00.434880 sshd[3928]: pam_unix(sshd:session): session closed for user core Feb 9 19:42:00.438347 systemd[1]: sshd@14-10.200.8.13:22-10.200.12.6:39844.service: Deactivated successfully. 
Feb 9 19:42:00.439594 systemd[1]: session-17.scope: Deactivated successfully. Feb 9 19:42:00.440435 systemd-logind[1327]: Session 17 logged out. Waiting for processes to exit. Feb 9 19:42:00.441495 systemd-logind[1327]: Removed session 17. Feb 9 19:42:00.538901 systemd[1]: Started sshd@15-10.200.8.13:22-10.200.12.6:39860.service. Feb 9 19:42:01.156147 sshd[3949]: Accepted publickey for core from 10.200.12.6 port 39860 ssh2: RSA SHA256:DU+Yi2nD7nw8jYgdAj8DCdA8ysRsrSuDu1TpdDncLY8 Feb 9 19:42:01.158039 sshd[3949]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:42:01.163648 systemd[1]: Started session-18.scope. Feb 9 19:42:01.164379 systemd-logind[1327]: New session 18 of user core. Feb 9 19:42:01.855450 sshd[3949]: pam_unix(sshd:session): session closed for user core Feb 9 19:42:01.859372 systemd[1]: sshd@15-10.200.8.13:22-10.200.12.6:39860.service: Deactivated successfully. Feb 9 19:42:01.860609 systemd[1]: session-18.scope: Deactivated successfully. Feb 9 19:42:01.861664 systemd-logind[1327]: Session 18 logged out. Waiting for processes to exit. Feb 9 19:42:01.862757 systemd-logind[1327]: Removed session 18. Feb 9 19:42:01.959825 systemd[1]: Started sshd@16-10.200.8.13:22-10.200.12.6:39876.service. Feb 9 19:42:02.576044 sshd[3959]: Accepted publickey for core from 10.200.12.6 port 39876 ssh2: RSA SHA256:DU+Yi2nD7nw8jYgdAj8DCdA8ysRsrSuDu1TpdDncLY8 Feb 9 19:42:02.577667 sshd[3959]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:42:02.582942 systemd-logind[1327]: New session 19 of user core. Feb 9 19:42:02.583498 systemd[1]: Started session-19.scope. Feb 9 19:42:03.084200 sshd[3959]: pam_unix(sshd:session): session closed for user core Feb 9 19:42:03.087887 systemd[1]: sshd@16-10.200.8.13:22-10.200.12.6:39876.service: Deactivated successfully. Feb 9 19:42:03.088875 systemd[1]: session-19.scope: Deactivated successfully. Feb 9 19:42:03.089606 systemd-logind[1327]: Session 19 logged out. Waiting for processes to exit. Feb 9 19:42:03.090447 systemd-logind[1327]: Removed session 19. Feb 9 19:42:08.192202 systemd[1]: Started sshd@17-10.200.8.13:22-10.200.12.6:33946.service. Feb 9 19:42:08.824648 sshd[3974]: Accepted publickey for core from 10.200.12.6 port 33946 ssh2: RSA SHA256:DU+Yi2nD7nw8jYgdAj8DCdA8ysRsrSuDu1TpdDncLY8 Feb 9 19:42:08.826545 sshd[3974]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:42:08.832840 systemd[1]: Started session-20.scope. Feb 9 19:42:08.833289 systemd-logind[1327]: New session 20 of user core. Feb 9 19:42:09.323333 sshd[3974]: pam_unix(sshd:session): session closed for user core Feb 9 19:42:09.326921 systemd-logind[1327]: Session 20 logged out. Waiting for processes to exit. Feb 9 19:42:09.327106 systemd[1]: sshd@17-10.200.8.13:22-10.200.12.6:33946.service: Deactivated successfully. Feb 9 19:42:09.328154 systemd[1]: session-20.scope: Deactivated successfully. Feb 9 19:42:09.329177 systemd-logind[1327]: Removed session 20. Feb 9 19:42:14.431053 systemd[1]: Started sshd@18-10.200.8.13:22-10.200.12.6:33960.service. Feb 9 19:42:15.047451 sshd[3989]: Accepted publickey for core from 10.200.12.6 port 33960 ssh2: RSA SHA256:DU+Yi2nD7nw8jYgdAj8DCdA8ysRsrSuDu1TpdDncLY8 Feb 9 19:42:15.049122 sshd[3989]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:42:15.054368 systemd[1]: Started session-21.scope. Feb 9 19:42:15.055017 systemd-logind[1327]: New session 21 of user core. 
Feb 9 19:42:15.541926 sshd[3989]: pam_unix(sshd:session): session closed for user core Feb 9 19:42:15.545713 systemd[1]: sshd@18-10.200.8.13:22-10.200.12.6:33960.service: Deactivated successfully. Feb 9 19:42:15.546819 systemd[1]: session-21.scope: Deactivated successfully. Feb 9 19:42:15.547677 systemd-logind[1327]: Session 21 logged out. Waiting for processes to exit. Feb 9 19:42:15.548718 systemd-logind[1327]: Removed session 21. Feb 9 19:42:20.648770 systemd[1]: Started sshd@19-10.200.8.13:22-10.200.12.6:34336.service. Feb 9 19:42:21.269825 sshd[4002]: Accepted publickey for core from 10.200.12.6 port 34336 ssh2: RSA SHA256:DU+Yi2nD7nw8jYgdAj8DCdA8ysRsrSuDu1TpdDncLY8 Feb 9 19:42:21.271638 sshd[4002]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:42:21.277787 systemd[1]: Started session-22.scope. Feb 9 19:42:21.278937 systemd-logind[1327]: New session 22 of user core. Feb 9 19:42:21.771107 sshd[4002]: pam_unix(sshd:session): session closed for user core Feb 9 19:42:21.774285 systemd[1]: sshd@19-10.200.8.13:22-10.200.12.6:34336.service: Deactivated successfully. Feb 9 19:42:21.775401 systemd[1]: session-22.scope: Deactivated successfully. Feb 9 19:42:21.776200 systemd-logind[1327]: Session 22 logged out. Waiting for processes to exit. Feb 9 19:42:21.777521 systemd-logind[1327]: Removed session 22. Feb 9 19:42:21.874756 systemd[1]: Started sshd@20-10.200.8.13:22-10.200.12.6:34340.service. Feb 9 19:42:22.487880 sshd[4014]: Accepted publickey for core from 10.200.12.6 port 34340 ssh2: RSA SHA256:DU+Yi2nD7nw8jYgdAj8DCdA8ysRsrSuDu1TpdDncLY8 Feb 9 19:42:22.489386 sshd[4014]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:42:22.494847 systemd[1]: Started session-23.scope. Feb 9 19:42:22.495523 systemd-logind[1327]: New session 23 of user core. Feb 9 19:42:24.362723 kubelet[2444]: I0209 19:42:24.362675 2444 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-c9cdc" podStartSLOduration=226.362599192 podCreationTimestamp="2024-02-09 19:38:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:39:03.273708067 +0000 UTC m=+38.367734610" watchObservedRunningTime="2024-02-09 19:42:24.362599192 +0000 UTC m=+239.456625735" Feb 9 19:42:24.388077 env[1342]: time="2024-02-09T19:42:24.387976474Z" level=info msg="StopContainer for \"ef6a829ee0ae241e9740bbd00a3b25c884b3c470d9e258fd77b24d09c227a61e\" with timeout 30 (s)" Feb 9 19:42:24.393958 env[1342]: time="2024-02-09T19:42:24.393238912Z" level=info msg="Stop container \"ef6a829ee0ae241e9740bbd00a3b25c884b3c470d9e258fd77b24d09c227a61e\" with signal terminated" Feb 9 19:42:24.404545 systemd[1]: run-containerd-runc-k8s.io-dd2b658d708b3d74f30ce4ed42dd7bf41296c979691ab83d2ed32d4fe631ada0-runc.9IvRi1.mount: Deactivated successfully. Feb 9 19:42:24.424173 systemd[1]: cri-containerd-ef6a829ee0ae241e9740bbd00a3b25c884b3c470d9e258fd77b24d09c227a61e.scope: Deactivated successfully. 
Feb 9 19:42:24.437349 env[1342]: time="2024-02-09T19:42:24.437125928Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 9 19:42:24.443742 env[1342]: time="2024-02-09T19:42:24.443690075Z" level=info msg="StopContainer for \"dd2b658d708b3d74f30ce4ed42dd7bf41296c979691ab83d2ed32d4fe631ada0\" with timeout 2 (s)" Feb 9 19:42:24.444191 env[1342]: time="2024-02-09T19:42:24.444141579Z" level=info msg="Stop container \"dd2b658d708b3d74f30ce4ed42dd7bf41296c979691ab83d2ed32d4fe631ada0\" with signal terminated" Feb 9 19:42:24.460699 systemd-networkd[1485]: lxc_health: Link DOWN Feb 9 19:42:24.460711 systemd-networkd[1485]: lxc_health: Lost carrier Feb 9 19:42:24.461716 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ef6a829ee0ae241e9740bbd00a3b25c884b3c470d9e258fd77b24d09c227a61e-rootfs.mount: Deactivated successfully. Feb 9 19:42:24.519144 systemd[1]: cri-containerd-dd2b658d708b3d74f30ce4ed42dd7bf41296c979691ab83d2ed32d4fe631ada0.scope: Deactivated successfully. Feb 9 19:42:24.519580 systemd[1]: cri-containerd-dd2b658d708b3d74f30ce4ed42dd7bf41296c979691ab83d2ed32d4fe631ada0.scope: Consumed 7.695s CPU time. Feb 9 19:42:24.528661 env[1342]: time="2024-02-09T19:42:24.528594686Z" level=info msg="shim disconnected" id=ef6a829ee0ae241e9740bbd00a3b25c884b3c470d9e258fd77b24d09c227a61e Feb 9 19:42:24.528947 env[1342]: time="2024-02-09T19:42:24.528916188Z" level=warning msg="cleaning up after shim disconnected" id=ef6a829ee0ae241e9740bbd00a3b25c884b3c470d9e258fd77b24d09c227a61e namespace=k8s.io Feb 9 19:42:24.529066 env[1342]: time="2024-02-09T19:42:24.529047889Z" level=info msg="cleaning up dead shim" Feb 9 19:42:24.550634 env[1342]: time="2024-02-09T19:42:24.550454543Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:42:24Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4072 runtime=io.containerd.runc.v2\n" Feb 9 19:42:24.554430 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dd2b658d708b3d74f30ce4ed42dd7bf41296c979691ab83d2ed32d4fe631ada0-rootfs.mount: Deactivated successfully. Feb 9 19:42:24.560268 env[1342]: time="2024-02-09T19:42:24.560207113Z" level=info msg="StopContainer for \"ef6a829ee0ae241e9740bbd00a3b25c884b3c470d9e258fd77b24d09c227a61e\" returns successfully" Feb 9 19:42:24.561646 env[1342]: time="2024-02-09T19:42:24.561062920Z" level=info msg="StopPodSandbox for \"dddca052de45d3b1c13bf3dda20eaa7e3940f9f91eb7e5cdd5dfab813bb65c06\"" Feb 9 19:42:24.561646 env[1342]: time="2024-02-09T19:42:24.561135020Z" level=info msg="Container to stop \"ef6a829ee0ae241e9740bbd00a3b25c884b3c470d9e258fd77b24d09c227a61e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 19:42:24.569784 systemd[1]: cri-containerd-dddca052de45d3b1c13bf3dda20eaa7e3940f9f91eb7e5cdd5dfab813bb65c06.scope: Deactivated successfully. 
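Annotation: the teardown beginning at 19:42:24 shows two different grace budgets. The operator is stopped "with timeout 30 (s)", the Kubernetes default terminationGracePeriodSeconds, while the agent gets "with timeout 2 (s)"; in both cases the runtime delivers SIGTERM ("with signal terminated") and escalates to SIGKILL if the deadline lapses. A minimal stand-alone illustration of that term-then-kill contract, not containerd's actual code (POSIX-only; stop_with_grace is an illustrative name):

import signal
import subprocess
import time

def stop_with_grace(proc: subprocess.Popen, timeout: float) -> int:
    """SIGTERM, wait up to `timeout` seconds, then SIGKILL: the StopContainer contract."""
    proc.send_signal(signal.SIGTERM)
    try:
        return proc.wait(timeout=timeout)
    except subprocess.TimeoutExpired:
        proc.kill()  # grace expired: escalate, as the runtime does here
        return proc.wait()

if __name__ == "__main__":
    # A child that ignores SIGTERM, forcing the escalation path (SIG_IGN survives exec).
    child = subprocess.Popen(
        ["sleep", "60"],
        preexec_fn=lambda: signal.signal(signal.SIGTERM, signal.SIG_IGN),
    )
    t0 = time.monotonic()
    stop_with_grace(child, timeout=2.0)  # the same 2 s budget the agent gets above
    print(f"stopped after {time.monotonic() - t0:.1f}s")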
Feb 9 19:42:24.577523 env[1342]: time="2024-02-09T19:42:24.577450438Z" level=info msg="shim disconnected" id=dd2b658d708b3d74f30ce4ed42dd7bf41296c979691ab83d2ed32d4fe631ada0 Feb 9 19:42:24.577523 env[1342]: time="2024-02-09T19:42:24.577521938Z" level=warning msg="cleaning up after shim disconnected" id=dd2b658d708b3d74f30ce4ed42dd7bf41296c979691ab83d2ed32d4fe631ada0 namespace=k8s.io Feb 9 19:42:24.577804 env[1342]: time="2024-02-09T19:42:24.577535738Z" level=info msg="cleaning up dead shim" Feb 9 19:42:24.596619 env[1342]: time="2024-02-09T19:42:24.596516875Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:42:24Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4103 runtime=io.containerd.runc.v2\n" Feb 9 19:42:24.602415 env[1342]: time="2024-02-09T19:42:24.602363517Z" level=info msg="StopContainer for \"dd2b658d708b3d74f30ce4ed42dd7bf41296c979691ab83d2ed32d4fe631ada0\" returns successfully" Feb 9 19:42:24.603468 env[1342]: time="2024-02-09T19:42:24.603080122Z" level=info msg="StopPodSandbox for \"31e995dcabfbdc06514fe05cad1ad5c7b59f87a9cdbfe84510211960fdd7aa1f\"" Feb 9 19:42:24.603468 env[1342]: time="2024-02-09T19:42:24.603141922Z" level=info msg="Container to stop \"dd2b658d708b3d74f30ce4ed42dd7bf41296c979691ab83d2ed32d4fe631ada0\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 19:42:24.603468 env[1342]: time="2024-02-09T19:42:24.603160122Z" level=info msg="Container to stop \"194c8919537c3e655e6b4f5dd2d799a4fb061085ee8dd703e4331902a7e317b0\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 19:42:24.603468 env[1342]: time="2024-02-09T19:42:24.603170523Z" level=info msg="Container to stop \"f3d01a47d2629dbc1b55cd334274809892dc2f097e50709871c960e3c33f4960\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 19:42:24.603468 env[1342]: time="2024-02-09T19:42:24.603187523Z" level=info msg="Container to stop \"dc632517a663093e87ab79aa0f7d5ce804524a798b76f575e7849b053297859a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 19:42:24.603468 env[1342]: time="2024-02-09T19:42:24.603198223Z" level=info msg="Container to stop \"7021152fb9ebf2e6590121be253445031fde73ea125832f410b636f88c96e5dd\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 19:42:24.610885 env[1342]: time="2024-02-09T19:42:24.610823278Z" level=info msg="shim disconnected" id=dddca052de45d3b1c13bf3dda20eaa7e3940f9f91eb7e5cdd5dfab813bb65c06 Feb 9 19:42:24.611851 env[1342]: time="2024-02-09T19:42:24.611802785Z" level=warning msg="cleaning up after shim disconnected" id=dddca052de45d3b1c13bf3dda20eaa7e3940f9f91eb7e5cdd5dfab813bb65c06 namespace=k8s.io Feb 9 19:42:24.612030 env[1342]: time="2024-02-09T19:42:24.612006086Z" level=info msg="cleaning up dead shim" Feb 9 19:42:24.613146 systemd[1]: cri-containerd-31e995dcabfbdc06514fe05cad1ad5c7b59f87a9cdbfe84510211960fdd7aa1f.scope: Deactivated successfully. 
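Annotation: before tearing the cilium-h5zxj sandbox down, the runtime enumerates every container it will stop and asserts each "must be in running or unknown state"; the five IDs listed above are the agent plus its four init containers. A sketch resolving those 64-hex IDs back to names via the earlier "returns container id" entries (name_stopped_containers is an illustrative name):

import re

CREATED = re.compile(
    r'for &ContainerMetadata\{Name:([\w-]+),Attempt:\d+,\} '
    r'returns container id \\"([0-9a-f]{64})\\"'
)
TO_STOP = re.compile(r'Container to stop \\"([0-9a-f]{64})\\"')

def name_stopped_containers(log_text: str):
    """Resolve 'Container to stop' IDs to the names they were created under."""
    names = {cid: name for name, cid in CREATED.findall(log_text)}
    for cid in TO_STOP.findall(log_text):
        yield cid[:12], names.get(cid, "<unknown>")

Run over this journal, all five IDs resolve: dd2b658d708b to cilium-agent, 194c8919537c to mount-bpf-fs, f3d01a47d262 to clean-cilium-state, dc632517a663 to apply-sysctl-overwrites, and 7021152fb9eb to mount-cgroup.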
Feb 9 19:42:24.630899 env[1342]: time="2024-02-09T19:42:24.630843822Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:42:24Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4135 runtime=io.containerd.runc.v2\n" Feb 9 19:42:24.631288 env[1342]: time="2024-02-09T19:42:24.631250925Z" level=info msg="TearDown network for sandbox \"dddca052de45d3b1c13bf3dda20eaa7e3940f9f91eb7e5cdd5dfab813bb65c06\" successfully" Feb 9 19:42:24.631288 env[1342]: time="2024-02-09T19:42:24.631283725Z" level=info msg="StopPodSandbox for \"dddca052de45d3b1c13bf3dda20eaa7e3940f9f91eb7e5cdd5dfab813bb65c06\" returns successfully" Feb 9 19:42:24.656897 env[1342]: time="2024-02-09T19:42:24.656738408Z" level=info msg="shim disconnected" id=31e995dcabfbdc06514fe05cad1ad5c7b59f87a9cdbfe84510211960fdd7aa1f Feb 9 19:42:24.656897 env[1342]: time="2024-02-09T19:42:24.656806508Z" level=warning msg="cleaning up after shim disconnected" id=31e995dcabfbdc06514fe05cad1ad5c7b59f87a9cdbfe84510211960fdd7aa1f namespace=k8s.io Feb 9 19:42:24.656897 env[1342]: time="2024-02-09T19:42:24.656823409Z" level=info msg="cleaning up dead shim" Feb 9 19:42:24.666019 env[1342]: time="2024-02-09T19:42:24.665956874Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:42:24Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4161 runtime=io.containerd.runc.v2\n" Feb 9 19:42:24.666385 env[1342]: time="2024-02-09T19:42:24.666348977Z" level=info msg="TearDown network for sandbox \"31e995dcabfbdc06514fe05cad1ad5c7b59f87a9cdbfe84510211960fdd7aa1f\" successfully" Feb 9 19:42:24.666622 env[1342]: time="2024-02-09T19:42:24.666384577Z" level=info msg="StopPodSandbox for \"31e995dcabfbdc06514fe05cad1ad5c7b59f87a9cdbfe84510211960fdd7aa1f\" returns successfully" Feb 9 19:42:24.673296 kubelet[2444]: I0209 19:42:24.672882 2444 scope.go:117] "RemoveContainer" containerID="ef6a829ee0ae241e9740bbd00a3b25c884b3c470d9e258fd77b24d09c227a61e" Feb 9 19:42:24.677622 env[1342]: time="2024-02-09T19:42:24.677108054Z" level=info msg="RemoveContainer for \"ef6a829ee0ae241e9740bbd00a3b25c884b3c470d9e258fd77b24d09c227a61e\"" Feb 9 19:42:24.696016 env[1342]: time="2024-02-09T19:42:24.695948190Z" level=info msg="RemoveContainer for \"ef6a829ee0ae241e9740bbd00a3b25c884b3c470d9e258fd77b24d09c227a61e\" returns successfully" Feb 9 19:42:24.696349 kubelet[2444]: I0209 19:42:24.696318 2444 scope.go:117] "RemoveContainer" containerID="ef6a829ee0ae241e9740bbd00a3b25c884b3c470d9e258fd77b24d09c227a61e" Feb 9 19:42:24.696790 env[1342]: time="2024-02-09T19:42:24.696709095Z" level=error msg="ContainerStatus for \"ef6a829ee0ae241e9740bbd00a3b25c884b3c470d9e258fd77b24d09c227a61e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ef6a829ee0ae241e9740bbd00a3b25c884b3c470d9e258fd77b24d09c227a61e\": not found" Feb 9 19:42:24.696961 kubelet[2444]: E0209 19:42:24.696939 2444 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ef6a829ee0ae241e9740bbd00a3b25c884b3c470d9e258fd77b24d09c227a61e\": not found" containerID="ef6a829ee0ae241e9740bbd00a3b25c884b3c470d9e258fd77b24d09c227a61e" Feb 9 19:42:24.697068 kubelet[2444]: I0209 19:42:24.697050 2444 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ef6a829ee0ae241e9740bbd00a3b25c884b3c470d9e258fd77b24d09c227a61e"} err="failed to get container status \"ef6a829ee0ae241e9740bbd00a3b25c884b3c470d9e258fd77b24d09c227a61e\": 
rpc error: code = NotFound desc = an error occurred when try to find container \"ef6a829ee0ae241e9740bbd00a3b25c884b3c470d9e258fd77b24d09c227a61e\": not found" Feb 9 19:42:24.697136 kubelet[2444]: I0209 19:42:24.697075 2444 scope.go:117] "RemoveContainer" containerID="dd2b658d708b3d74f30ce4ed42dd7bf41296c979691ab83d2ed32d4fe631ada0" Feb 9 19:42:24.698259 env[1342]: time="2024-02-09T19:42:24.698227806Z" level=info msg="RemoveContainer for \"dd2b658d708b3d74f30ce4ed42dd7bf41296c979691ab83d2ed32d4fe631ada0\"" Feb 9 19:42:24.710171 env[1342]: time="2024-02-09T19:42:24.710113992Z" level=info msg="RemoveContainer for \"dd2b658d708b3d74f30ce4ed42dd7bf41296c979691ab83d2ed32d4fe631ada0\" returns successfully" Feb 9 19:42:24.710787 kubelet[2444]: I0209 19:42:24.710592 2444 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/26dce5ed-fc83-4035-a080-496f91ca8608-bpf-maps\") pod \"26dce5ed-fc83-4035-a080-496f91ca8608\" (UID: \"26dce5ed-fc83-4035-a080-496f91ca8608\") " Feb 9 19:42:24.710787 kubelet[2444]: I0209 19:42:24.710632 2444 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/26dce5ed-fc83-4035-a080-496f91ca8608-cilium-cgroup\") pod \"26dce5ed-fc83-4035-a080-496f91ca8608\" (UID: \"26dce5ed-fc83-4035-a080-496f91ca8608\") " Feb 9 19:42:24.710787 kubelet[2444]: I0209 19:42:24.710661 2444 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/26dce5ed-fc83-4035-a080-496f91ca8608-host-proc-sys-kernel\") pod \"26dce5ed-fc83-4035-a080-496f91ca8608\" (UID: \"26dce5ed-fc83-4035-a080-496f91ca8608\") " Feb 9 19:42:24.710787 kubelet[2444]: I0209 19:42:24.710704 2444 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bbb41251-cb97-44c7-8ba0-e945d7b32396-cilium-config-path\") pod \"bbb41251-cb97-44c7-8ba0-e945d7b32396\" (UID: \"bbb41251-cb97-44c7-8ba0-e945d7b32396\") " Feb 9 19:42:24.710787 kubelet[2444]: I0209 19:42:24.710736 2444 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gmlmv\" (UniqueName: \"kubernetes.io/projected/bbb41251-cb97-44c7-8ba0-e945d7b32396-kube-api-access-gmlmv\") pod \"bbb41251-cb97-44c7-8ba0-e945d7b32396\" (UID: \"bbb41251-cb97-44c7-8ba0-e945d7b32396\") " Feb 9 19:42:24.710787 kubelet[2444]: I0209 19:42:24.710761 2444 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/26dce5ed-fc83-4035-a080-496f91ca8608-host-proc-sys-net\") pod \"26dce5ed-fc83-4035-a080-496f91ca8608\" (UID: \"26dce5ed-fc83-4035-a080-496f91ca8608\") " Feb 9 19:42:24.711135 kubelet[2444]: I0209 19:42:24.710788 2444 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/26dce5ed-fc83-4035-a080-496f91ca8608-hubble-tls\") pod \"26dce5ed-fc83-4035-a080-496f91ca8608\" (UID: \"26dce5ed-fc83-4035-a080-496f91ca8608\") " Feb 9 19:42:24.711135 kubelet[2444]: I0209 19:42:24.710820 2444 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/26dce5ed-fc83-4035-a080-496f91ca8608-cilium-config-path\") pod \"26dce5ed-fc83-4035-a080-496f91ca8608\" (UID: \"26dce5ed-fc83-4035-a080-496f91ca8608\") " Feb 9 
19:42:24.711135 kubelet[2444]: I0209 19:42:24.710842 2444 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/26dce5ed-fc83-4035-a080-496f91ca8608-xtables-lock\") pod \"26dce5ed-fc83-4035-a080-496f91ca8608\" (UID: \"26dce5ed-fc83-4035-a080-496f91ca8608\") " Feb 9 19:42:24.711135 kubelet[2444]: I0209 19:42:24.710870 2444 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gdrql\" (UniqueName: \"kubernetes.io/projected/26dce5ed-fc83-4035-a080-496f91ca8608-kube-api-access-gdrql\") pod \"26dce5ed-fc83-4035-a080-496f91ca8608\" (UID: \"26dce5ed-fc83-4035-a080-496f91ca8608\") " Feb 9 19:42:24.711135 kubelet[2444]: I0209 19:42:24.710893 2444 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/26dce5ed-fc83-4035-a080-496f91ca8608-hostproc\") pod \"26dce5ed-fc83-4035-a080-496f91ca8608\" (UID: \"26dce5ed-fc83-4035-a080-496f91ca8608\") " Feb 9 19:42:24.711135 kubelet[2444]: I0209 19:42:24.710913 2444 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/26dce5ed-fc83-4035-a080-496f91ca8608-cilium-run\") pod \"26dce5ed-fc83-4035-a080-496f91ca8608\" (UID: \"26dce5ed-fc83-4035-a080-496f91ca8608\") " Feb 9 19:42:24.711398 kubelet[2444]: I0209 19:42:24.710937 2444 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/26dce5ed-fc83-4035-a080-496f91ca8608-cni-path\") pod \"26dce5ed-fc83-4035-a080-496f91ca8608\" (UID: \"26dce5ed-fc83-4035-a080-496f91ca8608\") " Feb 9 19:42:24.711398 kubelet[2444]: I0209 19:42:24.710973 2444 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/26dce5ed-fc83-4035-a080-496f91ca8608-clustermesh-secrets\") pod \"26dce5ed-fc83-4035-a080-496f91ca8608\" (UID: \"26dce5ed-fc83-4035-a080-496f91ca8608\") " Feb 9 19:42:24.711398 kubelet[2444]: I0209 19:42:24.710999 2444 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/26dce5ed-fc83-4035-a080-496f91ca8608-etc-cni-netd\") pod \"26dce5ed-fc83-4035-a080-496f91ca8608\" (UID: \"26dce5ed-fc83-4035-a080-496f91ca8608\") " Feb 9 19:42:24.711398 kubelet[2444]: I0209 19:42:24.711025 2444 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/26dce5ed-fc83-4035-a080-496f91ca8608-lib-modules\") pod \"26dce5ed-fc83-4035-a080-496f91ca8608\" (UID: \"26dce5ed-fc83-4035-a080-496f91ca8608\") " Feb 9 19:42:24.711398 kubelet[2444]: I0209 19:42:24.711094 2444 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/26dce5ed-fc83-4035-a080-496f91ca8608-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "26dce5ed-fc83-4035-a080-496f91ca8608" (UID: "26dce5ed-fc83-4035-a080-496f91ca8608"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:42:24.711398 kubelet[2444]: I0209 19:42:24.711139 2444 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/26dce5ed-fc83-4035-a080-496f91ca8608-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "26dce5ed-fc83-4035-a080-496f91ca8608" (UID: "26dce5ed-fc83-4035-a080-496f91ca8608"). 
InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:42:24.711695 kubelet[2444]: I0209 19:42:24.711161 2444 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/26dce5ed-fc83-4035-a080-496f91ca8608-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "26dce5ed-fc83-4035-a080-496f91ca8608" (UID: "26dce5ed-fc83-4035-a080-496f91ca8608"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:42:24.711695 kubelet[2444]: I0209 19:42:24.711182 2444 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/26dce5ed-fc83-4035-a080-496f91ca8608-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "26dce5ed-fc83-4035-a080-496f91ca8608" (UID: "26dce5ed-fc83-4035-a080-496f91ca8608"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:42:24.713375 kubelet[2444]: I0209 19:42:24.713333 2444 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bbb41251-cb97-44c7-8ba0-e945d7b32396-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "bbb41251-cb97-44c7-8ba0-e945d7b32396" (UID: "bbb41251-cb97-44c7-8ba0-e945d7b32396"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 9 19:42:24.713797 kubelet[2444]: I0209 19:42:24.713754 2444 scope.go:117] "RemoveContainer" containerID="f3d01a47d2629dbc1b55cd334274809892dc2f097e50709871c960e3c33f4960" Feb 9 19:42:24.714107 kubelet[2444]: I0209 19:42:24.714083 2444 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/26dce5ed-fc83-4035-a080-496f91ca8608-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "26dce5ed-fc83-4035-a080-496f91ca8608" (UID: "26dce5ed-fc83-4035-a080-496f91ca8608"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:42:24.716283 kubelet[2444]: I0209 19:42:24.716254 2444 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/26dce5ed-fc83-4035-a080-496f91ca8608-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "26dce5ed-fc83-4035-a080-496f91ca8608" (UID: "26dce5ed-fc83-4035-a080-496f91ca8608"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 9 19:42:24.716380 kubelet[2444]: I0209 19:42:24.716300 2444 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/26dce5ed-fc83-4035-a080-496f91ca8608-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "26dce5ed-fc83-4035-a080-496f91ca8608" (UID: "26dce5ed-fc83-4035-a080-496f91ca8608"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:42:24.716638 kubelet[2444]: I0209 19:42:24.716616 2444 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/26dce5ed-fc83-4035-a080-496f91ca8608-cni-path" (OuterVolumeSpecName: "cni-path") pod "26dce5ed-fc83-4035-a080-496f91ca8608" (UID: "26dce5ed-fc83-4035-a080-496f91ca8608"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:42:24.716728 kubelet[2444]: I0209 19:42:24.716654 2444 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/26dce5ed-fc83-4035-a080-496f91ca8608-hostproc" (OuterVolumeSpecName: "hostproc") pod "26dce5ed-fc83-4035-a080-496f91ca8608" (UID: "26dce5ed-fc83-4035-a080-496f91ca8608"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:42:24.716728 kubelet[2444]: I0209 19:42:24.716679 2444 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/26dce5ed-fc83-4035-a080-496f91ca8608-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "26dce5ed-fc83-4035-a080-496f91ca8608" (UID: "26dce5ed-fc83-4035-a080-496f91ca8608"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:42:24.716921 kubelet[2444]: I0209 19:42:24.716903 2444 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/26dce5ed-fc83-4035-a080-496f91ca8608-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "26dce5ed-fc83-4035-a080-496f91ca8608" (UID: "26dce5ed-fc83-4035-a080-496f91ca8608"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:42:24.721953 env[1342]: time="2024-02-09T19:42:24.721861576Z" level=info msg="RemoveContainer for \"f3d01a47d2629dbc1b55cd334274809892dc2f097e50709871c960e3c33f4960\"" Feb 9 19:42:24.723767 kubelet[2444]: I0209 19:42:24.723366 2444 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bbb41251-cb97-44c7-8ba0-e945d7b32396-kube-api-access-gmlmv" (OuterVolumeSpecName: "kube-api-access-gmlmv") pod "bbb41251-cb97-44c7-8ba0-e945d7b32396" (UID: "bbb41251-cb97-44c7-8ba0-e945d7b32396"). InnerVolumeSpecName "kube-api-access-gmlmv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 19:42:24.724716 kubelet[2444]: I0209 19:42:24.724684 2444 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26dce5ed-fc83-4035-a080-496f91ca8608-kube-api-access-gdrql" (OuterVolumeSpecName: "kube-api-access-gdrql") pod "26dce5ed-fc83-4035-a080-496f91ca8608" (UID: "26dce5ed-fc83-4035-a080-496f91ca8608"). InnerVolumeSpecName "kube-api-access-gdrql". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 19:42:24.726789 kubelet[2444]: I0209 19:42:24.726733 2444 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26dce5ed-fc83-4035-a080-496f91ca8608-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "26dce5ed-fc83-4035-a080-496f91ca8608" (UID: "26dce5ed-fc83-4035-a080-496f91ca8608"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 19:42:24.728907 kubelet[2444]: I0209 19:42:24.728879 2444 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26dce5ed-fc83-4035-a080-496f91ca8608-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "26dce5ed-fc83-4035-a080-496f91ca8608" (UID: "26dce5ed-fc83-4035-a080-496f91ca8608"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 9 19:42:24.733768 env[1342]: time="2024-02-09T19:42:24.733724162Z" level=info msg="RemoveContainer for \"f3d01a47d2629dbc1b55cd334274809892dc2f097e50709871c960e3c33f4960\" returns successfully" Feb 9 19:42:24.734065 kubelet[2444]: I0209 19:42:24.734043 2444 scope.go:117] "RemoveContainer" containerID="194c8919537c3e655e6b4f5dd2d799a4fb061085ee8dd703e4331902a7e317b0" Feb 9 19:42:24.735474 env[1342]: time="2024-02-09T19:42:24.735442074Z" level=info msg="RemoveContainer for \"194c8919537c3e655e6b4f5dd2d799a4fb061085ee8dd703e4331902a7e317b0\"" Feb 9 19:42:24.759209 env[1342]: time="2024-02-09T19:42:24.759154045Z" level=info msg="RemoveContainer for \"194c8919537c3e655e6b4f5dd2d799a4fb061085ee8dd703e4331902a7e317b0\" returns successfully" Feb 9 19:42:24.759652 kubelet[2444]: I0209 19:42:24.759617 2444 scope.go:117] "RemoveContainer" containerID="dc632517a663093e87ab79aa0f7d5ce804524a798b76f575e7849b053297859a" Feb 9 19:42:24.761076 env[1342]: time="2024-02-09T19:42:24.761038758Z" level=info msg="RemoveContainer for \"dc632517a663093e87ab79aa0f7d5ce804524a798b76f575e7849b053297859a\"" Feb 9 19:42:24.772040 env[1342]: time="2024-02-09T19:42:24.771986837Z" level=info msg="RemoveContainer for \"dc632517a663093e87ab79aa0f7d5ce804524a798b76f575e7849b053297859a\" returns successfully" Feb 9 19:42:24.772321 kubelet[2444]: I0209 19:42:24.772285 2444 scope.go:117] "RemoveContainer" containerID="7021152fb9ebf2e6590121be253445031fde73ea125832f410b636f88c96e5dd" Feb 9 19:42:24.773661 env[1342]: time="2024-02-09T19:42:24.773619349Z" level=info msg="RemoveContainer for \"7021152fb9ebf2e6590121be253445031fde73ea125832f410b636f88c96e5dd\"" Feb 9 19:42:24.782224 env[1342]: time="2024-02-09T19:42:24.782172810Z" level=info msg="RemoveContainer for \"7021152fb9ebf2e6590121be253445031fde73ea125832f410b636f88c96e5dd\" returns successfully" Feb 9 19:42:24.782467 kubelet[2444]: I0209 19:42:24.782444 2444 scope.go:117] "RemoveContainer" containerID="dd2b658d708b3d74f30ce4ed42dd7bf41296c979691ab83d2ed32d4fe631ada0" Feb 9 19:42:24.782808 env[1342]: time="2024-02-09T19:42:24.782743114Z" level=error msg="ContainerStatus for \"dd2b658d708b3d74f30ce4ed42dd7bf41296c979691ab83d2ed32d4fe631ada0\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"dd2b658d708b3d74f30ce4ed42dd7bf41296c979691ab83d2ed32d4fe631ada0\": not found" Feb 9 19:42:24.782990 kubelet[2444]: E0209 19:42:24.782962 2444 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"dd2b658d708b3d74f30ce4ed42dd7bf41296c979691ab83d2ed32d4fe631ada0\": not found" containerID="dd2b658d708b3d74f30ce4ed42dd7bf41296c979691ab83d2ed32d4fe631ada0" Feb 9 19:42:24.783069 kubelet[2444]: I0209 19:42:24.783008 2444 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"dd2b658d708b3d74f30ce4ed42dd7bf41296c979691ab83d2ed32d4fe631ada0"} err="failed to get container status \"dd2b658d708b3d74f30ce4ed42dd7bf41296c979691ab83d2ed32d4fe631ada0\": rpc error: code = NotFound desc = an error occurred when try to find container \"dd2b658d708b3d74f30ce4ed42dd7bf41296c979691ab83d2ed32d4fe631ada0\": not found" Feb 9 19:42:24.783069 kubelet[2444]: I0209 19:42:24.783029 2444 scope.go:117] "RemoveContainer" containerID="f3d01a47d2629dbc1b55cd334274809892dc2f097e50709871c960e3c33f4960" Feb 9 19:42:24.783275 env[1342]: time="2024-02-09T19:42:24.783221518Z" 
level=error msg="ContainerStatus for \"f3d01a47d2629dbc1b55cd334274809892dc2f097e50709871c960e3c33f4960\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f3d01a47d2629dbc1b55cd334274809892dc2f097e50709871c960e3c33f4960\": not found" Feb 9 19:42:24.783386 kubelet[2444]: E0209 19:42:24.783368 2444 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f3d01a47d2629dbc1b55cd334274809892dc2f097e50709871c960e3c33f4960\": not found" containerID="f3d01a47d2629dbc1b55cd334274809892dc2f097e50709871c960e3c33f4960" Feb 9 19:42:24.783459 kubelet[2444]: I0209 19:42:24.783402 2444 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f3d01a47d2629dbc1b55cd334274809892dc2f097e50709871c960e3c33f4960"} err="failed to get container status \"f3d01a47d2629dbc1b55cd334274809892dc2f097e50709871c960e3c33f4960\": rpc error: code = NotFound desc = an error occurred when try to find container \"f3d01a47d2629dbc1b55cd334274809892dc2f097e50709871c960e3c33f4960\": not found" Feb 9 19:42:24.783459 kubelet[2444]: I0209 19:42:24.783415 2444 scope.go:117] "RemoveContainer" containerID="194c8919537c3e655e6b4f5dd2d799a4fb061085ee8dd703e4331902a7e317b0" Feb 9 19:42:24.783671 env[1342]: time="2024-02-09T19:42:24.783624221Z" level=error msg="ContainerStatus for \"194c8919537c3e655e6b4f5dd2d799a4fb061085ee8dd703e4331902a7e317b0\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"194c8919537c3e655e6b4f5dd2d799a4fb061085ee8dd703e4331902a7e317b0\": not found" Feb 9 19:42:24.783807 kubelet[2444]: E0209 19:42:24.783762 2444 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"194c8919537c3e655e6b4f5dd2d799a4fb061085ee8dd703e4331902a7e317b0\": not found" containerID="194c8919537c3e655e6b4f5dd2d799a4fb061085ee8dd703e4331902a7e317b0" Feb 9 19:42:24.783807 kubelet[2444]: I0209 19:42:24.783793 2444 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"194c8919537c3e655e6b4f5dd2d799a4fb061085ee8dd703e4331902a7e317b0"} err="failed to get container status \"194c8919537c3e655e6b4f5dd2d799a4fb061085ee8dd703e4331902a7e317b0\": rpc error: code = NotFound desc = an error occurred when try to find container \"194c8919537c3e655e6b4f5dd2d799a4fb061085ee8dd703e4331902a7e317b0\": not found" Feb 9 19:42:24.783807 kubelet[2444]: I0209 19:42:24.783805 2444 scope.go:117] "RemoveContainer" containerID="dc632517a663093e87ab79aa0f7d5ce804524a798b76f575e7849b053297859a" Feb 9 19:42:24.784036 env[1342]: time="2024-02-09T19:42:24.783988323Z" level=error msg="ContainerStatus for \"dc632517a663093e87ab79aa0f7d5ce804524a798b76f575e7849b053297859a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"dc632517a663093e87ab79aa0f7d5ce804524a798b76f575e7849b053297859a\": not found" Feb 9 19:42:24.784147 kubelet[2444]: E0209 19:42:24.784129 2444 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"dc632517a663093e87ab79aa0f7d5ce804524a798b76f575e7849b053297859a\": not found" containerID="dc632517a663093e87ab79aa0f7d5ce804524a798b76f575e7849b053297859a" Feb 9 19:42:24.784219 kubelet[2444]: I0209 19:42:24.784160 2444 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"containerd","ID":"dc632517a663093e87ab79aa0f7d5ce804524a798b76f575e7849b053297859a"} err="failed to get container status \"dc632517a663093e87ab79aa0f7d5ce804524a798b76f575e7849b053297859a\": rpc error: code = NotFound desc = an error occurred when try to find container \"dc632517a663093e87ab79aa0f7d5ce804524a798b76f575e7849b053297859a\": not found" Feb 9 19:42:24.784219 kubelet[2444]: I0209 19:42:24.784173 2444 scope.go:117] "RemoveContainer" containerID="7021152fb9ebf2e6590121be253445031fde73ea125832f410b636f88c96e5dd" Feb 9 19:42:24.784394 env[1342]: time="2024-02-09T19:42:24.784349226Z" level=error msg="ContainerStatus for \"7021152fb9ebf2e6590121be253445031fde73ea125832f410b636f88c96e5dd\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7021152fb9ebf2e6590121be253445031fde73ea125832f410b636f88c96e5dd\": not found" Feb 9 19:42:24.784497 kubelet[2444]: E0209 19:42:24.784479 2444 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7021152fb9ebf2e6590121be253445031fde73ea125832f410b636f88c96e5dd\": not found" containerID="7021152fb9ebf2e6590121be253445031fde73ea125832f410b636f88c96e5dd" Feb 9 19:42:24.784581 kubelet[2444]: I0209 19:42:24.784510 2444 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7021152fb9ebf2e6590121be253445031fde73ea125832f410b636f88c96e5dd"} err="failed to get container status \"7021152fb9ebf2e6590121be253445031fde73ea125832f410b636f88c96e5dd\": rpc error: code = NotFound desc = an error occurred when try to find container \"7021152fb9ebf2e6590121be253445031fde73ea125832f410b636f88c96e5dd\": not found" Feb 9 19:42:24.812067 kubelet[2444]: I0209 19:42:24.812019 2444 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/26dce5ed-fc83-4035-a080-496f91ca8608-bpf-maps\") on node \"ci-3510.3.2-a-4c52a92a5f\" DevicePath \"\"" Feb 9 19:42:24.812067 kubelet[2444]: I0209 19:42:24.812064 2444 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/26dce5ed-fc83-4035-a080-496f91ca8608-cilium-cgroup\") on node \"ci-3510.3.2-a-4c52a92a5f\" DevicePath \"\"" Feb 9 19:42:24.812067 kubelet[2444]: I0209 19:42:24.812083 2444 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/26dce5ed-fc83-4035-a080-496f91ca8608-host-proc-sys-kernel\") on node \"ci-3510.3.2-a-4c52a92a5f\" DevicePath \"\"" Feb 9 19:42:24.812413 kubelet[2444]: I0209 19:42:24.812104 2444 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bbb41251-cb97-44c7-8ba0-e945d7b32396-cilium-config-path\") on node \"ci-3510.3.2-a-4c52a92a5f\" DevicePath \"\"" Feb 9 19:42:24.812413 kubelet[2444]: I0209 19:42:24.812121 2444 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-gmlmv\" (UniqueName: \"kubernetes.io/projected/bbb41251-cb97-44c7-8ba0-e945d7b32396-kube-api-access-gmlmv\") on node \"ci-3510.3.2-a-4c52a92a5f\" DevicePath \"\"" Feb 9 19:42:24.812413 kubelet[2444]: I0209 19:42:24.812138 2444 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/26dce5ed-fc83-4035-a080-496f91ca8608-host-proc-sys-net\") on node \"ci-3510.3.2-a-4c52a92a5f\" DevicePath \"\"" Feb 9 
19:42:24.812413 kubelet[2444]: I0209 19:42:24.812154 2444 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/26dce5ed-fc83-4035-a080-496f91ca8608-cilium-config-path\") on node \"ci-3510.3.2-a-4c52a92a5f\" DevicePath \"\"" Feb 9 19:42:24.812413 kubelet[2444]: I0209 19:42:24.812170 2444 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/26dce5ed-fc83-4035-a080-496f91ca8608-xtables-lock\") on node \"ci-3510.3.2-a-4c52a92a5f\" DevicePath \"\"" Feb 9 19:42:24.812413 kubelet[2444]: I0209 19:42:24.812186 2444 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-gdrql\" (UniqueName: \"kubernetes.io/projected/26dce5ed-fc83-4035-a080-496f91ca8608-kube-api-access-gdrql\") on node \"ci-3510.3.2-a-4c52a92a5f\" DevicePath \"\"" Feb 9 19:42:24.812413 kubelet[2444]: I0209 19:42:24.812201 2444 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/26dce5ed-fc83-4035-a080-496f91ca8608-hostproc\") on node \"ci-3510.3.2-a-4c52a92a5f\" DevicePath \"\"" Feb 9 19:42:24.812413 kubelet[2444]: I0209 19:42:24.812220 2444 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/26dce5ed-fc83-4035-a080-496f91ca8608-hubble-tls\") on node \"ci-3510.3.2-a-4c52a92a5f\" DevicePath \"\"" Feb 9 19:42:24.812768 kubelet[2444]: I0209 19:42:24.812234 2444 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/26dce5ed-fc83-4035-a080-496f91ca8608-cni-path\") on node \"ci-3510.3.2-a-4c52a92a5f\" DevicePath \"\"" Feb 9 19:42:24.812768 kubelet[2444]: I0209 19:42:24.812250 2444 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/26dce5ed-fc83-4035-a080-496f91ca8608-cilium-run\") on node \"ci-3510.3.2-a-4c52a92a5f\" DevicePath \"\"" Feb 9 19:42:24.812768 kubelet[2444]: I0209 19:42:24.812270 2444 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/26dce5ed-fc83-4035-a080-496f91ca8608-etc-cni-netd\") on node \"ci-3510.3.2-a-4c52a92a5f\" DevicePath \"\"" Feb 9 19:42:24.812768 kubelet[2444]: I0209 19:42:24.812286 2444 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/26dce5ed-fc83-4035-a080-496f91ca8608-lib-modules\") on node \"ci-3510.3.2-a-4c52a92a5f\" DevicePath \"\"" Feb 9 19:42:24.812768 kubelet[2444]: I0209 19:42:24.812303 2444 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/26dce5ed-fc83-4035-a080-496f91ca8608-clustermesh-secrets\") on node \"ci-3510.3.2-a-4c52a92a5f\" DevicePath \"\"" Feb 9 19:42:24.977986 systemd[1]: Removed slice kubepods-besteffort-podbbb41251_cb97_44c7_8ba0_e945d7b32396.slice. 
Feb 9 19:42:25.055506 env[1342]: time="2024-02-09T19:42:25.055443374Z" level=info msg="StopPodSandbox for \"31e995dcabfbdc06514fe05cad1ad5c7b59f87a9cdbfe84510211960fdd7aa1f\"" Feb 9 19:42:25.055762 env[1342]: time="2024-02-09T19:42:25.055593175Z" level=info msg="TearDown network for sandbox \"31e995dcabfbdc06514fe05cad1ad5c7b59f87a9cdbfe84510211960fdd7aa1f\" successfully" Feb 9 19:42:25.055762 env[1342]: time="2024-02-09T19:42:25.055650176Z" level=info msg="StopPodSandbox for \"31e995dcabfbdc06514fe05cad1ad5c7b59f87a9cdbfe84510211960fdd7aa1f\" returns successfully" Feb 9 19:42:25.056407 env[1342]: time="2024-02-09T19:42:25.056367481Z" level=info msg="RemovePodSandbox for \"31e995dcabfbdc06514fe05cad1ad5c7b59f87a9cdbfe84510211960fdd7aa1f\"" Feb 9 19:42:25.056523 env[1342]: time="2024-02-09T19:42:25.056406181Z" level=info msg="Forcibly stopping sandbox \"31e995dcabfbdc06514fe05cad1ad5c7b59f87a9cdbfe84510211960fdd7aa1f\"" Feb 9 19:42:25.056523 env[1342]: time="2024-02-09T19:42:25.056489482Z" level=info msg="TearDown network for sandbox \"31e995dcabfbdc06514fe05cad1ad5c7b59f87a9cdbfe84510211960fdd7aa1f\" successfully" Feb 9 19:42:25.070370 env[1342]: time="2024-02-09T19:42:25.070313481Z" level=info msg="RemovePodSandbox \"31e995dcabfbdc06514fe05cad1ad5c7b59f87a9cdbfe84510211960fdd7aa1f\" returns successfully" Feb 9 19:42:25.071325 env[1342]: time="2024-02-09T19:42:25.071285088Z" level=info msg="StopPodSandbox for \"dddca052de45d3b1c13bf3dda20eaa7e3940f9f91eb7e5cdd5dfab813bb65c06\"" Feb 9 19:42:25.071465 env[1342]: time="2024-02-09T19:42:25.071392488Z" level=info msg="TearDown network for sandbox \"dddca052de45d3b1c13bf3dda20eaa7e3940f9f91eb7e5cdd5dfab813bb65c06\" successfully" Feb 9 19:42:25.071465 env[1342]: time="2024-02-09T19:42:25.071440489Z" level=info msg="StopPodSandbox for \"dddca052de45d3b1c13bf3dda20eaa7e3940f9f91eb7e5cdd5dfab813bb65c06\" returns successfully" Feb 9 19:42:25.071824 env[1342]: time="2024-02-09T19:42:25.071794091Z" level=info msg="RemovePodSandbox for \"dddca052de45d3b1c13bf3dda20eaa7e3940f9f91eb7e5cdd5dfab813bb65c06\"" Feb 9 19:42:25.071912 env[1342]: time="2024-02-09T19:42:25.071825192Z" level=info msg="Forcibly stopping sandbox \"dddca052de45d3b1c13bf3dda20eaa7e3940f9f91eb7e5cdd5dfab813bb65c06\"" Feb 9 19:42:25.071968 env[1342]: time="2024-02-09T19:42:25.071918292Z" level=info msg="TearDown network for sandbox \"dddca052de45d3b1c13bf3dda20eaa7e3940f9f91eb7e5cdd5dfab813bb65c06\" successfully" Feb 9 19:42:25.080423 kubelet[2444]: I0209 19:42:25.080380 2444 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="bbb41251-cb97-44c7-8ba0-e945d7b32396" path="/var/lib/kubelet/pods/bbb41251-cb97-44c7-8ba0-e945d7b32396/volumes" Feb 9 19:42:25.085116 systemd[1]: Removed slice kubepods-burstable-pod26dce5ed_fc83_4035_a080_496f91ca8608.slice. Feb 9 19:42:25.085262 systemd[1]: kubepods-burstable-pod26dce5ed_fc83_4035_a080_496f91ca8608.slice: Consumed 7.825s CPU time. 
Feb 9 19:42:25.149203 env[1342]: time="2024-02-09T19:42:25.149138845Z" level=info msg="RemovePodSandbox \"dddca052de45d3b1c13bf3dda20eaa7e3940f9f91eb7e5cdd5dfab813bb65c06\" returns successfully" Feb 9 19:42:25.219875 kubelet[2444]: E0209 19:42:25.219835 2444 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 9 19:42:25.396493 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dddca052de45d3b1c13bf3dda20eaa7e3940f9f91eb7e5cdd5dfab813bb65c06-rootfs.mount: Deactivated successfully. Feb 9 19:42:25.396638 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-dddca052de45d3b1c13bf3dda20eaa7e3940f9f91eb7e5cdd5dfab813bb65c06-shm.mount: Deactivated successfully. Feb 9 19:42:25.396730 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-31e995dcabfbdc06514fe05cad1ad5c7b59f87a9cdbfe84510211960fdd7aa1f-rootfs.mount: Deactivated successfully. Feb 9 19:42:25.396806 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-31e995dcabfbdc06514fe05cad1ad5c7b59f87a9cdbfe84510211960fdd7aa1f-shm.mount: Deactivated successfully. Feb 9 19:42:25.396890 systemd[1]: var-lib-kubelet-pods-bbb41251\x2dcb97\x2d44c7\x2d8ba0\x2de945d7b32396-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dgmlmv.mount: Deactivated successfully. Feb 9 19:42:25.396995 systemd[1]: var-lib-kubelet-pods-26dce5ed\x2dfc83\x2d4035\x2da080\x2d496f91ca8608-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dgdrql.mount: Deactivated successfully. Feb 9 19:42:25.397073 systemd[1]: var-lib-kubelet-pods-26dce5ed\x2dfc83\x2d4035\x2da080\x2d496f91ca8608-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 9 19:42:25.397149 systemd[1]: var-lib-kubelet-pods-26dce5ed\x2dfc83\x2d4035\x2da080\x2d496f91ca8608-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 9 19:42:26.422294 sshd[4014]: pam_unix(sshd:session): session closed for user core Feb 9 19:42:26.426263 systemd[1]: sshd@20-10.200.8.13:22-10.200.12.6:34340.service: Deactivated successfully. Feb 9 19:42:26.427401 systemd[1]: session-23.scope: Deactivated successfully. Feb 9 19:42:26.428371 systemd-logind[1327]: Session 23 logged out. Waiting for processes to exit. Feb 9 19:42:26.429443 systemd-logind[1327]: Removed session 23. Feb 9 19:42:26.528444 systemd[1]: Started sshd@21-10.200.8.13:22-10.200.12.6:34342.service. Feb 9 19:42:27.081478 kubelet[2444]: I0209 19:42:27.080883 2444 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="26dce5ed-fc83-4035-a080-496f91ca8608" path="/var/lib/kubelet/pods/26dce5ed-fc83-4035-a080-496f91ca8608/volumes" Feb 9 19:42:27.146878 sshd[4181]: Accepted publickey for core from 10.200.12.6 port 34342 ssh2: RSA SHA256:DU+Yi2nD7nw8jYgdAj8DCdA8ysRsrSuDu1TpdDncLY8 Feb 9 19:42:27.148680 sshd[4181]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:42:27.158461 systemd[1]: Started session-24.scope. Feb 9 19:42:27.159173 systemd-logind[1327]: New session 24 of user core. 
Feb 9 19:42:28.024659 kubelet[2444]: I0209 19:42:28.024591 2444 topology_manager.go:215] "Topology Admit Handler" podUID="3bcf1c61-dd55-491d-a7a7-7304bb0b2f7a" podNamespace="kube-system" podName="cilium-rqldm" Feb 9 19:42:28.024999 kubelet[2444]: E0209 19:42:28.024982 2444 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="26dce5ed-fc83-4035-a080-496f91ca8608" containerName="apply-sysctl-overwrites" Feb 9 19:42:28.025128 kubelet[2444]: E0209 19:42:28.025116 2444 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="26dce5ed-fc83-4035-a080-496f91ca8608" containerName="mount-bpf-fs" Feb 9 19:42:28.025241 kubelet[2444]: E0209 19:42:28.025230 2444 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="26dce5ed-fc83-4035-a080-496f91ca8608" containerName="clean-cilium-state" Feb 9 19:42:28.025338 kubelet[2444]: E0209 19:42:28.025327 2444 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="26dce5ed-fc83-4035-a080-496f91ca8608" containerName="cilium-agent" Feb 9 19:42:28.025428 kubelet[2444]: E0209 19:42:28.025418 2444 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="26dce5ed-fc83-4035-a080-496f91ca8608" containerName="mount-cgroup" Feb 9 19:42:28.025524 kubelet[2444]: E0209 19:42:28.025515 2444 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="bbb41251-cb97-44c7-8ba0-e945d7b32396" containerName="cilium-operator" Feb 9 19:42:28.025659 kubelet[2444]: I0209 19:42:28.025645 2444 memory_manager.go:346] "RemoveStaleState removing state" podUID="26dce5ed-fc83-4035-a080-496f91ca8608" containerName="cilium-agent" Feb 9 19:42:28.025766 kubelet[2444]: I0209 19:42:28.025755 2444 memory_manager.go:346] "RemoveStaleState removing state" podUID="bbb41251-cb97-44c7-8ba0-e945d7b32396" containerName="cilium-operator" Feb 9 19:42:28.033746 systemd[1]: Created slice kubepods-burstable-pod3bcf1c61_dd55_491d_a7a7_7304bb0b2f7a.slice. 
Feb 9 19:42:28.128326 sshd[4181]: pam_unix(sshd:session): session closed for user core Feb 9 19:42:28.131670 kubelet[2444]: I0209 19:42:28.130237 2444 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3bcf1c61-dd55-491d-a7a7-7304bb0b2f7a-host-proc-sys-kernel\") pod \"cilium-rqldm\" (UID: \"3bcf1c61-dd55-491d-a7a7-7304bb0b2f7a\") " pod="kube-system/cilium-rqldm" Feb 9 19:42:28.131670 kubelet[2444]: I0209 19:42:28.130314 2444 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3bcf1c61-dd55-491d-a7a7-7304bb0b2f7a-xtables-lock\") pod \"cilium-rqldm\" (UID: \"3bcf1c61-dd55-491d-a7a7-7304bb0b2f7a\") " pod="kube-system/cilium-rqldm" Feb 9 19:42:28.131670 kubelet[2444]: I0209 19:42:28.130350 2444 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3bcf1c61-dd55-491d-a7a7-7304bb0b2f7a-cilium-cgroup\") pod \"cilium-rqldm\" (UID: \"3bcf1c61-dd55-491d-a7a7-7304bb0b2f7a\") " pod="kube-system/cilium-rqldm" Feb 9 19:42:28.131670 kubelet[2444]: I0209 19:42:28.130385 2444 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3bcf1c61-dd55-491d-a7a7-7304bb0b2f7a-cni-path\") pod \"cilium-rqldm\" (UID: \"3bcf1c61-dd55-491d-a7a7-7304bb0b2f7a\") " pod="kube-system/cilium-rqldm" Feb 9 19:42:28.131670 kubelet[2444]: I0209 19:42:28.130421 2444 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3bcf1c61-dd55-491d-a7a7-7304bb0b2f7a-hostproc\") pod \"cilium-rqldm\" (UID: \"3bcf1c61-dd55-491d-a7a7-7304bb0b2f7a\") " pod="kube-system/cilium-rqldm" Feb 9 19:42:28.131670 kubelet[2444]: I0209 19:42:28.130457 2444 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3bcf1c61-dd55-491d-a7a7-7304bb0b2f7a-hubble-tls\") pod \"cilium-rqldm\" (UID: \"3bcf1c61-dd55-491d-a7a7-7304bb0b2f7a\") " pod="kube-system/cilium-rqldm" Feb 9 19:42:28.132460 kubelet[2444]: I0209 19:42:28.130494 2444 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/3bcf1c61-dd55-491d-a7a7-7304bb0b2f7a-cilium-ipsec-secrets\") pod \"cilium-rqldm\" (UID: \"3bcf1c61-dd55-491d-a7a7-7304bb0b2f7a\") " pod="kube-system/cilium-rqldm" Feb 9 19:42:28.132460 kubelet[2444]: I0209 19:42:28.130531 2444 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3bcf1c61-dd55-491d-a7a7-7304bb0b2f7a-clustermesh-secrets\") pod \"cilium-rqldm\" (UID: \"3bcf1c61-dd55-491d-a7a7-7304bb0b2f7a\") " pod="kube-system/cilium-rqldm" Feb 9 19:42:28.132460 kubelet[2444]: I0209 19:42:28.130583 2444 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3bcf1c61-dd55-491d-a7a7-7304bb0b2f7a-cilium-run\") pod \"cilium-rqldm\" (UID: \"3bcf1c61-dd55-491d-a7a7-7304bb0b2f7a\") " pod="kube-system/cilium-rqldm" Feb 9 19:42:28.132460 kubelet[2444]: I0209 19:42:28.130616 2444 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3bcf1c61-dd55-491d-a7a7-7304bb0b2f7a-host-proc-sys-net\") pod \"cilium-rqldm\" (UID: \"3bcf1c61-dd55-491d-a7a7-7304bb0b2f7a\") " pod="kube-system/cilium-rqldm" Feb 9 19:42:28.132460 kubelet[2444]: I0209 19:42:28.130650 2444 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3bcf1c61-dd55-491d-a7a7-7304bb0b2f7a-etc-cni-netd\") pod \"cilium-rqldm\" (UID: \"3bcf1c61-dd55-491d-a7a7-7304bb0b2f7a\") " pod="kube-system/cilium-rqldm" Feb 9 19:42:28.132468 systemd[1]: sshd@21-10.200.8.13:22-10.200.12.6:34342.service: Deactivated successfully. Feb 9 19:42:28.132910 kubelet[2444]: I0209 19:42:28.130683 2444 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3bcf1c61-dd55-491d-a7a7-7304bb0b2f7a-cilium-config-path\") pod \"cilium-rqldm\" (UID: \"3bcf1c61-dd55-491d-a7a7-7304bb0b2f7a\") " pod="kube-system/cilium-rqldm" Feb 9 19:42:28.132910 kubelet[2444]: I0209 19:42:28.130717 2444 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c8v25\" (UniqueName: \"kubernetes.io/projected/3bcf1c61-dd55-491d-a7a7-7304bb0b2f7a-kube-api-access-c8v25\") pod \"cilium-rqldm\" (UID: \"3bcf1c61-dd55-491d-a7a7-7304bb0b2f7a\") " pod="kube-system/cilium-rqldm" Feb 9 19:42:28.132910 kubelet[2444]: I0209 19:42:28.130754 2444 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3bcf1c61-dd55-491d-a7a7-7304bb0b2f7a-lib-modules\") pod \"cilium-rqldm\" (UID: \"3bcf1c61-dd55-491d-a7a7-7304bb0b2f7a\") " pod="kube-system/cilium-rqldm" Feb 9 19:42:28.132910 kubelet[2444]: I0209 19:42:28.130793 2444 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3bcf1c61-dd55-491d-a7a7-7304bb0b2f7a-bpf-maps\") pod \"cilium-rqldm\" (UID: \"3bcf1c61-dd55-491d-a7a7-7304bb0b2f7a\") " pod="kube-system/cilium-rqldm" Feb 9 19:42:28.133671 systemd[1]: session-24.scope: Deactivated successfully. Feb 9 19:42:28.134388 systemd-logind[1327]: Session 24 logged out. Waiting for processes to exit. Feb 9 19:42:28.135309 systemd-logind[1327]: Removed session 24. Feb 9 19:42:28.233041 systemd[1]: Started sshd@22-10.200.8.13:22-10.200.12.6:52520.service. Feb 9 19:42:28.341013 env[1342]: time="2024-02-09T19:42:28.340260153Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rqldm,Uid:3bcf1c61-dd55-491d-a7a7-7304bb0b2f7a,Namespace:kube-system,Attempt:0,}" Feb 9 19:42:28.374359 env[1342]: time="2024-02-09T19:42:28.374131392Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:42:28.374359 env[1342]: time="2024-02-09T19:42:28.374175392Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:42:28.374359 env[1342]: time="2024-02-09T19:42:28.374188392Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:42:28.374711 env[1342]: time="2024-02-09T19:42:28.374412694Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/4860982b647c4627f19b8ca1f2cad0fa83c7dbffef5a163e984a44aa30b63e83 pid=4205 runtime=io.containerd.runc.v2 Feb 9 19:42:28.388857 systemd[1]: Started cri-containerd-4860982b647c4627f19b8ca1f2cad0fa83c7dbffef5a163e984a44aa30b63e83.scope. Feb 9 19:42:28.416566 env[1342]: time="2024-02-09T19:42:28.416503191Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rqldm,Uid:3bcf1c61-dd55-491d-a7a7-7304bb0b2f7a,Namespace:kube-system,Attempt:0,} returns sandbox id \"4860982b647c4627f19b8ca1f2cad0fa83c7dbffef5a163e984a44aa30b63e83\"" Feb 9 19:42:28.420262 env[1342]: time="2024-02-09T19:42:28.420211017Z" level=info msg="CreateContainer within sandbox \"4860982b647c4627f19b8ca1f2cad0fa83c7dbffef5a163e984a44aa30b63e83\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 9 19:42:28.464416 env[1342]: time="2024-02-09T19:42:28.464357929Z" level=info msg="CreateContainer within sandbox \"4860982b647c4627f19b8ca1f2cad0fa83c7dbffef5a163e984a44aa30b63e83\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"00aa2eb98c9221e780a2da9df0beb5a66940fd1a1ffa0be604f03f7ae4c265cc\"" Feb 9 19:42:28.465708 env[1342]: time="2024-02-09T19:42:28.465672238Z" level=info msg="StartContainer for \"00aa2eb98c9221e780a2da9df0beb5a66940fd1a1ffa0be604f03f7ae4c265cc\"" Feb 9 19:42:28.485887 systemd[1]: Started cri-containerd-00aa2eb98c9221e780a2da9df0beb5a66940fd1a1ffa0be604f03f7ae4c265cc.scope. Feb 9 19:42:28.500610 systemd[1]: cri-containerd-00aa2eb98c9221e780a2da9df0beb5a66940fd1a1ffa0be604f03f7ae4c265cc.scope: Deactivated successfully. 
Feb 9 19:42:28.564988 env[1342]: time="2024-02-09T19:42:28.564915338Z" level=info msg="shim disconnected" id=00aa2eb98c9221e780a2da9df0beb5a66940fd1a1ffa0be604f03f7ae4c265cc Feb 9 19:42:28.564988 env[1342]: time="2024-02-09T19:42:28.564988939Z" level=warning msg="cleaning up after shim disconnected" id=00aa2eb98c9221e780a2da9df0beb5a66940fd1a1ffa0be604f03f7ae4c265cc namespace=k8s.io Feb 9 19:42:28.564988 env[1342]: time="2024-02-09T19:42:28.565000739Z" level=info msg="cleaning up dead shim" Feb 9 19:42:28.574008 env[1342]: time="2024-02-09T19:42:28.573948402Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:42:28Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4265 runtime=io.containerd.runc.v2\ntime=\"2024-02-09T19:42:28Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/00aa2eb98c9221e780a2da9df0beb5a66940fd1a1ffa0be604f03f7ae4c265cc/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Feb 9 19:42:28.574365 env[1342]: time="2024-02-09T19:42:28.574245004Z" level=error msg="copy shim log" error="read /proc/self/fd/35: file already closed" Feb 9 19:42:28.576442 env[1342]: time="2024-02-09T19:42:28.576397219Z" level=error msg="Failed to pipe stderr of container \"00aa2eb98c9221e780a2da9df0beb5a66940fd1a1ffa0be604f03f7ae4c265cc\"" error="reading from a closed fifo" Feb 9 19:42:28.577669 env[1342]: time="2024-02-09T19:42:28.577622628Z" level=error msg="Failed to pipe stdout of container \"00aa2eb98c9221e780a2da9df0beb5a66940fd1a1ffa0be604f03f7ae4c265cc\"" error="reading from a closed fifo" Feb 9 19:42:28.582209 env[1342]: time="2024-02-09T19:42:28.582140460Z" level=error msg="StartContainer for \"00aa2eb98c9221e780a2da9df0beb5a66940fd1a1ffa0be604f03f7ae4c265cc\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Feb 9 19:42:28.582499 kubelet[2444]: E0209 19:42:28.582474 2444 remote_runtime.go:343] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="00aa2eb98c9221e780a2da9df0beb5a66940fd1a1ffa0be604f03f7ae4c265cc" Feb 9 19:42:28.582684 kubelet[2444]: E0209 19:42:28.582665 2444 kuberuntime_manager.go:1209] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Feb 9 19:42:28.582684 kubelet[2444]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Feb 9 19:42:28.582684 kubelet[2444]: rm /hostbin/cilium-mount Feb 9 19:42:28.582831 kubelet[2444]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-c8v25,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-rqldm_kube-system(3bcf1c61-dd55-491d-a7a7-7304bb0b2f7a): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Feb 9 19:42:28.582831 kubelet[2444]: E0209 19:42:28.582727 2444 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-rqldm" podUID="3bcf1c61-dd55-491d-a7a7-7304bb0b2f7a" Feb 9 19:42:28.699228 env[1342]: time="2024-02-09T19:42:28.699177286Z" level=info msg="CreateContainer within sandbox \"4860982b647c4627f19b8ca1f2cad0fa83c7dbffef5a163e984a44aa30b63e83\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:1,}" Feb 9 19:42:28.735951 env[1342]: time="2024-02-09T19:42:28.735895245Z" level=info msg="CreateContainer within sandbox \"4860982b647c4627f19b8ca1f2cad0fa83c7dbffef5a163e984a44aa30b63e83\" for &ContainerMetadata{Name:mount-cgroup,Attempt:1,} returns container id \"ca81716c540d0861115aba76b7b8289c7cfb69497def0d1ecc5924aa813650a5\"" Feb 9 19:42:28.737037 env[1342]: time="2024-02-09T19:42:28.737002553Z" level=info msg="StartContainer for \"ca81716c540d0861115aba76b7b8289c7cfb69497def0d1ecc5924aa813650a5\"" Feb 9 19:42:28.772297 systemd[1]: Started cri-containerd-ca81716c540d0861115aba76b7b8289c7cfb69497def0d1ecc5924aa813650a5.scope. Feb 9 19:42:28.803366 systemd[1]: cri-containerd-ca81716c540d0861115aba76b7b8289c7cfb69497def0d1ecc5924aa813650a5.scope: Deactivated successfully. 
Feb 9 19:42:28.822756 env[1342]: time="2024-02-09T19:42:28.822687258Z" level=info msg="shim disconnected" id=ca81716c540d0861115aba76b7b8289c7cfb69497def0d1ecc5924aa813650a5 Feb 9 19:42:28.823072 env[1342]: time="2024-02-09T19:42:28.823048960Z" level=warning msg="cleaning up after shim disconnected" id=ca81716c540d0861115aba76b7b8289c7cfb69497def0d1ecc5924aa813650a5 namespace=k8s.io Feb 9 19:42:28.823222 env[1342]: time="2024-02-09T19:42:28.823206661Z" level=info msg="cleaning up dead shim" Feb 9 19:42:28.848072 env[1342]: time="2024-02-09T19:42:28.848003636Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:42:28Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4301 runtime=io.containerd.runc.v2\ntime=\"2024-02-09T19:42:28Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/ca81716c540d0861115aba76b7b8289c7cfb69497def0d1ecc5924aa813650a5/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Feb 9 19:42:28.848652 env[1342]: time="2024-02-09T19:42:28.848570640Z" level=error msg="copy shim log" error="read /proc/self/fd/35: file already closed" Feb 9 19:42:28.850643 env[1342]: time="2024-02-09T19:42:28.850598355Z" level=error msg="Failed to pipe stderr of container \"ca81716c540d0861115aba76b7b8289c7cfb69497def0d1ecc5924aa813650a5\"" error="reading from a closed fifo" Feb 9 19:42:28.850730 env[1342]: time="2024-02-09T19:42:28.848810342Z" level=error msg="Failed to pipe stdout of container \"ca81716c540d0861115aba76b7b8289c7cfb69497def0d1ecc5924aa813650a5\"" error="reading from a closed fifo" Feb 9 19:42:28.857704 env[1342]: time="2024-02-09T19:42:28.857649504Z" level=error msg="StartContainer for \"ca81716c540d0861115aba76b7b8289c7cfb69497def0d1ecc5924aa813650a5\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Feb 9 19:42:28.858153 kubelet[2444]: E0209 19:42:28.858122 2444 remote_runtime.go:343] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="ca81716c540d0861115aba76b7b8289c7cfb69497def0d1ecc5924aa813650a5" Feb 9 19:42:28.858302 kubelet[2444]: E0209 19:42:28.858286 2444 kuberuntime_manager.go:1209] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Feb 9 19:42:28.858302 kubelet[2444]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Feb 9 19:42:28.858302 kubelet[2444]: rm /hostbin/cilium-mount Feb 9 19:42:28.858302 kubelet[2444]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-c8v25,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-rqldm_kube-system(3bcf1c61-dd55-491d-a7a7-7304bb0b2f7a): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Feb 9 19:42:28.858584 kubelet[2444]: E0209 19:42:28.858359 2444 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-rqldm" podUID="3bcf1c61-dd55-491d-a7a7-7304bb0b2f7a" Feb 9 19:42:28.902748 sshd[4191]: Accepted publickey for core from 10.200.12.6 port 52520 ssh2: RSA SHA256:DU+Yi2nD7nw8jYgdAj8DCdA8ysRsrSuDu1TpdDncLY8 Feb 9 19:42:28.904260 sshd[4191]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:42:28.909394 systemd-logind[1327]: New session 25 of user core. Feb 9 19:42:28.909944 systemd[1]: Started session-25.scope. Feb 9 19:42:29.101104 kubelet[2444]: I0209 19:42:29.099672 2444 setters.go:552] "Node became not ready" node="ci-3510.3.2-a-4c52a92a5f" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-02-09T19:42:29Z","lastTransitionTime":"2024-02-09T19:42:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Feb 9 19:42:29.430109 sshd[4191]: pam_unix(sshd:session): session closed for user core Feb 9 19:42:29.433915 systemd[1]: sshd@22-10.200.8.13:22-10.200.12.6:52520.service: Deactivated successfully. Feb 9 19:42:29.435166 systemd[1]: session-25.scope: Deactivated successfully. Feb 9 19:42:29.436112 systemd-logind[1327]: Session 25 logged out. Waiting for processes to exit. Feb 9 19:42:29.437038 systemd-logind[1327]: Removed session 25. 
Feb 9 19:42:29.535970 systemd[1]: Started sshd@23-10.200.8.13:22-10.200.12.6:52530.service. Feb 9 19:42:29.697366 kubelet[2444]: I0209 19:42:29.697232 2444 scope.go:117] "RemoveContainer" containerID="00aa2eb98c9221e780a2da9df0beb5a66940fd1a1ffa0be604f03f7ae4c265cc" Feb 9 19:42:29.706917 env[1342]: time="2024-02-09T19:42:29.698383415Z" level=info msg="StopPodSandbox for \"4860982b647c4627f19b8ca1f2cad0fa83c7dbffef5a163e984a44aa30b63e83\"" Feb 9 19:42:29.706917 env[1342]: time="2024-02-09T19:42:29.698467815Z" level=info msg="Container to stop \"00aa2eb98c9221e780a2da9df0beb5a66940fd1a1ffa0be604f03f7ae4c265cc\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 19:42:29.706917 env[1342]: time="2024-02-09T19:42:29.698488715Z" level=info msg="Container to stop \"ca81716c540d0861115aba76b7b8289c7cfb69497def0d1ecc5924aa813650a5\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 19:42:29.706917 env[1342]: time="2024-02-09T19:42:29.705364964Z" level=info msg="RemoveContainer for \"00aa2eb98c9221e780a2da9df0beb5a66940fd1a1ffa0be604f03f7ae4c265cc\"" Feb 9 19:42:29.704082 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4860982b647c4627f19b8ca1f2cad0fa83c7dbffef5a163e984a44aa30b63e83-shm.mount: Deactivated successfully. Feb 9 19:42:29.723780 systemd[1]: cri-containerd-4860982b647c4627f19b8ca1f2cad0fa83c7dbffef5a163e984a44aa30b63e83.scope: Deactivated successfully. Feb 9 19:42:29.727346 env[1342]: time="2024-02-09T19:42:29.727297118Z" level=info msg="RemoveContainer for \"00aa2eb98c9221e780a2da9df0beb5a66940fd1a1ffa0be604f03f7ae4c265cc\" returns successfully" Feb 9 19:42:29.757392 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4860982b647c4627f19b8ca1f2cad0fa83c7dbffef5a163e984a44aa30b63e83-rootfs.mount: Deactivated successfully. 
Feb 9 19:42:29.773718 env[1342]: time="2024-02-09T19:42:29.773464442Z" level=info msg="shim disconnected" id=4860982b647c4627f19b8ca1f2cad0fa83c7dbffef5a163e984a44aa30b63e83 Feb 9 19:42:29.774072 env[1342]: time="2024-02-09T19:42:29.774028046Z" level=warning msg="cleaning up after shim disconnected" id=4860982b647c4627f19b8ca1f2cad0fa83c7dbffef5a163e984a44aa30b63e83 namespace=k8s.io Feb 9 19:42:29.774212 env[1342]: time="2024-02-09T19:42:29.774194047Z" level=info msg="cleaning up dead shim" Feb 9 19:42:29.792660 env[1342]: time="2024-02-09T19:42:29.792599976Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:42:29Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4344 runtime=io.containerd.runc.v2\n" Feb 9 19:42:29.793195 env[1342]: time="2024-02-09T19:42:29.793154180Z" level=info msg="TearDown network for sandbox \"4860982b647c4627f19b8ca1f2cad0fa83c7dbffef5a163e984a44aa30b63e83\" successfully" Feb 9 19:42:29.793195 env[1342]: time="2024-02-09T19:42:29.793189681Z" level=info msg="StopPodSandbox for \"4860982b647c4627f19b8ca1f2cad0fa83c7dbffef5a163e984a44aa30b63e83\" returns successfully" Feb 9 19:42:29.941132 kubelet[2444]: I0209 19:42:29.940540 2444 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3bcf1c61-dd55-491d-a7a7-7304bb0b2f7a-clustermesh-secrets\") pod \"3bcf1c61-dd55-491d-a7a7-7304bb0b2f7a\" (UID: \"3bcf1c61-dd55-491d-a7a7-7304bb0b2f7a\") " Feb 9 19:42:29.941501 kubelet[2444]: I0209 19:42:29.941482 2444 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3bcf1c61-dd55-491d-a7a7-7304bb0b2f7a-cilium-config-path\") pod \"3bcf1c61-dd55-491d-a7a7-7304bb0b2f7a\" (UID: \"3bcf1c61-dd55-491d-a7a7-7304bb0b2f7a\") " Feb 9 19:42:29.941724 kubelet[2444]: I0209 19:42:29.941696 2444 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3bcf1c61-dd55-491d-a7a7-7304bb0b2f7a-lib-modules\") pod \"3bcf1c61-dd55-491d-a7a7-7304bb0b2f7a\" (UID: \"3bcf1c61-dd55-491d-a7a7-7304bb0b2f7a\") " Feb 9 19:42:29.941867 kubelet[2444]: I0209 19:42:29.941853 2444 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3bcf1c61-dd55-491d-a7a7-7304bb0b2f7a-hostproc\") pod \"3bcf1c61-dd55-491d-a7a7-7304bb0b2f7a\" (UID: \"3bcf1c61-dd55-491d-a7a7-7304bb0b2f7a\") " Feb 9 19:42:29.942022 kubelet[2444]: I0209 19:42:29.942007 2444 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3bcf1c61-dd55-491d-a7a7-7304bb0b2f7a-hubble-tls\") pod \"3bcf1c61-dd55-491d-a7a7-7304bb0b2f7a\" (UID: \"3bcf1c61-dd55-491d-a7a7-7304bb0b2f7a\") " Feb 9 19:42:29.942179 kubelet[2444]: I0209 19:42:29.942165 2444 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3bcf1c61-dd55-491d-a7a7-7304bb0b2f7a-bpf-maps\") pod \"3bcf1c61-dd55-491d-a7a7-7304bb0b2f7a\" (UID: \"3bcf1c61-dd55-491d-a7a7-7304bb0b2f7a\") " Feb 9 19:42:29.942337 kubelet[2444]: I0209 19:42:29.942321 2444 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3bcf1c61-dd55-491d-a7a7-7304bb0b2f7a-host-proc-sys-kernel\") pod \"3bcf1c61-dd55-491d-a7a7-7304bb0b2f7a\" (UID: 
\"3bcf1c61-dd55-491d-a7a7-7304bb0b2f7a\") " Feb 9 19:42:29.942505 kubelet[2444]: I0209 19:42:29.942489 2444 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3bcf1c61-dd55-491d-a7a7-7304bb0b2f7a-xtables-lock\") pod \"3bcf1c61-dd55-491d-a7a7-7304bb0b2f7a\" (UID: \"3bcf1c61-dd55-491d-a7a7-7304bb0b2f7a\") " Feb 9 19:42:29.942679 kubelet[2444]: I0209 19:42:29.942662 2444 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3bcf1c61-dd55-491d-a7a7-7304bb0b2f7a-cilium-run\") pod \"3bcf1c61-dd55-491d-a7a7-7304bb0b2f7a\" (UID: \"3bcf1c61-dd55-491d-a7a7-7304bb0b2f7a\") " Feb 9 19:42:29.942831 kubelet[2444]: I0209 19:42:29.942815 2444 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3bcf1c61-dd55-491d-a7a7-7304bb0b2f7a-etc-cni-netd\") pod \"3bcf1c61-dd55-491d-a7a7-7304bb0b2f7a\" (UID: \"3bcf1c61-dd55-491d-a7a7-7304bb0b2f7a\") " Feb 9 19:42:29.942984 kubelet[2444]: I0209 19:42:29.942971 2444 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/3bcf1c61-dd55-491d-a7a7-7304bb0b2f7a-cilium-ipsec-secrets\") pod \"3bcf1c61-dd55-491d-a7a7-7304bb0b2f7a\" (UID: \"3bcf1c61-dd55-491d-a7a7-7304bb0b2f7a\") " Feb 9 19:42:29.943129 kubelet[2444]: I0209 19:42:29.943115 2444 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3bcf1c61-dd55-491d-a7a7-7304bb0b2f7a-host-proc-sys-net\") pod \"3bcf1c61-dd55-491d-a7a7-7304bb0b2f7a\" (UID: \"3bcf1c61-dd55-491d-a7a7-7304bb0b2f7a\") " Feb 9 19:42:29.943278 kubelet[2444]: I0209 19:42:29.943266 2444 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c8v25\" (UniqueName: \"kubernetes.io/projected/3bcf1c61-dd55-491d-a7a7-7304bb0b2f7a-kube-api-access-c8v25\") pod \"3bcf1c61-dd55-491d-a7a7-7304bb0b2f7a\" (UID: \"3bcf1c61-dd55-491d-a7a7-7304bb0b2f7a\") " Feb 9 19:42:29.943412 kubelet[2444]: I0209 19:42:29.943401 2444 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3bcf1c61-dd55-491d-a7a7-7304bb0b2f7a-cilium-cgroup\") pod \"3bcf1c61-dd55-491d-a7a7-7304bb0b2f7a\" (UID: \"3bcf1c61-dd55-491d-a7a7-7304bb0b2f7a\") " Feb 9 19:42:29.943537 kubelet[2444]: I0209 19:42:29.943524 2444 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3bcf1c61-dd55-491d-a7a7-7304bb0b2f7a-cni-path\") pod \"3bcf1c61-dd55-491d-a7a7-7304bb0b2f7a\" (UID: \"3bcf1c61-dd55-491d-a7a7-7304bb0b2f7a\") " Feb 9 19:42:29.943739 kubelet[2444]: I0209 19:42:29.943719 2444 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3bcf1c61-dd55-491d-a7a7-7304bb0b2f7a-cni-path" (OuterVolumeSpecName: "cni-path") pod "3bcf1c61-dd55-491d-a7a7-7304bb0b2f7a" (UID: "3bcf1c61-dd55-491d-a7a7-7304bb0b2f7a"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:42:29.944645 kubelet[2444]: I0209 19:42:29.944611 2444 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3bcf1c61-dd55-491d-a7a7-7304bb0b2f7a-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "3bcf1c61-dd55-491d-a7a7-7304bb0b2f7a" (UID: "3bcf1c61-dd55-491d-a7a7-7304bb0b2f7a"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:42:29.944774 kubelet[2444]: I0209 19:42:29.944662 2444 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3bcf1c61-dd55-491d-a7a7-7304bb0b2f7a-hostproc" (OuterVolumeSpecName: "hostproc") pod "3bcf1c61-dd55-491d-a7a7-7304bb0b2f7a" (UID: "3bcf1c61-dd55-491d-a7a7-7304bb0b2f7a"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:42:29.944936 kubelet[2444]: I0209 19:42:29.944902 2444 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3bcf1c61-dd55-491d-a7a7-7304bb0b2f7a-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "3bcf1c61-dd55-491d-a7a7-7304bb0b2f7a" (UID: "3bcf1c61-dd55-491d-a7a7-7304bb0b2f7a"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:42:29.945083 kubelet[2444]: I0209 19:42:29.945062 2444 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3bcf1c61-dd55-491d-a7a7-7304bb0b2f7a-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "3bcf1c61-dd55-491d-a7a7-7304bb0b2f7a" (UID: "3bcf1c61-dd55-491d-a7a7-7304bb0b2f7a"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:42:29.945232 kubelet[2444]: I0209 19:42:29.945211 2444 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3bcf1c61-dd55-491d-a7a7-7304bb0b2f7a-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "3bcf1c61-dd55-491d-a7a7-7304bb0b2f7a" (UID: "3bcf1c61-dd55-491d-a7a7-7304bb0b2f7a"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:42:29.945376 kubelet[2444]: I0209 19:42:29.945355 2444 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3bcf1c61-dd55-491d-a7a7-7304bb0b2f7a-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "3bcf1c61-dd55-491d-a7a7-7304bb0b2f7a" (UID: "3bcf1c61-dd55-491d-a7a7-7304bb0b2f7a"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:42:29.945517 kubelet[2444]: I0209 19:42:29.945498 2444 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3bcf1c61-dd55-491d-a7a7-7304bb0b2f7a-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "3bcf1c61-dd55-491d-a7a7-7304bb0b2f7a" (UID: "3bcf1c61-dd55-491d-a7a7-7304bb0b2f7a"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:42:29.945711 kubelet[2444]: I0209 19:42:29.945690 2444 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3bcf1c61-dd55-491d-a7a7-7304bb0b2f7a-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "3bcf1c61-dd55-491d-a7a7-7304bb0b2f7a" (UID: "3bcf1c61-dd55-491d-a7a7-7304bb0b2f7a"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:42:29.946520 kubelet[2444]: I0209 19:42:29.946494 2444 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3bcf1c61-dd55-491d-a7a7-7304bb0b2f7a-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "3bcf1c61-dd55-491d-a7a7-7304bb0b2f7a" (UID: "3bcf1c61-dd55-491d-a7a7-7304bb0b2f7a"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:42:29.950312 systemd[1]: var-lib-kubelet-pods-3bcf1c61\x2ddd55\x2d491d\x2da7a7\x2d7304bb0b2f7a-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 9 19:42:29.952255 kubelet[2444]: I0209 19:42:29.952232 2444 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3bcf1c61-dd55-491d-a7a7-7304bb0b2f7a-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "3bcf1c61-dd55-491d-a7a7-7304bb0b2f7a" (UID: "3bcf1c61-dd55-491d-a7a7-7304bb0b2f7a"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 9 19:42:29.954429 kubelet[2444]: I0209 19:42:29.954403 2444 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3bcf1c61-dd55-491d-a7a7-7304bb0b2f7a-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "3bcf1c61-dd55-491d-a7a7-7304bb0b2f7a" (UID: "3bcf1c61-dd55-491d-a7a7-7304bb0b2f7a"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 9 19:42:29.959034 systemd[1]: var-lib-kubelet-pods-3bcf1c61\x2ddd55\x2d491d\x2da7a7\x2d7304bb0b2f7a-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 9 19:42:29.960923 kubelet[2444]: I0209 19:42:29.960884 2444 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3bcf1c61-dd55-491d-a7a7-7304bb0b2f7a-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "3bcf1c61-dd55-491d-a7a7-7304bb0b2f7a" (UID: "3bcf1c61-dd55-491d-a7a7-7304bb0b2f7a"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 19:42:29.961736 kubelet[2444]: I0209 19:42:29.961708 2444 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3bcf1c61-dd55-491d-a7a7-7304bb0b2f7a-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "3bcf1c61-dd55-491d-a7a7-7304bb0b2f7a" (UID: "3bcf1c61-dd55-491d-a7a7-7304bb0b2f7a"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 9 19:42:29.962005 kubelet[2444]: I0209 19:42:29.961980 2444 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3bcf1c61-dd55-491d-a7a7-7304bb0b2f7a-kube-api-access-c8v25" (OuterVolumeSpecName: "kube-api-access-c8v25") pod "3bcf1c61-dd55-491d-a7a7-7304bb0b2f7a" (UID: "3bcf1c61-dd55-491d-a7a7-7304bb0b2f7a"). InnerVolumeSpecName "kube-api-access-c8v25". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 19:42:30.044511 kubelet[2444]: I0209 19:42:30.044450 2444 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3bcf1c61-dd55-491d-a7a7-7304bb0b2f7a-xtables-lock\") on node \"ci-3510.3.2-a-4c52a92a5f\" DevicePath \"\"" Feb 9 19:42:30.044511 kubelet[2444]: I0209 19:42:30.044499 2444 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3bcf1c61-dd55-491d-a7a7-7304bb0b2f7a-cilium-run\") on node \"ci-3510.3.2-a-4c52a92a5f\" DevicePath \"\"" Feb 9 19:42:30.044511 kubelet[2444]: I0209 19:42:30.044516 2444 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3bcf1c61-dd55-491d-a7a7-7304bb0b2f7a-etc-cni-netd\") on node \"ci-3510.3.2-a-4c52a92a5f\" DevicePath \"\"" Feb 9 19:42:30.044511 kubelet[2444]: I0209 19:42:30.044531 2444 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3bcf1c61-dd55-491d-a7a7-7304bb0b2f7a-bpf-maps\") on node \"ci-3510.3.2-a-4c52a92a5f\" DevicePath \"\"" Feb 9 19:42:30.044922 kubelet[2444]: I0209 19:42:30.044574 2444 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3bcf1c61-dd55-491d-a7a7-7304bb0b2f7a-host-proc-sys-kernel\") on node \"ci-3510.3.2-a-4c52a92a5f\" DevicePath \"\"" Feb 9 19:42:30.044922 kubelet[2444]: I0209 19:42:30.044594 2444 reconciler_common.go:300] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/3bcf1c61-dd55-491d-a7a7-7304bb0b2f7a-cilium-ipsec-secrets\") on node \"ci-3510.3.2-a-4c52a92a5f\" DevicePath \"\"" Feb 9 19:42:30.044922 kubelet[2444]: I0209 19:42:30.044614 2444 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3bcf1c61-dd55-491d-a7a7-7304bb0b2f7a-host-proc-sys-net\") on node \"ci-3510.3.2-a-4c52a92a5f\" DevicePath \"\"" Feb 9 19:42:30.044922 kubelet[2444]: I0209 19:42:30.044631 2444 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3bcf1c61-dd55-491d-a7a7-7304bb0b2f7a-cilium-cgroup\") on node \"ci-3510.3.2-a-4c52a92a5f\" DevicePath \"\"" Feb 9 19:42:30.044922 kubelet[2444]: I0209 19:42:30.044646 2444 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3bcf1c61-dd55-491d-a7a7-7304bb0b2f7a-cni-path\") on node \"ci-3510.3.2-a-4c52a92a5f\" DevicePath \"\"" Feb 9 19:42:30.044922 kubelet[2444]: I0209 19:42:30.044663 2444 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-c8v25\" (UniqueName: \"kubernetes.io/projected/3bcf1c61-dd55-491d-a7a7-7304bb0b2f7a-kube-api-access-c8v25\") on node \"ci-3510.3.2-a-4c52a92a5f\" DevicePath \"\"" Feb 9 19:42:30.044922 kubelet[2444]: I0209 19:42:30.044680 2444 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3bcf1c61-dd55-491d-a7a7-7304bb0b2f7a-clustermesh-secrets\") on node \"ci-3510.3.2-a-4c52a92a5f\" DevicePath \"\"" Feb 9 19:42:30.044922 kubelet[2444]: I0209 19:42:30.044722 2444 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3bcf1c61-dd55-491d-a7a7-7304bb0b2f7a-cilium-config-path\") on node \"ci-3510.3.2-a-4c52a92a5f\" DevicePath \"\"" Feb 9 19:42:30.044922 kubelet[2444]: I0209 19:42:30.044744 2444 
reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3bcf1c61-dd55-491d-a7a7-7304bb0b2f7a-lib-modules\") on node \"ci-3510.3.2-a-4c52a92a5f\" DevicePath \"\"" Feb 9 19:42:30.044922 kubelet[2444]: I0209 19:42:30.044764 2444 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3bcf1c61-dd55-491d-a7a7-7304bb0b2f7a-hubble-tls\") on node \"ci-3510.3.2-a-4c52a92a5f\" DevicePath \"\"" Feb 9 19:42:30.044922 kubelet[2444]: I0209 19:42:30.044781 2444 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3bcf1c61-dd55-491d-a7a7-7304bb0b2f7a-hostproc\") on node \"ci-3510.3.2-a-4c52a92a5f\" DevicePath \"\"" Feb 9 19:42:30.153018 sshd[4323]: Accepted publickey for core from 10.200.12.6 port 52530 ssh2: RSA SHA256:DU+Yi2nD7nw8jYgdAj8DCdA8ysRsrSuDu1TpdDncLY8 Feb 9 19:42:30.154834 sshd[4323]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:42:30.159948 systemd[1]: Started session-26.scope. Feb 9 19:42:30.160398 systemd-logind[1327]: New session 26 of user core. Feb 9 19:42:30.221866 kubelet[2444]: E0209 19:42:30.221700 2444 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 9 19:42:30.249944 systemd[1]: var-lib-kubelet-pods-3bcf1c61\x2ddd55\x2d491d\x2da7a7\x2d7304bb0b2f7a-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dc8v25.mount: Deactivated successfully. Feb 9 19:42:30.250099 systemd[1]: var-lib-kubelet-pods-3bcf1c61\x2ddd55\x2d491d\x2da7a7\x2d7304bb0b2f7a-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Feb 9 19:42:30.700820 kubelet[2444]: I0209 19:42:30.700769 2444 scope.go:117] "RemoveContainer" containerID="ca81716c540d0861115aba76b7b8289c7cfb69497def0d1ecc5924aa813650a5" Feb 9 19:42:30.706730 systemd[1]: Removed slice kubepods-burstable-pod3bcf1c61_dd55_491d_a7a7_7304bb0b2f7a.slice. 
Feb 9 19:42:30.708897 env[1342]: time="2024-02-09T19:42:30.708816489Z" level=info msg="RemoveContainer for \"ca81716c540d0861115aba76b7b8289c7cfb69497def0d1ecc5924aa813650a5\"" Feb 9 19:42:30.716849 env[1342]: time="2024-02-09T19:42:30.716794145Z" level=info msg="RemoveContainer for \"ca81716c540d0861115aba76b7b8289c7cfb69497def0d1ecc5924aa813650a5\" returns successfully" Feb 9 19:42:30.769840 kubelet[2444]: I0209 19:42:30.769793 2444 topology_manager.go:215] "Topology Admit Handler" podUID="38406746-1c76-497e-b145-6afaa5fb1646" podNamespace="kube-system" podName="cilium-4gfll" Feb 9 19:42:30.770171 kubelet[2444]: E0209 19:42:30.770150 2444 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3bcf1c61-dd55-491d-a7a7-7304bb0b2f7a" containerName="mount-cgroup" Feb 9 19:42:30.770333 kubelet[2444]: E0209 19:42:30.770288 2444 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3bcf1c61-dd55-491d-a7a7-7304bb0b2f7a" containerName="mount-cgroup" Feb 9 19:42:30.770443 kubelet[2444]: I0209 19:42:30.770339 2444 memory_manager.go:346] "RemoveStaleState removing state" podUID="3bcf1c61-dd55-491d-a7a7-7304bb0b2f7a" containerName="mount-cgroup" Feb 9 19:42:30.770443 kubelet[2444]: I0209 19:42:30.770350 2444 memory_manager.go:346] "RemoveStaleState removing state" podUID="3bcf1c61-dd55-491d-a7a7-7304bb0b2f7a" containerName="mount-cgroup" Feb 9 19:42:30.776788 systemd[1]: Created slice kubepods-burstable-pod38406746_1c76_497e_b145_6afaa5fb1646.slice. Feb 9 19:42:30.950520 kubelet[2444]: I0209 19:42:30.950474 2444 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/38406746-1c76-497e-b145-6afaa5fb1646-cilium-ipsec-secrets\") pod \"cilium-4gfll\" (UID: \"38406746-1c76-497e-b145-6afaa5fb1646\") " pod="kube-system/cilium-4gfll" Feb 9 19:42:30.950520 kubelet[2444]: I0209 19:42:30.950534 2444 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/38406746-1c76-497e-b145-6afaa5fb1646-lib-modules\") pod \"cilium-4gfll\" (UID: \"38406746-1c76-497e-b145-6afaa5fb1646\") " pod="kube-system/cilium-4gfll" Feb 9 19:42:30.950818 kubelet[2444]: I0209 19:42:30.950584 2444 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/38406746-1c76-497e-b145-6afaa5fb1646-clustermesh-secrets\") pod \"cilium-4gfll\" (UID: \"38406746-1c76-497e-b145-6afaa5fb1646\") " pod="kube-system/cilium-4gfll" Feb 9 19:42:30.950818 kubelet[2444]: I0209 19:42:30.950626 2444 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/38406746-1c76-497e-b145-6afaa5fb1646-hostproc\") pod \"cilium-4gfll\" (UID: \"38406746-1c76-497e-b145-6afaa5fb1646\") " pod="kube-system/cilium-4gfll" Feb 9 19:42:30.950818 kubelet[2444]: I0209 19:42:30.950652 2444 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/38406746-1c76-497e-b145-6afaa5fb1646-cni-path\") pod \"cilium-4gfll\" (UID: \"38406746-1c76-497e-b145-6afaa5fb1646\") " pod="kube-system/cilium-4gfll" Feb 9 19:42:30.950818 kubelet[2444]: I0209 19:42:30.950676 2444 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/38406746-1c76-497e-b145-6afaa5fb1646-etc-cni-netd\") pod \"cilium-4gfll\" (UID: \"38406746-1c76-497e-b145-6afaa5fb1646\") " pod="kube-system/cilium-4gfll" Feb 9 19:42:30.950818 kubelet[2444]: I0209 19:42:30.950715 2444 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/38406746-1c76-497e-b145-6afaa5fb1646-host-proc-sys-kernel\") pod \"cilium-4gfll\" (UID: \"38406746-1c76-497e-b145-6afaa5fb1646\") " pod="kube-system/cilium-4gfll" Feb 9 19:42:30.950818 kubelet[2444]: I0209 19:42:30.950785 2444 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/38406746-1c76-497e-b145-6afaa5fb1646-cilium-cgroup\") pod \"cilium-4gfll\" (UID: \"38406746-1c76-497e-b145-6afaa5fb1646\") " pod="kube-system/cilium-4gfll" Feb 9 19:42:30.950818 kubelet[2444]: I0209 19:42:30.950814 2444 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/38406746-1c76-497e-b145-6afaa5fb1646-cilium-config-path\") pod \"cilium-4gfll\" (UID: \"38406746-1c76-497e-b145-6afaa5fb1646\") " pod="kube-system/cilium-4gfll" Feb 9 19:42:30.951179 kubelet[2444]: I0209 19:42:30.950858 2444 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/38406746-1c76-497e-b145-6afaa5fb1646-host-proc-sys-net\") pod \"cilium-4gfll\" (UID: \"38406746-1c76-497e-b145-6afaa5fb1646\") " pod="kube-system/cilium-4gfll" Feb 9 19:42:30.951179 kubelet[2444]: I0209 19:42:30.950889 2444 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/38406746-1c76-497e-b145-6afaa5fb1646-hubble-tls\") pod \"cilium-4gfll\" (UID: \"38406746-1c76-497e-b145-6afaa5fb1646\") " pod="kube-system/cilium-4gfll" Feb 9 19:42:30.951179 kubelet[2444]: I0209 19:42:30.950938 2444 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/38406746-1c76-497e-b145-6afaa5fb1646-cilium-run\") pod \"cilium-4gfll\" (UID: \"38406746-1c76-497e-b145-6afaa5fb1646\") " pod="kube-system/cilium-4gfll" Feb 9 19:42:30.951179 kubelet[2444]: I0209 19:42:30.950967 2444 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fnqnf\" (UniqueName: \"kubernetes.io/projected/38406746-1c76-497e-b145-6afaa5fb1646-kube-api-access-fnqnf\") pod \"cilium-4gfll\" (UID: \"38406746-1c76-497e-b145-6afaa5fb1646\") " pod="kube-system/cilium-4gfll" Feb 9 19:42:30.951179 kubelet[2444]: I0209 19:42:30.951011 2444 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/38406746-1c76-497e-b145-6afaa5fb1646-bpf-maps\") pod \"cilium-4gfll\" (UID: \"38406746-1c76-497e-b145-6afaa5fb1646\") " pod="kube-system/cilium-4gfll" Feb 9 19:42:30.951179 kubelet[2444]: I0209 19:42:30.951044 2444 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/38406746-1c76-497e-b145-6afaa5fb1646-xtables-lock\") pod \"cilium-4gfll\" (UID: \"38406746-1c76-497e-b145-6afaa5fb1646\") " pod="kube-system/cilium-4gfll" 
Feb 9 19:42:31.080746 kubelet[2444]: I0209 19:42:31.080699 2444 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="3bcf1c61-dd55-491d-a7a7-7304bb0b2f7a" path="/var/lib/kubelet/pods/3bcf1c61-dd55-491d-a7a7-7304bb0b2f7a/volumes" Feb 9 19:42:31.381035 env[1342]: time="2024-02-09T19:42:31.380889776Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4gfll,Uid:38406746-1c76-497e-b145-6afaa5fb1646,Namespace:kube-system,Attempt:0,}" Feb 9 19:42:31.415090 env[1342]: time="2024-02-09T19:42:31.415017714Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:42:31.415090 env[1342]: time="2024-02-09T19:42:31.415058114Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:42:31.415360 env[1342]: time="2024-02-09T19:42:31.415070914Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:42:31.415644 env[1342]: time="2024-02-09T19:42:31.415597318Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/278e69a1a0f7f9a173b53ff3f19c4993a406bd32922d6803164cdc0c0d2c6da8 pid=4381 runtime=io.containerd.runc.v2 Feb 9 19:42:31.441544 systemd[1]: run-containerd-runc-k8s.io-278e69a1a0f7f9a173b53ff3f19c4993a406bd32922d6803164cdc0c0d2c6da8-runc.a3EQbS.mount: Deactivated successfully. Feb 9 19:42:31.445161 systemd[1]: Started cri-containerd-278e69a1a0f7f9a173b53ff3f19c4993a406bd32922d6803164cdc0c0d2c6da8.scope. Feb 9 19:42:31.471905 env[1342]: time="2024-02-09T19:42:31.471851109Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4gfll,Uid:38406746-1c76-497e-b145-6afaa5fb1646,Namespace:kube-system,Attempt:0,} returns sandbox id \"278e69a1a0f7f9a173b53ff3f19c4993a406bd32922d6803164cdc0c0d2c6da8\"" Feb 9 19:42:31.475323 env[1342]: time="2024-02-09T19:42:31.475272233Z" level=info msg="CreateContainer within sandbox \"278e69a1a0f7f9a173b53ff3f19c4993a406bd32922d6803164cdc0c0d2c6da8\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 9 19:42:31.505997 env[1342]: time="2024-02-09T19:42:31.505929747Z" level=info msg="CreateContainer within sandbox \"278e69a1a0f7f9a173b53ff3f19c4993a406bd32922d6803164cdc0c0d2c6da8\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"021fab901fae11f9af00993997f601526207172a5f4107abc53e20f3f7578de1\"" Feb 9 19:42:31.508082 env[1342]: time="2024-02-09T19:42:31.506757552Z" level=info msg="StartContainer for \"021fab901fae11f9af00993997f601526207172a5f4107abc53e20f3f7578de1\"" Feb 9 19:42:31.525677 systemd[1]: Started cri-containerd-021fab901fae11f9af00993997f601526207172a5f4107abc53e20f3f7578de1.scope. Feb 9 19:42:31.560429 systemd[1]: cri-containerd-021fab901fae11f9af00993997f601526207172a5f4107abc53e20f3f7578de1.scope: Deactivated successfully. 
Feb 9 19:42:31.575970 env[1342]: time="2024-02-09T19:42:31.575913934Z" level=info msg="StartContainer for \"021fab901fae11f9af00993997f601526207172a5f4107abc53e20f3f7578de1\" returns successfully" Feb 9 19:42:31.626830 env[1342]: time="2024-02-09T19:42:31.626769388Z" level=info msg="shim disconnected" id=021fab901fae11f9af00993997f601526207172a5f4107abc53e20f3f7578de1 Feb 9 19:42:31.626830 env[1342]: time="2024-02-09T19:42:31.626841588Z" level=warning msg="cleaning up after shim disconnected" id=021fab901fae11f9af00993997f601526207172a5f4107abc53e20f3f7578de1 namespace=k8s.io Feb 9 19:42:31.627204 env[1342]: time="2024-02-09T19:42:31.626855988Z" level=info msg="cleaning up dead shim" Feb 9 19:42:31.636588 env[1342]: time="2024-02-09T19:42:31.636419955Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:42:31Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4470 runtime=io.containerd.runc.v2\n" Feb 9 19:42:31.671072 kubelet[2444]: W0209 19:42:31.670983 2444 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3bcf1c61_dd55_491d_a7a7_7304bb0b2f7a.slice/cri-containerd-00aa2eb98c9221e780a2da9df0beb5a66940fd1a1ffa0be604f03f7ae4c265cc.scope WatchSource:0}: container "00aa2eb98c9221e780a2da9df0beb5a66940fd1a1ffa0be604f03f7ae4c265cc" in namespace "k8s.io": not found Feb 9 19:42:31.713080 env[1342]: time="2024-02-09T19:42:31.713030188Z" level=info msg="CreateContainer within sandbox \"278e69a1a0f7f9a173b53ff3f19c4993a406bd32922d6803164cdc0c0d2c6da8\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 9 19:42:31.749725 env[1342]: time="2024-02-09T19:42:31.749665143Z" level=info msg="CreateContainer within sandbox \"278e69a1a0f7f9a173b53ff3f19c4993a406bd32922d6803164cdc0c0d2c6da8\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"bbc9acccdbd063f7644582775b25d870412c63b705b83a4fb26aa8365ed5ee6b\"" Feb 9 19:42:31.750460 env[1342]: time="2024-02-09T19:42:31.750424048Z" level=info msg="StartContainer for \"bbc9acccdbd063f7644582775b25d870412c63b705b83a4fb26aa8365ed5ee6b\"" Feb 9 19:42:31.769708 systemd[1]: Started cri-containerd-bbc9acccdbd063f7644582775b25d870412c63b705b83a4fb26aa8365ed5ee6b.scope. Feb 9 19:42:31.807027 systemd[1]: cri-containerd-bbc9acccdbd063f7644582775b25d870412c63b705b83a4fb26aa8365ed5ee6b.scope: Deactivated successfully. 
Feb 9 19:42:31.810106 env[1342]: time="2024-02-09T19:42:31.809517260Z" level=info msg="StartContainer for \"bbc9acccdbd063f7644582775b25d870412c63b705b83a4fb26aa8365ed5ee6b\" returns successfully" Feb 9 19:42:31.848160 env[1342]: time="2024-02-09T19:42:31.848090228Z" level=info msg="shim disconnected" id=bbc9acccdbd063f7644582775b25d870412c63b705b83a4fb26aa8365ed5ee6b Feb 9 19:42:31.848160 env[1342]: time="2024-02-09T19:42:31.848161529Z" level=warning msg="cleaning up after shim disconnected" id=bbc9acccdbd063f7644582775b25d870412c63b705b83a4fb26aa8365ed5ee6b namespace=k8s.io Feb 9 19:42:31.848160 env[1342]: time="2024-02-09T19:42:31.848176029Z" level=info msg="cleaning up dead shim" Feb 9 19:42:31.862467 env[1342]: time="2024-02-09T19:42:31.862408528Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:42:31Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4529 runtime=io.containerd.runc.v2\n" Feb 9 19:42:32.712449 env[1342]: time="2024-02-09T19:42:32.712397322Z" level=info msg="CreateContainer within sandbox \"278e69a1a0f7f9a173b53ff3f19c4993a406bd32922d6803164cdc0c0d2c6da8\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 9 19:42:32.755073 env[1342]: time="2024-02-09T19:42:32.755012017Z" level=info msg="CreateContainer within sandbox \"278e69a1a0f7f9a173b53ff3f19c4993a406bd32922d6803164cdc0c0d2c6da8\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"3d91734d06226ad530fe9c5b7af6233be48aa5db213e1a6fdd8ce06776490e3a\"" Feb 9 19:42:32.755984 env[1342]: time="2024-02-09T19:42:32.755939723Z" level=info msg="StartContainer for \"3d91734d06226ad530fe9c5b7af6233be48aa5db213e1a6fdd8ce06776490e3a\"" Feb 9 19:42:32.783877 systemd[1]: Started cri-containerd-3d91734d06226ad530fe9c5b7af6233be48aa5db213e1a6fdd8ce06776490e3a.scope. Feb 9 19:42:32.817748 systemd[1]: cri-containerd-3d91734d06226ad530fe9c5b7af6233be48aa5db213e1a6fdd8ce06776490e3a.scope: Deactivated successfully. Feb 9 19:42:32.820386 env[1342]: time="2024-02-09T19:42:32.820334769Z" level=info msg="StartContainer for \"3d91734d06226ad530fe9c5b7af6233be48aa5db213e1a6fdd8ce06776490e3a\" returns successfully" Feb 9 19:42:32.853438 env[1342]: time="2024-02-09T19:42:32.853381998Z" level=info msg="shim disconnected" id=3d91734d06226ad530fe9c5b7af6233be48aa5db213e1a6fdd8ce06776490e3a Feb 9 19:42:32.853438 env[1342]: time="2024-02-09T19:42:32.853433399Z" level=warning msg="cleaning up after shim disconnected" id=3d91734d06226ad530fe9c5b7af6233be48aa5db213e1a6fdd8ce06776490e3a namespace=k8s.io Feb 9 19:42:32.853438 env[1342]: time="2024-02-09T19:42:32.853445199Z" level=info msg="cleaning up dead shim" Feb 9 19:42:32.862207 env[1342]: time="2024-02-09T19:42:32.862153359Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:42:32Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4590 runtime=io.containerd.runc.v2\n" Feb 9 19:42:33.406678 systemd[1]: run-containerd-runc-k8s.io-3d91734d06226ad530fe9c5b7af6233be48aa5db213e1a6fdd8ce06776490e3a-runc.O700jB.mount: Deactivated successfully. Feb 9 19:42:33.407070 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3d91734d06226ad530fe9c5b7af6233be48aa5db213e1a6fdd8ce06776490e3a-rootfs.mount: Deactivated successfully. 
Feb 9 19:42:33.717599 env[1342]: time="2024-02-09T19:42:33.717455263Z" level=info msg="CreateContainer within sandbox \"278e69a1a0f7f9a173b53ff3f19c4993a406bd32922d6803164cdc0c0d2c6da8\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 9 19:42:33.765848 env[1342]: time="2024-02-09T19:42:33.765790497Z" level=info msg="CreateContainer within sandbox \"278e69a1a0f7f9a173b53ff3f19c4993a406bd32922d6803164cdc0c0d2c6da8\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"d01c44ba5d5b89833b03f58edf60e1361b145a583a9eb4c9633d7af09a8b5129\"" Feb 9 19:42:33.766814 env[1342]: time="2024-02-09T19:42:33.766769403Z" level=info msg="StartContainer for \"d01c44ba5d5b89833b03f58edf60e1361b145a583a9eb4c9633d7af09a8b5129\"" Feb 9 19:42:33.797186 systemd[1]: Started cri-containerd-d01c44ba5d5b89833b03f58edf60e1361b145a583a9eb4c9633d7af09a8b5129.scope. Feb 9 19:42:33.825062 systemd[1]: cri-containerd-d01c44ba5d5b89833b03f58edf60e1361b145a583a9eb4c9633d7af09a8b5129.scope: Deactivated successfully. Feb 9 19:42:33.827026 env[1342]: time="2024-02-09T19:42:33.826937618Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod38406746_1c76_497e_b145_6afaa5fb1646.slice/cri-containerd-d01c44ba5d5b89833b03f58edf60e1361b145a583a9eb4c9633d7af09a8b5129.scope/memory.events\": no such file or directory" Feb 9 19:42:33.832176 env[1342]: time="2024-02-09T19:42:33.832112054Z" level=info msg="StartContainer for \"d01c44ba5d5b89833b03f58edf60e1361b145a583a9eb4c9633d7af09a8b5129\" returns successfully" Feb 9 19:42:33.870652 env[1342]: time="2024-02-09T19:42:33.870597820Z" level=info msg="shim disconnected" id=d01c44ba5d5b89833b03f58edf60e1361b145a583a9eb4c9633d7af09a8b5129 Feb 9 19:42:33.870956 env[1342]: time="2024-02-09T19:42:33.870921422Z" level=warning msg="cleaning up after shim disconnected" id=d01c44ba5d5b89833b03f58edf60e1361b145a583a9eb4c9633d7af09a8b5129 namespace=k8s.io Feb 9 19:42:33.870956 env[1342]: time="2024-02-09T19:42:33.870942722Z" level=info msg="cleaning up dead shim" Feb 9 19:42:33.880773 env[1342]: time="2024-02-09T19:42:33.880713389Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:42:33Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4649 runtime=io.containerd.runc.v2\n" Feb 9 19:42:34.406754 systemd[1]: run-containerd-runc-k8s.io-d01c44ba5d5b89833b03f58edf60e1361b145a583a9eb4c9633d7af09a8b5129-runc.yTzD2j.mount: Deactivated successfully. Feb 9 19:42:34.406922 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d01c44ba5d5b89833b03f58edf60e1361b145a583a9eb4c9633d7af09a8b5129-rootfs.mount: Deactivated successfully. 
Feb 9 19:42:34.722830 env[1342]: time="2024-02-09T19:42:34.722699975Z" level=info msg="CreateContainer within sandbox \"278e69a1a0f7f9a173b53ff3f19c4993a406bd32922d6803164cdc0c0d2c6da8\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 9 19:42:34.764959 env[1342]: time="2024-02-09T19:42:34.764890265Z" level=info msg="CreateContainer within sandbox \"278e69a1a0f7f9a173b53ff3f19c4993a406bd32922d6803164cdc0c0d2c6da8\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"e8ab819e2c7266d8ceea6b495c31af7dcd12d3fe7d2498c5c52b1a0e05654cd1\"" Feb 9 19:42:34.765669 env[1342]: time="2024-02-09T19:42:34.765632770Z" level=info msg="StartContainer for \"e8ab819e2c7266d8ceea6b495c31af7dcd12d3fe7d2498c5c52b1a0e05654cd1\"" Feb 9 19:42:34.783214 kubelet[2444]: W0209 19:42:34.783159 2444 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod38406746_1c76_497e_b145_6afaa5fb1646.slice/cri-containerd-021fab901fae11f9af00993997f601526207172a5f4107abc53e20f3f7578de1.scope WatchSource:0}: task 021fab901fae11f9af00993997f601526207172a5f4107abc53e20f3f7578de1 not found: not found Feb 9 19:42:34.797726 systemd[1]: Started cri-containerd-e8ab819e2c7266d8ceea6b495c31af7dcd12d3fe7d2498c5c52b1a0e05654cd1.scope. Feb 9 19:42:34.845398 env[1342]: time="2024-02-09T19:42:34.845326618Z" level=info msg="StartContainer for \"e8ab819e2c7266d8ceea6b495c31af7dcd12d3fe7d2498c5c52b1a0e05654cd1\" returns successfully" Feb 9 19:42:35.305587 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Feb 9 19:42:35.409028 systemd[1]: run-containerd-runc-k8s.io-e8ab819e2c7266d8ceea6b495c31af7dcd12d3fe7d2498c5c52b1a0e05654cd1-runc.pSTkLQ.mount: Deactivated successfully. Feb 9 19:42:36.709306 systemd[1]: run-containerd-runc-k8s.io-e8ab819e2c7266d8ceea6b495c31af7dcd12d3fe7d2498c5c52b1a0e05654cd1-runc.ViCnCy.mount: Deactivated successfully. Feb 9 19:42:37.891503 kubelet[2444]: W0209 19:42:37.891452 2444 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod38406746_1c76_497e_b145_6afaa5fb1646.slice/cri-containerd-bbc9acccdbd063f7644582775b25d870412c63b705b83a4fb26aa8365ed5ee6b.scope WatchSource:0}: task bbc9acccdbd063f7644582775b25d870412c63b705b83a4fb26aa8365ed5ee6b not found: not found Feb 9 19:42:38.041325 systemd-networkd[1485]: lxc_health: Link UP Feb 9 19:42:38.054205 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Feb 9 19:42:38.054027 systemd-networkd[1485]: lxc_health: Gained carrier Feb 9 19:42:38.896873 systemd[1]: run-containerd-runc-k8s.io-e8ab819e2c7266d8ceea6b495c31af7dcd12d3fe7d2498c5c52b1a0e05654cd1-runc.jjgC60.mount: Deactivated successfully. 
Feb 9 19:42:39.410338 kubelet[2444]: I0209 19:42:39.410295 2444 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-4gfll" podStartSLOduration=9.410245513 podCreationTimestamp="2024-02-09 19:42:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:42:35.739611737 +0000 UTC m=+250.833638280" watchObservedRunningTime="2024-02-09 19:42:39.410245513 +0000 UTC m=+254.504272156" Feb 9 19:42:39.858788 systemd-networkd[1485]: lxc_health: Gained IPv6LL Feb 9 19:42:41.006073 kubelet[2444]: W0209 19:42:41.006024 2444 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod38406746_1c76_497e_b145_6afaa5fb1646.slice/cri-containerd-3d91734d06226ad530fe9c5b7af6233be48aa5db213e1a6fdd8ce06776490e3a.scope WatchSource:0}: task 3d91734d06226ad530fe9c5b7af6233be48aa5db213e1a6fdd8ce06776490e3a not found: not found Feb 9 19:42:41.086707 systemd[1]: run-containerd-runc-k8s.io-e8ab819e2c7266d8ceea6b495c31af7dcd12d3fe7d2498c5c52b1a0e05654cd1-runc.St6XBd.mount: Deactivated successfully. Feb 9 19:42:43.272372 systemd[1]: run-containerd-runc-k8s.io-e8ab819e2c7266d8ceea6b495c31af7dcd12d3fe7d2498c5c52b1a0e05654cd1-runc.796oCk.mount: Deactivated successfully. Feb 9 19:42:44.119901 kubelet[2444]: W0209 19:42:44.119843 2444 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod38406746_1c76_497e_b145_6afaa5fb1646.slice/cri-containerd-d01c44ba5d5b89833b03f58edf60e1361b145a583a9eb4c9633d7af09a8b5129.scope WatchSource:0}: task d01c44ba5d5b89833b03f58edf60e1361b145a583a9eb4c9633d7af09a8b5129 not found: not found Feb 9 19:42:45.592599 sshd[4323]: pam_unix(sshd:session): session closed for user core Feb 9 19:42:45.596523 systemd[1]: sshd@23-10.200.8.13:22-10.200.12.6:52530.service: Deactivated successfully. Feb 9 19:42:45.597588 systemd[1]: session-26.scope: Deactivated successfully. Feb 9 19:42:45.598443 systemd-logind[1327]: Session 26 logged out. Waiting for processes to exit. Feb 9 19:42:45.599336 systemd-logind[1327]: Removed session 26. Feb 9 19:42:59.572074 systemd[1]: cri-containerd-c8c47882f27331eb087f9382755fb6a2cac7f6ddebe03f75ad9bda8004fcf91a.scope: Deactivated successfully. Feb 9 19:42:59.572429 systemd[1]: cri-containerd-c8c47882f27331eb087f9382755fb6a2cac7f6ddebe03f75ad9bda8004fcf91a.scope: Consumed 4.097s CPU time. Feb 9 19:42:59.594907 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c8c47882f27331eb087f9382755fb6a2cac7f6ddebe03f75ad9bda8004fcf91a-rootfs.mount: Deactivated successfully. 
Feb 9 19:42:59.620839 env[1342]: time="2024-02-09T19:42:59.620779792Z" level=info msg="shim disconnected" id=c8c47882f27331eb087f9382755fb6a2cac7f6ddebe03f75ad9bda8004fcf91a Feb 9 19:42:59.620839 env[1342]: time="2024-02-09T19:42:59.620840493Z" level=warning msg="cleaning up after shim disconnected" id=c8c47882f27331eb087f9382755fb6a2cac7f6ddebe03f75ad9bda8004fcf91a namespace=k8s.io Feb 9 19:42:59.621433 env[1342]: time="2024-02-09T19:42:59.620855293Z" level=info msg="cleaning up dead shim" Feb 9 19:42:59.629911 env[1342]: time="2024-02-09T19:42:59.629855049Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:42:59Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5355 runtime=io.containerd.runc.v2\n" Feb 9 19:42:59.636797 kubelet[2444]: E0209 19:42:59.636758 2444 controller.go:193] "Failed to update lease" err="Put \"https://10.200.8.13:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-a-4c52a92a5f?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 9 19:42:59.781065 kubelet[2444]: I0209 19:42:59.779821 2444 scope.go:117] "RemoveContainer" containerID="c8c47882f27331eb087f9382755fb6a2cac7f6ddebe03f75ad9bda8004fcf91a" Feb 9 19:42:59.783452 env[1342]: time="2024-02-09T19:42:59.783404608Z" level=info msg="CreateContainer within sandbox \"76d5373a7cf78d0409b34dab4461673e51e2fcc1c233146faea3337466e37658\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Feb 9 19:42:59.816561 env[1342]: time="2024-02-09T19:42:59.816489715Z" level=info msg="CreateContainer within sandbox \"76d5373a7cf78d0409b34dab4461673e51e2fcc1c233146faea3337466e37658\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"3c7e4616b014b78f5172f888a4f9e052d22813b186c85ea4a1d43e11be19630e\"" Feb 9 19:42:59.817163 env[1342]: time="2024-02-09T19:42:59.817129919Z" level=info msg="StartContainer for \"3c7e4616b014b78f5172f888a4f9e052d22813b186c85ea4a1d43e11be19630e\"" Feb 9 19:42:59.842134 systemd[1]: Started cri-containerd-3c7e4616b014b78f5172f888a4f9e052d22813b186c85ea4a1d43e11be19630e.scope. Feb 9 19:42:59.902860 env[1342]: time="2024-02-09T19:42:59.902796854Z" level=info msg="StartContainer for \"3c7e4616b014b78f5172f888a4f9e052d22813b186c85ea4a1d43e11be19630e\" returns successfully" Feb 9 19:43:03.291796 kubelet[2444]: E0209 19:43:03.291503 2444 controller.go:193] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.200.8.13:41936->10.200.8.10:2379: read: connection timed out" Feb 9 19:43:03.292977 systemd[1]: cri-containerd-8970e403e6515c54b0f0c1d36aab779cad946c2a9e27e6f4524faad0439fc777.scope: Deactivated successfully. Feb 9 19:43:03.293324 systemd[1]: cri-containerd-8970e403e6515c54b0f0c1d36aab779cad946c2a9e27e6f4524faad0439fc777.scope: Consumed 2.202s CPU time. Feb 9 19:43:03.316668 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8970e403e6515c54b0f0c1d36aab779cad946c2a9e27e6f4524faad0439fc777-rootfs.mount: Deactivated successfully. 
Feb 9 19:43:03.342469 env[1342]: time="2024-02-09T19:43:03.342410586Z" level=info msg="shim disconnected" id=8970e403e6515c54b0f0c1d36aab779cad946c2a9e27e6f4524faad0439fc777 Feb 9 19:43:03.342469 env[1342]: time="2024-02-09T19:43:03.342466486Z" level=warning msg="cleaning up after shim disconnected" id=8970e403e6515c54b0f0c1d36aab779cad946c2a9e27e6f4524faad0439fc777 namespace=k8s.io Feb 9 19:43:03.343082 env[1342]: time="2024-02-09T19:43:03.342479186Z" level=info msg="cleaning up dead shim" Feb 9 19:43:03.351420 env[1342]: time="2024-02-09T19:43:03.351367941Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:43:03Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5418 runtime=io.containerd.runc.v2\n" Feb 9 19:43:03.790565 kubelet[2444]: I0209 19:43:03.790527 2444 scope.go:117] "RemoveContainer" containerID="8970e403e6515c54b0f0c1d36aab779cad946c2a9e27e6f4524faad0439fc777" Feb 9 19:43:03.792560 env[1342]: time="2024-02-09T19:43:03.792500761Z" level=info msg="CreateContainer within sandbox \"5caa3a18d9f48c1a430c81b984eed040114685ba70df35e6dc52dc86b4e3f369\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Feb 9 19:43:03.834081 env[1342]: time="2024-02-09T19:43:03.834019217Z" level=info msg="CreateContainer within sandbox \"5caa3a18d9f48c1a430c81b984eed040114685ba70df35e6dc52dc86b4e3f369\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"af4b4280cfca61097c54bb0a1bac803c86131a460185abe48534ba023c8edede\"" Feb 9 19:43:03.834826 env[1342]: time="2024-02-09T19:43:03.834788721Z" level=info msg="StartContainer for \"af4b4280cfca61097c54bb0a1bac803c86131a460185abe48534ba023c8edede\"" Feb 9 19:43:03.861022 systemd[1]: Started cri-containerd-af4b4280cfca61097c54bb0a1bac803c86131a460185abe48534ba023c8edede.scope. 
Feb 9 19:43:03.916027 env[1342]: time="2024-02-09T19:43:03.915959022Z" level=info msg="StartContainer for \"af4b4280cfca61097c54bb0a1bac803c86131a460185abe48534ba023c8edede\" returns successfully" Feb 9 19:43:04.308630 kubelet[2444]: E0209 19:43:04.308459 2444 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-apiserver-ci-3510.3.2-a-4c52a92a5f.17b2494608d6b38d", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"kube-apiserver-ci-3510.3.2-a-4c52a92a5f", UID:"ffe9a06f978f0b72085ec7397b9225d3", APIVersion:"v1", ResourceVersion:"", FieldPath:"spec.containers{kube-apiserver}"}, Reason:"Unhealthy", Message:"Readiness probe failed: HTTP probe failed with statuscode: 500", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-a-4c52a92a5f"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 42, 53, 871666061, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 42, 53, 871666061, time.Local), Count:1, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"ci-3510.3.2-a-4c52a92a5f"}': 'rpc error: code = Unavailable desc = error reading from server: read tcp 10.200.8.13:41716->10.200.8.10:2379: read: connection timed out' (will not retry!)
Feb 9 19:43:10.259671 kubelet[2444]: I0209 19:43:10.259630 2444 status_manager.go:853] "Failed to get status for pod" podUID="f13b63b48a3b536819b981c8780ce27e" pod="kube-system/kube-controller-manager-ci-3510.3.2-a-4c52a92a5f" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.200.8.13:41850->10.200.8.10:2379: read: connection timed out" Feb 9 19:43:13.292572 kubelet[2444]: E0209 19:43:13.292481 2444 controller.go:193] "Failed to update lease" err="Put \"https://10.200.8.13:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-a-4c52a92a5f?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 9 19:43:14.061495 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#203 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:43:14.074249 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#203 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:43:14.539808 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#203 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:43:14.540004 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#203 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:43:14.540142 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#203 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:43:14.540279 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#203 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:43:14.540414 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb:
tag#203 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:43:14.540542 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#203 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:43:14.540696 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#203 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:43:14.540863 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#203 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:43:14.540993 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#203 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:43:14.556867 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#203 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:43:14.601339 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#203 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:43:14.601567 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#203 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:43:14.601711 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#203 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:43:14.601841 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#203 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:43:14.601956 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#203 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:43:14.602099 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#203 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:43:14.602227 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#203 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:43:14.602352 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#203 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:43:14.602488 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#203 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:43:14.602630 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#203 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
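[Editor's note: decoded, the repeated storvsc line reports a SCSI WRITE(10) (opcode 0x2a) completing with SCSI status CHECK CONDITION (0x2), SRB status SRB_STATUS_ERROR (0x4), and Hyper-V status 0xc0000001 (STATUS_UNSUCCESSFUL); in other words, the virtual disk backing this VM was rejecting writes during this interval, which is consistent with the etcd timeouts above. A small sketch decoder follows; the lookup tables cover only the values seen in this log, with names following SCSI, Windows SRB, and NTSTATUS conventions.]

# Sketch: decode the numeric fields of the repeated hv_storvsc error line.
# Tables are deliberately minimal -- they map only the values present here.
SCSI_OPCODE = {0x2A: "WRITE(10)"}
SCSI_STATUS = {0x02: "CHECK CONDITION"}
SRB_STATUS  = {0x04: "SRB_STATUS_ERROR"}
HV_STATUS   = {0xC0000001: "STATUS_UNSUCCESSFUL"}

def decode(cmd: int, scsi: int, srb: int, hv: int) -> str:
    return (f"cmd={SCSI_OPCODE.get(cmd, hex(cmd))} "
            f"scsi={SCSI_STATUS.get(scsi, hex(scsi))} "
            f"srb={SRB_STATUS.get(srb, hex(srb))} "
            f"hv={HV_STATUS.get(hv, hex(hv))}")

# "cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001" decodes to:
print(decode(0x2A, 0x02, 0x04, 0xC0000001))
# cmd=WRITE(10) scsi=CHECK CONDITION srb=SRB_STATUS_ERROR hv=STATUS_UNSUCCESSFUL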