Feb 8 23:53:52.016479 kernel: Linux version 5.15.148-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Thu Feb 8 21:14:17 -00 2024
Feb 8 23:53:52.016511 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=ae7db544026ede4699ee2036449b75950d3fb7929b25a6731d0ad396f1aa37c9
Feb 8 23:53:52.016526 kernel: BIOS-provided physical RAM map:
Feb 8 23:53:52.016537 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Feb 8 23:53:52.016547 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved
Feb 8 23:53:52.016557 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003ff40fff] usable
Feb 8 23:53:52.016573 kernel: BIOS-e820: [mem 0x000000003ff41000-0x000000003ffc8fff] reserved
Feb 8 23:53:52.016584 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data
Feb 8 23:53:52.016595 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS
Feb 8 23:53:52.016605 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable
Feb 8 23:53:52.016615 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable
Feb 8 23:53:52.016625 kernel: printk: bootconsole [earlyser0] enabled
Feb 8 23:53:52.016636 kernel: NX (Execute Disable) protection: active
Feb 8 23:53:52.016647 kernel: efi: EFI v2.70 by Microsoft
Feb 8 23:53:52.016664 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff85000 SMBIOS 3.0=0x3ff83000 MEMATTR=0x3f5c8a98 RNG=0x3ffd1018
Feb 8 23:53:52.016676 kernel: random: crng init done
Feb 8 23:53:52.016687 kernel: SMBIOS 3.1.0 present.
Feb 8 23:53:52.016699 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 07/12/2023
Feb 8 23:53:52.016711 kernel: Hypervisor detected: Microsoft Hyper-V
Feb 8 23:53:52.016723 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x64e24, misc 0xbed7b2
Feb 8 23:53:52.016734 kernel: Hyper-V Host Build:20348-10.0-1-0.1544
Feb 8 23:53:52.016746 kernel: Hyper-V: Nested features: 0x1e0101
Feb 8 23:53:52.016760 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40
Feb 8 23:53:52.016771 kernel: Hyper-V: Using hypercall for remote TLB flush
Feb 8 23:53:52.016783 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Feb 8 23:53:52.016795 kernel: tsc: Marking TSC unstable due to running on Hyper-V
Feb 8 23:53:52.016807 kernel: tsc: Detected 2593.906 MHz processor
Feb 8 23:53:52.016820 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Feb 8 23:53:52.016832 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Feb 8 23:53:52.016844 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x400000000
Feb 8 23:53:52.016855 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Feb 8 23:53:52.016868 kernel: e820: update [mem 0x40000000-0xffffffff] usable ==> reserved
Feb 8 23:53:52.016883 kernel: last_pfn = 0x40000 max_arch_pfn = 0x400000000
Feb 8 23:53:52.016894 kernel: Using GB pages for direct mapping
Feb 8 23:53:52.016906 kernel: Secure boot disabled
Feb 8 23:53:52.016918 kernel: ACPI: Early table checksum verification disabled
Feb 8 23:53:52.016930 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL)
Feb 8 23:53:52.016942 kernel: ACPI: XSDT 0x000000003FFF90E8 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 8 23:53:52.016954 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 8 23:53:52.016966 kernel: ACPI: DSDT 0x000000003FFD6000 01E184 (v02 MSFTVM DSDT01 00000001 MSFT 05000000)
Feb 8 23:53:52.016986 kernel: ACPI: FACS 0x000000003FFFE000 000040
Feb 8 23:53:52.016999 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 8 23:53:52.017012 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 8 23:53:52.017024 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 8 23:53:52.017037 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 8 23:53:52.017050 kernel: ACPI: SRAT 0x000000003FFD4000 0002D0 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 8 23:53:52.017065 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 8 23:53:52.017078 kernel: ACPI: FPDT 0x000000003FFD2000 000034 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 8 23:53:52.017091 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113]
Feb 8 23:53:52.017105 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4183]
Feb 8 23:53:52.017117 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f]
Feb 8 23:53:52.017130 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063]
Feb 8 23:53:52.017143 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f]
Feb 8 23:53:52.017156 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027]
Feb 8 23:53:52.017171 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057]
Feb 8 23:53:52.017184 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd42cf]
Feb 8 23:53:52.017197 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037]
Feb 8 23:53:52.017210 kernel: ACPI: Reserving FPDT table memory at [mem 0x3ffd2000-0x3ffd2033]
Feb 8 23:53:52.017223 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Feb 8 23:53:52.017235 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Feb 8 23:53:52.017248 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug
Feb 8 23:53:52.017261 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug
Feb 8 23:53:52.017284 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug
Feb 8 23:53:52.017300 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug
Feb 8 23:53:52.017312 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug
Feb 8 23:53:52.017325 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug
Feb 8 23:53:52.017338 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug
Feb 8 23:53:52.017351 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug
Feb 8 23:53:52.017364 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug
Feb 8 23:53:52.017377 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug
Feb 8 23:53:52.017389 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug
Feb 8 23:53:52.017402 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug
Feb 8 23:53:52.017418 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000000-0x1ffffffffffff] hotplug
Feb 8 23:53:52.017431 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2000000000000-0x3ffffffffffff] hotplug
Feb 8 23:53:52.017444 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x4000000000000-0x7ffffffffffff] hotplug
Feb 8 23:53:52.017456 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x8000000000000-0xfffffffffffff] hotplug
Feb 8 23:53:52.017470 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff]
Feb 8 23:53:52.017483 kernel: NODE_DATA(0) allocated [mem 0x2bfffa000-0x2bfffffff]
Feb 8 23:53:52.017496 kernel: Zone ranges:
Feb 8 23:53:52.017509 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Feb 8 23:53:52.017522 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Feb 8 23:53:52.017537 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff]
Feb 8 23:53:52.017551 kernel: Movable zone start for each node
Feb 8 23:53:52.017564 kernel: Early memory node ranges
Feb 8 23:53:52.017577 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Feb 8 23:53:52.017589 kernel: node 0: [mem 0x0000000000100000-0x000000003ff40fff]
Feb 8 23:53:52.017602 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff]
Feb 8 23:53:52.017615 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff]
Feb 8 23:53:52.017628 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff]
Feb 8 23:53:52.017640 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Feb 8 23:53:52.017655 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Feb 8 23:53:52.017668 kernel: On node 0, zone DMA32: 190 pages in unavailable ranges
Feb 8 23:53:52.017681 kernel: ACPI: PM-Timer IO Port: 0x408
Feb 8 23:53:52.017694 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1])
Feb 8 23:53:52.017707 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23
Feb 8 23:53:52.017720 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Feb 8 23:53:52.017733 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Feb 8 23:53:52.017745 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200
Feb 8 23:53:52.017758 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Feb 8 23:53:52.017774 kernel: [mem 0x40000000-0xffffffff] available for PCI devices
Feb 8 23:53:52.017787 kernel: Booting paravirtualized kernel on Hyper-V
Feb 8 23:53:52.017800 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Feb 8 23:53:52.017813 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:2 nr_node_ids:1
Feb 8 23:53:52.017826 kernel: percpu: Embedded 55 pages/cpu s185624 r8192 d31464 u1048576
Feb 8 23:53:52.017839 kernel: pcpu-alloc: s185624 r8192 d31464 u1048576 alloc=1*2097152
Feb 8 23:53:52.017851 kernel: pcpu-alloc: [0] 0 1
Feb 8 23:53:52.017863 kernel: Hyper-V: PV spinlocks enabled
Feb 8 23:53:52.017877 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Feb 8 23:53:52.017892 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2062618
Feb 8 23:53:52.017905 kernel: Policy zone: Normal
Feb 8 23:53:52.017919 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=ae7db544026ede4699ee2036449b75950d3fb7929b25a6731d0ad396f1aa37c9
Feb 8 23:53:52.017933 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 8 23:53:52.017945 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Feb 8 23:53:52.017958 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 8 23:53:52.017971 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 8 23:53:52.017984 kernel: Memory: 8081200K/8387460K available (12294K kernel code, 2275K rwdata, 13700K rodata, 45496K init, 4048K bss, 306000K reserved, 0K cma-reserved)
Feb 8 23:53:52.018000 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Feb 8 23:53:52.018013 kernel: ftrace: allocating 34475 entries in 135 pages
Feb 8 23:53:52.018035 kernel: ftrace: allocated 135 pages with 4 groups
Feb 8 23:53:52.018051 kernel: rcu: Hierarchical RCU implementation.
Feb 8 23:53:52.018066 kernel: rcu: RCU event tracing is enabled.
Feb 8 23:53:52.018079 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Feb 8 23:53:52.018093 kernel: Rude variant of Tasks RCU enabled.
Feb 8 23:53:52.018107 kernel: Tracing variant of Tasks RCU enabled.
Feb 8 23:53:52.018120 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 8 23:53:52.018134 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Feb 8 23:53:52.018148 kernel: Using NULL legacy PIC
Feb 8 23:53:52.018164 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0
Feb 8 23:53:52.018177 kernel: Console: colour dummy device 80x25
Feb 8 23:53:52.018190 kernel: printk: console [tty1] enabled
Feb 8 23:53:52.018204 kernel: printk: console [ttyS0] enabled
Feb 8 23:53:52.018218 kernel: printk: bootconsole [earlyser0] disabled
Feb 8 23:53:52.018234 kernel: ACPI: Core revision 20210730
Feb 8 23:53:52.018248 kernel: Failed to register legacy timer interrupt
Feb 8 23:53:52.018261 kernel: APIC: Switch to symmetric I/O mode setup
Feb 8 23:53:52.018282 kernel: Hyper-V: Using IPI hypercalls
Feb 8 23:53:52.018296 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5187.81 BogoMIPS (lpj=2593906)
Feb 8 23:53:52.018310 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Feb 8 23:53:52.018323 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4
Feb 8 23:53:52.018337 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Feb 8 23:53:52.018350 kernel: Spectre V2 : Mitigation: Retpolines
Feb 8 23:53:52.018363 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Feb 8 23:53:52.018380 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Feb 8 23:53:52.018393 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Feb 8 23:53:52.018406 kernel: RETBleed: Vulnerable
Feb 8 23:53:52.018419 kernel: Speculative Store Bypass: Vulnerable
Feb 8 23:53:52.018433 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode
Feb 8 23:53:52.018446 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Feb 8 23:53:52.018460 kernel: GDS: Unknown: Dependent on hypervisor status
Feb 8 23:53:52.018473 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Feb 8 23:53:52.018486 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Feb 8 23:53:52.018500 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Feb 8 23:53:52.018516 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Feb 8 23:53:52.018530 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Feb 8 23:53:52.018543 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Feb 8 23:53:52.018557 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Feb 8 23:53:52.018570 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Feb 8 23:53:52.018583 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Feb 8 23:53:52.018597 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Feb 8 23:53:52.018610 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format.
Feb 8 23:53:52.018623 kernel: Freeing SMP alternatives memory: 32K
Feb 8 23:53:52.018637 kernel: pid_max: default: 32768 minimum: 301
Feb 8 23:53:52.018651 kernel: LSM: Security Framework initializing
Feb 8 23:53:52.018664 kernel: SELinux: Initializing.
Feb 8 23:53:52.018680 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Feb 8 23:53:52.018694 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Feb 8 23:53:52.018708 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7)
Feb 8 23:53:52.018721 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Feb 8 23:53:52.018735 kernel: signal: max sigframe size: 3632
Feb 8 23:53:52.018749 kernel: rcu: Hierarchical SRCU implementation.
Feb 8 23:53:52.018762 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Feb 8 23:53:52.018776 kernel: smp: Bringing up secondary CPUs ...
Feb 8 23:53:52.018790 kernel: x86: Booting SMP configuration:
Feb 8 23:53:52.018803 kernel: .... node #0, CPUs: #1
Feb 8 23:53:52.018820 kernel: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
Feb 8 23:53:52.018834 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Feb 8 23:53:52.018848 kernel: smp: Brought up 1 node, 2 CPUs
Feb 8 23:53:52.018861 kernel: smpboot: Max logical packages: 1
Feb 8 23:53:52.018875 kernel: smpboot: Total of 2 processors activated (10375.62 BogoMIPS)
Feb 8 23:53:52.018889 kernel: devtmpfs: initialized
Feb 8 23:53:52.018903 kernel: x86/mm: Memory block size: 128MB
Feb 8 23:53:52.018916 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes)
Feb 8 23:53:52.018932 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 8 23:53:52.018946 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Feb 8 23:53:52.018960 kernel: pinctrl core: initialized pinctrl subsystem
Feb 8 23:53:52.018974 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 8 23:53:52.018987 kernel: audit: initializing netlink subsys (disabled)
Feb 8 23:53:52.019001 kernel: audit: type=2000 audit(1707436430.023:1): state=initialized audit_enabled=0 res=1
Feb 8 23:53:52.019014 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 8 23:53:52.019028 kernel: thermal_sys: Registered thermal governor 'user_space'
Feb 8 23:53:52.019041 kernel: cpuidle: using governor menu
Feb 8 23:53:52.019057 kernel: ACPI: bus type PCI registered
Feb 8 23:53:52.019071 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 8 23:53:52.019084 kernel: dca service started, version 1.12.1
Feb 8 23:53:52.019098 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Feb 8 23:53:52.019111 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Feb 8 23:53:52.019125 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Feb 8 23:53:52.019138 kernel: ACPI: Added _OSI(Module Device)
Feb 8 23:53:52.019152 kernel: ACPI: Added _OSI(Processor Device)
Feb 8 23:53:52.019165 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 8 23:53:52.019181 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 8 23:53:52.019194 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Feb 8 23:53:52.019208 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Feb 8 23:53:52.019221 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Feb 8 23:53:52.019235 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 8 23:53:52.019248 kernel: ACPI: Interpreter enabled
Feb 8 23:53:52.019262 kernel: ACPI: PM: (supports S0 S5)
Feb 8 23:53:52.019291 kernel: ACPI: Using IOAPIC for interrupt routing
Feb 8 23:53:52.019305 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Feb 8 23:53:52.019322 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F
Feb 8 23:53:52.019335 kernel: iommu: Default domain type: Translated
Feb 8 23:53:52.019349 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Feb 8 23:53:52.019363 kernel: vgaarb: loaded
Feb 8 23:53:52.019377 kernel: pps_core: LinuxPPS API ver. 1 registered
Feb 8 23:53:52.019391 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Feb 8 23:53:52.019404 kernel: PTP clock support registered
Feb 8 23:53:52.019418 kernel: Registered efivars operations
Feb 8 23:53:52.019431 kernel: PCI: Using ACPI for IRQ routing
Feb 8 23:53:52.019444 kernel: PCI: System does not support PCI
Feb 8 23:53:52.019460 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page
Feb 8 23:53:52.019473 kernel: VFS: Disk quotas dquot_6.6.0
Feb 8 23:53:52.019487 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 8 23:53:52.019501 kernel: pnp: PnP ACPI init
Feb 8 23:53:52.019514 kernel: pnp: PnP ACPI: found 3 devices
Feb 8 23:53:52.019528 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Feb 8 23:53:52.019542 kernel: NET: Registered PF_INET protocol family
Feb 8 23:53:52.019555 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Feb 8 23:53:52.019571 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Feb 8 23:53:52.019585 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 8 23:53:52.019598 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 8 23:53:52.019612 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Feb 8 23:53:52.019626 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Feb 8 23:53:52.019639 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Feb 8 23:53:52.019653 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Feb 8 23:53:52.019666 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 8 23:53:52.019679 kernel: NET: Registered PF_XDP protocol family
Feb 8 23:53:52.019705 kernel: PCI: CLS 0 bytes, default 64
Feb 8 23:53:52.019718 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Feb 8 23:53:52.019732 kernel: software IO TLB: mapped [mem 0x000000003a8ad000-0x000000003e8ad000] (64MB)
Feb 8 23:53:52.019745 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Feb 8 23:53:52.019758 kernel: Initialise system trusted keyrings
Feb 8 23:53:52.019771 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Feb 8 23:53:52.019784 kernel: Key type asymmetric registered
Feb 8 23:53:52.019797 kernel: Asymmetric key parser 'x509' registered
Feb 8 23:53:52.019810 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Feb 8 23:53:52.019825 kernel: io scheduler mq-deadline registered
Feb 8 23:53:52.019839 kernel: io scheduler kyber registered
Feb 8 23:53:52.019852 kernel: io scheduler bfq registered
Feb 8 23:53:52.019865 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Feb 8 23:53:52.019878 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 8 23:53:52.019891 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Feb 8 23:53:52.019904 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Feb 8 23:53:52.019918 kernel: i8042: PNP: No PS/2 controller found.
Feb 8 23:53:52.020066 kernel: rtc_cmos 00:02: registered as rtc0
Feb 8 23:53:52.020181 kernel: rtc_cmos 00:02: setting system clock to 2024-02-08T23:53:51 UTC (1707436431)
Feb 8 23:53:52.020309 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram
Feb 8 23:53:52.020326 kernel: fail to initialize ptp_kvm
Feb 8 23:53:52.020340 kernel: intel_pstate: CPU model not supported
Feb 8 23:53:52.020354 kernel: efifb: probing for efifb
Feb 8 23:53:52.020367 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Feb 8 23:53:52.020381 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Feb 8 23:53:52.020394 kernel: efifb: scrolling: redraw
Feb 8 23:53:52.020411 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Feb 8 23:53:52.020424 kernel: Console: switching to colour frame buffer device 128x48
Feb 8 23:53:52.020437 kernel: fb0: EFI VGA frame buffer device
Feb 8 23:53:52.020450 kernel: pstore: Registered efi as persistent store backend
Feb 8 23:53:52.020463 kernel: NET: Registered PF_INET6 protocol family
Feb 8 23:53:52.020476 kernel: Segment Routing with IPv6
Feb 8 23:53:52.020490 kernel: In-situ OAM (IOAM) with IPv6
Feb 8 23:53:52.020503 kernel: NET: Registered PF_PACKET protocol family
Feb 8 23:53:52.020517 kernel: Key type dns_resolver registered
Feb 8 23:53:52.020532 kernel: IPI shorthand broadcast: enabled
Feb 8 23:53:52.020546 kernel: sched_clock: Marking stable (776807600, 21805900)->(993642200, -195028700)
Feb 8 23:53:52.020559 kernel: registered taskstats version 1
Feb 8 23:53:52.020573 kernel: Loading compiled-in X.509 certificates
Feb 8 23:53:52.020586 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.148-flatcar: e9d857ae0e8100c174221878afd1046acbb054a6'
Feb 8 23:53:52.020598 kernel: Key type .fscrypt registered
Feb 8 23:53:52.020611 kernel: Key type fscrypt-provisioning registered
Feb 8 23:53:52.020624 kernel: pstore: Using crash dump compression: deflate
Feb 8 23:53:52.020640 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 8 23:53:52.020654 kernel: ima: Allocated hash algorithm: sha1
Feb 8 23:53:52.020667 kernel: ima: No architecture policies found
Feb 8 23:53:52.020680 kernel: Freeing unused kernel image (initmem) memory: 45496K
Feb 8 23:53:52.020693 kernel: Write protecting the kernel read-only data: 28672k
Feb 8 23:53:52.020706 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
Feb 8 23:53:52.020720 kernel: Freeing unused kernel image (rodata/data gap) memory: 636K
Feb 8 23:53:52.020733 kernel: Run /init as init process
Feb 8 23:53:52.020746 kernel: with arguments:
Feb 8 23:53:52.020759 kernel: /init
Feb 8 23:53:52.020775 kernel: with environment:
Feb 8 23:53:52.020788 kernel: HOME=/
Feb 8 23:53:52.020801 kernel: TERM=linux
Feb 8 23:53:52.020813 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 8 23:53:52.020829 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Feb 8 23:53:52.020846 systemd[1]: Detected virtualization microsoft.
Feb 8 23:53:52.020860 systemd[1]: Detected architecture x86-64.
Feb 8 23:53:52.020876 systemd[1]: Running in initrd.
Feb 8 23:53:52.020890 systemd[1]: No hostname configured, using default hostname.
Feb 8 23:53:52.020903 systemd[1]: Hostname set to <localhost>.
Feb 8 23:53:52.020918 systemd[1]: Initializing machine ID from random generator.
Feb 8 23:53:52.020933 systemd[1]: Queued start job for default target initrd.target.
Feb 8 23:53:52.020947 systemd[1]: Started systemd-ask-password-console.path.
Feb 8 23:53:52.020960 systemd[1]: Reached target cryptsetup.target.
Feb 8 23:53:52.020975 systemd[1]: Reached target paths.target.
Feb 8 23:53:52.020989 systemd[1]: Reached target slices.target.
Feb 8 23:53:52.021005 systemd[1]: Reached target swap.target.
Feb 8 23:53:52.021019 systemd[1]: Reached target timers.target.
Feb 8 23:53:52.021034 systemd[1]: Listening on iscsid.socket.
Feb 8 23:53:52.021048 systemd[1]: Listening on iscsiuio.socket.
Feb 8 23:53:52.021062 systemd[1]: Listening on systemd-journald-audit.socket.
Feb 8 23:53:52.021076 systemd[1]: Listening on systemd-journald-dev-log.socket.
Feb 8 23:53:52.021090 systemd[1]: Listening on systemd-journald.socket.
Feb 8 23:53:52.021106 systemd[1]: Listening on systemd-networkd.socket.
Feb 8 23:53:52.021120 systemd[1]: Listening on systemd-udevd-control.socket.
Feb 8 23:53:52.021134 systemd[1]: Listening on systemd-udevd-kernel.socket.
Feb 8 23:53:52.021148 systemd[1]: Reached target sockets.target.
Feb 8 23:53:52.021162 systemd[1]: Starting kmod-static-nodes.service...
Feb 8 23:53:52.021176 systemd[1]: Finished network-cleanup.service.
Feb 8 23:53:52.021190 systemd[1]: Starting systemd-fsck-usr.service...
Feb 8 23:53:52.021204 systemd[1]: Starting systemd-journald.service...
Feb 8 23:53:52.021218 systemd[1]: Starting systemd-modules-load.service...
Feb 8 23:53:52.021235 systemd[1]: Starting systemd-resolved.service...
Feb 8 23:53:52.021249 systemd[1]: Starting systemd-vconsole-setup.service...
Feb 8 23:53:52.021266 systemd-journald[183]: Journal started
Feb 8 23:53:52.021465 systemd-journald[183]: Runtime Journal (/run/log/journal/78896b369a87428393e276688366b11c) is 8.0M, max 159.0M, 151.0M free.
Feb 8 23:53:52.032329 systemd[1]: Started systemd-journald.service.
Feb 8 23:53:52.037000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:53:52.041849 systemd-resolved[185]: Positive Trust Anchors:
Feb 8 23:53:52.050840 kernel: audit: type=1130 audit(1707436432.037:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:53:52.041859 systemd-resolved[185]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 8 23:53:52.041895 systemd-resolved[185]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Feb 8 23:53:52.044500 systemd-resolved[185]: Defaulting to hostname 'linux'.
Feb 8 23:53:52.045723 systemd[1]: Started systemd-resolved.service.
Feb 8 23:53:52.045741 systemd-modules-load[184]: Inserted module 'overlay'
Feb 8 23:53:52.073311 systemd[1]: Finished kmod-static-nodes.service.
Feb 8 23:53:52.072000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:53:52.092294 kernel: audit: type=1130 audit(1707436432.072:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:53:52.094227 systemd[1]: Finished systemd-fsck-usr.service.
Feb 8 23:53:52.135937 kernel: audit: type=1130 audit(1707436432.093:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:53:52.135962 kernel: audit: type=1130 audit(1707436432.096:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:53:52.135971 kernel: audit: type=1130 audit(1707436432.100:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:53:52.093000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:53:52.096000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:53:52.100000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:53:52.096482 systemd[1]: Finished systemd-vconsole-setup.service.
Feb 8 23:53:52.149444 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 8 23:53:52.111424 systemd[1]: Reached target nss-lookup.target.
Feb 8 23:53:52.151481 systemd[1]: Starting dracut-cmdline-ask.service...
Feb 8 23:53:52.156009 kernel: Bridge firewalling registered
Feb 8 23:53:52.154470 systemd-modules-load[184]: Inserted module 'br_netfilter'
Feb 8 23:53:52.161090 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Feb 8 23:53:52.174559 systemd[1]: Finished dracut-cmdline-ask.service.
Feb 8 23:53:52.193881 kernel: audit: type=1130 audit(1707436432.179:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:53:52.179000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:53:52.193023 systemd[1]: Starting dracut-cmdline.service...
Feb 8 23:53:52.201938 kernel: SCSI subsystem initialized
Feb 8 23:53:52.201220 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Feb 8 23:53:52.203000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:53:52.224655 dracut-cmdline[201]: dracut-dracut-053
Feb 8 23:53:52.224655 dracut-cmdline[201]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=ae7db544026ede4699ee2036449b75950d3fb7929b25a6731d0ad396f1aa37c9
Feb 8 23:53:52.239316 kernel: audit: type=1130 audit(1707436432.203:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:53:52.266145 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 8 23:53:52.266224 kernel: device-mapper: uevent: version 1.0.3
Feb 8 23:53:52.266240 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Feb 8 23:53:52.275893 systemd-modules-load[184]: Inserted module 'dm_multipath'
Feb 8 23:53:52.278901 systemd[1]: Finished systemd-modules-load.service.
Feb 8 23:53:52.303876 kernel: audit: type=1130 audit(1707436432.280:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:53:52.303924 kernel: Loading iSCSI transport class v2.0-870.
Feb 8 23:53:52.280000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:53:52.282007 systemd[1]: Starting systemd-sysctl.service...
Feb 8 23:53:52.308489 systemd[1]: Finished systemd-sysctl.service.
Feb 8 23:53:52.313000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:53:52.326295 kernel: audit: type=1130 audit(1707436432.313:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:53:52.332326 kernel: iscsi: registered transport (tcp)
Feb 8 23:53:52.357972 kernel: iscsi: registered transport (qla4xxx)
Feb 8 23:53:52.358055 kernel: QLogic iSCSI HBA Driver
Feb 8 23:53:52.386835 systemd[1]: Finished dracut-cmdline.service.
Feb 8 23:53:52.391000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:53:52.392298 systemd[1]: Starting dracut-pre-udev.service...
Feb 8 23:53:52.441295 kernel: raid6: avx512x4 gen() 18479 MB/s
Feb 8 23:53:52.461286 kernel: raid6: avx512x4 xor() 7429 MB/s
Feb 8 23:53:52.481284 kernel: raid6: avx512x2 gen() 18489 MB/s
Feb 8 23:53:52.501290 kernel: raid6: avx512x2 xor() 30264 MB/s
Feb 8 23:53:52.521284 kernel: raid6: avx512x1 gen() 18449 MB/s
Feb 8 23:53:52.541283 kernel: raid6: avx512x1 xor() 27152 MB/s
Feb 8 23:53:52.561288 kernel: raid6: avx2x4 gen() 18480 MB/s
Feb 8 23:53:52.581283 kernel: raid6: avx2x4 xor() 6942 MB/s
Feb 8 23:53:52.601282 kernel: raid6: avx2x2 gen() 18360 MB/s
Feb 8 23:53:52.622285 kernel: raid6: avx2x2 xor() 22273 MB/s
Feb 8 23:53:52.642282 kernel: raid6: avx2x1 gen() 14102 MB/s
Feb 8 23:53:52.663282 kernel: raid6: avx2x1 xor() 19395 MB/s
Feb 8 23:53:52.683284 kernel: raid6: sse2x4 gen() 11720 MB/s
Feb 8 23:53:52.703283 kernel: raid6: sse2x4 xor() 5960 MB/s
Feb 8 23:53:52.723282 kernel: raid6: sse2x2 gen() 12884 MB/s
Feb 8 23:53:52.744283 kernel: raid6: sse2x2 xor() 7525 MB/s
Feb 8 23:53:52.764287 kernel: raid6: sse2x1 gen() 11613 MB/s
Feb 8 23:53:52.787889 kernel: raid6: sse2x1 xor() 5945 MB/s
Feb 8 23:53:52.787907 kernel: raid6: using algorithm avx512x2 gen() 18489 MB/s
Feb 8 23:53:52.787918 kernel: raid6: .... xor() 30264 MB/s, rmw enabled
Feb 8 23:53:52.791318 kernel: raid6: using avx512x2 recovery algorithm
Feb 8 23:53:52.810294 kernel: xor: automatically using best checksumming function avx
Feb 8 23:53:52.907303 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no
Feb 8 23:53:52.915322 systemd[1]: Finished dracut-pre-udev.service.
Feb 8 23:53:52.917000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:53:52.919000 audit: BPF prog-id=7 op=LOAD
Feb 8 23:53:52.919000 audit: BPF prog-id=8 op=LOAD
Feb 8 23:53:52.919893 systemd[1]: Starting systemd-udevd.service...
Feb 8 23:53:52.934680 systemd-udevd[384]: Using default interface naming scheme 'v252'.
Feb 8 23:53:52.941442 systemd[1]: Started systemd-udevd.service.
Feb 8 23:53:52.943000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:53:52.944850 systemd[1]: Starting dracut-pre-trigger.service...
Feb 8 23:53:52.964831 dracut-pre-trigger[397]: rd.md=0: removing MD RAID activation
Feb 8 23:53:52.995453 systemd[1]: Finished dracut-pre-trigger.service.
Feb 8 23:53:52.999000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:53:53.000439 systemd[1]: Starting systemd-udev-trigger.service...
Feb 8 23:53:53.039000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:53:53.036725 systemd[1]: Finished systemd-udev-trigger.service.
Feb 8 23:53:53.087300 kernel: cryptd: max_cpu_qlen set to 1000
Feb 8 23:53:53.097538 kernel: hv_vmbus: Vmbus version:5.2
Feb 8 23:53:53.110297 kernel: AVX2 version of gcm_enc/dec engaged.
Feb 8 23:53:53.119657 kernel: hv_vmbus: registering driver hyperv_keyboard
Feb 8 23:53:53.119704 kernel: AES CTR mode by8 optimization enabled
Feb 8 23:53:53.122516 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0
Feb 8 23:53:53.154293 kernel: hid: raw HID events driver (C) Jiri Kosina
Feb 8 23:53:53.160408 kernel: hv_vmbus: registering driver hv_storvsc
Feb 8 23:53:53.165297 kernel: hv_vmbus: registering driver hv_netvsc
Feb 8 23:53:53.165328 kernel: scsi host1: storvsc_host_t
Feb 8 23:53:53.171099 kernel: scsi host0: storvsc_host_t
Feb 8 23:53:53.177299 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Feb 8 23:53:53.191052 kernel: hv_vmbus: registering driver hid_hyperv
Feb 8 23:53:53.191098 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0
Feb 8 23:53:53.191136 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1
Feb 8 23:53:53.201321 kernel: hid-generic 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Feb 8 23:53:53.228155 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Feb 8 23:53:53.228442 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Feb 8 23:53:53.230419 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Feb 8 23:53:53.230593 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Feb 8 23:53:53.237903 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Feb 8 23:53:53.238088 kernel: sd 0:0:0:0: [sda] Write Protect is off
Feb 8 23:53:53.246895 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Feb 8 23:53:53.247119 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Feb 8 23:53:53.252287 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Feb 8 23:53:53.256993 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Feb 8 23:53:53.341301 kernel: hv_netvsc 000d3a64-f0cc-000d-3a64-f0cc000d3a64 eth0: VF slot 1 added
Feb 8 23:53:53.351295 kernel: hv_vmbus: registering driver hv_pci
Feb 8 23:53:53.359292 kernel: hv_pci c31fc190-e7f9-4058-8269-f83ddc17c64e: PCI VMBus probing: Using version 0x10004
Feb 8 23:53:53.359468 kernel: hv_pci c31fc190-e7f9-4058-8269-f83ddc17c64e: PCI host bridge to bus e7f9:00
Feb 8 23:53:53.368403 kernel: pci_bus e7f9:00: root bus resource [mem 0xfe0000000-0xfe00fffff window]
Feb 8 23:53:53.368555 kernel: pci_bus e7f9:00: No busn resource found for root bus, will use [bus 00-ff]
Feb 8 23:53:53.378322 kernel: pci e7f9:00:02.0: [15b3:1016] type 00 class 0x020000
Feb 8 23:53:53.387151 kernel: pci e7f9:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref]
Feb 8 23:53:53.403396 kernel: pci e7f9:00:02.0: enabling Extended Tags
Feb 8 23:53:53.421140 kernel: pci e7f9:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at e7f9:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link)
Feb 8 23:53:53.421356 kernel: pci_bus e7f9:00: busn_res: [bus 00-ff] end is updated to 00
Feb 8 23:53:53.421476 kernel: pci e7f9:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref]
Feb 8 23:53:53.518300 kernel: mlx5_core e7f9:00:02.0: firmware version: 14.30.1224
Feb 8 23:53:53.669454 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
Feb 8 23:53:53.685295 kernel: mlx5_core e7f9:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0)
Feb 8 23:53:53.738295 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (437)
Feb 8 23:53:53.751649 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Feb 8 23:53:53.832987 kernel: mlx5_core e7f9:00:02.0: Supported tc offload range - chains: 1, prios: 1
Feb 8 23:53:53.833233 kernel: mlx5_core e7f9:00:02.0: mlx5e_tc_post_act_init:40:(pid 188): firmware level support is missing
Feb 8 23:53:53.846364 kernel: hv_netvsc 000d3a64-f0cc-000d-3a64-f0cc000d3a64 eth0: VF registering: eth1
Feb 8 23:53:53.846577 kernel: mlx5_core e7f9:00:02.0 eth1: joined to eth0
Feb 8 23:53:53.849805 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
Feb 8 23:53:53.866296 kernel: mlx5_core e7f9:00:02.0 enP59385s1: renamed from eth1
Feb 8 23:53:53.905925 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
Feb 8 23:53:53.911346 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device.
Feb 8 23:53:53.922500 systemd[1]: Starting disk-uuid.service...
Feb 8 23:53:53.934294 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Feb 8 23:53:53.941287 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Feb 8 23:53:54.950154 disk-uuid[564]: The operation has completed successfully.
Feb 8 23:53:54.952733 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Feb 8 23:53:55.014706 systemd[1]: disk-uuid.service: Deactivated successfully.
Feb 8 23:53:55.014822 systemd[1]: Finished disk-uuid.service.
Feb 8 23:53:55.016000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:53:55.016000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:53:55.030226 systemd[1]: Starting verity-setup.service...
Feb 8 23:53:55.102309 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Feb 8 23:53:55.379020 systemd[1]: Found device dev-mapper-usr.device.
Feb 8 23:53:55.385343 systemd[1]: Finished verity-setup.service.
Feb 8 23:53:55.388000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:53:55.390073 systemd[1]: Mounting sysusr-usr.mount...
Feb 8 23:53:55.465303 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none.
Feb 8 23:53:55.465729 systemd[1]: Mounted sysusr-usr.mount.
Feb 8 23:53:55.469654 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met.
Feb 8 23:53:55.473864 systemd[1]: Starting ignition-setup.service...
Feb 8 23:53:55.478846 systemd[1]: Starting parse-ip-for-networkd.service...
Feb 8 23:53:55.500573 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Feb 8 23:53:55.500628 kernel: BTRFS info (device sda6): using free space tree
Feb 8 23:53:55.500647 kernel: BTRFS info (device sda6): has skinny extents
Feb 8 23:53:55.546884 systemd[1]: Finished parse-ip-for-networkd.service.
Feb 8 23:53:55.549000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:53:55.553000 audit: BPF prog-id=9 op=LOAD
Feb 8 23:53:55.554386 systemd[1]: Starting systemd-networkd.service...
Feb 8 23:53:55.580001 systemd-networkd[805]: lo: Link UP
Feb 8 23:53:55.584000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:53:55.580012 systemd-networkd[805]: lo: Gained carrier
Feb 8 23:53:55.580899 systemd-networkd[805]: Enumeration completed
Feb 8 23:53:55.580970 systemd[1]: Started systemd-networkd.service.
Feb 8 23:53:55.601000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:53:55.584519 systemd[1]: Reached target network.target.
Feb 8 23:53:55.585665 systemd-networkd[805]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 8 23:53:55.588124 systemd[1]: Starting iscsiuio.service...
Feb 8 23:53:55.611250 iscsid[814]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
Feb 8 23:53:55.611250 iscsid[814]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log
Feb 8 23:53:55.611250 iscsid[814]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier].
Feb 8 23:53:55.611250 iscsid[814]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
Feb 8 23:53:55.611250 iscsid[814]: If using hardware iscsi like qla4xxx this message can be ignored.
Feb 8 23:53:55.611250 iscsid[814]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
Feb 8 23:53:55.611250 iscsid[814]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf
Feb 8 23:53:55.616000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:53:55.595286 systemd[1]: mnt-oem.mount: Deactivated successfully.
Feb 8 23:53:55.598907 systemd[1]: Started iscsiuio.service.
Feb 8 23:53:55.603842 systemd[1]: Starting iscsid.service...
Feb 8 23:53:55.611669 systemd[1]: Started iscsid.service.
Feb 8 23:53:55.623443 systemd[1]: Starting dracut-initqueue.service...
Feb 8 23:53:55.663136 systemd[1]: Finished dracut-initqueue.service.
Feb 8 23:53:55.669000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:53:55.670059 systemd[1]: Reached target remote-fs-pre.target.
Feb 8 23:53:55.674919 kernel: mlx5_core e7f9:00:02.0 enP59385s1: Link up
Feb 8 23:53:55.674910 systemd[1]: Reached target remote-cryptsetup.target.
Feb 8 23:53:55.676943 systemd[1]: Reached target remote-fs.target.
Feb 8 23:53:55.679779 systemd[1]: Starting dracut-pre-mount.service...
Feb 8 23:53:55.690441 systemd[1]: Finished dracut-pre-mount.service.
Feb 8 23:53:55.692000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:53:55.759785 systemd[1]: Finished ignition-setup.service.
Feb 8 23:53:55.761000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:53:55.763135 systemd[1]: Starting ignition-fetch-offline.service...
Feb 8 23:53:55.776294 kernel: hv_netvsc 000d3a64-f0cc-000d-3a64-f0cc000d3a64 eth0: Data path switched to VF: enP59385s1
Feb 8 23:53:55.776818 systemd-networkd[805]: enP59385s1: Link UP
Feb 8 23:53:55.783292 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Feb 8 23:53:55.776943 systemd-networkd[805]: eth0: Link UP
Feb 8 23:53:55.783374 systemd-networkd[805]: eth0: Gained carrier
Feb 8 23:53:55.790720 systemd-networkd[805]: enP59385s1: Gained carrier
Feb 8 23:53:55.824373 systemd-networkd[805]: eth0: DHCPv4 address 10.200.8.17/24, gateway 10.200.8.1 acquired from 168.63.129.16
Feb 8 23:53:57.487548 systemd-networkd[805]: eth0: Gained IPv6LL
Feb 8 23:53:58.974252 ignition[829]: Ignition 2.14.0
Feb 8 23:53:58.974267 ignition[829]: Stage: fetch-offline
Feb 8 23:53:58.974385 ignition[829]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 8 23:53:58.974446 ignition[829]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Feb 8 23:53:59.050078 ignition[829]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Feb 8 23:53:59.053322 ignition[829]: parsed url from cmdline: ""
Feb 8 23:53:59.053329 ignition[829]: no config URL provided
Feb 8 23:53:59.053340 ignition[829]: reading system config file "/usr/lib/ignition/user.ign"
Feb 8 23:53:59.053354 ignition[829]: no config at "/usr/lib/ignition/user.ign"
Feb 8 23:53:59.053369 ignition[829]: failed to fetch config: resource requires networking
Feb 8 23:53:59.055295 ignition[829]: Ignition finished successfully
Feb 8 23:53:59.064550 systemd[1]: Finished ignition-fetch-offline.service.
Feb 8 23:53:59.066000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:53:59.067933 systemd[1]: Starting ignition-fetch.service...
Feb 8 23:53:59.088576 kernel: kauditd_printk_skb: 18 callbacks suppressed
Feb 8 23:53:59.088608 kernel: audit: type=1130 audit(1707436439.066:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:53:59.081672 ignition[835]: Ignition 2.14.0
Feb 8 23:53:59.081680 ignition[835]: Stage: fetch
Feb 8 23:53:59.081787 ignition[835]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 8 23:53:59.081812 ignition[835]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Feb 8 23:53:59.088814 ignition[835]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Feb 8 23:53:59.089016 ignition[835]: parsed url from cmdline: ""
Feb 8 23:53:59.089022 ignition[835]: no config URL provided
Feb 8 23:53:59.089030 ignition[835]: reading system config file "/usr/lib/ignition/user.ign"
Feb 8 23:53:59.089041 ignition[835]: no config at "/usr/lib/ignition/user.ign"
Feb 8 23:53:59.089089 ignition[835]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Feb 8 23:53:59.180696 ignition[835]: GET result: OK
Feb 8 23:53:59.180757 ignition[835]: failed to retrieve userdata from IMDS, falling back to custom data: not a config (empty)
Feb 8 23:53:59.307508 ignition[835]: opening config device: "/dev/sr0"
Feb 8 23:53:59.307855 ignition[835]: getting drive status for "/dev/sr0"
Feb 8 23:53:59.307987 ignition[835]: drive status: OK
Feb 8 23:53:59.308053 ignition[835]: mounting config device
Feb 8 23:53:59.308082 ignition[835]: op(1): [started] mounting "/dev/sr0" at "/tmp/ignition-azure2561266144"
Feb 8 23:53:59.334883 ignition[835]: op(1): [finished] mounting "/dev/sr0" at "/tmp/ignition-azure2561266144"
Feb 8 23:53:59.338379 kernel: UDF-fs: INFO Mounting volume 'UDF Volume', timestamp 2024/02/09 00:00 (1000)
Feb 8 23:53:59.337049 systemd[1]: tmp-ignition\x2dazure2561266144.mount: Deactivated successfully.
Feb 8 23:53:59.335876 ignition[835]: checking for config drive
Feb 8 23:53:59.336295 ignition[835]: reading config
Feb 8 23:53:59.336682 ignition[835]: op(2): [started] unmounting "/dev/sr0" at "/tmp/ignition-azure2561266144"
Feb 8 23:53:59.338224 ignition[835]: op(2): [finished] unmounting "/dev/sr0" at "/tmp/ignition-azure2561266144"
Feb 8 23:53:59.338244 ignition[835]: config has been read from custom data
Feb 8 23:53:59.338346 ignition[835]: parsing config with SHA512: eda0480e7a8723d9b51d9dcbdf6c678448d6a24e88794ed788c0d7c68e35ae424971cce47ad10ccd8ec5e128a8d00e9dbf674680fe4c7aa6312b8234d13ce8db
Feb 8 23:53:59.374267 unknown[835]: fetched base config from "system"
Feb 8 23:53:59.376906 unknown[835]: fetched base config from "system"
Feb 8 23:53:59.376921 unknown[835]: fetched user config from "azure"
Feb 8 23:53:59.381479 ignition[835]: fetch: fetch complete
Feb 8 23:53:59.381490 ignition[835]: fetch: fetch passed
Feb 8 23:53:59.381546 ignition[835]: Ignition finished successfully
Feb 8 23:53:59.387457 systemd[1]: Finished ignition-fetch.service.
Feb 8 23:53:59.389000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:53:59.404294 kernel: audit: type=1130 audit(1707436439.389:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:53:59.402616 systemd[1]: Starting ignition-kargs.service...
Feb 8 23:53:59.414333 ignition[843]: Ignition 2.14.0 Feb 8 23:53:59.414345 ignition[843]: Stage: kargs Feb 8 23:53:59.414475 ignition[843]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 8 23:53:59.414507 ignition[843]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Feb 8 23:53:59.418438 ignition[843]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 8 23:53:59.422975 ignition[843]: kargs: kargs passed Feb 8 23:53:59.427000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:53:59.424199 systemd[1]: Finished ignition-kargs.service. Feb 8 23:53:59.439571 kernel: audit: type=1130 audit(1707436439.427:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:53:59.423033 ignition[843]: Ignition finished successfully Feb 8 23:53:59.441803 systemd[1]: Starting ignition-disks.service... Feb 8 23:53:59.445656 ignition[849]: Ignition 2.14.0 Feb 8 23:53:59.445681 ignition[849]: Stage: disks Feb 8 23:53:59.445853 ignition[849]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 8 23:53:59.445892 ignition[849]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Feb 8 23:53:59.456488 systemd[1]: Finished ignition-disks.service. Feb 8 23:53:59.458000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:53:59.451462 ignition[849]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 8 23:53:59.478474 kernel: audit: type=1130 audit(1707436439.458:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:53:59.458848 systemd[1]: Reached target initrd-root-device.target. Feb 8 23:53:59.455247 ignition[849]: disks: disks passed Feb 8 23:53:59.473813 systemd[1]: Reached target local-fs-pre.target. Feb 8 23:53:59.455299 ignition[849]: Ignition finished successfully Feb 8 23:53:59.478481 systemd[1]: Reached target local-fs.target. Feb 8 23:53:59.482301 systemd[1]: Reached target sysinit.target. Feb 8 23:53:59.485427 systemd[1]: Reached target basic.target. Feb 8 23:53:59.496310 systemd[1]: Starting systemd-fsck-root.service... Feb 8 23:53:59.552071 systemd-fsck[857]: ROOT: clean, 602/7326000 files, 481070/7359488 blocks Feb 8 23:53:59.557381 systemd[1]: Finished systemd-fsck-root.service. Feb 8 23:53:59.559000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:53:59.571513 systemd[1]: Mounting sysroot.mount... Feb 8 23:53:59.576692 kernel: audit: type=1130 audit(1707436439.559:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:53:59.592476 kernel: EXT4-fs (sda9): mounted filesystem with ordered data mode. Opts: (null). 
Quota mode: none. Feb 8 23:53:59.593039 systemd[1]: Mounted sysroot.mount. Feb 8 23:53:59.595120 systemd[1]: Reached target initrd-root-fs.target. Feb 8 23:53:59.659592 systemd[1]: Mounting sysroot-usr.mount... Feb 8 23:53:59.665877 systemd[1]: Starting flatcar-metadata-hostname.service... Feb 8 23:53:59.671739 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Feb 8 23:53:59.672660 systemd[1]: Reached target ignition-diskful.target. Feb 8 23:53:59.681109 systemd[1]: Mounted sysroot-usr.mount. Feb 8 23:53:59.745123 systemd[1]: Mounting sysroot-usr-share-oem.mount... Feb 8 23:53:59.756990 systemd[1]: Starting initrd-setup-root.service... Feb 8 23:53:59.766335 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (867) Feb 8 23:53:59.775634 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Feb 8 23:53:59.775675 kernel: BTRFS info (device sda6): using free space tree Feb 8 23:53:59.775690 kernel: BTRFS info (device sda6): has skinny extents Feb 8 23:53:59.783516 systemd[1]: Mounted sysroot-usr-share-oem.mount. Feb 8 23:53:59.787714 initrd-setup-root[872]: cut: /sysroot/etc/passwd: No such file or directory Feb 8 23:53:59.802774 initrd-setup-root[898]: cut: /sysroot/etc/group: No such file or directory Feb 8 23:53:59.809344 initrd-setup-root[906]: cut: /sysroot/etc/shadow: No such file or directory Feb 8 23:53:59.833213 initrd-setup-root[914]: cut: /sysroot/etc/gshadow: No such file or directory Feb 8 23:54:00.355842 systemd[1]: Finished initrd-setup-root.service. Feb 8 23:54:00.360000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:54:00.361608 systemd[1]: Starting ignition-mount.service... Feb 8 23:54:00.374368 kernel: audit: type=1130 audit(1707436440.360:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:54:00.377781 systemd[1]: Starting sysroot-boot.service... Feb 8 23:54:00.382645 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. Feb 8 23:54:00.385016 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. Feb 8 23:54:00.402509 systemd[1]: Finished sysroot-boot.service. Feb 8 23:54:00.404000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:54:00.419599 kernel: audit: type=1130 audit(1707436440.404:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 8 23:54:00.428631 ignition[936]: INFO : Ignition 2.14.0 Feb 8 23:54:00.428631 ignition[936]: INFO : Stage: mount Feb 8 23:54:00.432828 ignition[936]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 8 23:54:00.432828 ignition[936]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Feb 8 23:54:00.432828 ignition[936]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 8 23:54:00.447039 ignition[936]: INFO : mount: mount passed Feb 8 23:54:00.447039 ignition[936]: INFO : Ignition finished successfully Feb 8 23:54:00.462737 kernel: audit: type=1130 audit(1707436440.446:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:54:00.446000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:54:00.441940 systemd[1]: Finished ignition-mount.service. Feb 8 23:54:00.980077 coreos-metadata[866]: Feb 08 23:54:00.979 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Feb 8 23:54:00.996767 coreos-metadata[866]: Feb 08 23:54:00.996 INFO Fetch successful Feb 8 23:54:01.031068 coreos-metadata[866]: Feb 08 23:54:01.030 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Feb 8 23:54:01.046732 coreos-metadata[866]: Feb 08 23:54:01.046 INFO Fetch successful Feb 8 23:54:01.064134 coreos-metadata[866]: Feb 08 23:54:01.064 INFO wrote hostname ci-3510.3.2-a-b1d3c6d57d to /sysroot/etc/hostname Feb 8 23:54:01.069627 systemd[1]: Finished flatcar-metadata-hostname.service. Feb 8 23:54:01.074000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:54:01.075157 systemd[1]: Starting ignition-files.service... Feb 8 23:54:01.087882 kernel: audit: type=1130 audit(1707436441.074:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:54:01.093635 systemd[1]: Mounting sysroot-usr-share-oem.mount... Feb 8 23:54:01.104293 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (945) Feb 8 23:54:01.113133 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Feb 8 23:54:01.113164 kernel: BTRFS info (device sda6): using free space tree Feb 8 23:54:01.113184 kernel: BTRFS info (device sda6): has skinny extents Feb 8 23:54:01.121187 systemd[1]: Mounted sysroot-usr-share-oem.mount. 
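
The flatcar-metadata-hostname flow above reduces to a wireserver availability check, an IMDS name lookup, and a file write under the new root. A hedged sketch using the same URLs the coreos-metadata entries record; the helper name imds_get is illustrative, not the actual coreos-metadata code:

    import urllib.request

    def imds_get(url: str) -> bytes:
        # The Metadata: true header is mandatory for 169.254.169.254 queries.
        req = urllib.request.Request(url, headers={"Metadata": "true"})
        with urllib.request.urlopen(req, timeout=5) as resp:
            return resp.read()

    # Wireserver availability check, as logged first (no Metadata header needed).
    urllib.request.urlopen("http://168.63.129.16/?comp=versions", timeout=5)

    # Instance name, with the api-version the log shows.
    name = imds_get("http://169.254.169.254/metadata/instance/compute/name"
                    "?api-version=2017-08-01&format=text").decode().strip()

    # The service writes the result under the new root, per the log.
    with open("/sysroot/etc/hostname", "w") as f:
        f.write(name + "\n")
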
Feb 8 23:54:01.134319 ignition[964]: INFO : Ignition 2.14.0 Feb 8 23:54:01.134319 ignition[964]: INFO : Stage: files Feb 8 23:54:01.138334 ignition[964]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 8 23:54:01.138334 ignition[964]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Feb 8 23:54:01.147366 ignition[964]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 8 23:54:01.162171 ignition[964]: DEBUG : files: compiled without relabeling support, skipping Feb 8 23:54:01.165218 ignition[964]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Feb 8 23:54:01.165218 ignition[964]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Feb 8 23:54:01.242289 ignition[964]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Feb 8 23:54:01.246307 ignition[964]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Feb 8 23:54:01.275605 unknown[964]: wrote ssh authorized keys file for user: core Feb 8 23:54:01.279259 ignition[964]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Feb 8 23:54:01.297503 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/crictl-v1.26.0-linux-amd64.tar.gz" Feb 8 23:54:01.303440 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.26.0/crictl-v1.26.0-linux-amd64.tar.gz: attempt #1 Feb 8 23:54:01.987996 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Feb 8 23:54:02.123776 ignition[964]: DEBUG : files: createFilesystemsFiles: createFiles: op(3): file matches expected sum of: a3a2c02a90b008686c20babaf272e703924db2a3e2a0d4e2a7c81d994cbc68c47458a4a354ecc243af095b390815c7f203348b9749351ae817bd52a522300449 Feb 8 23:54:02.132033 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/crictl-v1.26.0-linux-amd64.tar.gz" Feb 8 23:54:02.132033 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Feb 8 23:54:02.132033 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Feb 8 23:54:02.509053 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Feb 8 23:54:02.617490 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Feb 8 23:54:02.622512 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.1.1.tgz" Feb 8 23:54:02.622512 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-amd64-v1.1.1.tgz: attempt #1 Feb 8 23:54:03.125758 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK Feb 8 23:54:03.259753 ignition[964]: DEBUG : files: createFilesystemsFiles: createFiles: op(5): file matches expected sum of: 
4d0ed0abb5951b9cf83cba938ef84bdc5b681f4ac869da8143974f6a53a3ff30c666389fa462b9d14d30af09bf03f6cdf77598c572f8fb3ea00cecdda467a48d Feb 8 23:54:03.267417 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.1.1.tgz" Feb 8 23:54:03.267417 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/bin/kubectl" Feb 8 23:54:03.276159 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://dl.k8s.io/release/v1.26.5/bin/linux/amd64/kubectl: attempt #1 Feb 8 23:54:03.475737 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK Feb 8 23:54:03.708774 ignition[964]: DEBUG : files: createFilesystemsFiles: createFiles: op(6): file matches expected sum of: 97840854134909d75a1a2563628cc4ba632067369ce7fc8a8a1e90a387d32dd7bfd73f4f5b5a82ef842088e7470692951eb7fc869c5f297dd740f855672ee628 Feb 8 23:54:03.716401 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/bin/kubectl" Feb 8 23:54:03.716401 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/opt/bin/kubeadm" Feb 8 23:54:03.716401 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET https://dl.k8s.io/release/v1.26.5/bin/linux/amd64/kubeadm: attempt #1 Feb 8 23:54:03.831972 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET result: OK Feb 8 23:54:04.015167 ignition[964]: DEBUG : files: createFilesystemsFiles: createFiles: op(7): file matches expected sum of: 1c324cd645a7bf93d19d24c87498d9a17878eb1cc927e2680200ffeab2f85051ddec47d85b79b8e774042dc6726299ad3d7caf52c060701f00deba30dc33f660 Feb 8 23:54:04.022483 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/bin/kubeadm" Feb 8 23:54:04.022483 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/opt/bin/kubelet" Feb 8 23:54:04.022483 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET https://dl.k8s.io/release/v1.26.5/bin/linux/amd64/kubelet: attempt #1 Feb 8 23:54:04.151809 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET result: OK Feb 8 23:54:04.609106 ignition[964]: DEBUG : files: createFilesystemsFiles: createFiles: op(8): file matches expected sum of: 40daf2a9b9e666c14b10e627da931bd79978628b1f23ef6429c1cb4fcba261f86ccff440c0dbb0070ee760fe55772b4fd279c4582dfbb17fa30bc94b7f00126b Feb 8 23:54:04.617016 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/opt/bin/kubelet" Feb 8 23:54:04.617016 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/docker/daemon.json" Feb 8 23:54:04.617016 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/docker/daemon.json" Feb 8 23:54:04.617016 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Feb 8 23:54:04.632511 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Feb 8 23:54:05.138160 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Feb 8 
23:54:05.228670 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Feb 8 23:54:05.234048 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/home/core/install.sh" Feb 8 23:54:05.234048 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/home/core/install.sh" Feb 8 23:54:05.234048 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/home/core/nginx.yaml" Feb 8 23:54:05.234048 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/home/core/nginx.yaml" Feb 8 23:54:05.234048 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(d): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 8 23:54:05.234048 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(d): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 8 23:54:05.234048 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 8 23:54:05.234048 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 8 23:54:06.221799 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/etc/flatcar/update.conf" Feb 8 23:54:06.227608 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/etc/flatcar/update.conf" Feb 8 23:54:06.227608 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(10): [started] writing file "/sysroot/etc/systemd/system/waagent.service" Feb 8 23:54:06.244637 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(10): oem config not found in "/usr/share/oem", looking on oem partition Feb 8 23:54:06.244637 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(10): op(11): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3604721897" Feb 8 23:54:06.244637 ignition[964]: CRITICAL : files: createFilesystemsFiles: createFiles: op(10): op(11): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3604721897": device or resource busy Feb 8 23:54:06.244637 ignition[964]: ERROR : files: createFilesystemsFiles: createFiles: op(10): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem3604721897", trying btrfs: device or resource busy Feb 8 23:54:06.244637 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(10): op(12): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3604721897" Feb 8 23:54:06.271766 kernel: BTRFS info: devid 1 device path /dev/sda6 changed to /dev/disk/by-label/OEM scanned by ignition (967) Feb 8 23:54:06.271788 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(10): op(12): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3604721897" Feb 8 23:54:06.276698 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(10): op(13): [started] unmounting "/mnt/oem3604721897" Feb 8 23:54:06.276698 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(10): op(13): [finished] unmounting "/mnt/oem3604721897" Feb 8 23:54:06.276698 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(10): [finished] writing file "/sysroot/etc/systemd/system/waagent.service" Feb 8 23:54:06.276698 ignition[964]: 
INFO : files: createFilesystemsFiles: createFiles: op(14): [started] writing file "/sysroot/etc/systemd/system/nvidia.service" Feb 8 23:54:06.276698 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(14): oem config not found in "/usr/share/oem", looking on oem partition Feb 8 23:54:06.273359 systemd[1]: mnt-oem3604721897.mount: Deactivated successfully. Feb 8 23:54:06.302855 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(14): op(15): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1264417393" Feb 8 23:54:06.302855 ignition[964]: CRITICAL : files: createFilesystemsFiles: createFiles: op(14): op(15): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1264417393": device or resource busy Feb 8 23:54:06.302855 ignition[964]: ERROR : files: createFilesystemsFiles: createFiles: op(14): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem1264417393", trying btrfs: device or resource busy Feb 8 23:54:06.302855 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(14): op(16): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1264417393" Feb 8 23:54:06.302855 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(14): op(16): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1264417393" Feb 8 23:54:06.302855 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(14): op(17): [started] unmounting "/mnt/oem1264417393" Feb 8 23:54:06.347929 kernel: audit: type=1130 audit(1707436446.315:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:54:06.315000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:54:06.300757 systemd[1]: mnt-oem1264417393.mount: Deactivated successfully. Feb 8 23:54:06.352000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 8 23:54:06.352254 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(14): op(17): [finished] unmounting "/mnt/oem1264417393" Feb 8 23:54:06.352254 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(14): [finished] writing file "/sysroot/etc/systemd/system/nvidia.service" Feb 8 23:54:06.352254 ignition[964]: INFO : files: op(18): [started] processing unit "waagent.service" Feb 8 23:54:06.352254 ignition[964]: INFO : files: op(18): [finished] processing unit "waagent.service" Feb 8 23:54:06.352254 ignition[964]: INFO : files: op(19): [started] processing unit "nvidia.service" Feb 8 23:54:06.352254 ignition[964]: INFO : files: op(19): [finished] processing unit "nvidia.service" Feb 8 23:54:06.352254 ignition[964]: INFO : files: op(1a): [started] processing unit "prepare-cni-plugins.service" Feb 8 23:54:06.352254 ignition[964]: INFO : files: op(1a): op(1b): [started] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Feb 8 23:54:06.352254 ignition[964]: INFO : files: op(1a): op(1b): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Feb 8 23:54:06.352254 ignition[964]: INFO : files: op(1a): [finished] processing unit "prepare-cni-plugins.service" Feb 8 23:54:06.352254 ignition[964]: INFO : files: op(1c): [started] processing unit "prepare-critools.service" Feb 8 23:54:06.352254 ignition[964]: INFO : files: op(1c): op(1d): [started] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Feb 8 23:54:06.352254 ignition[964]: INFO : files: op(1c): op(1d): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Feb 8 23:54:06.352254 ignition[964]: INFO : files: op(1c): [finished] processing unit "prepare-critools.service" Feb 8 23:54:06.352254 ignition[964]: INFO : files: op(1e): [started] processing unit "prepare-helm.service" Feb 8 23:54:06.352254 ignition[964]: INFO : files: op(1e): op(1f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 8 23:54:06.352254 ignition[964]: INFO : files: op(1e): op(1f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 8 23:54:06.352254 ignition[964]: INFO : files: op(1e): [finished] processing unit "prepare-helm.service" Feb 8 23:54:06.352254 ignition[964]: INFO : files: op(20): [started] setting preset to enabled for "prepare-critools.service" Feb 8 23:54:06.352254 ignition[964]: INFO : files: op(20): [finished] setting preset to enabled for "prepare-critools.service" Feb 8 23:54:06.469495 kernel: audit: type=1130 audit(1707436446.352:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:54:06.469531 kernel: audit: type=1131 audit(1707436446.353:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:54:06.469545 kernel: audit: type=1130 audit(1707436446.428:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 8 23:54:06.353000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:54:06.428000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:54:06.311385 systemd[1]: Finished ignition-files.service. Feb 8 23:54:06.472000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:54:06.473293 ignition[964]: INFO : files: op(21): [started] setting preset to enabled for "prepare-helm.service" Feb 8 23:54:06.473293 ignition[964]: INFO : files: op(21): [finished] setting preset to enabled for "prepare-helm.service" Feb 8 23:54:06.473293 ignition[964]: INFO : files: op(22): [started] setting preset to enabled for "waagent.service" Feb 8 23:54:06.473293 ignition[964]: INFO : files: op(22): [finished] setting preset to enabled for "waagent.service" Feb 8 23:54:06.473293 ignition[964]: INFO : files: op(23): [started] setting preset to enabled for "nvidia.service" Feb 8 23:54:06.473293 ignition[964]: INFO : files: op(23): [finished] setting preset to enabled for "nvidia.service" Feb 8 23:54:06.473293 ignition[964]: INFO : files: op(24): [started] setting preset to enabled for "prepare-cni-plugins.service" Feb 8 23:54:06.473293 ignition[964]: INFO : files: op(24): [finished] setting preset to enabled for "prepare-cni-plugins.service" Feb 8 23:54:06.473293 ignition[964]: INFO : files: createResultFile: createFiles: op(25): [started] writing file "/sysroot/etc/.ignition-result.json" Feb 8 23:54:06.473293 ignition[964]: INFO : files: createResultFile: createFiles: op(25): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 8 23:54:06.473293 ignition[964]: INFO : files: files passed Feb 8 23:54:06.473293 ignition[964]: INFO : Ignition finished successfully Feb 8 23:54:06.559992 kernel: audit: type=1130 audit(1707436446.472:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:54:06.560032 kernel: audit: type=1131 audit(1707436446.472:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:54:06.560051 kernel: audit: type=1130 audit(1707436446.513:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:54:06.472000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:54:06.513000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:54:06.328762 systemd[1]: Starting initrd-setup-root-after-ignition.service... 
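
Each createFiles op in the files stage above downloads an artifact and, where the config pins a digest, checks it against the expected SHA-512 before the file lands under /sysroot. A sketch of that verify step, reusing the kubectl URL and digest exactly as logged for op(6); it is not Ignition's own implementation:

    import hashlib
    import urllib.request

    # URL and expected digest copied verbatim from the op(6) entries above.
    URL = "https://dl.k8s.io/release/v1.26.5/bin/linux/amd64/kubectl"
    EXPECTED = ("97840854134909d75a1a2563628cc4ba632067369ce7fc8a8a1e90a387d32dd7"
                "bfd73f4f5b5a82ef842088e7470692951eb7fc869c5f297dd740f855672ee628")

    with urllib.request.urlopen(URL) as resp:
        data = resp.read()

    digest = hashlib.sha512(data).hexdigest()
    if digest != EXPECTED:
        raise ValueError(f"checksum mismatch: got {digest}")
    # Only after this check would the file be written to its /sysroot destination.
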
Feb 8 23:54:06.566016 initrd-setup-root-after-ignition[989]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 8 23:54:06.580764 kernel: audit: type=1131 audit(1707436446.565:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:54:06.565000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:54:06.332983 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Feb 8 23:54:06.340389 systemd[1]: Starting ignition-quench.service... Feb 8 23:54:06.348170 systemd[1]: ignition-quench.service: Deactivated successfully. Feb 8 23:54:06.348255 systemd[1]: Finished ignition-quench.service. Feb 8 23:54:06.425563 systemd[1]: Finished initrd-setup-root-after-ignition.service. Feb 8 23:54:06.429038 systemd[1]: Reached target ignition-complete.target. Feb 8 23:54:06.447955 systemd[1]: Starting initrd-parse-etc.service... Feb 8 23:54:06.468243 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 8 23:54:06.468344 systemd[1]: Finished initrd-parse-etc.service. Feb 8 23:54:06.473123 systemd[1]: Reached target initrd-fs.target. Feb 8 23:54:06.496452 systemd[1]: Reached target initrd.target. Feb 8 23:54:06.628000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:54:06.498150 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Feb 8 23:54:06.645769 kernel: audit: type=1131 audit(1707436446.628:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:54:06.498872 systemd[1]: Starting dracut-pre-pivot.service... Feb 8 23:54:06.511112 systemd[1]: Finished dracut-pre-pivot.service. Feb 8 23:54:06.649000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:54:06.513898 systemd[1]: Starting initrd-cleanup.service... Feb 8 23:54:06.665822 kernel: audit: type=1131 audit(1707436446.649:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:54:06.553123 systemd[1]: Stopped target nss-lookup.target. Feb 8 23:54:06.665000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:54:06.555749 systemd[1]: Stopped target remote-cryptsetup.target. Feb 8 23:54:06.669000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:54:06.560151 systemd[1]: Stopped target timers.target. 
Feb 8 23:54:06.673000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:54:06.561988 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 8 23:54:06.684000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:54:06.686000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:54:06.694000 ignition[1002]: INFO : Ignition 2.14.0 Feb 8 23:54:06.694000 ignition[1002]: INFO : Stage: umount Feb 8 23:54:06.694000 ignition[1002]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 8 23:54:06.694000 ignition[1002]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Feb 8 23:54:06.701000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:54:06.562126 systemd[1]: Stopped dracut-pre-pivot.service. Feb 8 23:54:06.708000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:54:06.712000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:54:06.712754 ignition[1002]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 8 23:54:06.712754 ignition[1002]: INFO : umount: umount passed Feb 8 23:54:06.712754 ignition[1002]: INFO : Ignition finished successfully Feb 8 23:54:06.578478 systemd[1]: Stopped target initrd.target. Feb 8 23:54:06.726000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:54:06.580857 systemd[1]: Stopped target basic.target. Feb 8 23:54:06.730000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:54:06.582817 systemd[1]: Stopped target ignition-complete.target. Feb 8 23:54:06.588987 systemd[1]: Stopped target ignition-diskful.target. Feb 8 23:54:06.737000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:54:06.592811 systemd[1]: Stopped target initrd-root-device.target. Feb 8 23:54:06.597109 systemd[1]: Stopped target remote-fs.target. Feb 8 23:54:06.600684 systemd[1]: Stopped target remote-fs-pre.target. Feb 8 23:54:06.604750 systemd[1]: Stopped target sysinit.target. Feb 8 23:54:06.608625 systemd[1]: Stopped target local-fs.target. Feb 8 23:54:06.612380 systemd[1]: Stopped target local-fs-pre.target. Feb 8 23:54:06.616412 systemd[1]: Stopped target swap.target. 
Feb 8 23:54:06.624401 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 8 23:54:06.624545 systemd[1]: Stopped dracut-pre-mount.service. Feb 8 23:54:06.639539 systemd[1]: Stopped target cryptsetup.target. Feb 8 23:54:06.770000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:54:06.645857 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 8 23:54:06.645996 systemd[1]: Stopped dracut-initqueue.service. Feb 8 23:54:06.660965 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 8 23:54:06.661131 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Feb 8 23:54:06.665978 systemd[1]: ignition-files.service: Deactivated successfully. Feb 8 23:54:06.666101 systemd[1]: Stopped ignition-files.service. Feb 8 23:54:06.669507 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Feb 8 23:54:06.669642 systemd[1]: Stopped flatcar-metadata-hostname.service. Feb 8 23:54:06.674632 systemd[1]: Stopping ignition-mount.service... Feb 8 23:54:06.677298 systemd[1]: Stopping iscsiuio.service... Feb 8 23:54:06.679957 systemd[1]: Stopping sysroot-boot.service... Feb 8 23:54:06.681919 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 8 23:54:06.682101 systemd[1]: Stopped systemd-udev-trigger.service. Feb 8 23:54:06.684568 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 8 23:54:06.684710 systemd[1]: Stopped dracut-pre-trigger.service. Feb 8 23:54:06.689190 systemd[1]: iscsiuio.service: Deactivated successfully. Feb 8 23:54:06.689333 systemd[1]: Stopped iscsiuio.service. Feb 8 23:54:06.706733 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 8 23:54:06.706811 systemd[1]: Stopped ignition-mount.service. Feb 8 23:54:06.709006 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 8 23:54:06.709098 systemd[1]: Stopped ignition-disks.service. Feb 8 23:54:06.712726 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 8 23:54:06.712875 systemd[1]: Stopped ignition-kargs.service. Feb 8 23:54:06.726870 systemd[1]: ignition-fetch.service: Deactivated successfully. Feb 8 23:54:06.726996 systemd[1]: Stopped ignition-fetch.service. Feb 8 23:54:06.730529 systemd[1]: Stopped target network.target. Feb 8 23:54:06.733974 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 8 23:54:06.734109 systemd[1]: Stopped ignition-fetch-offline.service. Feb 8 23:54:06.738134 systemd[1]: Stopped target paths.target. Feb 8 23:54:06.740217 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 8 23:54:06.745337 systemd[1]: Stopped systemd-ask-password-console.path. Feb 8 23:54:06.748747 systemd[1]: Stopped target slices.target. Feb 8 23:54:06.752252 systemd[1]: Stopped target sockets.target. Feb 8 23:54:06.755764 systemd[1]: iscsid.socket: Deactivated successfully. Feb 8 23:54:06.755862 systemd[1]: Closed iscsid.socket. Feb 8 23:54:06.762851 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 8 23:54:06.762955 systemd[1]: Closed iscsiuio.socket. Feb 8 23:54:06.766786 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 8 23:54:06.766905 systemd[1]: Stopped ignition-setup.service. Feb 8 23:54:06.770912 systemd[1]: Stopping systemd-networkd.service... Feb 8 23:54:06.774313 systemd[1]: Stopping systemd-resolved.service... 
Feb 8 23:54:06.777325 systemd-networkd[805]: eth0: DHCPv6 lease lost Feb 8 23:54:06.779956 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 8 23:54:06.795016 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 8 23:54:06.798350 systemd[1]: Stopped systemd-resolved.service. Feb 8 23:54:06.863000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:54:06.864213 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 8 23:54:06.864331 systemd[1]: Stopped systemd-networkd.service. Feb 8 23:54:06.868000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:54:06.870635 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 8 23:54:06.870740 systemd[1]: Stopped sysroot-boot.service. Feb 8 23:54:06.874000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:54:06.877000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:54:06.877000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:54:06.875534 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 8 23:54:06.877000 audit: BPF prog-id=9 op=UNLOAD Feb 8 23:54:06.877000 audit: BPF prog-id=6 op=UNLOAD Feb 8 23:54:06.875621 systemd[1]: Finished initrd-cleanup.service. Feb 8 23:54:06.885000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:54:06.878889 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 8 23:54:06.878923 systemd[1]: Closed systemd-networkd.socket. Feb 8 23:54:06.893000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:54:06.882298 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 8 23:54:06.898000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:54:06.882352 systemd[1]: Stopped initrd-setup-root.service. Feb 8 23:54:06.902000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:54:06.886689 systemd[1]: Stopping network-cleanup.service... Feb 8 23:54:06.890213 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 8 23:54:06.910000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:54:06.890266 systemd[1]: Stopped parse-ip-for-networkd.service. 
Feb 8 23:54:06.893963 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 8 23:54:06.894033 systemd[1]: Stopped systemd-sysctl.service. Feb 8 23:54:06.898446 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 8 23:54:06.898496 systemd[1]: Stopped systemd-modules-load.service. Feb 8 23:54:06.928000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:54:06.902686 systemd[1]: Stopping systemd-udevd.service... Feb 8 23:54:06.930000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:54:06.933000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:54:06.906476 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 8 23:54:06.906601 systemd[1]: Stopped systemd-udevd.service. Feb 8 23:54:06.940000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:54:06.943000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:54:06.947000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:54:06.913202 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 8 23:54:06.913260 systemd[1]: Closed systemd-udevd-control.socket. Feb 8 23:54:06.918655 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 8 23:54:06.918695 systemd[1]: Closed systemd-udevd-kernel.socket. Feb 8 23:54:06.923264 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 8 23:54:06.923339 systemd[1]: Stopped dracut-pre-udev.service. Feb 8 23:54:06.961000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:54:06.961000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:54:06.928741 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 8 23:54:06.928787 systemd[1]: Stopped dracut-cmdline.service. Feb 8 23:54:06.930538 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 8 23:54:06.930577 systemd[1]: Stopped dracut-cmdline-ask.service. Feb 8 23:54:06.934821 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Feb 8 23:54:06.938352 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Feb 8 23:54:06.938413 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. Feb 8 23:54:06.940969 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 8 23:54:06.941018 systemd[1]: Stopped kmod-static-nodes.service. 
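
The audit records interleaved above all follow one shape: a SERVICE_START or SERVICE_STOP type, a unit= field, and a res= outcome. A small sketch for pulling those fields out when reading such a log; the sample record is shortened from the systemd-udevd stop above, and the regex is illustrative:

    import re

    record = ("audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 "
              "subj=kernel msg='unit=systemd-udevd comm=\"systemd\" "
              "exe=\"/usr/lib/systemd/systemd\" hostname=? addr=? terminal=? "
              "res=success'")

    m = re.search(r"(SERVICE_START|SERVICE_STOP).*?unit=([\w@.-]+).*?res=(\w+)", record)
    if m:
        event, unit, result = m.groups()
        print(event, unit, result)  # SERVICE_STOP systemd-udevd success
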
Feb 8 23:54:06.943347 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 8 23:54:06.943401 systemd[1]: Stopped systemd-vconsole-setup.service. Feb 8 23:54:06.953783 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 8 23:54:06.955606 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Feb 8 23:54:07.001282 kernel: hv_netvsc 000d3a64-f0cc-000d-3a64-f0cc000d3a64 eth0: Data path switched from VF: enP59385s1 Feb 8 23:54:07.021014 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 8 23:54:07.023501 systemd[1]: Stopped network-cleanup.service. Feb 8 23:54:07.026000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:54:07.027268 systemd[1]: Reached target initrd-switch-root.target. Feb 8 23:54:07.032211 systemd[1]: Starting initrd-switch-root.service... Feb 8 23:54:07.043133 systemd[1]: Switching root. Feb 8 23:54:07.071247 iscsid[814]: iscsid shutting down. Feb 8 23:54:07.072971 systemd-journald[183]: Received SIGTERM from PID 1 (n/a). Feb 8 23:54:07.073032 systemd-journald[183]: Journal stopped Feb 8 23:54:20.236018 kernel: SELinux: Class mctp_socket not defined in policy. Feb 8 23:54:20.236053 kernel: SELinux: Class anon_inode not defined in policy. Feb 8 23:54:20.236065 kernel: SELinux: the above unknown classes and permissions will be allowed Feb 8 23:54:20.236073 kernel: SELinux: policy capability network_peer_controls=1 Feb 8 23:54:20.236081 kernel: SELinux: policy capability open_perms=1 Feb 8 23:54:20.236089 kernel: SELinux: policy capability extended_socket_class=1 Feb 8 23:54:20.236102 kernel: SELinux: policy capability always_check_network=0 Feb 8 23:54:20.236112 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 8 23:54:20.236124 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 8 23:54:20.236132 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 8 23:54:20.236143 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 8 23:54:20.236154 systemd[1]: Successfully loaded SELinux policy in 292.584ms. Feb 8 23:54:20.236167 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 24.601ms. Feb 8 23:54:20.236179 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Feb 8 23:54:20.236193 systemd[1]: Detected virtualization microsoft. Feb 8 23:54:20.236206 systemd[1]: Detected architecture x86-64. Feb 8 23:54:20.236214 systemd[1]: Detected first boot. Feb 8 23:54:20.236228 systemd[1]: Hostname set to <ci-3510.3.2-a-b1d3c6d57d>. Feb 8 23:54:20.236238 systemd[1]: Initializing machine ID from random generator. Feb 8 23:54:20.236253 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Feb 8 23:54:20.236290 systemd[1]: Populated /etc with preset unit settings. Feb 8 23:54:20.236303 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 8 23:54:20.236314 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
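
The journal goes silent between "Journal stopped" at 23:54:07 and the first post-switch-root messages at 23:54:20, which brackets the root switch and the SELinux policy load reported above. Gaps like this can be measured straight from the timestamp prefixes (a sketch; the prefixes carry no year, so datetime supplies a default, which is harmless for a same-day difference):

    from datetime import datetime

    FMT = "%b %d %H:%M:%S.%f"  # matches the "Feb 8 23:54:07.073032" prefixes
    stopped   = datetime.strptime("Feb 8 23:54:07.073032", FMT)
    restarted = datetime.strptime("Feb 8 23:54:20.236018", FMT)
    print(restarted - stopped)  # ~13.16 s between initrd journal exit and policy load
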
Feb 8 23:54:20.236328 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 8 23:54:20.236338 kernel: kauditd_printk_skb: 50 callbacks suppressed Feb 8 23:54:20.236349 kernel: audit: type=1334 audit(1707436459.693:91): prog-id=12 op=LOAD Feb 8 23:54:20.236362 kernel: audit: type=1334 audit(1707436459.693:92): prog-id=3 op=UNLOAD Feb 8 23:54:20.236372 kernel: audit: type=1334 audit(1707436459.698:93): prog-id=13 op=LOAD Feb 8 23:54:20.236383 kernel: audit: type=1334 audit(1707436459.703:94): prog-id=14 op=LOAD Feb 8 23:54:20.236395 kernel: audit: type=1334 audit(1707436459.703:95): prog-id=4 op=UNLOAD Feb 8 23:54:20.236404 kernel: audit: type=1334 audit(1707436459.703:96): prog-id=5 op=UNLOAD Feb 8 23:54:20.236416 kernel: audit: type=1334 audit(1707436459.708:97): prog-id=15 op=LOAD Feb 8 23:54:20.236426 kernel: audit: type=1334 audit(1707436459.708:98): prog-id=12 op=UNLOAD Feb 8 23:54:20.236436 kernel: audit: type=1334 audit(1707436459.727:99): prog-id=16 op=LOAD Feb 8 23:54:20.236447 kernel: audit: type=1334 audit(1707436459.733:100): prog-id=17 op=LOAD Feb 8 23:54:20.236459 systemd[1]: iscsid.service: Deactivated successfully. Feb 8 23:54:20.236470 systemd[1]: Stopped iscsid.service. Feb 8 23:54:20.236483 systemd[1]: initrd-switch-root.service: Deactivated successfully. Feb 8 23:54:20.236492 systemd[1]: Stopped initrd-switch-root.service. Feb 8 23:54:20.236504 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Feb 8 23:54:20.236518 systemd[1]: Created slice system-addon\x2dconfig.slice. Feb 8 23:54:20.236533 systemd[1]: Created slice system-addon\x2drun.slice. Feb 8 23:54:20.236543 systemd[1]: Created slice system-getty.slice. Feb 8 23:54:20.236555 systemd[1]: Created slice system-modprobe.slice. Feb 8 23:54:20.236565 systemd[1]: Created slice system-serial\x2dgetty.slice. Feb 8 23:54:20.236578 systemd[1]: Created slice system-system\x2dcloudinit.slice. Feb 8 23:54:20.236587 systemd[1]: Created slice system-systemd\x2dfsck.slice. Feb 8 23:54:20.236600 systemd[1]: Created slice user.slice. Feb 8 23:54:20.236609 systemd[1]: Started systemd-ask-password-console.path. Feb 8 23:54:20.236622 systemd[1]: Started systemd-ask-password-wall.path. Feb 8 23:54:20.236634 systemd[1]: Set up automount boot.automount. Feb 8 23:54:20.236646 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Feb 8 23:54:20.236656 systemd[1]: Stopped target initrd-switch-root.target. Feb 8 23:54:20.236669 systemd[1]: Stopped target initrd-fs.target. Feb 8 23:54:20.236679 systemd[1]: Stopped target initrd-root-fs.target. Feb 8 23:54:20.236692 systemd[1]: Reached target integritysetup.target. Feb 8 23:54:20.236701 systemd[1]: Reached target remote-cryptsetup.target. Feb 8 23:54:20.236713 systemd[1]: Reached target remote-fs.target. Feb 8 23:54:20.236725 systemd[1]: Reached target slices.target. Feb 8 23:54:20.236738 systemd[1]: Reached target swap.target. Feb 8 23:54:20.236747 systemd[1]: Reached target torcx.target. Feb 8 23:54:20.236761 systemd[1]: Reached target veritysetup.target. Feb 8 23:54:20.236771 systemd[1]: Listening on systemd-coredump.socket. Feb 8 23:54:20.236784 systemd[1]: Listening on systemd-initctl.socket. Feb 8 23:54:20.236797 systemd[1]: Listening on systemd-networkd.socket. Feb 8 23:54:20.236809 systemd[1]: Listening on systemd-udevd-control.socket. 
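
The docker.socket warning above is systemd normalizing a legacy path at unit load: ListenStream= on line 8 of the unit points below /var/run/, and systemd rewrites it to the /run/ equivalent. The rewrite amounts to a prefix swap, sketched below on an illustrative copy of that line; updating the unit file itself, as the message asks, silences the warning:

    # The single normalization the warning describes: /var/run -> /run prefix swap.
    line = "ListenStream=/var/run/docker.sock"  # docker.socket:8, per the warning
    key, _, path = line.partition("=")
    if key == "ListenStream" and path.startswith("/var/run/"):
        line = f"{key}={path.replace('/var/run/', '/run/', 1)}"
    print(line)  # ListenStream=/run/docker.sock
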
Feb 8 23:54:20.236819 systemd[1]: Listening on systemd-udevd-kernel.socket. Feb 8 23:54:20.236832 systemd[1]: Listening on systemd-userdbd.socket. Feb 8 23:54:20.236844 systemd[1]: Mounting dev-hugepages.mount... Feb 8 23:54:20.236855 systemd[1]: Mounting dev-mqueue.mount... Feb 8 23:54:20.236867 systemd[1]: Mounting media.mount... Feb 8 23:54:20.236880 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 8 23:54:20.236894 systemd[1]: Mounting sys-kernel-debug.mount... Feb 8 23:54:20.236907 systemd[1]: Mounting sys-kernel-tracing.mount... Feb 8 23:54:20.236918 systemd[1]: Mounting tmp.mount... Feb 8 23:54:20.236930 systemd[1]: Starting flatcar-tmpfiles.service... Feb 8 23:54:20.236940 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Feb 8 23:54:20.236953 systemd[1]: Starting kmod-static-nodes.service... Feb 8 23:54:20.236963 systemd[1]: Starting modprobe@configfs.service... Feb 8 23:54:20.236976 systemd[1]: Starting modprobe@dm_mod.service... Feb 8 23:54:20.236989 systemd[1]: Starting modprobe@drm.service... Feb 8 23:54:20.237001 systemd[1]: Starting modprobe@efi_pstore.service... Feb 8 23:54:20.237013 systemd[1]: Starting modprobe@fuse.service... Feb 8 23:54:20.237024 systemd[1]: Starting modprobe@loop.service... Feb 8 23:54:20.237035 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 8 23:54:20.237047 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Feb 8 23:54:20.237057 systemd[1]: Stopped systemd-fsck-root.service. Feb 8 23:54:20.237069 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Feb 8 23:54:20.237079 systemd[1]: Stopped systemd-fsck-usr.service. Feb 8 23:54:20.237093 systemd[1]: Stopped systemd-journald.service. Feb 8 23:54:20.237104 systemd[1]: Starting systemd-journald.service... Feb 8 23:54:20.237116 kernel: loop: module loaded Feb 8 23:54:20.237125 systemd[1]: Starting systemd-modules-load.service... Feb 8 23:54:20.237137 systemd[1]: Starting systemd-network-generator.service... Feb 8 23:54:20.237147 systemd[1]: Starting systemd-remount-fs.service... Feb 8 23:54:20.237160 systemd[1]: Starting systemd-udev-trigger.service... Feb 8 23:54:20.237170 systemd[1]: verity-setup.service: Deactivated successfully. Feb 8 23:54:20.237182 systemd[1]: Stopped verity-setup.service. Feb 8 23:54:20.237195 kernel: fuse: init (API version 7.34) Feb 8 23:54:20.237207 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 8 23:54:20.237217 systemd[1]: Mounted dev-hugepages.mount. Feb 8 23:54:20.237229 systemd[1]: Mounted dev-mqueue.mount. Feb 8 23:54:20.237239 systemd[1]: Mounted media.mount. Feb 8 23:54:20.237253 systemd[1]: Mounted sys-kernel-debug.mount. Feb 8 23:54:20.237266 systemd[1]: Mounted sys-kernel-tracing.mount. Feb 8 23:54:20.237285 systemd[1]: Mounted tmp.mount. Feb 8 23:54:20.237299 systemd[1]: Finished flatcar-tmpfiles.service. Feb 8 23:54:20.237313 systemd[1]: Finished kmod-static-nodes.service. Feb 8 23:54:20.237326 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 8 23:54:20.237336 systemd[1]: Finished modprobe@configfs.service. Feb 8 23:54:20.237349 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 8 23:54:20.237359 systemd[1]: Finished modprobe@dm_mod.service. Feb 8 23:54:20.237372 systemd[1]: modprobe@drm.service: Deactivated successfully. 
Feb 8 23:54:20.237381 systemd[1]: Finished modprobe@drm.service. Feb 8 23:54:20.237399 systemd-journald[1144]: Journal started Feb 8 23:54:20.237454 systemd-journald[1144]: Runtime Journal (/run/log/journal/6ef0b45f1eb94cf1b5eae5fecc877426) is 8.0M, max 159.0M, 151.0M free. Feb 8 23:54:09.215000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Feb 8 23:54:09.827000 audit[1]: AVC avc: denied { integrity } for pid=1 comm="systemd" lockdown_reason="/dev/mem,kmem,port" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Feb 8 23:54:09.864000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 8 23:54:09.864000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 8 23:54:09.864000 audit: BPF prog-id=10 op=LOAD Feb 8 23:54:09.864000 audit: BPF prog-id=10 op=UNLOAD Feb 8 23:54:09.864000 audit: BPF prog-id=11 op=LOAD Feb 8 23:54:09.864000 audit: BPF prog-id=11 op=UNLOAD Feb 8 23:54:11.224000 audit[1035]: AVC avc: denied { associate } for pid=1035 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Feb 8 23:54:11.224000 audit[1035]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c00014d8a2 a1=c0000cedf8 a2=c0000d70c0 a3=32 items=0 ppid=1018 pid=1035 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:54:11.224000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Feb 8 23:54:11.231000 audit[1035]: AVC avc: denied { associate } for pid=1035 comm="torcx-generator" name="bin" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Feb 8 23:54:11.231000 audit[1035]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c00014d979 a2=1ed a3=0 items=2 ppid=1018 pid=1035 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:54:11.231000 audit: CWD cwd="/" Feb 8 23:54:11.231000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:54:11.231000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:54:11.231000 audit: PROCTITLE 
proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Feb 8 23:54:19.693000 audit: BPF prog-id=12 op=LOAD Feb 8 23:54:19.693000 audit: BPF prog-id=3 op=UNLOAD Feb 8 23:54:19.698000 audit: BPF prog-id=13 op=LOAD Feb 8 23:54:19.703000 audit: BPF prog-id=14 op=LOAD Feb 8 23:54:19.703000 audit: BPF prog-id=4 op=UNLOAD Feb 8 23:54:19.703000 audit: BPF prog-id=5 op=UNLOAD Feb 8 23:54:19.708000 audit: BPF prog-id=15 op=LOAD Feb 8 23:54:19.708000 audit: BPF prog-id=12 op=UNLOAD Feb 8 23:54:19.727000 audit: BPF prog-id=16 op=LOAD Feb 8 23:54:19.733000 audit: BPF prog-id=17 op=LOAD Feb 8 23:54:19.733000 audit: BPF prog-id=13 op=UNLOAD Feb 8 23:54:19.733000 audit: BPF prog-id=14 op=UNLOAD Feb 8 23:54:19.738000 audit: BPF prog-id=18 op=LOAD Feb 8 23:54:19.738000 audit: BPF prog-id=15 op=UNLOAD Feb 8 23:54:19.743000 audit: BPF prog-id=19 op=LOAD Feb 8 23:54:19.747000 audit: BPF prog-id=20 op=LOAD Feb 8 23:54:19.747000 audit: BPF prog-id=16 op=UNLOAD Feb 8 23:54:19.747000 audit: BPF prog-id=17 op=UNLOAD Feb 8 23:54:19.748000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:54:19.758000 audit: BPF prog-id=18 op=UNLOAD Feb 8 23:54:19.758000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:54:19.769000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:54:19.769000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:54:20.074000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:54:20.085000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:54:20.092000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:54:20.092000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 8 23:54:20.093000 audit: BPF prog-id=21 op=LOAD Feb 8 23:54:20.093000 audit: BPF prog-id=22 op=LOAD Feb 8 23:54:20.093000 audit: BPF prog-id=23 op=LOAD Feb 8 23:54:20.094000 audit: BPF prog-id=19 op=UNLOAD Feb 8 23:54:20.094000 audit: BPF prog-id=20 op=UNLOAD Feb 8 23:54:20.150000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:54:20.202000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:54:20.208000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:54:20.217000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:54:20.217000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:54:20.226000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:54:20.226000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:54:20.232000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Feb 8 23:54:20.232000 audit[1144]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=4 a1=7ffde1719340 a2=4000 a3=7ffde17193dc items=0 ppid=1 pid=1144 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:54:20.232000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Feb 8 23:54:20.238000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:54:20.238000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:54:19.692536 systemd[1]: Queued start job for default target multi-user.target. Feb 8 23:54:11.179553 /usr/lib/systemd/system-generators/torcx-generator[1035]: time="2024-02-08T23:54:11Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 8 23:54:19.748489 systemd[1]: systemd-journald.service: Deactivated successfully. 
Feb 8 23:54:11.193760 /usr/lib/systemd/system-generators/torcx-generator[1035]: time="2024-02-08T23:54:11Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Feb 8 23:54:11.193789 /usr/lib/systemd/system-generators/torcx-generator[1035]: time="2024-02-08T23:54:11Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Feb 8 23:54:11.193834 /usr/lib/systemd/system-generators/torcx-generator[1035]: time="2024-02-08T23:54:11Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Feb 8 23:54:11.193846 /usr/lib/systemd/system-generators/torcx-generator[1035]: time="2024-02-08T23:54:11Z" level=debug msg="skipped missing lower profile" missing profile=oem Feb 8 23:54:11.193907 /usr/lib/systemd/system-generators/torcx-generator[1035]: time="2024-02-08T23:54:11Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Feb 8 23:54:11.193924 /usr/lib/systemd/system-generators/torcx-generator[1035]: time="2024-02-08T23:54:11Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Feb 8 23:54:11.194159 /usr/lib/systemd/system-generators/torcx-generator[1035]: time="2024-02-08T23:54:11Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Feb 8 23:54:11.194212 /usr/lib/systemd/system-generators/torcx-generator[1035]: time="2024-02-08T23:54:11Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Feb 8 23:54:11.194230 /usr/lib/systemd/system-generators/torcx-generator[1035]: time="2024-02-08T23:54:11Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Feb 8 23:54:11.208918 /usr/lib/systemd/system-generators/torcx-generator[1035]: time="2024-02-08T23:54:11Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Feb 8 23:54:11.208965 /usr/lib/systemd/system-generators/torcx-generator[1035]: time="2024-02-08T23:54:11Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Feb 8 23:54:11.208993 /usr/lib/systemd/system-generators/torcx-generator[1035]: time="2024-02-08T23:54:11Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.2: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.2 Feb 8 23:54:11.209010 /usr/lib/systemd/system-generators/torcx-generator[1035]: time="2024-02-08T23:54:11Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Feb 8 23:54:11.209031 /usr/lib/systemd/system-generators/torcx-generator[1035]: time="2024-02-08T23:54:11Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.2: no such file or directory" path=/var/lib/torcx/store/3510.3.2 Feb 8 23:54:11.209052 /usr/lib/systemd/system-generators/torcx-generator[1035]: time="2024-02-08T23:54:11Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Feb 8 23:54:18.456103 /usr/lib/systemd/system-generators/torcx-generator[1035]: time="2024-02-08T23:54:18Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 8 23:54:20.245536 systemd[1]: Started systemd-journald.service. 
Feb 8 23:54:20.244000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:54:18.456383 /usr/lib/systemd/system-generators/torcx-generator[1035]: time="2024-02-08T23:54:18Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 8 23:54:20.245395 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 8 23:54:18.456488 /usr/lib/systemd/system-generators/torcx-generator[1035]: time="2024-02-08T23:54:18Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 8 23:54:18.456656 /usr/lib/systemd/system-generators/torcx-generator[1035]: time="2024-02-08T23:54:18Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 8 23:54:18.456701 /usr/lib/systemd/system-generators/torcx-generator[1035]: time="2024-02-08T23:54:18Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Feb 8 23:54:18.456755 /usr/lib/systemd/system-generators/torcx-generator[1035]: time="2024-02-08T23:54:18Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Feb 8 23:54:20.246443 systemd[1]: Finished modprobe@efi_pstore.service. Feb 8 23:54:20.248000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:54:20.248000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:54:20.249059 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 8 23:54:20.249310 systemd[1]: Finished modprobe@fuse.service. Feb 8 23:54:20.250000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:54:20.251000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:54:20.251826 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 8 23:54:20.252048 systemd[1]: Finished modprobe@loop.service. 
Feb 8 23:54:20.253000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:54:20.253000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:54:20.254467 systemd[1]: Finished systemd-modules-load.service. Feb 8 23:54:20.256000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:54:20.257248 systemd[1]: Finished systemd-network-generator.service. Feb 8 23:54:20.259000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:54:20.260178 systemd[1]: Finished systemd-remount-fs.service. Feb 8 23:54:20.262000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:54:20.263167 systemd[1]: Reached target network-pre.target. Feb 8 23:54:20.267245 systemd[1]: Mounting sys-fs-fuse-connections.mount... Feb 8 23:54:20.271867 systemd[1]: Mounting sys-kernel-config.mount... Feb 8 23:54:20.279195 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 8 23:54:20.281763 systemd[1]: Starting systemd-hwdb-update.service... Feb 8 23:54:20.286104 systemd[1]: Starting systemd-journal-flush.service... Feb 8 23:54:20.288453 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 8 23:54:20.290076 systemd[1]: Starting systemd-random-seed.service... Feb 8 23:54:20.292629 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Feb 8 23:54:20.294356 systemd[1]: Starting systemd-sysctl.service... Feb 8 23:54:20.299217 systemd[1]: Starting systemd-sysusers.service... Feb 8 23:54:20.304259 systemd[1]: Mounted sys-fs-fuse-connections.mount. Feb 8 23:54:20.308765 systemd[1]: Mounted sys-kernel-config.mount. Feb 8 23:54:20.318582 systemd[1]: Finished systemd-random-seed.service. Feb 8 23:54:20.321243 systemd[1]: Reached target first-boot-complete.target. Feb 8 23:54:20.320000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:54:20.334970 systemd[1]: Finished systemd-udev-trigger.service. Feb 8 23:54:20.336000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:54:20.338220 systemd[1]: Starting systemd-udev-settle.service... Feb 8 23:54:20.352221 systemd-journald[1144]: Time spent on flushing to /var/log/journal/6ef0b45f1eb94cf1b5eae5fecc877426 is 26.615ms for 1215 entries. 
Feb 8 23:54:20.352221 systemd-journald[1144]: System Journal (/var/log/journal/6ef0b45f1eb94cf1b5eae5fecc877426) is 8.0M, max 2.6G, 2.6G free. Feb 8 23:54:20.441534 systemd-journald[1144]: Received client request to flush runtime journal. Feb 8 23:54:20.395000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:54:20.392728 systemd[1]: Finished systemd-sysctl.service. Feb 8 23:54:20.442663 udevadm[1158]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Feb 8 23:54:20.444000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:54:20.442701 systemd[1]: Finished systemd-journal-flush.service. Feb 8 23:54:20.794710 systemd[1]: Finished systemd-sysusers.service. Feb 8 23:54:20.798000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:54:20.800394 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Feb 8 23:54:21.067571 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Feb 8 23:54:21.070000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:54:21.545511 systemd[1]: Finished systemd-hwdb-update.service. Feb 8 23:54:21.547000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:54:21.548000 audit: BPF prog-id=24 op=LOAD Feb 8 23:54:21.548000 audit: BPF prog-id=25 op=LOAD Feb 8 23:54:21.548000 audit: BPF prog-id=7 op=UNLOAD Feb 8 23:54:21.548000 audit: BPF prog-id=8 op=UNLOAD Feb 8 23:54:21.549740 systemd[1]: Starting systemd-udevd.service... Feb 8 23:54:21.568406 systemd-udevd[1163]: Using default interface naming scheme 'v252'. Feb 8 23:54:21.885345 systemd[1]: Started systemd-udevd.service. Feb 8 23:54:21.887000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:54:21.889000 audit: BPF prog-id=26 op=LOAD Feb 8 23:54:21.892581 systemd[1]: Starting systemd-networkd.service... Feb 8 23:54:21.927638 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped. Feb 8 23:54:21.981302 kernel: mousedev: PS/2 mouse device common for all mice Feb 8 23:54:22.007000 audit[1175]: AVC avc: denied { confidentiality } for pid=1175 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Feb 8 23:54:22.021000 audit: BPF prog-id=27 op=LOAD Feb 8 23:54:22.021000 audit: BPF prog-id=28 op=LOAD Feb 8 23:54:22.021000 audit: BPF prog-id=29 op=LOAD Feb 8 23:54:22.023061 systemd[1]: Starting systemd-userdbd.service... 
Feb 8 23:54:22.054294 kernel: hv_vmbus: registering driver hv_balloon Feb 8 23:54:22.064291 kernel: hv_vmbus: registering driver hyperv_fb Feb 8 23:54:22.076672 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Feb 8 23:54:22.076752 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Feb 8 23:54:22.084379 kernel: hv_utils: Registering HyperV Utility Driver Feb 8 23:54:22.087778 kernel: hv_vmbus: registering driver hv_utils Feb 8 23:54:22.087850 kernel: Console: switching to colour dummy device 80x25 Feb 8 23:54:22.090964 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Feb 8 23:54:22.104294 kernel: hv_utils: Heartbeat IC version 3.0 Feb 8 23:54:22.104394 kernel: hv_utils: Shutdown IC version 3.2 Feb 8 23:54:22.104423 kernel: hv_utils: TimeSync IC version 4.0 Feb 8 23:54:21.861019 systemd[1]: Started systemd-userdbd.service. Feb 8 23:54:21.925415 systemd-journald[1144]: Time jumped backwards, rotating. Feb 8 23:54:21.925544 kernel: Console: switching to colour frame buffer device 128x48 Feb 8 23:54:21.875000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:54:22.007000 audit[1175]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=564f9edb8040 a1=f884 a2=7f23aab4dbc5 a3=5 items=12 ppid=1163 pid=1175 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:54:22.007000 audit: CWD cwd="/" Feb 8 23:54:22.007000 audit: PATH item=0 name=(null) inode=235 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:54:22.007000 audit: PATH item=1 name=(null) inode=15036 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:54:22.007000 audit: PATH item=2 name=(null) inode=15036 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:54:22.007000 audit: PATH item=3 name=(null) inode=15037 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:54:22.007000 audit: PATH item=4 name=(null) inode=15036 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:54:22.007000 audit: PATH item=5 name=(null) inode=15038 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:54:22.007000 audit: PATH item=6 name=(null) inode=15036 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:54:22.007000 audit: PATH item=7 name=(null) inode=15039 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:54:22.007000 audit: PATH item=8 name=(null) inode=15036 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:54:22.007000 audit: PATH item=9 name=(null) inode=15040 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:54:22.007000 audit: PATH item=10 name=(null) inode=15036 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:54:22.007000 audit: PATH item=11 name=(null) inode=15041 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:54:22.007000 audit: PROCTITLE proctitle="(udev-worker)" Feb 8 23:54:22.022489 kernel: KVM: vmx: using Hyper-V Enlightened VMCS Feb 8 23:54:22.133711 kernel: BTRFS info: devid 1 device path /dev/disk/by-label/OEM changed to /dev/sda6 scanned by (udev-worker) (1167) Feb 8 23:54:22.146527 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Feb 8 23:54:22.731226 systemd[1]: Finished systemd-udev-settle.service. Feb 8 23:54:22.731000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:54:22.734912 systemd[1]: Starting lvm2-activation-early.service... Feb 8 23:54:22.815987 systemd-networkd[1169]: lo: Link UP Feb 8 23:54:22.815999 systemd-networkd[1169]: lo: Gained carrier Feb 8 23:54:22.816686 systemd-networkd[1169]: Enumeration completed Feb 8 23:54:22.816824 systemd[1]: Started systemd-networkd.service. Feb 8 23:54:22.818000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:54:22.820819 systemd[1]: Starting systemd-networkd-wait-online.service... Feb 8 23:54:22.844821 systemd-networkd[1169]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 8 23:54:22.877471 kernel: mlx5_core e7f9:00:02.0 enP59385s1: Link up Feb 8 23:54:22.919509 kernel: hv_netvsc 000d3a64-f0cc-000d-3a64-f0cc000d3a64 eth0: Data path switched to VF: enP59385s1 Feb 8 23:54:22.921213 systemd-networkd[1169]: enP59385s1: Link UP Feb 8 23:54:22.921480 systemd-networkd[1169]: eth0: Link UP Feb 8 23:54:22.921586 systemd-networkd[1169]: eth0: Gained carrier Feb 8 23:54:22.924756 systemd-networkd[1169]: enP59385s1: Gained carrier Feb 8 23:54:22.946566 systemd-networkd[1169]: eth0: DHCPv4 address 10.200.8.17/24, gateway 10.200.8.1 acquired from 168.63.129.16 Feb 8 23:54:23.065810 lvm[1241]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 8 23:54:23.112643 systemd[1]: Finished lvm2-activation-early.service. Feb 8 23:54:23.114000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:54:23.115906 systemd[1]: Reached target cryptsetup.target. Feb 8 23:54:23.119711 systemd[1]: Starting lvm2-activation.service... Feb 8 23:54:23.123837 lvm[1243]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 8 23:54:23.150514 systemd[1]: Finished lvm2-activation.service. 
Feb 8 23:54:23.152000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:54:23.153276 systemd[1]: Reached target local-fs-pre.target. Feb 8 23:54:23.155782 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 8 23:54:23.155820 systemd[1]: Reached target local-fs.target. Feb 8 23:54:23.157779 systemd[1]: Reached target machines.target. Feb 8 23:54:23.160880 systemd[1]: Starting ldconfig.service... Feb 8 23:54:23.163146 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Feb 8 23:54:23.163256 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 8 23:54:23.164422 systemd[1]: Starting systemd-boot-update.service... Feb 8 23:54:23.167714 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Feb 8 23:54:23.171648 systemd[1]: Starting systemd-machine-id-commit.service... Feb 8 23:54:23.174262 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met. Feb 8 23:54:23.174368 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met. Feb 8 23:54:23.175409 systemd[1]: Starting systemd-tmpfiles-setup.service... Feb 8 23:54:23.207407 systemd-tmpfiles[1248]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Feb 8 23:54:23.221736 systemd-tmpfiles[1248]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 8 23:54:23.223302 systemd-tmpfiles[1248]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 8 23:54:23.280535 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1245 (bootctl) Feb 8 23:54:23.282280 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Feb 8 23:54:23.291404 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Feb 8 23:54:23.293000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:54:23.954546 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 8 23:54:23.955271 systemd[1]: Finished systemd-machine-id-commit.service. Feb 8 23:54:23.956000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:54:23.983611 systemd-networkd[1169]: eth0: Gained IPv6LL Feb 8 23:54:23.989443 systemd[1]: Finished systemd-networkd-wait-online.service. Feb 8 23:54:23.988000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 8 23:54:24.606481 systemd-fsck[1253]: fsck.fat 4.2 (2021-01-31) Feb 8 23:54:24.606481 systemd-fsck[1253]: /dev/sda1: 789 files, 115332/258078 clusters Feb 8 23:54:24.608729 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Feb 8 23:54:24.610000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:54:24.614121 systemd[1]: Mounting boot.mount... Feb 8 23:54:24.617256 kernel: kauditd_printk_skb: 84 callbacks suppressed Feb 8 23:54:24.617323 kernel: audit: type=1130 audit(1707436464.610:168): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:54:24.633813 systemd[1]: Mounted boot.mount. Feb 8 23:54:24.648933 systemd[1]: Finished systemd-boot-update.service. Feb 8 23:54:24.649000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:54:24.662509 kernel: audit: type=1130 audit(1707436464.649:169): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:54:25.294302 systemd[1]: Finished systemd-tmpfiles-setup.service. Feb 8 23:54:25.296000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:54:25.298761 systemd[1]: Starting audit-rules.service... Feb 8 23:54:25.309044 kernel: audit: type=1130 audit(1707436465.296:170): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:54:25.311099 systemd[1]: Starting clean-ca-certificates.service... Feb 8 23:54:25.314492 systemd[1]: Starting systemd-journal-catalog-update.service... Feb 8 23:54:25.316000 audit: BPF prog-id=30 op=LOAD Feb 8 23:54:25.325864 kernel: audit: type=1334 audit(1707436465.316:171): prog-id=30 op=LOAD Feb 8 23:54:25.322653 systemd[1]: Starting systemd-resolved.service... Feb 8 23:54:25.328033 systemd[1]: Starting systemd-timesyncd.service... Feb 8 23:54:25.331429 systemd[1]: Starting systemd-update-utmp.service... Feb 8 23:54:25.325000 audit: BPF prog-id=31 op=LOAD Feb 8 23:54:25.336460 kernel: audit: type=1334 audit(1707436465.325:172): prog-id=31 op=LOAD Feb 8 23:54:25.605000 audit[1265]: SYSTEM_BOOT pid=1265 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Feb 8 23:54:25.607905 systemd[1]: Finished systemd-update-utmp.service. Feb 8 23:54:25.618480 kernel: audit: type=1127 audit(1707436465.605:173): pid=1265 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? 
res=success' Feb 8 23:54:25.618000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:54:25.620342 systemd[1]: Started systemd-timesyncd.service. Feb 8 23:54:25.634702 kernel: audit: type=1130 audit(1707436465.618:174): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:54:25.633868 systemd[1]: Reached target time-set.target. Feb 8 23:54:25.632000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:54:25.645520 kernel: audit: type=1130 audit(1707436465.632:175): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:54:25.784078 systemd[1]: Finished clean-ca-certificates.service. Feb 8 23:54:25.791000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:54:25.793198 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 8 23:54:25.806513 kernel: audit: type=1130 audit(1707436465.791:176): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:54:25.852346 systemd[1]: Finished systemd-journal-catalog-update.service. Feb 8 23:54:25.853000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:54:25.870474 kernel: audit: type=1130 audit(1707436465.853:177): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:54:25.880065 systemd-resolved[1262]: Positive Trust Anchors: Feb 8 23:54:25.880081 systemd-resolved[1262]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 8 23:54:25.880122 systemd-resolved[1262]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Feb 8 23:54:25.899618 systemd-timesyncd[1263]: Contacted time server 162.159.200.123:123 (0.flatcar.pool.ntp.org). Feb 8 23:54:25.899687 systemd-timesyncd[1263]: Initial clock synchronization to Thu 2024-02-08 23:54:25.901820 UTC. 
Feb 8 23:54:26.400503 systemd-resolved[1262]: Using system hostname 'ci-3510.3.2-a-b1d3c6d57d'. Feb 8 23:54:26.402521 systemd[1]: Started systemd-resolved.service. Feb 8 23:54:26.403000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:54:26.405249 systemd[1]: Reached target network.target. Feb 8 23:54:26.407558 systemd[1]: Reached target network-online.target. Feb 8 23:54:26.410017 systemd[1]: Reached target nss-lookup.target. Feb 8 23:54:26.457000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Feb 8 23:54:26.457000 audit[1280]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffd60e1f390 a2=420 a3=0 items=0 ppid=1259 pid=1280 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:54:26.457000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Feb 8 23:54:26.459386 augenrules[1280]: No rules Feb 8 23:54:26.460174 systemd[1]: Finished audit-rules.service. Feb 8 23:54:33.402506 ldconfig[1244]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 8 23:54:33.420934 systemd[1]: Finished ldconfig.service. Feb 8 23:54:33.425674 systemd[1]: Starting systemd-update-done.service... Feb 8 23:54:33.464339 systemd[1]: Finished systemd-update-done.service. Feb 8 23:54:33.466868 systemd[1]: Reached target sysinit.target. Feb 8 23:54:33.468864 systemd[1]: Started motdgen.path. Feb 8 23:54:33.470354 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Feb 8 23:54:33.473110 systemd[1]: Started logrotate.timer. Feb 8 23:54:33.475020 systemd[1]: Started mdadm.timer. Feb 8 23:54:33.476578 systemd[1]: Started systemd-tmpfiles-clean.timer. Feb 8 23:54:33.478526 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 8 23:54:33.478563 systemd[1]: Reached target paths.target. Feb 8 23:54:33.480271 systemd[1]: Reached target timers.target. Feb 8 23:54:33.482699 systemd[1]: Listening on dbus.socket. Feb 8 23:54:33.485374 systemd[1]: Starting docker.socket... Feb 8 23:54:33.489764 systemd[1]: Listening on sshd.socket. Feb 8 23:54:33.491836 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 8 23:54:33.492243 systemd[1]: Listening on docker.socket. Feb 8 23:54:33.494166 systemd[1]: Reached target sockets.target. Feb 8 23:54:33.495993 systemd[1]: Reached target basic.target. Feb 8 23:54:33.498035 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 8 23:54:33.498069 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 8 23:54:33.498999 systemd[1]: Starting containerd.service... Feb 8 23:54:33.502490 systemd[1]: Starting dbus.service... Feb 8 23:54:33.505273 systemd[1]: Starting enable-oem-cloudinit.service... Feb 8 23:54:33.508513 systemd[1]: Starting extend-filesystems.service... 
Feb 8 23:54:33.510852 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Feb 8 23:54:33.512162 systemd[1]: Starting motdgen.service... Feb 8 23:54:33.517646 systemd[1]: Started nvidia.service. Feb 8 23:54:33.520867 systemd[1]: Starting prepare-cni-plugins.service... Feb 8 23:54:33.524468 systemd[1]: Starting prepare-critools.service... Feb 8 23:54:33.527861 systemd[1]: Starting prepare-helm.service... Feb 8 23:54:33.531169 systemd[1]: Starting ssh-key-proc-cmdline.service... Feb 8 23:54:33.535477 systemd[1]: Starting sshd-keygen.service... Feb 8 23:54:33.541945 systemd[1]: Starting systemd-logind.service... Feb 8 23:54:33.546950 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 8 23:54:33.547034 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 8 23:54:33.547693 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 8 23:54:33.548570 systemd[1]: Starting update-engine.service... Feb 8 23:54:33.553741 systemd[1]: Starting update-ssh-keys-after-ignition.service... Feb 8 23:54:33.563848 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 8 23:54:33.564125 systemd[1]: Finished ssh-key-proc-cmdline.service. Feb 8 23:54:33.645806 extend-filesystems[1291]: Found sda Feb 8 23:54:33.648628 extend-filesystems[1291]: Found sda1 Feb 8 23:54:33.648628 extend-filesystems[1291]: Found sda2 Feb 8 23:54:33.648628 extend-filesystems[1291]: Found sda3 Feb 8 23:54:33.648628 extend-filesystems[1291]: Found usr Feb 8 23:54:33.648628 extend-filesystems[1291]: Found sda4 Feb 8 23:54:33.648628 extend-filesystems[1291]: Found sda6 Feb 8 23:54:33.648628 extend-filesystems[1291]: Found sda7 Feb 8 23:54:33.648628 extend-filesystems[1291]: Found sda9 Feb 8 23:54:33.648628 extend-filesystems[1291]: Checking size of /dev/sda9 Feb 8 23:54:33.692120 systemd[1]: motdgen.service: Deactivated successfully. Feb 8 23:54:33.692365 systemd[1]: Finished motdgen.service. Feb 8 23:54:33.740311 jq[1290]: false Feb 8 23:54:33.740661 jq[1308]: true Feb 8 23:54:33.740712 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 8 23:54:33.740951 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Feb 8 23:54:34.145333 jq[1321]: true Feb 8 23:54:34.215320 env[1316]: time="2024-02-08T23:54:34.215272873Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Feb 8 23:54:34.229225 env[1316]: time="2024-02-08T23:54:34.229178562Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 8 23:54:34.229371 env[1316]: time="2024-02-08T23:54:34.229332275Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 8 23:54:34.230529 env[1316]: time="2024-02-08T23:54:34.230492074Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.148-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 8 23:54:34.230529 env[1316]: time="2024-02-08T23:54:34.230520177Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 8 23:54:34.230782 env[1316]: time="2024-02-08T23:54:34.230757097Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 8 23:54:34.230782 env[1316]: time="2024-02-08T23:54:34.230779399Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 8 23:54:34.230867 env[1316]: time="2024-02-08T23:54:34.230796300Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Feb 8 23:54:34.230867 env[1316]: time="2024-02-08T23:54:34.230808702Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 8 23:54:34.230944 env[1316]: time="2024-02-08T23:54:34.230905710Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 8 23:54:34.231163 env[1316]: time="2024-02-08T23:54:34.231132429Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 8 23:54:34.231342 env[1316]: time="2024-02-08T23:54:34.231314545Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 8 23:54:34.231342 env[1316]: time="2024-02-08T23:54:34.231334146Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 8 23:54:34.231434 env[1316]: time="2024-02-08T23:54:34.231396152Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Feb 8 23:54:34.231434 env[1316]: time="2024-02-08T23:54:34.231410653Z" level=info msg="metadata content store policy set" policy=shared Feb 8 23:54:34.241387 systemd-logind[1304]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Feb 8 23:54:34.243586 systemd-logind[1304]: New seat seat0. Feb 8 23:54:34.290273 tar[1310]: ./ Feb 8 23:54:34.290273 tar[1310]: ./macvlan Feb 8 23:54:34.368365 tar[1312]: linux-amd64/helm Feb 8 23:54:34.368682 tar[1311]: crictl Feb 8 23:54:34.915180 extend-filesystems[1291]: Old size kept for /dev/sda9 Feb 8 23:54:34.915180 extend-filesystems[1291]: Found sr0 Feb 8 23:54:34.919649 tar[1310]: ./static Feb 8 23:54:34.737858 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 8 23:54:34.738053 systemd[1]: Finished extend-filesystems.service. Feb 8 23:54:34.983376 dbus-daemon[1289]: [system] SELinux support is enabled Feb 8 23:54:34.983603 systemd[1]: Started dbus.service. Feb 8 23:54:34.992758 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 8 23:54:34.992790 systemd[1]: Reached target system-config.target. 
Feb 8 23:54:34.994161 dbus-daemon[1289]: [system] Successfully activated service 'org.freedesktop.systemd1' Feb 8 23:54:34.997413 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 8 23:54:34.997443 systemd[1]: Reached target user-config.target. Feb 8 23:54:34.999724 systemd[1]: Started systemd-logind.service. Feb 8 23:54:35.018872 env[1316]: time="2024-02-08T23:54:35.016737628Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 8 23:54:35.018872 env[1316]: time="2024-02-08T23:54:35.016803034Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 8 23:54:35.018872 env[1316]: time="2024-02-08T23:54:35.016821335Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 8 23:54:35.018872 env[1316]: time="2024-02-08T23:54:35.016908142Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 8 23:54:35.018872 env[1316]: time="2024-02-08T23:54:35.016987748Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 8 23:54:35.018872 env[1316]: time="2024-02-08T23:54:35.017047553Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 8 23:54:35.018872 env[1316]: time="2024-02-08T23:54:35.017070655Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 8 23:54:35.018872 env[1316]: time="2024-02-08T23:54:35.017102558Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 8 23:54:35.018872 env[1316]: time="2024-02-08T23:54:35.017122459Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Feb 8 23:54:35.018872 env[1316]: time="2024-02-08T23:54:35.017140961Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 8 23:54:35.018872 env[1316]: time="2024-02-08T23:54:35.017213267Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 8 23:54:35.018872 env[1316]: time="2024-02-08T23:54:35.017243769Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 8 23:54:35.018872 env[1316]: time="2024-02-08T23:54:35.017406082Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 8 23:54:35.018872 env[1316]: time="2024-02-08T23:54:35.017536292Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 8 23:54:35.019390 tar[1310]: ./vlan Feb 8 23:54:35.019428 env[1316]: time="2024-02-08T23:54:35.017996429Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 8 23:54:35.019428 env[1316]: time="2024-02-08T23:54:35.018043033Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 8 23:54:35.019428 env[1316]: time="2024-02-08T23:54:35.018063735Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." 
type=io.containerd.internal.v1 Feb 8 23:54:35.019428 env[1316]: time="2024-02-08T23:54:35.018137741Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 8 23:54:35.019428 env[1316]: time="2024-02-08T23:54:35.018156942Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 8 23:54:35.019428 env[1316]: time="2024-02-08T23:54:35.018173744Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 8 23:54:35.019428 env[1316]: time="2024-02-08T23:54:35.018255750Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 8 23:54:35.019428 env[1316]: time="2024-02-08T23:54:35.018274052Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 8 23:54:35.019428 env[1316]: time="2024-02-08T23:54:35.018292653Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 8 23:54:35.019428 env[1316]: time="2024-02-08T23:54:35.018309354Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 8 23:54:35.019428 env[1316]: time="2024-02-08T23:54:35.018337657Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 8 23:54:35.019428 env[1316]: time="2024-02-08T23:54:35.018362459Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 8 23:54:35.019428 env[1316]: time="2024-02-08T23:54:35.018600878Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 8 23:54:35.019428 env[1316]: time="2024-02-08T23:54:35.018642981Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 8 23:54:35.019428 env[1316]: time="2024-02-08T23:54:35.018663183Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 8 23:54:35.080725 env[1316]: time="2024-02-08T23:54:35.018681284Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 8 23:54:35.080725 env[1316]: time="2024-02-08T23:54:35.018718787Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Feb 8 23:54:35.080725 env[1316]: time="2024-02-08T23:54:35.018735589Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 8 23:54:35.080725 env[1316]: time="2024-02-08T23:54:35.018760191Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Feb 8 23:54:35.080725 env[1316]: time="2024-02-08T23:54:35.018812095Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Feb 8 23:54:35.021768 systemd[1]: Started containerd.service. Feb 8 23:54:35.050521 systemd[1]: nvidia.service: Deactivated successfully. 
Feb 8 23:54:35.081737 env[1316]: time="2024-02-08T23:54:35.019767271Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 8 23:54:35.081737 env[1316]: time="2024-02-08T23:54:35.019882581Z" level=info msg="Connect containerd service" Feb 8 23:54:35.081737 env[1316]: time="2024-02-08T23:54:35.019933485Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 8 23:54:35.081737 env[1316]: time="2024-02-08T23:54:35.020831157Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 8 23:54:35.081737 env[1316]: time="2024-02-08T23:54:35.020928064Z" level=info msg="Start subscribing containerd event" Feb 8 23:54:35.081737 env[1316]: time="2024-02-08T23:54:35.020990369Z" level=info msg="Start recovering state" Feb 8 23:54:35.081737 env[1316]: time="2024-02-08T23:54:35.021059575Z" level=info msg="Start event monitor" Feb 8 23:54:35.081737 env[1316]: time="2024-02-08T23:54:35.021078776Z" level=info msg="Start snapshots syncer" Feb 8 23:54:35.081737 env[1316]: time="2024-02-08T23:54:35.021089777Z" level=info msg="Start cni network conf syncer for default" Feb 8 23:54:35.081737 env[1316]: time="2024-02-08T23:54:35.021099778Z" level=info msg="Start streaming server" Feb 8 23:54:35.081737 env[1316]: time="2024-02-08T23:54:35.021593118Z" level=info msg=serving... 
address=/run/containerd/containerd.sock.ttrpc Feb 8 23:54:35.081737 env[1316]: time="2024-02-08T23:54:35.021650322Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 8 23:54:35.081737 env[1316]: time="2024-02-08T23:54:35.021703427Z" level=info msg="containerd successfully booted in 0.808366s" Feb 8 23:54:35.152996 tar[1310]: ./portmap Feb 8 23:54:35.227497 bash[1342]: Updated "/home/core/.ssh/authorized_keys" Feb 8 23:54:35.228177 systemd[1]: Finished update-ssh-keys-after-ignition.service. Feb 8 23:54:35.281018 tar[1310]: ./host-local Feb 8 23:54:35.352038 tar[1310]: ./vrf Feb 8 23:54:35.423438 tar[1310]: ./bridge Feb 8 23:54:35.513570 tar[1310]: ./tuning Feb 8 23:54:35.585189 tar[1310]: ./firewall Feb 8 23:54:35.674534 tar[1310]: ./host-device Feb 8 23:54:35.757478 tar[1310]: ./sbr Feb 8 23:54:35.828049 tar[1310]: ./loopback Feb 8 23:54:35.890636 tar[1312]: linux-amd64/LICENSE Feb 8 23:54:35.891180 tar[1312]: linux-amd64/README.md Feb 8 23:54:35.901019 tar[1310]: ./dhcp Feb 8 23:54:35.902126 systemd[1]: Finished prepare-helm.service. Feb 8 23:54:35.931808 systemd[1]: Finished prepare-critools.service. Feb 8 23:54:36.011698 tar[1310]: ./ptp Feb 8 23:54:36.015780 sshd_keygen[1313]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 8 23:54:36.047070 systemd[1]: Finished sshd-keygen.service. Feb 8 23:54:36.051493 systemd[1]: Starting issuegen.service... Feb 8 23:54:36.055401 systemd[1]: Started waagent.service. Feb 8 23:54:36.064324 tar[1310]: ./ipvlan Feb 8 23:54:36.068819 systemd[1]: issuegen.service: Deactivated successfully. Feb 8 23:54:36.069004 systemd[1]: Finished issuegen.service. Feb 8 23:54:36.073001 systemd[1]: Starting systemd-user-sessions.service... Feb 8 23:54:36.084502 systemd[1]: Finished systemd-user-sessions.service. Feb 8 23:54:36.088417 systemd[1]: Started getty@tty1.service. Feb 8 23:54:36.092971 systemd[1]: Started serial-getty@ttyS0.service. Feb 8 23:54:36.095671 systemd[1]: Reached target getty.target. Feb 8 23:54:36.103963 tar[1310]: ./bandwidth Feb 8 23:54:36.242030 systemd[1]: Finished prepare-cni-plugins.service. Feb 8 23:54:36.292308 update_engine[1307]: I0208 23:54:36.291891 1307 main.cc:92] Flatcar Update Engine starting Feb 8 23:54:36.355642 systemd[1]: Started update-engine.service. Feb 8 23:54:36.358135 update_engine[1307]: I0208 23:54:36.355674 1307 update_check_scheduler.cc:74] Next update check in 5m45s Feb 8 23:54:36.360786 systemd[1]: Started locksmithd.service. Feb 8 23:54:36.363656 systemd[1]: Reached target multi-user.target. Feb 8 23:54:36.367982 systemd[1]: Starting systemd-update-utmp-runlevel.service... Feb 8 23:54:36.385115 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Feb 8 23:54:36.385302 systemd[1]: Finished systemd-update-utmp-runlevel.service. Feb 8 23:54:36.388234 systemd[1]: Startup finished in 1.213s (firmware) + 26.764s (loader) + 930ms (kernel) + 16.967s (initrd) + 27.968s (userspace) = 1min 13.843s. Feb 8 23:54:37.537707 login[1409]: pam_lastlog(login:session): file /var/log/lastlog is locked/write Feb 8 23:54:37.538105 login[1408]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Feb 8 23:54:37.576867 systemd[1]: Created slice user-500.slice. Feb 8 23:54:37.578307 systemd[1]: Starting user-runtime-dir@500.service... Feb 8 23:54:37.583053 systemd-logind[1304]: New session 2 of user core. Feb 8 23:54:37.589636 systemd[1]: Finished user-runtime-dir@500.service. Feb 8 23:54:37.591499 systemd[1]: Starting user@500.service... 
Feb 8 23:54:37.595118 (systemd)[1419]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:54:37.753603 systemd[1419]: Queued start job for default target default.target. Feb 8 23:54:37.754306 systemd[1419]: Reached target paths.target. Feb 8 23:54:37.754338 systemd[1419]: Reached target sockets.target. Feb 8 23:54:37.754357 systemd[1419]: Reached target timers.target. Feb 8 23:54:37.754374 systemd[1419]: Reached target basic.target. Feb 8 23:54:37.754441 systemd[1419]: Reached target default.target. Feb 8 23:54:37.754506 systemd[1419]: Startup finished in 153ms. Feb 8 23:54:37.754553 systemd[1]: Started user@500.service. Feb 8 23:54:37.755969 systemd[1]: Started session-2.scope. Feb 8 23:54:38.539741 login[1409]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Feb 8 23:54:38.544688 systemd[1]: Started session-1.scope. Feb 8 23:54:38.545180 systemd-logind[1304]: New session 1 of user core. Feb 8 23:54:39.539816 locksmithd[1412]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 8 23:54:44.007621 waagent[1403]: 2024-02-08T23:54:44.007497Z INFO Daemon Daemon Azure Linux Agent Version:2.6.0.2 Feb 8 23:54:44.011675 waagent[1403]: 2024-02-08T23:54:44.011597Z INFO Daemon Daemon OS: flatcar 3510.3.2 Feb 8 23:54:44.014390 waagent[1403]: 2024-02-08T23:54:44.014331Z INFO Daemon Daemon Python: 3.9.16 Feb 8 23:54:44.016972 waagent[1403]: 2024-02-08T23:54:44.016902Z INFO Daemon Daemon Run daemon Feb 8 23:54:44.019419 waagent[1403]: 2024-02-08T23:54:44.019357Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='3510.3.2' Feb 8 23:54:44.032295 waagent[1403]: 2024-02-08T23:54:44.032177Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1. 
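The "Unable to get cloud-init enabled status" entries above are waagent probing whether cloud-init owns provisioning; on Flatcar it does not, so the probe fails and the agent falls back to its own provisioning path. A rough Python equivalent of that probe (the unit name is taken from the log itself):

import subprocess

def cloud_init_enabled():
    # systemctl exits non-zero when the unit is disabled or missing, which is
    # exactly the "returned non-zero exit status 1" outcome logged above.
    try:
        subprocess.run(
            ["systemctl", "is-enabled", "cloud-init-local.service"],
            check=True, capture_output=True,
        )
        return True
    except (subprocess.CalledProcessError, FileNotFoundError):
        return False

print("cloud-init is enabled:", cloud_init_enabled())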
Feb 8 23:54:44.039006 waagent[1403]: 2024-02-08T23:54:44.038901Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Feb 8 23:54:44.082721 waagent[1403]: 2024-02-08T23:54:44.040357Z INFO Daemon Daemon cloud-init is enabled: False Feb 8 23:54:44.082721 waagent[1403]: 2024-02-08T23:54:44.041094Z INFO Daemon Daemon Using waagent for provisioning Feb 8 23:54:44.082721 waagent[1403]: 2024-02-08T23:54:44.042422Z INFO Daemon Daemon Activate resource disk Feb 8 23:54:44.082721 waagent[1403]: 2024-02-08T23:54:44.043155Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Feb 8 23:54:44.082721 waagent[1403]: 2024-02-08T23:54:44.050785Z INFO Daemon Daemon Found device: None Feb 8 23:54:44.082721 waagent[1403]: 2024-02-08T23:54:44.051560Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Feb 8 23:54:44.082721 waagent[1403]: 2024-02-08T23:54:44.052328Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Feb 8 23:54:44.082721 waagent[1403]: 2024-02-08T23:54:44.054060Z INFO Daemon Daemon Clean protocol and wireserver endpoint Feb 8 23:54:44.082721 waagent[1403]: 2024-02-08T23:54:44.054997Z INFO Daemon Daemon Running default provisioning handler Feb 8 23:54:44.082721 waagent[1403]: 2024-02-08T23:54:44.065245Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1. Feb 8 23:54:44.082721 waagent[1403]: 2024-02-08T23:54:44.068217Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Feb 8 23:54:44.082721 waagent[1403]: 2024-02-08T23:54:44.069480Z INFO Daemon Daemon cloud-init is enabled: False Feb 8 23:54:44.082721 waagent[1403]: 2024-02-08T23:54:44.070232Z INFO Daemon Daemon Copying ovf-env.xml Feb 8 23:54:44.090156 waagent[1403]: 2024-02-08T23:54:44.090044Z INFO Daemon Daemon Successfully mounted dvd Feb 8 23:54:44.212221 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Feb 8 23:54:44.245904 waagent[1403]: 2024-02-08T23:54:44.245763Z INFO Daemon Daemon Detect protocol endpoint Feb 8 23:54:44.249418 waagent[1403]: 2024-02-08T23:54:44.249336Z INFO Daemon Daemon Clean protocol and wireserver endpoint Feb 8 23:54:44.253027 waagent[1403]: 2024-02-08T23:54:44.252944Z INFO Daemon Daemon WireServer endpoint is not found. 
Rerun dhcp handler Feb 8 23:54:44.256862 waagent[1403]: 2024-02-08T23:54:44.256797Z INFO Daemon Daemon Test for route to 168.63.129.16 Feb 8 23:54:44.259684 waagent[1403]: 2024-02-08T23:54:44.259577Z INFO Daemon Daemon Route to 168.63.129.16 exists Feb 8 23:54:44.262365 waagent[1403]: 2024-02-08T23:54:44.262303Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Feb 8 23:54:44.338578 waagent[1403]: 2024-02-08T23:54:44.338477Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Feb 8 23:54:44.347290 waagent[1403]: 2024-02-08T23:54:44.340735Z INFO Daemon Daemon Wire protocol version:2012-11-30 Feb 8 23:54:44.347290 waagent[1403]: 2024-02-08T23:54:44.341632Z INFO Daemon Daemon Server preferred version:2015-04-05 Feb 8 23:54:44.710693 waagent[1403]: 2024-02-08T23:54:44.710524Z INFO Daemon Daemon Initializing goal state during protocol detection Feb 8 23:54:44.721956 waagent[1403]: 2024-02-08T23:54:44.721866Z INFO Daemon Daemon Forcing an update of the goal state.. Feb 8 23:54:44.726896 waagent[1403]: 2024-02-08T23:54:44.723090Z INFO Daemon Daemon Fetching goal state [incarnation 1] Feb 8 23:54:44.912496 waagent[1403]: 2024-02-08T23:54:44.912359Z INFO Daemon Daemon Found private key matching thumbprint 0C617C314A6D601EF04C49CE9B66374DDC527E2A Feb 8 23:54:44.922908 waagent[1403]: 2024-02-08T23:54:44.913519Z INFO Daemon Daemon Certificate with thumbprint 5C9AD88E51C8DDCB470FF4EF131450C05D8855D4 has no matching private key. Feb 8 23:54:44.922908 waagent[1403]: 2024-02-08T23:54:44.914263Z INFO Daemon Daemon Fetch goal state completed Feb 8 23:54:44.974125 waagent[1403]: 2024-02-08T23:54:44.973970Z INFO Daemon Daemon Fetched new vmSettings [correlation ID: 5c3fe2ca-64dc-4e54-aa8a-036319b78828 New eTag: 17642294980384862564] Feb 8 23:54:44.983059 waagent[1403]: 2024-02-08T23:54:44.976052Z INFO Daemon Daemon Status Blob type 'None' is not valid, assuming BlockBlob Feb 8 23:54:44.990837 waagent[1403]: 2024-02-08T23:54:44.990774Z INFO Daemon Daemon Starting provisioning Feb 8 23:54:44.997493 waagent[1403]: 2024-02-08T23:54:44.991999Z INFO Daemon Daemon Handle ovf-env.xml. Feb 8 23:54:44.997493 waagent[1403]: 2024-02-08T23:54:44.992836Z INFO Daemon Daemon Set hostname [ci-3510.3.2-a-b1d3c6d57d] Feb 8 23:54:45.039351 waagent[1403]: 2024-02-08T23:54:45.039197Z INFO Daemon Daemon Publish hostname [ci-3510.3.2-a-b1d3c6d57d] Feb 8 23:54:45.047020 waagent[1403]: 2024-02-08T23:54:45.041075Z INFO Daemon Daemon Examine /proc/net/route for primary interface Feb 8 23:54:45.047020 waagent[1403]: 2024-02-08T23:54:45.042316Z INFO Daemon Daemon Primary interface is [eth0] Feb 8 23:54:45.055788 systemd[1]: systemd-networkd-wait-online.service: Deactivated successfully. Feb 8 23:54:45.056043 systemd[1]: Stopped systemd-networkd-wait-online.service. Feb 8 23:54:45.056123 systemd[1]: Stopping systemd-networkd-wait-online.service... Feb 8 23:54:45.056442 systemd[1]: Stopping systemd-networkd.service... Feb 8 23:54:45.060510 systemd-networkd[1169]: eth0: DHCPv6 lease lost Feb 8 23:54:45.061970 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 8 23:54:45.062170 systemd[1]: Stopped systemd-networkd.service. Feb 8 23:54:45.064547 systemd[1]: Starting systemd-networkd.service... 
Feb 8 23:54:45.096365 systemd-networkd[1462]: enP59385s1: Link UP Feb 8 23:54:45.096375 systemd-networkd[1462]: enP59385s1: Gained carrier Feb 8 23:54:45.097756 systemd-networkd[1462]: eth0: Link UP Feb 8 23:54:45.097766 systemd-networkd[1462]: eth0: Gained carrier Feb 8 23:54:45.098195 systemd-networkd[1462]: lo: Link UP Feb 8 23:54:45.098205 systemd-networkd[1462]: lo: Gained carrier Feb 8 23:54:45.098626 systemd-networkd[1462]: eth0: Gained IPv6LL Feb 8 23:54:45.099273 systemd-networkd[1462]: Enumeration completed Feb 8 23:54:45.099483 systemd[1]: Started systemd-networkd.service. Feb 8 23:54:45.111198 waagent[1403]: 2024-02-08T23:54:45.100886Z INFO Daemon Daemon Create user account if not exists Feb 8 23:54:45.111198 waagent[1403]: 2024-02-08T23:54:45.105235Z INFO Daemon Daemon User core already exists, skip useradd Feb 8 23:54:45.111198 waagent[1403]: 2024-02-08T23:54:45.106576Z INFO Daemon Daemon Configure sudoer Feb 8 23:54:45.104358 systemd[1]: Starting systemd-networkd-wait-online.service... Feb 8 23:54:45.109714 systemd-networkd[1462]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 8 23:54:45.127444 waagent[1403]: 2024-02-08T23:54:45.127336Z INFO Daemon Daemon Configure sshd Feb 8 23:54:45.131978 waagent[1403]: 2024-02-08T23:54:45.128830Z INFO Daemon Daemon Deploy ssh public key. Feb 8 23:54:45.148531 systemd-networkd[1462]: eth0: DHCPv4 address 10.200.8.17/24, gateway 10.200.8.1 acquired from 168.63.129.16 Feb 8 23:54:45.152104 systemd[1]: Finished systemd-networkd-wait-online.service. Feb 8 23:54:45.167248 waagent[1403]: 2024-02-08T23:54:45.167126Z INFO Daemon Daemon Decode custom data Feb 8 23:54:45.170009 waagent[1403]: 2024-02-08T23:54:45.169934Z INFO Daemon Daemon Save custom data Feb 8 23:54:46.396656 waagent[1403]: 2024-02-08T23:54:46.396557Z INFO Daemon Daemon Provisioning complete Feb 8 23:54:46.413088 waagent[1403]: 2024-02-08T23:54:46.413008Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Feb 8 23:54:46.416367 waagent[1403]: 2024-02-08T23:54:46.416299Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Feb 8 23:54:46.421845 waagent[1403]: 2024-02-08T23:54:46.421779Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.6.0.2 is the most current agent Feb 8 23:54:46.686036 waagent[1471]: 2024-02-08T23:54:46.685839Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 is running as the goal state agent Feb 8 23:54:46.686768 waagent[1471]: 2024-02-08T23:54:46.686698Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 8 23:54:46.686915 waagent[1471]: 2024-02-08T23:54:46.686860Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 8 23:54:46.698018 waagent[1471]: 2024-02-08T23:54:46.697940Z INFO ExtHandler ExtHandler Forcing an update of the goal state.. Feb 8 23:54:46.698187 waagent[1471]: 2024-02-08T23:54:46.698131Z INFO ExtHandler ExtHandler Fetching goal state [incarnation 1] Feb 8 23:54:46.759172 waagent[1471]: 2024-02-08T23:54:46.759042Z INFO ExtHandler ExtHandler Found private key matching thumbprint 0C617C314A6D601EF04C49CE9B66374DDC527E2A Feb 8 23:54:46.759406 waagent[1471]: 2024-02-08T23:54:46.759342Z INFO ExtHandler ExtHandler Certificate with thumbprint 5C9AD88E51C8DDCB470FF4EF131450C05D8855D4 has no matching private key. 
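"Test for route to 168.63.129.16" in the entries above is waagent confirming the VM can reach the Azure WireServer before provisioning continues. A hedged sketch of how such a check can be done from /proc/net/route (the same table the agent dumps later in this log); note the kernel stores these addresses as little-endian hex:

import socket, struct

def route_exists(dest_ip="168.63.129.16"):
    # True if some route (including the default route) covers dest_ip.
    target = struct.unpack("<I", socket.inet_aton(dest_ip))[0]
    with open("/proc/net/route") as f:
        next(f)  # skip the header row
        for line in f:
            _, dest, _, _, _, _, _, mask, *_ = line.split()
            if target & int(mask, 16) == int(dest, 16):
                return True
    return False

print("Route to 168.63.129.16 exists:", route_exists())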
Feb 8 23:54:46.759675 waagent[1471]: 2024-02-08T23:54:46.759620Z INFO ExtHandler ExtHandler Fetch goal state completed Feb 8 23:54:46.773503 waagent[1471]: 2024-02-08T23:54:46.773432Z INFO ExtHandler ExtHandler Fetched new vmSettings [correlation ID: 08eba7ce-f3a6-4465-8b6c-19cabffd15c1 New eTag: 17642294980384862564] Feb 8 23:54:46.774066 waagent[1471]: 2024-02-08T23:54:46.774006Z INFO ExtHandler ExtHandler Status Blob type 'None' is not valid, assuming BlockBlob Feb 8 23:54:46.834369 waagent[1471]: 2024-02-08T23:54:46.834200Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.2; OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Feb 8 23:54:46.844754 waagent[1471]: 2024-02-08T23:54:46.844672Z INFO ExtHandler ExtHandler WALinuxAgent-2.6.0.2 running as process 1471 Feb 8 23:54:46.848214 waagent[1471]: 2024-02-08T23:54:46.848140Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '3510.3.2', '', 'Flatcar Container Linux by Kinvolk'] Feb 8 23:54:46.849492 waagent[1471]: 2024-02-08T23:54:46.849418Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Feb 8 23:54:46.934182 waagent[1471]: 2024-02-08T23:54:46.934117Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Feb 8 23:54:46.934644 waagent[1471]: 2024-02-08T23:54:46.934572Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Feb 8 23:54:46.942437 waagent[1471]: 2024-02-08T23:54:46.942342Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Feb 8 23:54:46.942874 waagent[1471]: 2024-02-08T23:54:46.942818Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service' Feb 8 23:54:46.943952 waagent[1471]: 2024-02-08T23:54:46.943888Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [False], cgroups enabled [False], python supported: [True] Feb 8 23:54:46.945207 waagent[1471]: 2024-02-08T23:54:46.945148Z INFO ExtHandler ExtHandler Starting env monitor service. Feb 8 23:54:46.945796 waagent[1471]: 2024-02-08T23:54:46.945740Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 8 23:54:46.946207 waagent[1471]: 2024-02-08T23:54:46.946153Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Feb 8 23:54:46.946788 waagent[1471]: 2024-02-08T23:54:46.946736Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 8 23:54:46.946890 waagent[1471]: 2024-02-08T23:54:46.946830Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 8 23:54:46.947384 waagent[1471]: 2024-02-08T23:54:46.947325Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Feb 8 23:54:46.947601 waagent[1471]: 2024-02-08T23:54:46.947547Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Feb 8 23:54:46.948250 waagent[1471]: 2024-02-08T23:54:46.948194Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. 
Feb 8 23:54:46.948676 waagent[1471]: 2024-02-08T23:54:46.948615Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Feb 8 23:54:46.949129 waagent[1471]: 2024-02-08T23:54:46.949084Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 8 23:54:46.949380 waagent[1471]: 2024-02-08T23:54:46.949313Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Feb 8 23:54:46.949874 waagent[1471]: 2024-02-08T23:54:46.949813Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Feb 8 23:54:46.949874 waagent[1471]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Feb 8 23:54:46.949874 waagent[1471]: eth0 00000000 0108C80A 0003 0 0 1024 00000000 0 0 0 Feb 8 23:54:46.949874 waagent[1471]: eth0 0008C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Feb 8 23:54:46.949874 waagent[1471]: eth0 0108C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Feb 8 23:54:46.949874 waagent[1471]: eth0 10813FA8 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Feb 8 23:54:46.949874 waagent[1471]: eth0 FEA9FEA9 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Feb 8 23:54:46.950398 waagent[1471]: 2024-02-08T23:54:46.950334Z INFO EnvHandler ExtHandler Configure routes Feb 8 23:54:46.950857 waagent[1471]: 2024-02-08T23:54:46.950805Z INFO EnvHandler ExtHandler Gateway:None Feb 8 23:54:46.952928 waagent[1471]: 2024-02-08T23:54:46.952728Z INFO EnvHandler ExtHandler Routes:None Feb 8 23:54:46.953389 waagent[1471]: 2024-02-08T23:54:46.953325Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Feb 8 23:54:46.968229 waagent[1471]: 2024-02-08T23:54:46.968169Z INFO ExtHandler ExtHandler Checking for agent updates (family: Prod) Feb 8 23:54:46.968851 waagent[1471]: 2024-02-08T23:54:46.968801Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required Feb 8 23:54:46.969680 waagent[1471]: 2024-02-08T23:54:46.969621Z INFO ExtHandler ExtHandler [PERIODIC] Request failed using the direct channel. Error: 'NoneType' object has no attribute 'getheaders' Feb 8 23:54:46.993471 waagent[1471]: 2024-02-08T23:54:46.993356Z ERROR EnvHandler ExtHandler Failed to get the PID of the DHCP client: invalid literal for int() with base 10: 'MainPID=1462' Feb 8 23:54:47.021386 waagent[1471]: 2024-02-08T23:54:47.021318Z INFO ExtHandler ExtHandler Default channel changed to HostGA channel. 
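The routing table the MonitorHandler prints above is raw /proc/net/route output, so Destination, Gateway, and Mask are little-endian hex words. Decoded, the entries match the DHCP lease logged earlier (gateway 10.200.8.1) plus host routes to the WireServer and the instance metadata endpoint:

import socket, struct

def ntoa(hexaddr):
    # /proc/net/route prints IPv4 addresses as little-endian hex.
    return socket.inet_ntoa(struct.pack("<I", int(hexaddr, 16)))

for dest, gw in [("00000000", "0108C80A"),   # default via 10.200.8.1
                 ("0008C80A", "00000000"),   # 10.200.8.0/24, on-link
                 ("10813FA8", "0108C80A"),   # 168.63.129.16 (WireServer)
                 ("FEA9FEA9", "0108C80A")]:  # 169.254.169.254 (IMDS)
    print(ntoa(dest), "via", ntoa(gw))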
Feb 8 23:54:47.085409 waagent[1471]: 2024-02-08T23:54:47.085301Z INFO MonitorHandler ExtHandler Network interfaces: Feb 8 23:54:47.085409 waagent[1471]: Executing ['ip', '-a', '-o', 'link']: Feb 8 23:54:47.085409 waagent[1471]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Feb 8 23:54:47.085409 waagent[1471]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:64:f0:cc brd ff:ff:ff:ff:ff:ff Feb 8 23:54:47.085409 waagent[1471]: 3: enP59385s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:64:f0:cc brd ff:ff:ff:ff:ff:ff\ altname enP59385p0s2 Feb 8 23:54:47.085409 waagent[1471]: Executing ['ip', '-4', '-a', '-o', 'address']: Feb 8 23:54:47.085409 waagent[1471]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Feb 8 23:54:47.085409 waagent[1471]: 2: eth0 inet 10.200.8.17/24 metric 1024 brd 10.200.8.255 scope global eth0\ valid_lft forever preferred_lft forever Feb 8 23:54:47.085409 waagent[1471]: Executing ['ip', '-6', '-a', '-o', 'address']: Feb 8 23:54:47.085409 waagent[1471]: 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever Feb 8 23:54:47.085409 waagent[1471]: 2: eth0 inet6 fe80::20d:3aff:fe64:f0cc/64 scope link \ valid_lft forever preferred_lft forever Feb 8 23:54:47.321185 waagent[1471]: 2024-02-08T23:54:47.321057Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 discovered update WALinuxAgent-2.9.1.1 -- exiting Feb 8 23:54:47.425921 waagent[1403]: 2024-02-08T23:54:47.425606Z INFO Daemon Daemon Agent WALinuxAgent-2.6.0.2 launched with command '/usr/share/oem/python/bin/python -u /usr/share/oem/bin/waagent -run-exthandlers' is successfully running Feb 8 23:54:47.432315 waagent[1403]: 2024-02-08T23:54:47.432247Z INFO Daemon Daemon Determined Agent WALinuxAgent-2.9.1.1 to be the latest agent Feb 8 23:54:48.401406 waagent[1510]: 2024-02-08T23:54:48.401289Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1) Feb 8 23:54:48.402122 waagent[1510]: 2024-02-08T23:54:48.402052Z INFO ExtHandler ExtHandler OS: flatcar 3510.3.2 Feb 8 23:54:48.402267 waagent[1510]: 2024-02-08T23:54:48.402214Z INFO ExtHandler ExtHandler Python: 3.9.16 Feb 8 23:54:48.411865 waagent[1510]: 2024-02-08T23:54:48.411768Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.2; OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Feb 8 23:54:48.412238 waagent[1510]: 2024-02-08T23:54:48.412182Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 8 23:54:48.412399 waagent[1510]: 2024-02-08T23:54:48.412350Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 8 23:54:48.423928 waagent[1510]: 2024-02-08T23:54:48.423848Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Feb 8 23:54:48.432263 waagent[1510]: 2024-02-08T23:54:48.432203Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.143 Feb 8 23:54:48.433153 waagent[1510]: 2024-02-08T23:54:48.433094Z INFO ExtHandler Feb 8 23:54:48.433299 waagent[1510]: 2024-02-08T23:54:48.433250Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 1acf6d14-0f30-4a06-bd98-7e9b83da5aae eTag: 17642294980384862564 source: Fabric] Feb 8 23:54:48.433998 waagent[1510]: 2024-02-08T23:54:48.433939Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
Feb 8 23:54:48.435087 waagent[1510]: 2024-02-08T23:54:48.435026Z INFO ExtHandler Feb 8 23:54:48.435219 waagent[1510]: 2024-02-08T23:54:48.435167Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Feb 8 23:54:48.441510 waagent[1510]: 2024-02-08T23:54:48.441455Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Feb 8 23:54:48.441919 waagent[1510]: 2024-02-08T23:54:48.441872Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required Feb 8 23:54:48.462941 waagent[1510]: 2024-02-08T23:54:48.462868Z INFO ExtHandler ExtHandler Default channel changed to HostGAPlugin channel. Feb 8 23:54:48.528688 waagent[1510]: 2024-02-08T23:54:48.528555Z INFO ExtHandler Downloaded certificate {'thumbprint': '5C9AD88E51C8DDCB470FF4EF131450C05D8855D4', 'hasPrivateKey': False} Feb 8 23:54:48.529662 waagent[1510]: 2024-02-08T23:54:48.529595Z INFO ExtHandler Downloaded certificate {'thumbprint': '0C617C314A6D601EF04C49CE9B66374DDC527E2A', 'hasPrivateKey': True} Feb 8 23:54:48.530626 waagent[1510]: 2024-02-08T23:54:48.530566Z INFO ExtHandler Fetch goal state completed Feb 8 23:54:48.551812 waagent[1510]: 2024-02-08T23:54:48.551738Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1510 Feb 8 23:54:48.555060 waagent[1510]: 2024-02-08T23:54:48.554996Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '3510.3.2', '', 'Flatcar Container Linux by Kinvolk'] Feb 8 23:54:48.556537 waagent[1510]: 2024-02-08T23:54:48.556480Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Feb 8 23:54:48.561360 waagent[1510]: 2024-02-08T23:54:48.561305Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Feb 8 23:54:48.561731 waagent[1510]: 2024-02-08T23:54:48.561675Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Feb 8 23:54:48.569593 waagent[1510]: 2024-02-08T23:54:48.569542Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Feb 8 23:54:48.570039 waagent[1510]: 2024-02-08T23:54:48.569983Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service' Feb 8 23:54:48.575908 waagent[1510]: 2024-02-08T23:54:48.575816Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Feb 8 23:54:48.580470 waagent[1510]: 2024-02-08T23:54:48.580406Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] Feb 8 23:54:48.581828 waagent[1510]: 2024-02-08T23:54:48.581769Z INFO ExtHandler ExtHandler Starting env monitor service. Feb 8 23:54:48.582273 waagent[1510]: 2024-02-08T23:54:48.582217Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 8 23:54:48.582428 waagent[1510]: 2024-02-08T23:54:48.582380Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 8 23:54:48.582985 waagent[1510]: 2024-02-08T23:54:48.582925Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. 
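The goal-state certificates above are identified by 40-character thumbprints. Assuming the conventional Azure definition (uppercase hex SHA-1 over the DER-encoded certificate), the value can be recomputed locally to match a downloaded certificate against the private key the agent found (the path below is illustrative):

import hashlib, ssl

def thumbprint(pem_path):
    # Uppercase hex SHA-1 of the DER form, i.e. the 40-char values in the log
    # such as 0C617C314A6D601EF04C49CE9B66374DDC527E2A.
    with open(pem_path) as f:
        der = ssl.PEM_cert_to_DER_cert(f.read())
    return hashlib.sha1(der).hexdigest().upper()

# e.g. thumbprint("/var/lib/waagent/some-cert.crt")  # hypothetical file name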
Feb 8 23:54:48.583264 waagent[1510]: 2024-02-08T23:54:48.583211Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Feb 8 23:54:48.583264 waagent[1510]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Feb 8 23:54:48.583264 waagent[1510]: eth0 00000000 0108C80A 0003 0 0 1024 00000000 0 0 0 Feb 8 23:54:48.583264 waagent[1510]: eth0 0008C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Feb 8 23:54:48.583264 waagent[1510]: eth0 0108C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Feb 8 23:54:48.583264 waagent[1510]: eth0 10813FA8 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Feb 8 23:54:48.583264 waagent[1510]: eth0 FEA9FEA9 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Feb 8 23:54:48.585480 waagent[1510]: 2024-02-08T23:54:48.585348Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Feb 8 23:54:48.586688 waagent[1510]: 2024-02-08T23:54:48.586629Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Feb 8 23:54:48.586920 waagent[1510]: 2024-02-08T23:54:48.586866Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Feb 8 23:54:48.587397 waagent[1510]: 2024-02-08T23:54:48.587339Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 8 23:54:48.590615 waagent[1510]: 2024-02-08T23:54:48.590328Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Feb 8 23:54:48.590821 waagent[1510]: 2024-02-08T23:54:48.590754Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 8 23:54:48.591211 waagent[1510]: 2024-02-08T23:54:48.591146Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Feb 8 23:54:48.591681 waagent[1510]: 2024-02-08T23:54:48.591624Z INFO EnvHandler ExtHandler Configure routes Feb 8 23:54:48.591906 waagent[1510]: 2024-02-08T23:54:48.591840Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Feb 8 23:54:48.592171 waagent[1510]: 2024-02-08T23:54:48.592116Z INFO EnvHandler ExtHandler Gateway:None Feb 8 23:54:48.592427 waagent[1510]: 2024-02-08T23:54:48.592373Z INFO EnvHandler ExtHandler Routes:None Feb 8 23:54:48.614361 waagent[1510]: 2024-02-08T23:54:48.614297Z INFO MonitorHandler ExtHandler Network interfaces: Feb 8 23:54:48.614361 waagent[1510]: Executing ['ip', '-a', '-o', 'link']: Feb 8 23:54:48.614361 waagent[1510]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Feb 8 23:54:48.614361 waagent[1510]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:64:f0:cc brd ff:ff:ff:ff:ff:ff Feb 8 23:54:48.614361 waagent[1510]: 3: enP59385s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:64:f0:cc brd ff:ff:ff:ff:ff:ff\ altname enP59385p0s2 Feb 8 23:54:48.614361 waagent[1510]: Executing ['ip', '-4', '-a', '-o', 'address']: Feb 8 23:54:48.614361 waagent[1510]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Feb 8 23:54:48.614361 waagent[1510]: 2: eth0 inet 10.200.8.17/24 metric 1024 brd 10.200.8.255 scope global eth0\ valid_lft forever preferred_lft forever Feb 8 23:54:48.614361 waagent[1510]: Executing ['ip', '-6', '-a', '-o', 'address']: Feb 8 23:54:48.614361 waagent[1510]: 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever Feb 8 23:54:48.614361 waagent[1510]: 2: eth0 inet6 fe80::20d:3aff:fe64:f0cc/64 scope 
link \ valid_lft forever preferred_lft forever Feb 8 23:54:48.623180 waagent[1510]: 2024-02-08T23:54:48.623092Z INFO ExtHandler ExtHandler No requested version specified, checking for all versions for agent update (family: Prod) Feb 8 23:54:48.624184 waagent[1510]: 2024-02-08T23:54:48.624126Z INFO ExtHandler ExtHandler Downloading manifest Feb 8 23:54:48.676430 waagent[1510]: 2024-02-08T23:54:48.676320Z INFO ExtHandler ExtHandler Feb 8 23:54:48.676742 waagent[1510]: 2024-02-08T23:54:48.676687Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: fbe5e65a-1376-43be-aeb1-67ffe3d93858 correlation 5f22ed70-3638-4451-abac-e9e80675b156 created: 2024-02-08T23:53:12.093712Z] Feb 8 23:54:48.677484 waagent[1510]: 2024-02-08T23:54:48.677414Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Feb 8 23:54:48.679817 waagent[1510]: 2024-02-08T23:54:48.679758Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 3 ms] Feb 8 23:54:48.722804 waagent[1510]: 2024-02-08T23:54:48.722729Z INFO ExtHandler ExtHandler Looking for existing remote access users. Feb 8 23:54:48.727186 waagent[1510]: 2024-02-08T23:54:48.727102Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules. Current Firewall rules: Feb 8 23:54:48.727186 waagent[1510]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Feb 8 23:54:48.727186 waagent[1510]: pkts bytes target prot opt in out source destination Feb 8 23:54:48.727186 waagent[1510]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Feb 8 23:54:48.727186 waagent[1510]: pkts bytes target prot opt in out source destination Feb 8 23:54:48.727186 waagent[1510]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Feb 8 23:54:48.727186 waagent[1510]: pkts bytes target prot opt in out source destination Feb 8 23:54:48.727186 waagent[1510]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Feb 8 23:54:48.727186 waagent[1510]: 3 156 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Feb 8 23:54:48.727186 waagent[1510]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Feb 8 23:54:48.741446 waagent[1510]: 2024-02-08T23:54:48.741374Z INFO EnvHandler ExtHandler Current Firewall rules: Feb 8 23:54:48.741446 waagent[1510]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Feb 8 23:54:48.741446 waagent[1510]: pkts bytes target prot opt in out source destination Feb 8 23:54:48.741446 waagent[1510]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Feb 8 23:54:48.741446 waagent[1510]: pkts bytes target prot opt in out source destination Feb 8 23:54:48.741446 waagent[1510]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Feb 8 23:54:48.741446 waagent[1510]: pkts bytes target prot opt in out source destination Feb 8 23:54:48.741446 waagent[1510]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Feb 8 23:54:48.741446 waagent[1510]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Feb 8 23:54:48.741446 waagent[1510]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Feb 8 23:54:48.742111 waagent[1510]: 2024-02-08T23:54:48.742051Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 6163AE3A-3C02-4ABE-946D-2CB9100AA461;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 1] Feb 8 23:54:48.742653 waagent[1510]: 2024-02-08T23:54:48.742599Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Feb 8 23:55:09.984341 
kernel: hv_balloon: Max. dynamic memory size: 8192 MB Feb 8 23:55:20.619109 systemd[1]: Created slice system-sshd.slice. Feb 8 23:55:20.620877 systemd[1]: Started sshd@0-10.200.8.17:22-10.200.12.6:59876.service. Feb 8 23:55:21.454985 sshd[1555]: Accepted publickey for core from 10.200.12.6 port 59876 ssh2: RSA SHA256:bgxHmJM37JVrLJuGSWjL4vRG7UYDV2sE2SVK2HyWFow Feb 8 23:55:21.456754 sshd[1555]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:55:21.460625 systemd-logind[1304]: New session 3 of user core. Feb 8 23:55:21.462218 systemd[1]: Started session-3.scope. Feb 8 23:55:21.523488 update_engine[1307]: I0208 23:55:21.523371 1307 update_attempter.cc:509] Updating boot flags... Feb 8 23:55:21.989402 systemd[1]: Started sshd@1-10.200.8.17:22-10.200.12.6:59892.service. Feb 8 23:55:22.606871 sshd[1599]: Accepted publickey for core from 10.200.12.6 port 59892 ssh2: RSA SHA256:bgxHmJM37JVrLJuGSWjL4vRG7UYDV2sE2SVK2HyWFow Feb 8 23:55:22.608562 sshd[1599]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:55:22.613304 systemd[1]: Started session-4.scope. Feb 8 23:55:22.613921 systemd-logind[1304]: New session 4 of user core. Feb 8 23:55:23.046822 sshd[1599]: pam_unix(sshd:session): session closed for user core Feb 8 23:55:23.050293 systemd[1]: sshd@1-10.200.8.17:22-10.200.12.6:59892.service: Deactivated successfully. Feb 8 23:55:23.051356 systemd[1]: session-4.scope: Deactivated successfully. Feb 8 23:55:23.052109 systemd-logind[1304]: Session 4 logged out. Waiting for processes to exit. Feb 8 23:55:23.053026 systemd-logind[1304]: Removed session 4. Feb 8 23:55:23.153016 systemd[1]: Started sshd@2-10.200.8.17:22-10.200.12.6:59906.service. Feb 8 23:55:23.773580 sshd[1605]: Accepted publickey for core from 10.200.12.6 port 59906 ssh2: RSA SHA256:bgxHmJM37JVrLJuGSWjL4vRG7UYDV2sE2SVK2HyWFow Feb 8 23:55:23.775180 sshd[1605]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:55:23.780293 systemd[1]: Started session-5.scope. Feb 8 23:55:23.780926 systemd-logind[1304]: New session 5 of user core. Feb 8 23:55:24.225310 sshd[1605]: pam_unix(sshd:session): session closed for user core Feb 8 23:55:24.228446 systemd[1]: sshd@2-10.200.8.17:22-10.200.12.6:59906.service: Deactivated successfully. Feb 8 23:55:24.229343 systemd[1]: session-5.scope: Deactivated successfully. Feb 8 23:55:24.229954 systemd-logind[1304]: Session 5 logged out. Waiting for processes to exit. Feb 8 23:55:24.230721 systemd-logind[1304]: Removed session 5. Feb 8 23:55:24.330063 systemd[1]: Started sshd@3-10.200.8.17:22-10.200.12.6:59912.service. Feb 8 23:55:24.947802 sshd[1611]: Accepted publickey for core from 10.200.12.6 port 59912 ssh2: RSA SHA256:bgxHmJM37JVrLJuGSWjL4vRG7UYDV2sE2SVK2HyWFow Feb 8 23:55:24.949553 sshd[1611]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:55:24.954555 systemd-logind[1304]: New session 6 of user core. Feb 8 23:55:24.954655 systemd[1]: Started session-6.scope. Feb 8 23:55:25.388102 sshd[1611]: pam_unix(sshd:session): session closed for user core Feb 8 23:55:25.391543 systemd[1]: sshd@3-10.200.8.17:22-10.200.12.6:59912.service: Deactivated successfully. Feb 8 23:55:25.392575 systemd[1]: session-6.scope: Deactivated successfully. Feb 8 23:55:25.393315 systemd-logind[1304]: Session 6 logged out. Waiting for processes to exit. Feb 8 23:55:25.394265 systemd-logind[1304]: Removed session 6. Feb 8 23:55:25.492804 systemd[1]: Started sshd@4-10.200.8.17:22-10.200.12.6:59920.service. 
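The "Successfully added Azure fabric firewall rules" block a little above (just before the SSH sessions) shows three OUTPUT rules pinned to the WireServer: allow DNS over TCP, allow root-owned traffic (the agent itself), and drop any other new connection. A hedged reconstruction of equivalent iptables invocations (the flag spelling is mine; the agent builds these rules internally):

import subprocess

WIRESERVER = "168.63.129.16"

rules = [
    # allow DNS over TCP to the WireServer (tcp dpt:53)
    ["-p", "tcp", "--dport", "53", "-j", "ACCEPT"],
    # allow traffic owned by UID 0, i.e. the agent itself
    ["-p", "tcp", "-m", "owner", "--uid-owner", "0", "-j", "ACCEPT"],
    # drop any other new or invalid connection to the WireServer
    ["-p", "tcp", "-m", "conntrack", "--ctstate", "INVALID,NEW", "-j", "DROP"],
]
for rule in rules:
    subprocess.run(["iptables", "-w", "-A", "OUTPUT", "-d", WIRESERVER] + rule,
                   check=True)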
Feb 8 23:55:26.115811 sshd[1617]: Accepted publickey for core from 10.200.12.6 port 59920 ssh2: RSA SHA256:bgxHmJM37JVrLJuGSWjL4vRG7UYDV2sE2SVK2HyWFow Feb 8 23:55:26.117575 sshd[1617]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:55:26.123372 systemd[1]: Started session-7.scope. Feb 8 23:55:26.123962 systemd-logind[1304]: New session 7 of user core. Feb 8 23:55:26.685830 sudo[1620]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 8 23:55:26.686161 sudo[1620]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Feb 8 23:55:27.568173 systemd[1]: Starting docker.service... Feb 8 23:55:27.624195 env[1635]: time="2024-02-08T23:55:27.624143965Z" level=info msg="Starting up" Feb 8 23:55:27.625652 env[1635]: time="2024-02-08T23:55:27.625624569Z" level=info msg="parsed scheme: \"unix\"" module=grpc Feb 8 23:55:27.625652 env[1635]: time="2024-02-08T23:55:27.625642969Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Feb 8 23:55:27.625810 env[1635]: time="2024-02-08T23:55:27.625662469Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Feb 8 23:55:27.625810 env[1635]: time="2024-02-08T23:55:27.625676069Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Feb 8 23:55:27.627512 env[1635]: time="2024-02-08T23:55:27.627442774Z" level=info msg="parsed scheme: \"unix\"" module=grpc Feb 8 23:55:27.627615 env[1635]: time="2024-02-08T23:55:27.627600475Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Feb 8 23:55:27.627700 env[1635]: time="2024-02-08T23:55:27.627683675Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Feb 8 23:55:27.627766 env[1635]: time="2024-02-08T23:55:27.627753875Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Feb 8 23:55:27.634433 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport2921825513-merged.mount: Deactivated successfully. Feb 8 23:55:27.681814 env[1635]: time="2024-02-08T23:55:27.681769026Z" level=info msg="Loading containers: start." Feb 8 23:55:27.809477 kernel: Initializing XFRM netlink socket Feb 8 23:55:27.833826 env[1635]: time="2024-02-08T23:55:27.833710351Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Feb 8 23:55:27.941914 systemd-networkd[1462]: docker0: Link UP Feb 8 23:55:27.963967 env[1635]: time="2024-02-08T23:55:27.963921315Z" level=info msg="Loading containers: done." Feb 8 23:55:27.974599 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1582425847-merged.mount: Deactivated successfully. Feb 8 23:55:27.987090 env[1635]: time="2024-02-08T23:55:27.987046380Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 8 23:55:27.987272 env[1635]: time="2024-02-08T23:55:27.987246780Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Feb 8 23:55:27.987376 env[1635]: time="2024-02-08T23:55:27.987355881Z" level=info msg="Daemon has completed initialization" Feb 8 23:55:28.014715 systemd[1]: Started docker.service. 
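Once docker.service is started, the daemon's next log entry reports "API listen on /run/docker.sock". A minimal stdlib probe of that endpoint (/_ping is Docker's documented health-check path; http.client only speaks TCP out of the box, so connect() is overridden to dial the unix socket):

import http.client, socket

class UnixHTTPConnection(http.client.HTTPConnection):
    def __init__(self, path):
        super().__init__("localhost")
        self.unix_path = path

    def connect(self):
        # Dial the daemon's unix socket instead of a TCP host:port.
        self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        self.sock.connect(self.unix_path)

conn = UnixHTTPConnection("/run/docker.sock")
conn.request("GET", "/_ping")
print(conn.getresponse().read())  # b'OK' from a healthy daemon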
Feb 8 23:55:28.025010 env[1635]: time="2024-02-08T23:55:28.024948182Z" level=info msg="API listen on /run/docker.sock" Feb 8 23:55:28.042090 systemd[1]: Reloading. Feb 8 23:55:28.122244 /usr/lib/systemd/system-generators/torcx-generator[1764]: time="2024-02-08T23:55:28Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 8 23:55:28.122708 /usr/lib/systemd/system-generators/torcx-generator[1764]: time="2024-02-08T23:55:28Z" level=info msg="torcx already run" Feb 8 23:55:28.204750 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 8 23:55:28.204770 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 8 23:55:28.222885 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 8 23:55:28.307420 systemd[1]: Started kubelet.service. Feb 8 23:55:28.384210 kubelet[1825]: E0208 23:55:28.384081 1825 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set" Feb 8 23:55:28.386284 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 8 23:55:28.386443 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 8 23:55:32.425918 env[1316]: time="2024-02-08T23:55:32.425859539Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.26.13\"" Feb 8 23:55:33.088280 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount534261274.mount: Deactivated successfully. 
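The kubelet crash above is a plain flag-validation failure: this kubelet refuses to start without an explicit CRI endpoint. A sketch of the check it performs, together with the value that would point it at the containerd socket this host already runs (the actual remediation, e.g. a systemd drop-in, is outside this log):

def validate_kubelet_flags(container_runtime_endpoint):
    # Mirrors the error string logged by run.go:74 above.
    if not container_runtime_endpoint:
        raise ValueError(
            "failed to validate kubelet flags: the container runtime endpoint "
            "address was not specified or empty, "
            "use --container-runtime-endpoint to set")
    return container_runtime_endpoint

# The CRI plugin earlier in this log listens here:
validate_kubelet_flags("unix:///run/containerd/containerd.sock")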
Feb 8 23:55:35.198641 env[1316]: time="2024-02-08T23:55:35.198576938Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:55:35.208568 env[1316]: time="2024-02-08T23:55:35.208522104Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:84900298406b2df97ade16b73c49c2b73265ded8735ac19a4e20c2a4ad65853f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:55:35.215784 env[1316]: time="2024-02-08T23:55:35.215750371Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:55:35.221686 env[1316]: time="2024-02-08T23:55:35.221647789Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:2f28bed4096abd572a56595ac0304238bdc271dcfe22c650707c09bf97ec16fd,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:55:35.222513 env[1316]: time="2024-02-08T23:55:35.222481119Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.26.13\" returns image reference \"sha256:84900298406b2df97ade16b73c49c2b73265ded8735ac19a4e20c2a4ad65853f\"" Feb 8 23:55:35.232924 env[1316]: time="2024-02-08T23:55:35.232892003Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.26.13\"" Feb 8 23:55:37.267229 env[1316]: time="2024-02-08T23:55:37.267161607Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:55:37.276415 env[1316]: time="2024-02-08T23:55:37.276320526Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:921f237b560bdb02300f82d3606635d395b20635512fab10f0191cff42079486,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:55:37.281230 env[1316]: time="2024-02-08T23:55:37.281199197Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:55:37.285467 env[1316]: time="2024-02-08T23:55:37.285426944Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:fda420c6c15cdd01c4eba3404f0662fe486a9c7f38fa13c741a21334673841a2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:55:37.286075 env[1316]: time="2024-02-08T23:55:37.286039866Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.26.13\" returns image reference \"sha256:921f237b560bdb02300f82d3606635d395b20635512fab10f0191cff42079486\"" Feb 8 23:55:37.296493 env[1316]: time="2024-02-08T23:55:37.296466829Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.26.13\"" Feb 8 23:55:38.581437 env[1316]: time="2024-02-08T23:55:38.581368130Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:55:38.593129 env[1316]: time="2024-02-08T23:55:38.593085828Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:4fe82b56f06250b6b7eb3d5a879cd2cfabf41cb3e45b24af6059eadbc3b8026e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:55:38.598822 env[1316]: 
time="2024-02-08T23:55:38.598781721Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:55:38.604677 env[1316]: time="2024-02-08T23:55:38.604642620Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:c3c7303ee6d01c8e5a769db28661cf854b55175aa72c67e9b6a7b9d47ac42af3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:55:38.605429 env[1316]: time="2024-02-08T23:55:38.605385346Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.26.13\" returns image reference \"sha256:4fe82b56f06250b6b7eb3d5a879cd2cfabf41cb3e45b24af6059eadbc3b8026e\"" Feb 8 23:55:38.609950 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 8 23:55:38.610220 systemd[1]: Stopped kubelet.service. Feb 8 23:55:38.612566 systemd[1]: Started kubelet.service. Feb 8 23:55:38.620968 env[1316]: time="2024-02-08T23:55:38.620934373Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.13\"" Feb 8 23:55:38.667744 kubelet[1863]: E0208 23:55:38.667698 1863 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set" Feb 8 23:55:38.670954 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 8 23:55:38.671074 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 8 23:55:39.781724 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount869060766.mount: Deactivated successfully. Feb 8 23:55:40.270539 env[1316]: time="2024-02-08T23:55:40.270475976Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:55:40.274800 env[1316]: time="2024-02-08T23:55:40.274757213Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5a7325fa2b6e8d712e4a770abb4a5a5852e87b6de8df34552d67853e9bfb9f9f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:55:40.278660 env[1316]: time="2024-02-08T23:55:40.278575036Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:55:40.282151 env[1316]: time="2024-02-08T23:55:40.282116550Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:f6e0de32a002b910b9b2e0e8d769e2d7b05208240559c745ce4781082ab15f22,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:55:40.282616 env[1316]: time="2024-02-08T23:55:40.282586765Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.13\" returns image reference \"sha256:5a7325fa2b6e8d712e4a770abb4a5a5852e87b6de8df34552d67853e9bfb9f9f\"" Feb 8 23:55:40.292629 env[1316]: time="2024-02-08T23:55:40.292605887Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Feb 8 23:55:40.774329 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3824906456.mount: Deactivated successfully. 
Feb 8 23:55:40.839023 env[1316]: time="2024-02-08T23:55:40.838951850Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:55:40.847240 env[1316]: time="2024-02-08T23:55:40.847189315Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:55:40.851332 env[1316]: time="2024-02-08T23:55:40.851293347Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:55:40.857548 env[1316]: time="2024-02-08T23:55:40.857514247Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:55:40.857983 env[1316]: time="2024-02-08T23:55:40.857951661Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Feb 8 23:55:40.868670 env[1316]: time="2024-02-08T23:55:40.868637305Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.6-0\"" Feb 8 23:55:41.685231 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount648158334.mount: Deactivated successfully. Feb 8 23:55:45.833636 env[1316]: time="2024-02-08T23:55:45.833572494Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.6-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:55:45.841416 env[1316]: time="2024-02-08T23:55:45.841372913Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fce326961ae2d51a5f726883fd59d2a8c2ccc3e45d3bb859882db58e422e59e7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:55:45.846406 env[1316]: time="2024-02-08T23:55:45.846374854Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.6-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:55:45.850998 env[1316]: time="2024-02-08T23:55:45.850964783Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:55:45.851656 env[1316]: time="2024-02-08T23:55:45.851623301Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.6-0\" returns image reference \"sha256:fce326961ae2d51a5f726883fd59d2a8c2ccc3e45d3bb859882db58e422e59e7\"" Feb 8 23:55:45.862000 env[1316]: time="2024-02-08T23:55:45.861967292Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.9.3\"" Feb 8 23:55:46.509744 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2611350256.mount: Deactivated successfully. 
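Each PullImage sequence above ends with containerd resolving a tag to a digest-pinned image reference. The same pull can be reproduced by hand against the socket and namespace the CRI plugin uses (a sketch; ctr flags as in containerd 1.6, and k8s.io is the namespace where the CRI plugin stores images):

import subprocess

# Pull one of the images from the log into the CRI plugin's k8s.io namespace.
subprocess.run(
    ["ctr", "--address", "/run/containerd/containerd.sock",
     "--namespace", "k8s.io",
     "images", "pull", "registry.k8s.io/pause:3.9"],
    check=True,
)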
Feb 8 23:55:47.121175 env[1316]: time="2024-02-08T23:55:47.121108066Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.9.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:55:47.127628 env[1316]: time="2024-02-08T23:55:47.127584838Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:55:47.131758 env[1316]: time="2024-02-08T23:55:47.131719949Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.9.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:55:47.136684 env[1316]: time="2024-02-08T23:55:47.136653380Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:55:47.137158 env[1316]: time="2024-02-08T23:55:47.137124193Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.9.3\" returns image reference \"sha256:5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a\"" Feb 8 23:55:48.860002 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Feb 8 23:55:48.860280 systemd[1]: Stopped kubelet.service. Feb 8 23:55:48.862339 systemd[1]: Started kubelet.service. Feb 8 23:55:48.945490 kubelet[1937]: E0208 23:55:48.945421 1937 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set" Feb 8 23:55:48.948323 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 8 23:55:48.948499 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 8 23:55:50.041930 systemd[1]: Stopped kubelet.service. Feb 8 23:55:50.056479 systemd[1]: Reloading. Feb 8 23:55:50.142328 /usr/lib/systemd/system-generators/torcx-generator[1970]: time="2024-02-08T23:55:50Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 8 23:55:50.142365 /usr/lib/systemd/system-generators/torcx-generator[1970]: time="2024-02-08T23:55:50Z" level=info msg="torcx already run" Feb 8 23:55:50.222200 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 8 23:55:50.222220 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 8 23:55:50.240341 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 8 23:55:50.332495 systemd[1]: Started kubelet.service. Feb 8 23:55:50.382158 kubelet[2029]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. 
Feb 8 23:55:50.382158 kubelet[2029]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 8 23:55:50.382634 kubelet[2029]: I0208 23:55:50.382203 2029 server.go:198] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 8 23:55:50.383534 kubelet[2029]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 8 23:55:50.383534 kubelet[2029]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 8 23:55:50.628566 kubelet[2029]: I0208 23:55:50.628531 2029 server.go:412] "Kubelet version" kubeletVersion="v1.26.5" Feb 8 23:55:50.628566 kubelet[2029]: I0208 23:55:50.628557 2029 server.go:414] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 8 23:55:50.628865 kubelet[2029]: I0208 23:55:50.628844 2029 server.go:836] "Client rotation is on, will bootstrap in background" Feb 8 23:55:50.634470 kubelet[2029]: I0208 23:55:50.634425 2029 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 8 23:55:50.634910 kubelet[2029]: E0208 23:55:50.634887 2029 certificate_manager.go:471] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.8.17:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.8.17:6443: connect: connection refused Feb 8 23:55:50.638259 kubelet[2029]: I0208 23:55:50.638236 2029 server.go:659] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 8 23:55:50.638522 kubelet[2029]: I0208 23:55:50.638502 2029 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 8 23:55:50.638593 kubelet[2029]: I0208 23:55:50.638584 2029 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]} Feb 8 23:55:50.638719 kubelet[2029]: I0208 23:55:50.638611 2029 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Feb 8 23:55:50.638719 kubelet[2029]: I0208 23:55:50.638638 2029 container_manager_linux.go:308] "Creating device plugin manager" Feb 8 23:55:50.638801 kubelet[2029]: I0208 23:55:50.638733 2029 state_mem.go:36] "Initialized new in-memory state store" Feb 8 23:55:50.641645 kubelet[2029]: I0208 23:55:50.641626 2029 kubelet.go:398] "Attempting to sync node with API server" Feb 8 23:55:50.641734 kubelet[2029]: I0208 23:55:50.641656 2029 kubelet.go:286] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 8 23:55:50.641734 kubelet[2029]: I0208 23:55:50.641690 2029 kubelet.go:297] "Adding apiserver pod source" Feb 8 23:55:50.641734 kubelet[2029]: I0208 23:55:50.641709 2029 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 8 23:55:50.642511 kubelet[2029]: I0208 23:55:50.642492 2029 kuberuntime_manager.go:244] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 8 23:55:50.642810 kubelet[2029]: W0208 23:55:50.642791 2029 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Feb 8 23:55:50.643262 kubelet[2029]: I0208 23:55:50.643241 2029 server.go:1186] "Started kubelet" Feb 8 23:55:50.643405 kubelet[2029]: W0208 23:55:50.643365 2029 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.200.8.17:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.2-a-b1d3c6d57d&limit=500&resourceVersion=0": dial tcp 10.200.8.17:6443: connect: connection refused Feb 8 23:55:50.643485 kubelet[2029]: E0208 23:55:50.643421 2029 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.8.17:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.2-a-b1d3c6d57d&limit=500&resourceVersion=0": dial tcp 10.200.8.17:6443: connect: connection refused Feb 8 23:55:50.647357 kubelet[2029]: W0208 23:55:50.647326 2029 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.200.8.17:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.17:6443: connect: connection refused Feb 8 23:55:50.647469 kubelet[2029]: E0208 23:55:50.647459 2029 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.8.17:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.17:6443: connect: connection refused Feb 8 23:55:50.647669 kubelet[2029]: E0208 23:55:50.647584 2029 event.go:276] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-a-b1d3c6d57d.17b2087f159b6335", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-a-b1d3c6d57d", UID:"ci-3510.3.2-a-b1d3c6d57d", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-a-b1d3c6d57d"}, FirstTimestamp:time.Date(2024, time.February, 8, 23, 55, 50, 643213109, time.Local), LastTimestamp:time.Date(2024, time.February, 8, 23, 55, 50, 643213109, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Post "https://10.200.8.17:6443/api/v1/namespaces/default/events": dial tcp 10.200.8.17:6443: connect: connection refused'(may retry after sleeping) Feb 8 23:55:50.648142 kubelet[2029]: I0208 23:55:50.648129 2029 server.go:161] "Starting to listen" address="0.0.0.0" port=10250 Feb 8 23:55:50.648789 kubelet[2029]: I0208 23:55:50.648775 2029 server.go:451] "Adding debug handlers to kubelet server" Feb 8 23:55:50.649936 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). 
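Every reflector failure in this stretch is the same HTTP call bouncing off an API server that has not started yet. A hedged client-go sketch of the Node list the kubelet keeps retrying (GET /api/v1/nodes?fieldSelector=metadata.name%3D...&limit=500&resourceVersion=0 in the entries above); the kubeconfig path here is an assumption, since the kubelet bootstraps its own credentials.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Illustrative kubeconfig path, not necessarily what this node uses.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/kubelet.conf")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Same shape as the failing call in the log.
	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{
		FieldSelector:   "metadata.name=ci-3510.3.2-a-b1d3c6d57d",
		Limit:           500,
		ResourceVersion: "0",
	})
	if err != nil {
		// While the API server is down this surfaces as
		// "connect: connection refused", and the reflector logs it and retries.
		panic(err)
	}
	fmt.Printf("listed %d node(s)\n", len(nodes.Items))
}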
Feb 8 23:55:50.650423 kubelet[2029]: E0208 23:55:50.650407 2029 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 8 23:55:50.650556 kubelet[2029]: E0208 23:55:50.650545 2029 kubelet.go:1386] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 8 23:55:50.650651 kubelet[2029]: I0208 23:55:50.650437 2029 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 8 23:55:50.654567 kubelet[2029]: E0208 23:55:50.654487 2029 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510.3.2-a-b1d3c6d57d\" not found" Feb 8 23:55:50.654907 kubelet[2029]: I0208 23:55:50.654874 2029 volume_manager.go:293] "Starting Kubelet Volume Manager" Feb 8 23:55:50.654983 kubelet[2029]: I0208 23:55:50.654970 2029 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 8 23:55:50.655294 kubelet[2029]: W0208 23:55:50.655253 2029 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.200.8.17:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.17:6443: connect: connection refused Feb 8 23:55:50.655373 kubelet[2029]: E0208 23:55:50.655302 2029 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.8.17:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.17:6443: connect: connection refused Feb 8 23:55:50.655373 kubelet[2029]: E0208 23:55:50.655360 2029 controller.go:146] failed to ensure lease exists, will retry in 200ms, error: Get "https://10.200.8.17:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-a-b1d3c6d57d?timeout=10s": dial tcp 10.200.8.17:6443: connect: connection refused Feb 8 23:55:50.745087 kubelet[2029]: I0208 23:55:50.745057 2029 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4 Feb 8 23:55:50.770043 kubelet[2029]: I0208 23:55:50.770013 2029 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-b1d3c6d57d" Feb 8 23:55:50.770593 kubelet[2029]: E0208 23:55:50.770568 2029 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.200.8.17:6443/api/v1/nodes\": dial tcp 10.200.8.17:6443: connect: connection refused" node="ci-3510.3.2-a-b1d3c6d57d" Feb 8 23:55:50.770932 kubelet[2029]: I0208 23:55:50.770914 2029 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 8 23:55:50.770932 kubelet[2029]: I0208 23:55:50.770930 2029 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 8 23:55:50.771054 kubelet[2029]: I0208 23:55:50.770949 2029 state_mem.go:36] "Initialized new in-memory state store" Feb 8 23:55:50.775940 kubelet[2029]: I0208 23:55:50.775916 2029 policy_none.go:49] "None policy: Start" Feb 8 23:55:50.776492 kubelet[2029]: I0208 23:55:50.776473 2029 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 8 23:55:50.776580 kubelet[2029]: I0208 23:55:50.776497 2029 state_mem.go:35] "Initializing new in-memory state store" Feb 8 23:55:50.786346 systemd[1]: Created slice kubepods.slice. Feb 8 23:55:50.787682 kubelet[2029]: I0208 23:55:50.787659 2029 kubelet_network_linux.go:63] "Initialized iptables rules." 
protocol=IPv6 Feb 8 23:55:50.787682 kubelet[2029]: I0208 23:55:50.787680 2029 status_manager.go:176] "Starting to sync pod status with apiserver" Feb 8 23:55:50.787803 kubelet[2029]: I0208 23:55:50.787702 2029 kubelet.go:2113] "Starting kubelet main sync loop" Feb 8 23:55:50.787803 kubelet[2029]: E0208 23:55:50.787750 2029 kubelet.go:2137] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 8 23:55:50.791691 kubelet[2029]: W0208 23:55:50.791391 2029 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.200.8.17:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.17:6443: connect: connection refused Feb 8 23:55:50.791691 kubelet[2029]: E0208 23:55:50.791555 2029 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.8.17:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.17:6443: connect: connection refused Feb 8 23:55:50.795063 systemd[1]: Created slice kubepods-burstable.slice. Feb 8 23:55:50.797960 systemd[1]: Created slice kubepods-besteffort.slice. Feb 8 23:55:50.805056 kubelet[2029]: I0208 23:55:50.805041 2029 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 8 23:55:50.806278 kubelet[2029]: I0208 23:55:50.805710 2029 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 8 23:55:50.806756 kubelet[2029]: E0208 23:55:50.806741 2029 eviction_manager.go:261] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3510.3.2-a-b1d3c6d57d\" not found" Feb 8 23:55:50.856681 kubelet[2029]: E0208 23:55:50.856620 2029 controller.go:146] failed to ensure lease exists, will retry in 400ms, error: Get "https://10.200.8.17:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-a-b1d3c6d57d?timeout=10s": dial tcp 10.200.8.17:6443: connect: connection refused Feb 8 23:55:50.888053 kubelet[2029]: I0208 23:55:50.887879 2029 topology_manager.go:210] "Topology Admit Handler" Feb 8 23:55:50.891369 kubelet[2029]: I0208 23:55:50.891340 2029 topology_manager.go:210] "Topology Admit Handler" Feb 8 23:55:50.892870 kubelet[2029]: I0208 23:55:50.892853 2029 topology_manager.go:210] "Topology Admit Handler" Feb 8 23:55:50.894800 kubelet[2029]: I0208 23:55:50.894222 2029 status_manager.go:698] "Failed to get status for pod" podUID=5de01795e6477bdb56fee02dcac4360b pod="kube-system/kube-scheduler-ci-3510.3.2-a-b1d3c6d57d" err="Get \"https://10.200.8.17:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-ci-3510.3.2-a-b1d3c6d57d\": dial tcp 10.200.8.17:6443: connect: connection refused" Feb 8 23:55:50.896481 kubelet[2029]: I0208 23:55:50.896039 2029 status_manager.go:698] "Failed to get status for pod" podUID=81e6b73860727730e503dc2d28c14641 pod="kube-system/kube-apiserver-ci-3510.3.2-a-b1d3c6d57d" err="Get \"https://10.200.8.17:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-ci-3510.3.2-a-b1d3c6d57d\": dial tcp 10.200.8.17:6443: connect: connection refused" Feb 8 23:55:50.897372 kubelet[2029]: I0208 23:55:50.897357 2029 status_manager.go:698] "Failed to get status for pod" podUID=ed2f84c1b0aaa5717f3b41469f4b776f pod="kube-system/kube-controller-manager-ci-3510.3.2-a-b1d3c6d57d" err="Get 
\"https://10.200.8.17:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ci-3510.3.2-a-b1d3c6d57d\": dial tcp 10.200.8.17:6443: connect: connection refused" Feb 8 23:55:50.899794 systemd[1]: Created slice kubepods-burstable-pod5de01795e6477bdb56fee02dcac4360b.slice. Feb 8 23:55:50.907235 systemd[1]: Created slice kubepods-burstable-pod81e6b73860727730e503dc2d28c14641.slice. Feb 8 23:55:50.917579 systemd[1]: Created slice kubepods-burstable-poded2f84c1b0aaa5717f3b41469f4b776f.slice. Feb 8 23:55:50.972885 kubelet[2029]: I0208 23:55:50.972857 2029 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-b1d3c6d57d" Feb 8 23:55:50.973282 kubelet[2029]: E0208 23:55:50.973249 2029 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.200.8.17:6443/api/v1/nodes\": dial tcp 10.200.8.17:6443: connect: connection refused" node="ci-3510.3.2-a-b1d3c6d57d" Feb 8 23:55:51.056959 kubelet[2029]: I0208 23:55:51.056911 2029 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ed2f84c1b0aaa5717f3b41469f4b776f-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.2-a-b1d3c6d57d\" (UID: \"ed2f84c1b0aaa5717f3b41469f4b776f\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-b1d3c6d57d" Feb 8 23:55:51.057251 kubelet[2029]: I0208 23:55:51.057217 2029 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/81e6b73860727730e503dc2d28c14641-ca-certs\") pod \"kube-apiserver-ci-3510.3.2-a-b1d3c6d57d\" (UID: \"81e6b73860727730e503dc2d28c14641\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-b1d3c6d57d" Feb 8 23:55:51.057335 kubelet[2029]: I0208 23:55:51.057264 2029 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/81e6b73860727730e503dc2d28c14641-k8s-certs\") pod \"kube-apiserver-ci-3510.3.2-a-b1d3c6d57d\" (UID: \"81e6b73860727730e503dc2d28c14641\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-b1d3c6d57d" Feb 8 23:55:51.057335 kubelet[2029]: I0208 23:55:51.057307 2029 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ed2f84c1b0aaa5717f3b41469f4b776f-ca-certs\") pod \"kube-controller-manager-ci-3510.3.2-a-b1d3c6d57d\" (UID: \"ed2f84c1b0aaa5717f3b41469f4b776f\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-b1d3c6d57d" Feb 8 23:55:51.057446 kubelet[2029]: I0208 23:55:51.057345 2029 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ed2f84c1b0aaa5717f3b41469f4b776f-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.2-a-b1d3c6d57d\" (UID: \"ed2f84c1b0aaa5717f3b41469f4b776f\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-b1d3c6d57d" Feb 8 23:55:51.057446 kubelet[2029]: I0208 23:55:51.057387 2029 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5de01795e6477bdb56fee02dcac4360b-kubeconfig\") pod \"kube-scheduler-ci-3510.3.2-a-b1d3c6d57d\" (UID: \"5de01795e6477bdb56fee02dcac4360b\") " pod="kube-system/kube-scheduler-ci-3510.3.2-a-b1d3c6d57d" Feb 8 23:55:51.057446 kubelet[2029]: I0208 23:55:51.057432 2029 reconciler_common.go:253] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/81e6b73860727730e503dc2d28c14641-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.2-a-b1d3c6d57d\" (UID: \"81e6b73860727730e503dc2d28c14641\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-b1d3c6d57d" Feb 8 23:55:51.057639 kubelet[2029]: I0208 23:55:51.057498 2029 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ed2f84c1b0aaa5717f3b41469f4b776f-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.2-a-b1d3c6d57d\" (UID: \"ed2f84c1b0aaa5717f3b41469f4b776f\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-b1d3c6d57d" Feb 8 23:55:51.057639 kubelet[2029]: I0208 23:55:51.057543 2029 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ed2f84c1b0aaa5717f3b41469f4b776f-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.2-a-b1d3c6d57d\" (UID: \"ed2f84c1b0aaa5717f3b41469f4b776f\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-b1d3c6d57d" Feb 8 23:55:51.207551 env[1316]: time="2024-02-08T23:55:51.206915587Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.2-a-b1d3c6d57d,Uid:5de01795e6477bdb56fee02dcac4360b,Namespace:kube-system,Attempt:0,}" Feb 8 23:55:51.210634 env[1316]: time="2024-02-08T23:55:51.210591476Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.2-a-b1d3c6d57d,Uid:81e6b73860727730e503dc2d28c14641,Namespace:kube-system,Attempt:0,}" Feb 8 23:55:51.220382 env[1316]: time="2024-02-08T23:55:51.220350610Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.2-a-b1d3c6d57d,Uid:ed2f84c1b0aaa5717f3b41469f4b776f,Namespace:kube-system,Attempt:0,}" Feb 8 23:55:51.257006 kubelet[2029]: E0208 23:55:51.256975 2029 controller.go:146] failed to ensure lease exists, will retry in 800ms, error: Get "https://10.200.8.17:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-a-b1d3c6d57d?timeout=10s": dial tcp 10.200.8.17:6443: connect: connection refused Feb 8 23:55:51.375742 kubelet[2029]: I0208 23:55:51.375703 2029 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-b1d3c6d57d" Feb 8 23:55:51.376126 kubelet[2029]: E0208 23:55:51.376097 2029 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.200.8.17:6443/api/v1/nodes\": dial tcp 10.200.8.17:6443: connect: connection refused" node="ci-3510.3.2-a-b1d3c6d57d" Feb 8 23:55:51.665531 kubelet[2029]: W0208 23:55:51.665210 2029 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.200.8.17:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.2-a-b1d3c6d57d&limit=500&resourceVersion=0": dial tcp 10.200.8.17:6443: connect: connection refused Feb 8 23:55:51.665531 kubelet[2029]: E0208 23:55:51.665432 2029 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.8.17:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.2-a-b1d3c6d57d&limit=500&resourceVersion=0": dial tcp 10.200.8.17:6443: connect: connection refused Feb 8 23:55:51.812112 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2893304842.mount: 
Deactivated successfully. Feb 8 23:55:51.847109 env[1316]: time="2024-02-08T23:55:51.847056565Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:55:51.856144 env[1316]: time="2024-02-08T23:55:51.856098682Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:55:51.872086 kubelet[2029]: W0208 23:55:51.872032 2029 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.200.8.17:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.17:6443: connect: connection refused Feb 8 23:55:51.872086 kubelet[2029]: E0208 23:55:51.872089 2029 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.8.17:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.17:6443: connect: connection refused Feb 8 23:55:51.877842 env[1316]: time="2024-02-08T23:55:51.877794403Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:55:51.881099 env[1316]: time="2024-02-08T23:55:51.881069082Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:55:51.887805 env[1316]: time="2024-02-08T23:55:51.887776443Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:55:51.894674 env[1316]: time="2024-02-08T23:55:51.894645008Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:55:51.899361 env[1316]: time="2024-02-08T23:55:51.899329121Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:55:51.902598 env[1316]: time="2024-02-08T23:55:51.902565198Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:55:51.907095 env[1316]: time="2024-02-08T23:55:51.907061906Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:55:51.910127 env[1316]: time="2024-02-08T23:55:51.910093579Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:55:51.915034 env[1316]: time="2024-02-08T23:55:51.914995997Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:55:51.918084 env[1316]: time="2024-02-08T23:55:51.917989369Z" level=info 
msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:55:52.004084 kubelet[2029]: W0208 23:55:52.004019 2029 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.200.8.17:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.17:6443: connect: connection refused Feb 8 23:55:52.004259 kubelet[2029]: E0208 23:55:52.004096 2029 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.8.17:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.17:6443: connect: connection refused Feb 8 23:55:52.009764 env[1316]: time="2024-02-08T23:55:52.009665466Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 8 23:55:52.009764 env[1316]: time="2024-02-08T23:55:52.009738367Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 8 23:55:52.014168 env[1316]: time="2024-02-08T23:55:52.009974773Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 8 23:55:52.015517 env[1316]: time="2024-02-08T23:55:52.015444501Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0dc335451643f45a25d817a2dd52bb61427ba5aa7a00bc4b1c262431f93e1b1f pid=2105 runtime=io.containerd.runc.v2 Feb 8 23:55:52.018375 env[1316]: time="2024-02-08T23:55:52.018293468Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 8 23:55:52.018375 env[1316]: time="2024-02-08T23:55:52.018334369Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 8 23:55:52.018375 env[1316]: time="2024-02-08T23:55:52.018350769Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 8 23:55:52.019528 env[1316]: time="2024-02-08T23:55:52.018788479Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/dbe8a531484f89c931f0712cee798a49cc039e11be4e518fe6d03f79c85365c0 pid=2117 runtime=io.containerd.runc.v2 Feb 8 23:55:52.037399 systemd[1]: Started cri-containerd-dbe8a531484f89c931f0712cee798a49cc039e11be4e518fe6d03f79c85365c0.scope. Feb 8 23:55:52.056422 systemd[1]: Started cri-containerd-0dc335451643f45a25d817a2dd52bb61427ba5aa7a00bc4b1c262431f93e1b1f.scope. Feb 8 23:55:52.058010 kubelet[2029]: E0208 23:55:52.057963 2029 controller.go:146] failed to ensure lease exists, will retry in 1.6s, error: Get "https://10.200.8.17:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-a-b1d3c6d57d?timeout=10s": dial tcp 10.200.8.17:6443: connect: connection refused Feb 8 23:55:52.061087 env[1316]: time="2024-02-08T23:55:52.060991067Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 8 23:55:52.061306 env[1316]: time="2024-02-08T23:55:52.061269274Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 8 23:55:52.061427 env[1316]: time="2024-02-08T23:55:52.061396077Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 8 23:55:52.062187 env[1316]: time="2024-02-08T23:55:52.062093793Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/1ce9879dd0f9a0c576f7b0b42f7f16d12602ad79104a8f26d80b4bf4b938e776 pid=2146 runtime=io.containerd.runc.v2 Feb 8 23:55:52.082782 systemd[1]: Started cri-containerd-1ce9879dd0f9a0c576f7b0b42f7f16d12602ad79104a8f26d80b4bf4b938e776.scope. Feb 8 23:55:52.134347 env[1316]: time="2024-02-08T23:55:52.133568367Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.2-a-b1d3c6d57d,Uid:ed2f84c1b0aaa5717f3b41469f4b776f,Namespace:kube-system,Attempt:0,} returns sandbox id \"dbe8a531484f89c931f0712cee798a49cc039e11be4e518fe6d03f79c85365c0\"" Feb 8 23:55:52.134958 kubelet[2029]: W0208 23:55:52.134923 2029 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.200.8.17:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.17:6443: connect: connection refused Feb 8 23:55:52.135082 kubelet[2029]: E0208 23:55:52.134978 2029 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.8.17:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.17:6443: connect: connection refused Feb 8 23:55:52.138424 env[1316]: time="2024-02-08T23:55:52.138386080Z" level=info msg="CreateContainer within sandbox \"dbe8a531484f89c931f0712cee798a49cc039e11be4e518fe6d03f79c85365c0\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 8 23:55:52.158743 env[1316]: time="2024-02-08T23:55:52.158696955Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.2-a-b1d3c6d57d,Uid:81e6b73860727730e503dc2d28c14641,Namespace:kube-system,Attempt:0,} returns sandbox id \"1ce9879dd0f9a0c576f7b0b42f7f16d12602ad79104a8f26d80b4bf4b938e776\"" Feb 8 23:55:52.161759 env[1316]: time="2024-02-08T23:55:52.161719026Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.2-a-b1d3c6d57d,Uid:5de01795e6477bdb56fee02dcac4360b,Namespace:kube-system,Attempt:0,} returns sandbox id \"0dc335451643f45a25d817a2dd52bb61427ba5aa7a00bc4b1c262431f93e1b1f\"" Feb 8 23:55:52.162195 env[1316]: time="2024-02-08T23:55:52.162165036Z" level=info msg="CreateContainer within sandbox \"1ce9879dd0f9a0c576f7b0b42f7f16d12602ad79104a8f26d80b4bf4b938e776\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 8 23:55:52.164355 env[1316]: time="2024-02-08T23:55:52.164320687Z" level=info msg="CreateContainer within sandbox \"0dc335451643f45a25d817a2dd52bb61427ba5aa7a00bc4b1c262431f93e1b1f\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 8 23:55:52.178257 kubelet[2029]: I0208 23:55:52.177843 2029 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-b1d3c6d57d" Feb 8 23:55:52.178257 kubelet[2029]: E0208 23:55:52.178160 2029 kubelet_node_status.go:92] 
"Unable to register node with API server" err="Post \"https://10.200.8.17:6443/api/v1/nodes\": dial tcp 10.200.8.17:6443: connect: connection refused" node="ci-3510.3.2-a-b1d3c6d57d" Feb 8 23:55:52.184924 env[1316]: time="2024-02-08T23:55:52.184885368Z" level=info msg="CreateContainer within sandbox \"dbe8a531484f89c931f0712cee798a49cc039e11be4e518fe6d03f79c85365c0\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"115d7c98e0d7d5d8eea88110df62001f3672a2261957d9f83aca6ef9d0ed8b08\"" Feb 8 23:55:52.185503 env[1316]: time="2024-02-08T23:55:52.185478682Z" level=info msg="StartContainer for \"115d7c98e0d7d5d8eea88110df62001f3672a2261957d9f83aca6ef9d0ed8b08\"" Feb 8 23:55:52.212209 systemd[1]: Started cri-containerd-115d7c98e0d7d5d8eea88110df62001f3672a2261957d9f83aca6ef9d0ed8b08.scope. Feb 8 23:55:52.242667 kubelet[2029]: E0208 23:55:52.242524 2029 event.go:276] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-a-b1d3c6d57d.17b2087f159b6335", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-a-b1d3c6d57d", UID:"ci-3510.3.2-a-b1d3c6d57d", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-a-b1d3c6d57d"}, FirstTimestamp:time.Date(2024, time.February, 8, 23, 55, 50, 643213109, time.Local), LastTimestamp:time.Date(2024, time.February, 8, 23, 55, 50, 643213109, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Post "https://10.200.8.17:6443/api/v1/namespaces/default/events": dial tcp 10.200.8.17:6443: connect: connection refused'(may retry after sleeping) Feb 8 23:55:52.683556 env[1316]: time="2024-02-08T23:55:52.683506344Z" level=info msg="StartContainer for \"115d7c98e0d7d5d8eea88110df62001f3672a2261957d9f83aca6ef9d0ed8b08\" returns successfully" Feb 8 23:55:52.700283 env[1316]: time="2024-02-08T23:55:52.700231235Z" level=info msg="CreateContainer within sandbox \"1ce9879dd0f9a0c576f7b0b42f7f16d12602ad79104a8f26d80b4bf4b938e776\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"daa534be19f606f4b2c7cf68196817d8e36e3fefce3ccfb357fde621d6cada87\"" Feb 8 23:55:52.700906 env[1316]: time="2024-02-08T23:55:52.700878050Z" level=info msg="StartContainer for \"daa534be19f606f4b2c7cf68196817d8e36e3fefce3ccfb357fde621d6cada87\"" Feb 8 23:55:52.705223 env[1316]: time="2024-02-08T23:55:52.705186551Z" level=info msg="CreateContainer within sandbox \"0dc335451643f45a25d817a2dd52bb61427ba5aa7a00bc4b1c262431f93e1b1f\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"be8cb215f9bf840f061e4778ef6dd8a20cd9da6adadde6eff7948615460ca834\"" Feb 8 23:55:52.705828 env[1316]: time="2024-02-08T23:55:52.705803166Z" level=info msg="StartContainer for \"be8cb215f9bf840f061e4778ef6dd8a20cd9da6adadde6eff7948615460ca834\"" Feb 8 23:55:52.730852 systemd[1]: 
Started cri-containerd-be8cb215f9bf840f061e4778ef6dd8a20cd9da6adadde6eff7948615460ca834.scope. Feb 8 23:55:52.758511 systemd[1]: Started cri-containerd-daa534be19f606f4b2c7cf68196817d8e36e3fefce3ccfb357fde621d6cada87.scope. Feb 8 23:55:52.774775 kubelet[2029]: E0208 23:55:52.774718 2029 certificate_manager.go:471] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.8.17:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.8.17:6443: connect: connection refused Feb 8 23:55:52.817965 kubelet[2029]: I0208 23:55:52.817809 2029 status_manager.go:698] "Failed to get status for pod" podUID=ed2f84c1b0aaa5717f3b41469f4b776f pod="kube-system/kube-controller-manager-ci-3510.3.2-a-b1d3c6d57d" err="Get \"https://10.200.8.17:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ci-3510.3.2-a-b1d3c6d57d\": dial tcp 10.200.8.17:6443: connect: connection refused" Feb 8 23:55:52.850888 env[1316]: time="2024-02-08T23:55:52.850835762Z" level=info msg="StartContainer for \"daa534be19f606f4b2c7cf68196817d8e36e3fefce3ccfb357fde621d6cada87\" returns successfully" Feb 8 23:55:52.889894 env[1316]: time="2024-02-08T23:55:52.889835475Z" level=info msg="StartContainer for \"be8cb215f9bf840f061e4778ef6dd8a20cd9da6adadde6eff7948615460ca834\" returns successfully" Feb 8 23:55:53.780212 kubelet[2029]: I0208 23:55:53.780158 2029 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-b1d3c6d57d" Feb 8 23:55:54.846459 kubelet[2029]: E0208 23:55:54.846404 2029 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-3510.3.2-a-b1d3c6d57d\" not found" node="ci-3510.3.2-a-b1d3c6d57d" Feb 8 23:55:54.873706 kubelet[2029]: I0208 23:55:54.873666 2029 kubelet_node_status.go:73] "Successfully registered node" node="ci-3510.3.2-a-b1d3c6d57d" Feb 8 23:55:55.645062 kubelet[2029]: I0208 23:55:55.644999 2029 apiserver.go:52] "Watching apiserver" Feb 8 23:55:55.655224 kubelet[2029]: I0208 23:55:55.655196 2029 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 8 23:55:55.686418 kubelet[2029]: I0208 23:55:55.686381 2029 reconciler.go:41] "Reconciler: start to sync state" Feb 8 23:55:58.450984 systemd[1]: Reloading. Feb 8 23:55:58.517714 /usr/lib/systemd/system-generators/torcx-generator[2366]: time="2024-02-08T23:55:58Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 8 23:55:58.518200 /usr/lib/systemd/system-generators/torcx-generator[2366]: time="2024-02-08T23:55:58Z" level=info msg="torcx already run" Feb 8 23:55:58.626168 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 8 23:55:58.626188 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 8 23:55:58.644721 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
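Before the kubelet restart below, the lease controller's retry delays doubled from 200ms to 400ms, 800ms, and then 1.6s while 10.200.8.17:6443 refused connections. A generic sketch of that capped doubling backoff; retryWithBackoff is a hypothetical helper illustrating the pattern, not the kubelet's own implementation.

package main

import (
	"errors"
	"fmt"
	"time"
)

// retryWithBackoff runs op until it succeeds, doubling the delay after each
// failure up to a cap, mirroring the 200ms -> 400ms -> 800ms -> 1.6s
// progression visible in the lease-controller entries above.
func retryWithBackoff(op func() error, initial, cap time.Duration, attempts int) error {
	delay := initial
	var err error
	for i := 0; i < attempts; i++ {
		if err = op(); err == nil {
			return nil
		}
		fmt.Printf("failed to ensure lease exists, will retry in %v, error: %v\n", delay, err)
		time.Sleep(delay)
		delay *= 2
		if delay > cap {
			delay = cap
		}
	}
	return err
}

func main() {
	calls := 0
	_ = retryWithBackoff(func() error {
		calls++
		if calls < 5 { // simulate the API server coming up on the fifth try
			return errors.New("dial tcp 10.200.8.17:6443: connect: connection refused")
		}
		return nil
	}, 200*time.Millisecond, 7*time.Second, 10)
}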
Feb 8 23:55:58.753335 kubelet[2029]: I0208 23:55:58.753230 2029 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 8 23:55:58.753650 systemd[1]: Stopping kubelet.service... Feb 8 23:55:58.767928 systemd[1]: kubelet.service: Deactivated successfully. Feb 8 23:55:58.768154 systemd[1]: Stopped kubelet.service. Feb 8 23:55:58.770212 systemd[1]: Started kubelet.service. Feb 8 23:55:58.849206 kubelet[2425]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 8 23:55:58.849206 kubelet[2425]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 8 23:55:58.849673 kubelet[2425]: I0208 23:55:58.849293 2425 server.go:198] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 8 23:55:58.852477 kubelet[2425]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 8 23:55:58.852477 kubelet[2425]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 8 23:55:58.856408 kubelet[2425]: I0208 23:55:58.856391 2425 server.go:412] "Kubelet version" kubeletVersion="v1.26.5" Feb 8 23:55:58.856537 kubelet[2425]: I0208 23:55:58.856524 2425 server.go:414] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 8 23:55:58.856915 kubelet[2425]: I0208 23:55:58.856901 2425 server.go:836] "Client rotation is on, will bootstrap in background" Feb 8 23:55:58.858174 kubelet[2425]: I0208 23:55:58.858157 2425 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 8 23:55:58.859023 kubelet[2425]: I0208 23:55:58.859003 2425 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 8 23:55:58.863469 kubelet[2425]: I0208 23:55:58.863438 2425 server.go:659] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 8 23:55:58.863664 kubelet[2425]: I0208 23:55:58.863645 2425 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 8 23:55:58.863765 kubelet[2425]: I0208 23:55:58.863731 2425 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]} Feb 8 23:55:58.863890 kubelet[2425]: I0208 23:55:58.863779 2425 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Feb 8 23:55:58.863890 kubelet[2425]: I0208 23:55:58.863794 2425 container_manager_linux.go:308] "Creating device plugin manager" Feb 8 23:55:58.863890 kubelet[2425]: I0208 23:55:58.863842 2425 state_mem.go:36] "Initialized new in-memory state store" Feb 8 23:55:58.866989 kubelet[2425]: I0208 23:55:58.866972 2425 kubelet.go:398] "Attempting to sync node with API server" Feb 8 23:55:58.867079 kubelet[2425]: I0208 23:55:58.866992 2425 kubelet.go:286] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 8 23:55:58.867079 kubelet[2425]: I0208 23:55:58.867018 2425 kubelet.go:297] "Adding apiserver pod source" Feb 8 23:55:58.867079 kubelet[2425]: I0208 23:55:58.867038 2425 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 8 23:55:58.877687 kubelet[2425]: I0208 23:55:58.874291 2425 kuberuntime_manager.go:244] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 8 23:55:58.877687 kubelet[2425]: I0208 23:55:58.874871 2425 server.go:1186] "Started kubelet" Feb 8 23:55:58.877687 kubelet[2425]: I0208 23:55:58.877104 2425 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 8 23:55:58.881633 kubelet[2425]: I0208 23:55:58.881618 2425 server.go:161] "Starting to listen" address="0.0.0.0" port=10250 Feb 8 23:55:58.886797 kubelet[2425]: I0208 23:55:58.886784 2425 server.go:451] "Adding debug handlers to kubelet server" Feb 8 23:55:58.889761 kubelet[2425]: I0208 23:55:58.889744 2425 volume_manager.go:293] "Starting Kubelet Volume Manager" Feb 8 23:55:58.897720 kubelet[2425]: I0208 23:55:58.897706 2425 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 8 23:55:58.898290 kubelet[2425]: E0208 23:55:58.898272 2425 cri_stats_provider.go:455] "Failed to get 
the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 8 23:55:58.898406 kubelet[2425]: E0208 23:55:58.898395 2425 kubelet.go:1386] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 8 23:55:58.931444 kubelet[2425]: I0208 23:55:58.931417 2425 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4 Feb 8 23:55:58.959959 kubelet[2425]: I0208 23:55:58.958988 2425 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv6 Feb 8 23:55:58.959959 kubelet[2425]: I0208 23:55:58.959023 2425 status_manager.go:176] "Starting to sync pod status with apiserver" Feb 8 23:55:58.959959 kubelet[2425]: I0208 23:55:58.959042 2425 kubelet.go:2113] "Starting kubelet main sync loop" Feb 8 23:55:58.959959 kubelet[2425]: E0208 23:55:58.959109 2425 kubelet.go:2137] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 8 23:55:58.972757 kubelet[2425]: I0208 23:55:58.972736 2425 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 8 23:55:58.972757 kubelet[2425]: I0208 23:55:58.972756 2425 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 8 23:55:58.972943 kubelet[2425]: I0208 23:55:58.972776 2425 state_mem.go:36] "Initialized new in-memory state store" Feb 8 23:55:58.972943 kubelet[2425]: I0208 23:55:58.972938 2425 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 8 23:55:58.973027 kubelet[2425]: I0208 23:55:58.972955 2425 state_mem.go:96] "Updated CPUSet assignments" assignments=map[] Feb 8 23:55:58.973027 kubelet[2425]: I0208 23:55:58.972964 2425 policy_none.go:49] "None policy: Start" Feb 8 23:55:58.973648 kubelet[2425]: I0208 23:55:58.973625 2425 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 8 23:55:58.973648 kubelet[2425]: I0208 23:55:58.973651 2425 state_mem.go:35] "Initializing new in-memory state store" Feb 8 23:55:58.973810 kubelet[2425]: I0208 23:55:58.973799 2425 state_mem.go:75] "Updated machine memory state" Feb 8 23:55:58.977405 kubelet[2425]: I0208 23:55:58.977382 2425 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 8 23:55:58.979767 kubelet[2425]: I0208 23:55:58.979650 2425 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 8 23:55:58.992825 kubelet[2425]: I0208 23:55:58.992810 2425 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-b1d3c6d57d" Feb 8 23:55:59.004839 kubelet[2425]: I0208 23:55:59.004779 2425 kubelet_node_status.go:108] "Node was previously registered" node="ci-3510.3.2-a-b1d3c6d57d" Feb 8 23:55:59.004999 kubelet[2425]: I0208 23:55:59.004988 2425 kubelet_node_status.go:73] "Successfully registered node" node="ci-3510.3.2-a-b1d3c6d57d" Feb 8 23:55:59.059609 kubelet[2425]: I0208 23:55:59.059576 2425 topology_manager.go:210] "Topology Admit Handler" Feb 8 23:55:59.059898 kubelet[2425]: I0208 23:55:59.059880 2425 topology_manager.go:210] "Topology Admit Handler" Feb 8 23:55:59.060665 kubelet[2425]: I0208 23:55:59.060644 2425 topology_manager.go:210] "Topology Admit Handler" Feb 8 23:55:59.075294 kubelet[2425]: E0208 23:55:59.075267 2425 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3510.3.2-a-b1d3c6d57d\" already exists" pod="kube-system/kube-apiserver-ci-3510.3.2-a-b1d3c6d57d" 
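The "Failed creating a mirror pod ... already exists" errors here are expected after a kubelet restart: the static control-plane pods are read from the manifest directory noted earlier ("Adding static pod path" path="/etc/kubernetes/manifests"), and the kubelet posts read-only mirror copies of them to the API server, which still holds the objects created by the previous kubelet process (pid 2029). A small sketch of enumerating that directory the way an operator might when debugging; the extension filter is an assumption.

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	dir := "/etc/kubernetes/manifests" // from "Adding static pod path" above
	entries, err := os.ReadDir(dir)
	if err != nil {
		panic(err)
	}
	for _, e := range entries {
		ext := filepath.Ext(e.Name())
		if ext == ".yaml" || ext == ".yml" || ext == ".json" {
			fmt.Println("static pod manifest:", filepath.Join(dir, e.Name()))
		}
	}
}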
Feb 8 23:55:59.098856 kubelet[2425]: I0208 23:55:59.098835 2425 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ed2f84c1b0aaa5717f3b41469f4b776f-ca-certs\") pod \"kube-controller-manager-ci-3510.3.2-a-b1d3c6d57d\" (UID: \"ed2f84c1b0aaa5717f3b41469f4b776f\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-b1d3c6d57d" Feb 8 23:55:59.098970 kubelet[2425]: I0208 23:55:59.098875 2425 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ed2f84c1b0aaa5717f3b41469f4b776f-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.2-a-b1d3c6d57d\" (UID: \"ed2f84c1b0aaa5717f3b41469f4b776f\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-b1d3c6d57d" Feb 8 23:55:59.098970 kubelet[2425]: I0208 23:55:59.098904 2425 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ed2f84c1b0aaa5717f3b41469f4b776f-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.2-a-b1d3c6d57d\" (UID: \"ed2f84c1b0aaa5717f3b41469f4b776f\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-b1d3c6d57d" Feb 8 23:55:59.098970 kubelet[2425]: I0208 23:55:59.098951 2425 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ed2f84c1b0aaa5717f3b41469f4b776f-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.2-a-b1d3c6d57d\" (UID: \"ed2f84c1b0aaa5717f3b41469f4b776f\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-b1d3c6d57d" Feb 8 23:55:59.099100 kubelet[2425]: I0208 23:55:59.098985 2425 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5de01795e6477bdb56fee02dcac4360b-kubeconfig\") pod \"kube-scheduler-ci-3510.3.2-a-b1d3c6d57d\" (UID: \"5de01795e6477bdb56fee02dcac4360b\") " pod="kube-system/kube-scheduler-ci-3510.3.2-a-b1d3c6d57d" Feb 8 23:55:59.099100 kubelet[2425]: I0208 23:55:59.099012 2425 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/81e6b73860727730e503dc2d28c14641-ca-certs\") pod \"kube-apiserver-ci-3510.3.2-a-b1d3c6d57d\" (UID: \"81e6b73860727730e503dc2d28c14641\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-b1d3c6d57d" Feb 8 23:55:59.099100 kubelet[2425]: I0208 23:55:59.099043 2425 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/81e6b73860727730e503dc2d28c14641-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.2-a-b1d3c6d57d\" (UID: \"81e6b73860727730e503dc2d28c14641\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-b1d3c6d57d" Feb 8 23:55:59.099100 kubelet[2425]: I0208 23:55:59.099072 2425 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/81e6b73860727730e503dc2d28c14641-k8s-certs\") pod \"kube-apiserver-ci-3510.3.2-a-b1d3c6d57d\" (UID: \"81e6b73860727730e503dc2d28c14641\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-b1d3c6d57d" Feb 8 23:55:59.099253 kubelet[2425]: I0208 23:55:59.099105 2425 reconciler_common.go:253] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ed2f84c1b0aaa5717f3b41469f4b776f-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.2-a-b1d3c6d57d\" (UID: \"ed2f84c1b0aaa5717f3b41469f4b776f\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-b1d3c6d57d" Feb 8 23:55:59.760800 sudo[2476]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Feb 8 23:55:59.761075 sudo[2476]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Feb 8 23:55:59.867614 kubelet[2425]: I0208 23:55:59.867578 2425 apiserver.go:52] "Watching apiserver" Feb 8 23:55:59.898477 kubelet[2425]: I0208 23:55:59.898430 2425 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 8 23:55:59.904828 kubelet[2425]: I0208 23:55:59.904791 2425 reconciler.go:41] "Reconciler: start to sync state" Feb 8 23:56:00.076219 kubelet[2425]: E0208 23:56:00.076098 2425 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3510.3.2-a-b1d3c6d57d\" already exists" pod="kube-system/kube-apiserver-ci-3510.3.2-a-b1d3c6d57d" Feb 8 23:56:00.310647 sudo[2476]: pam_unix(sudo:session): session closed for user root Feb 8 23:56:00.474498 kubelet[2425]: E0208 23:56:00.474445 2425 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-3510.3.2-a-b1d3c6d57d\" already exists" pod="kube-system/kube-controller-manager-ci-3510.3.2-a-b1d3c6d57d" Feb 8 23:56:00.675185 kubelet[2425]: E0208 23:56:00.675145 2425 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-3510.3.2-a-b1d3c6d57d\" already exists" pod="kube-system/kube-scheduler-ci-3510.3.2-a-b1d3c6d57d" Feb 8 23:56:01.275057 kubelet[2425]: I0208 23:56:01.275019 2425 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3510.3.2-a-b1d3c6d57d" podStartSLOduration=6.274961724 pod.CreationTimestamp="2024-02-08 23:55:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-08 23:56:00.888700536 +0000 UTC m=+2.110354287" watchObservedRunningTime="2024-02-08 23:56:01.274961724 +0000 UTC m=+2.496615575" Feb 8 23:56:01.677040 kubelet[2425]: I0208 23:56:01.677002 2425 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3510.3.2-a-b1d3c6d57d" podStartSLOduration=2.676955455 pod.CreationTimestamp="2024-02-08 23:55:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-08 23:56:01.673677694 +0000 UTC m=+2.895331545" watchObservedRunningTime="2024-02-08 23:56:01.676955455 +0000 UTC m=+2.898609206" Feb 8 23:56:01.677381 kubelet[2425]: I0208 23:56:01.677356 2425 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3510.3.2-a-b1d3c6d57d" podStartSLOduration=2.677324562 pod.CreationTimestamp="2024-02-08 23:55:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-08 23:56:01.27691806 +0000 UTC m=+2.498571811" watchObservedRunningTime="2024-02-08 23:56:01.677324562 +0000 UTC m=+2.898978313" Feb 8 23:56:02.138410 sudo[1620]: pam_unix(sudo:session): session closed for user root Feb 8 23:56:02.267305 sshd[1617]: 
pam_unix(sshd:session): session closed for user core
Feb 8 23:56:02.270853 systemd[1]: sshd@4-10.200.8.17:22-10.200.12.6:59920.service: Deactivated successfully.
Feb 8 23:56:02.272058 systemd[1]: session-7.scope: Deactivated successfully.
Feb 8 23:56:02.272329 systemd[1]: session-7.scope: Consumed 4.053s CPU time.
Feb 8 23:56:02.273064 systemd-logind[1304]: Session 7 logged out. Waiting for processes to exit.
Feb 8 23:56:02.274001 systemd-logind[1304]: Removed session 7.
Feb 8 23:56:11.997126 kubelet[2425]: I0208 23:56:11.997093 2425 kuberuntime_manager.go:1114] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Feb 8 23:56:11.997912 env[1316]: time="2024-02-08T23:56:11.997870655Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Feb 8 23:56:11.998280 kubelet[2425]: I0208 23:56:11.998229 2425 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Feb 8 23:56:12.426877 kubelet[2425]: I0208 23:56:12.426837 2425 topology_manager.go:210] "Topology Admit Handler"
Feb 8 23:56:12.433368 systemd[1]: Created slice kubepods-besteffort-pod7a9e0799_48c4_4773_830f_f7fe37b8ac8d.slice.
Feb 8 23:56:12.462893 kubelet[2425]: I0208 23:56:12.462852 2425 topology_manager.go:210] "Topology Admit Handler"
Feb 8 23:56:12.469289 systemd[1]: Created slice kubepods-burstable-pod9fdc9f98_e24b_4dd4_8931_d49b429f16cf.slice.
Feb 8 23:56:12.482771 kubelet[2425]: I0208 23:56:12.482738 2425 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9fdc9f98-e24b-4dd4-8931-d49b429f16cf-clustermesh-secrets\") pod \"cilium-zz22j\" (UID: \"9fdc9f98-e24b-4dd4-8931-d49b429f16cf\") " pod="kube-system/cilium-zz22j"
Feb 8 23:56:12.482918 kubelet[2425]: I0208 23:56:12.482798 2425 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9fdc9f98-e24b-4dd4-8931-d49b429f16cf-host-proc-sys-kernel\") pod \"cilium-zz22j\" (UID: \"9fdc9f98-e24b-4dd4-8931-d49b429f16cf\") " pod="kube-system/cilium-zz22j"
Feb 8 23:56:12.482918 kubelet[2425]: I0208 23:56:12.482830 2425 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7a9e0799-48c4-4773-830f-f7fe37b8ac8d-xtables-lock\") pod \"kube-proxy-qmwb6\" (UID: \"7a9e0799-48c4-4773-830f-f7fe37b8ac8d\") " pod="kube-system/kube-proxy-qmwb6"
Feb 8 23:56:12.482918 kubelet[2425]: I0208 23:56:12.482856 2425 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9fdc9f98-e24b-4dd4-8931-d49b429f16cf-cilium-run\") pod \"cilium-zz22j\" (UID: \"9fdc9f98-e24b-4dd4-8931-d49b429f16cf\") " pod="kube-system/cilium-zz22j"
Feb 8 23:56:12.482918 kubelet[2425]: I0208 23:56:12.482900 2425 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9fdc9f98-e24b-4dd4-8931-d49b429f16cf-etc-cni-netd\") pod \"cilium-zz22j\" (UID: \"9fdc9f98-e24b-4dd4-8931-d49b429f16cf\") " pod="kube-system/cilium-zz22j"
Feb 8 23:56:12.483153 kubelet[2425]: I0208 23:56:12.482928 2425 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9fdc9f98-e24b-4dd4-8931-d49b429f16cf-cilium-config-path\") pod \"cilium-zz22j\" (UID: \"9fdc9f98-e24b-4dd4-8931-d49b429f16cf\") " pod="kube-system/cilium-zz22j"
Feb 8 23:56:12.483153 kubelet[2425]: I0208 23:56:12.482971 2425 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9fdc9f98-e24b-4dd4-8931-d49b429f16cf-cilium-cgroup\") pod \"cilium-zz22j\" (UID: \"9fdc9f98-e24b-4dd4-8931-d49b429f16cf\") " pod="kube-system/cilium-zz22j"
Feb 8 23:56:12.483153 kubelet[2425]: I0208 23:56:12.483004 2425 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9fdc9f98-e24b-4dd4-8931-d49b429f16cf-cni-path\") pod \"cilium-zz22j\" (UID: \"9fdc9f98-e24b-4dd4-8931-d49b429f16cf\") " pod="kube-system/cilium-zz22j"
Feb 8 23:56:12.483153 kubelet[2425]: I0208 23:56:12.483049 2425 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/7a9e0799-48c4-4773-830f-f7fe37b8ac8d-kube-proxy\") pod \"kube-proxy-qmwb6\" (UID: \"7a9e0799-48c4-4773-830f-f7fe37b8ac8d\") " pod="kube-system/kube-proxy-qmwb6"
Feb 8 23:56:12.483153 kubelet[2425]: I0208 23:56:12.483082 2425 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lrfm5\" (UniqueName: \"kubernetes.io/projected/7a9e0799-48c4-4773-830f-f7fe37b8ac8d-kube-api-access-lrfm5\") pod \"kube-proxy-qmwb6\" (UID: \"7a9e0799-48c4-4773-830f-f7fe37b8ac8d\") " pod="kube-system/kube-proxy-qmwb6"
Feb 8 23:56:12.483356 kubelet[2425]: I0208 23:56:12.483129 2425 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7a9e0799-48c4-4773-830f-f7fe37b8ac8d-lib-modules\") pod \"kube-proxy-qmwb6\" (UID: \"7a9e0799-48c4-4773-830f-f7fe37b8ac8d\") " pod="kube-system/kube-proxy-qmwb6"
Feb 8 23:56:12.483356 kubelet[2425]: I0208 23:56:12.483159 2425 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9fdc9f98-e24b-4dd4-8931-d49b429f16cf-hostproc\") pod \"cilium-zz22j\" (UID: \"9fdc9f98-e24b-4dd4-8931-d49b429f16cf\") " pod="kube-system/cilium-zz22j"
Feb 8 23:56:12.483356 kubelet[2425]: I0208 23:56:12.483203 2425 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9fdc9f98-e24b-4dd4-8931-d49b429f16cf-host-proc-sys-net\") pod \"cilium-zz22j\" (UID: \"9fdc9f98-e24b-4dd4-8931-d49b429f16cf\") " pod="kube-system/cilium-zz22j"
Feb 8 23:56:12.483356 kubelet[2425]: I0208 23:56:12.483233 2425 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9fdc9f98-e24b-4dd4-8931-d49b429f16cf-hubble-tls\") pod \"cilium-zz22j\" (UID: \"9fdc9f98-e24b-4dd4-8931-d49b429f16cf\") " pod="kube-system/cilium-zz22j"
Feb 8 23:56:12.483356 kubelet[2425]: I0208 23:56:12.483275 2425 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9fdc9f98-e24b-4dd4-8931-d49b429f16cf-lib-modules\") pod \"cilium-zz22j\" (UID: \"9fdc9f98-e24b-4dd4-8931-d49b429f16cf\") " pod="kube-system/cilium-zz22j"
Feb 8 23:56:12.483356 kubelet[2425]: I0208 23:56:12.483305 2425 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9fdc9f98-e24b-4dd4-8931-d49b429f16cf-bpf-maps\") pod \"cilium-zz22j\" (UID: \"9fdc9f98-e24b-4dd4-8931-d49b429f16cf\") " pod="kube-system/cilium-zz22j"
Feb 8 23:56:12.483613 kubelet[2425]: I0208 23:56:12.483332 2425 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9fdc9f98-e24b-4dd4-8931-d49b429f16cf-xtables-lock\") pod \"cilium-zz22j\" (UID: \"9fdc9f98-e24b-4dd4-8931-d49b429f16cf\") " pod="kube-system/cilium-zz22j"
Feb 8 23:56:12.483613 kubelet[2425]: I0208 23:56:12.483380 2425 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k52tm\" (UniqueName: \"kubernetes.io/projected/9fdc9f98-e24b-4dd4-8931-d49b429f16cf-kube-api-access-k52tm\") pod \"cilium-zz22j\" (UID: \"9fdc9f98-e24b-4dd4-8931-d49b429f16cf\") " pod="kube-system/cilium-zz22j"
Feb 8 23:56:12.484743 kubelet[2425]: W0208 23:56:12.484701 2425 reflector.go:424] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ci-3510.3.2-a-b1d3c6d57d" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.2-a-b1d3c6d57d' and this object
Feb 8 23:56:12.484844 kubelet[2425]: E0208 23:56:12.484751 2425 reflector.go:140] object-"kube-system"/"cilium-clustermesh": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ci-3510.3.2-a-b1d3c6d57d" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.2-a-b1d3c6d57d' and this object
Feb 8 23:56:12.484844 kubelet[2425]: W0208 23:56:12.484812 2425 reflector.go:424] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ci-3510.3.2-a-b1d3c6d57d" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.2-a-b1d3c6d57d' and this object
Feb 8 23:56:12.484844 kubelet[2425]: E0208 23:56:12.484826 2425 reflector.go:140] object-"kube-system"/"cilium-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ci-3510.3.2-a-b1d3c6d57d" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.2-a-b1d3c6d57d' and this object
Feb 8 23:56:12.484982 kubelet[2425]: W0208 23:56:12.484866 2425 reflector.go:424] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ci-3510.3.2-a-b1d3c6d57d" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.2-a-b1d3c6d57d' and this object
Feb 8 23:56:12.484982 kubelet[2425]: E0208 23:56:12.484888 2425 reflector.go:140] object-"kube-system"/"hubble-server-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ci-3510.3.2-a-b1d3c6d57d" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.2-a-b1d3c6d57d' and this object
Feb 8 23:56:12.968662 kubelet[2425]: I0208 23:56:12.968622 2425 topology_manager.go:210] "Topology Admit Handler"
Feb 8 23:56:12.974773 systemd[1]: Created slice kubepods-besteffort-pod0bb15dd4_9cda_47d8_a81f_0eed42b04021.slice.
Feb 8 23:56:12.986872 kubelet[2425]: I0208 23:56:12.986839 2425 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0bb15dd4-9cda-47d8-a81f-0eed42b04021-cilium-config-path\") pod \"cilium-operator-f59cbd8c6-lmmzx\" (UID: \"0bb15dd4-9cda-47d8-a81f-0eed42b04021\") " pod="kube-system/cilium-operator-f59cbd8c6-lmmzx"
Feb 8 23:56:12.987030 kubelet[2425]: I0208 23:56:12.986927 2425 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hc64t\" (UniqueName: \"kubernetes.io/projected/0bb15dd4-9cda-47d8-a81f-0eed42b04021-kube-api-access-hc64t\") pod \"cilium-operator-f59cbd8c6-lmmzx\" (UID: \"0bb15dd4-9cda-47d8-a81f-0eed42b04021\") " pod="kube-system/cilium-operator-f59cbd8c6-lmmzx"
Feb 8 23:56:13.038367 env[1316]: time="2024-02-08T23:56:13.038314398Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qmwb6,Uid:7a9e0799-48c4-4773-830f-f7fe37b8ac8d,Namespace:kube-system,Attempt:0,}"
Feb 8 23:56:13.072027 env[1316]: time="2024-02-08T23:56:13.071962777Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 8 23:56:13.072226 env[1316]: time="2024-02-08T23:56:13.072001778Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 8 23:56:13.072226 env[1316]: time="2024-02-08T23:56:13.072014678Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 8 23:56:13.072364 env[1316]: time="2024-02-08T23:56:13.072205781Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0ccffad2739db7688384c53685f3c90de6549ac8ce54fa27fd2dbd891818c496 pid=2528 runtime=io.containerd.runc.v2
Feb 8 23:56:13.093396 systemd[1]: Started cri-containerd-0ccffad2739db7688384c53685f3c90de6549ac8ce54fa27fd2dbd891818c496.scope.
Feb 8 23:56:13.123779 env[1316]: time="2024-02-08T23:56:13.123724415Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qmwb6,Uid:7a9e0799-48c4-4773-830f-f7fe37b8ac8d,Namespace:kube-system,Attempt:0,} returns sandbox id \"0ccffad2739db7688384c53685f3c90de6549ac8ce54fa27fd2dbd891818c496\""
Feb 8 23:56:13.128810 env[1316]: time="2024-02-08T23:56:13.128773187Z" level=info msg="CreateContainer within sandbox \"0ccffad2739db7688384c53685f3c90de6549ac8ce54fa27fd2dbd891818c496\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Feb 8 23:56:13.162476 env[1316]: time="2024-02-08T23:56:13.162393166Z" level=info msg="CreateContainer within sandbox \"0ccffad2739db7688384c53685f3c90de6549ac8ce54fa27fd2dbd891818c496\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"f63f40f6e803003c278e5839cbbac204c93e7a1fdcbe1f54e58258f14c3a1feb\""
Feb 8 23:56:13.164035 env[1316]: time="2024-02-08T23:56:13.163982689Z" level=info msg="StartContainer for \"f63f40f6e803003c278e5839cbbac204c93e7a1fdcbe1f54e58258f14c3a1feb\""
Feb 8 23:56:13.183442 systemd[1]: Started cri-containerd-f63f40f6e803003c278e5839cbbac204c93e7a1fdcbe1f54e58258f14c3a1feb.scope.
Feb 8 23:56:13.218840 env[1316]: time="2024-02-08T23:56:13.218703169Z" level=info msg="StartContainer for \"f63f40f6e803003c278e5839cbbac204c93e7a1fdcbe1f54e58258f14c3a1feb\" returns successfully"
Feb 8 23:56:13.585205 kubelet[2425]: E0208 23:56:13.585021 2425 configmap.go:199] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition
Feb 8 23:56:13.585686 kubelet[2425]: E0208 23:56:13.585589 2425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9fdc9f98-e24b-4dd4-8931-d49b429f16cf-cilium-config-path podName:9fdc9f98-e24b-4dd4-8931-d49b429f16cf nodeName:}" failed. No retries permitted until 2024-02-08 23:56:14.085561497 +0000 UTC m=+15.307215248 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/9fdc9f98-e24b-4dd4-8931-d49b429f16cf-cilium-config-path") pod "cilium-zz22j" (UID: "9fdc9f98-e24b-4dd4-8931-d49b429f16cf") : failed to sync configmap cache: timed out waiting for the condition
Feb 8 23:56:13.585792 kubelet[2425]: E0208 23:56:13.585042 2425 projected.go:267] Couldn't get secret kube-system/hubble-server-certs: failed to sync secret cache: timed out waiting for the condition
Feb 8 23:56:13.585792 kubelet[2425]: E0208 23:56:13.585783 2425 projected.go:198] Error preparing data for projected volume hubble-tls for pod kube-system/cilium-zz22j: failed to sync secret cache: timed out waiting for the condition
Feb 8 23:56:13.585872 kubelet[2425]: E0208 23:56:13.585844 2425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9fdc9f98-e24b-4dd4-8931-d49b429f16cf-hubble-tls podName:9fdc9f98-e24b-4dd4-8931-d49b429f16cf nodeName:}" failed. No retries permitted until 2024-02-08 23:56:14.085826701 +0000 UTC m=+15.307480452 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "hubble-tls" (UniqueName: "kubernetes.io/projected/9fdc9f98-e24b-4dd4-8931-d49b429f16cf-hubble-tls") pod "cilium-zz22j" (UID: "9fdc9f98-e24b-4dd4-8931-d49b429f16cf") : failed to sync secret cache: timed out waiting for the condition
Feb 8 23:56:13.585974 kubelet[2425]: E0208 23:56:13.585956 2425 secret.go:194] Couldn't get secret kube-system/cilium-clustermesh: failed to sync secret cache: timed out waiting for the condition
Feb 8 23:56:13.586049 kubelet[2425]: E0208 23:56:13.586008 2425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9fdc9f98-e24b-4dd4-8931-d49b429f16cf-clustermesh-secrets podName:9fdc9f98-e24b-4dd4-8931-d49b429f16cf nodeName:}" failed. No retries permitted until 2024-02-08 23:56:14.085996203 +0000 UTC m=+15.307649954 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "clustermesh-secrets" (UniqueName: "kubernetes.io/secret/9fdc9f98-e24b-4dd4-8931-d49b429f16cf-clustermesh-secrets") pod "cilium-zz22j" (UID: "9fdc9f98-e24b-4dd4-8931-d49b429f16cf") : failed to sync secret cache: timed out waiting for the condition
Feb 8 23:56:13.776043 systemd[1]: run-containerd-runc-k8s.io-0ccffad2739db7688384c53685f3c90de6549ac8ce54fa27fd2dbd891818c496-runc.t5SAs9.mount: Deactivated successfully.
Feb 8 23:56:14.181192 env[1316]: time="2024-02-08T23:56:14.181133831Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-f59cbd8c6-lmmzx,Uid:0bb15dd4-9cda-47d8-a81f-0eed42b04021,Namespace:kube-system,Attempt:0,}"
Feb 8 23:56:14.241614 env[1316]: time="2024-02-08T23:56:14.241533574Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 8 23:56:14.241614 env[1316]: time="2024-02-08T23:56:14.241577575Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 8 23:56:14.241614 env[1316]: time="2024-02-08T23:56:14.241591575Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 8 23:56:14.242089 env[1316]: time="2024-02-08T23:56:14.242033581Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/09e7b84f364388f6697d6b09828c960c9a62f144608e7827ce7adf1fe76acbf6 pid=2712 runtime=io.containerd.runc.v2
Feb 8 23:56:14.264866 systemd[1]: Started cri-containerd-09e7b84f364388f6697d6b09828c960c9a62f144608e7827ce7adf1fe76acbf6.scope.
Feb 8 23:56:14.309926 env[1316]: time="2024-02-08T23:56:14.309870727Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-f59cbd8c6-lmmzx,Uid:0bb15dd4-9cda-47d8-a81f-0eed42b04021,Namespace:kube-system,Attempt:0,} returns sandbox id \"09e7b84f364388f6697d6b09828c960c9a62f144608e7827ce7adf1fe76acbf6\""
Feb 8 23:56:14.313311 env[1316]: time="2024-02-08T23:56:14.313271675Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Feb 8 23:56:14.575810 env[1316]: time="2024-02-08T23:56:14.575687936Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zz22j,Uid:9fdc9f98-e24b-4dd4-8931-d49b429f16cf,Namespace:kube-system,Attempt:0,}"
Feb 8 23:56:14.617079 env[1316]: time="2024-02-08T23:56:14.616999612Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 8 23:56:14.617079 env[1316]: time="2024-02-08T23:56:14.617040112Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 8 23:56:14.617079 env[1316]: time="2024-02-08T23:56:14.617054113Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 8 23:56:14.617510 env[1316]: time="2024-02-08T23:56:14.617430518Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/9bdcd40dc5ea01871317c0648e9570c712cb08f03509ef5c68e960f36785e1d2 pid=2752 runtime=io.containerd.runc.v2
Feb 8 23:56:14.630102 systemd[1]: Started cri-containerd-9bdcd40dc5ea01871317c0648e9570c712cb08f03509ef5c68e960f36785e1d2.scope.
Feb 8 23:56:14.656624 env[1316]: time="2024-02-08T23:56:14.656585664Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zz22j,Uid:9fdc9f98-e24b-4dd4-8931-d49b429f16cf,Namespace:kube-system,Attempt:0,} returns sandbox id \"9bdcd40dc5ea01871317c0648e9570c712cb08f03509ef5c68e960f36785e1d2\""
Feb 8 23:56:14.779631 systemd[1]: run-containerd-runc-k8s.io-09e7b84f364388f6697d6b09828c960c9a62f144608e7827ce7adf1fe76acbf6-runc.U4lxt6.mount: Deactivated successfully.
Feb 8 23:56:15.818198 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount193354198.mount: Deactivated successfully.
Feb 8 23:56:16.571816 env[1316]: time="2024-02-08T23:56:16.571756260Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 8 23:56:16.576387 env[1316]: time="2024-02-08T23:56:16.576346621Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 8 23:56:16.579556 env[1316]: time="2024-02-08T23:56:16.579521763Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 8 23:56:16.580037 env[1316]: time="2024-02-08T23:56:16.579998570Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Feb 8 23:56:16.581806 env[1316]: time="2024-02-08T23:56:16.581766793Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Feb 8 23:56:16.583200 env[1316]: time="2024-02-08T23:56:16.583168912Z" level=info msg="CreateContainer within sandbox \"09e7b84f364388f6697d6b09828c960c9a62f144608e7827ce7adf1fe76acbf6\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Feb 8 23:56:16.609108 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1547708720.mount: Deactivated successfully.
Feb 8 23:56:16.620302 env[1316]: time="2024-02-08T23:56:16.620261108Z" level=info msg="CreateContainer within sandbox \"09e7b84f364388f6697d6b09828c960c9a62f144608e7827ce7adf1fe76acbf6\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"d876abaeea8d678e3ee897e0b4e7dc317c0362d39ee60d082abc734e359e53f1\""
Feb 8 23:56:16.622351 env[1316]: time="2024-02-08T23:56:16.621639127Z" level=info msg="StartContainer for \"d876abaeea8d678e3ee897e0b4e7dc317c0362d39ee60d082abc734e359e53f1\""
Feb 8 23:56:16.638729 systemd[1]: Started cri-containerd-d876abaeea8d678e3ee897e0b4e7dc317c0362d39ee60d082abc734e359e53f1.scope.
Feb 8 23:56:16.681425 env[1316]: time="2024-02-08T23:56:16.681375526Z" level=info msg="StartContainer for \"d876abaeea8d678e3ee897e0b4e7dc317c0362d39ee60d082abc734e359e53f1\" returns successfully"
Feb 8 23:56:17.033710 kubelet[2425]: I0208 23:56:17.033666 2425 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-f59cbd8c6-lmmzx" podStartSLOduration=-9.223372031821165e+09 pod.CreationTimestamp="2024-02-08 23:56:12 +0000 UTC" firstStartedPulling="2024-02-08 23:56:14.311151745 +0000 UTC m=+15.532805496" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-08 23:56:17.032626115 +0000 UTC m=+18.254279966" watchObservedRunningTime="2024-02-08 23:56:17.033611627 +0000 UTC m=+18.255265378"
Feb 8 23:56:17.034264 kubelet[2425]: I0208 23:56:17.033895 2425 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-qmwb6" podStartSLOduration=5.033863731 pod.CreationTimestamp="2024-02-08 23:56:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-08 23:56:14.567468321 +0000 UTC m=+15.789122072" watchObservedRunningTime="2024-02-08 23:56:17.033863731 +0000 UTC m=+18.255517582"
Feb 8 23:56:22.043523 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount727996277.mount: Deactivated successfully.
Feb 8 23:56:24.772847 env[1316]: time="2024-02-08T23:56:24.772799891Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 8 23:56:24.781692 env[1316]: time="2024-02-08T23:56:24.781645992Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 8 23:56:24.786682 env[1316]: time="2024-02-08T23:56:24.786641048Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 8 23:56:24.787221 env[1316]: time="2024-02-08T23:56:24.787172655Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
Feb 8 23:56:24.789814 env[1316]: time="2024-02-08T23:56:24.789780284Z" level=info msg="CreateContainer within sandbox \"9bdcd40dc5ea01871317c0648e9570c712cb08f03509ef5c68e960f36785e1d2\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Feb 8 23:56:24.820861 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3875629075.mount: Deactivated successfully.
Feb 8 23:56:24.836929 env[1316]: time="2024-02-08T23:56:24.836878621Z" level=info msg="CreateContainer within sandbox \"9bdcd40dc5ea01871317c0648e9570c712cb08f03509ef5c68e960f36785e1d2\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"ef9d10cce05cdd7ae4b06a6d466aef5699dca117b48a6a43ed7801d4fa440543\""
Feb 8 23:56:24.837509 env[1316]: time="2024-02-08T23:56:24.837478628Z" level=info msg="StartContainer for \"ef9d10cce05cdd7ae4b06a6d466aef5699dca117b48a6a43ed7801d4fa440543\""
Feb 8 23:56:24.862416 systemd[1]: Started cri-containerd-ef9d10cce05cdd7ae4b06a6d466aef5699dca117b48a6a43ed7801d4fa440543.scope.
Feb 8 23:56:24.905684 env[1316]: time="2024-02-08T23:56:24.905634005Z" level=info msg="StartContainer for \"ef9d10cce05cdd7ae4b06a6d466aef5699dca117b48a6a43ed7801d4fa440543\" returns successfully"
Feb 8 23:56:24.909228 systemd[1]: cri-containerd-ef9d10cce05cdd7ae4b06a6d466aef5699dca117b48a6a43ed7801d4fa440543.scope: Deactivated successfully.
Feb 8 23:56:25.819335 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ef9d10cce05cdd7ae4b06a6d466aef5699dca117b48a6a43ed7801d4fa440543-rootfs.mount: Deactivated successfully.
Feb 8 23:56:29.031175 env[1316]: time="2024-02-08T23:56:29.031090252Z" level=info msg="shim disconnected" id=ef9d10cce05cdd7ae4b06a6d466aef5699dca117b48a6a43ed7801d4fa440543
Feb 8 23:56:29.031867 env[1316]: time="2024-02-08T23:56:29.031838759Z" level=warning msg="cleaning up after shim disconnected" id=ef9d10cce05cdd7ae4b06a6d466aef5699dca117b48a6a43ed7801d4fa440543 namespace=k8s.io
Feb 8 23:56:29.031990 env[1316]: time="2024-02-08T23:56:29.031972461Z" level=info msg="cleaning up dead shim"
Feb 8 23:56:29.040474 env[1316]: time="2024-02-08T23:56:29.040422849Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:56:29Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2874 runtime=io.containerd.runc.v2\n"
Feb 8 23:56:29.055177 env[1316]: time="2024-02-08T23:56:29.055140302Z" level=info msg="CreateContainer within sandbox \"9bdcd40dc5ea01871317c0648e9570c712cb08f03509ef5c68e960f36785e1d2\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Feb 8 23:56:29.098075 env[1316]: time="2024-02-08T23:56:29.097973447Z" level=info msg="CreateContainer within sandbox \"9bdcd40dc5ea01871317c0648e9570c712cb08f03509ef5c68e960f36785e1d2\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"16b106232504c2023dcbf34a525741f5044595eee3c92c85b30c0ac0d7553caf\""
Feb 8 23:56:29.101124 env[1316]: time="2024-02-08T23:56:29.098963758Z" level=info msg="StartContainer for \"16b106232504c2023dcbf34a525741f5044595eee3c92c85b30c0ac0d7553caf\""
Feb 8 23:56:29.124515 systemd[1]: run-containerd-runc-k8s.io-16b106232504c2023dcbf34a525741f5044595eee3c92c85b30c0ac0d7553caf-runc.oYnOUl.mount: Deactivated successfully.
Feb 8 23:56:29.129131 systemd[1]: Started cri-containerd-16b106232504c2023dcbf34a525741f5044595eee3c92c85b30c0ac0d7553caf.scope.
Feb 8 23:56:29.163639 env[1316]: time="2024-02-08T23:56:29.163598130Z" level=info msg="StartContainer for \"16b106232504c2023dcbf34a525741f5044595eee3c92c85b30c0ac0d7553caf\" returns successfully"
Feb 8 23:56:29.170995 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 8 23:56:29.171532 systemd[1]: Stopped systemd-sysctl.service.
Feb 8 23:56:29.171720 systemd[1]: Stopping systemd-sysctl.service...
Feb 8 23:56:29.174003 systemd[1]: Starting systemd-sysctl.service...
Feb 8 23:56:29.180083 systemd[1]: cri-containerd-16b106232504c2023dcbf34a525741f5044595eee3c92c85b30c0ac0d7553caf.scope: Deactivated successfully.
Feb 8 23:56:29.184416 systemd[1]: Finished systemd-sysctl.service.
Feb 8 23:56:29.217338 env[1316]: time="2024-02-08T23:56:29.217295388Z" level=info msg="shim disconnected" id=16b106232504c2023dcbf34a525741f5044595eee3c92c85b30c0ac0d7553caf
Feb 8 23:56:29.217558 env[1316]: time="2024-02-08T23:56:29.217337489Z" level=warning msg="cleaning up after shim disconnected" id=16b106232504c2023dcbf34a525741f5044595eee3c92c85b30c0ac0d7553caf namespace=k8s.io
Feb 8 23:56:29.217558 env[1316]: time="2024-02-08T23:56:29.217352489Z" level=info msg="cleaning up dead shim"
Feb 8 23:56:29.225002 env[1316]: time="2024-02-08T23:56:29.224975468Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:56:29Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2940 runtime=io.containerd.runc.v2\n"
Feb 8 23:56:30.055313 env[1316]: time="2024-02-08T23:56:30.055267995Z" level=info msg="CreateContainer within sandbox \"9bdcd40dc5ea01871317c0648e9570c712cb08f03509ef5c68e960f36785e1d2\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Feb 8 23:56:30.085088 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-16b106232504c2023dcbf34a525741f5044595eee3c92c85b30c0ac0d7553caf-rootfs.mount: Deactivated successfully.
Feb 8 23:56:30.096532 env[1316]: time="2024-02-08T23:56:30.096481517Z" level=info msg="CreateContainer within sandbox \"9bdcd40dc5ea01871317c0648e9570c712cb08f03509ef5c68e960f36785e1d2\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"f23632fed9014d251adee04ed9614ab2b07b6dd271206cf3e9c1ebfb81d7b24f\""
Feb 8 23:56:30.097181 env[1316]: time="2024-02-08T23:56:30.097140023Z" level=info msg="StartContainer for \"f23632fed9014d251adee04ed9614ab2b07b6dd271206cf3e9c1ebfb81d7b24f\""
Feb 8 23:56:30.133617 systemd[1]: run-containerd-runc-k8s.io-f23632fed9014d251adee04ed9614ab2b07b6dd271206cf3e9c1ebfb81d7b24f-runc.cyNGFW.mount: Deactivated successfully.
Feb 8 23:56:30.136882 systemd[1]: Started cri-containerd-f23632fed9014d251adee04ed9614ab2b07b6dd271206cf3e9c1ebfb81d7b24f.scope.
Feb 8 23:56:30.168885 systemd[1]: cri-containerd-f23632fed9014d251adee04ed9614ab2b07b6dd271206cf3e9c1ebfb81d7b24f.scope: Deactivated successfully.
Feb 8 23:56:30.176346 env[1316]: time="2024-02-08T23:56:30.176236932Z" level=info msg="StartContainer for \"f23632fed9014d251adee04ed9614ab2b07b6dd271206cf3e9c1ebfb81d7b24f\" returns successfully"
Feb 8 23:56:30.206214 env[1316]: time="2024-02-08T23:56:30.206125137Z" level=info msg="shim disconnected" id=f23632fed9014d251adee04ed9614ab2b07b6dd271206cf3e9c1ebfb81d7b24f
Feb 8 23:56:30.206214 env[1316]: time="2024-02-08T23:56:30.206214138Z" level=warning msg="cleaning up after shim disconnected" id=f23632fed9014d251adee04ed9614ab2b07b6dd271206cf3e9c1ebfb81d7b24f namespace=k8s.io
Feb 8 23:56:30.206530 env[1316]: time="2024-02-08T23:56:30.206226538Z" level=info msg="cleaning up dead shim"
Feb 8 23:56:30.215117 env[1316]: time="2024-02-08T23:56:30.215078029Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:56:30Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2993 runtime=io.containerd.runc.v2\n"
Feb 8 23:56:31.061882 env[1316]: time="2024-02-08T23:56:31.060026654Z" level=info msg="CreateContainer within sandbox \"9bdcd40dc5ea01871317c0648e9570c712cb08f03509ef5c68e960f36785e1d2\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Feb 8 23:56:31.084665 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f23632fed9014d251adee04ed9614ab2b07b6dd271206cf3e9c1ebfb81d7b24f-rootfs.mount: Deactivated successfully.
Feb 8 23:56:31.102870 env[1316]: time="2024-02-08T23:56:31.102763483Z" level=info msg="CreateContainer within sandbox \"9bdcd40dc5ea01871317c0648e9570c712cb08f03509ef5c68e960f36785e1d2\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"48f5b77aa0db5bcad2f5ea601d18ee7318feded9bb80f5a3ed848b4cd524a6b0\""
Feb 8 23:56:31.103671 env[1316]: time="2024-02-08T23:56:31.103635592Z" level=info msg="StartContainer for \"48f5b77aa0db5bcad2f5ea601d18ee7318feded9bb80f5a3ed848b4cd524a6b0\""
Feb 8 23:56:31.126593 systemd[1]: run-containerd-runc-k8s.io-48f5b77aa0db5bcad2f5ea601d18ee7318feded9bb80f5a3ed848b4cd524a6b0-runc.Q2ZFw5.mount: Deactivated successfully.
Feb 8 23:56:31.131906 systemd[1]: Started cri-containerd-48f5b77aa0db5bcad2f5ea601d18ee7318feded9bb80f5a3ed848b4cd524a6b0.scope.
Feb 8 23:56:31.160267 systemd[1]: cri-containerd-48f5b77aa0db5bcad2f5ea601d18ee7318feded9bb80f5a3ed848b4cd524a6b0.scope: Deactivated successfully.
Feb 8 23:56:31.170705 env[1316]: time="2024-02-08T23:56:31.170665865Z" level=info msg="StartContainer for \"48f5b77aa0db5bcad2f5ea601d18ee7318feded9bb80f5a3ed848b4cd524a6b0\" returns successfully"
Feb 8 23:56:31.196976 env[1316]: time="2024-02-08T23:56:31.196916329Z" level=info msg="shim disconnected" id=48f5b77aa0db5bcad2f5ea601d18ee7318feded9bb80f5a3ed848b4cd524a6b0
Feb 8 23:56:31.196976 env[1316]: time="2024-02-08T23:56:31.196973129Z" level=warning msg="cleaning up after shim disconnected" id=48f5b77aa0db5bcad2f5ea601d18ee7318feded9bb80f5a3ed848b4cd524a6b0 namespace=k8s.io
Feb 8 23:56:31.197234 env[1316]: time="2024-02-08T23:56:31.196984930Z" level=info msg="cleaning up dead shim"
Feb 8 23:56:31.204819 env[1316]: time="2024-02-08T23:56:31.204781608Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:56:31Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3050 runtime=io.containerd.runc.v2\n"
Feb 8 23:56:32.067278 env[1316]: time="2024-02-08T23:56:32.067227459Z" level=info msg="CreateContainer within sandbox \"9bdcd40dc5ea01871317c0648e9570c712cb08f03509ef5c68e960f36785e1d2\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Feb 8 23:56:32.084759 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-48f5b77aa0db5bcad2f5ea601d18ee7318feded9bb80f5a3ed848b4cd524a6b0-rootfs.mount: Deactivated successfully.
Feb 8 23:56:32.112489 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1349682606.mount: Deactivated successfully.
Feb 8 23:56:32.118741 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount744680325.mount: Deactivated successfully.
Feb 8 23:56:32.131648 env[1316]: time="2024-02-08T23:56:32.131605595Z" level=info msg="CreateContainer within sandbox \"9bdcd40dc5ea01871317c0648e9570c712cb08f03509ef5c68e960f36785e1d2\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"e9a46a0a42b4ea78cb1497a8d76d15212940b8628b6b8bc50f5f31d5edf278ec\""
Feb 8 23:56:32.132481 env[1316]: time="2024-02-08T23:56:32.132271701Z" level=info msg="StartContainer for \"e9a46a0a42b4ea78cb1497a8d76d15212940b8628b6b8bc50f5f31d5edf278ec\""
Feb 8 23:56:32.148519 systemd[1]: Started cri-containerd-e9a46a0a42b4ea78cb1497a8d76d15212940b8628b6b8bc50f5f31d5edf278ec.scope.
Feb 8 23:56:32.184094 env[1316]: time="2024-02-08T23:56:32.184053713Z" level=info msg="StartContainer for \"e9a46a0a42b4ea78cb1497a8d76d15212940b8628b6b8bc50f5f31d5edf278ec\" returns successfully"
Feb 8 23:56:32.286999 kubelet[2425]: I0208 23:56:32.286956 2425 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
Feb 8 23:56:32.314298 kubelet[2425]: I0208 23:56:32.314259 2425 topology_manager.go:210] "Topology Admit Handler"
Feb 8 23:56:32.320955 systemd[1]: Created slice kubepods-burstable-podff879896_b440_46f6_919e_8c27f133d084.slice.
Feb 8 23:56:32.328924 kubelet[2425]: I0208 23:56:32.328792 2425 topology_manager.go:210] "Topology Admit Handler"
Feb 8 23:56:32.334761 systemd[1]: Created slice kubepods-burstable-pod0136d7a9_85c0_4509_9bff_06bec9fc1ba3.slice.
Feb 8 23:56:32.430552 kubelet[2425]: I0208 23:56:32.430438 2425 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k6nbk\" (UniqueName: \"kubernetes.io/projected/ff879896-b440-46f6-919e-8c27f133d084-kube-api-access-k6nbk\") pod \"coredns-787d4945fb-sfg4h\" (UID: \"ff879896-b440-46f6-919e-8c27f133d084\") " pod="kube-system/coredns-787d4945fb-sfg4h"
Feb 8 23:56:32.430753 kubelet[2425]: I0208 23:56:32.430597 2425 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0136d7a9-85c0-4509-9bff-06bec9fc1ba3-config-volume\") pod \"coredns-787d4945fb-zsw4p\" (UID: \"0136d7a9-85c0-4509-9bff-06bec9fc1ba3\") " pod="kube-system/coredns-787d4945fb-zsw4p"
Feb 8 23:56:32.430753 kubelet[2425]: I0208 23:56:32.430661 2425 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pmql9\" (UniqueName: \"kubernetes.io/projected/0136d7a9-85c0-4509-9bff-06bec9fc1ba3-kube-api-access-pmql9\") pod \"coredns-787d4945fb-zsw4p\" (UID: \"0136d7a9-85c0-4509-9bff-06bec9fc1ba3\") " pod="kube-system/coredns-787d4945fb-zsw4p"
Feb 8 23:56:32.430753 kubelet[2425]: I0208 23:56:32.430693 2425 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ff879896-b440-46f6-919e-8c27f133d084-config-volume\") pod \"coredns-787d4945fb-sfg4h\" (UID: \"ff879896-b440-46f6-919e-8c27f133d084\") " pod="kube-system/coredns-787d4945fb-sfg4h"
Feb 8 23:56:32.631473 env[1316]: time="2024-02-08T23:56:32.631404529Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-sfg4h,Uid:ff879896-b440-46f6-919e-8c27f133d084,Namespace:kube-system,Attempt:0,}"
Feb 8 23:56:32.637731 env[1316]: time="2024-02-08T23:56:32.637678391Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-zsw4p,Uid:0136d7a9-85c0-4509-9bff-06bec9fc1ba3,Namespace:kube-system,Attempt:0,}"
Feb 8 23:56:34.483783 systemd-networkd[1462]: cilium_host: Link UP
Feb 8 23:56:34.483969 systemd-networkd[1462]: cilium_net: Link UP
Feb 8 23:56:34.483972 systemd-networkd[1462]: cilium_net: Gained carrier
Feb 8 23:56:34.488518 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready
Feb 8 23:56:34.489542 systemd-networkd[1462]: cilium_host: Gained carrier
Feb 8 23:56:34.723711 systemd-networkd[1462]: cilium_vxlan: Link UP
Feb 8 23:56:34.723720 systemd-networkd[1462]: cilium_vxlan: Gained carrier
Feb 8 23:56:34.863667 systemd-networkd[1462]: cilium_net: Gained IPv6LL
Feb 8 23:56:34.982476 kernel: NET: Registered PF_ALG protocol family
Feb 8 23:56:35.247636 systemd-networkd[1462]: cilium_host: Gained IPv6LL
Feb 8 23:56:35.765900 systemd-networkd[1462]: lxc_health: Link UP
Feb 8 23:56:35.789483 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Feb 8 23:56:35.790530 systemd-networkd[1462]: lxc_health: Gained carrier
Feb 8 23:56:36.201949 systemd-networkd[1462]: lxcd2cf93e72a3c: Link UP
Feb 8 23:56:36.212484 kernel: eth0: renamed from tmpdab33
Feb 8 23:56:36.228174 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcd2cf93e72a3c: link becomes ready
Feb 8 23:56:36.224703 systemd-networkd[1462]: lxcd2cf93e72a3c: Gained carrier
Feb 8 23:56:36.229912 systemd-networkd[1462]: lxc8640c39bd049: Link UP
Feb 8 23:56:36.243556 kernel: eth0: renamed from tmpfd02c
Feb 8 23:56:36.259751 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc8640c39bd049: link becomes ready
Feb 8 23:56:36.262791 systemd-networkd[1462]: lxc8640c39bd049: Gained carrier
Feb 8 23:56:36.335645 systemd-networkd[1462]: cilium_vxlan: Gained IPv6LL
Feb 8 23:56:36.608867 kubelet[2425]: I0208 23:56:36.607868 2425 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-zz22j" podStartSLOduration=-9.22337201224695e+09 pod.CreationTimestamp="2024-02-08 23:56:12 +0000 UTC" firstStartedPulling="2024-02-08 23:56:14.658117385 +0000 UTC m=+15.879771236" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-08 23:56:33.080204948 +0000 UTC m=+34.301858799" watchObservedRunningTime="2024-02-08 23:56:36.607826537 +0000 UTC m=+37.829480288"
Feb 8 23:56:37.615670 systemd-networkd[1462]: lxc_health: Gained IPv6LL
Feb 8 23:56:37.679674 systemd-networkd[1462]: lxc8640c39bd049: Gained IPv6LL
Feb 8 23:56:38.127603 systemd-networkd[1462]: lxcd2cf93e72a3c: Gained IPv6LL
Feb 8 23:56:39.969311 env[1316]: time="2024-02-08T23:56:39.963433612Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 8 23:56:39.969311 env[1316]: time="2024-02-08T23:56:39.963512912Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 8 23:56:39.969311 env[1316]: time="2024-02-08T23:56:39.963529613Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 8 23:56:39.969311 env[1316]: time="2024-02-08T23:56:39.963802715Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/fd02c328a41b9023f8a28619c689f935c4f1e5a32a77fd5ed97cf5ceeb1ce8c5 pid=3599 runtime=io.containerd.runc.v2
Feb 8 23:56:39.983904 env[1316]: time="2024-02-08T23:56:39.983839692Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 8 23:56:39.984153 env[1316]: time="2024-02-08T23:56:39.984119694Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 8 23:56:39.984283 env[1316]: time="2024-02-08T23:56:39.984257695Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 8 23:56:39.984565 env[1316]: time="2024-02-08T23:56:39.984526698Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/dab33657191d57da983b512155993b52ca1d862f4a48fe9e17314bcb6a47e6c3 pid=3615 runtime=io.containerd.runc.v2
Feb 8 23:56:40.008121 systemd[1]: run-containerd-runc-k8s.io-fd02c328a41b9023f8a28619c689f935c4f1e5a32a77fd5ed97cf5ceeb1ce8c5-runc.tnSQwJ.mount: Deactivated successfully.
Feb 8 23:56:40.021838 systemd[1]: Started cri-containerd-fd02c328a41b9023f8a28619c689f935c4f1e5a32a77fd5ed97cf5ceeb1ce8c5.scope.
Feb 8 23:56:40.037065 systemd[1]: Started cri-containerd-dab33657191d57da983b512155993b52ca1d862f4a48fe9e17314bcb6a47e6c3.scope.
Feb 8 23:56:40.102049 env[1316]: time="2024-02-08T23:56:40.101970220Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-sfg4h,Uid:ff879896-b440-46f6-919e-8c27f133d084,Namespace:kube-system,Attempt:0,} returns sandbox id \"dab33657191d57da983b512155993b52ca1d862f4a48fe9e17314bcb6a47e6c3\""
Feb 8 23:56:40.107907 env[1316]: time="2024-02-08T23:56:40.107861471Z" level=info msg="CreateContainer within sandbox \"dab33657191d57da983b512155993b52ca1d862f4a48fe9e17314bcb6a47e6c3\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Feb 8 23:56:40.146175 env[1316]: time="2024-02-08T23:56:40.146121904Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-zsw4p,Uid:0136d7a9-85c0-4509-9bff-06bec9fc1ba3,Namespace:kube-system,Attempt:0,} returns sandbox id \"fd02c328a41b9023f8a28619c689f935c4f1e5a32a77fd5ed97cf5ceeb1ce8c5\""
Feb 8 23:56:40.146730 env[1316]: time="2024-02-08T23:56:40.146673908Z" level=info msg="CreateContainer within sandbox \"dab33657191d57da983b512155993b52ca1d862f4a48fe9e17314bcb6a47e6c3\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ce4e5da1253a01029c21ca5f898e614c6b702fb36630f0f4f346e95b2933d7f6\""
Feb 8 23:56:40.147470 env[1316]: time="2024-02-08T23:56:40.147423315Z" level=info msg="StartContainer for \"ce4e5da1253a01029c21ca5f898e614c6b702fb36630f0f4f346e95b2933d7f6\""
Feb 8 23:56:40.152471 env[1316]: time="2024-02-08T23:56:40.152425358Z" level=info msg="CreateContainer within sandbox \"fd02c328a41b9023f8a28619c689f935c4f1e5a32a77fd5ed97cf5ceeb1ce8c5\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Feb 8 23:56:40.171282 systemd[1]: Started cri-containerd-ce4e5da1253a01029c21ca5f898e614c6b702fb36630f0f4f346e95b2933d7f6.scope.
Feb 8 23:56:40.189289 env[1316]: time="2024-02-08T23:56:40.189237278Z" level=info msg="CreateContainer within sandbox \"fd02c328a41b9023f8a28619c689f935c4f1e5a32a77fd5ed97cf5ceeb1ce8c5\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3ef2cb868c7b3115dd977cc49010d1b99e81f41c6b8160805f21c626acf725ee\""
Feb 8 23:56:40.189789 env[1316]: time="2024-02-08T23:56:40.189758283Z" level=info msg="StartContainer for \"3ef2cb868c7b3115dd977cc49010d1b99e81f41c6b8160805f21c626acf725ee\""
Feb 8 23:56:40.223643 systemd[1]: Started cri-containerd-3ef2cb868c7b3115dd977cc49010d1b99e81f41c6b8160805f21c626acf725ee.scope.
Feb 8 23:56:40.242217 env[1316]: time="2024-02-08T23:56:40.242082237Z" level=info msg="StartContainer for \"ce4e5da1253a01029c21ca5f898e614c6b702fb36630f0f4f346e95b2933d7f6\" returns successfully"
Feb 8 23:56:40.308516 env[1316]: time="2024-02-08T23:56:40.308445314Z" level=info msg="StartContainer for \"3ef2cb868c7b3115dd977cc49010d1b99e81f41c6b8160805f21c626acf725ee\" returns successfully"
Feb 8 23:56:41.097927 kubelet[2425]: I0208 23:56:41.097889 2425 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-787d4945fb-zsw4p" podStartSLOduration=29.097846959 pod.CreationTimestamp="2024-02-08 23:56:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-08 23:56:41.097151853 +0000 UTC m=+42.318805704" watchObservedRunningTime="2024-02-08 23:56:41.097846959 +0000 UTC m=+42.319500710"
Feb 8 23:56:41.138485 kubelet[2425]: I0208 23:56:41.138436 2425 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-787d4945fb-sfg4h" podStartSLOduration=29.138388406 pod.CreationTimestamp="2024-02-08 23:56:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-08 23:56:41.136792593 +0000 UTC m=+42.358446344" watchObservedRunningTime="2024-02-08 23:56:41.138388406 +0000 UTC m=+42.360042157"
Feb 8 23:58:37.837414 systemd[1]: Started sshd@5-10.200.8.17:22-10.200.12.6:52522.service.
Feb 8 23:58:38.450882 sshd[3818]: Accepted publickey for core from 10.200.12.6 port 52522 ssh2: RSA SHA256:bgxHmJM37JVrLJuGSWjL4vRG7UYDV2sE2SVK2HyWFow
Feb 8 23:58:38.452837 sshd[3818]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 8 23:58:38.458178 systemd-logind[1304]: New session 8 of user core.
Feb 8 23:58:38.458848 systemd[1]: Started session-8.scope.
Feb 8 23:58:39.096475 sshd[3818]: pam_unix(sshd:session): session closed for user core
Feb 8 23:58:39.099956 systemd[1]: sshd@5-10.200.8.17:22-10.200.12.6:52522.service: Deactivated successfully.
Feb 8 23:58:39.101238 systemd[1]: session-8.scope: Deactivated successfully.
Feb 8 23:58:39.102209 systemd-logind[1304]: Session 8 logged out. Waiting for processes to exit.
Feb 8 23:58:39.103186 systemd-logind[1304]: Removed session 8.
Feb 8 23:58:44.201444 systemd[1]: Started sshd@6-10.200.8.17:22-10.200.12.6:52530.service.
Feb 8 23:58:44.814789 sshd[3855]: Accepted publickey for core from 10.200.12.6 port 52530 ssh2: RSA SHA256:bgxHmJM37JVrLJuGSWjL4vRG7UYDV2sE2SVK2HyWFow
Feb 8 23:58:44.816618 sshd[3855]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 8 23:58:44.821728 systemd[1]: Started session-9.scope.
Feb 8 23:58:44.822338 systemd-logind[1304]: New session 9 of user core.
Feb 8 23:58:45.308838 sshd[3855]: pam_unix(sshd:session): session closed for user core
Feb 8 23:58:45.311722 systemd[1]: sshd@6-10.200.8.17:22-10.200.12.6:52530.service: Deactivated successfully.
Feb 8 23:58:45.312759 systemd[1]: session-9.scope: Deactivated successfully.
Feb 8 23:58:45.313421 systemd-logind[1304]: Session 9 logged out. Waiting for processes to exit.
Feb 8 23:58:45.314244 systemd-logind[1304]: Removed session 9.
Feb 8 23:58:50.416877 systemd[1]: Started sshd@7-10.200.8.17:22-10.200.12.6:50048.service.
Feb 8 23:58:51.038946 sshd[3868]: Accepted publickey for core from 10.200.12.6 port 50048 ssh2: RSA SHA256:bgxHmJM37JVrLJuGSWjL4vRG7UYDV2sE2SVK2HyWFow
Feb 8 23:58:51.040707 sshd[3868]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 8 23:58:51.046474 systemd-logind[1304]: New session 10 of user core.
Feb 8 23:58:51.047104 systemd[1]: Started session-10.scope.
Feb 8 23:58:51.533393 sshd[3868]: pam_unix(sshd:session): session closed for user core
Feb 8 23:58:51.536893 systemd[1]: sshd@7-10.200.8.17:22-10.200.12.6:50048.service: Deactivated successfully.
Feb 8 23:58:51.538086 systemd[1]: session-10.scope: Deactivated successfully.
Feb 8 23:58:51.539045 systemd-logind[1304]: Session 10 logged out. Waiting for processes to exit.
Feb 8 23:58:51.540082 systemd-logind[1304]: Removed session 10.
Feb 8 23:58:56.640433 systemd[1]: Started sshd@8-10.200.8.17:22-10.200.12.6:50054.service.
Feb 8 23:58:57.262784 sshd[3880]: Accepted publickey for core from 10.200.12.6 port 50054 ssh2: RSA SHA256:bgxHmJM37JVrLJuGSWjL4vRG7UYDV2sE2SVK2HyWFow
Feb 8 23:58:57.264479 sshd[3880]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 8 23:58:57.270518 systemd[1]: Started session-11.scope.
Feb 8 23:58:57.271281 systemd-logind[1304]: New session 11 of user core.
Feb 8 23:58:57.766385 sshd[3880]: pam_unix(sshd:session): session closed for user core
Feb 8 23:58:57.769985 systemd[1]: sshd@8-10.200.8.17:22-10.200.12.6:50054.service: Deactivated successfully.
Feb 8 23:58:57.771187 systemd[1]: session-11.scope: Deactivated successfully.
Feb 8 23:58:57.772091 systemd-logind[1304]: Session 11 logged out. Waiting for processes to exit.
Feb 8 23:58:57.772924 systemd-logind[1304]: Removed session 11.
Feb 8 23:59:02.872163 systemd[1]: Started sshd@9-10.200.8.17:22-10.200.12.6:56150.service.
Feb 8 23:59:03.490309 sshd[3895]: Accepted publickey for core from 10.200.12.6 port 56150 ssh2: RSA SHA256:bgxHmJM37JVrLJuGSWjL4vRG7UYDV2sE2SVK2HyWFow
Feb 8 23:59:03.491747 sshd[3895]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 8 23:59:03.495562 systemd-logind[1304]: New session 12 of user core.
Feb 8 23:59:03.497295 systemd[1]: Started session-12.scope.
Feb 8 23:59:03.995657 sshd[3895]: pam_unix(sshd:session): session closed for user core
Feb 8 23:59:03.998575 systemd[1]: sshd@9-10.200.8.17:22-10.200.12.6:56150.service: Deactivated successfully.
Feb 8 23:59:03.999627 systemd[1]: session-12.scope: Deactivated successfully.
Feb 8 23:59:04.000326 systemd-logind[1304]: Session 12 logged out. Waiting for processes to exit.
Feb 8 23:59:04.001177 systemd-logind[1304]: Removed session 12.
Feb 8 23:59:04.099545 systemd[1]: Started sshd@10-10.200.8.17:22-10.200.12.6:56164.service.
Feb 8 23:59:04.712096 sshd[3908]: Accepted publickey for core from 10.200.12.6 port 56164 ssh2: RSA SHA256:bgxHmJM37JVrLJuGSWjL4vRG7UYDV2sE2SVK2HyWFow
Feb 8 23:59:04.713683 sshd[3908]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 8 23:59:04.718757 systemd-logind[1304]: New session 13 of user core.
Feb 8 23:59:04.719278 systemd[1]: Started session-13.scope.
Feb 8 23:59:05.989118 sshd[3908]: pam_unix(sshd:session): session closed for user core
Feb 8 23:59:05.992793 systemd[1]: sshd@10-10.200.8.17:22-10.200.12.6:56164.service: Deactivated successfully.
Feb 8 23:59:05.993812 systemd[1]: session-13.scope: Deactivated successfully.
Feb 8 23:59:05.994573 systemd-logind[1304]: Session 13 logged out. Waiting for processes to exit.
Feb 8 23:59:05.995422 systemd-logind[1304]: Removed session 13.
Feb 8 23:59:06.094304 systemd[1]: Started sshd@11-10.200.8.17:22-10.200.12.6:56180.service.
Feb 8 23:59:06.711198 sshd[3919]: Accepted publickey for core from 10.200.12.6 port 56180 ssh2: RSA SHA256:bgxHmJM37JVrLJuGSWjL4vRG7UYDV2sE2SVK2HyWFow
Feb 8 23:59:06.713133 sshd[3919]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 8 23:59:06.719167 systemd[1]: Started session-14.scope.
Feb 8 23:59:06.719992 systemd-logind[1304]: New session 14 of user core.
Feb 8 23:59:07.207354 sshd[3919]: pam_unix(sshd:session): session closed for user core
Feb 8 23:59:07.210690 systemd[1]: sshd@11-10.200.8.17:22-10.200.12.6:56180.service: Deactivated successfully.
Feb 8 23:59:07.212036 systemd[1]: session-14.scope: Deactivated successfully.
Feb 8 23:59:07.212078 systemd-logind[1304]: Session 14 logged out. Waiting for processes to exit.
Feb 8 23:59:07.213197 systemd-logind[1304]: Removed session 14.
Feb 8 23:59:12.319060 systemd[1]: Started sshd@12-10.200.8.17:22-10.200.12.6:44624.service.
Feb 8 23:59:12.940002 sshd[3932]: Accepted publickey for core from 10.200.12.6 port 44624 ssh2: RSA SHA256:bgxHmJM37JVrLJuGSWjL4vRG7UYDV2sE2SVK2HyWFow
Feb 8 23:59:12.942048 sshd[3932]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 8 23:59:12.948248 systemd[1]: Started session-15.scope.
Feb 8 23:59:12.948728 systemd-logind[1304]: New session 15 of user core.
Feb 8 23:59:13.434371 sshd[3932]: pam_unix(sshd:session): session closed for user core
Feb 8 23:59:13.437673 systemd-logind[1304]: Session 15 logged out. Waiting for processes to exit.
Feb 8 23:59:13.437900 systemd[1]: sshd@12-10.200.8.17:22-10.200.12.6:44624.service: Deactivated successfully.
Feb 8 23:59:13.438976 systemd[1]: session-15.scope: Deactivated successfully.
Feb 8 23:59:13.439935 systemd-logind[1304]: Removed session 15.
Feb 8 23:59:18.542223 systemd[1]: Started sshd@13-10.200.8.17:22-10.200.12.6:43122.service.
Feb 8 23:59:19.159267 sshd[3946]: Accepted publickey for core from 10.200.12.6 port 43122 ssh2: RSA SHA256:bgxHmJM37JVrLJuGSWjL4vRG7UYDV2sE2SVK2HyWFow
Feb 8 23:59:19.160796 sshd[3946]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 8 23:59:19.165319 systemd-logind[1304]: New session 16 of user core.
Feb 8 23:59:19.166033 systemd[1]: Started session-16.scope.
Feb 8 23:59:19.662800 sshd[3946]: pam_unix(sshd:session): session closed for user core
Feb 8 23:59:19.666318 systemd[1]: sshd@13-10.200.8.17:22-10.200.12.6:43122.service: Deactivated successfully.
Feb 8 23:59:19.667528 systemd[1]: session-16.scope: Deactivated successfully.
Feb 8 23:59:19.668387 systemd-logind[1304]: Session 16 logged out. Waiting for processes to exit.
Feb 8 23:59:19.669359 systemd-logind[1304]: Removed session 16.
Feb 8 23:59:19.768930 systemd[1]: Started sshd@14-10.200.8.17:22-10.200.12.6:43138.service.
Feb 8 23:59:20.385856 sshd[3958]: Accepted publickey for core from 10.200.12.6 port 43138 ssh2: RSA SHA256:bgxHmJM37JVrLJuGSWjL4vRG7UYDV2sE2SVK2HyWFow
Feb 8 23:59:20.387278 sshd[3958]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 8 23:59:20.392563 systemd[1]: Started session-17.scope.
Feb 8 23:59:20.393015 systemd-logind[1304]: New session 17 of user core.
Feb 8 23:59:21.106875 sshd[3958]: pam_unix(sshd:session): session closed for user core
Feb 8 23:59:21.110491 systemd[1]: sshd@14-10.200.8.17:22-10.200.12.6:43138.service: Deactivated successfully.
Feb 8 23:59:21.111710 systemd[1]: session-17.scope: Deactivated successfully.
Feb 8 23:59:21.112386 systemd-logind[1304]: Session 17 logged out. Waiting for processes to exit.
Feb 8 23:59:21.113218 systemd-logind[1304]: Removed session 17.
Feb 8 23:59:21.214966 systemd[1]: Started sshd@15-10.200.8.17:22-10.200.12.6:43152.service.
Feb 8 23:59:21.837488 sshd[3968]: Accepted publickey for core from 10.200.12.6 port 43152 ssh2: RSA SHA256:bgxHmJM37JVrLJuGSWjL4vRG7UYDV2sE2SVK2HyWFow
Feb 8 23:59:21.838973 sshd[3968]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 8 23:59:21.843665 systemd-logind[1304]: New session 18 of user core.
Feb 8 23:59:21.844723 systemd[1]: Started session-18.scope.
Feb 8 23:59:23.473018 sshd[3968]: pam_unix(sshd:session): session closed for user core
Feb 8 23:59:23.476234 systemd[1]: sshd@15-10.200.8.17:22-10.200.12.6:43152.service: Deactivated successfully.
Feb 8 23:59:23.477186 systemd[1]: session-18.scope: Deactivated successfully.
Feb 8 23:59:23.477943 systemd-logind[1304]: Session 18 logged out. Waiting for processes to exit.
Feb 8 23:59:23.478819 systemd-logind[1304]: Removed session 18.
Feb 8 23:59:23.578371 systemd[1]: Started sshd@16-10.200.8.17:22-10.200.12.6:43164.service.
Feb 8 23:59:24.196197 sshd[4033]: Accepted publickey for core from 10.200.12.6 port 43164 ssh2: RSA SHA256:bgxHmJM37JVrLJuGSWjL4vRG7UYDV2sE2SVK2HyWFow
Feb 8 23:59:24.197662 sshd[4033]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 8 23:59:24.206688 systemd[1]: Started session-19.scope.
Feb 8 23:59:24.208033 systemd-logind[1304]: New session 19 of user core.
Feb 8 23:59:24.813199 sshd[4033]: pam_unix(sshd:session): session closed for user core
Feb 8 23:59:24.816719 systemd[1]: sshd@16-10.200.8.17:22-10.200.12.6:43164.service: Deactivated successfully.
Feb 8 23:59:24.817963 systemd[1]: session-19.scope: Deactivated successfully.
Feb 8 23:59:24.819003 systemd-logind[1304]: Session 19 logged out. Waiting for processes to exit.
Feb 8 23:59:24.820073 systemd-logind[1304]: Removed session 19.
Feb 8 23:59:24.923217 systemd[1]: Started sshd@17-10.200.8.17:22-10.200.12.6:43166.service.
Feb 8 23:59:25.534827 sshd[4046]: Accepted publickey for core from 10.200.12.6 port 43166 ssh2: RSA SHA256:bgxHmJM37JVrLJuGSWjL4vRG7UYDV2sE2SVK2HyWFow
Feb 8 23:59:25.538362 sshd[4046]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 8 23:59:25.543532 systemd-logind[1304]: New session 20 of user core.
Feb 8 23:59:25.544316 systemd[1]: Started session-20.scope.
Feb 8 23:59:26.022233 sshd[4046]: pam_unix(sshd:session): session closed for user core
Feb 8 23:59:26.025754 systemd[1]: sshd@17-10.200.8.17:22-10.200.12.6:43166.service: Deactivated successfully.
Feb 8 23:59:26.026899 systemd[1]: session-20.scope: Deactivated successfully.
Feb 8 23:59:26.027807 systemd-logind[1304]: Session 20 logged out. Waiting for processes to exit.
Feb 8 23:59:26.028818 systemd-logind[1304]: Removed session 20.
Feb 8 23:59:31.127873 systemd[1]: Started sshd@18-10.200.8.17:22-10.200.12.6:34588.service.
Feb 8 23:59:31.749422 sshd[4085]: Accepted publickey for core from 10.200.12.6 port 34588 ssh2: RSA SHA256:bgxHmJM37JVrLJuGSWjL4vRG7UYDV2sE2SVK2HyWFow
Feb 8 23:59:31.751246 sshd[4085]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 8 23:59:31.757437 systemd[1]: Started session-21.scope.
Feb 8 23:59:31.758235 systemd-logind[1304]: New session 21 of user core.
Feb 8 23:59:32.237816 sshd[4085]: pam_unix(sshd:session): session closed for user core
Feb 8 23:59:32.241729 systemd[1]: sshd@18-10.200.8.17:22-10.200.12.6:34588.service: Deactivated successfully.
Feb 8 23:59:32.242717 systemd[1]: session-21.scope: Deactivated successfully.
Feb 8 23:59:32.243498 systemd-logind[1304]: Session 21 logged out. Waiting for processes to exit.
Feb 8 23:59:32.244290 systemd-logind[1304]: Removed session 21.
Feb 8 23:59:37.342976 systemd[1]: Started sshd@19-10.200.8.17:22-10.200.12.6:46076.service.
Feb 8 23:59:37.966058 sshd[4096]: Accepted publickey for core from 10.200.12.6 port 46076 ssh2: RSA SHA256:bgxHmJM37JVrLJuGSWjL4vRG7UYDV2sE2SVK2HyWFow
Feb 8 23:59:37.967674 sshd[4096]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 8 23:59:37.972554 systemd[1]: Started session-22.scope.
Feb 8 23:59:37.973075 systemd-logind[1304]: New session 22 of user core.
Feb 8 23:59:38.467748 sshd[4096]: pam_unix(sshd:session): session closed for user core
Feb 8 23:59:38.470922 systemd[1]: sshd@19-10.200.8.17:22-10.200.12.6:46076.service: Deactivated successfully.
Feb 8 23:59:38.471943 systemd[1]: session-22.scope: Deactivated successfully.
Feb 8 23:59:38.472707 systemd-logind[1304]: Session 22 logged out. Waiting for processes to exit.
Feb 8 23:59:38.473516 systemd-logind[1304]: Removed session 22.
Feb 8 23:59:43.576279 systemd[1]: Started sshd@20-10.200.8.17:22-10.200.12.6:46080.service.
Feb 8 23:59:44.197366 sshd[4111]: Accepted publickey for core from 10.200.12.6 port 46080 ssh2: RSA SHA256:bgxHmJM37JVrLJuGSWjL4vRG7UYDV2sE2SVK2HyWFow
Feb 8 23:59:44.199098 sshd[4111]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 8 23:59:44.205123 systemd[1]: Started session-23.scope.
Feb 8 23:59:44.206717 systemd-logind[1304]: New session 23 of user core.
Feb 8 23:59:44.691427 sshd[4111]: pam_unix(sshd:session): session closed for user core
Feb 8 23:59:44.694742 systemd[1]: sshd@20-10.200.8.17:22-10.200.12.6:46080.service: Deactivated successfully.
Feb 8 23:59:44.695509 systemd-logind[1304]: Session 23 logged out. Waiting for processes to exit.
Feb 8 23:59:44.695782 systemd[1]: session-23.scope: Deactivated successfully.
Feb 8 23:59:44.696716 systemd-logind[1304]: Removed session 23.
Feb 8 23:59:44.795603 systemd[1]: Started sshd@21-10.200.8.17:22-10.200.12.6:46084.service.
Feb 8 23:59:45.414164 sshd[4123]: Accepted publickey for core from 10.200.12.6 port 46084 ssh2: RSA SHA256:bgxHmJM37JVrLJuGSWjL4vRG7UYDV2sE2SVK2HyWFow
Feb 8 23:59:45.415969 sshd[4123]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 8 23:59:45.421539 systemd-logind[1304]: New session 24 of user core.
Feb 8 23:59:45.422172 systemd[1]: Started session-24.scope.
Feb 8 23:59:47.058478 env[1316]: time="2024-02-08T23:59:47.058412150Z" level=info msg="StopContainer for \"d876abaeea8d678e3ee897e0b4e7dc317c0362d39ee60d082abc734e359e53f1\" with timeout 30 (s)"
Feb 8 23:59:47.061777 env[1316]: time="2024-02-08T23:59:47.059505662Z" level=info msg="Stop container \"d876abaeea8d678e3ee897e0b4e7dc317c0362d39ee60d082abc734e359e53f1\" with signal terminated"
Feb 8 23:59:47.070172 systemd[1]: run-containerd-runc-k8s.io-e9a46a0a42b4ea78cb1497a8d76d15212940b8628b6b8bc50f5f31d5edf278ec-runc.TfOn4G.mount: Deactivated successfully.
Feb 8 23:59:47.090616 systemd[1]: cri-containerd-d876abaeea8d678e3ee897e0b4e7dc317c0362d39ee60d082abc734e359e53f1.scope: Deactivated successfully.
Feb 8 23:59:47.099437 env[1316]: time="2024-02-08T23:59:47.099357488Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 8 23:59:47.107986 env[1316]: time="2024-02-08T23:59:47.107931180Z" level=info msg="StopContainer for \"e9a46a0a42b4ea78cb1497a8d76d15212940b8628b6b8bc50f5f31d5edf278ec\" with timeout 1 (s)" Feb 8 23:59:47.108274 env[1316]: time="2024-02-08T23:59:47.108246483Z" level=info msg="Stop container \"e9a46a0a42b4ea78cb1497a8d76d15212940b8628b6b8bc50f5f31d5edf278ec\" with signal terminated" Feb 8 23:59:47.118060 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d876abaeea8d678e3ee897e0b4e7dc317c0362d39ee60d082abc734e359e53f1-rootfs.mount: Deactivated successfully. Feb 8 23:59:47.123757 systemd-networkd[1462]: lxc_health: Link DOWN Feb 8 23:59:47.123764 systemd-networkd[1462]: lxc_health: Lost carrier Feb 8 23:59:47.139117 env[1316]: time="2024-02-08T23:59:47.139080114Z" level=info msg="shim disconnected" id=d876abaeea8d678e3ee897e0b4e7dc317c0362d39ee60d082abc734e359e53f1 Feb 8 23:59:47.139288 env[1316]: time="2024-02-08T23:59:47.139273416Z" level=warning msg="cleaning up after shim disconnected" id=d876abaeea8d678e3ee897e0b4e7dc317c0362d39ee60d082abc734e359e53f1 namespace=k8s.io Feb 8 23:59:47.139359 env[1316]: time="2024-02-08T23:59:47.139349616Z" level=info msg="cleaning up dead shim" Feb 8 23:59:47.152906 env[1316]: time="2024-02-08T23:59:47.152877161Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:59:47Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4178 runtime=io.containerd.runc.v2\n" Feb 8 23:59:47.154833 systemd[1]: cri-containerd-e9a46a0a42b4ea78cb1497a8d76d15212940b8628b6b8bc50f5f31d5edf278ec.scope: Deactivated successfully. Feb 8 23:59:47.155060 systemd[1]: cri-containerd-e9a46a0a42b4ea78cb1497a8d76d15212940b8628b6b8bc50f5f31d5edf278ec.scope: Consumed 7.342s CPU time. Feb 8 23:59:47.160718 env[1316]: time="2024-02-08T23:59:47.160687445Z" level=info msg="StopContainer for \"d876abaeea8d678e3ee897e0b4e7dc317c0362d39ee60d082abc734e359e53f1\" returns successfully" Feb 8 23:59:47.162819 env[1316]: time="2024-02-08T23:59:47.162791067Z" level=info msg="StopPodSandbox for \"09e7b84f364388f6697d6b09828c960c9a62f144608e7827ce7adf1fe76acbf6\"" Feb 8 23:59:47.163103 env[1316]: time="2024-02-08T23:59:47.163076071Z" level=info msg="Container to stop \"d876abaeea8d678e3ee897e0b4e7dc317c0362d39ee60d082abc734e359e53f1\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 8 23:59:47.165463 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-09e7b84f364388f6697d6b09828c960c9a62f144608e7827ce7adf1fe76acbf6-shm.mount: Deactivated successfully. Feb 8 23:59:47.176809 systemd[1]: cri-containerd-09e7b84f364388f6697d6b09828c960c9a62f144608e7827ce7adf1fe76acbf6.scope: Deactivated successfully. Feb 8 23:59:47.183317 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e9a46a0a42b4ea78cb1497a8d76d15212940b8628b6b8bc50f5f31d5edf278ec-rootfs.mount: Deactivated successfully. 
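The "failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" entry shows that containerd watches /etc/cni/net.d and reloads network config on any change; with the Cilium agent being torn down, removing its conf file leaves the directory empty, hence "no network config found ... cni plugin not initialized" (and, further below, the node's "Container runtime network not ready" condition). A rough sketch of such a directory watch using the fsnotify library — containerd's real reload path is more involved, this only illustrates the mechanism:

```go
// cni_watch.go — watch /etc/cni/net.d and react to REMOVE events, the kind
// of event that triggered the reload failure in the log above.
package main

import (
	"log"

	"github.com/fsnotify/fsnotify"
)

func main() {
	w, err := fsnotify.NewWatcher()
	if err != nil {
		log.Fatal(err)
	}
	defer w.Close()

	if err := w.Add("/etc/cni/net.d"); err != nil {
		log.Fatal(err)
	}
	for {
		select {
		case ev := <-w.Events:
			// Removing the last conf file leaves no network config, which
			// is exactly what the "cni plugin not initialized" error reports.
			if ev.Op&fsnotify.Remove != 0 {
				log.Printf("config %s removed; reload will fail until a new conf appears", ev.Name)
			}
		case err := <-w.Errors:
			log.Println("watch error:", err)
		}
	}
}
```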
Feb 8 23:59:47.196030 env[1316]: time="2024-02-08T23:59:47.195356516Z" level=info msg="shim disconnected" id=e9a46a0a42b4ea78cb1497a8d76d15212940b8628b6b8bc50f5f31d5edf278ec Feb 8 23:59:47.196030 env[1316]: time="2024-02-08T23:59:47.195418717Z" level=warning msg="cleaning up after shim disconnected" id=e9a46a0a42b4ea78cb1497a8d76d15212940b8628b6b8bc50f5f31d5edf278ec namespace=k8s.io Feb 8 23:59:47.196030 env[1316]: time="2024-02-08T23:59:47.195430817Z" level=info msg="cleaning up dead shim" Feb 8 23:59:47.210554 env[1316]: time="2024-02-08T23:59:47.210514578Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:59:47Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4224 runtime=io.containerd.runc.v2\n" Feb 8 23:59:47.216845 env[1316]: time="2024-02-08T23:59:47.216811846Z" level=info msg="StopContainer for \"e9a46a0a42b4ea78cb1497a8d76d15212940b8628b6b8bc50f5f31d5edf278ec\" returns successfully" Feb 8 23:59:47.217788 env[1316]: time="2024-02-08T23:59:47.217645955Z" level=info msg="StopPodSandbox for \"9bdcd40dc5ea01871317c0648e9570c712cb08f03509ef5c68e960f36785e1d2\"" Feb 8 23:59:47.217788 env[1316]: time="2024-02-08T23:59:47.217718856Z" level=info msg="Container to stop \"ef9d10cce05cdd7ae4b06a6d466aef5699dca117b48a6a43ed7801d4fa440543\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 8 23:59:47.217788 env[1316]: time="2024-02-08T23:59:47.217740156Z" level=info msg="Container to stop \"16b106232504c2023dcbf34a525741f5044595eee3c92c85b30c0ac0d7553caf\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 8 23:59:47.217788 env[1316]: time="2024-02-08T23:59:47.217757156Z" level=info msg="Container to stop \"f23632fed9014d251adee04ed9614ab2b07b6dd271206cf3e9c1ebfb81d7b24f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 8 23:59:47.217788 env[1316]: time="2024-02-08T23:59:47.217772156Z" level=info msg="Container to stop \"48f5b77aa0db5bcad2f5ea601d18ee7318feded9bb80f5a3ed848b4cd524a6b0\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 8 23:59:47.218024 env[1316]: time="2024-02-08T23:59:47.217787756Z" level=info msg="Container to stop \"e9a46a0a42b4ea78cb1497a8d76d15212940b8628b6b8bc50f5f31d5edf278ec\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 8 23:59:47.218024 env[1316]: time="2024-02-08T23:59:47.218001359Z" level=info msg="shim disconnected" id=09e7b84f364388f6697d6b09828c960c9a62f144608e7827ce7adf1fe76acbf6 Feb 8 23:59:47.218109 env[1316]: time="2024-02-08T23:59:47.218037159Z" level=warning msg="cleaning up after shim disconnected" id=09e7b84f364388f6697d6b09828c960c9a62f144608e7827ce7adf1fe76acbf6 namespace=k8s.io Feb 8 23:59:47.218109 env[1316]: time="2024-02-08T23:59:47.218049359Z" level=info msg="cleaning up dead shim" Feb 8 23:59:47.227624 systemd[1]: cri-containerd-9bdcd40dc5ea01871317c0648e9570c712cb08f03509ef5c68e960f36785e1d2.scope: Deactivated successfully. 
Feb 8 23:59:47.228700 env[1316]: time="2024-02-08T23:59:47.227623862Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:59:47Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4237 runtime=io.containerd.runc.v2\n" Feb 8 23:59:47.228700 env[1316]: time="2024-02-08T23:59:47.227924265Z" level=info msg="TearDown network for sandbox \"09e7b84f364388f6697d6b09828c960c9a62f144608e7827ce7adf1fe76acbf6\" successfully" Feb 8 23:59:47.228700 env[1316]: time="2024-02-08T23:59:47.227949365Z" level=info msg="StopPodSandbox for \"09e7b84f364388f6697d6b09828c960c9a62f144608e7827ce7adf1fe76acbf6\" returns successfully" Feb 8 23:59:47.260732 env[1316]: time="2024-02-08T23:59:47.260685116Z" level=info msg="shim disconnected" id=9bdcd40dc5ea01871317c0648e9570c712cb08f03509ef5c68e960f36785e1d2 Feb 8 23:59:47.260979 env[1316]: time="2024-02-08T23:59:47.260948218Z" level=warning msg="cleaning up after shim disconnected" id=9bdcd40dc5ea01871317c0648e9570c712cb08f03509ef5c68e960f36785e1d2 namespace=k8s.io Feb 8 23:59:47.260979 env[1316]: time="2024-02-08T23:59:47.260976519Z" level=info msg="cleaning up dead shim" Feb 8 23:59:47.269198 env[1316]: time="2024-02-08T23:59:47.269165806Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:59:47Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4269 runtime=io.containerd.runc.v2\n" Feb 8 23:59:47.269475 env[1316]: time="2024-02-08T23:59:47.269426909Z" level=info msg="TearDown network for sandbox \"9bdcd40dc5ea01871317c0648e9570c712cb08f03509ef5c68e960f36785e1d2\" successfully" Feb 8 23:59:47.269564 env[1316]: time="2024-02-08T23:59:47.269491110Z" level=info msg="StopPodSandbox for \"9bdcd40dc5ea01871317c0648e9570c712cb08f03509ef5c68e960f36785e1d2\" returns successfully" Feb 8 23:59:47.326800 kubelet[2425]: I0208 23:59:47.324765 2425 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hc64t\" (UniqueName: \"kubernetes.io/projected/0bb15dd4-9cda-47d8-a81f-0eed42b04021-kube-api-access-hc64t\") pod \"0bb15dd4-9cda-47d8-a81f-0eed42b04021\" (UID: \"0bb15dd4-9cda-47d8-a81f-0eed42b04021\") " Feb 8 23:59:47.326800 kubelet[2425]: I0208 23:59:47.324875 2425 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0bb15dd4-9cda-47d8-a81f-0eed42b04021-cilium-config-path\") pod \"0bb15dd4-9cda-47d8-a81f-0eed42b04021\" (UID: \"0bb15dd4-9cda-47d8-a81f-0eed42b04021\") " Feb 8 23:59:47.326800 kubelet[2425]: W0208 23:59:47.325367 2425 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/0bb15dd4-9cda-47d8-a81f-0eed42b04021/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Feb 8 23:59:47.328955 kubelet[2425]: I0208 23:59:47.328922 2425 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0bb15dd4-9cda-47d8-a81f-0eed42b04021-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "0bb15dd4-9cda-47d8-a81f-0eed42b04021" (UID: "0bb15dd4-9cda-47d8-a81f-0eed42b04021"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 8 23:59:47.330836 kubelet[2425]: I0208 23:59:47.330800 2425 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0bb15dd4-9cda-47d8-a81f-0eed42b04021-kube-api-access-hc64t" (OuterVolumeSpecName: "kube-api-access-hc64t") pod "0bb15dd4-9cda-47d8-a81f-0eed42b04021" (UID: "0bb15dd4-9cda-47d8-a81f-0eed42b04021"). InnerVolumeSpecName "kube-api-access-hc64t". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 8 23:59:47.425252 kubelet[2425]: I0208 23:59:47.425203 2425 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9fdc9f98-e24b-4dd4-8931-d49b429f16cf-clustermesh-secrets\") pod \"9fdc9f98-e24b-4dd4-8931-d49b429f16cf\" (UID: \"9fdc9f98-e24b-4dd4-8931-d49b429f16cf\") " Feb 8 23:59:47.425617 kubelet[2425]: I0208 23:59:47.425594 2425 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9fdc9f98-e24b-4dd4-8931-d49b429f16cf-lib-modules\") pod \"9fdc9f98-e24b-4dd4-8931-d49b429f16cf\" (UID: \"9fdc9f98-e24b-4dd4-8931-d49b429f16cf\") " Feb 8 23:59:47.425782 kubelet[2425]: I0208 23:59:47.425765 2425 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9fdc9f98-e24b-4dd4-8931-d49b429f16cf-cilium-run\") pod \"9fdc9f98-e24b-4dd4-8931-d49b429f16cf\" (UID: \"9fdc9f98-e24b-4dd4-8931-d49b429f16cf\") " Feb 8 23:59:47.425925 kubelet[2425]: I0208 23:59:47.425912 2425 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9fdc9f98-e24b-4dd4-8931-d49b429f16cf-hostproc\") pod \"9fdc9f98-e24b-4dd4-8931-d49b429f16cf\" (UID: \"9fdc9f98-e24b-4dd4-8931-d49b429f16cf\") " Feb 8 23:59:47.426044 kubelet[2425]: I0208 23:59:47.426031 2425 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9fdc9f98-e24b-4dd4-8931-d49b429f16cf-xtables-lock\") pod \"9fdc9f98-e24b-4dd4-8931-d49b429f16cf\" (UID: \"9fdc9f98-e24b-4dd4-8931-d49b429f16cf\") " Feb 8 23:59:47.426133 kubelet[2425]: I0208 23:59:47.426104 2425 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9fdc9f98-e24b-4dd4-8931-d49b429f16cf-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "9fdc9f98-e24b-4dd4-8931-d49b429f16cf" (UID: "9fdc9f98-e24b-4dd4-8931-d49b429f16cf"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:59:47.426210 kubelet[2425]: I0208 23:59:47.426187 2425 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9fdc9f98-e24b-4dd4-8931-d49b429f16cf-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "9fdc9f98-e24b-4dd4-8931-d49b429f16cf" (UID: "9fdc9f98-e24b-4dd4-8931-d49b429f16cf"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:59:47.426274 kubelet[2425]: I0208 23:59:47.426227 2425 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9fdc9f98-e24b-4dd4-8931-d49b429f16cf-hostproc" (OuterVolumeSpecName: "hostproc") pod "9fdc9f98-e24b-4dd4-8931-d49b429f16cf" (UID: "9fdc9f98-e24b-4dd4-8931-d49b429f16cf"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:59:47.426355 kubelet[2425]: I0208 23:59:47.426335 2425 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9fdc9f98-e24b-4dd4-8931-d49b429f16cf-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "9fdc9f98-e24b-4dd4-8931-d49b429f16cf" (UID: "9fdc9f98-e24b-4dd4-8931-d49b429f16cf"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:59:47.426481 kubelet[2425]: I0208 23:59:47.426465 2425 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k52tm\" (UniqueName: \"kubernetes.io/projected/9fdc9f98-e24b-4dd4-8931-d49b429f16cf-kube-api-access-k52tm\") pod \"9fdc9f98-e24b-4dd4-8931-d49b429f16cf\" (UID: \"9fdc9f98-e24b-4dd4-8931-d49b429f16cf\") " Feb 8 23:59:47.426615 kubelet[2425]: I0208 23:59:47.426601 2425 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9fdc9f98-e24b-4dd4-8931-d49b429f16cf-host-proc-sys-kernel\") pod \"9fdc9f98-e24b-4dd4-8931-d49b429f16cf\" (UID: \"9fdc9f98-e24b-4dd4-8931-d49b429f16cf\") " Feb 8 23:59:47.426739 kubelet[2425]: I0208 23:59:47.426725 2425 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9fdc9f98-e24b-4dd4-8931-d49b429f16cf-cilium-cgroup\") pod \"9fdc9f98-e24b-4dd4-8931-d49b429f16cf\" (UID: \"9fdc9f98-e24b-4dd4-8931-d49b429f16cf\") " Feb 8 23:59:47.426861 kubelet[2425]: I0208 23:59:47.426849 2425 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9fdc9f98-e24b-4dd4-8931-d49b429f16cf-hubble-tls\") pod \"9fdc9f98-e24b-4dd4-8931-d49b429f16cf\" (UID: \"9fdc9f98-e24b-4dd4-8931-d49b429f16cf\") " Feb 8 23:59:47.426989 kubelet[2425]: I0208 23:59:47.426975 2425 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9fdc9f98-e24b-4dd4-8931-d49b429f16cf-etc-cni-netd\") pod \"9fdc9f98-e24b-4dd4-8931-d49b429f16cf\" (UID: \"9fdc9f98-e24b-4dd4-8931-d49b429f16cf\") " Feb 8 23:59:47.427117 kubelet[2425]: I0208 23:59:47.427103 2425 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9fdc9f98-e24b-4dd4-8931-d49b429f16cf-host-proc-sys-net\") pod \"9fdc9f98-e24b-4dd4-8931-d49b429f16cf\" (UID: \"9fdc9f98-e24b-4dd4-8931-d49b429f16cf\") " Feb 8 23:59:47.427243 kubelet[2425]: I0208 23:59:47.427229 2425 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9fdc9f98-e24b-4dd4-8931-d49b429f16cf-bpf-maps\") pod \"9fdc9f98-e24b-4dd4-8931-d49b429f16cf\" (UID: \"9fdc9f98-e24b-4dd4-8931-d49b429f16cf\") " Feb 8 23:59:47.427437 kubelet[2425]: I0208 23:59:47.427419 2425 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9fdc9f98-e24b-4dd4-8931-d49b429f16cf-cilium-config-path\") pod \"9fdc9f98-e24b-4dd4-8931-d49b429f16cf\" (UID: \"9fdc9f98-e24b-4dd4-8931-d49b429f16cf\") " Feb 8 23:59:47.428186 kubelet[2425]: I0208 23:59:47.428164 2425 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9fdc9f98-e24b-4dd4-8931-d49b429f16cf-cni-path\") pod 
\"9fdc9f98-e24b-4dd4-8931-d49b429f16cf\" (UID: \"9fdc9f98-e24b-4dd4-8931-d49b429f16cf\") " Feb 8 23:59:47.431587 kubelet[2425]: I0208 23:59:47.431567 2425 reconciler_common.go:295] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9fdc9f98-e24b-4dd4-8931-d49b429f16cf-lib-modules\") on node \"ci-3510.3.2-a-b1d3c6d57d\" DevicePath \"\"" Feb 8 23:59:47.431758 kubelet[2425]: I0208 23:59:47.431743 2425 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-hc64t\" (UniqueName: \"kubernetes.io/projected/0bb15dd4-9cda-47d8-a81f-0eed42b04021-kube-api-access-hc64t\") on node \"ci-3510.3.2-a-b1d3c6d57d\" DevicePath \"\"" Feb 8 23:59:47.431875 kubelet[2425]: I0208 23:59:47.431864 2425 reconciler_common.go:295] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0bb15dd4-9cda-47d8-a81f-0eed42b04021-cilium-config-path\") on node \"ci-3510.3.2-a-b1d3c6d57d\" DevicePath \"\"" Feb 8 23:59:47.432196 kubelet[2425]: I0208 23:59:47.432179 2425 reconciler_common.go:295] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9fdc9f98-e24b-4dd4-8931-d49b429f16cf-cilium-run\") on node \"ci-3510.3.2-a-b1d3c6d57d\" DevicePath \"\"" Feb 8 23:59:47.432348 kubelet[2425]: I0208 23:59:47.432336 2425 reconciler_common.go:295] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9fdc9f98-e24b-4dd4-8931-d49b429f16cf-hostproc\") on node \"ci-3510.3.2-a-b1d3c6d57d\" DevicePath \"\"" Feb 8 23:59:47.432469 kubelet[2425]: I0208 23:59:47.431487 2425 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9fdc9f98-e24b-4dd4-8931-d49b429f16cf-cni-path" (OuterVolumeSpecName: "cni-path") pod "9fdc9f98-e24b-4dd4-8931-d49b429f16cf" (UID: "9fdc9f98-e24b-4dd4-8931-d49b429f16cf"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:59:47.432568 kubelet[2425]: I0208 23:59:47.432058 2425 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9fdc9f98-e24b-4dd4-8931-d49b429f16cf-kube-api-access-k52tm" (OuterVolumeSpecName: "kube-api-access-k52tm") pod "9fdc9f98-e24b-4dd4-8931-d49b429f16cf" (UID: "9fdc9f98-e24b-4dd4-8931-d49b429f16cf"). InnerVolumeSpecName "kube-api-access-k52tm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 8 23:59:47.432653 kubelet[2425]: I0208 23:59:47.427794 2425 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9fdc9f98-e24b-4dd4-8931-d49b429f16cf-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "9fdc9f98-e24b-4dd4-8931-d49b429f16cf" (UID: "9fdc9f98-e24b-4dd4-8931-d49b429f16cf"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:59:47.432737 kubelet[2425]: I0208 23:59:47.427816 2425 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9fdc9f98-e24b-4dd4-8931-d49b429f16cf-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "9fdc9f98-e24b-4dd4-8931-d49b429f16cf" (UID: "9fdc9f98-e24b-4dd4-8931-d49b429f16cf"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:59:47.432819 kubelet[2425]: I0208 23:59:47.428057 2425 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9fdc9f98-e24b-4dd4-8931-d49b429f16cf-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "9fdc9f98-e24b-4dd4-8931-d49b429f16cf" (UID: "9fdc9f98-e24b-4dd4-8931-d49b429f16cf"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:59:47.432908 kubelet[2425]: I0208 23:59:47.428079 2425 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9fdc9f98-e24b-4dd4-8931-d49b429f16cf-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "9fdc9f98-e24b-4dd4-8931-d49b429f16cf" (UID: "9fdc9f98-e24b-4dd4-8931-d49b429f16cf"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:59:47.432982 kubelet[2425]: I0208 23:59:47.428103 2425 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9fdc9f98-e24b-4dd4-8931-d49b429f16cf-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "9fdc9f98-e24b-4dd4-8931-d49b429f16cf" (UID: "9fdc9f98-e24b-4dd4-8931-d49b429f16cf"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:59:47.433054 kubelet[2425]: W0208 23:59:47.427747 2425 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/9fdc9f98-e24b-4dd4-8931-d49b429f16cf/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Feb 8 23:59:47.434353 kubelet[2425]: I0208 23:59:47.432124 2425 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9fdc9f98-e24b-4dd4-8931-d49b429f16cf-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "9fdc9f98-e24b-4dd4-8931-d49b429f16cf" (UID: "9fdc9f98-e24b-4dd4-8931-d49b429f16cf"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 8 23:59:47.434441 kubelet[2425]: I0208 23:59:47.434422 2425 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9fdc9f98-e24b-4dd4-8931-d49b429f16cf-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "9fdc9f98-e24b-4dd4-8931-d49b429f16cf" (UID: "9fdc9f98-e24b-4dd4-8931-d49b429f16cf"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 8 23:59:47.434625 kubelet[2425]: I0208 23:59:47.434612 2425 reconciler_common.go:295] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9fdc9f98-e24b-4dd4-8931-d49b429f16cf-xtables-lock\") on node \"ci-3510.3.2-a-b1d3c6d57d\" DevicePath \"\"" Feb 8 23:59:47.435548 kubelet[2425]: I0208 23:59:47.435519 2425 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9fdc9f98-e24b-4dd4-8931-d49b429f16cf-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "9fdc9f98-e24b-4dd4-8931-d49b429f16cf" (UID: "9fdc9f98-e24b-4dd4-8931-d49b429f16cf"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 8 23:59:47.498876 kubelet[2425]: I0208 23:59:47.498848 2425 scope.go:115] "RemoveContainer" containerID="d876abaeea8d678e3ee897e0b4e7dc317c0362d39ee60d082abc734e359e53f1" Feb 8 23:59:47.501512 env[1316]: time="2024-02-08T23:59:47.501067189Z" level=info msg="RemoveContainer for \"d876abaeea8d678e3ee897e0b4e7dc317c0362d39ee60d082abc734e359e53f1\"" Feb 8 23:59:47.503996 systemd[1]: Removed slice kubepods-besteffort-pod0bb15dd4_9cda_47d8_a81f_0eed42b04021.slice. Feb 8 23:59:47.511482 env[1316]: time="2024-02-08T23:59:47.511431000Z" level=info msg="RemoveContainer for \"d876abaeea8d678e3ee897e0b4e7dc317c0362d39ee60d082abc734e359e53f1\" returns successfully" Feb 8 23:59:47.514988 kubelet[2425]: I0208 23:59:47.514967 2425 scope.go:115] "RemoveContainer" containerID="d876abaeea8d678e3ee897e0b4e7dc317c0362d39ee60d082abc734e359e53f1" Feb 8 23:59:47.515679 env[1316]: time="2024-02-08T23:59:47.515517944Z" level=error msg="ContainerStatus for \"d876abaeea8d678e3ee897e0b4e7dc317c0362d39ee60d082abc734e359e53f1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d876abaeea8d678e3ee897e0b4e7dc317c0362d39ee60d082abc734e359e53f1\": not found" Feb 8 23:59:47.515911 kubelet[2425]: E0208 23:59:47.515897 2425 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d876abaeea8d678e3ee897e0b4e7dc317c0362d39ee60d082abc734e359e53f1\": not found" containerID="d876abaeea8d678e3ee897e0b4e7dc317c0362d39ee60d082abc734e359e53f1" Feb 8 23:59:47.516069 kubelet[2425]: I0208 23:59:47.516057 2425 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:d876abaeea8d678e3ee897e0b4e7dc317c0362d39ee60d082abc734e359e53f1} err="failed to get container status \"d876abaeea8d678e3ee897e0b4e7dc317c0362d39ee60d082abc734e359e53f1\": rpc error: code = NotFound desc = an error occurred when try to find container \"d876abaeea8d678e3ee897e0b4e7dc317c0362d39ee60d082abc734e359e53f1\": not found" Feb 8 23:59:47.516225 kubelet[2425]: I0208 23:59:47.516192 2425 scope.go:115] "RemoveContainer" containerID="e9a46a0a42b4ea78cb1497a8d76d15212940b8628b6b8bc50f5f31d5edf278ec" Feb 8 23:59:47.516896 systemd[1]: Removed slice kubepods-burstable-pod9fdc9f98_e24b_4dd4_8931_d49b429f16cf.slice. Feb 8 23:59:47.517028 systemd[1]: kubepods-burstable-pod9fdc9f98_e24b_4dd4_8931_d49b429f16cf.slice: Consumed 7.440s CPU time. 
Feb 8 23:59:47.520997 env[1316]: time="2024-02-08T23:59:47.520743600Z" level=info msg="RemoveContainer for \"e9a46a0a42b4ea78cb1497a8d76d15212940b8628b6b8bc50f5f31d5edf278ec\"" Feb 8 23:59:47.528565 env[1316]: time="2024-02-08T23:59:47.528438082Z" level=info msg="RemoveContainer for \"e9a46a0a42b4ea78cb1497a8d76d15212940b8628b6b8bc50f5f31d5edf278ec\" returns successfully" Feb 8 23:59:47.528823 kubelet[2425]: I0208 23:59:47.528807 2425 scope.go:115] "RemoveContainer" containerID="48f5b77aa0db5bcad2f5ea601d18ee7318feded9bb80f5a3ed848b4cd524a6b0" Feb 8 23:59:47.531291 env[1316]: time="2024-02-08T23:59:47.531046110Z" level=info msg="RemoveContainer for \"48f5b77aa0db5bcad2f5ea601d18ee7318feded9bb80f5a3ed848b4cd524a6b0\"" Feb 8 23:59:47.535471 kubelet[2425]: I0208 23:59:47.535168 2425 reconciler_common.go:295] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9fdc9f98-e24b-4dd4-8931-d49b429f16cf-etc-cni-netd\") on node \"ci-3510.3.2-a-b1d3c6d57d\" DevicePath \"\"" Feb 8 23:59:47.535471 kubelet[2425]: I0208 23:59:47.535217 2425 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9fdc9f98-e24b-4dd4-8931-d49b429f16cf-host-proc-sys-net\") on node \"ci-3510.3.2-a-b1d3c6d57d\" DevicePath \"\"" Feb 8 23:59:47.535471 kubelet[2425]: I0208 23:59:47.535235 2425 reconciler_common.go:295] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9fdc9f98-e24b-4dd4-8931-d49b429f16cf-bpf-maps\") on node \"ci-3510.3.2-a-b1d3c6d57d\" DevicePath \"\"" Feb 8 23:59:47.535471 kubelet[2425]: I0208 23:59:47.535252 2425 reconciler_common.go:295] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9fdc9f98-e24b-4dd4-8931-d49b429f16cf-cilium-config-path\") on node \"ci-3510.3.2-a-b1d3c6d57d\" DevicePath \"\"" Feb 8 23:59:47.535471 kubelet[2425]: I0208 23:59:47.535267 2425 reconciler_common.go:295] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9fdc9f98-e24b-4dd4-8931-d49b429f16cf-cni-path\") on node \"ci-3510.3.2-a-b1d3c6d57d\" DevicePath \"\"" Feb 8 23:59:47.535471 kubelet[2425]: I0208 23:59:47.535294 2425 reconciler_common.go:295] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9fdc9f98-e24b-4dd4-8931-d49b429f16cf-clustermesh-secrets\") on node \"ci-3510.3.2-a-b1d3c6d57d\" DevicePath \"\"" Feb 8 23:59:47.535471 kubelet[2425]: I0208 23:59:47.535310 2425 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-k52tm\" (UniqueName: \"kubernetes.io/projected/9fdc9f98-e24b-4dd4-8931-d49b429f16cf-kube-api-access-k52tm\") on node \"ci-3510.3.2-a-b1d3c6d57d\" DevicePath \"\"" Feb 8 23:59:47.535471 kubelet[2425]: I0208 23:59:47.535327 2425 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9fdc9f98-e24b-4dd4-8931-d49b429f16cf-host-proc-sys-kernel\") on node \"ci-3510.3.2-a-b1d3c6d57d\" DevicePath \"\"" Feb 8 23:59:47.535739 kubelet[2425]: I0208 23:59:47.535359 2425 reconciler_common.go:295] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9fdc9f98-e24b-4dd4-8931-d49b429f16cf-cilium-cgroup\") on node \"ci-3510.3.2-a-b1d3c6d57d\" DevicePath \"\"" Feb 8 23:59:47.535739 kubelet[2425]: I0208 23:59:47.535374 2425 reconciler_common.go:295] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9fdc9f98-e24b-4dd4-8931-d49b429f16cf-hubble-tls\") on node 
\"ci-3510.3.2-a-b1d3c6d57d\" DevicePath \"\"" Feb 8 23:59:47.540244 env[1316]: time="2024-02-08T23:59:47.540208908Z" level=info msg="RemoveContainer for \"48f5b77aa0db5bcad2f5ea601d18ee7318feded9bb80f5a3ed848b4cd524a6b0\" returns successfully" Feb 8 23:59:47.540513 kubelet[2425]: I0208 23:59:47.540496 2425 scope.go:115] "RemoveContainer" containerID="f23632fed9014d251adee04ed9614ab2b07b6dd271206cf3e9c1ebfb81d7b24f" Feb 8 23:59:47.541771 env[1316]: time="2024-02-08T23:59:47.541744325Z" level=info msg="RemoveContainer for \"f23632fed9014d251adee04ed9614ab2b07b6dd271206cf3e9c1ebfb81d7b24f\"" Feb 8 23:59:47.552868 env[1316]: time="2024-02-08T23:59:47.552837443Z" level=info msg="RemoveContainer for \"f23632fed9014d251adee04ed9614ab2b07b6dd271206cf3e9c1ebfb81d7b24f\" returns successfully" Feb 8 23:59:47.553040 kubelet[2425]: I0208 23:59:47.552982 2425 scope.go:115] "RemoveContainer" containerID="16b106232504c2023dcbf34a525741f5044595eee3c92c85b30c0ac0d7553caf" Feb 8 23:59:47.554122 env[1316]: time="2024-02-08T23:59:47.554093557Z" level=info msg="RemoveContainer for \"16b106232504c2023dcbf34a525741f5044595eee3c92c85b30c0ac0d7553caf\"" Feb 8 23:59:47.563349 env[1316]: time="2024-02-08T23:59:47.563311156Z" level=info msg="RemoveContainer for \"16b106232504c2023dcbf34a525741f5044595eee3c92c85b30c0ac0d7553caf\" returns successfully" Feb 8 23:59:47.563524 kubelet[2425]: I0208 23:59:47.563487 2425 scope.go:115] "RemoveContainer" containerID="ef9d10cce05cdd7ae4b06a6d466aef5699dca117b48a6a43ed7801d4fa440543" Feb 8 23:59:47.564611 env[1316]: time="2024-02-08T23:59:47.564586869Z" level=info msg="RemoveContainer for \"ef9d10cce05cdd7ae4b06a6d466aef5699dca117b48a6a43ed7801d4fa440543\"" Feb 8 23:59:47.574206 env[1316]: time="2024-02-08T23:59:47.574169472Z" level=info msg="RemoveContainer for \"ef9d10cce05cdd7ae4b06a6d466aef5699dca117b48a6a43ed7801d4fa440543\" returns successfully" Feb 8 23:59:47.574356 kubelet[2425]: I0208 23:59:47.574334 2425 scope.go:115] "RemoveContainer" containerID="e9a46a0a42b4ea78cb1497a8d76d15212940b8628b6b8bc50f5f31d5edf278ec" Feb 8 23:59:47.574657 env[1316]: time="2024-02-08T23:59:47.574587776Z" level=error msg="ContainerStatus for \"e9a46a0a42b4ea78cb1497a8d76d15212940b8628b6b8bc50f5f31d5edf278ec\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e9a46a0a42b4ea78cb1497a8d76d15212940b8628b6b8bc50f5f31d5edf278ec\": not found" Feb 8 23:59:47.574774 kubelet[2425]: E0208 23:59:47.574752 2425 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e9a46a0a42b4ea78cb1497a8d76d15212940b8628b6b8bc50f5f31d5edf278ec\": not found" containerID="e9a46a0a42b4ea78cb1497a8d76d15212940b8628b6b8bc50f5f31d5edf278ec" Feb 8 23:59:47.574853 kubelet[2425]: I0208 23:59:47.574796 2425 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:e9a46a0a42b4ea78cb1497a8d76d15212940b8628b6b8bc50f5f31d5edf278ec} err="failed to get container status \"e9a46a0a42b4ea78cb1497a8d76d15212940b8628b6b8bc50f5f31d5edf278ec\": rpc error: code = NotFound desc = an error occurred when try to find container \"e9a46a0a42b4ea78cb1497a8d76d15212940b8628b6b8bc50f5f31d5edf278ec\": not found" Feb 8 23:59:47.574853 kubelet[2425]: I0208 23:59:47.574811 2425 scope.go:115] "RemoveContainer" containerID="48f5b77aa0db5bcad2f5ea601d18ee7318feded9bb80f5a3ed848b4cd524a6b0" Feb 8 23:59:47.575029 env[1316]: time="2024-02-08T23:59:47.574979381Z" level=error 
msg="ContainerStatus for \"48f5b77aa0db5bcad2f5ea601d18ee7318feded9bb80f5a3ed848b4cd524a6b0\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"48f5b77aa0db5bcad2f5ea601d18ee7318feded9bb80f5a3ed848b4cd524a6b0\": not found" Feb 8 23:59:47.575137 kubelet[2425]: E0208 23:59:47.575119 2425 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"48f5b77aa0db5bcad2f5ea601d18ee7318feded9bb80f5a3ed848b4cd524a6b0\": not found" containerID="48f5b77aa0db5bcad2f5ea601d18ee7318feded9bb80f5a3ed848b4cd524a6b0" Feb 8 23:59:47.575206 kubelet[2425]: I0208 23:59:47.575155 2425 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:48f5b77aa0db5bcad2f5ea601d18ee7318feded9bb80f5a3ed848b4cd524a6b0} err="failed to get container status \"48f5b77aa0db5bcad2f5ea601d18ee7318feded9bb80f5a3ed848b4cd524a6b0\": rpc error: code = NotFound desc = an error occurred when try to find container \"48f5b77aa0db5bcad2f5ea601d18ee7318feded9bb80f5a3ed848b4cd524a6b0\": not found" Feb 8 23:59:47.575206 kubelet[2425]: I0208 23:59:47.575173 2425 scope.go:115] "RemoveContainer" containerID="f23632fed9014d251adee04ed9614ab2b07b6dd271206cf3e9c1ebfb81d7b24f" Feb 8 23:59:47.575387 env[1316]: time="2024-02-08T23:59:47.575340984Z" level=error msg="ContainerStatus for \"f23632fed9014d251adee04ed9614ab2b07b6dd271206cf3e9c1ebfb81d7b24f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f23632fed9014d251adee04ed9614ab2b07b6dd271206cf3e9c1ebfb81d7b24f\": not found" Feb 8 23:59:47.575503 kubelet[2425]: E0208 23:59:47.575486 2425 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f23632fed9014d251adee04ed9614ab2b07b6dd271206cf3e9c1ebfb81d7b24f\": not found" containerID="f23632fed9014d251adee04ed9614ab2b07b6dd271206cf3e9c1ebfb81d7b24f" Feb 8 23:59:47.575580 kubelet[2425]: I0208 23:59:47.575518 2425 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:f23632fed9014d251adee04ed9614ab2b07b6dd271206cf3e9c1ebfb81d7b24f} err="failed to get container status \"f23632fed9014d251adee04ed9614ab2b07b6dd271206cf3e9c1ebfb81d7b24f\": rpc error: code = NotFound desc = an error occurred when try to find container \"f23632fed9014d251adee04ed9614ab2b07b6dd271206cf3e9c1ebfb81d7b24f\": not found" Feb 8 23:59:47.575580 kubelet[2425]: I0208 23:59:47.575532 2425 scope.go:115] "RemoveContainer" containerID="16b106232504c2023dcbf34a525741f5044595eee3c92c85b30c0ac0d7553caf" Feb 8 23:59:47.575754 env[1316]: time="2024-02-08T23:59:47.575696688Z" level=error msg="ContainerStatus for \"16b106232504c2023dcbf34a525741f5044595eee3c92c85b30c0ac0d7553caf\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"16b106232504c2023dcbf34a525741f5044595eee3c92c85b30c0ac0d7553caf\": not found" Feb 8 23:59:47.575867 kubelet[2425]: E0208 23:59:47.575847 2425 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"16b106232504c2023dcbf34a525741f5044595eee3c92c85b30c0ac0d7553caf\": not found" containerID="16b106232504c2023dcbf34a525741f5044595eee3c92c85b30c0ac0d7553caf" Feb 8 23:59:47.575936 kubelet[2425]: I0208 23:59:47.575879 2425 pod_container_deletor.go:53] "DeleteContainer returned 
error" containerID={Type:containerd ID:16b106232504c2023dcbf34a525741f5044595eee3c92c85b30c0ac0d7553caf} err="failed to get container status \"16b106232504c2023dcbf34a525741f5044595eee3c92c85b30c0ac0d7553caf\": rpc error: code = NotFound desc = an error occurred when try to find container \"16b106232504c2023dcbf34a525741f5044595eee3c92c85b30c0ac0d7553caf\": not found" Feb 8 23:59:47.575936 kubelet[2425]: I0208 23:59:47.575893 2425 scope.go:115] "RemoveContainer" containerID="ef9d10cce05cdd7ae4b06a6d466aef5699dca117b48a6a43ed7801d4fa440543" Feb 8 23:59:47.576107 env[1316]: time="2024-02-08T23:59:47.576062592Z" level=error msg="ContainerStatus for \"ef9d10cce05cdd7ae4b06a6d466aef5699dca117b48a6a43ed7801d4fa440543\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ef9d10cce05cdd7ae4b06a6d466aef5699dca117b48a6a43ed7801d4fa440543\": not found" Feb 8 23:59:47.576208 kubelet[2425]: E0208 23:59:47.576190 2425 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ef9d10cce05cdd7ae4b06a6d466aef5699dca117b48a6a43ed7801d4fa440543\": not found" containerID="ef9d10cce05cdd7ae4b06a6d466aef5699dca117b48a6a43ed7801d4fa440543" Feb 8 23:59:47.576279 kubelet[2425]: I0208 23:59:47.576221 2425 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:ef9d10cce05cdd7ae4b06a6d466aef5699dca117b48a6a43ed7801d4fa440543} err="failed to get container status \"ef9d10cce05cdd7ae4b06a6d466aef5699dca117b48a6a43ed7801d4fa440543\": rpc error: code = NotFound desc = an error occurred when try to find container \"ef9d10cce05cdd7ae4b06a6d466aef5699dca117b48a6a43ed7801d4fa440543\": not found" Feb 8 23:59:48.066463 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9bdcd40dc5ea01871317c0648e9570c712cb08f03509ef5c68e960f36785e1d2-rootfs.mount: Deactivated successfully. Feb 8 23:59:48.066614 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9bdcd40dc5ea01871317c0648e9570c712cb08f03509ef5c68e960f36785e1d2-shm.mount: Deactivated successfully. Feb 8 23:59:48.066711 systemd[1]: var-lib-kubelet-pods-9fdc9f98\x2de24b\x2d4dd4\x2d8931\x2dd49b429f16cf-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 8 23:59:48.066806 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-09e7b84f364388f6697d6b09828c960c9a62f144608e7827ce7adf1fe76acbf6-rootfs.mount: Deactivated successfully. Feb 8 23:59:48.066906 systemd[1]: var-lib-kubelet-pods-9fdc9f98\x2de24b\x2d4dd4\x2d8931\x2dd49b429f16cf-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 8 23:59:48.067013 systemd[1]: var-lib-kubelet-pods-0bb15dd4\x2d9cda\x2d47d8\x2da81f\x2d0eed42b04021-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dhc64t.mount: Deactivated successfully. Feb 8 23:59:48.067114 systemd[1]: var-lib-kubelet-pods-9fdc9f98\x2de24b\x2d4dd4\x2d8931\x2dd49b429f16cf-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dk52tm.mount: Deactivated successfully. 
Feb 8 23:59:48.961394 env[1316]: time="2024-02-08T23:59:48.961336076Z" level=info msg="StopContainer for \"d876abaeea8d678e3ee897e0b4e7dc317c0362d39ee60d082abc734e359e53f1\" with timeout 1 (s)" Feb 8 23:59:48.962317 env[1316]: time="2024-02-08T23:59:48.962246586Z" level=error msg="StopContainer for \"d876abaeea8d678e3ee897e0b4e7dc317c0362d39ee60d082abc734e359e53f1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d876abaeea8d678e3ee897e0b4e7dc317c0362d39ee60d082abc734e359e53f1\": not found" Feb 8 23:59:48.962969 env[1316]: time="2024-02-08T23:59:48.962205486Z" level=info msg="StopContainer for \"e9a46a0a42b4ea78cb1497a8d76d15212940b8628b6b8bc50f5f31d5edf278ec\" with timeout 1 (s)" Feb 8 23:59:48.964666 kubelet[2425]: I0208 23:59:48.963561 2425 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=0bb15dd4-9cda-47d8-a81f-0eed42b04021 path="/var/lib/kubelet/pods/0bb15dd4-9cda-47d8-a81f-0eed42b04021/volumes" Feb 8 23:59:48.964666 kubelet[2425]: E0208 23:59:48.963875 2425 remote_runtime.go:349] "StopContainer from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e9a46a0a42b4ea78cb1497a8d76d15212940b8628b6b8bc50f5f31d5edf278ec\": not found" containerID="e9a46a0a42b4ea78cb1497a8d76d15212940b8628b6b8bc50f5f31d5edf278ec" Feb 8 23:59:48.964666 kubelet[2425]: E0208 23:59:48.964002 2425 remote_runtime.go:349] "StopContainer from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d876abaeea8d678e3ee897e0b4e7dc317c0362d39ee60d082abc734e359e53f1\": not found" containerID="d876abaeea8d678e3ee897e0b4e7dc317c0362d39ee60d082abc734e359e53f1" Feb 8 23:59:48.964666 kubelet[2425]: I0208 23:59:48.964105 2425 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=9fdc9f98-e24b-4dd4-8931-d49b429f16cf path="/var/lib/kubelet/pods/9fdc9f98-e24b-4dd4-8931-d49b429f16cf/volumes" Feb 8 23:59:48.965113 env[1316]: time="2024-02-08T23:59:48.963543300Z" level=error msg="StopContainer for \"e9a46a0a42b4ea78cb1497a8d76d15212940b8628b6b8bc50f5f31d5edf278ec\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e9a46a0a42b4ea78cb1497a8d76d15212940b8628b6b8bc50f5f31d5edf278ec\": not found" Feb 8 23:59:48.965327 env[1316]: time="2024-02-08T23:59:48.965298118Z" level=info msg="StopPodSandbox for \"09e7b84f364388f6697d6b09828c960c9a62f144608e7827ce7adf1fe76acbf6\"" Feb 8 23:59:48.965567 env[1316]: time="2024-02-08T23:59:48.965512321Z" level=info msg="TearDown network for sandbox \"09e7b84f364388f6697d6b09828c960c9a62f144608e7827ce7adf1fe76acbf6\" successfully" Feb 8 23:59:48.965735 env[1316]: time="2024-02-08T23:59:48.965713123Z" level=info msg="StopPodSandbox for \"09e7b84f364388f6697d6b09828c960c9a62f144608e7827ce7adf1fe76acbf6\" returns successfully" Feb 8 23:59:48.965899 env[1316]: time="2024-02-08T23:59:48.965712923Z" level=info msg="StopPodSandbox for \"9bdcd40dc5ea01871317c0648e9570c712cb08f03509ef5c68e960f36785e1d2\"" Feb 8 23:59:48.965974 env[1316]: time="2024-02-08T23:59:48.965908225Z" level=info msg="TearDown network for sandbox \"9bdcd40dc5ea01871317c0648e9570c712cb08f03509ef5c68e960f36785e1d2\" successfully" Feb 8 23:59:48.965974 env[1316]: time="2024-02-08T23:59:48.965947725Z" level=info msg="StopPodSandbox for \"9bdcd40dc5ea01871317c0648e9570c712cb08f03509ef5c68e960f36785e1d2\" returns successfully" Feb 8 23:59:49.036417 kubelet[2425]: E0208 23:59:49.036351 2425 kubelet.go:2475] 
"Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 8 23:59:49.094835 sshd[4123]: pam_unix(sshd:session): session closed for user core Feb 8 23:59:49.098177 systemd[1]: sshd@21-10.200.8.17:22-10.200.12.6:46084.service: Deactivated successfully. Feb 8 23:59:49.099111 systemd[1]: session-24.scope: Deactivated successfully. Feb 8 23:59:49.099874 systemd-logind[1304]: Session 24 logged out. Waiting for processes to exit. Feb 8 23:59:49.101042 systemd-logind[1304]: Removed session 24. Feb 8 23:59:49.202368 systemd[1]: Started sshd@22-10.200.8.17:22-10.200.12.6:48608.service. Feb 8 23:59:49.825982 sshd[4294]: Accepted publickey for core from 10.200.12.6 port 48608 ssh2: RSA SHA256:bgxHmJM37JVrLJuGSWjL4vRG7UYDV2sE2SVK2HyWFow Feb 8 23:59:49.827741 sshd[4294]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:59:49.833850 systemd[1]: Started session-25.scope. Feb 8 23:59:49.834603 systemd-logind[1304]: New session 25 of user core. Feb 8 23:59:50.531026 kubelet[2425]: I0208 23:59:50.530980 2425 topology_manager.go:210] "Topology Admit Handler" Feb 8 23:59:50.531517 kubelet[2425]: E0208 23:59:50.531068 2425 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9fdc9f98-e24b-4dd4-8931-d49b429f16cf" containerName="mount-cgroup" Feb 8 23:59:50.531517 kubelet[2425]: E0208 23:59:50.531084 2425 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9fdc9f98-e24b-4dd4-8931-d49b429f16cf" containerName="clean-cilium-state" Feb 8 23:59:50.531517 kubelet[2425]: E0208 23:59:50.531093 2425 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9fdc9f98-e24b-4dd4-8931-d49b429f16cf" containerName="cilium-agent" Feb 8 23:59:50.531517 kubelet[2425]: E0208 23:59:50.531101 2425 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0bb15dd4-9cda-47d8-a81f-0eed42b04021" containerName="cilium-operator" Feb 8 23:59:50.531517 kubelet[2425]: E0208 23:59:50.531122 2425 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9fdc9f98-e24b-4dd4-8931-d49b429f16cf" containerName="apply-sysctl-overwrites" Feb 8 23:59:50.531517 kubelet[2425]: E0208 23:59:50.531131 2425 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9fdc9f98-e24b-4dd4-8931-d49b429f16cf" containerName="mount-bpf-fs" Feb 8 23:59:50.531517 kubelet[2425]: I0208 23:59:50.531177 2425 memory_manager.go:346] "RemoveStaleState removing state" podUID="9fdc9f98-e24b-4dd4-8931-d49b429f16cf" containerName="cilium-agent" Feb 8 23:59:50.531517 kubelet[2425]: I0208 23:59:50.531201 2425 memory_manager.go:346] "RemoveStaleState removing state" podUID="0bb15dd4-9cda-47d8-a81f-0eed42b04021" containerName="cilium-operator" Feb 8 23:59:50.538946 systemd[1]: Created slice kubepods-burstable-pod98870384_7aea_44e6_9d5f_6a617a76c122.slice. 
Feb 8 23:59:50.547703 kubelet[2425]: W0208 23:59:50.547679 2425 reflector.go:424] object-"kube-system"/"cilium-ipsec-keys": failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:ci-3510.3.2-a-b1d3c6d57d" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.2-a-b1d3c6d57d' and this object Feb 8 23:59:50.547841 kubelet[2425]: E0208 23:59:50.547715 2425 reflector.go:140] object-"kube-system"/"cilium-ipsec-keys": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:ci-3510.3.2-a-b1d3c6d57d" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.2-a-b1d3c6d57d' and this object Feb 8 23:59:50.547901 kubelet[2425]: W0208 23:59:50.547885 2425 reflector.go:424] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ci-3510.3.2-a-b1d3c6d57d" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.2-a-b1d3c6d57d' and this object Feb 8 23:59:50.547945 kubelet[2425]: E0208 23:59:50.547906 2425 reflector.go:140] object-"kube-system"/"cilium-clustermesh": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ci-3510.3.2-a-b1d3c6d57d" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.2-a-b1d3c6d57d' and this object Feb 8 23:59:50.548049 kubelet[2425]: W0208 23:59:50.548032 2425 reflector.go:424] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ci-3510.3.2-a-b1d3c6d57d" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.2-a-b1d3c6d57d' and this object Feb 8 23:59:50.548132 kubelet[2425]: E0208 23:59:50.548057 2425 reflector.go:140] object-"kube-system"/"hubble-server-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ci-3510.3.2-a-b1d3c6d57d" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.2-a-b1d3c6d57d' and this object Feb 8 23:59:50.548231 kubelet[2425]: W0208 23:59:50.548216 2425 reflector.go:424] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ci-3510.3.2-a-b1d3c6d57d" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.2-a-b1d3c6d57d' and this object Feb 8 23:59:50.548306 kubelet[2425]: E0208 23:59:50.548248 2425 reflector.go:140] object-"kube-system"/"cilium-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ci-3510.3.2-a-b1d3c6d57d" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.2-a-b1d3c6d57d' and this object Feb 8 23:59:50.637734 sshd[4294]: pam_unix(sshd:session): session closed for user core Feb 8 23:59:50.641443 systemd[1]: sshd@22-10.200.8.17:22-10.200.12.6:48608.service: Deactivated successfully. 
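The "forbidden ... no relationship found between node ... and this object" errors above are the node authorizer at work: a kubelet may only read a secret or configmap once a pod referencing it is bound to that node, and the watches registered at pod admission can briefly lose that race, as appears to happen here. Until the watches succeed, the kubelet's object caches never sync — which is exactly what the "failed to sync configmap/secret cache: timed out waiting for the condition" entries a little further below report. That message is the stock timeout error from apimachinery's wait package; a minimal reproduction of the polling pattern (the condition here is a stand-in):

```go
// cache_sync_wait.go — poll a condition until a deadline; on timeout the
// returned error carries the "timed out waiting for the condition" text
// seen in the kubelet entries below.
package main

import (
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

func main() {
	synced := false // stays false, as when the node may not yet read the secret
	err := wait.PollImmediate(100*time.Millisecond, time.Second, func() (bool, error) {
		return synced, nil
	})
	fmt.Println(err) // prints: timed out waiting for the condition
}
```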
Feb 8 23:59:50.642510 systemd[1]: session-25.scope: Deactivated successfully. Feb 8 23:59:50.643172 systemd-logind[1304]: Session 25 logged out. Waiting for processes to exit. Feb 8 23:59:50.644141 systemd-logind[1304]: Removed session 25. Feb 8 23:59:50.655374 kubelet[2425]: I0208 23:59:50.655342 2425 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/98870384-7aea-44e6-9d5f-6a617a76c122-cilium-cgroup\") pod \"cilium-jtggv\" (UID: \"98870384-7aea-44e6-9d5f-6a617a76c122\") " pod="kube-system/cilium-jtggv" Feb 8 23:59:50.655499 kubelet[2425]: I0208 23:59:50.655389 2425 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/98870384-7aea-44e6-9d5f-6a617a76c122-xtables-lock\") pod \"cilium-jtggv\" (UID: \"98870384-7aea-44e6-9d5f-6a617a76c122\") " pod="kube-system/cilium-jtggv" Feb 8 23:59:50.655499 kubelet[2425]: I0208 23:59:50.655415 2425 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/98870384-7aea-44e6-9d5f-6a617a76c122-hubble-tls\") pod \"cilium-jtggv\" (UID: \"98870384-7aea-44e6-9d5f-6a617a76c122\") " pod="kube-system/cilium-jtggv" Feb 8 23:59:50.655499 kubelet[2425]: I0208 23:59:50.655443 2425 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/98870384-7aea-44e6-9d5f-6a617a76c122-cni-path\") pod \"cilium-jtggv\" (UID: \"98870384-7aea-44e6-9d5f-6a617a76c122\") " pod="kube-system/cilium-jtggv" Feb 8 23:59:50.655499 kubelet[2425]: I0208 23:59:50.655488 2425 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/98870384-7aea-44e6-9d5f-6a617a76c122-clustermesh-secrets\") pod \"cilium-jtggv\" (UID: \"98870384-7aea-44e6-9d5f-6a617a76c122\") " pod="kube-system/cilium-jtggv" Feb 8 23:59:50.655671 kubelet[2425]: I0208 23:59:50.655516 2425 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/98870384-7aea-44e6-9d5f-6a617a76c122-cilium-config-path\") pod \"cilium-jtggv\" (UID: \"98870384-7aea-44e6-9d5f-6a617a76c122\") " pod="kube-system/cilium-jtggv" Feb 8 23:59:50.655671 kubelet[2425]: I0208 23:59:50.655552 2425 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/98870384-7aea-44e6-9d5f-6a617a76c122-etc-cni-netd\") pod \"cilium-jtggv\" (UID: \"98870384-7aea-44e6-9d5f-6a617a76c122\") " pod="kube-system/cilium-jtggv" Feb 8 23:59:50.655671 kubelet[2425]: I0208 23:59:50.655584 2425 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/98870384-7aea-44e6-9d5f-6a617a76c122-host-proc-sys-kernel\") pod \"cilium-jtggv\" (UID: \"98870384-7aea-44e6-9d5f-6a617a76c122\") " pod="kube-system/cilium-jtggv" Feb 8 23:59:50.655671 kubelet[2425]: I0208 23:59:50.655639 2425 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m6q5z\" (UniqueName: \"kubernetes.io/projected/98870384-7aea-44e6-9d5f-6a617a76c122-kube-api-access-m6q5z\") pod \"cilium-jtggv\" (UID: 
\"98870384-7aea-44e6-9d5f-6a617a76c122\") " pod="kube-system/cilium-jtggv" Feb 8 23:59:50.655671 kubelet[2425]: I0208 23:59:50.655669 2425 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/98870384-7aea-44e6-9d5f-6a617a76c122-lib-modules\") pod \"cilium-jtggv\" (UID: \"98870384-7aea-44e6-9d5f-6a617a76c122\") " pod="kube-system/cilium-jtggv" Feb 8 23:59:50.655873 kubelet[2425]: I0208 23:59:50.655697 2425 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/98870384-7aea-44e6-9d5f-6a617a76c122-cilium-ipsec-secrets\") pod \"cilium-jtggv\" (UID: \"98870384-7aea-44e6-9d5f-6a617a76c122\") " pod="kube-system/cilium-jtggv" Feb 8 23:59:50.655873 kubelet[2425]: I0208 23:59:50.655730 2425 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/98870384-7aea-44e6-9d5f-6a617a76c122-hostproc\") pod \"cilium-jtggv\" (UID: \"98870384-7aea-44e6-9d5f-6a617a76c122\") " pod="kube-system/cilium-jtggv" Feb 8 23:59:50.655873 kubelet[2425]: I0208 23:59:50.655761 2425 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/98870384-7aea-44e6-9d5f-6a617a76c122-cilium-run\") pod \"cilium-jtggv\" (UID: \"98870384-7aea-44e6-9d5f-6a617a76c122\") " pod="kube-system/cilium-jtggv" Feb 8 23:59:50.655873 kubelet[2425]: I0208 23:59:50.655790 2425 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/98870384-7aea-44e6-9d5f-6a617a76c122-bpf-maps\") pod \"cilium-jtggv\" (UID: \"98870384-7aea-44e6-9d5f-6a617a76c122\") " pod="kube-system/cilium-jtggv" Feb 8 23:59:50.655873 kubelet[2425]: I0208 23:59:50.655824 2425 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/98870384-7aea-44e6-9d5f-6a617a76c122-host-proc-sys-net\") pod \"cilium-jtggv\" (UID: \"98870384-7aea-44e6-9d5f-6a617a76c122\") " pod="kube-system/cilium-jtggv" Feb 8 23:59:50.744714 systemd[1]: Started sshd@23-10.200.8.17:22-10.200.12.6:48620.service. Feb 8 23:59:51.368942 sshd[4304]: Accepted publickey for core from 10.200.12.6 port 48620 ssh2: RSA SHA256:bgxHmJM37JVrLJuGSWjL4vRG7UYDV2sE2SVK2HyWFow Feb 8 23:59:51.370706 sshd[4304]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:59:51.376932 systemd-logind[1304]: New session 26 of user core. Feb 8 23:59:51.377436 systemd[1]: Started session-26.scope. Feb 8 23:59:51.757414 kubelet[2425]: E0208 23:59:51.757289 2425 configmap.go:199] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition Feb 8 23:59:51.757414 kubelet[2425]: E0208 23:59:51.757406 2425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/98870384-7aea-44e6-9d5f-6a617a76c122-cilium-config-path podName:98870384-7aea-44e6-9d5f-6a617a76c122 nodeName:}" failed. No retries permitted until 2024-02-08 23:59:52.257377521 +0000 UTC m=+233.479031272 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/98870384-7aea-44e6-9d5f-6a617a76c122-cilium-config-path") pod "cilium-jtggv" (UID: "98870384-7aea-44e6-9d5f-6a617a76c122") : failed to sync configmap cache: timed out waiting for the condition Feb 8 23:59:51.757932 kubelet[2425]: E0208 23:59:51.757730 2425 secret.go:194] Couldn't get secret kube-system/cilium-ipsec-keys: failed to sync secret cache: timed out waiting for the condition Feb 8 23:59:51.757932 kubelet[2425]: E0208 23:59:51.757794 2425 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/98870384-7aea-44e6-9d5f-6a617a76c122-cilium-ipsec-secrets podName:98870384-7aea-44e6-9d5f-6a617a76c122 nodeName:}" failed. No retries permitted until 2024-02-08 23:59:52.257770425 +0000 UTC m=+233.479424176 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-ipsec-secrets" (UniqueName: "kubernetes.io/secret/98870384-7aea-44e6-9d5f-6a617a76c122-cilium-ipsec-secrets") pod "cilium-jtggv" (UID: "98870384-7aea-44e6-9d5f-6a617a76c122") : failed to sync secret cache: timed out waiting for the condition Feb 8 23:59:51.882141 sshd[4304]: pam_unix(sshd:session): session closed for user core Feb 8 23:59:51.886376 systemd-logind[1304]: Session 26 logged out. Waiting for processes to exit. Feb 8 23:59:51.886832 systemd[1]: sshd@23-10.200.8.17:22-10.200.12.6:48620.service: Deactivated successfully. Feb 8 23:59:51.887751 systemd[1]: session-26.scope: Deactivated successfully. Feb 8 23:59:51.888931 systemd-logind[1304]: Removed session 26. Feb 8 23:59:51.989626 systemd[1]: Started sshd@24-10.200.8.17:22-10.200.12.6:48636.service. Feb 8 23:59:52.345698 env[1316]: time="2024-02-08T23:59:52.345647290Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-jtggv,Uid:98870384-7aea-44e6-9d5f-6a617a76c122,Namespace:kube-system,Attempt:0,}" Feb 8 23:59:52.396292 env[1316]: time="2024-02-08T23:59:52.396223019Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 8 23:59:52.396502 env[1316]: time="2024-02-08T23:59:52.396261019Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 8 23:59:52.396502 env[1316]: time="2024-02-08T23:59:52.396274419Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 8 23:59:52.396502 env[1316]: time="2024-02-08T23:59:52.396398721Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/287e2cfa7704caa7f1bcbf432838d946373f44d1d342a5d6d5ec7dbdca2b9422 pid=4327 runtime=io.containerd.runc.v2 Feb 8 23:59:52.415650 systemd[1]: Started cri-containerd-287e2cfa7704caa7f1bcbf432838d946373f44d1d342a5d6d5ec7dbdca2b9422.scope. 
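Note the retry pacing in the MountVolume failures above: "No retries permitted until ... (durationBeforeRetry 500ms)" is the kubelet's per-operation exponential backoff, which keeps a flapping volume from being retried in a tight loop. A sketch of the resulting schedule — the 500ms initial delay comes from the log, while the growth factor and cap are assumptions, not the kubelet's exact constants:

```go
// mount_backoff.go — print an exponential retry schedule of the kind the
// nestedpendingoperations entries above describe.
package main

import (
	"fmt"
	"time"
)

func main() {
	delay := 500 * time.Millisecond // first retry delay, as in the log
	const factor = 2.0
	const maxDelay = 2 * time.Minute

	for attempt := 1; attempt <= 6; attempt++ {
		fmt.Printf("attempt %d: no retries permitted for %s\n", attempt, delay)
		delay = time.Duration(float64(delay) * factor)
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}
```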
Feb 8 23:59:52.445557 env[1316]: time="2024-02-08T23:59:52.445506035Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-jtggv,Uid:98870384-7aea-44e6-9d5f-6a617a76c122,Namespace:kube-system,Attempt:0,} returns sandbox id \"287e2cfa7704caa7f1bcbf432838d946373f44d1d342a5d6d5ec7dbdca2b9422\"" Feb 8 23:59:52.449429 env[1316]: time="2024-02-08T23:59:52.448056261Z" level=info msg="CreateContainer within sandbox \"287e2cfa7704caa7f1bcbf432838d946373f44d1d342a5d6d5ec7dbdca2b9422\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 8 23:59:52.485782 env[1316]: time="2024-02-08T23:59:52.485739556Z" level=info msg="CreateContainer within sandbox \"287e2cfa7704caa7f1bcbf432838d946373f44d1d342a5d6d5ec7dbdca2b9422\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"400041e2c5196d16856b99d579c4587e59b6a444abe1417fab08527336e026e6\"" Feb 8 23:59:52.488173 env[1316]: time="2024-02-08T23:59:52.486496864Z" level=info msg="StartContainer for \"400041e2c5196d16856b99d579c4587e59b6a444abe1417fab08527336e026e6\"" Feb 8 23:59:52.505040 systemd[1]: Started cri-containerd-400041e2c5196d16856b99d579c4587e59b6a444abe1417fab08527336e026e6.scope. Feb 8 23:59:52.517200 systemd[1]: cri-containerd-400041e2c5196d16856b99d579c4587e59b6a444abe1417fab08527336e026e6.scope: Deactivated successfully. Feb 8 23:59:52.583589 env[1316]: time="2024-02-08T23:59:52.583508079Z" level=info msg="shim disconnected" id=400041e2c5196d16856b99d579c4587e59b6a444abe1417fab08527336e026e6 Feb 8 23:59:52.583589 env[1316]: time="2024-02-08T23:59:52.583588680Z" level=warning msg="cleaning up after shim disconnected" id=400041e2c5196d16856b99d579c4587e59b6a444abe1417fab08527336e026e6 namespace=k8s.io Feb 8 23:59:52.583922 env[1316]: time="2024-02-08T23:59:52.583601380Z" level=info msg="cleaning up dead shim" Feb 8 23:59:52.592723 env[1316]: time="2024-02-08T23:59:52.592671175Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:59:52Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4386 runtime=io.containerd.runc.v2\ntime=\"2024-02-08T23:59:52Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/400041e2c5196d16856b99d579c4587e59b6a444abe1417fab08527336e026e6/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Feb 8 23:59:52.593082 env[1316]: time="2024-02-08T23:59:52.592966878Z" level=error msg="copy shim log" error="read /proc/self/fd/41: file already closed" Feb 8 23:59:52.594310 env[1316]: time="2024-02-08T23:59:52.594265192Z" level=error msg="Failed to pipe stderr of container \"400041e2c5196d16856b99d579c4587e59b6a444abe1417fab08527336e026e6\"" error="reading from a closed fifo" Feb 8 23:59:52.594414 env[1316]: time="2024-02-08T23:59:52.594357893Z" level=error msg="Failed to pipe stdout of container \"400041e2c5196d16856b99d579c4587e59b6a444abe1417fab08527336e026e6\"" error="reading from a closed fifo" Feb 8 23:59:52.600206 env[1316]: time="2024-02-08T23:59:52.598848740Z" level=error msg="StartContainer for \"400041e2c5196d16856b99d579c4587e59b6a444abe1417fab08527336e026e6\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Feb 8 23:59:52.601512 kubelet[2425]: E0208 23:59:52.599368 2425 remote_runtime.go:326] "StartContainer from runtime service failed" err="rpc error: code = Unknown 
desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="400041e2c5196d16856b99d579c4587e59b6a444abe1417fab08527336e026e6" Feb 8 23:59:52.601512 kubelet[2425]: E0208 23:59:52.599974 2425 kuberuntime_manager.go:872] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Feb 8 23:59:52.601512 kubelet[2425]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Feb 8 23:59:52.601512 kubelet[2425]: rm /hostbin/cilium-mount Feb 8 23:59:52.601786 kubelet[2425]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-m6q5z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-jtggv_kube-system(98870384-7aea-44e6-9d5f-6a617a76c122): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Feb 8 23:59:52.601931 kubelet[2425]: E0208 23:59:52.600964 2425 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-jtggv" podUID=98870384-7aea-44e6-9d5f-6a617a76c122 Feb 8 23:59:52.618736 sshd[4318]: Accepted publickey for core from 10.200.12.6 port 48636 ssh2: RSA SHA256:bgxHmJM37JVrLJuGSWjL4vRG7UYDV2sE2SVK2HyWFow Feb 8 23:59:52.620212 sshd[4318]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:59:52.625281 systemd[1]: Started session-27.scope. Feb 8 23:59:52.625772 systemd-logind[1304]: New session 27 of user core. Feb 8 23:59:53.275663 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3804294207.mount: Deactivated successfully. 
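The kubelet prints the failing init container as a raw Go struct, which is hard to scan. Re-rendered as pod-spec YAML, with every value copied from that dump and only the layout new, it is roughly:

initContainers:
  - name: mount-cgroup
    image: quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5
    command:
      - sh
      - -ec
      - |
        cp /usr/bin/cilium-mount /hostbin/cilium-mount;
        nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT;
        rm /hostbin/cilium-mount
    env:
      - name: CGROUP_ROOT
        value: /run/cilium/cgroupv2
      - name: BIN_PATH
        value: /opt/cni/bin
    volumeMounts:
      - name: hostproc
        mountPath: /hostproc
      - name: cni-path
        mountPath: /hostbin
      - name: kube-api-access-m6q5z
        mountPath: /var/run/secrets/kubernetes.io/serviceaccount
        readOnly: true
    imagePullPolicy: IfNotPresent
    terminationMessagePolicy: FallbackToLogsOnError
    securityContext:
      capabilities:
        add: [SYS_ADMIN, SYS_CHROOT, SYS_PTRACE]
        drop: [ALL]
      seLinuxOptions:
        type: spc_t
        level: s0

The "write /proc/self/attr/keycreate: invalid argument" error is raised by runc during container init, before the sh script ever runs; it suggests the runtime could not apply the requested SELinux label (type spc_t) to the process keyring on this host.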
Feb 8 23:59:53.461059 kubelet[2425]: I0208 23:59:53.461022 2425 setters.go:548] "Node became not ready" node="ci-3510.3.2-a-b1d3c6d57d" condition={Type:Ready Status:False LastHeartbeatTime:2024-02-08 23:59:53.460952043 +0000 UTC m=+234.682605794 LastTransitionTime:2024-02-08 23:59:53.460952043 +0000 UTC m=+234.682605794 Reason:KubeletNotReady Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized} Feb 8 23:59:53.529467 env[1316]: time="2024-02-08T23:59:53.529040852Z" level=info msg="StopPodSandbox for \"287e2cfa7704caa7f1bcbf432838d946373f44d1d342a5d6d5ec7dbdca2b9422\"" Feb 8 23:59:53.529467 env[1316]: time="2024-02-08T23:59:53.529120153Z" level=info msg="Container to stop \"400041e2c5196d16856b99d579c4587e59b6a444abe1417fab08527336e026e6\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 8 23:59:53.535047 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-287e2cfa7704caa7f1bcbf432838d946373f44d1d342a5d6d5ec7dbdca2b9422-shm.mount: Deactivated successfully. Feb 8 23:59:53.548407 systemd[1]: cri-containerd-287e2cfa7704caa7f1bcbf432838d946373f44d1d342a5d6d5ec7dbdca2b9422.scope: Deactivated successfully. Feb 8 23:59:53.570281 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-287e2cfa7704caa7f1bcbf432838d946373f44d1d342a5d6d5ec7dbdca2b9422-rootfs.mount: Deactivated successfully. Feb 8 23:59:53.589597 env[1316]: time="2024-02-08T23:59:53.589550283Z" level=info msg="shim disconnected" id=287e2cfa7704caa7f1bcbf432838d946373f44d1d342a5d6d5ec7dbdca2b9422 Feb 8 23:59:53.589776 env[1316]: time="2024-02-08T23:59:53.589597584Z" level=warning msg="cleaning up after shim disconnected" id=287e2cfa7704caa7f1bcbf432838d946373f44d1d342a5d6d5ec7dbdca2b9422 namespace=k8s.io Feb 8 23:59:53.589776 env[1316]: time="2024-02-08T23:59:53.589611584Z" level=info msg="cleaning up dead shim" Feb 8 23:59:53.598565 env[1316]: time="2024-02-08T23:59:53.598529777Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:59:53Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4423 runtime=io.containerd.runc.v2\n" Feb 8 23:59:53.598876 env[1316]: time="2024-02-08T23:59:53.598845280Z" level=info msg="TearDown network for sandbox \"287e2cfa7704caa7f1bcbf432838d946373f44d1d342a5d6d5ec7dbdca2b9422\" successfully" Feb 8 23:59:53.598957 env[1316]: time="2024-02-08T23:59:53.598875380Z" level=info msg="StopPodSandbox for \"287e2cfa7704caa7f1bcbf432838d946373f44d1d342a5d6d5ec7dbdca2b9422\" returns successfully" Feb 8 23:59:53.776719 kubelet[2425]: I0208 23:59:53.776221 2425 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/98870384-7aea-44e6-9d5f-6a617a76c122-cilium-config-path\") pod \"98870384-7aea-44e6-9d5f-6a617a76c122\" (UID: \"98870384-7aea-44e6-9d5f-6a617a76c122\") " Feb 8 23:59:53.777215 kubelet[2425]: W0208 23:59:53.776609 2425 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/98870384-7aea-44e6-9d5f-6a617a76c122/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Feb 8 23:59:53.777215 kubelet[2425]: I0208 23:59:53.777060 2425 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/98870384-7aea-44e6-9d5f-6a617a76c122-clustermesh-secrets\") pod \"98870384-7aea-44e6-9d5f-6a617a76c122\" (UID: 
\"98870384-7aea-44e6-9d5f-6a617a76c122\") " Feb 8 23:59:53.777215 kubelet[2425]: I0208 23:59:53.777114 2425 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/98870384-7aea-44e6-9d5f-6a617a76c122-cni-path\") pod \"98870384-7aea-44e6-9d5f-6a617a76c122\" (UID: \"98870384-7aea-44e6-9d5f-6a617a76c122\") " Feb 8 23:59:53.777641 kubelet[2425]: I0208 23:59:53.777511 2425 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/98870384-7aea-44e6-9d5f-6a617a76c122-host-proc-sys-kernel\") pod \"98870384-7aea-44e6-9d5f-6a617a76c122\" (UID: \"98870384-7aea-44e6-9d5f-6a617a76c122\") " Feb 8 23:59:53.777641 kubelet[2425]: I0208 23:59:53.777580 2425 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/98870384-7aea-44e6-9d5f-6a617a76c122-hubble-tls\") pod \"98870384-7aea-44e6-9d5f-6a617a76c122\" (UID: \"98870384-7aea-44e6-9d5f-6a617a76c122\") " Feb 8 23:59:53.777641 kubelet[2425]: I0208 23:59:53.777616 2425 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/98870384-7aea-44e6-9d5f-6a617a76c122-etc-cni-netd\") pod \"98870384-7aea-44e6-9d5f-6a617a76c122\" (UID: \"98870384-7aea-44e6-9d5f-6a617a76c122\") " Feb 8 23:59:53.778035 kubelet[2425]: I0208 23:59:53.777921 2425 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/98870384-7aea-44e6-9d5f-6a617a76c122-cilium-ipsec-secrets\") pod \"98870384-7aea-44e6-9d5f-6a617a76c122\" (UID: \"98870384-7aea-44e6-9d5f-6a617a76c122\") " Feb 8 23:59:53.778035 kubelet[2425]: I0208 23:59:53.777986 2425 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m6q5z\" (UniqueName: \"kubernetes.io/projected/98870384-7aea-44e6-9d5f-6a617a76c122-kube-api-access-m6q5z\") pod \"98870384-7aea-44e6-9d5f-6a617a76c122\" (UID: \"98870384-7aea-44e6-9d5f-6a617a76c122\") " Feb 8 23:59:53.778292 kubelet[2425]: I0208 23:59:53.778214 2425 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/98870384-7aea-44e6-9d5f-6a617a76c122-bpf-maps\") pod \"98870384-7aea-44e6-9d5f-6a617a76c122\" (UID: \"98870384-7aea-44e6-9d5f-6a617a76c122\") " Feb 8 23:59:53.778292 kubelet[2425]: I0208 23:59:53.778262 2425 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/98870384-7aea-44e6-9d5f-6a617a76c122-hostproc\") pod \"98870384-7aea-44e6-9d5f-6a617a76c122\" (UID: \"98870384-7aea-44e6-9d5f-6a617a76c122\") " Feb 8 23:59:53.778510 kubelet[2425]: I0208 23:59:53.778495 2425 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/98870384-7aea-44e6-9d5f-6a617a76c122-xtables-lock\") pod \"98870384-7aea-44e6-9d5f-6a617a76c122\" (UID: \"98870384-7aea-44e6-9d5f-6a617a76c122\") " Feb 8 23:59:53.778664 kubelet[2425]: I0208 23:59:53.778651 2425 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/98870384-7aea-44e6-9d5f-6a617a76c122-lib-modules\") pod \"98870384-7aea-44e6-9d5f-6a617a76c122\" (UID: \"98870384-7aea-44e6-9d5f-6a617a76c122\") " Feb 8 23:59:53.778798 
kubelet[2425]: I0208 23:59:53.778786 2425 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/98870384-7aea-44e6-9d5f-6a617a76c122-cilium-run\") pod \"98870384-7aea-44e6-9d5f-6a617a76c122\" (UID: \"98870384-7aea-44e6-9d5f-6a617a76c122\") " Feb 8 23:59:53.778926 kubelet[2425]: I0208 23:59:53.778915 2425 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/98870384-7aea-44e6-9d5f-6a617a76c122-host-proc-sys-net\") pod \"98870384-7aea-44e6-9d5f-6a617a76c122\" (UID: \"98870384-7aea-44e6-9d5f-6a617a76c122\") " Feb 8 23:59:53.779061 kubelet[2425]: I0208 23:59:53.779050 2425 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/98870384-7aea-44e6-9d5f-6a617a76c122-cilium-cgroup\") pod \"98870384-7aea-44e6-9d5f-6a617a76c122\" (UID: \"98870384-7aea-44e6-9d5f-6a617a76c122\") " Feb 8 23:59:53.779237 kubelet[2425]: I0208 23:59:53.779202 2425 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/98870384-7aea-44e6-9d5f-6a617a76c122-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "98870384-7aea-44e6-9d5f-6a617a76c122" (UID: "98870384-7aea-44e6-9d5f-6a617a76c122"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:59:53.779373 kubelet[2425]: I0208 23:59:53.779356 2425 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/98870384-7aea-44e6-9d5f-6a617a76c122-cni-path" (OuterVolumeSpecName: "cni-path") pod "98870384-7aea-44e6-9d5f-6a617a76c122" (UID: "98870384-7aea-44e6-9d5f-6a617a76c122"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:59:53.779567 kubelet[2425]: I0208 23:59:53.779500 2425 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/98870384-7aea-44e6-9d5f-6a617a76c122-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "98870384-7aea-44e6-9d5f-6a617a76c122" (UID: "98870384-7aea-44e6-9d5f-6a617a76c122"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:59:53.785308 systemd[1]: var-lib-kubelet-pods-98870384\x2d7aea\x2d44e6\x2d9d5f\x2d6a617a76c122-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 8 23:59:53.786615 kubelet[2425]: I0208 23:59:53.779750 2425 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/98870384-7aea-44e6-9d5f-6a617a76c122-hostproc" (OuterVolumeSpecName: "hostproc") pod "98870384-7aea-44e6-9d5f-6a617a76c122" (UID: "98870384-7aea-44e6-9d5f-6a617a76c122"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:59:53.786732 kubelet[2425]: I0208 23:59:53.779922 2425 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/98870384-7aea-44e6-9d5f-6a617a76c122-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "98870384-7aea-44e6-9d5f-6a617a76c122" (UID: "98870384-7aea-44e6-9d5f-6a617a76c122"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:59:53.786812 kubelet[2425]: I0208 23:59:53.779945 2425 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/98870384-7aea-44e6-9d5f-6a617a76c122-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "98870384-7aea-44e6-9d5f-6a617a76c122" (UID: "98870384-7aea-44e6-9d5f-6a617a76c122"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:59:53.786893 kubelet[2425]: I0208 23:59:53.780017 2425 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/98870384-7aea-44e6-9d5f-6a617a76c122-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "98870384-7aea-44e6-9d5f-6a617a76c122" (UID: "98870384-7aea-44e6-9d5f-6a617a76c122"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:59:53.786983 kubelet[2425]: I0208 23:59:53.780034 2425 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/98870384-7aea-44e6-9d5f-6a617a76c122-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "98870384-7aea-44e6-9d5f-6a617a76c122" (UID: "98870384-7aea-44e6-9d5f-6a617a76c122"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:59:53.787338 kubelet[2425]: I0208 23:59:53.787317 2425 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/98870384-7aea-44e6-9d5f-6a617a76c122-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "98870384-7aea-44e6-9d5f-6a617a76c122" (UID: "98870384-7aea-44e6-9d5f-6a617a76c122"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:59:53.789188 kubelet[2425]: I0208 23:59:53.789154 2425 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/98870384-7aea-44e6-9d5f-6a617a76c122-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "98870384-7aea-44e6-9d5f-6a617a76c122" (UID: "98870384-7aea-44e6-9d5f-6a617a76c122"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 8 23:59:53.790717 kubelet[2425]: I0208 23:59:53.789371 2425 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98870384-7aea-44e6-9d5f-6a617a76c122-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "98870384-7aea-44e6-9d5f-6a617a76c122" (UID: "98870384-7aea-44e6-9d5f-6a617a76c122"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 8 23:59:53.790853 kubelet[2425]: I0208 23:59:53.790659 2425 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/98870384-7aea-44e6-9d5f-6a617a76c122-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "98870384-7aea-44e6-9d5f-6a617a76c122" (UID: "98870384-7aea-44e6-9d5f-6a617a76c122"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:59:53.794039 systemd[1]: var-lib-kubelet-pods-98870384\x2d7aea\x2d44e6\x2d9d5f\x2d6a617a76c122-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. 
Feb 8 23:59:53.795212 kubelet[2425]: I0208 23:59:53.795178 2425 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98870384-7aea-44e6-9d5f-6a617a76c122-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "98870384-7aea-44e6-9d5f-6a617a76c122" (UID: "98870384-7aea-44e6-9d5f-6a617a76c122"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 8 23:59:53.799615 kubelet[2425]: I0208 23:59:53.799573 2425 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/98870384-7aea-44e6-9d5f-6a617a76c122-kube-api-access-m6q5z" (OuterVolumeSpecName: "kube-api-access-m6q5z") pod "98870384-7aea-44e6-9d5f-6a617a76c122" (UID: "98870384-7aea-44e6-9d5f-6a617a76c122"). InnerVolumeSpecName "kube-api-access-m6q5z". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 8 23:59:53.803646 kubelet[2425]: I0208 23:59:53.803623 2425 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/98870384-7aea-44e6-9d5f-6a617a76c122-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "98870384-7aea-44e6-9d5f-6a617a76c122" (UID: "98870384-7aea-44e6-9d5f-6a617a76c122"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 8 23:59:53.880217 kubelet[2425]: I0208 23:59:53.880170 2425 reconciler_common.go:295] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/98870384-7aea-44e6-9d5f-6a617a76c122-hostproc\") on node \"ci-3510.3.2-a-b1d3c6d57d\" DevicePath \"\"" Feb 8 23:59:53.880217 kubelet[2425]: I0208 23:59:53.880219 2425 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/98870384-7aea-44e6-9d5f-6a617a76c122-host-proc-sys-net\") on node \"ci-3510.3.2-a-b1d3c6d57d\" DevicePath \"\"" Feb 8 23:59:53.880217 kubelet[2425]: I0208 23:59:53.880233 2425 reconciler_common.go:295] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/98870384-7aea-44e6-9d5f-6a617a76c122-cilium-cgroup\") on node \"ci-3510.3.2-a-b1d3c6d57d\" DevicePath \"\"" Feb 8 23:59:53.880579 kubelet[2425]: I0208 23:59:53.880246 2425 reconciler_common.go:295] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/98870384-7aea-44e6-9d5f-6a617a76c122-xtables-lock\") on node \"ci-3510.3.2-a-b1d3c6d57d\" DevicePath \"\"" Feb 8 23:59:53.880579 kubelet[2425]: I0208 23:59:53.880259 2425 reconciler_common.go:295] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/98870384-7aea-44e6-9d5f-6a617a76c122-lib-modules\") on node \"ci-3510.3.2-a-b1d3c6d57d\" DevicePath \"\"" Feb 8 23:59:53.880579 kubelet[2425]: I0208 23:59:53.880274 2425 reconciler_common.go:295] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/98870384-7aea-44e6-9d5f-6a617a76c122-cilium-run\") on node \"ci-3510.3.2-a-b1d3c6d57d\" DevicePath \"\"" Feb 8 23:59:53.880579 kubelet[2425]: I0208 23:59:53.880286 2425 reconciler_common.go:295] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/98870384-7aea-44e6-9d5f-6a617a76c122-clustermesh-secrets\") on node \"ci-3510.3.2-a-b1d3c6d57d\" DevicePath \"\"" Feb 8 23:59:53.880579 kubelet[2425]: I0208 23:59:53.880300 2425 reconciler_common.go:295] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/98870384-7aea-44e6-9d5f-6a617a76c122-cilium-config-path\") on 
node \"ci-3510.3.2-a-b1d3c6d57d\" DevicePath \"\"" Feb 8 23:59:53.880579 kubelet[2425]: I0208 23:59:53.880312 2425 reconciler_common.go:295] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/98870384-7aea-44e6-9d5f-6a617a76c122-cni-path\") on node \"ci-3510.3.2-a-b1d3c6d57d\" DevicePath \"\"" Feb 8 23:59:53.880579 kubelet[2425]: I0208 23:59:53.880326 2425 reconciler_common.go:295] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/98870384-7aea-44e6-9d5f-6a617a76c122-hubble-tls\") on node \"ci-3510.3.2-a-b1d3c6d57d\" DevicePath \"\"" Feb 8 23:59:53.880579 kubelet[2425]: I0208 23:59:53.880341 2425 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/98870384-7aea-44e6-9d5f-6a617a76c122-host-proc-sys-kernel\") on node \"ci-3510.3.2-a-b1d3c6d57d\" DevicePath \"\"" Feb 8 23:59:53.880955 kubelet[2425]: I0208 23:59:53.880354 2425 reconciler_common.go:295] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/98870384-7aea-44e6-9d5f-6a617a76c122-etc-cni-netd\") on node \"ci-3510.3.2-a-b1d3c6d57d\" DevicePath \"\"" Feb 8 23:59:53.880955 kubelet[2425]: I0208 23:59:53.880370 2425 reconciler_common.go:295] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/98870384-7aea-44e6-9d5f-6a617a76c122-cilium-ipsec-secrets\") on node \"ci-3510.3.2-a-b1d3c6d57d\" DevicePath \"\"" Feb 8 23:59:53.880955 kubelet[2425]: I0208 23:59:53.880383 2425 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-m6q5z\" (UniqueName: \"kubernetes.io/projected/98870384-7aea-44e6-9d5f-6a617a76c122-kube-api-access-m6q5z\") on node \"ci-3510.3.2-a-b1d3c6d57d\" DevicePath \"\"" Feb 8 23:59:53.880955 kubelet[2425]: I0208 23:59:53.880396 2425 reconciler_common.go:295] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/98870384-7aea-44e6-9d5f-6a617a76c122-bpf-maps\") on node \"ci-3510.3.2-a-b1d3c6d57d\" DevicePath \"\"" Feb 8 23:59:54.037526 kubelet[2425]: E0208 23:59:54.037392 2425 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 8 23:59:54.275200 systemd[1]: var-lib-kubelet-pods-98870384\x2d7aea\x2d44e6\x2d9d5f\x2d6a617a76c122-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 8 23:59:54.275593 systemd[1]: var-lib-kubelet-pods-98870384\x2d7aea\x2d44e6\x2d9d5f\x2d6a617a76c122-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dm6q5z.mount: Deactivated successfully. Feb 8 23:59:54.532659 kubelet[2425]: I0208 23:59:54.532623 2425 scope.go:115] "RemoveContainer" containerID="400041e2c5196d16856b99d579c4587e59b6a444abe1417fab08527336e026e6" Feb 8 23:59:54.537970 systemd[1]: Removed slice kubepods-burstable-pod98870384_7aea_44e6_9d5f_6a617a76c122.slice. 
Feb 8 23:59:54.541259 env[1316]: time="2024-02-08T23:59:54.540598970Z" level=info msg="RemoveContainer for \"400041e2c5196d16856b99d579c4587e59b6a444abe1417fab08527336e026e6\"" Feb 8 23:59:54.555476 env[1316]: time="2024-02-08T23:59:54.555233222Z" level=info msg="RemoveContainer for \"400041e2c5196d16856b99d579c4587e59b6a444abe1417fab08527336e026e6\" returns successfully" Feb 8 23:59:54.570564 kubelet[2425]: I0208 23:59:54.570536 2425 topology_manager.go:210] "Topology Admit Handler" Feb 8 23:59:54.570689 kubelet[2425]: E0208 23:59:54.570617 2425 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="98870384-7aea-44e6-9d5f-6a617a76c122" containerName="mount-cgroup" Feb 8 23:59:54.570689 kubelet[2425]: I0208 23:59:54.570656 2425 memory_manager.go:346] "RemoveStaleState removing state" podUID="98870384-7aea-44e6-9d5f-6a617a76c122" containerName="mount-cgroup" Feb 8 23:59:54.577433 systemd[1]: Created slice kubepods-burstable-pod6e45af99_8eaa_457e_ad01_38c640fc4cda.slice. Feb 8 23:59:54.684991 kubelet[2425]: I0208 23:59:54.684942 2425 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6e45af99-8eaa-457e-ad01-38c640fc4cda-host-proc-sys-kernel\") pod \"cilium-7vszq\" (UID: \"6e45af99-8eaa-457e-ad01-38c640fc4cda\") " pod="kube-system/cilium-7vszq" Feb 8 23:59:54.685223 kubelet[2425]: I0208 23:59:54.685013 2425 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nv2pg\" (UniqueName: \"kubernetes.io/projected/6e45af99-8eaa-457e-ad01-38c640fc4cda-kube-api-access-nv2pg\") pod \"cilium-7vszq\" (UID: \"6e45af99-8eaa-457e-ad01-38c640fc4cda\") " pod="kube-system/cilium-7vszq" Feb 8 23:59:54.685223 kubelet[2425]: I0208 23:59:54.685051 2425 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6e45af99-8eaa-457e-ad01-38c640fc4cda-hostproc\") pod \"cilium-7vszq\" (UID: \"6e45af99-8eaa-457e-ad01-38c640fc4cda\") " pod="kube-system/cilium-7vszq" Feb 8 23:59:54.685223 kubelet[2425]: I0208 23:59:54.685081 2425 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6e45af99-8eaa-457e-ad01-38c640fc4cda-cilium-config-path\") pod \"cilium-7vszq\" (UID: \"6e45af99-8eaa-457e-ad01-38c640fc4cda\") " pod="kube-system/cilium-7vszq" Feb 8 23:59:54.685223 kubelet[2425]: I0208 23:59:54.685111 2425 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6e45af99-8eaa-457e-ad01-38c640fc4cda-cilium-cgroup\") pod \"cilium-7vszq\" (UID: \"6e45af99-8eaa-457e-ad01-38c640fc4cda\") " pod="kube-system/cilium-7vszq" Feb 8 23:59:54.685223 kubelet[2425]: I0208 23:59:54.685140 2425 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6e45af99-8eaa-457e-ad01-38c640fc4cda-lib-modules\") pod \"cilium-7vszq\" (UID: \"6e45af99-8eaa-457e-ad01-38c640fc4cda\") " pod="kube-system/cilium-7vszq" Feb 8 23:59:54.685223 kubelet[2425]: I0208 23:59:54.685174 2425 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6e45af99-8eaa-457e-ad01-38c640fc4cda-clustermesh-secrets\") pod \"cilium-7vszq\" 
(UID: \"6e45af99-8eaa-457e-ad01-38c640fc4cda\") " pod="kube-system/cilium-7vszq" Feb 8 23:59:54.685680 kubelet[2425]: I0208 23:59:54.685209 2425 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6e45af99-8eaa-457e-ad01-38c640fc4cda-host-proc-sys-net\") pod \"cilium-7vszq\" (UID: \"6e45af99-8eaa-457e-ad01-38c640fc4cda\") " pod="kube-system/cilium-7vszq" Feb 8 23:59:54.685680 kubelet[2425]: I0208 23:59:54.685241 2425 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6e45af99-8eaa-457e-ad01-38c640fc4cda-cni-path\") pod \"cilium-7vszq\" (UID: \"6e45af99-8eaa-457e-ad01-38c640fc4cda\") " pod="kube-system/cilium-7vszq" Feb 8 23:59:54.685680 kubelet[2425]: I0208 23:59:54.685282 2425 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6e45af99-8eaa-457e-ad01-38c640fc4cda-cilium-run\") pod \"cilium-7vszq\" (UID: \"6e45af99-8eaa-457e-ad01-38c640fc4cda\") " pod="kube-system/cilium-7vszq" Feb 8 23:59:54.685680 kubelet[2425]: I0208 23:59:54.685317 2425 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6e45af99-8eaa-457e-ad01-38c640fc4cda-bpf-maps\") pod \"cilium-7vszq\" (UID: \"6e45af99-8eaa-457e-ad01-38c640fc4cda\") " pod="kube-system/cilium-7vszq" Feb 8 23:59:54.685680 kubelet[2425]: I0208 23:59:54.685356 2425 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6e45af99-8eaa-457e-ad01-38c640fc4cda-xtables-lock\") pod \"cilium-7vszq\" (UID: \"6e45af99-8eaa-457e-ad01-38c640fc4cda\") " pod="kube-system/cilium-7vszq" Feb 8 23:59:54.685680 kubelet[2425]: I0208 23:59:54.685390 2425 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6e45af99-8eaa-457e-ad01-38c640fc4cda-hubble-tls\") pod \"cilium-7vszq\" (UID: \"6e45af99-8eaa-457e-ad01-38c640fc4cda\") " pod="kube-system/cilium-7vszq" Feb 8 23:59:54.685997 kubelet[2425]: I0208 23:59:54.685432 2425 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/6e45af99-8eaa-457e-ad01-38c640fc4cda-cilium-ipsec-secrets\") pod \"cilium-7vszq\" (UID: \"6e45af99-8eaa-457e-ad01-38c640fc4cda\") " pod="kube-system/cilium-7vszq" Feb 8 23:59:54.685997 kubelet[2425]: I0208 23:59:54.685492 2425 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6e45af99-8eaa-457e-ad01-38c640fc4cda-etc-cni-netd\") pod \"cilium-7vszq\" (UID: \"6e45af99-8eaa-457e-ad01-38c640fc4cda\") " pod="kube-system/cilium-7vszq" Feb 8 23:59:54.883824 env[1316]: time="2024-02-08T23:59:54.883762731Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-7vszq,Uid:6e45af99-8eaa-457e-ad01-38c640fc4cda,Namespace:kube-system,Attempt:0,}" Feb 8 23:59:54.920803 env[1316]: time="2024-02-08T23:59:54.920724915Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 8 23:59:54.921249 env[1316]: time="2024-02-08T23:59:54.920763815Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 8 23:59:54.921249 env[1316]: time="2024-02-08T23:59:54.920777215Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 8 23:59:54.921249 env[1316]: time="2024-02-08T23:59:54.920916617Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f2551c2326659ad9f7b5d7f2ddad8d74ded081a1d04ee67d53f4792f61476819 pid=4453 runtime=io.containerd.runc.v2 Feb 8 23:59:54.939083 systemd[1]: Started cri-containerd-f2551c2326659ad9f7b5d7f2ddad8d74ded081a1d04ee67d53f4792f61476819.scope. Feb 8 23:59:54.963104 kubelet[2425]: I0208 23:59:54.963062 2425 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=98870384-7aea-44e6-9d5f-6a617a76c122 path="/var/lib/kubelet/pods/98870384-7aea-44e6-9d5f-6a617a76c122/volumes" Feb 8 23:59:54.966969 env[1316]: time="2024-02-08T23:59:54.966932394Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-7vszq,Uid:6e45af99-8eaa-457e-ad01-38c640fc4cda,Namespace:kube-system,Attempt:0,} returns sandbox id \"f2551c2326659ad9f7b5d7f2ddad8d74ded081a1d04ee67d53f4792f61476819\"" Feb 8 23:59:54.971086 env[1316]: time="2024-02-08T23:59:54.971027137Z" level=info msg="CreateContainer within sandbox \"f2551c2326659ad9f7b5d7f2ddad8d74ded081a1d04ee67d53f4792f61476819\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 8 23:59:55.011506 env[1316]: time="2024-02-08T23:59:55.011464856Z" level=info msg="CreateContainer within sandbox \"f2551c2326659ad9f7b5d7f2ddad8d74ded081a1d04ee67d53f4792f61476819\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"0bc1487d6cb73b6aabee6f5c1b48477f845f36af4233a5202ade7b1186f996ca\"" Feb 8 23:59:55.013484 env[1316]: time="2024-02-08T23:59:55.012126063Z" level=info msg="StartContainer for \"0bc1487d6cb73b6aabee6f5c1b48477f845f36af4233a5202ade7b1186f996ca\"" Feb 8 23:59:55.037089 systemd[1]: Started cri-containerd-0bc1487d6cb73b6aabee6f5c1b48477f845f36af4233a5202ade7b1186f996ca.scope. Feb 8 23:59:55.080565 env[1316]: time="2024-02-08T23:59:55.080509370Z" level=info msg="StartContainer for \"0bc1487d6cb73b6aabee6f5c1b48477f845f36af4233a5202ade7b1186f996ca\" returns successfully" Feb 8 23:59:55.091243 systemd[1]: cri-containerd-0bc1487d6cb73b6aabee6f5c1b48477f845f36af4233a5202ade7b1186f996ca.scope: Deactivated successfully. 
Feb 8 23:59:55.137831 env[1316]: time="2024-02-08T23:59:55.137142355Z" level=info msg="shim disconnected" id=0bc1487d6cb73b6aabee6f5c1b48477f845f36af4233a5202ade7b1186f996ca Feb 8 23:59:55.137831 env[1316]: time="2024-02-08T23:59:55.137199755Z" level=warning msg="cleaning up after shim disconnected" id=0bc1487d6cb73b6aabee6f5c1b48477f845f36af4233a5202ade7b1186f996ca namespace=k8s.io Feb 8 23:59:55.137831 env[1316]: time="2024-02-08T23:59:55.137211655Z" level=info msg="cleaning up dead shim" Feb 8 23:59:55.146057 env[1316]: time="2024-02-08T23:59:55.146015946Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:59:55Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4538 runtime=io.containerd.runc.v2\n" Feb 8 23:59:55.540764 env[1316]: time="2024-02-08T23:59:55.540611924Z" level=info msg="CreateContainer within sandbox \"f2551c2326659ad9f7b5d7f2ddad8d74ded081a1d04ee67d53f4792f61476819\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 8 23:59:55.570342 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4262059425.mount: Deactivated successfully. Feb 8 23:59:55.573158 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1375033855.mount: Deactivated successfully. Feb 8 23:59:55.584384 env[1316]: time="2024-02-08T23:59:55.584288875Z" level=info msg="CreateContainer within sandbox \"f2551c2326659ad9f7b5d7f2ddad8d74ded081a1d04ee67d53f4792f61476819\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"d5123a3d0f0e9838bd9ca661b5dbdc50f5d200c52f6b45f610fbe6e1f838802f\"" Feb 8 23:59:55.585249 env[1316]: time="2024-02-08T23:59:55.585218284Z" level=info msg="StartContainer for \"d5123a3d0f0e9838bd9ca661b5dbdc50f5d200c52f6b45f610fbe6e1f838802f\"" Feb 8 23:59:55.612389 systemd[1]: Started cri-containerd-d5123a3d0f0e9838bd9ca661b5dbdc50f5d200c52f6b45f610fbe6e1f838802f.scope. Feb 8 23:59:55.641992 env[1316]: time="2024-02-08T23:59:55.641931070Z" level=info msg="StartContainer for \"d5123a3d0f0e9838bd9ca661b5dbdc50f5d200c52f6b45f610fbe6e1f838802f\" returns successfully" Feb 8 23:59:55.647346 systemd[1]: cri-containerd-d5123a3d0f0e9838bd9ca661b5dbdc50f5d200c52f6b45f610fbe6e1f838802f.scope: Deactivated successfully. 
Feb 8 23:59:55.680279 env[1316]: time="2024-02-08T23:59:55.680220266Z" level=info msg="shim disconnected" id=d5123a3d0f0e9838bd9ca661b5dbdc50f5d200c52f6b45f610fbe6e1f838802f Feb 8 23:59:55.680279 env[1316]: time="2024-02-08T23:59:55.680281167Z" level=warning msg="cleaning up after shim disconnected" id=d5123a3d0f0e9838bd9ca661b5dbdc50f5d200c52f6b45f610fbe6e1f838802f namespace=k8s.io Feb 8 23:59:55.680599 env[1316]: time="2024-02-08T23:59:55.680293167Z" level=info msg="cleaning up dead shim" Feb 8 23:59:55.688073 env[1316]: time="2024-02-08T23:59:55.688009047Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:59:55Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4601 runtime=io.containerd.runc.v2\n" Feb 8 23:59:55.688787 kubelet[2425]: W0208 23:59:55.688741 2425 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod98870384_7aea_44e6_9d5f_6a617a76c122.slice/cri-containerd-400041e2c5196d16856b99d579c4587e59b6a444abe1417fab08527336e026e6.scope WatchSource:0}: container "400041e2c5196d16856b99d579c4587e59b6a444abe1417fab08527336e026e6" in namespace "k8s.io": not found Feb 8 23:59:56.275253 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d5123a3d0f0e9838bd9ca661b5dbdc50f5d200c52f6b45f610fbe6e1f838802f-rootfs.mount: Deactivated successfully. Feb 8 23:59:56.549520 env[1316]: time="2024-02-08T23:59:56.547633805Z" level=info msg="CreateContainer within sandbox \"f2551c2326659ad9f7b5d7f2ddad8d74ded081a1d04ee67d53f4792f61476819\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 8 23:59:56.585415 env[1316]: time="2024-02-08T23:59:56.585364693Z" level=info msg="CreateContainer within sandbox \"f2551c2326659ad9f7b5d7f2ddad8d74ded081a1d04ee67d53f4792f61476819\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"54b33f2df61b5df23d17464c1fdf0935b23a0b5a58cf6afbbe94d938a96fcba3\"" Feb 8 23:59:56.586070 env[1316]: time="2024-02-08T23:59:56.586037100Z" level=info msg="StartContainer for \"54b33f2df61b5df23d17464c1fdf0935b23a0b5a58cf6afbbe94d938a96fcba3\"" Feb 8 23:59:56.614964 systemd[1]: Started cri-containerd-54b33f2df61b5df23d17464c1fdf0935b23a0b5a58cf6afbbe94d938a96fcba3.scope. Feb 8 23:59:56.648305 systemd[1]: cri-containerd-54b33f2df61b5df23d17464c1fdf0935b23a0b5a58cf6afbbe94d938a96fcba3.scope: Deactivated successfully. Feb 8 23:59:56.652648 env[1316]: time="2024-02-08T23:59:56.652603985Z" level=info msg="StartContainer for \"54b33f2df61b5df23d17464c1fdf0935b23a0b5a58cf6afbbe94d938a96fcba3\" returns successfully" Feb 8 23:59:56.681225 env[1316]: time="2024-02-08T23:59:56.681173879Z" level=info msg="shim disconnected" id=54b33f2df61b5df23d17464c1fdf0935b23a0b5a58cf6afbbe94d938a96fcba3 Feb 8 23:59:56.681225 env[1316]: time="2024-02-08T23:59:56.681224180Z" level=warning msg="cleaning up after shim disconnected" id=54b33f2df61b5df23d17464c1fdf0935b23a0b5a58cf6afbbe94d938a96fcba3 namespace=k8s.io Feb 8 23:59:56.681534 env[1316]: time="2024-02-08T23:59:56.681235680Z" level=info msg="cleaning up dead shim" Feb 8 23:59:56.688550 env[1316]: time="2024-02-08T23:59:56.688516955Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:59:56Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4658 runtime=io.containerd.runc.v2\n" Feb 8 23:59:57.276037 systemd[1]: run-containerd-runc-k8s.io-54b33f2df61b5df23d17464c1fdf0935b23a0b5a58cf6afbbe94d938a96fcba3-runc.9sqYXX.mount: Deactivated successfully. 
Feb 8 23:59:57.276195 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-54b33f2df61b5df23d17464c1fdf0935b23a0b5a58cf6afbbe94d938a96fcba3-rootfs.mount: Deactivated successfully. Feb 8 23:59:57.550765 env[1316]: time="2024-02-08T23:59:57.550342699Z" level=info msg="CreateContainer within sandbox \"f2551c2326659ad9f7b5d7f2ddad8d74ded081a1d04ee67d53f4792f61476819\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 8 23:59:57.594568 env[1316]: time="2024-02-08T23:59:57.594522351Z" level=info msg="CreateContainer within sandbox \"f2551c2326659ad9f7b5d7f2ddad8d74ded081a1d04ee67d53f4792f61476819\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"8de6e42b4a07c397b05c58121b84e5d5a30a0fb6b20ce0b70537fe695c88a9bc\"" Feb 8 23:59:57.595373 env[1316]: time="2024-02-08T23:59:57.595340060Z" level=info msg="StartContainer for \"8de6e42b4a07c397b05c58121b84e5d5a30a0fb6b20ce0b70537fe695c88a9bc\"" Feb 8 23:59:57.623135 systemd[1]: Started cri-containerd-8de6e42b4a07c397b05c58121b84e5d5a30a0fb6b20ce0b70537fe695c88a9bc.scope. Feb 8 23:59:57.646631 systemd[1]: cri-containerd-8de6e42b4a07c397b05c58121b84e5d5a30a0fb6b20ce0b70537fe695c88a9bc.scope: Deactivated successfully. Feb 8 23:59:57.648271 env[1316]: time="2024-02-08T23:59:57.647994899Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6e45af99_8eaa_457e_ad01_38c640fc4cda.slice/cri-containerd-8de6e42b4a07c397b05c58121b84e5d5a30a0fb6b20ce0b70537fe695c88a9bc.scope/memory.events\": no such file or directory" Feb 8 23:59:57.653132 env[1316]: time="2024-02-08T23:59:57.653087451Z" level=info msg="StartContainer for \"8de6e42b4a07c397b05c58121b84e5d5a30a0fb6b20ce0b70537fe695c88a9bc\" returns successfully" Feb 8 23:59:57.685774 env[1316]: time="2024-02-08T23:59:57.685719786Z" level=info msg="shim disconnected" id=8de6e42b4a07c397b05c58121b84e5d5a30a0fb6b20ce0b70537fe695c88a9bc Feb 8 23:59:57.686007 env[1316]: time="2024-02-08T23:59:57.685778686Z" level=warning msg="cleaning up after shim disconnected" id=8de6e42b4a07c397b05c58121b84e5d5a30a0fb6b20ce0b70537fe695c88a9bc namespace=k8s.io Feb 8 23:59:57.686007 env[1316]: time="2024-02-08T23:59:57.685792186Z" level=info msg="cleaning up dead shim" Feb 8 23:59:57.693811 env[1316]: time="2024-02-08T23:59:57.693775768Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:59:57Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4713 runtime=io.containerd.runc.v2\n" Feb 8 23:59:58.275565 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8de6e42b4a07c397b05c58121b84e5d5a30a0fb6b20ce0b70537fe695c88a9bc-rootfs.mount: Deactivated successfully. 
Feb 8 23:59:58.555545 env[1316]: time="2024-02-08T23:59:58.555187471Z" level=info msg="CreateContainer within sandbox \"f2551c2326659ad9f7b5d7f2ddad8d74ded081a1d04ee67d53f4792f61476819\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 8 23:59:58.591356 env[1316]: time="2024-02-08T23:59:58.591304340Z" level=info msg="CreateContainer within sandbox \"f2551c2326659ad9f7b5d7f2ddad8d74ded081a1d04ee67d53f4792f61476819\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"5cc6c6814f24a819bc1e99e3045ae8824155ee5f9de562762881bbd6a8658028\"" Feb 8 23:59:58.592010 env[1316]: time="2024-02-08T23:59:58.591973347Z" level=info msg="StartContainer for \"5cc6c6814f24a819bc1e99e3045ae8824155ee5f9de562762881bbd6a8658028\"" Feb 8 23:59:58.617900 systemd[1]: Started cri-containerd-5cc6c6814f24a819bc1e99e3045ae8824155ee5f9de562762881bbd6a8658028.scope. Feb 8 23:59:58.649921 env[1316]: time="2024-02-08T23:59:58.649871337Z" level=info msg="StartContainer for \"5cc6c6814f24a819bc1e99e3045ae8824155ee5f9de562762881bbd6a8658028\" returns successfully" Feb 8 23:59:58.798531 kubelet[2425]: W0208 23:59:58.798368 2425 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6e45af99_8eaa_457e_ad01_38c640fc4cda.slice/cri-containerd-0bc1487d6cb73b6aabee6f5c1b48477f845f36af4233a5202ade7b1186f996ca.scope WatchSource:0}: task 0bc1487d6cb73b6aabee6f5c1b48477f845f36af4233a5202ade7b1186f996ca not found: not found Feb 8 23:59:58.936470 env[1316]: time="2024-02-08T23:59:58.936142758Z" level=info msg="StopPodSandbox for \"287e2cfa7704caa7f1bcbf432838d946373f44d1d342a5d6d5ec7dbdca2b9422\"" Feb 8 23:59:58.936470 env[1316]: time="2024-02-08T23:59:58.936301560Z" level=info msg="TearDown network for sandbox \"287e2cfa7704caa7f1bcbf432838d946373f44d1d342a5d6d5ec7dbdca2b9422\" successfully" Feb 8 23:59:58.936470 env[1316]: time="2024-02-08T23:59:58.936368661Z" level=info msg="StopPodSandbox for \"287e2cfa7704caa7f1bcbf432838d946373f44d1d342a5d6d5ec7dbdca2b9422\" returns successfully" Feb 8 23:59:58.938187 env[1316]: time="2024-02-08T23:59:58.937280370Z" level=info msg="RemovePodSandbox for \"287e2cfa7704caa7f1bcbf432838d946373f44d1d342a5d6d5ec7dbdca2b9422\"" Feb 8 23:59:58.938187 env[1316]: time="2024-02-08T23:59:58.937319571Z" level=info msg="Forcibly stopping sandbox \"287e2cfa7704caa7f1bcbf432838d946373f44d1d342a5d6d5ec7dbdca2b9422\"" Feb 8 23:59:58.938187 env[1316]: time="2024-02-08T23:59:58.937481872Z" level=info msg="TearDown network for sandbox \"287e2cfa7704caa7f1bcbf432838d946373f44d1d342a5d6d5ec7dbdca2b9422\" successfully" Feb 8 23:59:58.955016 env[1316]: time="2024-02-08T23:59:58.954973651Z" level=info msg="RemovePodSandbox \"287e2cfa7704caa7f1bcbf432838d946373f44d1d342a5d6d5ec7dbdca2b9422\" returns successfully" Feb 8 23:59:58.955592 env[1316]: time="2024-02-08T23:59:58.955559357Z" level=info msg="StopPodSandbox for \"9bdcd40dc5ea01871317c0648e9570c712cb08f03509ef5c68e960f36785e1d2\"" Feb 8 23:59:58.955857 env[1316]: time="2024-02-08T23:59:58.955808359Z" level=info msg="TearDown network for sandbox \"9bdcd40dc5ea01871317c0648e9570c712cb08f03509ef5c68e960f36785e1d2\" successfully" Feb 8 23:59:58.955958 env[1316]: time="2024-02-08T23:59:58.955938460Z" level=info msg="StopPodSandbox for \"9bdcd40dc5ea01871317c0648e9570c712cb08f03509ef5c68e960f36785e1d2\" returns successfully" Feb 8 23:59:58.956408 env[1316]: time="2024-02-08T23:59:58.956373965Z" level=info msg="RemovePodSandbox for 
\"9bdcd40dc5ea01871317c0648e9570c712cb08f03509ef5c68e960f36785e1d2\"" Feb 8 23:59:58.956586 env[1316]: time="2024-02-08T23:59:58.956543267Z" level=info msg="Forcibly stopping sandbox \"9bdcd40dc5ea01871317c0648e9570c712cb08f03509ef5c68e960f36785e1d2\"" Feb 8 23:59:58.956769 env[1316]: time="2024-02-08T23:59:58.956730669Z" level=info msg="TearDown network for sandbox \"9bdcd40dc5ea01871317c0648e9570c712cb08f03509ef5c68e960f36785e1d2\" successfully" Feb 8 23:59:58.973703 env[1316]: time="2024-02-08T23:59:58.973669941Z" level=info msg="RemovePodSandbox \"9bdcd40dc5ea01871317c0648e9570c712cb08f03509ef5c68e960f36785e1d2\" returns successfully" Feb 8 23:59:58.974111 env[1316]: time="2024-02-08T23:59:58.974084846Z" level=info msg="StopPodSandbox for \"09e7b84f364388f6697d6b09828c960c9a62f144608e7827ce7adf1fe76acbf6\"" Feb 8 23:59:58.974206 env[1316]: time="2024-02-08T23:59:58.974165546Z" level=info msg="TearDown network for sandbox \"09e7b84f364388f6697d6b09828c960c9a62f144608e7827ce7adf1fe76acbf6\" successfully" Feb 8 23:59:58.974265 env[1316]: time="2024-02-08T23:59:58.974206747Z" level=info msg="StopPodSandbox for \"09e7b84f364388f6697d6b09828c960c9a62f144608e7827ce7adf1fe76acbf6\" returns successfully" Feb 8 23:59:58.974503 env[1316]: time="2024-02-08T23:59:58.974476550Z" level=info msg="RemovePodSandbox for \"09e7b84f364388f6697d6b09828c960c9a62f144608e7827ce7adf1fe76acbf6\"" Feb 8 23:59:58.974587 env[1316]: time="2024-02-08T23:59:58.974510750Z" level=info msg="Forcibly stopping sandbox \"09e7b84f364388f6697d6b09828c960c9a62f144608e7827ce7adf1fe76acbf6\"" Feb 8 23:59:58.974638 env[1316]: time="2024-02-08T23:59:58.974588351Z" level=info msg="TearDown network for sandbox \"09e7b84f364388f6697d6b09828c960c9a62f144608e7827ce7adf1fe76acbf6\" successfully" Feb 8 23:59:58.981753 env[1316]: time="2024-02-08T23:59:58.981722924Z" level=info msg="RemovePodSandbox \"09e7b84f364388f6697d6b09828c960c9a62f144608e7827ce7adf1fe76acbf6\" returns successfully" Feb 8 23:59:59.162485 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Feb 9 00:00:01.353689 systemd[1]: Started logrotate.service. Feb 9 00:00:01.364146 systemd[1]: run-containerd-runc-k8s.io-5cc6c6814f24a819bc1e99e3045ae8824155ee5f9de562762881bbd6a8658028-runc.sviiDD.mount: Deactivated successfully. Feb 9 00:00:01.368251 systemd[1]: logrotate.service: Deactivated successfully. 
Feb 9 00:00:01.906549 kubelet[2425]: W0209 00:00:01.906498 2425 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6e45af99_8eaa_457e_ad01_38c640fc4cda.slice/cri-containerd-d5123a3d0f0e9838bd9ca661b5dbdc50f5d200c52f6b45f610fbe6e1f838802f.scope WatchSource:0}: task d5123a3d0f0e9838bd9ca661b5dbdc50f5d200c52f6b45f610fbe6e1f838802f not found: not found Feb 9 00:00:01.934356 systemd-networkd[1462]: lxc_health: Link UP Feb 9 00:00:01.953506 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Feb 9 00:00:01.952270 systemd-networkd[1462]: lxc_health: Gained carrier Feb 9 00:00:02.908180 kubelet[2425]: I0209 00:00:02.908131 2425 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-7vszq" podStartSLOduration=8.908060995 pod.CreationTimestamp="2024-02-08 23:59:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-08 23:59:59.572074024 +0000 UTC m=+240.793727775" watchObservedRunningTime="2024-02-09 00:00:02.908060995 +0000 UTC m=+244.129714746" Feb 9 00:00:03.440632 systemd-networkd[1462]: lxc_health: Gained IPv6LL Feb 9 00:00:05.027394 kubelet[2425]: W0209 00:00:05.027329 2425 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6e45af99_8eaa_457e_ad01_38c640fc4cda.slice/cri-containerd-54b33f2df61b5df23d17464c1fdf0935b23a0b5a58cf6afbbe94d938a96fcba3.scope WatchSource:0}: task 54b33f2df61b5df23d17464c1fdf0935b23a0b5a58cf6afbbe94d938a96fcba3 not found: not found Feb 9 00:00:08.132560 sshd[4318]: pam_unix(sshd:session): session closed for user core Feb 9 00:00:08.138506 kubelet[2425]: W0209 00:00:08.135731 2425 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6e45af99_8eaa_457e_ad01_38c640fc4cda.slice/cri-containerd-8de6e42b4a07c397b05c58121b84e5d5a30a0fb6b20ce0b70537fe695c88a9bc.scope WatchSource:0}: task 8de6e42b4a07c397b05c58121b84e5d5a30a0fb6b20ce0b70537fe695c88a9bc not found: not found Feb 9 00:00:08.136845 systemd[1]: sshd@24-10.200.8.17:22-10.200.12.6:48636.service: Deactivated successfully. Feb 9 00:00:08.138518 systemd-logind[1304]: Session 27 logged out. Waiting for processes to exit. Feb 9 00:00:08.139619 systemd[1]: session-27.scope: Deactivated successfully. Feb 9 00:00:08.140873 systemd-logind[1304]: Removed session 27.