Feb 12 19:44:04.015908 kernel: Linux version 5.15.148-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Mon Feb 12 18:05:31 -00 2024
Feb 12 19:44:04.015932 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=f2beb0668e3dab90bbcf0ace3803b7ee02142bfb86913ef12ef6d2ee81a411a4
Feb 12 19:44:04.015948 kernel: BIOS-provided physical RAM map:
Feb 12 19:44:04.015958 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Feb 12 19:44:04.015968 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved
Feb 12 19:44:04.015975 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003ff40fff] usable
Feb 12 19:44:04.015984 kernel: BIOS-e820: [mem 0x000000003ff41000-0x000000003ffc8fff] reserved
Feb 12 19:44:04.015992 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data
Feb 12 19:44:04.016005 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS
Feb 12 19:44:04.016016 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable
Feb 12 19:44:04.016026 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable
Feb 12 19:44:04.016033 kernel: printk: bootconsole [earlyser0] enabled
Feb 12 19:44:04.016039 kernel: NX (Execute Disable) protection: active
Feb 12 19:44:04.016045 kernel: efi: EFI v2.70 by Microsoft
Feb 12 19:44:04.016064 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff85000 SMBIOS 3.0=0x3ff83000 MEMATTR=0x3f5c9a98 RNG=0x3ffd1018
Feb 12 19:44:04.016073 kernel: random: crng init done
Feb 12 19:44:04.016079 kernel: SMBIOS 3.1.0 present.
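The BIOS-e820 map above is the firmware's list of physical address ranges and their types; everything the kernel may use as RAM is tagged "usable". A minimal sketch (Python, ranges copied from the log) of totaling the usable regions; the inclusive ranges sum to roughly 8 GiB, matching the "Memory: 8081200K/8387460K available" line later in this boot:

```python
import re

# The four "usable" BIOS-e820 ranges reported above.
E820 = """
BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
BIOS-e820: [mem 0x0000000000100000-0x000000003ff40fff] usable
BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable
BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable
"""

total = 0
for start, end in re.findall(r"\[mem (0x[0-9a-f]+)-(0x[0-9a-f]+)\] usable", E820):
    total += int(end, 16) - int(start, 16) + 1  # ranges are inclusive

print(f"{total} bytes = {total / 2**30:.2f} GiB")  # ~8.00 GiB
```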
Feb 12 19:44:04.016087 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 07/12/2023
Feb 12 19:44:04.016099 kernel: Hypervisor detected: Microsoft Hyper-V
Feb 12 19:44:04.016109 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x64e24, misc 0xbed7b2
Feb 12 19:44:04.016119 kernel: Hyper-V Host Build:20348-10.0-1-0.1544
Feb 12 19:44:04.016129 kernel: Hyper-V: Nested features: 0x1e0101
Feb 12 19:44:04.029435 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40
Feb 12 19:44:04.029449 kernel: Hyper-V: Using hypercall for remote TLB flush
Feb 12 19:44:04.029462 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Feb 12 19:44:04.029473 kernel: tsc: Marking TSC unstable due to running on Hyper-V
Feb 12 19:44:04.029486 kernel: tsc: Detected 2593.905 MHz processor
Feb 12 19:44:04.029498 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Feb 12 19:44:04.029510 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Feb 12 19:44:04.029522 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x400000000
Feb 12 19:44:04.029534 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Feb 12 19:44:04.029545 kernel: e820: update [mem 0x40000000-0xffffffff] usable ==> reserved
Feb 12 19:44:04.029559 kernel: last_pfn = 0x40000 max_arch_pfn = 0x400000000
Feb 12 19:44:04.029571 kernel: Using GB pages for direct mapping
Feb 12 19:44:04.029583 kernel: Secure boot disabled
Feb 12 19:44:04.029594 kernel: ACPI: Early table checksum verification disabled
Feb 12 19:44:04.029605 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL)
Feb 12 19:44:04.029617 kernel: ACPI: XSDT 0x000000003FFF90E8 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 12 19:44:04.029629 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 12 19:44:04.029641 kernel: ACPI: DSDT 0x000000003FFD6000 01E184 (v02 MSFTVM DSDT01 00000001 MSFT 05000000)
Feb 12 19:44:04.029659 kernel: ACPI: FACS 0x000000003FFFE000 000040
Feb 12 19:44:04.029672 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 12 19:44:04.029684 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 12 19:44:04.029696 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 12 19:44:04.029709 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 12 19:44:04.029722 kernel: ACPI: SRAT 0x000000003FFD4000 0002D0 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 12 19:44:04.029736 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 12 19:44:04.029750 kernel: ACPI: FPDT 0x000000003FFD2000 000034 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 12 19:44:04.029762 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113]
Feb 12 19:44:04.029775 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4183]
Feb 12 19:44:04.029788 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f]
Feb 12 19:44:04.029800 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063]
Feb 12 19:44:04.029813 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f]
Feb 12 19:44:04.029825 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027]
Feb 12 19:44:04.029840 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057]
Feb 12 19:44:04.029853 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd42cf]
Feb 12 19:44:04.029866 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037]
Feb 12 19:44:04.029878 kernel: ACPI: Reserving FPDT table memory at [mem 0x3ffd2000-0x3ffd2033]
Feb 12 19:44:04.029891 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Feb 12 19:44:04.029904 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Feb 12 19:44:04.029916 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug
Feb 12 19:44:04.029928 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug
Feb 12 19:44:04.029940 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug
Feb 12 19:44:04.029955 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug
Feb 12 19:44:04.029967 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug
Feb 12 19:44:04.029979 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug
Feb 12 19:44:04.029992 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug
Feb 12 19:44:04.030004 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug
Feb 12 19:44:04.030017 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug
Feb 12 19:44:04.030029 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug
Feb 12 19:44:04.030042 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug
Feb 12 19:44:04.030054 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug
Feb 12 19:44:04.030068 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000000-0x1ffffffffffff] hotplug
Feb 12 19:44:04.030080 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2000000000000-0x3ffffffffffff] hotplug
Feb 12 19:44:04.030092 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x4000000000000-0x7ffffffffffff] hotplug
Feb 12 19:44:04.030104 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x8000000000000-0xfffffffffffff] hotplug
Feb 12 19:44:04.030116 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff]
Feb 12 19:44:04.030127 kernel: NODE_DATA(0) allocated [mem 0x2bfffa000-0x2bfffffff]
Feb 12 19:44:04.030140 kernel: Zone ranges:
Feb 12 19:44:04.030151 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Feb 12 19:44:04.030162 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Feb 12 19:44:04.030177 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff]
Feb 12 19:44:04.030188 kernel: Movable zone start for each node
Feb 12 19:44:04.030199 kernel: Early memory node ranges
Feb 12 19:44:04.030211 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Feb 12 19:44:04.030223 kernel: node 0: [mem 0x0000000000100000-0x000000003ff40fff]
Feb 12 19:44:04.030234 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff]
Feb 12 19:44:04.030246 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff]
Feb 12 19:44:04.030258 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff]
Feb 12 19:44:04.030269 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Feb 12 19:44:04.030283 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Feb 12 19:44:04.030295 kernel: On node 0, zone DMA32: 190 pages in unavailable ranges
Feb 12 19:44:04.030307 kernel: ACPI: PM-Timer IO Port: 0x408
Feb 12 19:44:04.030318 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1])
Feb 12 19:44:04.030329 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23
Feb 12 19:44:04.030341 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Feb 12 19:44:04.030353 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Feb 12 19:44:04.030364 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200
Feb 12 19:44:04.030376 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Feb 12 19:44:04.030402 kernel: [mem 0x40000000-0xffffffff] available for PCI devices
Feb 12 19:44:04.030413 kernel: Booting paravirtualized kernel on Hyper-V
Feb 12 19:44:04.030426 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Feb 12 19:44:04.030438 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:2 nr_node_ids:1
Feb 12 19:44:04.030449 kernel: percpu: Embedded 55 pages/cpu s185624 r8192 d31464 u1048576
Feb 12 19:44:04.030461 kernel: pcpu-alloc: s185624 r8192 d31464 u1048576 alloc=1*2097152
Feb 12 19:44:04.030472 kernel: pcpu-alloc: [0] 0 1
Feb 12 19:44:04.030483 kernel: Hyper-V: PV spinlocks enabled
Feb 12 19:44:04.030495 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Feb 12 19:44:04.030509 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2062618
Feb 12 19:44:04.030521 kernel: Policy zone: Normal
Feb 12 19:44:04.030533 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=f2beb0668e3dab90bbcf0ace3803b7ee02142bfb86913ef12ef6d2ee81a411a4
Feb 12 19:44:04.030546 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 12 19:44:04.030558 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Feb 12 19:44:04.030569 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 12 19:44:04.030581 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 12 19:44:04.030593 kernel: Memory: 8081200K/8387460K available (12294K kernel code, 2275K rwdata, 13700K rodata, 45496K init, 4048K bss, 306000K reserved, 0K cma-reserved)
Feb 12 19:44:04.030607 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Feb 12 19:44:04.030619 kernel: ftrace: allocating 34475 entries in 135 pages
Feb 12 19:44:04.030639 kernel: ftrace: allocated 135 pages with 4 groups
Feb 12 19:44:04.030653 kernel: rcu: Hierarchical RCU implementation.
Feb 12 19:44:04.030667 kernel: rcu: RCU event tracing is enabled.
Feb 12 19:44:04.030679 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Feb 12 19:44:04.030691 kernel: Rude variant of Tasks RCU enabled.
Feb 12 19:44:04.030704 kernel: Tracing variant of Tasks RCU enabled.
Feb 12 19:44:04.030716 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 12 19:44:04.030728 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Feb 12 19:44:04.030740 kernel: Using NULL legacy PIC
Feb 12 19:44:04.030755 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0
Feb 12 19:44:04.030768 kernel: Console: colour dummy device 80x25
Feb 12 19:44:04.030780 kernel: printk: console [tty1] enabled
Feb 12 19:44:04.030792 kernel: printk: console [ttyS0] enabled
Feb 12 19:44:04.030805 kernel: printk: bootconsole [earlyser0] disabled
Feb 12 19:44:04.030819 kernel: ACPI: Core revision 20210730
Feb 12 19:44:04.030831 kernel: Failed to register legacy timer interrupt
Feb 12 19:44:04.030845 kernel: APIC: Switch to symmetric I/O mode setup
Feb 12 19:44:04.030857 kernel: Hyper-V: Using IPI hypercalls
Feb 12 19:44:04.030870 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5187.81 BogoMIPS (lpj=2593905)
Feb 12 19:44:04.030896 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Feb 12 19:44:04.030909 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4
Feb 12 19:44:04.030921 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Feb 12 19:44:04.030934 kernel: Spectre V2 : Mitigation: Retpolines
Feb 12 19:44:04.030945 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Feb 12 19:44:04.030965 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Feb 12 19:44:04.030977 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Feb 12 19:44:04.030989 kernel: RETBleed: Vulnerable
Feb 12 19:44:04.031001 kernel: Speculative Store Bypass: Vulnerable
Feb 12 19:44:04.031013 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode
Feb 12 19:44:04.031026 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Feb 12 19:44:04.031039 kernel: GDS: Unknown: Dependent on hypervisor status
Feb 12 19:44:04.031051 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Feb 12 19:44:04.031064 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Feb 12 19:44:04.031078 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Feb 12 19:44:04.031093 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Feb 12 19:44:04.031106 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Feb 12 19:44:04.031120 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Feb 12 19:44:04.031132 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Feb 12 19:44:04.031145 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Feb 12 19:44:04.031159 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Feb 12 19:44:04.031172 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Feb 12 19:44:04.031185 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format.
Feb 12 19:44:04.031197 kernel: Freeing SMP alternatives memory: 32K
Feb 12 19:44:04.031209 kernel: pid_max: default: 32768 minimum: 301
Feb 12 19:44:04.031221 kernel: LSM: Security Framework initializing
Feb 12 19:44:04.031233 kernel: SELinux: Initializing.
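The delay-loop calibration above is skipped under Hyper-V and computed from the timer frequency instead, so the printed BogoMIPS value follows mechanically from lpj. A small check, assuming CONFIG_HZ=1000 for this kernel (the only tick rate consistent with the logged numbers; it also makes lpj equal the 2593.905 MHz TSC frequency in kHz):

```python
lpj = 2_593_905   # loops_per_jiffy from the log line above
HZ = 1000         # assumed CONFIG_HZ

bogomips = lpj * HZ / 500_000  # the kernel prints lpj / (500000 / HZ)
print(bogomips)                # 5187.81, as logged
print(2 * bogomips)            # 10375.62, the two-CPU total logged at smpboot
```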
Feb 12 19:44:04.031249 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Feb 12 19:44:04.031262 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Feb 12 19:44:04.031278 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7)
Feb 12 19:44:04.031291 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Feb 12 19:44:04.031304 kernel: signal: max sigframe size: 3632
Feb 12 19:44:04.031317 kernel: rcu: Hierarchical SRCU implementation.
Feb 12 19:44:04.031338 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Feb 12 19:44:04.031351 kernel: smp: Bringing up secondary CPUs ...
Feb 12 19:44:04.031363 kernel: x86: Booting SMP configuration:
Feb 12 19:44:04.031375 kernel: .... node #0, CPUs: #1
Feb 12 19:44:04.031411 kernel: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
Feb 12 19:44:04.031425 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Feb 12 19:44:04.031438 kernel: smp: Brought up 1 node, 2 CPUs
Feb 12 19:44:04.031451 kernel: smpboot: Max logical packages: 1
Feb 12 19:44:04.031465 kernel: smpboot: Total of 2 processors activated (10375.62 BogoMIPS)
Feb 12 19:44:04.031478 kernel: devtmpfs: initialized
Feb 12 19:44:04.031491 kernel: x86/mm: Memory block size: 128MB
Feb 12 19:44:04.031503 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes)
Feb 12 19:44:04.031518 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 12 19:44:04.031532 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Feb 12 19:44:04.031545 kernel: pinctrl core: initialized pinctrl subsystem
Feb 12 19:44:04.031557 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 12 19:44:04.031569 kernel: audit: initializing netlink subsys (disabled)
Feb 12 19:44:04.031581 kernel: audit: type=2000 audit(1707767042.024:1): state=initialized audit_enabled=0 res=1
Feb 12 19:44:04.031593 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 12 19:44:04.031607 kernel: thermal_sys: Registered thermal governor 'user_space'
Feb 12 19:44:04.031619 kernel: cpuidle: using governor menu
Feb 12 19:44:04.031635 kernel: ACPI: bus type PCI registered
Feb 12 19:44:04.031648 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 12 19:44:04.031661 kernel: dca service started, version 1.12.1
Feb 12 19:44:04.031673 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Feb 12 19:44:04.031686 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Feb 12 19:44:04.031698 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Feb 12 19:44:04.031712 kernel: ACPI: Added _OSI(Module Device)
Feb 12 19:44:04.031727 kernel: ACPI: Added _OSI(Processor Device)
Feb 12 19:44:04.031741 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 12 19:44:04.031758 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 12 19:44:04.031772 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Feb 12 19:44:04.031787 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Feb 12 19:44:04.031801 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Feb 12 19:44:04.031815 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 12 19:44:04.031828 kernel: ACPI: Interpreter enabled
Feb 12 19:44:04.031841 kernel: ACPI: PM: (supports S0 S5)
Feb 12 19:44:04.031855 kernel: ACPI: Using IOAPIC for interrupt routing
Feb 12 19:44:04.031869 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Feb 12 19:44:04.031885 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F
Feb 12 19:44:04.031898 kernel: iommu: Default domain type: Translated
Feb 12 19:44:04.031912 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Feb 12 19:44:04.031925 kernel: vgaarb: loaded
Feb 12 19:44:04.031938 kernel: pps_core: LinuxPPS API ver. 1 registered
Feb 12 19:44:04.031953 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Feb 12 19:44:04.031966 kernel: PTP clock support registered
Feb 12 19:44:04.031980 kernel: Registered efivars operations
Feb 12 19:44:04.031994 kernel: PCI: Using ACPI for IRQ routing
Feb 12 19:44:04.032008 kernel: PCI: System does not support PCI
Feb 12 19:44:04.032024 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page
Feb 12 19:44:04.032038 kernel: VFS: Disk quotas dquot_6.6.0
Feb 12 19:44:04.032052 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 12 19:44:04.032065 kernel: pnp: PnP ACPI init
Feb 12 19:44:04.032080 kernel: pnp: PnP ACPI: found 3 devices
Feb 12 19:44:04.032093 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Feb 12 19:44:04.032107 kernel: NET: Registered PF_INET protocol family
Feb 12 19:44:04.032120 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Feb 12 19:44:04.032138 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Feb 12 19:44:04.032151 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 12 19:44:04.032165 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 12 19:44:04.032179 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Feb 12 19:44:04.032192 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Feb 12 19:44:04.032206 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Feb 12 19:44:04.032220 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Feb 12 19:44:04.032233 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 12 19:44:04.032247 kernel: NET: Registered PF_XDP protocol family
Feb 12 19:44:04.032264 kernel: PCI: CLS 0 bytes, default 64
Feb 12 19:44:04.032277 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Feb 12 19:44:04.032291 kernel: software IO TLB: mapped [mem 0x000000003a8ad000-0x000000003e8ad000] (64MB)
Feb 12 19:44:04.032305 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Feb 12 19:44:04.032318 kernel: Initialise system trusted keyrings
Feb 12 19:44:04.032331 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Feb 12 19:44:04.032345 kernel: Key type asymmetric registered
Feb 12 19:44:04.032358 kernel: Asymmetric key parser 'x509' registered
Feb 12 19:44:04.032372 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Feb 12 19:44:04.032403 kernel: io scheduler mq-deadline registered
Feb 12 19:44:04.032425 kernel: io scheduler kyber registered
Feb 12 19:44:04.032438 kernel: io scheduler bfq registered
Feb 12 19:44:04.032452 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Feb 12 19:44:04.032466 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 12 19:44:04.032479 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Feb 12 19:44:04.032493 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Feb 12 19:44:04.032506 kernel: i8042: PNP: No PS/2 controller found.
Feb 12 19:44:04.032674 kernel: rtc_cmos 00:02: registered as rtc0
Feb 12 19:44:04.032799 kernel: rtc_cmos 00:02: setting system clock to 2024-02-12T19:44:03 UTC (1707767043)
Feb 12 19:44:04.032909 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram
Feb 12 19:44:04.032926 kernel: fail to initialize ptp_kvm
Feb 12 19:44:04.032938 kernel: intel_pstate: CPU model not supported
Feb 12 19:44:04.032951 kernel: efifb: probing for efifb
Feb 12 19:44:04.032964 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Feb 12 19:44:04.032976 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Feb 12 19:44:04.032989 kernel: efifb: scrolling: redraw
Feb 12 19:44:04.033005 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Feb 12 19:44:04.033018 kernel: Console: switching to colour frame buffer device 128x48
Feb 12 19:44:04.033031 kernel: fb0: EFI VGA frame buffer device
Feb 12 19:44:04.033044 kernel: pstore: Registered efi as persistent store backend
Feb 12 19:44:04.033057 kernel: NET: Registered PF_INET6 protocol family
Feb 12 19:44:04.033070 kernel: Segment Routing with IPv6
Feb 12 19:44:04.033082 kernel: In-situ OAM (IOAM) with IPv6
Feb 12 19:44:04.033096 kernel: NET: Registered PF_PACKET protocol family
Feb 12 19:44:04.033111 kernel: Key type dns_resolver registered
Feb 12 19:44:04.033128 kernel: IPI shorthand broadcast: enabled
Feb 12 19:44:04.033142 kernel: sched_clock: Marking stable (787286000, 24867700)->(1010346100, -198192400)
Feb 12 19:44:04.033156 kernel: registered taskstats version 1
Feb 12 19:44:04.033171 kernel: Loading compiled-in X.509 certificates
Feb 12 19:44:04.033185 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.148-flatcar: 253e5c5c936b12e2ff2626e7f3214deb753330c8'
Feb 12 19:44:04.033199 kernel: Key type .fscrypt registered
Feb 12 19:44:04.033213 kernel: Key type fscrypt-provisioning registered
Feb 12 19:44:04.033228 kernel: pstore: Using crash dump compression: deflate
Feb 12 19:44:04.033245 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 12 19:44:04.033261 kernel: ima: Allocated hash algorithm: sha1
Feb 12 19:44:04.033275 kernel: ima: No architecture policies found
Feb 12 19:44:04.033289 kernel: Freeing unused kernel image (initmem) memory: 45496K
Feb 12 19:44:04.033302 kernel: Write protecting the kernel read-only data: 28672k
Feb 12 19:44:04.033316 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
Feb 12 19:44:04.033328 kernel: Freeing unused kernel image (rodata/data gap) memory: 636K
Feb 12 19:44:04.033341 kernel: Run /init as init process
Feb 12 19:44:04.033353 kernel: with arguments:
Feb 12 19:44:04.033366 kernel: /init
Feb 12 19:44:04.033381 kernel: with environment:
Feb 12 19:44:04.033407 kernel: HOME=/
Feb 12 19:44:04.033419 kernel: TERM=linux
Feb 12 19:44:04.033432 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 12 19:44:04.033447 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Feb 12 19:44:04.033463 systemd[1]: Detected virtualization microsoft.
Feb 12 19:44:04.033477 systemd[1]: Detected architecture x86-64.
Feb 12 19:44:04.033493 systemd[1]: Running in initrd.
Feb 12 19:44:04.033506 systemd[1]: No hostname configured, using default hostname.
Feb 12 19:44:04.033519 systemd[1]: Hostname set to <localhost>.
Feb 12 19:44:04.033533 systemd[1]: Initializing machine ID from random generator.
Feb 12 19:44:04.033546 systemd[1]: Queued start job for default target initrd.target.
Feb 12 19:44:04.033560 systemd[1]: Started systemd-ask-password-console.path.
Feb 12 19:44:04.033574 systemd[1]: Reached target cryptsetup.target.
Feb 12 19:44:04.033587 systemd[1]: Reached target paths.target.
Feb 12 19:44:04.033600 systemd[1]: Reached target slices.target.
Feb 12 19:44:04.033616 systemd[1]: Reached target swap.target.
Feb 12 19:44:04.033629 systemd[1]: Reached target timers.target.
Feb 12 19:44:04.033643 systemd[1]: Listening on iscsid.socket.
Feb 12 19:44:04.033657 systemd[1]: Listening on iscsiuio.socket.
Feb 12 19:44:04.033671 systemd[1]: Listening on systemd-journald-audit.socket.
Feb 12 19:44:04.033685 systemd[1]: Listening on systemd-journald-dev-log.socket.
Feb 12 19:44:04.033698 systemd[1]: Listening on systemd-journald.socket.
Feb 12 19:44:04.033715 systemd[1]: Listening on systemd-networkd.socket.
Feb 12 19:44:04.033728 systemd[1]: Listening on systemd-udevd-control.socket.
Feb 12 19:44:04.033741 systemd[1]: Listening on systemd-udevd-kernel.socket.
Feb 12 19:44:04.033755 systemd[1]: Reached target sockets.target.
Feb 12 19:44:04.033769 systemd[1]: Starting kmod-static-nodes.service...
Feb 12 19:44:04.033783 systemd[1]: Finished network-cleanup.service.
Feb 12 19:44:04.033797 systemd[1]: Starting systemd-fsck-usr.service...
Feb 12 19:44:04.033811 systemd[1]: Starting systemd-journald.service...
Feb 12 19:44:04.033826 systemd[1]: Starting systemd-modules-load.service...
Feb 12 19:44:04.033842 systemd[1]: Starting systemd-resolved.service...
Feb 12 19:44:04.033856 systemd[1]: Starting systemd-vconsole-setup.service...
Feb 12 19:44:04.033873 systemd-journald[183]: Journal started
Feb 12 19:44:04.033942 systemd-journald[183]: Runtime Journal (/run/log/journal/899b58e2b7084b4f92f30bab955dade8) is 8.0M, max 159.0M, 151.0M free.
Feb 12 19:44:04.029306 systemd-modules-load[184]: Inserted module 'overlay'
Feb 12 19:44:04.052124 systemd[1]: Started systemd-journald.service.
Feb 12 19:44:04.051000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:44:04.052539 systemd[1]: Finished kmod-static-nodes.service.
Feb 12 19:44:04.076097 kernel: audit: type=1130 audit(1707767044.051:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:44:04.072130 systemd[1]: Finished systemd-fsck-usr.service.
Feb 12 19:44:04.076246 systemd[1]: Finished systemd-vconsole-setup.service.
Feb 12 19:44:04.082900 systemd[1]: Starting dracut-cmdline-ask.service...
Feb 12 19:44:04.144136 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 12 19:44:04.144164 kernel: audit: type=1130 audit(1707767044.071:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:44:04.144181 kernel: Bridge firewalling registered
Feb 12 19:44:04.144196 kernel: audit: type=1130 audit(1707767044.075:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:44:04.071000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:44:04.075000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:44:04.101708 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Feb 12 19:44:04.121967 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Feb 12 19:44:04.128291 systemd-modules-load[184]: Inserted module 'br_netfilter'
Feb 12 19:44:04.147019 systemd[1]: Finished dracut-cmdline-ask.service.
Feb 12 19:44:04.152139 systemd[1]: Starting dracut-cmdline.service...
Feb 12 19:44:04.169246 dracut-cmdline[201]: dracut-dracut-053
Feb 12 19:44:04.171902 systemd-resolved[185]: Positive Trust Anchors:
Feb 12 19:44:04.191696 kernel: audit: type=1130 audit(1707767044.080:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:44:04.080000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:44:04.186200 systemd-resolved[185]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 12 19:44:04.200209 dracut-cmdline[201]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=f2beb0668e3dab90bbcf0ace3803b7ee02142bfb86913ef12ef6d2ee81a411a4
Feb 12 19:44:04.230896 kernel: SCSI subsystem initialized
Feb 12 19:44:04.230921 kernel: audit: type=1130 audit(1707767044.122:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:44:04.122000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:44:04.186243 systemd-resolved[185]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Feb 12 19:44:04.190581 systemd-resolved[185]: Defaulting to hostname 'linux'.
Feb 12 19:44:04.191663 systemd[1]: Started systemd-resolved.service.
Feb 12 19:44:04.193925 systemd[1]: Reached target nss-lookup.target.
Feb 12 19:44:04.150000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:44:04.277425 kernel: audit: type=1130 audit(1707767044.150:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:44:04.277456 kernel: audit: type=1130 audit(1707767044.193:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:44:04.277474 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 12 19:44:04.193000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:44:04.289455 kernel: device-mapper: uevent: version 1.0.3
Feb 12 19:44:04.294983 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Feb 12 19:44:04.298658 systemd-modules-load[184]: Inserted module 'dm_multipath'
Feb 12 19:44:04.304183 systemd[1]: Finished systemd-modules-load.service.
Feb 12 19:44:04.306000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:44:04.308378 systemd[1]: Starting systemd-sysctl.service...
Feb 12 19:44:04.324589 kernel: audit: type=1130 audit(1707767044.306:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:44:04.327309 systemd[1]: Finished systemd-sysctl.service.
Feb 12 19:44:04.343565 kernel: Loading iSCSI transport class v2.0-870.
Feb 12 19:44:04.343595 kernel: audit: type=1130 audit(1707767044.330:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:44:04.330000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:44:04.357411 kernel: iscsi: registered transport (tcp)
Feb 12 19:44:04.382587 kernel: iscsi: registered transport (qla4xxx)
Feb 12 19:44:04.382626 kernel: QLogic iSCSI HBA Driver
Feb 12 19:44:04.411922 systemd[1]: Finished dracut-cmdline.service.
Feb 12 19:44:04.417000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:44:04.418855 systemd[1]: Starting dracut-pre-udev.service...
Feb 12 19:44:04.469411 kernel: raid6: avx512x4 gen() 18109 MB/s
Feb 12 19:44:04.489410 kernel: raid6: avx512x4 xor() 7973 MB/s
Feb 12 19:44:04.509400 kernel: raid6: avx512x2 gen() 18007 MB/s
Feb 12 19:44:04.530405 kernel: raid6: avx512x2 xor() 29868 MB/s
Feb 12 19:44:04.550398 kernel: raid6: avx512x1 gen() 18055 MB/s
Feb 12 19:44:04.570399 kernel: raid6: avx512x1 xor() 26955 MB/s
Feb 12 19:44:04.590402 kernel: raid6: avx2x4 gen() 17947 MB/s
Feb 12 19:44:04.610401 kernel: raid6: avx2x4 xor() 7709 MB/s
Feb 12 19:44:04.630399 kernel: raid6: avx2x2 gen() 17897 MB/s
Feb 12 19:44:04.650405 kernel: raid6: avx2x2 xor() 22187 MB/s
Feb 12 19:44:04.670399 kernel: raid6: avx2x1 gen() 13882 MB/s
Feb 12 19:44:04.690400 kernel: raid6: avx2x1 xor() 19478 MB/s
Feb 12 19:44:04.710402 kernel: raid6: sse2x4 gen() 11715 MB/s
Feb 12 19:44:04.731399 kernel: raid6: sse2x4 xor() 7124 MB/s
Feb 12 19:44:04.751399 kernel: raid6: sse2x2 gen() 12883 MB/s
Feb 12 19:44:04.771402 kernel: raid6: sse2x2 xor() 7531 MB/s
Feb 12 19:44:04.791399 kernel: raid6: sse2x1 gen() 11645 MB/s
Feb 12 19:44:04.813810 kernel: raid6: sse2x1 xor() 5911 MB/s
Feb 12 19:44:04.813828 kernel: raid6: using algorithm avx512x4 gen() 18109 MB/s
Feb 12 19:44:04.813838 kernel: raid6: .... xor() 7973 MB/s, rmw enabled
Feb 12 19:44:04.820800 kernel: raid6: using avx512x2 recovery algorithm
Feb 12 19:44:04.838417 kernel: xor: automatically using best checksumming function avx
Feb 12 19:44:04.934427 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no
Feb 12 19:44:04.942524 systemd[1]: Finished dracut-pre-udev.service.
Feb 12 19:44:04.946000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:44:04.946000 audit: BPF prog-id=7 op=LOAD
Feb 12 19:44:04.946000 audit: BPF prog-id=8 op=LOAD
Feb 12 19:44:04.947593 systemd[1]: Starting systemd-udevd.service...
Feb 12 19:44:04.962948 systemd-udevd[384]: Using default interface naming scheme 'v252'.
Feb 12 19:44:04.967620 systemd[1]: Started systemd-udevd.service.
Feb 12 19:44:04.969000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:44:04.971549 systemd[1]: Starting dracut-pre-trigger.service...
Feb 12 19:44:04.987876 dracut-pre-trigger[388]: rd.md=0: removing MD RAID activation
Feb 12 19:44:05.015939 systemd[1]: Finished dracut-pre-trigger.service.
Feb 12 19:44:05.018000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:44:05.019423 systemd[1]: Starting systemd-udev-trigger.service...
Feb 12 19:44:05.057234 systemd[1]: Finished systemd-udev-trigger.service.
Feb 12 19:44:05.064000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:44:05.110683 kernel: cryptd: max_cpu_qlen set to 1000
Feb 12 19:44:05.126410 kernel: hv_vmbus: Vmbus version:5.2
Feb 12 19:44:05.138417 kernel: hv_vmbus: registering driver hyperv_keyboard
Feb 12 19:44:05.149964 kernel: AVX2 version of gcm_enc/dec engaged.
Feb 12 19:44:05.150007 kernel: AES CTR mode by8 optimization enabled
Feb 12 19:44:05.164409 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0
Feb 12 19:44:05.182407 kernel: hv_vmbus: registering driver hv_netvsc
Feb 12 19:44:05.197031 kernel: hid: raw HID events driver (C) Jiri Kosina
Feb 12 19:44:05.197080 kernel: hv_vmbus: registering driver hv_storvsc
Feb 12 19:44:05.204409 kernel: hv_vmbus: registering driver hid_hyperv
Feb 12 19:44:05.214513 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1
Feb 12 19:44:05.214551 kernel: scsi host0: storvsc_host_t
Feb 12 19:44:05.226657 kernel: hid 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Feb 12 19:44:05.226820 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Feb 12 19:44:05.226846 kernel: scsi host1: storvsc_host_t
Feb 12 19:44:05.229847 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0
Feb 12 19:44:05.263014 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Feb 12 19:44:05.263231 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Feb 12 19:44:05.270410 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Feb 12 19:44:05.270584 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Feb 12 19:44:05.270705 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Feb 12 19:44:05.273413 kernel: sd 0:0:0:0: [sda] Write Protect is off
Feb 12 19:44:05.276408 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Feb 12 19:44:05.276566 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Feb 12 19:44:05.289414 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Feb 12 19:44:05.294411 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Feb 12 19:44:05.377072 kernel: hv_netvsc 000d3ab3-caa7-000d-3ab3-caa7000d3ab3 eth0: VF slot 1 added
Feb 12 19:44:05.387408 kernel: hv_vmbus: registering driver hv_pci
Feb 12 19:44:05.396413 kernel: hv_pci ee104931-1e66-401b-8621-6400e68192b7: PCI VMBus probing: Using version 0x10004
Feb 12 19:44:05.412436 kernel: hv_pci ee104931-1e66-401b-8621-6400e68192b7: PCI host bridge to bus 1e66:00
Feb 12 19:44:05.412597 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (440)
Feb 12 19:44:05.412611 kernel: pci_bus 1e66:00: root bus resource [mem 0xfe0000000-0xfe00fffff window]
Feb 12 19:44:05.420991 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
Feb 12 19:44:05.426849 kernel: pci_bus 1e66:00: No busn resource found for root bus, will use [bus 00-ff]
Feb 12 19:44:05.438402 kernel: pci 1e66:00:02.0: [15b3:1016] type 00 class 0x020000
Feb 12 19:44:05.439086 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Feb 12 19:44:05.446955 kernel: pci 1e66:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref]
Feb 12 19:44:05.466454 kernel: pci 1e66:00:02.0: enabling Extended Tags
Feb 12 19:44:05.478669 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
Feb 12 19:44:05.483547 kernel: pci 1e66:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 1e66:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link)
Feb 12 19:44:05.498018 kernel: pci_bus 1e66:00: busn_res: [bus 00-ff] end is updated to 00
Feb 12 19:44:05.498173 kernel: pci 1e66:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref]
Feb 12 19:44:05.498661 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
Feb 12 19:44:05.504322 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device.
Feb 12 19:44:05.513220 systemd[1]: Starting disk-uuid.service...
Feb 12 19:44:05.531571 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Feb 12 19:44:05.543408 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Feb 12 19:44:05.631418 kernel: mlx5_core 1e66:00:02.0: firmware version: 14.30.1350
Feb 12 19:44:05.815413 kernel: mlx5_core 1e66:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0)
Feb 12 19:44:05.957178 kernel: mlx5_core 1e66:00:02.0: Supported tc offload range - chains: 1, prios: 1
Feb 12 19:44:05.957441 kernel: mlx5_core 1e66:00:02.0: mlx5e_tc_post_act_init:40:(pid 199): firmware level support is missing
Feb 12 19:44:05.970247 kernel: hv_netvsc 000d3ab3-caa7-000d-3ab3-caa7000d3ab3 eth0: VF registering: eth1
Feb 12 19:44:05.970431 kernel: mlx5_core 1e66:00:02.0 eth1: joined to eth0
Feb 12 19:44:05.982411 kernel: mlx5_core 1e66:00:02.0 enP7782s1: renamed from eth1
Feb 12 19:44:06.541414 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Feb 12 19:44:06.542740 disk-uuid[549]: The operation has completed successfully.
Feb 12 19:44:06.616918 systemd[1]: disk-uuid.service: Deactivated successfully.
Feb 12 19:44:06.619000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:44:06.619000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:44:06.617018 systemd[1]: Finished disk-uuid.service.
Feb 12 19:44:06.624695 systemd[1]: Starting verity-setup.service...
Feb 12 19:44:06.651407 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Feb 12 19:44:06.745422 systemd[1]: Found device dev-mapper-usr.device.
Feb 12 19:44:06.751624 systemd[1]: Finished verity-setup.service.
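The sda line above advertises 63737856 512-byte logical blocks, which the kernel reports in both decimal (GB) and binary (GiB) units in the same message. A quick verification of that arithmetic:

```python
blocks, block_size = 63_737_856, 512
size = blocks * block_size        # 32,633,782,272 bytes
print(f"{size / 10**9:.1f} GB")   # 32.6 GB (decimal)
print(f"{size / 2**30:.1f} GiB")  # 30.4 GiB (binary)
```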
Feb 12 19:44:06.755000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:44:06.756478 systemd[1]: Mounting sysusr-usr.mount...
Feb 12 19:44:06.830405 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none.
Feb 12 19:44:06.830767 systemd[1]: Mounted sysusr-usr.mount.
Feb 12 19:44:06.834597 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met.
Feb 12 19:44:06.838957 systemd[1]: Starting ignition-setup.service...
Feb 12 19:44:06.844815 systemd[1]: Starting parse-ip-for-networkd.service...
Feb 12 19:44:06.870459 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Feb 12 19:44:06.870517 kernel: BTRFS info (device sda6): using free space tree
Feb 12 19:44:06.870535 kernel: BTRFS info (device sda6): has skinny extents
Feb 12 19:44:06.900400 systemd[1]: mnt-oem.mount: Deactivated successfully.
Feb 12 19:44:06.920449 systemd[1]: Finished parse-ip-for-networkd.service.
Feb 12 19:44:06.925000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:44:06.925000 audit: BPF prog-id=9 op=LOAD
Feb 12 19:44:06.926537 systemd[1]: Starting systemd-networkd.service...
Feb 12 19:44:06.949538 systemd-networkd[804]: lo: Link UP
Feb 12 19:44:06.949547 systemd-networkd[804]: lo: Gained carrier
Feb 12 19:44:06.950084 systemd-networkd[804]: Enumeration completed
Feb 12 19:44:06.955000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:44:06.951186 systemd-networkd[804]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 12 19:44:06.952468 systemd[1]: Started systemd-networkd.service.
Feb 12 19:44:06.955908 systemd[1]: Reached target network.target.
Feb 12 19:44:06.973117 systemd[1]: Starting iscsiuio.service...
Feb 12 19:44:06.977231 systemd[1]: Started iscsiuio.service.
Feb 12 19:44:06.980000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:44:06.981881 systemd[1]: Starting iscsid.service...
Feb 12 19:44:06.986000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:44:06.984018 systemd[1]: Finished ignition-setup.service.
Feb 12 19:44:06.987466 systemd[1]: Starting ignition-fetch-offline.service...
Feb 12 19:44:07.001318 iscsid[810]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
Feb 12 19:44:07.001318 iscsid[810]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log
Feb 12 19:44:07.001318 iscsid[810]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier].
Feb 12 19:44:07.001318 iscsid[810]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
Feb 12 19:44:07.001318 iscsid[810]: If using hardware iscsi like qla4xxx this message can be ignored.
Feb 12 19:44:07.001318 iscsid[810]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
Feb 12 19:44:07.001318 iscsid[810]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf
Feb 12 19:44:06.997000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:44:07.029000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:44:06.993460 systemd[1]: Started iscsid.service.
Feb 12 19:44:06.997791 systemd[1]: Starting dracut-initqueue.service...
Feb 12 19:44:07.059695 kernel: mlx5_core 1e66:00:02.0 enP7782s1: Link up
Feb 12 19:44:07.025975 systemd[1]: Finished dracut-initqueue.service.
Feb 12 19:44:07.029341 systemd[1]: Reached target remote-fs-pre.target.
Feb 12 19:44:07.033997 systemd[1]: Reached target remote-cryptsetup.target.
Feb 12 19:44:07.068000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:44:07.039414 systemd[1]: Reached target remote-fs.target.
Feb 12 19:44:07.044853 systemd[1]: Starting dracut-pre-mount.service...
Feb 12 19:44:07.062166 systemd[1]: Finished dracut-pre-mount.service.
Feb 12 19:44:07.133741 kernel: hv_netvsc 000d3ab3-caa7-000d-3ab3-caa7000d3ab3 eth0: Data path switched to VF: enP7782s1
Feb 12 19:44:07.139026 systemd-networkd[804]: enP7782s1: Link UP
Feb 12 19:44:07.141608 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Feb 12 19:44:07.139169 systemd-networkd[804]: eth0: Link UP
Feb 12 19:44:07.139403 systemd-networkd[804]: eth0: Gained carrier
Feb 12 19:44:07.143592 systemd-networkd[804]: enP7782s1: Gained carrier
Feb 12 19:44:07.168471 systemd-networkd[804]: eth0: DHCPv4 address 10.200.8.24/24, gateway 10.200.8.1 acquired from 168.63.129.16
Feb 12 19:44:07.903745 ignition[811]: Ignition 2.14.0
Feb 12 19:44:07.903758 ignition[811]: Stage: fetch-offline
Feb 12 19:44:07.903846 ignition[811]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 12 19:44:07.903891 ignition[811]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Feb 12 19:44:07.932179 ignition[811]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Feb 12 19:44:07.932362 ignition[811]: parsed url from cmdline: ""
Feb 12 19:44:07.933570 systemd[1]: Finished ignition-fetch-offline.service.
Feb 12 19:44:07.940000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:44:07.932366 ignition[811]: no config URL provided
Feb 12 19:44:07.932371 ignition[811]: reading system config file "/usr/lib/ignition/user.ign"
Feb 12 19:44:07.932379 ignition[811]: no config at "/usr/lib/ignition/user.ign"
Feb 12 19:44:07.932384 ignition[811]: failed to fetch config: resource requires networking
Feb 12 19:44:07.932638 ignition[811]: Ignition finished successfully
Feb 12 19:44:07.953768 systemd[1]: Starting ignition-fetch.service...
Feb 12 19:44:07.961433 ignition[830]: Ignition 2.14.0
Feb 12 19:44:07.961443 ignition[830]: Stage: fetch
Feb 12 19:44:07.961567 ignition[830]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 12 19:44:07.961599 ignition[830]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Feb 12 19:44:07.964999 ignition[830]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Feb 12 19:44:07.965168 ignition[830]: parsed url from cmdline: ""
Feb 12 19:44:07.965172 ignition[830]: no config URL provided
Feb 12 19:44:07.965182 ignition[830]: reading system config file "/usr/lib/ignition/user.ign"
Feb 12 19:44:07.965191 ignition[830]: no config at "/usr/lib/ignition/user.ign"
Feb 12 19:44:07.965237 ignition[830]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Feb 12 19:44:07.996812 ignition[830]: GET result: OK
Feb 12 19:44:07.996976 ignition[830]: config has been read from IMDS userdata
Feb 12 19:44:07.997008 ignition[830]: parsing config with SHA512: 7a4077a2b83ffaa020ff8f61a151aafddfde99dca3e6a643cfedb4c24a2ab625be201b42aaa3f2ff5e0faf3f377eac8bd0ff3fd973381dcceeff6613ac115a63
Feb 12 19:44:08.033174 unknown[830]: fetched base config from "system"
Feb 12 19:44:08.034423 unknown[830]: fetched base config from "system"
Feb 12 19:44:08.035056 ignition[830]: fetch: fetch complete
Feb 12 19:44:08.034430 unknown[830]: fetched user config from "azure"
Feb 12 19:44:08.035061 ignition[830]: fetch: fetch passed
Feb 12 19:44:08.035098 ignition[830]: Ignition finished successfully
Feb 12 19:44:08.046229 systemd[1]: Finished ignition-fetch.service.
Feb 12 19:44:08.047000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:44:08.049283 systemd[1]: Starting ignition-kargs.service...
Feb 12 19:44:08.061063 ignition[836]: Ignition 2.14.0
Feb 12 19:44:08.061072 ignition[836]: Stage: kargs
Feb 12 19:44:08.061225 ignition[836]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 12 19:44:08.061257 ignition[836]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Feb 12 19:44:08.068526 systemd[1]: Finished ignition-kargs.service.
Feb 12 19:44:08.070000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:44:08.064968 ignition[836]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Feb 12 19:44:08.071654 systemd[1]: Starting ignition-disks.service...
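On the azure OEM platform, the fetch stage pulls the user-provided config from the IMDS URL in the GET line above. A rough re-creation of that request follows (an illustration only, not Ignition's own Go implementation): Azure IMDS requires the Metadata: true request header, and the userData endpoint returns the payload base64-encoded.

```python
import base64
import urllib.request

# userData endpoint exactly as logged by Ignition above.
URL = ("http://169.254.169.254/metadata/instance/compute/userData"
       "?api-version=2021-01-01&format=text")

req = urllib.request.Request(URL, headers={"Metadata": "true"})  # required by IMDS
with urllib.request.urlopen(req, timeout=5) as resp:
    userdata = base64.b64decode(resp.read())  # IMDS returns userData base64-encoded

print(userdata.decode(errors="replace"))
```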
Feb 12 19:44:08.067355 ignition[836]: kargs: kargs passed
Feb 12 19:44:08.067468 ignition[836]: Ignition finished successfully
Feb 12 19:44:08.081185 ignition[842]: Ignition 2.14.0
Feb 12 19:44:08.081191 ignition[842]: Stage: disks
Feb 12 19:44:08.081284 ignition[842]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 12 19:44:08.081306 ignition[842]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Feb 12 19:44:08.092000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:44:08.089854 systemd[1]: Finished ignition-disks.service.
Feb 12 19:44:08.084513 ignition[842]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Feb 12 19:44:08.093341 systemd[1]: Reached target initrd-root-device.target.
Feb 12 19:44:08.088751 ignition[842]: disks: disks passed
Feb 12 19:44:08.098338 systemd[1]: Reached target local-fs-pre.target.
Feb 12 19:44:08.089276 ignition[842]: Ignition finished successfully
Feb 12 19:44:08.102433 systemd[1]: Reached target local-fs.target.
Feb 12 19:44:08.104503 systemd[1]: Reached target sysinit.target.
Feb 12 19:44:08.109184 systemd[1]: Reached target basic.target.
Feb 12 19:44:08.112004 systemd[1]: Starting systemd-fsck-root.service...
Feb 12 19:44:08.137202 systemd-fsck[850]: ROOT: clean, 602/7326000 files, 481069/7359488 blocks
Feb 12 19:44:08.140490 systemd[1]: Finished systemd-fsck-root.service.
Feb 12 19:44:08.145715 systemd[1]: Mounting sysroot.mount...
Feb 12 19:44:08.144000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:44:08.161410 kernel: EXT4-fs (sda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
Feb 12 19:44:08.161518 systemd[1]: Mounted sysroot.mount.
Feb 12 19:44:08.163573 systemd[1]: Reached target initrd-root-fs.target.
Feb 12 19:44:08.175113 systemd[1]: Mounting sysroot-usr.mount...
Feb 12 19:44:08.180304 systemd[1]: Starting flatcar-metadata-hostname.service...
Feb 12 19:44:08.185364 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb 12 19:44:08.186344 systemd[1]: Reached target ignition-diskful.target.
Feb 12 19:44:08.195341 systemd[1]: Mounted sysroot-usr.mount.
Feb 12 19:44:08.207606 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Feb 12 19:44:08.212781 systemd[1]: Starting initrd-setup-root.service...
Feb 12 19:44:08.228616 initrd-setup-root[865]: cut: /sysroot/etc/passwd: No such file or directory
Feb 12 19:44:08.231924 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (860)
Feb 12 19:44:08.245464 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Feb 12 19:44:08.245508 kernel: BTRFS info (device sda6): using free space tree
Feb 12 19:44:08.245526 kernel: BTRFS info (device sda6): has skinny extents
Feb 12 19:44:08.245543 initrd-setup-root[873]: cut: /sysroot/etc/group: No such file or directory
Feb 12 19:44:08.248916 initrd-setup-root[883]: cut: /sysroot/etc/shadow: No such file or directory
Feb 12 19:44:08.255122 initrd-setup-root[907]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 12 19:44:08.260609 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Feb 12 19:44:08.272530 systemd-networkd[804]: eth0: Gained IPv6LL
Feb 12 19:44:08.394121 systemd[1]: Finished initrd-setup-root.service.
Feb 12 19:44:08.396000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:44:08.402493 kernel: kauditd_printk_skb: 23 callbacks suppressed
Feb 12 19:44:08.402535 kernel: audit: type=1130 audit(1707767048.396:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:44:08.402604 systemd[1]: Starting ignition-mount.service...
Feb 12 19:44:08.419450 systemd[1]: Starting sysroot-boot.service...
Feb 12 19:44:08.424801 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully.
Feb 12 19:44:08.424942 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully.
Feb 12 19:44:08.449297 ignition[928]: INFO : Ignition 2.14.0
Feb 12 19:44:08.451850 ignition[928]: INFO : Stage: mount
Feb 12 19:44:08.453790 ignition[928]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 12 19:44:08.453790 ignition[928]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Feb 12 19:44:08.468466 ignition[928]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Feb 12 19:44:08.472724 ignition[928]: INFO : mount: mount passed
Feb 12 19:44:08.472724 ignition[928]: INFO : Ignition finished successfully
Feb 12 19:44:08.477560 systemd[1]: Finished sysroot-boot.service.
Feb 12 19:44:08.495475 kernel: audit: type=1130 audit(1707767048.481:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:44:08.481000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:44:08.481834 systemd[1]: Finished ignition-mount.service.
Feb 12 19:44:08.499000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:44:08.512405 kernel: audit: type=1130 audit(1707767048.499:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:44:08.633910 coreos-metadata[859]: Feb 12 19:44:08.633 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Feb 12 19:44:08.641586 coreos-metadata[859]: Feb 12 19:44:08.641 INFO Fetch successful
Feb 12 19:44:08.674080 coreos-metadata[859]: Feb 12 19:44:08.673 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Feb 12 19:44:08.693237 coreos-metadata[859]: Feb 12 19:44:08.693 INFO Fetch successful
Feb 12 19:44:08.698171 coreos-metadata[859]: Feb 12 19:44:08.698 INFO wrote hostname ci-3510.3.2-a-e615f4b643 to /sysroot/etc/hostname
Feb 12 19:44:08.700042 systemd[1]: Finished flatcar-metadata-hostname.service.
Feb 12 19:44:08.707000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:44:08.708305 systemd[1]: Starting ignition-files.service...
Feb 12 19:44:08.721740 kernel: audit: type=1130 audit(1707767048.707:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:44:08.727360 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Feb 12 19:44:08.742410 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (938)
Feb 12 19:44:08.751762 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Feb 12 19:44:08.751798 kernel: BTRFS info (device sda6): using free space tree
Feb 12 19:44:08.751813 kernel: BTRFS info (device sda6): has skinny extents
Feb 12 19:44:08.760379 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Feb 12 19:44:08.772803 ignition[957]: INFO : Ignition 2.14.0
Feb 12 19:44:08.772803 ignition[957]: INFO : Stage: files
Feb 12 19:44:08.776855 ignition[957]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 12 19:44:08.776855 ignition[957]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Feb 12 19:44:08.790256 ignition[957]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Feb 12 19:44:08.798482 ignition[957]: DEBUG : files: compiled without relabeling support, skipping
Feb 12 19:44:08.805160 ignition[957]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Feb 12 19:44:08.809045 ignition[957]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 12 19:44:08.813601 ignition[957]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 12 19:44:08.817359 ignition[957]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Feb 12 19:44:08.821015 ignition[957]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 12 19:44:08.821015 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/crictl-v1.27.0-linux-amd64.tar.gz"
Feb 12 19:44:08.821015 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.27.0/crictl-v1.27.0-linux-amd64.tar.gz: attempt #1
Feb 12 19:44:08.817954 unknown[957]: wrote ssh authorized keys file for user: core
Feb 12 19:44:09.442674 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
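coreos-metadata resolves the hostname through the same metadata service before writing it into the target root. A minimal Python equivalent of the name fetch and the write logged above (the wire-server probe to 168.63.129.16 is omitted here):

    import urllib.request

    URL = ("http://169.254.169.254/metadata/instance/compute/name"
           "?api-version=2017-08-01&format=text")

    req = urllib.request.Request(URL, headers={"Metadata": "true"})
    hostname = urllib.request.urlopen(req, timeout=5).read().decode().strip()

    # During the initrd the target root is mounted at /sysroot, hence the path in the log.
    with open("/sysroot/etc/hostname", "w") as f:
        f.write(hostname + "\n")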
Feb 12 19:44:09.575563 ignition[957]: DEBUG : files: createFilesystemsFiles: createFiles: op(3): file matches expected sum of: aa622325bf05520939f9e020d7a28ab48ac23e2fae6f47d5a4e52174c88c1ebc31b464853e4fd65bd8f5331f330a6ca96fd370d247d3eeaed042da4ee2d1219a
Feb 12 19:44:09.583355 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/crictl-v1.27.0-linux-amd64.tar.gz"
Feb 12 19:44:09.583355 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Feb 12 19:44:09.583355 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Feb 12 19:44:09.860379 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Feb 12 19:44:09.966528 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Feb 12 19:44:09.972108 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.3.0.tgz"
Feb 12 19:44:09.972108 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/containernetworking/plugins/releases/download/v1.3.0/cni-plugins-linux-amd64-v1.3.0.tgz: attempt #1
Feb 12 19:44:10.455337 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK
Feb 12 19:44:10.623300 ignition[957]: DEBUG : files: createFilesystemsFiles: createFiles: op(5): file matches expected sum of: 5d0324ca8a3c90c680b6e1fddb245a2255582fa15949ba1f3c6bb7323df9d3af754dae98d6e40ac9ccafb2999c932df2c4288d418949a4915d928eb23c090540
Feb 12 19:44:10.631348 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.3.0.tgz"
Feb 12 19:44:10.631348 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/bin/kubectl"
Feb 12 19:44:10.641531 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://dl.k8s.io/release/v1.28.1/bin/linux/amd64/kubectl: attempt #1
Feb 12 19:44:10.894569 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK
Feb 12 19:44:11.113622 ignition[957]: DEBUG : files: createFilesystemsFiles: createFiles: op(6): file matches expected sum of: 33cf3f6e37bcee4dff7ce14ab933c605d07353d4e31446dd2b52c3f05e0b150b60e531f6069f112d8a76331322a72b593537531e62104cfc7c70cb03d46f76b3
Feb 12 19:44:11.121622 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/bin/kubectl"
Feb 12 19:44:11.121622 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/opt/bin/kubeadm"
Feb 12 19:44:11.121622 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET https://dl.k8s.io/release/v1.28.1/bin/linux/amd64/kubeadm: attempt #1
Feb 12 19:44:11.246092 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET result: OK
Feb 12 19:44:11.456925 ignition[957]: DEBUG : files: createFilesystemsFiles: createFiles: op(7): file matches expected sum of: f4daad200c8378dfdc6cb69af28eaca4215f2b4a2dbdf75f29f9210171cb5683bc873fc000319022e6b3ad61175475d77190734713ba9136644394e8a8faafa1
Feb 12 19:44:11.465040 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/bin/kubeadm"
Feb 12 19:44:11.465040 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/opt/bin/kubelet"
Feb 12 19:44:11.465040 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET https://dl.k8s.io/release/v1.28.1/bin/linux/amd64/kubelet: attempt #1
Feb 12 19:44:11.595108 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET result: OK
Feb 12 19:44:12.028649 ignition[957]: DEBUG : files: createFilesystemsFiles: createFiles: op(8): file matches expected sum of: ce6ba764274162d38ac1c44e1fb1f0f835346f3afc5b508bb755b1b7d7170910f5812b0a1941b32e29d950e905bbd08ae761c87befad921db4d44969c8562e75
Feb 12 19:44:12.038662 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/opt/bin/kubelet"
Feb 12 19:44:12.038662 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/docker/daemon.json"
Feb 12 19:44:12.038662 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/docker/daemon.json"
Feb 12 19:44:12.038662 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Feb 12 19:44:12.038662 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Feb 12 19:44:12.549605 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Feb 12 19:44:12.638299 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Feb 12 19:44:12.643624 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/home/core/install.sh"
Feb 12 19:44:12.648503 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/home/core/install.sh"
Feb 12 19:44:12.656732 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/home/core/nginx.yaml"
Feb 12 19:44:12.656732 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/home/core/nginx.yaml"
Feb 12 19:44:12.656732 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(d): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 12 19:44:12.656732 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(d): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 12 19:44:12.656732 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 12 19:44:12.656732 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 12 19:44:12.656732 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/etc/flatcar/update.conf"
Feb 12 19:44:12.656732 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 12 19:44:12.656732 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(10): [started] writing file "/sysroot/etc/systemd/system/waagent.service"
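Each "file matches expected sum of:" entry is Ignition verifying a download against the sha512 declared in the config before installing it. A self-contained sketch of that verify-while-downloading pattern, reusing the crictl URL and digest from the entries above (not Ignition's actual Go code):

    import hashlib
    import urllib.request

    URL = ("https://github.com/kubernetes-sigs/cri-tools/releases/download/"
           "v1.27.0/crictl-v1.27.0-linux-amd64.tar.gz")
    EXPECTED = ("aa622325bf05520939f9e020d7a28ab48ac23e2fae6f47d5a4e52174c88c1ebc"
                "31b464853e4fd65bd8f5331f330a6ca96fd370d247d3eeaed042da4ee2d1219a")

    digest = hashlib.sha512()
    with urllib.request.urlopen(URL) as resp:
        # Stream in chunks so large artifacts do not have to fit in memory.
        for chunk in iter(lambda: resp.read(65536), b""):
            digest.update(chunk)

    if digest.hexdigest() != EXPECTED:
        raise SystemExit("checksum mismatch: refusing to install the artifact")
    print("file matches expected sum")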
Feb 12 19:44:12.656732 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(10): oem config not found in "/usr/share/oem", looking on oem partition
Feb 12 19:44:12.712920 kernel: BTRFS info: devid 1 device path /dev/sda6 changed to /dev/disk/by-label/OEM scanned by ignition (959)
Feb 12 19:44:12.712949 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(10): op(11): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem10633377"
Feb 12 19:44:12.712949 ignition[957]: CRITICAL : files: createFilesystemsFiles: createFiles: op(10): op(11): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem10633377": device or resource busy
Feb 12 19:44:12.712949 ignition[957]: ERROR : files: createFilesystemsFiles: createFiles: op(10): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem10633377", trying btrfs: device or resource busy
Feb 12 19:44:12.712949 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(10): op(12): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem10633377"
Feb 12 19:44:12.712949 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(10): op(12): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem10633377"
Feb 12 19:44:12.742488 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(10): op(13): [started] unmounting "/mnt/oem10633377"
Feb 12 19:44:12.742488 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(10): op(13): [finished] unmounting "/mnt/oem10633377"
Feb 12 19:44:12.742488 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(10): [finished] writing file "/sysroot/etc/systemd/system/waagent.service"
Feb 12 19:44:12.742488 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(14): [started] writing file "/sysroot/etc/systemd/system/nvidia.service"
Feb 12 19:44:12.742488 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(14): oem config not found in "/usr/share/oem", looking on oem partition
Feb 12 19:44:12.742488 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(14): op(15): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1299084930"
Feb 12 19:44:12.742488 ignition[957]: CRITICAL : files: createFilesystemsFiles: createFiles: op(14): op(15): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1299084930": device or resource busy
Feb 12 19:44:12.742488 ignition[957]: ERROR : files: createFilesystemsFiles: createFiles: op(14): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem1299084930", trying btrfs: device or resource busy
Feb 12 19:44:12.742488 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(14): op(16): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1299084930"
Feb 12 19:44:12.742488 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(14): op(16): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1299084930"
Feb 12 19:44:12.742488 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(14): op(17): [started] unmounting "/mnt/oem1299084930"
Feb 12 19:44:12.742488 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(14): op(17): [finished] unmounting "/mnt/oem1299084930"
Feb 12 19:44:12.742488 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(14): [finished] writing file "/sysroot/etc/systemd/system/nvidia.service"
Feb 12 19:44:12.742488 ignition[957]: INFO : files: op(18): [started] processing unit "waagent.service"
Feb 12 19:44:12.742488 ignition[957]: INFO : files: op(18): [finished] processing unit "waagent.service"
Feb 12 19:44:12.742488 ignition[957]: INFO : files: op(19): [started] processing unit "nvidia.service"
Feb 12 19:44:12.742488 ignition[957]: INFO : files: op(19): [finished] processing unit "nvidia.service"
Feb 12 19:44:12.923795 kernel: audit: type=1130 audit(1707767052.755:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:44:12.923832 kernel: audit: type=1130 audit(1707767052.798:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:44:12.923853 kernel: audit: type=1130 audit(1707767052.824:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:44:12.923877 kernel: audit: type=1131 audit(1707767052.824:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:44:12.923895 kernel: audit: type=1130 audit(1707767052.886:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:44:12.923912 kernel: audit: type=1131 audit(1707767052.896:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:44:12.755000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:44:12.798000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:44:12.824000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:44:12.824000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:44:12.886000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:44:12.896000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:44:12.719950 systemd[1]: mnt-oem10633377.mount: Deactivated successfully.
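The CRITICAL/ERROR pairs above are expected noise rather than real failures: Ignition tries the OEM partition as ext4 first and falls back to btrfs, which succeeds. A hedged sketch of that try-filesystems-in-order logic (a temporary directory stands in for Ignition's randomly named /mnt/oemXXXX mount point):

    import subprocess
    import tempfile

    DEVICE = "/dev/disk/by-label/OEM"
    mountpoint = tempfile.mkdtemp(prefix="oem")  # stand-in for /mnt/oemXXXX

    # Try filesystems in order, mirroring the ext4-then-btrfs sequence in the log.
    for fstype in ("ext4", "btrfs"):
        result = subprocess.run(["mount", "-t", fstype, DEVICE, mountpoint])
        if result.returncode == 0:
            print(f"mounted {DEVICE} as {fstype} at {mountpoint}")
            break
    else:
        raise SystemExit(f"could not mount {DEVICE} with any known filesystem")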
Feb 12 19:44:12.928547 ignition[957]: INFO : files: op(1a): [started] processing unit "prepare-critools.service"
Feb 12 19:44:12.928547 ignition[957]: INFO : files: op(1a): op(1b): [started] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service"
Feb 12 19:44:12.928547 ignition[957]: INFO : files: op(1a): op(1b): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service"
Feb 12 19:44:12.928547 ignition[957]: INFO : files: op(1a): [finished] processing unit "prepare-critools.service"
Feb 12 19:44:12.928547 ignition[957]: INFO : files: op(1c): [started] processing unit "prepare-helm.service"
Feb 12 19:44:12.928547 ignition[957]: INFO : files: op(1c): op(1d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 12 19:44:12.928547 ignition[957]: INFO : files: op(1c): op(1d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 12 19:44:12.928547 ignition[957]: INFO : files: op(1c): [finished] processing unit "prepare-helm.service"
Feb 12 19:44:12.928547 ignition[957]: INFO : files: op(1e): [started] processing unit "prepare-cni-plugins.service"
Feb 12 19:44:12.928547 ignition[957]: INFO : files: op(1e): op(1f): [started] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service"
Feb 12 19:44:12.928547 ignition[957]: INFO : files: op(1e): op(1f): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service"
Feb 12 19:44:12.928547 ignition[957]: INFO : files: op(1e): [finished] processing unit "prepare-cni-plugins.service"
Feb 12 19:44:12.928547 ignition[957]: INFO : files: op(20): [started] setting preset to enabled for "nvidia.service"
Feb 12 19:44:12.928547 ignition[957]: INFO : files: op(20): [finished] setting preset to enabled for "nvidia.service"
Feb 12 19:44:12.928547 ignition[957]: INFO : files: op(21): [started] setting preset to enabled for "prepare-critools.service"
Feb 12 19:44:12.928547 ignition[957]: INFO : files: op(21): [finished] setting preset to enabled for "prepare-critools.service"
Feb 12 19:44:12.928547 ignition[957]: INFO : files: op(22): [started] setting preset to enabled for "prepare-helm.service"
Feb 12 19:44:12.928547 ignition[957]: INFO : files: op(22): [finished] setting preset to enabled for "prepare-helm.service"
Feb 12 19:44:12.928547 ignition[957]: INFO : files: op(23): [started] setting preset to enabled for "prepare-cni-plugins.service"
Feb 12 19:44:12.936000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:44:12.962000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:44:12.962000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:44:12.989000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:44:12.738324 systemd[1]: mnt-oem1299084930.mount: Deactivated successfully.
Feb 12 19:44:13.005187 ignition[957]: INFO : files: op(23): [finished] setting preset to enabled for "prepare-cni-plugins.service"
Feb 12 19:44:13.005187 ignition[957]: INFO : files: op(24): [started] setting preset to enabled for "waagent.service"
Feb 12 19:44:13.005187 ignition[957]: INFO : files: op(24): [finished] setting preset to enabled for "waagent.service"
Feb 12 19:44:13.005187 ignition[957]: INFO : files: createResultFile: createFiles: op(25): [started] writing file "/sysroot/etc/.ignition-result.json"
Feb 12 19:44:13.005187 ignition[957]: INFO : files: createResultFile: createFiles: op(25): [finished] writing file "/sysroot/etc/.ignition-result.json"
Feb 12 19:44:13.005187 ignition[957]: INFO : files: files passed
Feb 12 19:44:13.005187 ignition[957]: INFO : Ignition finished successfully
Feb 12 19:44:12.753128 systemd[1]: Finished ignition-files.service.
Feb 12 19:44:13.009413 initrd-setup-root-after-ignition[981]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 12 19:44:12.772503 systemd[1]: Starting initrd-setup-root-after-ignition.service...
Feb 12 19:44:12.776019 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
Feb 12 19:44:12.785342 systemd[1]: Starting ignition-quench.service...
Feb 12 19:44:12.791523 systemd[1]: Finished initrd-setup-root-after-ignition.service.
Feb 12 19:44:12.799255 systemd[1]: ignition-quench.service: Deactivated successfully.
Feb 12 19:44:12.799330 systemd[1]: Finished ignition-quench.service.
Feb 12 19:44:12.824722 systemd[1]: Reached target ignition-complete.target.
Feb 12 19:44:12.857239 systemd[1]: Starting initrd-parse-etc.service...
Feb 12 19:44:12.879474 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb 12 19:44:12.879565 systemd[1]: Finished initrd-parse-etc.service.
Feb 12 19:44:12.899280 systemd[1]: Reached target initrd-fs.target.
Feb 12 19:44:12.912119 systemd[1]: Reached target initrd.target.
Feb 12 19:44:12.917265 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
Feb 12 19:44:12.918104 systemd[1]: Starting dracut-pre-pivot.service...
Feb 12 19:44:12.933601 systemd[1]: Finished dracut-pre-pivot.service.
Feb 12 19:44:12.937109 systemd[1]: Starting initrd-cleanup.service...
Feb 12 19:44:12.955468 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Feb 12 19:44:12.955562 systemd[1]: Finished initrd-cleanup.service.
Feb 12 19:44:12.963498 systemd[1]: Stopped target nss-lookup.target.
Feb 12 19:44:12.970124 systemd[1]: Stopped target remote-cryptsetup.target.
Feb 12 19:44:12.975915 systemd[1]: Stopped target timers.target.
Feb 12 19:44:12.981497 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb 12 19:44:12.981544 systemd[1]: Stopped dracut-pre-pivot.service.
Feb 12 19:44:12.989549 systemd[1]: Stopped target initrd.target.
Feb 12 19:44:12.997064 systemd[1]: Stopped target basic.target.
Feb 12 19:44:13.114294 systemd[1]: Stopped target ignition-complete.target.
Feb 12 19:44:13.119467 systemd[1]: Stopped target ignition-diskful.target.
Feb 12 19:44:13.124038 systemd[1]: Stopped target initrd-root-device.target.
Feb 12 19:44:13.128766 systemd[1]: Stopped target remote-fs.target.
Feb 12 19:44:13.132910 systemd[1]: Stopped target remote-fs-pre.target.
Feb 12 19:44:13.137334 systemd[1]: Stopped target sysinit.target.
Feb 12 19:44:13.141212 systemd[1]: Stopped target local-fs.target.
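The "setting preset to enabled" operations translate the config's enabled units into a systemd preset file on the target root, which the later "Populated /etc with preset unit settings." step acts on at first boot. Roughly, the preset produced by this run would read as follows (the file path is illustrative, not taken from this log):

    # Illustrative path, e.g. /sysroot/etc/systemd/system-preset/20-ignition.preset
    enable nvidia.service
    enable prepare-critools.service
    enable prepare-helm.service
    enable prepare-cni-plugins.service
    enable waagent.service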
Feb 12 19:44:13.145360 systemd[1]: Stopped target local-fs-pre.target.
Feb 12 19:44:13.151676 systemd[1]: Stopped target swap.target.
Feb 12 19:44:13.155288 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Feb 12 19:44:13.157860 systemd[1]: Stopped dracut-pre-mount.service.
Feb 12 19:44:13.162000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:44:13.162200 systemd[1]: Stopped target cryptsetup.target.
Feb 12 19:44:13.166360 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Feb 12 19:44:13.168803 systemd[1]: Stopped dracut-initqueue.service.
Feb 12 19:44:13.172000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:44:13.173116 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Feb 12 19:44:13.173174 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
Feb 12 19:44:13.181000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:44:13.181287 systemd[1]: ignition-files.service: Deactivated successfully.
Feb 12 19:44:13.181339 systemd[1]: Stopped ignition-files.service.
Feb 12 19:44:13.187000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:44:13.187802 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Feb 12 19:44:13.187851 systemd[1]: Stopped flatcar-metadata-hostname.service.
Feb 12 19:44:13.195000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:44:13.202000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:44:13.205278 iscsid[810]: iscsid shutting down.
Feb 12 19:44:13.195981 systemd[1]: Stopping ignition-mount.service...
Feb 12 19:44:13.198172 systemd[1]: Stopping iscsid.service...
Feb 12 19:44:13.199969 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Feb 12 19:44:13.200028 systemd[1]: Stopped kmod-static-nodes.service.
Feb 12 19:44:13.203487 systemd[1]: Stopping sysroot-boot.service...
Feb 12 19:44:13.207038 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Feb 12 19:44:13.207113 systemd[1]: Stopped systemd-udev-trigger.service.
Feb 12 19:44:13.215636 ignition[995]: INFO : Ignition 2.14.0
Feb 12 19:44:13.215636 ignition[995]: INFO : Stage: umount
Feb 12 19:44:13.215636 ignition[995]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 12 19:44:13.215636 ignition[995]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Feb 12 19:44:13.215636 ignition[995]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Feb 12 19:44:13.221333 ignition[995]: INFO : umount: umount passed
Feb 12 19:44:13.223000 ignition[995]: INFO : Ignition finished successfully
Feb 12 19:44:13.242000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:44:13.242383 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Feb 12 19:44:13.242453 systemd[1]: Stopped dracut-pre-trigger.service.
Feb 12 19:44:13.249000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:44:13.251407 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Feb 12 19:44:13.254000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:44:13.256000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:44:13.252276 systemd[1]: iscsid.service: Deactivated successfully.
Feb 12 19:44:13.263000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:44:13.252378 systemd[1]: Stopped iscsid.service.
Feb 12 19:44:13.268000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:44:13.254702 systemd[1]: ignition-mount.service: Deactivated successfully.
Feb 12 19:44:13.254796 systemd[1]: Stopped ignition-mount.service.
Feb 12 19:44:13.257154 systemd[1]: ignition-disks.service: Deactivated successfully.
Feb 12 19:44:13.257205 systemd[1]: Stopped ignition-disks.service.
Feb 12 19:44:13.264326 systemd[1]: ignition-kargs.service: Deactivated successfully.
Feb 12 19:44:13.264377 systemd[1]: Stopped ignition-kargs.service.
Feb 12 19:44:13.268265 systemd[1]: ignition-fetch.service: Deactivated successfully.
Feb 12 19:44:13.268313 systemd[1]: Stopped ignition-fetch.service.
Feb 12 19:44:13.286000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:44:13.287244 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Feb 12 19:44:13.289000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:44:13.287297 systemd[1]: Stopped ignition-fetch-offline.service.
Feb 12 19:44:13.289684 systemd[1]: Stopped target paths.target.
Feb 12 19:44:13.294687 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Feb 12 19:44:13.299430 systemd[1]: Stopped systemd-ask-password-console.path.
Feb 12 19:44:13.304371 systemd[1]: Stopped target slices.target.
Feb 12 19:44:13.306344 systemd[1]: Stopped target sockets.target.
Feb 12 19:44:13.310828 systemd[1]: iscsid.socket: Deactivated successfully.
Feb 12 19:44:13.312686 systemd[1]: Closed iscsid.socket.
Feb 12 19:44:13.319869 systemd[1]: ignition-setup.service: Deactivated successfully.
Feb 12 19:44:13.319933 systemd[1]: Stopped ignition-setup.service.
Feb 12 19:44:13.324000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:44:13.326959 systemd[1]: Stopping iscsiuio.service...
Feb 12 19:44:13.330308 systemd[1]: iscsiuio.service: Deactivated successfully.
Feb 12 19:44:13.330450 systemd[1]: Stopped iscsiuio.service.
Feb 12 19:44:13.334000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:44:13.334687 systemd[1]: Stopped target network.target.
Feb 12 19:44:13.338363 systemd[1]: iscsiuio.socket: Deactivated successfully.
Feb 12 19:44:13.338416 systemd[1]: Closed iscsiuio.socket.
Feb 12 19:44:13.346181 systemd[1]: Stopping systemd-networkd.service...
Feb 12 19:44:13.348331 systemd[1]: Stopping systemd-resolved.service...
Feb 12 19:44:13.353438 systemd-networkd[804]: eth0: DHCPv6 lease lost
Feb 12 19:44:13.361414 systemd[1]: systemd-networkd.service: Deactivated successfully.
Feb 12 19:44:13.361678 systemd[1]: Stopped systemd-networkd.service.
Feb 12 19:44:13.366000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:44:13.368549 systemd[1]: systemd-resolved.service: Deactivated successfully.
Feb 12 19:44:13.373000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:44:13.368658 systemd[1]: Stopped systemd-resolved.service.
Feb 12 19:44:13.375000 audit: BPF prog-id=9 op=UNLOAD
Feb 12 19:44:13.375866 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Feb 12 19:44:13.378000 audit: BPF prog-id=6 op=UNLOAD
Feb 12 19:44:13.375914 systemd[1]: Closed systemd-networkd.socket.
Feb 12 19:44:13.381124 systemd[1]: Stopping network-cleanup.service...
Feb 12 19:44:13.386261 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Feb 12 19:44:13.391000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:44:13.386319 systemd[1]: Stopped parse-ip-for-networkd.service.
Feb 12 19:44:13.402464 kernel: kauditd_printk_skb: 26 callbacks suppressed
Feb 12 19:44:13.402495 kernel: audit: type=1131 audit(1707767053.398:70): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:44:13.393000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:44:13.398000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:44:13.391634 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 12 19:44:13.391684 systemd[1]: Stopped systemd-sysctl.service.
Feb 12 19:44:13.393903 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Feb 12 19:44:13.393942 systemd[1]: Stopped systemd-modules-load.service.
Feb 12 19:44:13.398358 systemd[1]: Stopping systemd-udevd.service...
Feb 12 19:44:13.427795 systemd[1]: systemd-udevd.service: Deactivated successfully.
Feb 12 19:44:13.427946 systemd[1]: Stopped systemd-udevd.service.
Feb 12 19:44:13.432000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:44:13.435434 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Feb 12 19:44:13.435480 systemd[1]: Closed systemd-udevd-control.socket.
Feb 12 19:44:13.453546 kernel: audit: type=1131 audit(1707767053.432:71): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:44:13.453693 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Feb 12 19:44:13.453754 systemd[1]: Closed systemd-udevd-kernel.socket.
Feb 12 19:44:13.460416 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Feb 12 19:44:13.460474 systemd[1]: Stopped dracut-pre-udev.service.
Feb 12 19:44:13.468000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:44:13.469033 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Feb 12 19:44:13.502528 kernel: audit: type=1131 audit(1707767053.468:72): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:44:13.502566 kernel: audit: type=1131 audit(1707767053.481:73): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:44:13.481000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:44:13.469080 systemd[1]: Stopped dracut-cmdline.service.
Feb 12 19:44:13.539522 kernel: audit: type=1131 audit(1707767053.481:74): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:44:13.539548 kernel: audit: type=1131 audit(1707767053.481:75): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:44:13.539561 kernel: hv_netvsc 000d3ab3-caa7-000d-3ab3-caa7000d3ab3 eth0: Data path switched from VF: enP7782s1
Feb 12 19:44:13.539725 kernel: audit: type=1130 audit(1707767053.537:76): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:44:13.481000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:44:13.481000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:44:13.537000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:44:13.482806 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 12 19:44:13.539000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:44:13.565191 kernel: audit: type=1131 audit(1707767053.539:77): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:44:13.482847 systemd[1]: Stopped dracut-cmdline-ask.service.
Feb 12 19:44:13.484148 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
Feb 12 19:44:13.484306 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 12 19:44:13.484351 systemd[1]: Stopped systemd-vconsole-setup.service.
Feb 12 19:44:13.491557 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Feb 12 19:44:13.491650 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
Feb 12 19:44:13.588612 systemd[1]: network-cleanup.service: Deactivated successfully.
Feb 12 19:44:13.588820 systemd[1]: Stopped network-cleanup.service.
Feb 12 19:44:13.594000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:44:13.608406 kernel: audit: type=1131 audit(1707767053.594:78): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:44:13.674564 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Feb 12 19:44:14.294067 systemd[1]: sysroot-boot.service: Deactivated successfully.
Feb 12 19:44:14.296000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:44:14.294180 systemd[1]: Stopped sysroot-boot.service.
Feb 12 19:44:14.316558 kernel: audit: type=1131 audit(1707767054.296:79): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:44:14.296788 systemd[1]: Reached target initrd-switch-root.target.
Feb 12 19:44:14.316000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:44:14.312444 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Feb 12 19:44:14.312503 systemd[1]: Stopped initrd-setup-root.service.
Feb 12 19:44:14.317326 systemd[1]: Starting initrd-switch-root.service...
Feb 12 19:44:14.331978 systemd[1]: Switching root.
Feb 12 19:44:14.354663 systemd-journald[183]: Journal stopped
Feb 12 19:44:19.069532 systemd-journald[183]: Received SIGTERM from PID 1 (systemd).
Feb 12 19:44:19.069564 kernel: SELinux: Class mctp_socket not defined in policy.
Feb 12 19:44:19.069576 kernel: SELinux: Class anon_inode not defined in policy.
Feb 12 19:44:19.069587 kernel: SELinux: the above unknown classes and permissions will be allowed
Feb 12 19:44:19.069596 kernel: SELinux: policy capability network_peer_controls=1
Feb 12 19:44:19.069606 kernel: SELinux: policy capability open_perms=1
Feb 12 19:44:19.069617 kernel: SELinux: policy capability extended_socket_class=1
Feb 12 19:44:19.069628 kernel: SELinux: policy capability always_check_network=0
Feb 12 19:44:19.069637 kernel: SELinux: policy capability cgroup_seclabel=1
Feb 12 19:44:19.069647 kernel: SELinux: policy capability nnp_nosuid_transition=1
Feb 12 19:44:19.069655 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Feb 12 19:44:19.069666 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Feb 12 19:44:19.069675 systemd[1]: Successfully loaded SELinux policy in 119.637ms.
Feb 12 19:44:19.069687 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 12.351ms.
Feb 12 19:44:19.069701 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Feb 12 19:44:19.069712 systemd[1]: Detected virtualization microsoft.
Feb 12 19:44:19.069724 systemd[1]: Detected architecture x86-64.
Feb 12 19:44:19.069733 systemd[1]: Detected first boot.
Feb 12 19:44:19.069746 systemd[1]: Hostname set to <ci-3510.3.2-a-e615f4b643>.
Feb 12 19:44:19.069756 systemd[1]: Initializing machine ID from random generator.
Feb 12 19:44:19.069768 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
Feb 12 19:44:19.069777 systemd[1]: Populated /etc with preset unit settings.
Feb 12 19:44:19.069790 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Feb 12 19:44:19.069802 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 12 19:44:19.069814 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 12 19:44:19.069825 kernel: kauditd_printk_skb: 18 callbacks suppressed
Feb 12 19:44:19.069836 kernel: audit: type=1334 audit(1707767058.601:91): prog-id=12 op=LOAD
Feb 12 19:44:19.069847 kernel: audit: type=1334 audit(1707767058.601:92): prog-id=3 op=UNLOAD
Feb 12 19:44:19.069856 kernel: audit: type=1334 audit(1707767058.608:93): prog-id=13 op=LOAD
Feb 12 19:44:19.069864 kernel: audit: type=1334 audit(1707767058.613:94): prog-id=14 op=LOAD
Feb 12 19:44:19.069875 kernel: audit: type=1334 audit(1707767058.613:95): prog-id=4 op=UNLOAD
Feb 12 19:44:19.069885 kernel: audit: type=1334 audit(1707767058.613:96): prog-id=5 op=UNLOAD
Feb 12 19:44:19.069895 kernel: audit: type=1334 audit(1707767058.618:97): prog-id=15 op=LOAD
Feb 12 19:44:19.069905 kernel: audit: type=1334 audit(1707767058.618:98): prog-id=12 op=UNLOAD
Feb 12 19:44:19.069916 kernel: audit: type=1334 audit(1707767058.636:99): prog-id=16 op=LOAD
Feb 12 19:44:19.069925 kernel: audit: type=1334 audit(1707767058.646:100): prog-id=17 op=LOAD
Feb 12 19:44:19.069936 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Feb 12 19:44:19.069945 systemd[1]: Stopped initrd-switch-root.service.
Feb 12 19:44:19.069958 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Feb 12 19:44:19.070071 systemd[1]: Created slice system-addon\x2dconfig.slice.
Feb 12 19:44:19.070089 systemd[1]: Created slice system-addon\x2drun.slice.
Feb 12 19:44:19.070102 systemd[1]: Created slice system-getty.slice.
Feb 12 19:44:19.070111 systemd[1]: Created slice system-modprobe.slice.
Feb 12 19:44:19.070121 systemd[1]: Created slice system-serial\x2dgetty.slice.
Feb 12 19:44:19.070131 systemd[1]: Created slice system-system\x2dcloudinit.slice.
Feb 12 19:44:19.070140 systemd[1]: Created slice system-systemd\x2dfsck.slice.
Feb 12 19:44:19.070149 systemd[1]: Created slice user.slice.
Feb 12 19:44:19.070159 systemd[1]: Started systemd-ask-password-console.path.
Feb 12 19:44:19.070169 systemd[1]: Started systemd-ask-password-wall.path.
Feb 12 19:44:19.070180 systemd[1]: Set up automount boot.automount.
Feb 12 19:44:19.070192 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
Feb 12 19:44:19.070204 systemd[1]: Stopped target initrd-switch-root.target.
Feb 12 19:44:19.070214 systemd[1]: Stopped target initrd-fs.target.
Feb 12 19:44:19.070226 systemd[1]: Stopped target initrd-root-fs.target.
Feb 12 19:44:19.070235 systemd[1]: Reached target integritysetup.target.
Feb 12 19:44:19.070247 systemd[1]: Reached target remote-cryptsetup.target.
Feb 12 19:44:19.070259 systemd[1]: Reached target remote-fs.target.
Feb 12 19:44:19.070271 systemd[1]: Reached target slices.target.
Feb 12 19:44:19.070282 systemd[1]: Reached target swap.target.
Feb 12 19:44:19.070292 systemd[1]: Reached target torcx.target.
Feb 12 19:44:19.070305 systemd[1]: Reached target veritysetup.target.
Feb 12 19:44:19.070315 systemd[1]: Listening on systemd-coredump.socket.
Feb 12 19:44:19.070328 systemd[1]: Listening on systemd-initctl.socket.
Feb 12 19:44:19.070339 systemd[1]: Listening on systemd-networkd.socket.
Feb 12 19:44:19.070353 systemd[1]: Listening on systemd-udevd-control.socket.
Feb 12 19:44:19.070365 systemd[1]: Listening on systemd-udevd-kernel.socket.
Feb 12 19:44:19.070375 systemd[1]: Listening on systemd-userdbd.socket.
Feb 12 19:44:19.070387 systemd[1]: Mounting dev-hugepages.mount...
Feb 12 19:44:19.070410 systemd[1]: Mounting dev-mqueue.mount...
Feb 12 19:44:19.070421 systemd[1]: Mounting media.mount...
Feb 12 19:44:19.070432 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 12 19:44:19.070446 systemd[1]: Mounting sys-kernel-debug.mount...
Feb 12 19:44:19.070457 systemd[1]: Mounting sys-kernel-tracing.mount...
Feb 12 19:44:19.070469 systemd[1]: Mounting tmp.mount...
Feb 12 19:44:19.070479 systemd[1]: Starting flatcar-tmpfiles.service...
Feb 12 19:44:19.070491 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Feb 12 19:44:19.070502 systemd[1]: Starting kmod-static-nodes.service...
Feb 12 19:44:19.070513 systemd[1]: Starting modprobe@configfs.service...
Feb 12 19:44:19.075172 systemd[1]: Starting modprobe@dm_mod.service...
Feb 12 19:44:19.075201 systemd[1]: Starting modprobe@drm.service...
Feb 12 19:44:19.075219 systemd[1]: Starting modprobe@efi_pstore.service...
Feb 12 19:44:19.075231 systemd[1]: Starting modprobe@fuse.service...
Feb 12 19:44:19.075241 systemd[1]: Starting modprobe@loop.service...
Feb 12 19:44:19.075254 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Feb 12 19:44:19.075266 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Feb 12 19:44:19.075276 systemd[1]: Stopped systemd-fsck-root.service.
Feb 12 19:44:19.075288 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Feb 12 19:44:19.075299 kernel: fuse: init (API version 7.34)
Feb 12 19:44:19.075311 systemd[1]: Stopped systemd-fsck-usr.service.
Feb 12 19:44:19.075323 systemd[1]: Stopped systemd-journald.service.
Feb 12 19:44:19.075335 kernel: loop: module loaded
Feb 12 19:44:19.075347 systemd[1]: Starting systemd-journald.service...
Feb 12 19:44:19.075357 systemd[1]: Starting systemd-modules-load.service...
Feb 12 19:44:19.075367 systemd[1]: Starting systemd-network-generator.service...
Feb 12 19:44:19.075379 systemd[1]: Starting systemd-remount-fs.service...
Feb 12 19:44:19.075440 systemd[1]: Starting systemd-udev-trigger.service...
Feb 12 19:44:19.075453 systemd[1]: verity-setup.service: Deactivated successfully.
Feb 12 19:44:19.075464 systemd[1]: Stopped verity-setup.service.
Feb 12 19:44:19.075479 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 12 19:44:19.075490 systemd[1]: Mounted dev-hugepages.mount.
Feb 12 19:44:19.075504 systemd-journald[1132]: Journal started
Feb 12 19:44:19.075554 systemd-journald[1132]: Runtime Journal (/run/log/journal/85cc4cd41bc94af5a8f71017bc8bfb19) is 8.0M, max 159.0M, 151.0M free.
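The journald line above reports the volatile journal on /run at 8.0M used against a 159.0M cap; those caps are derived from the size of the backing tmpfs, but they can also be pinned explicitly with a journald.conf drop-in along these lines (file name and values illustrative, not taken from this machine):

    # e.g. /etc/systemd/journald.conf.d/size.conf
    [Journal]
    # Cap the volatile journal kept under /run/log/journal.
    RuntimeMaxUse=64M
    # Always leave this much of the backing tmpfs free.
    RuntimeKeepFree=16M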
Feb 12 19:44:14.939000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1
Feb 12 19:44:15.144000 audit[1]: AVC avc: denied { integrity } for pid=1 comm="systemd" lockdown_reason="/dev/mem,kmem,port" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1
Feb 12 19:44:15.149000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Feb 12 19:44:15.149000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Feb 12 19:44:15.149000 audit: BPF prog-id=10 op=LOAD
Feb 12 19:44:15.149000 audit: BPF prog-id=10 op=UNLOAD
Feb 12 19:44:15.149000 audit: BPF prog-id=11 op=LOAD
Feb 12 19:44:15.149000 audit: BPF prog-id=11 op=UNLOAD
Feb 12 19:44:15.503000 audit[1030]: AVC avc: denied { associate } for pid=1030 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023"
Feb 12 19:44:15.503000 audit[1030]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c00014d89c a1=c0000cedf8 a2=c0000d7ac0 a3=32 items=0 ppid=1013 pid=1030 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 12 19:44:15.503000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Feb 12 19:44:15.511000 audit[1030]: AVC avc: denied { associate } for pid=1030 comm="torcx-generator" name="lib" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1
Feb 12 19:44:15.511000 audit[1030]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c00014d975 a2=1ed a3=0 items=2 ppid=1013 pid=1030 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 12 19:44:15.511000 audit: CWD cwd="/"
Feb 12 19:44:15.511000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 19:44:15.511000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 19:44:15.511000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Feb 12 19:44:18.601000 audit: BPF prog-id=12 op=LOAD
Feb 12 19:44:18.601000 audit: BPF prog-id=3 op=UNLOAD
Feb 12 19:44:18.608000 audit: BPF prog-id=13 op=LOAD
Feb 12 19:44:18.613000 audit: BPF prog-id=14 op=LOAD
Feb 12 19:44:18.613000 audit: BPF prog-id=4 op=UNLOAD
Feb 12 19:44:18.613000 audit: BPF prog-id=5 op=UNLOAD
Feb 12 19:44:18.618000 audit: BPF prog-id=15 op=LOAD
Feb 12 19:44:18.618000 audit: BPF prog-id=12 op=UNLOAD
Feb 12 19:44:18.636000 audit: BPF prog-id=16 op=LOAD
Feb 12 19:44:18.646000 audit: BPF prog-id=17 op=LOAD
Feb 12 19:44:18.646000 audit: BPF prog-id=13 op=UNLOAD
Feb 12 19:44:18.646000 audit: BPF prog-id=14 op=UNLOAD
Feb 12 19:44:18.651000 audit: BPF prog-id=18 op=LOAD
Feb 12 19:44:18.651000 audit: BPF prog-id=15 op=UNLOAD
Feb 12 19:44:18.656000 audit: BPF prog-id=19 op=LOAD
Feb 12 19:44:18.656000 audit: BPF prog-id=20 op=LOAD
Feb 12 19:44:18.656000 audit: BPF prog-id=16 op=UNLOAD
Feb 12 19:44:18.656000 audit: BPF prog-id=17 op=UNLOAD
Feb 12 19:44:18.657000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:44:18.668000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:44:18.668000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:44:18.674000 audit: BPF prog-id=18 op=UNLOAD
Feb 12 19:44:18.986000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:44:19.000000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:44:19.006000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:44:19.006000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:44:19.010000 audit: BPF prog-id=21 op=LOAD
Feb 12 19:44:19.010000 audit: BPF prog-id=22 op=LOAD
Feb 12 19:44:19.010000 audit: BPF prog-id=23 op=LOAD
Feb 12 19:44:19.010000 audit: BPF prog-id=19 op=UNLOAD
Feb 12 19:44:19.010000 audit: BPF prog-id=20 op=UNLOAD
Feb 12 19:44:19.064000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 19:44:19.064000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Feb 12 19:44:19.064000 audit[1132]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=3 a1=7fff6edc9320 a2=4000 a3=7fff6edc93bc items=0 ppid=1 pid=1132 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 12 19:44:19.064000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
Feb 12 19:44:15.498271 /usr/lib/systemd/system-generators/torcx-generator[1030]: time="2024-02-12T19:44:15Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]"
Feb 12 19:44:18.600693 systemd[1]: Queued start job for default target multi-user.target.
Feb 12 19:44:15.498873 /usr/lib/systemd/system-generators/torcx-generator[1030]: time="2024-02-12T19:44:15Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Feb 12 19:44:18.657188 systemd[1]: systemd-journald.service: Deactivated successfully.
Feb 12 19:44:15.498894 /usr/lib/systemd/system-generators/torcx-generator[1030]: time="2024-02-12T19:44:15Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Feb 12 19:44:15.498932 /usr/lib/systemd/system-generators/torcx-generator[1030]: time="2024-02-12T19:44:15Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12"
Feb 12 19:44:15.498944 /usr/lib/systemd/system-generators/torcx-generator[1030]: time="2024-02-12T19:44:15Z" level=debug msg="skipped missing lower profile" missing profile=oem
Feb 12 19:44:15.498994 /usr/lib/systemd/system-generators/torcx-generator[1030]: time="2024-02-12T19:44:15Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory"
Feb 12 19:44:15.499009 /usr/lib/systemd/system-generators/torcx-generator[1030]: time="2024-02-12T19:44:15Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)=
Feb 12 19:44:15.499214 /usr/lib/systemd/system-generators/torcx-generator[1030]: time="2024-02-12T19:44:15Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack
Feb 12 19:44:15.499261 /usr/lib/systemd/system-generators/torcx-generator[1030]: time="2024-02-12T19:44:15Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Feb 12 19:44:15.499277 /usr/lib/systemd/system-generators/torcx-generator[1030]: time="2024-02-12T19:44:15Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Feb 12 19:44:15.499692 /usr/lib/systemd/system-generators/torcx-generator[1030]: time="2024-02-12T19:44:15Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10
Feb 12 19:44:15.499727 /usr/lib/systemd/system-generators/torcx-generator[1030]: time="2024-02-12T19:44:15Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl
Feb 12 19:44:15.499748 /usr/lib/systemd/system-generators/torcx-generator[1030]:
time="2024-02-12T19:44:15Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.2: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.2 Feb 12 19:44:15.499763 /usr/lib/systemd/system-generators/torcx-generator[1030]: time="2024-02-12T19:44:15Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Feb 12 19:44:15.499781 /usr/lib/systemd/system-generators/torcx-generator[1030]: time="2024-02-12T19:44:15Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.2: no such file or directory" path=/var/lib/torcx/store/3510.3.2 Feb 12 19:44:15.499795 /usr/lib/systemd/system-generators/torcx-generator[1030]: time="2024-02-12T19:44:15Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Feb 12 19:44:18.045437 /usr/lib/systemd/system-generators/torcx-generator[1030]: time="2024-02-12T19:44:18Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 12 19:44:18.045671 /usr/lib/systemd/system-generators/torcx-generator[1030]: time="2024-02-12T19:44:18Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 12 19:44:18.045793 /usr/lib/systemd/system-generators/torcx-generator[1030]: time="2024-02-12T19:44:18Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 12 19:44:18.045959 /usr/lib/systemd/system-generators/torcx-generator[1030]: time="2024-02-12T19:44:18Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 12 19:44:18.046005 /usr/lib/systemd/system-generators/torcx-generator[1030]: time="2024-02-12T19:44:18Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Feb 12 19:44:18.046058 /usr/lib/systemd/system-generators/torcx-generator[1030]: time="2024-02-12T19:44:18Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Feb 12 19:44:19.086716 systemd[1]: Started systemd-journald.service. Feb 12 19:44:19.086000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:44:19.087573 systemd[1]: Mounted dev-mqueue.mount. Feb 12 19:44:19.089713 systemd[1]: Mounted media.mount. Feb 12 19:44:19.091758 systemd[1]: Mounted sys-kernel-debug.mount. Feb 12 19:44:19.094128 systemd[1]: Mounted sys-kernel-tracing.mount. Feb 12 19:44:19.096679 systemd[1]: Mounted tmp.mount. Feb 12 19:44:19.099086 systemd[1]: Finished flatcar-tmpfiles.service. 
Feb 12 19:44:19.101000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:44:19.101819 systemd[1]: Finished kmod-static-nodes.service. Feb 12 19:44:19.103000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:44:19.104613 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 12 19:44:19.104821 systemd[1]: Finished modprobe@configfs.service. Feb 12 19:44:19.106000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:44:19.106000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:44:19.107552 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 12 19:44:19.107781 systemd[1]: Finished modprobe@dm_mod.service. Feb 12 19:44:19.109000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:44:19.109000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:44:19.110490 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 12 19:44:19.110643 systemd[1]: Finished modprobe@drm.service. Feb 12 19:44:19.112000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:44:19.112000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:44:19.113417 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 12 19:44:19.113562 systemd[1]: Finished modprobe@efi_pstore.service. Feb 12 19:44:19.115000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:44:19.115000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:44:19.116499 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 12 19:44:19.116649 systemd[1]: Finished modprobe@fuse.service. Feb 12 19:44:19.118000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 19:44:19.118000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:44:19.119282 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 12 19:44:19.124493 systemd[1]: Finished modprobe@loop.service. Feb 12 19:44:19.126000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:44:19.126000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:44:19.126987 systemd[1]: Finished systemd-modules-load.service. Feb 12 19:44:19.129000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:44:19.129691 systemd[1]: Finished systemd-network-generator.service. Feb 12 19:44:19.132000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:44:19.132551 systemd[1]: Finished systemd-remount-fs.service. Feb 12 19:44:19.134000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:44:19.135592 systemd[1]: Reached target network-pre.target. Feb 12 19:44:19.139373 systemd[1]: Mounting sys-fs-fuse-connections.mount... Feb 12 19:44:19.142761 systemd[1]: Mounting sys-kernel-config.mount... Feb 12 19:44:19.148434 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 12 19:44:19.150331 systemd[1]: Starting systemd-hwdb-update.service... Feb 12 19:44:19.155296 systemd[1]: Starting systemd-journal-flush.service... Feb 12 19:44:19.162497 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 12 19:44:19.163482 systemd[1]: Starting systemd-random-seed.service... Feb 12 19:44:19.165603 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Feb 12 19:44:19.166785 systemd[1]: Starting systemd-sysctl.service... Feb 12 19:44:19.170846 systemd[1]: Starting systemd-sysusers.service... Feb 12 19:44:19.177281 systemd[1]: Finished systemd-udev-trigger.service. Feb 12 19:44:19.179000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:44:19.181472 systemd[1]: Mounted sys-fs-fuse-connections.mount. Feb 12 19:44:19.184605 systemd[1]: Mounted sys-kernel-config.mount. Feb 12 19:44:19.188820 systemd[1]: Starting systemd-udev-settle.service... Feb 12 19:44:19.192745 systemd-journald[1132]: Time spent on flushing to /var/log/journal/85cc4cd41bc94af5a8f71017bc8bfb19 is 32.728ms for 1200 entries. 
Feb 12 19:44:19.192745 systemd-journald[1132]: System Journal (/var/log/journal/85cc4cd41bc94af5a8f71017bc8bfb19) is 8.0M, max 2.6G, 2.6G free. Feb 12 19:44:19.257368 systemd-journald[1132]: Received client request to flush runtime journal. Feb 12 19:44:19.207000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:44:19.224000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:44:19.260288 udevadm[1154]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Feb 12 19:44:19.204875 systemd[1]: Finished systemd-random-seed.service. Feb 12 19:44:19.207228 systemd[1]: Reached target first-boot-complete.target. Feb 12 19:44:19.222431 systemd[1]: Finished systemd-sysctl.service. Feb 12 19:44:19.258484 systemd[1]: Finished systemd-journal-flush.service. Feb 12 19:44:19.261000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:44:19.352157 systemd[1]: Finished systemd-sysusers.service. Feb 12 19:44:19.354000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:44:19.766574 systemd[1]: Finished systemd-hwdb-update.service. Feb 12 19:44:19.768000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:44:19.769000 audit: BPF prog-id=24 op=LOAD Feb 12 19:44:19.769000 audit: BPF prog-id=25 op=LOAD Feb 12 19:44:19.769000 audit: BPF prog-id=7 op=UNLOAD Feb 12 19:44:19.769000 audit: BPF prog-id=8 op=UNLOAD Feb 12 19:44:19.770684 systemd[1]: Starting systemd-udevd.service... Feb 12 19:44:19.788007 systemd-udevd[1157]: Using default interface naming scheme 'v252'. Feb 12 19:44:19.872000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:44:19.874000 audit: BPF prog-id=26 op=LOAD Feb 12 19:44:19.870379 systemd[1]: Started systemd-udevd.service. Feb 12 19:44:19.875482 systemd[1]: Starting systemd-networkd.service... Feb 12 19:44:19.901000 audit: BPF prog-id=27 op=LOAD Feb 12 19:44:19.901000 audit: BPF prog-id=28 op=LOAD Feb 12 19:44:19.901000 audit: BPF prog-id=29 op=LOAD Feb 12 19:44:19.902659 systemd[1]: Starting systemd-userdbd.service... Feb 12 19:44:19.920227 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped. Feb 12 19:44:19.972000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:44:19.970031 systemd[1]: Started systemd-userdbd.service. 
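The audit stream records every BPF program systemd loads or unloads as `audit: BPF prog-id=N op=LOAD|UNLOAD`. Replaying those events against a set shows which program IDs are still live at any point; a sketch using event strings taken verbatim from the lines above:

```python
import re

# Sample events copied from the audit records above.
events = [
    "audit: BPF prog-id=24 op=LOAD",
    "audit: BPF prog-id=25 op=LOAD",
    "audit: BPF prog-id=7 op=UNLOAD",
    "audit: BPF prog-id=8 op=UNLOAD",
]

live = set()
for line in events:
    m = re.search(r"BPF prog-id=(\d+) op=(LOAD|UNLOAD)", line)
    if not m:
        continue
    prog_id, op = int(m.group(1)), m.group(2)
    # LOAD adds the id to the live set, UNLOAD retires it.
    (live.add if op == "LOAD" else live.discard)(prog_id)

print(sorted(live))  # [24, 25]
```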
Feb 12 19:44:19.978533 kernel: mousedev: PS/2 mouse device common for all mice Feb 12 19:44:20.023421 kernel: hv_vmbus: registering driver hyperv_fb Feb 12 19:44:20.014000 audit[1159]: AVC avc: denied { confidentiality } for pid=1159 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Feb 12 19:44:20.046233 kernel: hv_utils: Registering HyperV Utility Driver Feb 12 19:44:20.046313 kernel: hv_vmbus: registering driver hv_utils Feb 12 19:44:20.046356 kernel: hv_vmbus: registering driver hv_balloon Feb 12 19:44:20.065156 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Feb 12 19:44:20.065245 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Feb 12 19:44:20.065281 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Feb 12 19:44:20.073355 kernel: Console: switching to colour dummy device 80x25 Feb 12 19:44:20.893516 kernel: hv_utils: Shutdown IC version 3.2 Feb 12 19:44:20.893577 kernel: hv_utils: TimeSync IC version 4.0 Feb 12 19:44:20.893602 kernel: hv_utils: Heartbeat IC version 3.0 Feb 12 19:44:20.900715 kernel: Console: switching to colour frame buffer device 128x48 Feb 12 19:44:20.014000 audit[1159]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=56109673c190 a1=f884 a2=7ff102414bc5 a3=5 items=12 ppid=1157 pid=1159 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:44:20.014000 audit: CWD cwd="/" Feb 12 19:44:20.014000 audit: PATH item=0 name=(null) inode=235 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:44:20.014000 audit: PATH item=1 name=(null) inode=15326 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:44:20.014000 audit: PATH item=2 name=(null) inode=15326 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:44:20.014000 audit: PATH item=3 name=(null) inode=15327 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:44:20.014000 audit: PATH item=4 name=(null) inode=15326 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:44:20.014000 audit: PATH item=5 name=(null) inode=15328 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:44:20.014000 audit: PATH item=6 name=(null) inode=15326 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:44:20.014000 audit: PATH item=7 name=(null) inode=15329 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:44:20.014000 audit: PATH item=8 name=(null) inode=15326 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 
cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:44:20.014000 audit: PATH item=9 name=(null) inode=15330 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:44:20.014000 audit: PATH item=10 name=(null) inode=15326 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:44:20.014000 audit: PATH item=11 name=(null) inode=15331 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 19:44:20.014000 audit: PROCTITLE proctitle="(udev-worker)" Feb 12 19:44:20.947011 systemd-networkd[1168]: lo: Link UP Feb 12 19:44:20.947448 systemd-networkd[1168]: lo: Gained carrier Feb 12 19:44:20.948652 systemd-networkd[1168]: Enumeration completed Feb 12 19:44:20.949489 systemd[1]: Started systemd-networkd.service. Feb 12 19:44:20.950146 systemd-networkd[1168]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 12 19:44:20.951000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:44:20.954035 systemd[1]: Starting systemd-networkd-wait-online.service... Feb 12 19:44:20.984711 kernel: BTRFS info: devid 1 device path /dev/disk/by-label/OEM changed to /dev/sda6 scanned by (udev-worker) (1159) Feb 12 19:44:21.007710 kernel: mlx5_core 1e66:00:02.0 enP7782s1: Link up Feb 12 19:44:21.045715 kernel: hv_netvsc 000d3ab3-caa7-000d-3ab3-caa7000d3ab3 eth0: Data path switched to VF: enP7782s1 Feb 12 19:44:21.050869 systemd-networkd[1168]: enP7782s1: Link UP Feb 12 19:44:21.051441 systemd-networkd[1168]: eth0: Link UP Feb 12 19:44:21.051453 systemd-networkd[1168]: eth0: Gained carrier Feb 12 19:44:21.056342 systemd-networkd[1168]: enP7782s1: Gained carrier Feb 12 19:44:21.060931 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Feb 12 19:44:21.085829 systemd-networkd[1168]: eth0: DHCPv4 address 10.200.8.24/24, gateway 10.200.8.1 acquired from 168.63.129.16 Feb 12 19:44:21.163714 kernel: KVM: vmx: using Hyper-V Enlightened VMCS Feb 12 19:44:21.183078 systemd[1]: Finished systemd-udev-settle.service. Feb 12 19:44:21.184000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:44:21.186915 systemd[1]: Starting lvm2-activation-early.service... Feb 12 19:44:21.264731 lvm[1235]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 12 19:44:21.287649 systemd[1]: Finished lvm2-activation-early.service. Feb 12 19:44:21.289000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:44:21.290369 systemd[1]: Reached target cryptsetup.target. Feb 12 19:44:21.294083 systemd[1]: Starting lvm2-activation.service... Feb 12 19:44:21.298605 lvm[1236]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 12 19:44:21.322574 systemd[1]: Finished lvm2-activation.service. 
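systemd-networkd's lease announcement above packs address, prefix length, gateway, and DHCP server into a single line. A sketch that splits them out with the stdlib ipaddress module and checks that the gateway sits inside the acquired subnet (168.63.129.16 is Azure's well-known off-subnet wire-server address):

```python
import ipaddress
import re

# Lease line as logged by systemd-networkd above.
line = ("eth0: DHCPv4 address 10.200.8.24/24, gateway 10.200.8.1 "
        "acquired from 168.63.129.16")

m = re.search(r"DHCPv4 address (\S+), gateway (\S+) acquired from (\S+)", line)
iface = ipaddress.ip_interface(m.group(1))    # address + prefix
gateway = ipaddress.ip_address(m.group(2))
server = ipaddress.ip_address(m.group(3))

print(iface.network)             # 10.200.8.0/24
print(gateway in iface.network)  # True: gateway is on-link
print(server in iface.network)   # False: DHCP served from off-subnet
```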
Feb 12 19:44:21.324000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:44:21.325260 systemd[1]: Reached target local-fs-pre.target. Feb 12 19:44:21.327836 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 12 19:44:21.327880 systemd[1]: Reached target local-fs.target. Feb 12 19:44:21.330293 systemd[1]: Reached target machines.target. Feb 12 19:44:21.333795 systemd[1]: Starting ldconfig.service... Feb 12 19:44:21.335946 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Feb 12 19:44:21.336049 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 12 19:44:21.337171 systemd[1]: Starting systemd-boot-update.service... Feb 12 19:44:21.340483 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Feb 12 19:44:21.344545 systemd[1]: Starting systemd-machine-id-commit.service... Feb 12 19:44:21.347204 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met. Feb 12 19:44:21.347314 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met. Feb 12 19:44:21.348745 systemd[1]: Starting systemd-tmpfiles-setup.service... Feb 12 19:44:21.362422 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1238 (bootctl) Feb 12 19:44:21.363768 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Feb 12 19:44:22.052272 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Feb 12 19:44:22.055000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:44:22.167839 systemd-tmpfiles[1241]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Feb 12 19:44:22.351681 systemd-tmpfiles[1241]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 12 19:44:22.844939 systemd-networkd[1168]: eth0: Gained IPv6LL Feb 12 19:44:22.850515 systemd[1]: Finished systemd-networkd-wait-online.service. Feb 12 19:44:22.849000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:44:23.005334 systemd-tmpfiles[1241]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 12 19:44:23.075426 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 12 19:44:23.076101 systemd[1]: Finished systemd-machine-id-commit.service. Feb 12 19:44:23.078000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 19:44:23.201529 systemd-fsck[1246]: fsck.fat 4.2 (2021-01-31) Feb 12 19:44:23.201529 systemd-fsck[1246]: /dev/sda1: 789 files, 115339/258078 clusters Feb 12 19:44:23.201979 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Feb 12 19:44:23.205000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:44:23.207370 systemd[1]: Mounting boot.mount... Feb 12 19:44:23.222673 systemd[1]: Mounted boot.mount. Feb 12 19:44:23.236563 systemd[1]: Finished systemd-boot-update.service. Feb 12 19:44:23.238000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:44:23.329104 systemd[1]: Finished systemd-tmpfiles-setup.service. Feb 12 19:44:23.331000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:44:23.333403 systemd[1]: Starting audit-rules.service... Feb 12 19:44:23.336733 systemd[1]: Starting clean-ca-certificates.service... Feb 12 19:44:23.340626 systemd[1]: Starting systemd-journal-catalog-update.service... Feb 12 19:44:23.343000 audit: BPF prog-id=30 op=LOAD Feb 12 19:44:23.347000 audit: BPF prog-id=31 op=LOAD Feb 12 19:44:23.345412 systemd[1]: Starting systemd-resolved.service... Feb 12 19:44:23.349774 systemd[1]: Starting systemd-timesyncd.service... Feb 12 19:44:23.353958 systemd[1]: Starting systemd-update-utmp.service... Feb 12 19:44:23.369000 audit[1258]: SYSTEM_BOOT pid=1258 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Feb 12 19:44:23.373270 systemd[1]: Finished systemd-update-utmp.service. Feb 12 19:44:23.374000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:44:23.386320 systemd[1]: Finished clean-ca-certificates.service. Feb 12 19:44:23.388000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:44:23.388960 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 12 19:44:23.416423 systemd[1]: Finished systemd-journal-catalog-update.service. Feb 12 19:44:23.418000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 19:44:23.461551 systemd[1]: Started systemd-timesyncd.service. Feb 12 19:44:23.463000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 19:44:23.464406 systemd[1]: Reached target time-set.target. Feb 12 19:44:23.487000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Feb 12 19:44:23.487000 audit[1273]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffdcf94ece0 a2=420 a3=0 items=0 ppid=1252 pid=1273 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 19:44:23.487000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Feb 12 19:44:23.488368 augenrules[1273]: No rules Feb 12 19:44:23.488976 systemd[1]: Finished audit-rules.service. Feb 12 19:44:23.490636 systemd-resolved[1256]: Positive Trust Anchors: Feb 12 19:44:23.490907 systemd-resolved[1256]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 12 19:44:23.490991 systemd-resolved[1256]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Feb 12 19:44:23.520867 systemd-resolved[1256]: Using system hostname 'ci-3510.3.2-a-e615f4b643'. Feb 12 19:44:23.522313 systemd[1]: Started systemd-resolved.service. Feb 12 19:44:23.524980 systemd[1]: Reached target network.target. Feb 12 19:44:23.527213 systemd[1]: Reached target network-online.target. Feb 12 19:44:23.528503 systemd-timesyncd[1257]: Contacted time server 85.91.1.164:123 (0.flatcar.pool.ntp.org). Feb 12 19:44:23.528817 systemd-timesyncd[1257]: Initial clock synchronization to Mon 2024-02-12 19:44:23.530625 UTC. Feb 12 19:44:23.529642 systemd[1]: Reached target nss-lookup.target. Feb 12 19:44:25.003699 ldconfig[1237]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 12 19:44:25.015310 systemd[1]: Finished ldconfig.service. Feb 12 19:44:25.019203 systemd[1]: Starting systemd-update-done.service... Feb 12 19:44:25.029610 systemd[1]: Finished systemd-update-done.service. Feb 12 19:44:25.032241 systemd[1]: Reached target sysinit.target. Feb 12 19:44:25.034362 systemd[1]: Started motdgen.path. Feb 12 19:44:25.036139 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Feb 12 19:44:25.039178 systemd[1]: Started logrotate.timer. Feb 12 19:44:25.041123 systemd[1]: Started mdadm.timer. Feb 12 19:44:25.042824 systemd[1]: Started systemd-tmpfiles-clean.timer. Feb 12 19:44:25.044980 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 12 19:44:25.045004 systemd[1]: Reached target paths.target. Feb 12 19:44:25.047060 systemd[1]: Reached target timers.target. Feb 12 19:44:25.049370 systemd[1]: Listening on dbus.socket. Feb 12 19:44:25.052310 systemd[1]: Starting docker.socket... Feb 12 19:44:25.060027 systemd[1]: Listening on sshd.socket. Feb 12 19:44:25.062184 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). 
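The fsck.fat report above for the EFI system partition ("/dev/sda1: 789 files, 115339/258078 clusters") gives used over total data clusters, so utilization is a one-line ratio; a minimal check of that arithmetic:

```python
# Figures copied from the systemd-fsck output above.
used_clusters, total_clusters = 115339, 258078

print(f"{used_clusters / total_clusters:.1%} of clusters in use")
# 44.7% of clusters in use
```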
Feb 12 19:44:25.062638 systemd[1]: Listening on docker.socket. Feb 12 19:44:25.065638 systemd[1]: Reached target sockets.target. Feb 12 19:44:25.067708 systemd[1]: Reached target basic.target. Feb 12 19:44:25.071034 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 12 19:44:25.071064 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 12 19:44:25.072133 systemd[1]: Starting containerd.service... Feb 12 19:44:25.075805 systemd[1]: Starting dbus.service... Feb 12 19:44:25.078475 systemd[1]: Starting enable-oem-cloudinit.service... Feb 12 19:44:25.081676 systemd[1]: Starting extend-filesystems.service... Feb 12 19:44:25.084150 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Feb 12 19:44:25.085537 systemd[1]: Starting motdgen.service... Feb 12 19:44:25.089430 systemd[1]: Started nvidia.service. Feb 12 19:44:25.093451 systemd[1]: Starting prepare-cni-plugins.service... Feb 12 19:44:25.098541 systemd[1]: Starting prepare-critools.service... Feb 12 19:44:25.103210 systemd[1]: Starting prepare-helm.service... Feb 12 19:44:25.105519 jq[1283]: false Feb 12 19:44:25.106921 systemd[1]: Starting ssh-key-proc-cmdline.service... Feb 12 19:44:25.113365 systemd[1]: Starting sshd-keygen.service... Feb 12 19:44:25.119109 systemd[1]: Starting systemd-logind.service... Feb 12 19:44:25.123790 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 12 19:44:25.123887 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 12 19:44:25.124396 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 12 19:44:25.125249 systemd[1]: Starting update-engine.service... Feb 12 19:44:25.128622 systemd[1]: Starting update-ssh-keys-after-ignition.service... Feb 12 19:44:25.132002 extend-filesystems[1284]: Found sda Feb 12 19:44:25.135640 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 12 19:44:25.135862 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Feb 12 19:44:25.138190 extend-filesystems[1284]: Found sda1 Feb 12 19:44:25.140152 extend-filesystems[1284]: Found sda2 Feb 12 19:44:25.140152 extend-filesystems[1284]: Found sda3 Feb 12 19:44:25.140152 extend-filesystems[1284]: Found usr Feb 12 19:44:25.140152 extend-filesystems[1284]: Found sda4 Feb 12 19:44:25.140152 extend-filesystems[1284]: Found sda6 Feb 12 19:44:25.140152 extend-filesystems[1284]: Found sda7 Feb 12 19:44:25.140152 extend-filesystems[1284]: Found sda9 Feb 12 19:44:25.140152 extend-filesystems[1284]: Checking size of /dev/sda9 Feb 12 19:44:25.148357 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 12 19:44:25.280591 extend-filesystems[1284]: Old size kept for /dev/sda9 Feb 12 19:44:25.280591 extend-filesystems[1284]: Found sr0 Feb 12 19:44:25.169707 dbus-daemon[1282]: [system] SELinux support is enabled Feb 12 19:44:25.148582 systemd[1]: Finished ssh-key-proc-cmdline.service. Feb 12 19:44:25.297780 jq[1305]: true Feb 12 19:44:25.169870 systemd[1]: Started dbus.service. 
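extend-filesystems logs one "Found <node>" entry per block device node it inspects before concluding "Old size kept for /dev/sda9". A sketch that folds those entries into a device inventory, using a few entries abridged from the list above:

```python
import re

# Abridged sample of the extend-filesystems entries logged above.
entries = [
    "extend-filesystems[1284]: Found sda",
    "extend-filesystems[1284]: Found sda1",
    "extend-filesystems[1284]: Found sda9",
    "extend-filesystems[1284]: Found sr0",
]

# Collect every reported node, then filter to numbered sda partitions.
found = [m.group(1) for e in entries if (m := re.search(r"Found (\S+)$", e))]
partitions = [d for d in found if re.fullmatch(r"sda\d+", d)]

print(found)       # ['sda', 'sda1', 'sda9', 'sr0']
print(partitions)  # ['sda1', 'sda9']
```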
Feb 12 19:44:25.309208 tar[1307]: ./ Feb 12 19:44:25.309208 tar[1307]: ./loopback Feb 12 19:44:25.178409 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 12 19:44:25.310534 tar[1310]: crictl Feb 12 19:44:25.178444 systemd[1]: Reached target system-config.target. Feb 12 19:44:25.310922 tar[1312]: linux-amd64/helm Feb 12 19:44:25.181162 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 12 19:44:25.181194 systemd[1]: Reached target user-config.target. Feb 12 19:44:25.311437 jq[1316]: true Feb 12 19:44:25.185543 systemd[1]: motdgen.service: Deactivated successfully. Feb 12 19:44:25.185756 systemd[1]: Finished motdgen.service. Feb 12 19:44:25.228670 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 12 19:44:25.228882 systemd[1]: Finished extend-filesystems.service. Feb 12 19:44:25.332629 bash[1344]: Updated "/home/core/.ssh/authorized_keys" Feb 12 19:44:25.332989 systemd[1]: Finished update-ssh-keys-after-ignition.service. Feb 12 19:44:25.350849 tar[1307]: ./bandwidth Feb 12 19:44:25.376917 env[1314]: time="2024-02-12T19:44:25.376870238Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Feb 12 19:44:25.394671 systemd[1]: nvidia.service: Deactivated successfully. Feb 12 19:44:25.481402 update_engine[1301]: I0212 19:44:25.480554 1301 main.cc:92] Flatcar Update Engine starting Feb 12 19:44:25.482606 systemd-logind[1297]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Feb 12 19:44:25.490912 systemd-logind[1297]: New seat seat0. Feb 12 19:44:25.493286 tar[1307]: ./ptp Feb 12 19:44:25.494259 systemd[1]: Started systemd-logind.service. Feb 12 19:44:25.506853 update_engine[1301]: I0212 19:44:25.495752 1301 update_check_scheduler.cc:74] Next update check in 7m34s Feb 12 19:44:25.497293 systemd[1]: Started update-engine.service. Feb 12 19:44:25.501967 systemd[1]: Started locksmithd.service. Feb 12 19:44:25.536862 env[1314]: time="2024-02-12T19:44:25.536773667Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 12 19:44:25.537371 env[1314]: time="2024-02-12T19:44:25.537339640Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 12 19:44:25.541820 env[1314]: time="2024-02-12T19:44:25.541781913Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.148-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 12 19:44:25.542726 env[1314]: time="2024-02-12T19:44:25.542684329Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 12 19:44:25.543110 env[1314]: time="2024-02-12T19:44:25.543081081Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 12 19:44:25.543234 env[1314]: time="2024-02-12T19:44:25.543215598Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
type=io.containerd.snapshotter.v1 Feb 12 19:44:25.543315 env[1314]: time="2024-02-12T19:44:25.543296708Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Feb 12 19:44:25.543381 env[1314]: time="2024-02-12T19:44:25.543367318Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 12 19:44:25.543553 env[1314]: time="2024-02-12T19:44:25.543531839Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 12 19:44:25.546474 env[1314]: time="2024-02-12T19:44:25.546450915Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 12 19:44:25.546769 env[1314]: time="2024-02-12T19:44:25.546744153Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 12 19:44:25.546858 env[1314]: time="2024-02-12T19:44:25.546843166Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 12 19:44:25.546982 env[1314]: time="2024-02-12T19:44:25.546964482Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Feb 12 19:44:25.547057 env[1314]: time="2024-02-12T19:44:25.547044592Z" level=info msg="metadata content store policy set" policy=shared Feb 12 19:44:25.562453 env[1314]: time="2024-02-12T19:44:25.562423576Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 12 19:44:25.562543 env[1314]: time="2024-02-12T19:44:25.562464881Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 12 19:44:25.562543 env[1314]: time="2024-02-12T19:44:25.562483584Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 12 19:44:25.562543 env[1314]: time="2024-02-12T19:44:25.562533790Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 12 19:44:25.562659 env[1314]: time="2024-02-12T19:44:25.562553793Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 12 19:44:25.562659 env[1314]: time="2024-02-12T19:44:25.562572695Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 12 19:44:25.562659 env[1314]: time="2024-02-12T19:44:25.562636103Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 12 19:44:25.562794 env[1314]: time="2024-02-12T19:44:25.562656306Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 12 19:44:25.562794 env[1314]: time="2024-02-12T19:44:25.562675509Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Feb 12 19:44:25.562794 env[1314]: time="2024-02-12T19:44:25.562718314Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 12 19:44:25.562794 env[1314]: time="2024-02-12T19:44:25.562738417Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." 
type=io.containerd.service.v1 Feb 12 19:44:25.562794 env[1314]: time="2024-02-12T19:44:25.562756019Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 12 19:44:25.562966 env[1314]: time="2024-02-12T19:44:25.562865833Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 12 19:44:25.562966 env[1314]: time="2024-02-12T19:44:25.562958445Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 12 19:44:25.563504 env[1314]: time="2024-02-12T19:44:25.563378399Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 12 19:44:25.563504 env[1314]: time="2024-02-12T19:44:25.563418704Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 12 19:44:25.563504 env[1314]: time="2024-02-12T19:44:25.563439107Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 12 19:44:25.563653 env[1314]: time="2024-02-12T19:44:25.563633432Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 12 19:44:25.563745 env[1314]: time="2024-02-12T19:44:25.563658935Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 12 19:44:25.563799 env[1314]: time="2024-02-12T19:44:25.563757648Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 12 19:44:25.563799 env[1314]: time="2024-02-12T19:44:25.563780151Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 12 19:44:25.563877 env[1314]: time="2024-02-12T19:44:25.563798553Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 12 19:44:25.563877 env[1314]: time="2024-02-12T19:44:25.563817156Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 12 19:44:25.563877 env[1314]: time="2024-02-12T19:44:25.563834558Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 12 19:44:25.563877 env[1314]: time="2024-02-12T19:44:25.563855061Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 12 19:44:25.564023 env[1314]: time="2024-02-12T19:44:25.563876163Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 12 19:44:25.564068 env[1314]: time="2024-02-12T19:44:25.564023582Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 12 19:44:25.564068 env[1314]: time="2024-02-12T19:44:25.564044385Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 12 19:44:25.564139 env[1314]: time="2024-02-12T19:44:25.564065088Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 12 19:44:25.564139 env[1314]: time="2024-02-12T19:44:25.564082190Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 12 19:44:25.564139 env[1314]: time="2024-02-12T19:44:25.564103093Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Feb 12 19:44:25.564139 env[1314]: time="2024-02-12T19:44:25.564117995Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 12 19:44:25.564275 env[1314]: time="2024-02-12T19:44:25.564143998Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Feb 12 19:44:25.564275 env[1314]: time="2024-02-12T19:44:25.564186703Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Feb 12 19:44:25.564521 env[1314]: time="2024-02-12T19:44:25.564449337Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 12 19:44:25.576940 env[1314]: time="2024-02-12T19:44:25.564536949Z" level=info msg="Connect containerd service" Feb 12 19:44:25.576940 env[1314]: time="2024-02-12T19:44:25.564580854Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 12 19:44:25.576940 env[1314]: time="2024-02-12T19:44:25.565282845Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 12 19:44:25.576940 env[1314]: time="2024-02-12T19:44:25.565569382Z" level=info msg=serving... 
address=/run/containerd/containerd.sock.ttrpc Feb 12 19:44:25.576940 env[1314]: time="2024-02-12T19:44:25.565616588Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 12 19:44:25.576940 env[1314]: time="2024-02-12T19:44:25.565669095Z" level=info msg="containerd successfully booted in 0.216095s" Feb 12 19:44:25.565772 systemd[1]: Started containerd.service. Feb 12 19:44:25.615606 env[1314]: time="2024-02-12T19:44:25.586660903Z" level=info msg="Start subscribing containerd event" Feb 12 19:44:25.624395 env[1314]: time="2024-02-12T19:44:25.624361967Z" level=info msg="Start recovering state" Feb 12 19:44:25.624612 env[1314]: time="2024-02-12T19:44:25.624595497Z" level=info msg="Start event monitor" Feb 12 19:44:25.624719 env[1314]: time="2024-02-12T19:44:25.624705811Z" level=info msg="Start snapshots syncer" Feb 12 19:44:25.624851 env[1314]: time="2024-02-12T19:44:25.624836228Z" level=info msg="Start cni network conf syncer for default" Feb 12 19:44:25.624978 env[1314]: time="2024-02-12T19:44:25.624965945Z" level=info msg="Start streaming server" Feb 12 19:44:25.646024 tar[1307]: ./vlan Feb 12 19:44:25.731831 tar[1307]: ./host-device Feb 12 19:44:25.817642 tar[1307]: ./tuning Feb 12 19:44:25.888676 tar[1307]: ./vrf Feb 12 19:44:25.970680 tar[1307]: ./sbr Feb 12 19:44:26.050848 tar[1307]: ./tap Feb 12 19:44:26.141767 tar[1307]: ./dhcp Feb 12 19:44:26.367073 tar[1312]: linux-amd64/LICENSE Feb 12 19:44:26.367561 tar[1312]: linux-amd64/README.md Feb 12 19:44:26.374901 tar[1307]: ./static Feb 12 19:44:26.379680 systemd[1]: Finished prepare-helm.service. Feb 12 19:44:26.411574 tar[1307]: ./firewall Feb 12 19:44:26.461133 tar[1307]: ./macvlan Feb 12 19:44:26.475543 systemd[1]: Finished prepare-critools.service. Feb 12 19:44:26.512826 tar[1307]: ./dummy Feb 12 19:44:26.558770 tar[1307]: ./bridge Feb 12 19:44:26.606889 tar[1307]: ./ipvlan Feb 12 19:44:26.651294 tar[1307]: ./portmap Feb 12 19:44:26.693325 tar[1307]: ./host-local Feb 12 19:44:26.754634 systemd[1]: Finished prepare-cni-plugins.service. Feb 12 19:44:27.086271 locksmithd[1375]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 12 19:44:27.994151 sshd_keygen[1318]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 12 19:44:28.014070 systemd[1]: Finished sshd-keygen.service. Feb 12 19:44:28.018230 systemd[1]: Starting issuegen.service... Feb 12 19:44:28.021883 systemd[1]: Started waagent.service. Feb 12 19:44:28.025504 systemd[1]: issuegen.service: Deactivated successfully. Feb 12 19:44:28.026032 systemd[1]: Finished issuegen.service. Feb 12 19:44:28.029445 systemd[1]: Starting systemd-user-sessions.service... Feb 12 19:44:28.041072 systemd[1]: Finished systemd-user-sessions.service. Feb 12 19:44:28.049824 systemd[1]: Started getty@tty1.service. Feb 12 19:44:28.055761 systemd[1]: Started serial-getty@ttyS0.service. Feb 12 19:44:28.058336 systemd[1]: Reached target getty.target. Feb 12 19:44:28.060453 systemd[1]: Reached target multi-user.target. Feb 12 19:44:28.074072 systemd[1]: Starting systemd-update-utmp-runlevel.service... Feb 12 19:44:28.080827 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Feb 12 19:44:28.081003 systemd[1]: Finished systemd-update-utmp-runlevel.service. Feb 12 19:44:28.086331 systemd[1]: Startup finished in 688ms (firmware) + 7.493s (loader) + 939ms (kernel) + 11.058s (initrd) + 12.506s (userspace) = 32.686s. 
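The closing "Startup finished" line above breaks the 32.686s boot into five phases. Re-adding the rounded per-phase figures lands within a couple of milliseconds of the printed total, since systemd sums unrounded timestamps before rounding for display; a sketch that parses and re-totals the line:

```python
import re

# Startup summary line as logged above.
line = ("Startup finished in 688ms (firmware) + 7.493s (loader) + "
        "939ms (kernel) + 11.058s (initrd) + 12.506s (userspace) = 32.686s")

def to_seconds(value: str, unit: str) -> float:
    return float(value) / 1000 if unit == "ms" else float(value)

phases = re.findall(r"([\d.]+)(ms|s) \((\w+)\)", line)
total = sum(to_seconds(v, u) for v, u, _ in phases)

for value, unit, phase in phases:
    print(f"{phase:>10}: {to_seconds(value, unit):7.3f}s")
print(f"{'sum':>10}: {total:7.3f}s (log reports 32.686s)")
# sum is 32.684s; the 2ms gap is rounding in the displayed components.
```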
Feb 12 19:44:28.197780 login[1402]: pam_lastlog(login:session): file /var/log/lastlog is locked/write Feb 12 19:44:28.203275 login[1401]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Feb 12 19:44:28.218319 systemd[1]: Created slice user-500.slice. Feb 12 19:44:28.219651 systemd[1]: Starting user-runtime-dir@500.service... Feb 12 19:44:28.225018 systemd-logind[1297]: New session 2 of user core. Feb 12 19:44:28.230218 systemd[1]: Finished user-runtime-dir@500.service. Feb 12 19:44:28.232038 systemd[1]: Starting user@500.service... Feb 12 19:44:28.238797 (systemd)[1405]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:44:28.344454 systemd[1405]: Queued start job for default target default.target. Feb 12 19:44:28.345051 systemd[1405]: Reached target paths.target. Feb 12 19:44:28.345078 systemd[1405]: Reached target sockets.target. Feb 12 19:44:28.345096 systemd[1405]: Reached target timers.target. Feb 12 19:44:28.345110 systemd[1405]: Reached target basic.target. Feb 12 19:44:28.345160 systemd[1405]: Reached target default.target. Feb 12 19:44:28.345197 systemd[1405]: Startup finished in 100ms. Feb 12 19:44:28.345235 systemd[1]: Started user@500.service. Feb 12 19:44:28.346526 systemd[1]: Started session-2.scope. Feb 12 19:44:29.199427 login[1402]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Feb 12 19:44:29.204337 systemd-logind[1297]: New session 1 of user core. Feb 12 19:44:29.204899 systemd[1]: Started session-1.scope. Feb 12 19:44:30.190062 waagent[1396]: 2024-02-12T19:44:30.189950Z INFO Daemon Daemon Azure Linux Agent Version:2.6.0.2 Feb 12 19:44:30.203771 waagent[1396]: 2024-02-12T19:44:30.191331Z INFO Daemon Daemon OS: flatcar 3510.3.2 Feb 12 19:44:30.203771 waagent[1396]: 2024-02-12T19:44:30.192455Z INFO Daemon Daemon Python: 3.9.16 Feb 12 19:44:30.203771 waagent[1396]: 2024-02-12T19:44:30.193711Z INFO Daemon Daemon Run daemon Feb 12 19:44:30.203771 waagent[1396]: 2024-02-12T19:44:30.194634Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='3510.3.2' Feb 12 19:44:30.208475 waagent[1396]: 2024-02-12T19:44:30.208356Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1. 
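The "Unable to get cloud-init enabled status" lines above show how waagent probes for cloud-init before falling back to its own provisioning. A rough sketch of that probe, assuming only stock `systemctl is-enabled` behavior (non-zero exit for disabled or absent units, which waagent then logs as "cloud-init is enabled: False"):

    import subprocess

    # Mirror of the probe logged above: `systemctl is-enabled` exits
    # non-zero when the unit is disabled or does not exist.
    def cloud_init_enabled() -> bool:
        result = subprocess.run(
            ["systemctl", "is-enabled", "cloud-init-local.service"],
            capture_output=True, text=True,
        )
        return result.returncode == 0 and result.stdout.strip() == "enabled"

    print(cloud_init_enabled())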
Feb 12 19:44:30.215851 waagent[1396]: 2024-02-12T19:44:30.215742Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Feb 12 19:44:30.220911 waagent[1396]: 2024-02-12T19:44:30.220847Z INFO Daemon Daemon cloud-init is enabled: False Feb 12 19:44:30.223755 waagent[1396]: 2024-02-12T19:44:30.223679Z INFO Daemon Daemon Using waagent for provisioning Feb 12 19:44:30.227233 waagent[1396]: 2024-02-12T19:44:30.227172Z INFO Daemon Daemon Activate resource disk Feb 12 19:44:30.229938 waagent[1396]: 2024-02-12T19:44:30.229877Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Feb 12 19:44:30.240465 waagent[1396]: 2024-02-12T19:44:30.240398Z INFO Daemon Daemon Found device: None Feb 12 19:44:30.243179 waagent[1396]: 2024-02-12T19:44:30.243118Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Feb 12 19:44:30.247795 waagent[1396]: 2024-02-12T19:44:30.247739Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Feb 12 19:44:30.254626 waagent[1396]: 2024-02-12T19:44:30.254561Z INFO Daemon Daemon Clean protocol and wireserver endpoint Feb 12 19:44:30.259966 waagent[1396]: 2024-02-12T19:44:30.254949Z INFO Daemon Daemon Running default provisioning handler Feb 12 19:44:30.263940 waagent[1396]: 2024-02-12T19:44:30.263815Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1. Feb 12 19:44:30.271212 waagent[1396]: 2024-02-12T19:44:30.271107Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Feb 12 19:44:30.276392 waagent[1396]: 2024-02-12T19:44:30.276329Z INFO Daemon Daemon cloud-init is enabled: False Feb 12 19:44:30.280865 waagent[1396]: 2024-02-12T19:44:30.280802Z INFO Daemon Daemon Copying ovf-env.xml Feb 12 19:44:30.320192 waagent[1396]: 2024-02-12T19:44:30.320016Z INFO Daemon Daemon Successfully mounted dvd Feb 12 19:44:30.354161 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Feb 12 19:44:30.363653 waagent[1396]: 2024-02-12T19:44:30.363535Z INFO Daemon Daemon Detect protocol endpoint Feb 12 19:44:30.367118 waagent[1396]: 2024-02-12T19:44:30.367048Z INFO Daemon Daemon Clean protocol and wireserver endpoint Feb 12 19:44:30.370822 waagent[1396]: 2024-02-12T19:44:30.370758Z INFO Daemon Daemon WireServer endpoint is not found. 
Rerun dhcp handler Feb 12 19:44:30.374643 waagent[1396]: 2024-02-12T19:44:30.374579Z INFO Daemon Daemon Test for route to 168.63.129.16 Feb 12 19:44:30.377792 waagent[1396]: 2024-02-12T19:44:30.377732Z INFO Daemon Daemon Route to 168.63.129.16 exists Feb 12 19:44:30.380725 waagent[1396]: 2024-02-12T19:44:30.380653Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Feb 12 19:44:30.419181 waagent[1396]: 2024-02-12T19:44:30.419110Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Feb 12 19:44:30.427558 waagent[1396]: 2024-02-12T19:44:30.419931Z INFO Daemon Daemon Wire protocol version:2012-11-30 Feb 12 19:44:30.427558 waagent[1396]: 2024-02-12T19:44:30.421015Z INFO Daemon Daemon Server preferred version:2015-04-05 Feb 12 19:44:30.599409 waagent[1396]: 2024-02-12T19:44:30.599180Z INFO Daemon Daemon Initializing goal state during protocol detection Feb 12 19:44:30.611511 waagent[1396]: 2024-02-12T19:44:30.611424Z INFO Daemon Daemon Forcing an update of the goal state.. Feb 12 19:44:30.615952 waagent[1396]: 2024-02-12T19:44:30.615881Z INFO Daemon Daemon Fetching goal state [incarnation 1] Feb 12 19:44:31.631761 waagent[1396]: 2024-02-12T19:44:31.631612Z INFO Daemon Daemon Found private key matching thumbprint 566BDB128F524E8D13FC95AF1BFB45860B6BFDDC Feb 12 19:44:31.643028 waagent[1396]: 2024-02-12T19:44:31.632145Z INFO Daemon Daemon Certificate with thumbprint 19E55936F66EA01E1C66184DC22C00C6894A2E23 has no matching private key. Feb 12 19:44:31.643028 waagent[1396]: 2024-02-12T19:44:31.633792Z INFO Daemon Daemon Fetch goal state completed Feb 12 19:44:31.662920 waagent[1396]: 2024-02-12T19:44:31.662848Z INFO Daemon Daemon Fetched new vmSettings [correlation ID: 42b76f42-4d32-4218-ac76-874779e3d776 New eTag: 13630789155509660168] Feb 12 19:44:31.671250 waagent[1396]: 2024-02-12T19:44:31.663658Z INFO Daemon Daemon Status Blob type 'None' is not valid, assuming BlockBlob Feb 12 19:44:31.673146 waagent[1396]: 2024-02-12T19:44:31.673089Z INFO Daemon Daemon Starting provisioning Feb 12 19:44:31.680089 waagent[1396]: 2024-02-12T19:44:31.673389Z INFO Daemon Daemon Handle ovf-env.xml. Feb 12 19:44:31.680089 waagent[1396]: 2024-02-12T19:44:31.674353Z INFO Daemon Daemon Set hostname [ci-3510.3.2-a-e615f4b643] Feb 12 19:44:31.719114 waagent[1396]: 2024-02-12T19:44:31.718969Z INFO Daemon Daemon Publish hostname [ci-3510.3.2-a-e615f4b643] Feb 12 19:44:31.727734 waagent[1396]: 2024-02-12T19:44:31.719886Z INFO Daemon Daemon Examine /proc/net/route for primary interface Feb 12 19:44:31.727734 waagent[1396]: 2024-02-12T19:44:31.720592Z INFO Daemon Daemon Primary interface is [eth0] Feb 12 19:44:31.734534 systemd[1]: systemd-networkd-wait-online.service: Deactivated successfully. Feb 12 19:44:31.734803 systemd[1]: Stopped systemd-networkd-wait-online.service. Feb 12 19:44:31.734863 systemd[1]: Stopping systemd-networkd-wait-online.service... Feb 12 19:44:31.735173 systemd[1]: Stopping systemd-networkd.service... Feb 12 19:44:31.738736 systemd-networkd[1168]: eth0: DHCPv6 lease lost Feb 12 19:44:31.740082 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 12 19:44:31.740242 systemd[1]: Stopped systemd-networkd.service. Feb 12 19:44:31.742546 systemd[1]: Starting systemd-networkd.service... 
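The "Test for route to 168.63.129.16" / "Route to 168.63.129.16 exists" pair above is a check against the kernel routing table. A sketch of one way to perform it, assuming the standard /proc/net/route layout (Destination in column 2 and Mask in column 8, both little-endian hex):

    import socket, struct

    WIRESERVER = "168.63.129.16"  # Azure wireserver, from the log above

    # Read /proc/net/route and test whether any entry's network covers
    # the target address. Columns are little-endian hex on x86.
    def route_exists(ip: str) -> bool:
        target = struct.unpack("<I", socket.inet_aton(ip))[0]
        with open("/proc/net/route") as f:
            next(f)  # skip the header line
            for line in f:
                fields = line.split()
                dest, mask = int(fields[1], 16), int(fields[7], 16)
                if target & mask == dest:
                    return True
        return False

    print(route_exists(WIRESERVER))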
Feb 12 19:44:31.773942 systemd-networkd[1448]: enP7782s1: Link UP Feb 12 19:44:31.773952 systemd-networkd[1448]: enP7782s1: Gained carrier Feb 12 19:44:31.775262 systemd-networkd[1448]: eth0: Link UP Feb 12 19:44:31.775270 systemd-networkd[1448]: eth0: Gained carrier Feb 12 19:44:31.775712 systemd-networkd[1448]: lo: Link UP Feb 12 19:44:31.775721 systemd-networkd[1448]: lo: Gained carrier Feb 12 19:44:31.776040 systemd-networkd[1448]: eth0: Gained IPv6LL Feb 12 19:44:31.776311 systemd-networkd[1448]: Enumeration completed Feb 12 19:44:31.780569 waagent[1396]: 2024-02-12T19:44:31.777659Z INFO Daemon Daemon Create user account if not exists Feb 12 19:44:31.780569 waagent[1396]: 2024-02-12T19:44:31.778307Z INFO Daemon Daemon User core already exists, skip useradd Feb 12 19:44:31.780569 waagent[1396]: 2024-02-12T19:44:31.779267Z INFO Daemon Daemon Configure sudoer Feb 12 19:44:31.776407 systemd[1]: Started systemd-networkd.service. Feb 12 19:44:31.785592 systemd-networkd[1448]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 12 19:44:31.786979 systemd[1]: Starting systemd-networkd-wait-online.service... Feb 12 19:44:31.797854 waagent[1396]: 2024-02-12T19:44:31.797767Z INFO Daemon Daemon Configure sshd Feb 12 19:44:31.802379 waagent[1396]: 2024-02-12T19:44:31.798108Z INFO Daemon Daemon Deploy ssh public key. Feb 12 19:44:31.818793 systemd-networkd[1448]: eth0: DHCPv4 address 10.200.8.24/24, gateway 10.200.8.1 acquired from 168.63.129.16 Feb 12 19:44:31.822563 systemd[1]: Finished systemd-networkd-wait-online.service. Feb 12 19:44:33.098845 waagent[1396]: 2024-02-12T19:44:33.098743Z INFO Daemon Daemon Provisioning complete Feb 12 19:44:33.114623 waagent[1396]: 2024-02-12T19:44:33.114545Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Feb 12 19:44:33.122029 waagent[1396]: 2024-02-12T19:44:33.115038Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Feb 12 19:44:33.122029 waagent[1396]: 2024-02-12T19:44:33.116779Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.6.0.2 is the most current agent Feb 12 19:44:33.384139 waagent[1457]: 2024-02-12T19:44:33.383971Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 is running as the goal state agent Feb 12 19:44:33.384860 waagent[1457]: 2024-02-12T19:44:33.384794Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 12 19:44:33.385016 waagent[1457]: 2024-02-12T19:44:33.384962Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 12 19:44:33.395758 waagent[1457]: 2024-02-12T19:44:33.395676Z INFO ExtHandler ExtHandler Forcing an update of the goal state.. Feb 12 19:44:33.395925 waagent[1457]: 2024-02-12T19:44:33.395871Z INFO ExtHandler ExtHandler Fetching goal state [incarnation 1] Feb 12 19:44:33.454804 waagent[1457]: 2024-02-12T19:44:33.454647Z INFO ExtHandler ExtHandler Found private key matching thumbprint 566BDB128F524E8D13FC95AF1BFB45860B6BFDDC Feb 12 19:44:33.455035 waagent[1457]: 2024-02-12T19:44:33.454974Z INFO ExtHandler ExtHandler Certificate with thumbprint 19E55936F66EA01E1C66184DC22C00C6894A2E23 has no matching private key. 
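The certificate "thumbprints" logged above (566BDB12..., 19E55936...) are uppercase hex SHA-1 digests of each certificate's DER encoding. A stdlib-only sketch; the file path is illustrative:

    import hashlib, ssl

    # Compute an Azure-style thumbprint: SHA-1 over the DER bytes of a
    # PEM certificate, rendered as uppercase hex. Path is a placeholder.
    with open("/var/lib/waagent/example.crt") as f:
        der = ssl.PEM_cert_to_DER_cert(f.read())
    print(hashlib.sha1(der).hexdigest().upper())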
Feb 12 19:44:33.455273 waagent[1457]: 2024-02-12T19:44:33.455222Z INFO ExtHandler ExtHandler Fetch goal state completed Feb 12 19:44:33.468683 waagent[1457]: 2024-02-12T19:44:33.468620Z INFO ExtHandler ExtHandler Fetched new vmSettings [correlation ID: 1ead68ab-8ac5-48b6-aeb7-7ff7e9c64237 New eTag: 13630789155509660168] Feb 12 19:44:33.469368 waagent[1457]: 2024-02-12T19:44:33.469304Z INFO ExtHandler ExtHandler Status Blob type 'None' is not valid, assuming BlockBlob Feb 12 19:44:33.523352 waagent[1457]: 2024-02-12T19:44:33.523235Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.2; OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Feb 12 19:44:33.540604 waagent[1457]: 2024-02-12T19:44:33.532663Z INFO ExtHandler ExtHandler WALinuxAgent-2.6.0.2 running as process 1457 Feb 12 19:44:33.540604 waagent[1457]: 2024-02-12T19:44:33.537069Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '3510.3.2', '', 'Flatcar Container Linux by Kinvolk'] Feb 12 19:44:33.540604 waagent[1457]: 2024-02-12T19:44:33.538597Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Feb 12 19:44:33.566951 waagent[1457]: 2024-02-12T19:44:33.566891Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Feb 12 19:44:33.567328 waagent[1457]: 2024-02-12T19:44:33.567265Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Feb 12 19:44:33.575242 waagent[1457]: 2024-02-12T19:44:33.575198Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Feb 12 19:44:33.575758 waagent[1457]: 2024-02-12T19:44:33.575709Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service' Feb 12 19:44:33.576835 waagent[1457]: 2024-02-12T19:44:33.576780Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [False], cgroups enabled [False], python supported: [True] Feb 12 19:44:33.578130 waagent[1457]: 2024-02-12T19:44:33.578070Z INFO ExtHandler ExtHandler Starting env monitor service. Feb 12 19:44:33.578460 waagent[1457]: 2024-02-12T19:44:33.578387Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 12 19:44:33.579244 waagent[1457]: 2024-02-12T19:44:33.579187Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Feb 12 19:44:33.579354 waagent[1457]: 2024-02-12T19:44:33.579296Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 12 19:44:33.579647 waagent[1457]: 2024-02-12T19:44:33.579596Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 12 19:44:33.579756 waagent[1457]: 2024-02-12T19:44:33.579679Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 12 19:44:33.580582 waagent[1457]: 2024-02-12T19:44:33.580528Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. 
Feb 12 19:44:33.580961 waagent[1457]: 2024-02-12T19:44:33.580907Z INFO EnvHandler ExtHandler Configure routes Feb 12 19:44:33.581457 waagent[1457]: 2024-02-12T19:44:33.581402Z INFO EnvHandler ExtHandler Gateway:None Feb 12 19:44:33.581649 waagent[1457]: 2024-02-12T19:44:33.581596Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Feb 12 19:44:33.581649 waagent[1457]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Feb 12 19:44:33.581649 waagent[1457]: eth0 00000000 0108C80A 0003 0 0 1024 00000000 0 0 0 Feb 12 19:44:33.581649 waagent[1457]: eth0 0008C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Feb 12 19:44:33.581649 waagent[1457]: eth0 0108C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Feb 12 19:44:33.581649 waagent[1457]: eth0 10813FA8 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Feb 12 19:44:33.581649 waagent[1457]: eth0 FEA9FEA9 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Feb 12 19:44:33.581954 waagent[1457]: 2024-02-12T19:44:33.581785Z INFO EnvHandler ExtHandler Routes:None Feb 12 19:44:33.582636 waagent[1457]: 2024-02-12T19:44:33.582574Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Feb 12 19:44:33.584778 waagent[1457]: 2024-02-12T19:44:33.584513Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Feb 12 19:44:33.586340 waagent[1457]: 2024-02-12T19:44:33.586264Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Feb 12 19:44:33.586523 waagent[1457]: 2024-02-12T19:44:33.586456Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Feb 12 19:44:33.586843 waagent[1457]: 2024-02-12T19:44:33.586787Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Feb 12 19:44:33.597393 waagent[1457]: 2024-02-12T19:44:33.597343Z INFO ExtHandler ExtHandler Checking for agent updates (family: Prod) Feb 12 19:44:33.598075 waagent[1457]: 2024-02-12T19:44:33.598035Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required Feb 12 19:44:33.598935 waagent[1457]: 2024-02-12T19:44:33.598887Z INFO ExtHandler ExtHandler [PERIODIC] Request failed using the direct channel. 
Error: 'NoneType' object has no attribute 'getheaders' Feb 12 19:44:33.614329 waagent[1457]: 2024-02-12T19:44:33.614261Z ERROR EnvHandler ExtHandler Failed to get the PID of the DHCP client: invalid literal for int() with base 10: 'MainPID=1448' Feb 12 19:44:33.634508 waagent[1457]: 2024-02-12T19:44:33.633095Z INFO MonitorHandler ExtHandler Network interfaces: Feb 12 19:44:33.634508 waagent[1457]: Executing ['ip', '-a', '-o', 'link']: Feb 12 19:44:33.634508 waagent[1457]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Feb 12 19:44:33.634508 waagent[1457]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:b3:ca:a7 brd ff:ff:ff:ff:ff:ff Feb 12 19:44:33.634508 waagent[1457]: 3: enP7782s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:b3:ca:a7 brd ff:ff:ff:ff:ff:ff\ altname enP7782p0s2 Feb 12 19:44:33.634508 waagent[1457]: Executing ['ip', '-4', '-a', '-o', 'address']: Feb 12 19:44:33.634508 waagent[1457]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Feb 12 19:44:33.634508 waagent[1457]: 2: eth0 inet 10.200.8.24/24 metric 1024 brd 10.200.8.255 scope global eth0\ valid_lft forever preferred_lft forever Feb 12 19:44:33.634508 waagent[1457]: Executing ['ip', '-6', '-a', '-o', 'address']: Feb 12 19:44:33.634508 waagent[1457]: 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever Feb 12 19:44:33.634508 waagent[1457]: 2: eth0 inet6 fe80::20d:3aff:feb3:caa7/64 scope link \ valid_lft forever preferred_lft forever Feb 12 19:44:33.641917 waagent[1457]: 2024-02-12T19:44:33.641853Z INFO ExtHandler ExtHandler Default channel changed to HostGA channel. 
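The ERROR above ("invalid literal for int() with base 10: 'MainPID=1448'") is a literal Python int() failure: `systemctl show -p MainPID <unit>` prints the key as well as the value, so the string must be split before conversion. Note that 1448 matches the systemd-networkd[1448] process seen earlier, i.e. the DHCP client here is systemd-networkd. A sketch of the corrected parse; the unit name is inferred from the log:

    import subprocess

    # `systemctl show -p MainPID <unit>` prints "MainPID=1448", so strip
    # the key before int(); int("MainPID=1448") raises ValueError.
    out = subprocess.run(
        ["systemctl", "show", "-p", "MainPID", "systemd-networkd.service"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    pid = int(out.split("=", 1)[1])
    print(pid)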
Feb 12 19:44:33.796033 waagent[1457]: 2024-02-12T19:44:33.795939Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules Feb 12 19:44:33.799382 waagent[1457]: 2024-02-12T19:44:33.799275Z INFO EnvHandler ExtHandler Firewall rules: Feb 12 19:44:33.799382 waagent[1457]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Feb 12 19:44:33.799382 waagent[1457]: pkts bytes target prot opt in out source destination Feb 12 19:44:33.799382 waagent[1457]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Feb 12 19:44:33.799382 waagent[1457]: pkts bytes target prot opt in out source destination Feb 12 19:44:33.799382 waagent[1457]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Feb 12 19:44:33.799382 waagent[1457]: pkts bytes target prot opt in out source destination Feb 12 19:44:33.799382 waagent[1457]: 13 676 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Feb 12 19:44:33.799382 waagent[1457]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Feb 12 19:44:33.800754 waagent[1457]: 2024-02-12T19:44:33.800669Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Feb 12 19:44:33.920457 waagent[1457]: 2024-02-12T19:44:33.920330Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 discovered update WALinuxAgent-2.9.1.1 -- exiting Feb 12 19:44:34.120679 waagent[1396]: 2024-02-12T19:44:34.120506Z INFO Daemon Daemon Agent WALinuxAgent-2.6.0.2 launched with command '/usr/share/oem/python/bin/python -u /usr/share/oem/bin/waagent -run-exthandlers' is successfully running Feb 12 19:44:34.126912 waagent[1396]: 2024-02-12T19:44:34.126847Z INFO Daemon Daemon Determined Agent WALinuxAgent-2.9.1.1 to be the latest agent Feb 12 19:44:35.102131 waagent[1499]: 2024-02-12T19:44:35.102017Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1) Feb 12 19:44:35.102841 waagent[1499]: 2024-02-12T19:44:35.102772Z INFO ExtHandler ExtHandler OS: flatcar 3510.3.2 Feb 12 19:44:35.102993 waagent[1499]: 2024-02-12T19:44:35.102937Z INFO ExtHandler ExtHandler Python: 3.9.16 Feb 12 19:44:35.112477 waagent[1499]: 2024-02-12T19:44:35.112375Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.2; OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Feb 12 19:44:35.112882 waagent[1499]: 2024-02-12T19:44:35.112824Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 12 19:44:35.113052 waagent[1499]: 2024-02-12T19:44:35.113001Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 12 19:44:35.124207 waagent[1499]: 2024-02-12T19:44:35.124131Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Feb 12 19:44:35.132137 waagent[1499]: 2024-02-12T19:44:35.132069Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.143 Feb 12 19:44:35.133098 waagent[1499]: 2024-02-12T19:44:35.133033Z INFO ExtHandler Feb 12 19:44:35.133248 waagent[1499]: 2024-02-12T19:44:35.133194Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 761febb7-a359-4ba4-ba7e-21a96987c169 eTag: 13630789155509660168 source: Fabric] Feb 12 19:44:35.133969 waagent[1499]: 2024-02-12T19:44:35.133907Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
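The OUTPUT-chain rules printed above allow root-owned TCP traffic to the wireserver and drop new connections to it from other users. waagent installs these internally, but one plausible way to recreate the two rules by hand looks like this sketch:

    import subprocess

    # Approximate equivalents of the two wireserver rules shown above:
    # ACCEPT for uid 0, DROP for INVALID/NEW state from everyone else.
    rules = [
        ["iptables", "-A", "OUTPUT", "-d", "168.63.129.16", "-p", "tcp",
         "-m", "owner", "--uid-owner", "0", "-j", "ACCEPT"],
        ["iptables", "-A", "OUTPUT", "-d", "168.63.129.16", "-p", "tcp",
         "-m", "conntrack", "--ctstate", "INVALID,NEW", "-j", "DROP"],
    ]
    for rule in rules:
        subprocess.run(rule, check=True)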
Feb 12 19:44:35.135069 waagent[1499]: 2024-02-12T19:44:35.135005Z INFO ExtHandler Feb 12 19:44:35.135205 waagent[1499]: 2024-02-12T19:44:35.135152Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Feb 12 19:44:35.141519 waagent[1499]: 2024-02-12T19:44:35.141466Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Feb 12 19:44:35.141971 waagent[1499]: 2024-02-12T19:44:35.141922Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required Feb 12 19:44:35.161994 waagent[1499]: 2024-02-12T19:44:35.161915Z INFO ExtHandler ExtHandler Default channel changed to HostGAPlugin channel. Feb 12 19:44:35.227460 waagent[1499]: 2024-02-12T19:44:35.227323Z INFO ExtHandler Downloaded certificate {'thumbprint': '566BDB128F524E8D13FC95AF1BFB45860B6BFDDC', 'hasPrivateKey': True} Feb 12 19:44:35.228454 waagent[1499]: 2024-02-12T19:44:35.228386Z INFO ExtHandler Downloaded certificate {'thumbprint': '19E55936F66EA01E1C66184DC22C00C6894A2E23', 'hasPrivateKey': False} Feb 12 19:44:35.229427 waagent[1499]: 2024-02-12T19:44:35.229365Z INFO ExtHandler Fetch goal state completed Feb 12 19:44:35.251254 waagent[1499]: 2024-02-12T19:44:35.251181Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1499 Feb 12 19:44:35.254456 waagent[1499]: 2024-02-12T19:44:35.254391Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '3510.3.2', '', 'Flatcar Container Linux by Kinvolk'] Feb 12 19:44:35.255886 waagent[1499]: 2024-02-12T19:44:35.255826Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Feb 12 19:44:35.260582 waagent[1499]: 2024-02-12T19:44:35.260526Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Feb 12 19:44:35.260974 waagent[1499]: 2024-02-12T19:44:35.260915Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Feb 12 19:44:35.268641 waagent[1499]: 2024-02-12T19:44:35.268586Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Feb 12 19:44:35.269102 waagent[1499]: 2024-02-12T19:44:35.269045Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service' Feb 12 19:44:35.281465 waagent[1499]: 2024-02-12T19:44:35.281368Z INFO ExtHandler ExtHandler Firewall rule to allow DNS TCP request to wireserver for a non root user unavailable. Setting it now. Feb 12 19:44:35.284180 waagent[1499]: 2024-02-12T19:44:35.284078Z INFO ExtHandler ExtHandler Successfully added firewall rule to allow non root users to do a DNS TCP request to wireserver Feb 12 19:44:35.288935 waagent[1499]: 2024-02-12T19:44:35.288872Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] Feb 12 19:44:35.290374 waagent[1499]: 2024-02-12T19:44:35.290314Z INFO ExtHandler ExtHandler Starting env monitor service. Feb 12 19:44:35.291266 waagent[1499]: 2024-02-12T19:44:35.291207Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service.
Feb 12 19:44:35.291524 waagent[1499]: 2024-02-12T19:44:35.291464Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 12 19:44:35.292106 waagent[1499]: 2024-02-12T19:44:35.292048Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 12 19:44:35.292669 waagent[1499]: 2024-02-12T19:44:35.292608Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Feb 12 19:44:35.292874 waagent[1499]: 2024-02-12T19:44:35.292817Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 12 19:44:35.293016 waagent[1499]: 2024-02-12T19:44:35.292959Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Feb 12 19:44:35.293444 waagent[1499]: 2024-02-12T19:44:35.293387Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Feb 12 19:44:35.293845 waagent[1499]: 2024-02-12T19:44:35.293789Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 12 19:44:35.293924 waagent[1499]: 2024-02-12T19:44:35.293859Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Feb 12 19:44:35.293924 waagent[1499]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Feb 12 19:44:35.293924 waagent[1499]: eth0 00000000 0108C80A 0003 0 0 1024 00000000 0 0 0 Feb 12 19:44:35.293924 waagent[1499]: eth0 0008C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Feb 12 19:44:35.293924 waagent[1499]: eth0 0108C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Feb 12 19:44:35.293924 waagent[1499]: eth0 10813FA8 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Feb 12 19:44:35.293924 waagent[1499]: eth0 FEA9FEA9 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Feb 12 19:44:35.297780 waagent[1499]: 2024-02-12T19:44:35.297538Z INFO EnvHandler ExtHandler Configure routes Feb 12 19:44:35.299183 waagent[1499]: 2024-02-12T19:44:35.299121Z INFO EnvHandler ExtHandler Gateway:None Feb 12 19:44:35.299371 waagent[1499]: 2024-02-12T19:44:35.299299Z INFO EnvHandler ExtHandler Routes:None Feb 12 19:44:35.300275 waagent[1499]: 2024-02-12T19:44:35.300204Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Feb 12 19:44:35.300402 waagent[1499]: 2024-02-12T19:44:35.300342Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. 
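The Destination/Gateway columns in the routing table dumps above are little-endian hex. A small decoder shows they correspond to the addresses seen elsewhere in this log:

    import socket, struct

    # Decode a /proc/net/route hex address (little-endian on x86).
    def decode(hexaddr: str) -> str:
        return socket.inet_ntoa(struct.pack("<I", int(hexaddr, 16)))

    print(decode("0108C80A"))  # 10.200.8.1, the DHCP gateway above
    print(decode("10813FA8"))  # 168.63.129.16, the Azure wireserver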
Feb 12 19:44:35.304391 waagent[1499]: 2024-02-12T19:44:35.304318Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Feb 12 19:44:35.323420 waagent[1499]: 2024-02-12T19:44:35.323358Z INFO ExtHandler ExtHandler No requested version specified, checking for all versions for agent update (family: Prod) Feb 12 19:44:35.323707 waagent[1499]: 2024-02-12T19:44:35.323635Z INFO ExtHandler ExtHandler Downloading manifest Feb 12 19:44:35.328892 waagent[1499]: 2024-02-12T19:44:35.328829Z INFO MonitorHandler ExtHandler Network interfaces: Feb 12 19:44:35.328892 waagent[1499]: Executing ['ip', '-a', '-o', 'link']: Feb 12 19:44:35.328892 waagent[1499]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Feb 12 19:44:35.328892 waagent[1499]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:b3:ca:a7 brd ff:ff:ff:ff:ff:ff Feb 12 19:44:35.328892 waagent[1499]: 3: enP7782s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:b3:ca:a7 brd ff:ff:ff:ff:ff:ff\ altname enP7782p0s2 Feb 12 19:44:35.328892 waagent[1499]: Executing ['ip', '-4', '-a', '-o', 'address']: Feb 12 19:44:35.328892 waagent[1499]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Feb 12 19:44:35.328892 waagent[1499]: 2: eth0 inet 10.200.8.24/24 metric 1024 brd 10.200.8.255 scope global eth0\ valid_lft forever preferred_lft forever Feb 12 19:44:35.328892 waagent[1499]: Executing ['ip', '-6', '-a', '-o', 'address']: Feb 12 19:44:35.328892 waagent[1499]: 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever Feb 12 19:44:35.328892 waagent[1499]: 2: eth0 inet6 fe80::20d:3aff:feb3:caa7/64 scope link \ valid_lft forever preferred_lft forever Feb 12 19:44:35.386436 waagent[1499]: 2024-02-12T19:44:35.386329Z INFO EnvHandler ExtHandler Current Firewall rules: Feb 12 19:44:35.386436 waagent[1499]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Feb 12 19:44:35.386436 waagent[1499]: pkts bytes target prot opt in out source destination Feb 12 19:44:35.386436 waagent[1499]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Feb 12 19:44:35.386436 waagent[1499]: pkts bytes target prot opt in out source destination Feb 12 19:44:35.386436 waagent[1499]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Feb 12 19:44:35.386436 waagent[1499]: pkts bytes target prot opt in out source destination Feb 12 19:44:35.386436 waagent[1499]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Feb 12 19:44:35.386436 waagent[1499]: 150 16190 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Feb 12 19:44:35.386436 waagent[1499]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Feb 12 19:44:35.395775 waagent[1499]: 2024-02-12T19:44:35.395717Z INFO ExtHandler ExtHandler Feb 12 19:44:35.396164 waagent[1499]: 2024-02-12T19:44:35.396105Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: bdde4899-e0d1-498e-99d1-8d6753dc33e2 correlation bd4259db-2cf8-4baf-9683-dea456d5ab3e created: 2024-02-12T19:43:45.437042Z] Feb 12 19:44:35.396950 waagent[1499]: 2024-02-12T19:44:35.396889Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. 
Feb 12 19:44:35.398654 waagent[1499]: 2024-02-12T19:44:35.398596Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 2 ms] Feb 12 19:44:35.419251 waagent[1499]: 2024-02-12T19:44:35.419192Z INFO ExtHandler ExtHandler Looking for existing remote access users. Feb 12 19:44:35.428461 waagent[1499]: 2024-02-12T19:44:35.428385Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 68BC7BB3-3A45-4E29-874B-6D0EA59DCD6C;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 1] Feb 12 19:45:09.046516 kernel: hv_balloon: Max. dynamic memory size: 8192 MB Feb 12 19:45:10.588883 update_engine[1301]: I0212 19:45:10.588822 1301 update_attempter.cc:509] Updating boot flags... Feb 12 19:45:23.197930 systemd[1]: Created slice system-sshd.slice. Feb 12 19:45:23.199840 systemd[1]: Started sshd@0-10.200.8.24:22-10.200.12.6:45156.service. Feb 12 19:45:23.889143 sshd[1607]: Accepted publickey for core from 10.200.12.6 port 45156 ssh2: RSA SHA256:s7YymQosdnJ6BBn11oTaBnKtgbkZHlGvzOt+RffOmrs Feb 12 19:45:23.890811 sshd[1607]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:45:23.895663 systemd-logind[1297]: New session 3 of user core. Feb 12 19:45:23.896617 systemd[1]: Started session-3.scope. Feb 12 19:45:24.428818 systemd[1]: Started sshd@1-10.200.8.24:22-10.200.12.6:45172.service. Feb 12 19:45:25.040258 sshd[1612]: Accepted publickey for core from 10.200.12.6 port 45172 ssh2: RSA SHA256:s7YymQosdnJ6BBn11oTaBnKtgbkZHlGvzOt+RffOmrs Feb 12 19:45:25.041943 sshd[1612]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:45:25.047535 systemd[1]: Started session-4.scope. Feb 12 19:45:25.048095 systemd-logind[1297]: New session 4 of user core. Feb 12 19:45:25.478422 sshd[1612]: pam_unix(sshd:session): session closed for user core Feb 12 19:45:25.481767 systemd[1]: sshd@1-10.200.8.24:22-10.200.12.6:45172.service: Deactivated successfully. Feb 12 19:45:25.482775 systemd[1]: session-4.scope: Deactivated successfully. Feb 12 19:45:25.483561 systemd-logind[1297]: Session 4 logged out. Waiting for processes to exit. Feb 12 19:45:25.484493 systemd-logind[1297]: Removed session 4. Feb 12 19:45:25.586420 systemd[1]: Started sshd@2-10.200.8.24:22-10.200.12.6:45186.service. Feb 12 19:45:26.201013 sshd[1618]: Accepted publickey for core from 10.200.12.6 port 45186 ssh2: RSA SHA256:s7YymQosdnJ6BBn11oTaBnKtgbkZHlGvzOt+RffOmrs Feb 12 19:45:26.202672 sshd[1618]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:45:26.208291 systemd[1]: Started session-5.scope. Feb 12 19:45:26.208751 systemd-logind[1297]: New session 5 of user core. Feb 12 19:45:26.636154 sshd[1618]: pam_unix(sshd:session): session closed for user core Feb 12 19:45:26.639305 systemd[1]: sshd@2-10.200.8.24:22-10.200.12.6:45186.service: Deactivated successfully. Feb 12 19:45:26.640305 systemd[1]: session-5.scope: Deactivated successfully. Feb 12 19:45:26.641116 systemd-logind[1297]: Session 5 logged out. Waiting for processes to exit. Feb 12 19:45:26.642041 systemd-logind[1297]: Removed session 5. Feb 12 19:45:26.738928 systemd[1]: Started sshd@3-10.200.8.24:22-10.200.12.6:45196.service. 
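The "Accepted publickey ... SHA256:s7Yym..." lines above use OpenSSH's key fingerprint format: base64 of the SHA-256 digest of the raw public-key blob, with padding stripped. A sketch, assuming an authorized_keys-style input line (the example in the comment is a placeholder, not the real key):

    import base64, hashlib

    # Derive an sshd-style "SHA256:..." fingerprint from a public key line
    # of the form "<type> <base64-blob> <comment>".
    def fingerprint(authorized_key_line: str) -> str:
        blob = base64.b64decode(authorized_key_line.split()[1])
        digest = hashlib.sha256(blob).digest()
        return "SHA256:" + base64.b64encode(digest).decode().rstrip("=")

    # fingerprint("ssh-ed25519 AAAAC3NzaC1l... core@host") -> "SHA256:..."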
Feb 12 19:45:27.398549 sshd[1624]: Accepted publickey for core from 10.200.12.6 port 45196 ssh2: RSA SHA256:s7YymQosdnJ6BBn11oTaBnKtgbkZHlGvzOt+RffOmrs Feb 12 19:45:27.401138 sshd[1624]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:45:27.406122 systemd[1]: Started session-6.scope. Feb 12 19:45:27.406559 systemd-logind[1297]: New session 6 of user core. Feb 12 19:45:27.835944 sshd[1624]: pam_unix(sshd:session): session closed for user core Feb 12 19:45:27.839180 systemd[1]: sshd@3-10.200.8.24:22-10.200.12.6:45196.service: Deactivated successfully. Feb 12 19:45:27.840166 systemd[1]: session-6.scope: Deactivated successfully. Feb 12 19:45:27.840973 systemd-logind[1297]: Session 6 logged out. Waiting for processes to exit. Feb 12 19:45:27.841898 systemd-logind[1297]: Removed session 6. Feb 12 19:45:27.943247 systemd[1]: Started sshd@4-10.200.8.24:22-10.200.12.6:35470.service. Feb 12 19:45:28.565336 sshd[1630]: Accepted publickey for core from 10.200.12.6 port 35470 ssh2: RSA SHA256:s7YymQosdnJ6BBn11oTaBnKtgbkZHlGvzOt+RffOmrs Feb 12 19:45:28.566954 sshd[1630]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:45:28.571912 systemd[1]: Started session-7.scope. Feb 12 19:45:28.572344 systemd-logind[1297]: New session 7 of user core. Feb 12 19:45:28.995909 sudo[1633]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 12 19:45:28.996174 sudo[1633]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Feb 12 19:45:29.585257 systemd[1]: Starting docker.service... Feb 12 19:45:29.632156 env[1648]: time="2024-02-12T19:45:29.632112076Z" level=info msg="Starting up" Feb 12 19:45:29.633569 env[1648]: time="2024-02-12T19:45:29.633535017Z" level=info msg="parsed scheme: \"unix\"" module=grpc Feb 12 19:45:29.633569 env[1648]: time="2024-02-12T19:45:29.633554617Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Feb 12 19:45:29.633744 env[1648]: time="2024-02-12T19:45:29.633575218Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Feb 12 19:45:29.633744 env[1648]: time="2024-02-12T19:45:29.633588218Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Feb 12 19:45:29.635386 env[1648]: time="2024-02-12T19:45:29.635364569Z" level=info msg="parsed scheme: \"unix\"" module=grpc Feb 12 19:45:29.635487 env[1648]: time="2024-02-12T19:45:29.635476272Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Feb 12 19:45:29.635537 env[1648]: time="2024-02-12T19:45:29.635527574Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Feb 12 19:45:29.635578 env[1648]: time="2024-02-12T19:45:29.635570375Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Feb 12 19:45:29.642221 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport2074928397-merged.mount: Deactivated successfully. Feb 12 19:45:29.697910 env[1648]: time="2024-02-12T19:45:29.697856751Z" level=info msg="Loading containers: start." Feb 12 19:45:29.798720 kernel: Initializing XFRM netlink socket Feb 12 19:45:29.815530 env[1648]: time="2024-02-12T19:45:29.815488605Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. 
Daemon option --bip can be used to set a preferred IP address" Feb 12 19:45:29.884242 systemd-networkd[1448]: docker0: Link UP Feb 12 19:45:29.904193 env[1648]: time="2024-02-12T19:45:29.904148633Z" level=info msg="Loading containers: done." Feb 12 19:45:29.916209 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2840104142-merged.mount: Deactivated successfully. Feb 12 19:45:29.960248 env[1648]: time="2024-02-12T19:45:29.960195631Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 12 19:45:29.960512 env[1648]: time="2024-02-12T19:45:29.960481639Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Feb 12 19:45:29.960665 env[1648]: time="2024-02-12T19:45:29.960636843Z" level=info msg="Daemon has completed initialization" Feb 12 19:45:29.988370 systemd[1]: Started docker.service. Feb 12 19:45:29.998278 env[1648]: time="2024-02-12T19:45:29.998218915Z" level=info msg="API listen on /run/docker.sock" Feb 12 19:45:30.014919 systemd[1]: Reloading. Feb 12 19:45:30.115765 /usr/lib/systemd/system-generators/torcx-generator[1778]: time="2024-02-12T19:45:30Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 12 19:45:30.116804 /usr/lib/systemd/system-generators/torcx-generator[1778]: time="2024-02-12T19:45:30Z" level=info msg="torcx already run" Feb 12 19:45:30.185642 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 12 19:45:30.185663 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 12 19:45:30.202276 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 12 19:45:30.290825 systemd[1]: Started kubelet.service. Feb 12 19:45:30.366511 kubelet[1840]: E0212 19:45:30.366459 1840 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 12 19:45:30.368013 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 12 19:45:30.368122 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 12 19:45:32.139593 env[1314]: time="2024-02-12T19:45:32.139525719Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.6\"" Feb 12 19:45:32.895971 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount209391216.mount: Deactivated successfully. 
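kubelet exits above because /var/lib/kubelet/config.yaml does not exist yet; systemd keeps restarting it until kubeadm writes that file during cluster join. For illustration only, a minimal KubeletConfiguration consistent with details visible in this log (systemd cgroup driver, /etc/kubernetes/manifests, /etc/kubernetes/pki/ca.crt), not the file this node eventually used:

    import os, textwrap

    # Illustrative minimal KubeletConfiguration; kubeadm normally
    # generates the real one, so treat these fields as an assumption.
    CONFIG = textwrap.dedent("""\
        apiVersion: kubelet.config.k8s.io/v1beta1
        kind: KubeletConfiguration
        cgroupDriver: systemd
        staticPodPath: /etc/kubernetes/manifests
        authentication:
          x509:
            clientCAFile: /etc/kubernetes/pki/ca.crt
        """)

    os.makedirs("/var/lib/kubelet", exist_ok=True)
    with open("/var/lib/kubelet/config.yaml", "w") as f:
        f.write(CONFIG)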
Feb 12 19:45:34.800611 env[1314]: time="2024-02-12T19:45:34.800556542Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.28.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:45:34.807987 env[1314]: time="2024-02-12T19:45:34.807944325Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:70e88c5e3a8e409ff4604a5fdb1dacb736ea02ba0b7a3da635f294e953906f47,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:45:34.813224 env[1314]: time="2024-02-12T19:45:34.813189655Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.28.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:45:34.818847 env[1314]: time="2024-02-12T19:45:34.818814395Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:98a686df810b9f1de8e3b2ae869e79c51a36e7434d33c53f011852618aec0a68,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:45:34.819423 env[1314]: time="2024-02-12T19:45:34.819391009Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.6\" returns image reference \"sha256:70e88c5e3a8e409ff4604a5fdb1dacb736ea02ba0b7a3da635f294e953906f47\"" Feb 12 19:45:34.829090 env[1314]: time="2024-02-12T19:45:34.829069849Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.6\"" Feb 12 19:45:36.953111 env[1314]: time="2024-02-12T19:45:36.953051402Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.28.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:45:36.960027 env[1314]: time="2024-02-12T19:45:36.959974964Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:18dbd2df3bb54036300d2af8b20ef60d479173946ff089a4d16e258b27faa55c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:45:36.964294 env[1314]: time="2024-02-12T19:45:36.964260565Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.28.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:45:36.967641 env[1314]: time="2024-02-12T19:45:36.967600343Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:80bdcd72cfe26028bb2fed75732fc2f511c35fa8d1edc03deae11f3490713c9e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:45:36.968253 env[1314]: time="2024-02-12T19:45:36.968215658Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.6\" returns image reference \"sha256:18dbd2df3bb54036300d2af8b20ef60d479173946ff089a4d16e258b27faa55c\"" Feb 12 19:45:36.978724 env[1314]: time="2024-02-12T19:45:36.978677404Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.6\"" Feb 12 19:45:38.292149 env[1314]: time="2024-02-12T19:45:38.292091952Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.28.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:45:38.297942 env[1314]: time="2024-02-12T19:45:38.297894381Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7597ecaaf12074e2980eee086736dbd01e566dc266351560001aa47dbbb0e5fe,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:45:38.301718 env[1314]: 
time="2024-02-12T19:45:38.301666665Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.28.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:45:38.311592 env[1314]: time="2024-02-12T19:45:38.311554084Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:a89db556c34d652d403d909882dbd97336f2e935b1c726b2e2b2c0400186ac39,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:45:38.312207 env[1314]: time="2024-02-12T19:45:38.312170498Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.6\" returns image reference \"sha256:7597ecaaf12074e2980eee086736dbd01e566dc266351560001aa47dbbb0e5fe\"" Feb 12 19:45:38.322219 env[1314]: time="2024-02-12T19:45:38.322161520Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.6\"" Feb 12 19:45:39.463904 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3662058535.mount: Deactivated successfully. Feb 12 19:45:40.045758 env[1314]: time="2024-02-12T19:45:40.045702405Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.28.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:45:40.050556 env[1314]: time="2024-02-12T19:45:40.050516907Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:342a759d88156b4f56ba522a1aed0e3d32d72542545346b40877f6583bebe05f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:45:40.054255 env[1314]: time="2024-02-12T19:45:40.054222285Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.28.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:45:40.057772 env[1314]: time="2024-02-12T19:45:40.057743559Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:3898a1671ae42be1cd3c2e777549bc7b5b306b8da3a224b747365f6679fb902a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:45:40.058126 env[1314]: time="2024-02-12T19:45:40.058093966Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.6\" returns image reference \"sha256:342a759d88156b4f56ba522a1aed0e3d32d72542545346b40877f6583bebe05f\"" Feb 12 19:45:40.067820 env[1314]: time="2024-02-12T19:45:40.067791071Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Feb 12 19:45:40.511206 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 12 19:45:40.511405 systemd[1]: Stopped kubelet.service. Feb 12 19:45:40.513307 systemd[1]: Started kubelet.service. Feb 12 19:45:40.523861 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3795752386.mount: Deactivated successfully. 
Feb 12 19:45:40.550719 env[1314]: time="2024-02-12T19:45:40.550041831Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:45:40.557977 env[1314]: time="2024-02-12T19:45:40.557289684Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:45:40.560727 env[1314]: time="2024-02-12T19:45:40.560461851Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:45:40.565145 env[1314]: time="2024-02-12T19:45:40.565108949Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:45:40.565956 kubelet[1881]: E0212 19:45:40.565919 1881 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 12 19:45:40.566287 env[1314]: time="2024-02-12T19:45:40.565967467Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Feb 12 19:45:40.570804 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 12 19:45:40.570973 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 12 19:45:40.578309 env[1314]: time="2024-02-12T19:45:40.578152523Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.9-0\"" Feb 12 19:45:41.151188 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1366914995.mount: Deactivated successfully. 
Feb 12 19:45:45.701518 env[1314]: time="2024-02-12T19:45:45.701462943Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.9-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:45:45.706862 env[1314]: time="2024-02-12T19:45:45.706825741Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:45:45.709944 env[1314]: time="2024-02-12T19:45:45.709909898Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.9-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:45:45.712778 env[1314]: time="2024-02-12T19:45:45.712744251Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:45:45.713490 env[1314]: time="2024-02-12T19:45:45.713459564Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.9-0\" returns image reference \"sha256:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9\"" Feb 12 19:45:45.723158 env[1314]: time="2024-02-12T19:45:45.723131042Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\"" Feb 12 19:45:46.238430 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3933867796.mount: Deactivated successfully. Feb 12 19:45:46.932191 env[1314]: time="2024-02-12T19:45:46.932135512Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.10.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:45:46.940600 env[1314]: time="2024-02-12T19:45:46.940557964Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:45:46.944456 env[1314]: time="2024-02-12T19:45:46.944425833Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.10.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:45:46.948033 env[1314]: time="2024-02-12T19:45:46.948002797Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:45:46.948433 env[1314]: time="2024-02-12T19:45:46.948401005Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\" returns image reference \"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc\"" Feb 12 19:45:49.460436 systemd[1]: Stopped kubelet.service. Feb 12 19:45:49.475089 systemd[1]: Reloading. 
Feb 12 19:45:49.562525 /usr/lib/systemd/system-generators/torcx-generator[1979]: time="2024-02-12T19:45:49Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 12 19:45:49.567965 /usr/lib/systemd/system-generators/torcx-generator[1979]: time="2024-02-12T19:45:49Z" level=info msg="torcx already run" Feb 12 19:45:49.637275 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 12 19:45:49.637297 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 12 19:45:49.653389 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 12 19:45:49.747238 systemd[1]: Started kubelet.service. Feb 12 19:45:49.793611 kubelet[2042]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 12 19:45:49.793611 kubelet[2042]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 12 19:45:49.793611 kubelet[2042]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 12 19:45:49.794099 kubelet[2042]: I0212 19:45:49.793651 2042 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 12 19:45:50.080629 kubelet[2042]: I0212 19:45:50.080593 2042 server.go:467] "Kubelet version" kubeletVersion="v1.28.1" Feb 12 19:45:50.080629 kubelet[2042]: I0212 19:45:50.080619 2042 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 12 19:45:50.080946 kubelet[2042]: I0212 19:45:50.080925 2042 server.go:895] "Client rotation is on, will bootstrap in background" Feb 12 19:45:50.085955 kubelet[2042]: E0212 19:45:50.085932 2042 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.8.24:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.8.24:6443: connect: connection refused Feb 12 19:45:50.086131 kubelet[2042]: I0212 19:45:50.086111 2042 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 12 19:45:50.090929 kubelet[2042]: I0212 19:45:50.090907 2042 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 12 19:45:50.091162 kubelet[2042]: I0212 19:45:50.091144 2042 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 12 19:45:50.091325 kubelet[2042]: I0212 19:45:50.091308 2042 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Feb 12 19:45:50.091466 kubelet[2042]: I0212 19:45:50.091336 2042 topology_manager.go:138] "Creating topology manager with none policy" Feb 12 19:45:50.091466 kubelet[2042]: I0212 19:45:50.091348 2042 container_manager_linux.go:301] "Creating device plugin manager" Feb 12 19:45:50.091466 kubelet[2042]: I0212 19:45:50.091457 2042 state_mem.go:36] "Initialized new in-memory state store" Feb 12 19:45:50.091595 kubelet[2042]: I0212 19:45:50.091575 2042 kubelet.go:393] "Attempting to sync node with API server" Feb 12 19:45:50.091595 kubelet[2042]: I0212 19:45:50.091594 2042 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 12 19:45:50.091759 kubelet[2042]: I0212 19:45:50.091745 2042 kubelet.go:309] "Adding apiserver pod source" Feb 12 19:45:50.091873 kubelet[2042]: I0212 19:45:50.091859 2042 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 12 19:45:50.093815 kubelet[2042]: I0212 19:45:50.093798 2042 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 12 19:45:50.094278 kubelet[2042]: W0212 19:45:50.094235 2042 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.200.8.24:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.24:6443: connect: connection refused Feb 12 19:45:50.094422 kubelet[2042]: E0212 19:45:50.094407 2042 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.8.24:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.24:6443: connect: connection refused Feb 12 19:45:50.094867 kubelet[2042]: W0212 19:45:50.094823 2042 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get 
"https://10.200.8.24:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.2-a-e615f4b643&limit=500&resourceVersion=0": dial tcp 10.200.8.24:6443: connect: connection refused Feb 12 19:45:50.094981 kubelet[2042]: E0212 19:45:50.094970 2042 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.8.24:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.2-a-e615f4b643&limit=500&resourceVersion=0": dial tcp 10.200.8.24:6443: connect: connection refused Feb 12 19:45:50.096183 kubelet[2042]: W0212 19:45:50.096163 2042 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Feb 12 19:45:50.098873 kubelet[2042]: I0212 19:45:50.098854 2042 server.go:1232] "Started kubelet" Feb 12 19:45:50.101813 kubelet[2042]: E0212 19:45:50.101797 2042 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 12 19:45:50.101912 kubelet[2042]: E0212 19:45:50.101903 2042 kubelet.go:1431] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 12 19:45:50.102257 kubelet[2042]: I0212 19:45:50.102244 2042 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Feb 12 19:45:50.102959 kubelet[2042]: I0212 19:45:50.102941 2042 server.go:462] "Adding debug handlers to kubelet server" Feb 12 19:45:50.103958 kubelet[2042]: I0212 19:45:50.103941 2042 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Feb 12 19:45:50.104191 kubelet[2042]: I0212 19:45:50.104179 2042 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 12 19:45:50.104606 kubelet[2042]: E0212 19:45:50.104523 2042 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-a-e615f4b643.17b3352cc4b8be46", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-a-e615f4b643", UID:"ci-3510.3.2-a-e615f4b643", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-a-e615f4b643"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 45, 50, 98824774, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 45, 50, 98824774, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"ci-3510.3.2-a-e615f4b643"}': 'Post "https://10.200.8.24:6443/api/v1/namespaces/default/events": dial tcp 10.200.8.24:6443: connect: connection refused'(may retry after sleeping) Feb 12 19:45:50.105648 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). 
Feb 12 19:45:50.105809 kubelet[2042]: I0212 19:45:50.105791 2042 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 12 19:45:50.106302 kubelet[2042]: I0212 19:45:50.106284 2042 volume_manager.go:291] "Starting Kubelet Volume Manager" Feb 12 19:45:50.108331 kubelet[2042]: I0212 19:45:50.108313 2042 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 12 19:45:50.108485 kubelet[2042]: I0212 19:45:50.108473 2042 reconciler_new.go:29] "Reconciler: start to sync state" Feb 12 19:45:50.109403 kubelet[2042]: E0212 19:45:50.109386 2042 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510.3.2-a-e615f4b643\" not found" Feb 12 19:45:50.110757 kubelet[2042]: E0212 19:45:50.110741 2042 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.24:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-a-e615f4b643?timeout=10s\": dial tcp 10.200.8.24:6443: connect: connection refused" interval="200ms" Feb 12 19:45:50.110975 kubelet[2042]: W0212 19:45:50.110930 2042 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.200.8.24:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.24:6443: connect: connection refused Feb 12 19:45:50.111085 kubelet[2042]: E0212 19:45:50.111074 2042 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.8.24:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.24:6443: connect: connection refused Feb 12 19:45:50.128111 kubelet[2042]: I0212 19:45:50.128080 2042 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 12 19:45:50.130022 kubelet[2042]: I0212 19:45:50.129996 2042 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 12 19:45:50.130022 kubelet[2042]: I0212 19:45:50.130025 2042 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 12 19:45:50.130156 kubelet[2042]: I0212 19:45:50.130046 2042 kubelet.go:2303] "Starting kubelet main sync loop" Feb 12 19:45:50.130156 kubelet[2042]: E0212 19:45:50.130097 2042 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 12 19:45:50.136069 kubelet[2042]: W0212 19:45:50.135847 2042 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.200.8.24:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.24:6443: connect: connection refused Feb 12 19:45:50.136069 kubelet[2042]: E0212 19:45:50.135900 2042 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.8.24:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.24:6443: connect: connection refused Feb 12 19:45:50.174924 kubelet[2042]: I0212 19:45:50.174854 2042 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 12 19:45:50.175073 kubelet[2042]: I0212 19:45:50.175055 2042 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 12 19:45:50.175143 kubelet[2042]: I0212 19:45:50.175079 2042 state_mem.go:36] "Initialized new in-memory state store" Feb 12 19:45:50.181440 kubelet[2042]: I0212 19:45:50.181415 2042 policy_none.go:49] "None policy: Start" Feb 12 19:45:50.181920 kubelet[2042]: I0212 19:45:50.181892 2042 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 12 19:45:50.181920 kubelet[2042]: I0212 19:45:50.181920 2042 state_mem.go:35] "Initializing new in-memory state store" Feb 12 19:45:50.191115 systemd[1]: Created slice kubepods.slice. Feb 12 19:45:50.195099 systemd[1]: Created slice kubepods-burstable.slice. Feb 12 19:45:50.197980 systemd[1]: Created slice kubepods-besteffort.slice. 
Feb 12 19:45:50.204239 kubelet[2042]: I0212 19:45:50.204215 2042 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 12 19:45:50.204423 kubelet[2042]: I0212 19:45:50.204403 2042 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 12 19:45:50.206103 kubelet[2042]: E0212 19:45:50.205835 2042 eviction_manager.go:258] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3510.3.2-a-e615f4b643\" not found" Feb 12 19:45:50.210845 kubelet[2042]: I0212 19:45:50.210827 2042 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-e615f4b643" Feb 12 19:45:50.211136 kubelet[2042]: E0212 19:45:50.211120 2042 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.200.8.24:6443/api/v1/nodes\": dial tcp 10.200.8.24:6443: connect: connection refused" node="ci-3510.3.2-a-e615f4b643" Feb 12 19:45:50.230432 kubelet[2042]: I0212 19:45:50.230402 2042 topology_manager.go:215] "Topology Admit Handler" podUID="a5b62034a604b7da83ebedadf8c27328" podNamespace="kube-system" podName="kube-apiserver-ci-3510.3.2-a-e615f4b643" Feb 12 19:45:50.232019 kubelet[2042]: I0212 19:45:50.231991 2042 topology_manager.go:215] "Topology Admit Handler" podUID="50a7e0dd8ba914c0adc16e59971b4d96" podNamespace="kube-system" podName="kube-controller-manager-ci-3510.3.2-a-e615f4b643" Feb 12 19:45:50.233666 kubelet[2042]: I0212 19:45:50.233647 2042 topology_manager.go:215] "Topology Admit Handler" podUID="209ecd95900abcbad5eb1506760504f0" podNamespace="kube-system" podName="kube-scheduler-ci-3510.3.2-a-e615f4b643" Feb 12 19:45:50.239156 systemd[1]: Created slice kubepods-burstable-poda5b62034a604b7da83ebedadf8c27328.slice. Feb 12 19:45:50.240948 kubelet[2042]: W0212 19:45:50.240916 2042 helpers.go:242] readString: Failed to read "/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda5b62034a604b7da83ebedadf8c27328.slice/cpuset.cpus.effective": read /sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda5b62034a604b7da83ebedadf8c27328.slice/cpuset.cpus.effective: no such device Feb 12 19:45:50.248503 systemd[1]: Created slice kubepods-burstable-pod50a7e0dd8ba914c0adc16e59971b4d96.slice. Feb 12 19:45:50.257803 systemd[1]: Created slice kubepods-burstable-pod209ecd95900abcbad5eb1506760504f0.slice. 
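The three "Topology Admit Handler" entries correspond to the control-plane static pods the kubelet discovered under the manifest path it registered earlier ("Adding static pod path" path="/etc/kubernetes/manifests"); it admits and runs them before any apiserver exists. A small sketch of that discovery step, assuming the conventional directory and .yaml/.json manifest extensions; parsing and admission are elided.

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// staticPodManifests lists the manifest files a kubelet would parse into
// static pods from its configured manifest directory.
func staticPodManifests(dir string) ([]string, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return nil, err
	}
	var out []string
	for _, e := range entries {
		ext := filepath.Ext(e.Name())
		if !e.IsDir() && (ext == ".yaml" || ext == ".json") {
			out = append(out, filepath.Join(dir, e.Name()))
		}
	}
	return out, nil
}

func main() {
	fmt.Println(staticPodManifests("/etc/kubernetes/manifests"))
}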
Feb 12 19:45:50.309926 kubelet[2042]: I0212 19:45:50.309887 2042 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a5b62034a604b7da83ebedadf8c27328-ca-certs\") pod \"kube-apiserver-ci-3510.3.2-a-e615f4b643\" (UID: \"a5b62034a604b7da83ebedadf8c27328\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-e615f4b643" Feb 12 19:45:50.310121 kubelet[2042]: I0212 19:45:50.309998 2042 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/50a7e0dd8ba914c0adc16e59971b4d96-ca-certs\") pod \"kube-controller-manager-ci-3510.3.2-a-e615f4b643\" (UID: \"50a7e0dd8ba914c0adc16e59971b4d96\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-e615f4b643" Feb 12 19:45:50.310121 kubelet[2042]: I0212 19:45:50.310060 2042 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/50a7e0dd8ba914c0adc16e59971b4d96-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.2-a-e615f4b643\" (UID: \"50a7e0dd8ba914c0adc16e59971b4d96\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-e615f4b643" Feb 12 19:45:50.310121 kubelet[2042]: I0212 19:45:50.310094 2042 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/50a7e0dd8ba914c0adc16e59971b4d96-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.2-a-e615f4b643\" (UID: \"50a7e0dd8ba914c0adc16e59971b4d96\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-e615f4b643" Feb 12 19:45:50.310307 kubelet[2042]: I0212 19:45:50.310171 2042 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/209ecd95900abcbad5eb1506760504f0-kubeconfig\") pod \"kube-scheduler-ci-3510.3.2-a-e615f4b643\" (UID: \"209ecd95900abcbad5eb1506760504f0\") " pod="kube-system/kube-scheduler-ci-3510.3.2-a-e615f4b643" Feb 12 19:45:50.310307 kubelet[2042]: I0212 19:45:50.310237 2042 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a5b62034a604b7da83ebedadf8c27328-k8s-certs\") pod \"kube-apiserver-ci-3510.3.2-a-e615f4b643\" (UID: \"a5b62034a604b7da83ebedadf8c27328\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-e615f4b643" Feb 12 19:45:50.310307 kubelet[2042]: I0212 19:45:50.310303 2042 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a5b62034a604b7da83ebedadf8c27328-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.2-a-e615f4b643\" (UID: \"a5b62034a604b7da83ebedadf8c27328\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-e615f4b643" Feb 12 19:45:50.310469 kubelet[2042]: I0212 19:45:50.310393 2042 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/50a7e0dd8ba914c0adc16e59971b4d96-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.2-a-e615f4b643\" (UID: \"50a7e0dd8ba914c0adc16e59971b4d96\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-e615f4b643" Feb 12 19:45:50.310528 kubelet[2042]: I0212 19:45:50.310470 2042 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/50a7e0dd8ba914c0adc16e59971b4d96-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.2-a-e615f4b643\" (UID: \"50a7e0dd8ba914c0adc16e59971b4d96\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-e615f4b643" Feb 12 19:45:50.311261 kubelet[2042]: E0212 19:45:50.311234 2042 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.24:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-a-e615f4b643?timeout=10s\": dial tcp 10.200.8.24:6443: connect: connection refused" interval="400ms" Feb 12 19:45:50.412981 kubelet[2042]: I0212 19:45:50.412853 2042 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-e615f4b643" Feb 12 19:45:50.415858 kubelet[2042]: E0212 19:45:50.415819 2042 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.200.8.24:6443/api/v1/nodes\": dial tcp 10.200.8.24:6443: connect: connection refused" node="ci-3510.3.2-a-e615f4b643" Feb 12 19:45:50.547582 env[1314]: time="2024-02-12T19:45:50.547528154Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.2-a-e615f4b643,Uid:a5b62034a604b7da83ebedadf8c27328,Namespace:kube-system,Attempt:0,}" Feb 12 19:45:50.551825 env[1314]: time="2024-02-12T19:45:50.551789923Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.2-a-e615f4b643,Uid:50a7e0dd8ba914c0adc16e59971b4d96,Namespace:kube-system,Attempt:0,}" Feb 12 19:45:50.560944 env[1314]: time="2024-02-12T19:45:50.560664267Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.2-a-e615f4b643,Uid:209ecd95900abcbad5eb1506760504f0,Namespace:kube-system,Attempt:0,}" Feb 12 19:45:50.712935 kubelet[2042]: E0212 19:45:50.712823 2042 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.24:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-a-e615f4b643?timeout=10s\": dial tcp 10.200.8.24:6443: connect: connection refused" interval="800ms" Feb 12 19:45:50.818056 kubelet[2042]: I0212 19:45:50.818016 2042 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-e615f4b643" Feb 12 19:45:50.818717 kubelet[2042]: E0212 19:45:50.818663 2042 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.200.8.24:6443/api/v1/nodes\": dial tcp 10.200.8.24:6443: connect: connection refused" node="ci-3510.3.2-a-e615f4b643" Feb 12 19:45:50.913484 kubelet[2042]: W0212 19:45:50.913425 2042 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.200.8.24:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.2-a-e615f4b643&limit=500&resourceVersion=0": dial tcp 10.200.8.24:6443: connect: connection refused Feb 12 19:45:50.913484 kubelet[2042]: E0212 19:45:50.913484 2042 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.8.24:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.2-a-e615f4b643&limit=500&resourceVersion=0": dial tcp 10.200.8.24:6443: connect: connection refused Feb 12 19:45:50.943060 kubelet[2042]: W0212 19:45:50.943026 2042 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.200.8.24:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": 
dial tcp 10.200.8.24:6443: connect: connection refused Feb 12 19:45:50.943060 kubelet[2042]: E0212 19:45:50.943062 2042 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.8.24:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.24:6443: connect: connection refused Feb 12 19:45:51.029314 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3940602703.mount: Deactivated successfully. Feb 12 19:45:51.063073 env[1314]: time="2024-02-12T19:45:51.063022592Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:45:51.073569 env[1314]: time="2024-02-12T19:45:51.073532459Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:45:51.078837 env[1314]: time="2024-02-12T19:45:51.078803642Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:45:51.083151 env[1314]: time="2024-02-12T19:45:51.083111210Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:45:51.094892 env[1314]: time="2024-02-12T19:45:51.094858196Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:45:51.098722 env[1314]: time="2024-02-12T19:45:51.098681956Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:45:51.107111 env[1314]: time="2024-02-12T19:45:51.107074389Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:45:51.112028 env[1314]: time="2024-02-12T19:45:51.111996067Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:45:51.121076 env[1314]: time="2024-02-12T19:45:51.121037010Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:45:51.128836 env[1314]: time="2024-02-12T19:45:51.128798033Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:45:51.136775 env[1314]: time="2024-02-12T19:45:51.136743759Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:45:51.146866 env[1314]: time="2024-02-12T19:45:51.146834118Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:45:51.163854 env[1314]: time="2024-02-12T19:45:51.163786086Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 19:45:51.163854 env[1314]: time="2024-02-12T19:45:51.163836087Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 19:45:51.164043 env[1314]: time="2024-02-12T19:45:51.164008890Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 19:45:51.164270 env[1314]: time="2024-02-12T19:45:51.164201893Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a837b7b98938f263ada7f61b4c97aa788b25b9a702119875cedd929f788d29ce pid=2079 runtime=io.containerd.runc.v2 Feb 12 19:45:51.182234 systemd[1]: Started cri-containerd-a837b7b98938f263ada7f61b4c97aa788b25b9a702119875cedd929f788d29ce.scope. Feb 12 19:45:51.222800 env[1314]: time="2024-02-12T19:45:51.222737919Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 19:45:51.223059 env[1314]: time="2024-02-12T19:45:51.223034024Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 19:45:51.223169 env[1314]: time="2024-02-12T19:45:51.223149425Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 19:45:51.223407 env[1314]: time="2024-02-12T19:45:51.223375729Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/86f196b7ec1d7eccf994cbc5bb23f07cb276356f861c35db0b454f637934c79a pid=2118 runtime=io.containerd.runc.v2 Feb 12 19:45:51.225179 env[1314]: time="2024-02-12T19:45:51.225111556Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 19:45:51.225360 env[1314]: time="2024-02-12T19:45:51.225334660Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 19:45:51.225504 env[1314]: time="2024-02-12T19:45:51.225480262Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 19:45:51.225816 env[1314]: time="2024-02-12T19:45:51.225763567Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/50875bd80863548204ecd4622e34b3985f15a82fd54c72fb1025fb647cf908cc pid=2119 runtime=io.containerd.runc.v2 Feb 12 19:45:51.258041 systemd[1]: Started cri-containerd-50875bd80863548204ecd4622e34b3985f15a82fd54c72fb1025fb647cf908cc.scope. Feb 12 19:45:51.261809 systemd[1]: Started cri-containerd-86f196b7ec1d7eccf994cbc5bb23f07cb276356f861c35db0b454f637934c79a.scope. 
Feb 12 19:45:51.267970 env[1314]: time="2024-02-12T19:45:51.267822832Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.2-a-e615f4b643,Uid:50a7e0dd8ba914c0adc16e59971b4d96,Namespace:kube-system,Attempt:0,} returns sandbox id \"a837b7b98938f263ada7f61b4c97aa788b25b9a702119875cedd929f788d29ce\"" Feb 12 19:45:51.290225 env[1314]: time="2024-02-12T19:45:51.288440758Z" level=info msg="CreateContainer within sandbox \"a837b7b98938f263ada7f61b4c97aa788b25b9a702119875cedd929f788d29ce\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 12 19:45:51.313530 kubelet[2042]: W0212 19:45:51.313472 2042 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.200.8.24:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.24:6443: connect: connection refused Feb 12 19:45:51.313751 kubelet[2042]: E0212 19:45:51.313736 2042 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.8.24:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.24:6443: connect: connection refused Feb 12 19:45:51.331071 env[1314]: time="2024-02-12T19:45:51.331028332Z" level=info msg="CreateContainer within sandbox \"a837b7b98938f263ada7f61b4c97aa788b25b9a702119875cedd929f788d29ce\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"efd492eb27aa280e7e393c21e320d75aee6847b2742ebd33793080ee5e32c1ec\"" Feb 12 19:45:51.331864 env[1314]: time="2024-02-12T19:45:51.331834545Z" level=info msg="StartContainer for \"efd492eb27aa280e7e393c21e320d75aee6847b2742ebd33793080ee5e32c1ec\"" Feb 12 19:45:51.341312 env[1314]: time="2024-02-12T19:45:51.341284294Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.2-a-e615f4b643,Uid:a5b62034a604b7da83ebedadf8c27328,Namespace:kube-system,Attempt:0,} returns sandbox id \"50875bd80863548204ecd4622e34b3985f15a82fd54c72fb1025fb647cf908cc\"" Feb 12 19:45:51.343638 env[1314]: time="2024-02-12T19:45:51.343612731Z" level=info msg="CreateContainer within sandbox \"50875bd80863548204ecd4622e34b3985f15a82fd54c72fb1025fb647cf908cc\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 12 19:45:51.362482 systemd[1]: Started cri-containerd-efd492eb27aa280e7e393c21e320d75aee6847b2742ebd33793080ee5e32c1ec.scope. 
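The reflector warnings that keep repeating ("failed to list *v1.Service … connection refused") come from client-go informers, which list and watch each resource and retry with backoff until the apiserver answers. A minimal sketch of that machinery, assuming a reachable cluster and an illustrative kubeconfig path.

package main

import (
	"fmt"
	"time"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/kubelet.conf")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	factory := informers.NewSharedInformerFactory(cs, 30*time.Second)
	svc := factory.Core().V1().Services().Informer()
	stop := make(chan struct{})
	defer close(stop)
	factory.Start(stop) // each informer's reflector begins its list+watch retry loop
	fmt.Println("services cache synced:", cache.WaitForCacheSync(stop, svc.HasSynced))
}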
Feb 12 19:45:51.371190 env[1314]: time="2024-02-12T19:45:51.371153767Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.2-a-e615f4b643,Uid:209ecd95900abcbad5eb1506760504f0,Namespace:kube-system,Attempt:0,} returns sandbox id \"86f196b7ec1d7eccf994cbc5bb23f07cb276356f861c35db0b454f637934c79a\"" Feb 12 19:45:51.376089 env[1314]: time="2024-02-12T19:45:51.376056344Z" level=info msg="CreateContainer within sandbox \"86f196b7ec1d7eccf994cbc5bb23f07cb276356f861c35db0b454f637934c79a\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 12 19:45:51.382569 env[1314]: time="2024-02-12T19:45:51.382519146Z" level=info msg="CreateContainer within sandbox \"50875bd80863548204ecd4622e34b3985f15a82fd54c72fb1025fb647cf908cc\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"f8cc1ea54d43f3068b433a597dd0a39d72593dca45b0affdbd06fbc94f893ff3\"" Feb 12 19:45:51.382960 env[1314]: time="2024-02-12T19:45:51.382929553Z" level=info msg="StartContainer for \"f8cc1ea54d43f3068b433a597dd0a39d72593dca45b0affdbd06fbc94f893ff3\"" Feb 12 19:45:51.404166 systemd[1]: Started cri-containerd-f8cc1ea54d43f3068b433a597dd0a39d72593dca45b0affdbd06fbc94f893ff3.scope. Feb 12 19:45:51.461366 env[1314]: time="2024-02-12T19:45:51.461199691Z" level=info msg="StartContainer for \"efd492eb27aa280e7e393c21e320d75aee6847b2742ebd33793080ee5e32c1ec\" returns successfully" Feb 12 19:45:51.466836 env[1314]: time="2024-02-12T19:45:51.466784779Z" level=info msg="CreateContainer within sandbox \"86f196b7ec1d7eccf994cbc5bb23f07cb276356f861c35db0b454f637934c79a\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"aaef3e27fddf5af154bea775d1e7c904f85a675af15935f7ba12c22b3b34099c\"" Feb 12 19:45:51.467350 env[1314]: time="2024-02-12T19:45:51.467311488Z" level=info msg="StartContainer for \"aaef3e27fddf5af154bea775d1e7c904f85a675af15935f7ba12c22b3b34099c\"" Feb 12 19:45:51.500165 systemd[1]: Started cri-containerd-aaef3e27fddf5af154bea775d1e7c904f85a675af15935f7ba12c22b3b34099c.scope. Feb 12 19:45:51.514091 kubelet[2042]: E0212 19:45:51.514060 2042 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.24:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-a-e615f4b643?timeout=10s\": dial tcp 10.200.8.24:6443: connect: connection refused" interval="1.6s" Feb 12 19:45:51.529265 env[1314]: time="2024-02-12T19:45:51.529215667Z" level=info msg="StartContainer for \"f8cc1ea54d43f3068b433a597dd0a39d72593dca45b0affdbd06fbc94f893ff3\" returns successfully" Feb 12 19:45:51.621343 kubelet[2042]: I0212 19:45:51.621294 2042 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-e615f4b643" Feb 12 19:45:51.621834 kubelet[2042]: E0212 19:45:51.621804 2042 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.200.8.24:6443/api/v1/nodes\": dial tcp 10.200.8.24:6443: connect: connection refused" node="ci-3510.3.2-a-e615f4b643" Feb 12 19:45:51.646482 env[1314]: time="2024-02-12T19:45:51.646436321Z" level=info msg="StartContainer for \"aaef3e27fddf5af154bea775d1e7c904f85a675af15935f7ba12c22b3b34099c\" returns successfully" Feb 12 19:45:52.034220 systemd[1]: run-containerd-runc-k8s.io-a837b7b98938f263ada7f61b4c97aa788b25b9a702119875cedd929f788d29ce-runc.FzYCyl.mount: Deactivated successfully. 
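The startup just logged follows the CRI call sequence RunPodSandbox → CreateContainer → StartContainer, issued by the kubelet over containerd's gRPC socket. A compressed sketch of that sequence against the CRI v1 API; the pod name, UID, and container name below are illustrative placeholders, not values from this log.

package main

import (
	"context"
	"fmt"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()
	rt := runtimeapi.NewRuntimeServiceClient(conn)
	ctx := context.Background()

	sandboxCfg := &runtimeapi.PodSandboxConfig{
		Metadata: &runtimeapi.PodSandboxMetadata{
			Name: "example-pod", Namespace: "kube-system", Uid: "example-uid",
		},
	}
	// 1. RunPodSandbox: creates the sandbox (pause container, netns, shim).
	sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{Config: sandboxCfg})
	if err != nil {
		panic(err)
	}
	// 2. CreateContainer within the returned sandbox id.
	ctr, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId: sb.PodSandboxId,
		Config: &runtimeapi.ContainerConfig{
			Metadata: &runtimeapi.ContainerMetadata{Name: "example"},
			Image:    &runtimeapi.ImageSpec{Image: "registry.k8s.io/pause:3.6"},
		},
		SandboxConfig: sandboxCfg,
	})
	if err != nil {
		panic(err)
	}
	// 3. StartContainer, after which the log reports "returns successfully".
	_, err = rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: ctr.ContainerId})
	fmt.Println(sb.PodSandboxId, ctr.ContainerId, err)
}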
Feb 12 19:45:53.223835 kubelet[2042]: I0212 19:45:53.223806 2042 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-e615f4b643" Feb 12 19:45:53.759029 kubelet[2042]: E0212 19:45:53.758992 2042 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-3510.3.2-a-e615f4b643\" not found" node="ci-3510.3.2-a-e615f4b643" Feb 12 19:45:53.796143 kubelet[2042]: I0212 19:45:53.796108 2042 kubelet_node_status.go:73] "Successfully registered node" node="ci-3510.3.2-a-e615f4b643" Feb 12 19:45:54.095411 kubelet[2042]: I0212 19:45:54.095358 2042 apiserver.go:52] "Watching apiserver" Feb 12 19:45:54.109082 kubelet[2042]: I0212 19:45:54.109044 2042 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 12 19:45:55.030884 kubelet[2042]: W0212 19:45:55.030847 2042 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 12 19:45:56.108340 kubelet[2042]: W0212 19:45:56.108302 2042 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 12 19:45:56.517810 systemd[1]: Reloading. Feb 12 19:45:56.630930 /usr/lib/systemd/system-generators/torcx-generator[2343]: time="2024-02-12T19:45:56Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 12 19:45:56.630965 /usr/lib/systemd/system-generators/torcx-generator[2343]: time="2024-02-12T19:45:56Z" level=info msg="torcx already run" Feb 12 19:45:56.693915 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 12 19:45:56.693934 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 12 19:45:56.710338 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 12 19:45:56.821925 kubelet[2042]: I0212 19:45:56.821900 2042 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 12 19:45:56.822342 systemd[1]: Stopping kubelet.service... Feb 12 19:45:56.842036 systemd[1]: kubelet.service: Deactivated successfully. Feb 12 19:45:56.842244 systemd[1]: Stopped kubelet.service. Feb 12 19:45:56.844149 systemd[1]: Started kubelet.service. Feb 12 19:45:56.922266 kubelet[2397]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 12 19:45:56.922587 kubelet[2397]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 12 19:45:56.922626 kubelet[2397]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 12 19:45:56.922806 kubelet[2397]: I0212 19:45:56.922776 2397 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 12 19:45:56.928670 kubelet[2397]: I0212 19:45:56.928648 2397 server.go:467] "Kubelet version" kubeletVersion="v1.28.1" Feb 12 19:45:56.928814 kubelet[2397]: I0212 19:45:56.928803 2397 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 12 19:45:56.929022 kubelet[2397]: I0212 19:45:56.929010 2397 server.go:895] "Client rotation is on, will bootstrap in background" Feb 12 19:45:57.123191 kubelet[2397]: I0212 19:45:57.123102 2397 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 12 19:45:57.202069 kubelet[2397]: I0212 19:45:57.124546 2397 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 12 19:45:57.202069 kubelet[2397]: I0212 19:45:57.130714 2397 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Feb 12 19:45:57.202069 kubelet[2397]: I0212 19:45:57.130935 2397 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 12 19:45:57.202069 kubelet[2397]: I0212 19:45:57.131074 2397 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Feb 12 19:45:57.202069 kubelet[2397]: I0212 19:45:57.131094 2397 topology_manager.go:138] "Creating topology manager with none policy" Feb 12 19:45:57.202505 kubelet[2397]: I0212 19:45:57.131102 2397 container_manager_linux.go:301] "Creating device plugin manager" Feb 12 19:45:57.202505 kubelet[2397]: I0212 19:45:57.131133 2397 state_mem.go:36] "Initialized new in-memory state store" Feb 12 19:45:57.202505 kubelet[2397]: I0212 19:45:57.131210 2397 kubelet.go:393] "Attempting to sync node with API server" Feb 12 19:45:57.202505 kubelet[2397]: I0212 19:45:57.131223 2397 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 12 19:45:57.202505 
kubelet[2397]: I0212 19:45:57.131241 2397 kubelet.go:309] "Adding apiserver pod source" Feb 12 19:45:57.202505 kubelet[2397]: I0212 19:45:57.131259 2397 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 12 19:45:57.202505 kubelet[2397]: I0212 19:45:57.132356 2397 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 12 19:45:57.202505 kubelet[2397]: I0212 19:45:57.133019 2397 server.go:1232] "Started kubelet" Feb 12 19:45:57.202505 kubelet[2397]: I0212 19:45:57.138448 2397 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Feb 12 19:45:57.202505 kubelet[2397]: I0212 19:45:57.138759 2397 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 12 19:45:57.202505 kubelet[2397]: I0212 19:45:57.138823 2397 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Feb 12 19:45:57.202505 kubelet[2397]: I0212 19:45:57.139726 2397 server.go:462] "Adding debug handlers to kubelet server" Feb 12 19:45:57.202505 kubelet[2397]: I0212 19:45:57.143199 2397 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 12 19:45:57.202505 kubelet[2397]: I0212 19:45:57.153488 2397 volume_manager.go:291] "Starting Kubelet Volume Manager" Feb 12 19:45:57.202505 kubelet[2397]: I0212 19:45:57.153944 2397 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 12 19:45:57.202505 kubelet[2397]: I0212 19:45:57.154131 2397 reconciler_new.go:29] "Reconciler: start to sync state" Feb 12 19:45:57.202505 kubelet[2397]: E0212 19:45:57.163758 2397 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 12 19:45:57.202505 kubelet[2397]: E0212 19:45:57.163795 2397 kubelet.go:1431] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 12 19:45:57.203182 kubelet[2397]: I0212 19:45:57.193940 2397 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 12 19:45:57.203182 kubelet[2397]: I0212 19:45:57.195221 2397 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 12 19:45:57.203182 kubelet[2397]: I0212 19:45:57.195249 2397 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 12 19:45:57.203182 kubelet[2397]: I0212 19:45:57.195289 2397 kubelet.go:2303] "Starting kubelet main sync loop" Feb 12 19:45:57.203182 kubelet[2397]: E0212 19:45:57.195342 2397 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 12 19:45:57.230435 kubelet[2397]: I0212 19:45:57.230407 2397 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 12 19:45:57.230435 kubelet[2397]: I0212 19:45:57.230428 2397 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 12 19:45:57.230435 kubelet[2397]: I0212 19:45:57.230445 2397 state_mem.go:36] "Initialized new in-memory state store" Feb 12 19:45:57.230740 kubelet[2397]: I0212 19:45:57.230597 2397 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 12 19:45:57.230740 kubelet[2397]: I0212 19:45:57.230620 2397 state_mem.go:96] "Updated CPUSet assignments" assignments={} Feb 12 19:45:57.230740 kubelet[2397]: I0212 19:45:57.230627 2397 policy_none.go:49] "None policy: Start" Feb 12 19:45:57.231339 kubelet[2397]: I0212 19:45:57.231318 2397 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 12 19:45:57.231339 kubelet[2397]: I0212 19:45:57.231341 2397 state_mem.go:35] "Initializing new in-memory state store" Feb 12 19:45:57.231519 kubelet[2397]: I0212 19:45:57.231479 2397 state_mem.go:75] "Updated machine memory state" Feb 12 19:45:57.235072 kubelet[2397]: I0212 19:45:57.235054 2397 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 12 19:45:57.235286 kubelet[2397]: I0212 19:45:57.235267 2397 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 12 19:45:57.256797 kubelet[2397]: I0212 19:45:57.256748 2397 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-e615f4b643" Feb 12 19:45:57.275716 kubelet[2397]: I0212 19:45:57.275681 2397 kubelet_node_status.go:108] "Node was previously registered" node="ci-3510.3.2-a-e615f4b643" Feb 12 19:45:57.275835 kubelet[2397]: I0212 19:45:57.275753 2397 kubelet_node_status.go:73] "Successfully registered node" node="ci-3510.3.2-a-e615f4b643" Feb 12 19:45:57.296015 kubelet[2397]: I0212 19:45:57.295991 2397 topology_manager.go:215] "Topology Admit Handler" podUID="a5b62034a604b7da83ebedadf8c27328" podNamespace="kube-system" podName="kube-apiserver-ci-3510.3.2-a-e615f4b643" Feb 12 19:45:57.296149 kubelet[2397]: I0212 19:45:57.296132 2397 topology_manager.go:215] "Topology Admit Handler" podUID="50a7e0dd8ba914c0adc16e59971b4d96" podNamespace="kube-system" podName="kube-controller-manager-ci-3510.3.2-a-e615f4b643" Feb 12 19:45:57.296572 kubelet[2397]: I0212 19:45:57.296551 2397 topology_manager.go:215] "Topology Admit Handler" podUID="209ecd95900abcbad5eb1506760504f0" podNamespace="kube-system" podName="kube-scheduler-ci-3510.3.2-a-e615f4b643" Feb 12 19:45:57.301103 kubelet[2397]: W0212 19:45:57.301079 2397 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 12 19:45:57.310865 kubelet[2397]: W0212 19:45:57.310848 2397 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 12 19:45:57.310972 kubelet[2397]: 
E0212 19:45:57.310925 2397 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3510.3.2-a-e615f4b643\" already exists" pod="kube-system/kube-apiserver-ci-3510.3.2-a-e615f4b643" Feb 12 19:45:57.310972 kubelet[2397]: W0212 19:45:57.310859 2397 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 12 19:45:57.311065 kubelet[2397]: E0212 19:45:57.311004 2397 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-3510.3.2-a-e615f4b643\" already exists" pod="kube-system/kube-scheduler-ci-3510.3.2-a-e615f4b643" Feb 12 19:45:57.355379 kubelet[2397]: I0212 19:45:57.355342 2397 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/209ecd95900abcbad5eb1506760504f0-kubeconfig\") pod \"kube-scheduler-ci-3510.3.2-a-e615f4b643\" (UID: \"209ecd95900abcbad5eb1506760504f0\") " pod="kube-system/kube-scheduler-ci-3510.3.2-a-e615f4b643" Feb 12 19:45:57.355522 kubelet[2397]: I0212 19:45:57.355397 2397 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a5b62034a604b7da83ebedadf8c27328-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.2-a-e615f4b643\" (UID: \"a5b62034a604b7da83ebedadf8c27328\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-e615f4b643" Feb 12 19:45:57.355522 kubelet[2397]: I0212 19:45:57.355434 2397 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/50a7e0dd8ba914c0adc16e59971b4d96-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.2-a-e615f4b643\" (UID: \"50a7e0dd8ba914c0adc16e59971b4d96\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-e615f4b643" Feb 12 19:45:57.355522 kubelet[2397]: I0212 19:45:57.355466 2397 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/50a7e0dd8ba914c0adc16e59971b4d96-ca-certs\") pod \"kube-controller-manager-ci-3510.3.2-a-e615f4b643\" (UID: \"50a7e0dd8ba914c0adc16e59971b4d96\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-e615f4b643" Feb 12 19:45:57.355522 kubelet[2397]: I0212 19:45:57.355505 2397 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/50a7e0dd8ba914c0adc16e59971b4d96-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.2-a-e615f4b643\" (UID: \"50a7e0dd8ba914c0adc16e59971b4d96\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-e615f4b643" Feb 12 19:45:57.355783 kubelet[2397]: I0212 19:45:57.355542 2397 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/50a7e0dd8ba914c0adc16e59971b4d96-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.2-a-e615f4b643\" (UID: \"50a7e0dd8ba914c0adc16e59971b4d96\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-e615f4b643" Feb 12 19:45:57.355783 kubelet[2397]: I0212 19:45:57.355583 2397 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/50a7e0dd8ba914c0adc16e59971b4d96-usr-share-ca-certificates\") pod 
\"kube-controller-manager-ci-3510.3.2-a-e615f4b643\" (UID: \"50a7e0dd8ba914c0adc16e59971b4d96\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-e615f4b643" Feb 12 19:45:57.355783 kubelet[2397]: I0212 19:45:57.355619 2397 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a5b62034a604b7da83ebedadf8c27328-ca-certs\") pod \"kube-apiserver-ci-3510.3.2-a-e615f4b643\" (UID: \"a5b62034a604b7da83ebedadf8c27328\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-e615f4b643" Feb 12 19:45:57.355783 kubelet[2397]: I0212 19:45:57.355657 2397 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a5b62034a604b7da83ebedadf8c27328-k8s-certs\") pod \"kube-apiserver-ci-3510.3.2-a-e615f4b643\" (UID: \"a5b62034a604b7da83ebedadf8c27328\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-e615f4b643" Feb 12 19:45:57.509782 sudo[2426]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Feb 12 19:45:57.510042 sudo[2426]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Feb 12 19:45:58.053895 sudo[2426]: pam_unix(sudo:session): session closed for user root Feb 12 19:45:58.131846 kubelet[2397]: I0212 19:45:58.131812 2397 apiserver.go:52] "Watching apiserver" Feb 12 19:45:58.154571 kubelet[2397]: I0212 19:45:58.154530 2397 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 12 19:45:58.222015 kubelet[2397]: W0212 19:45:58.221975 2397 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 12 19:45:58.222200 kubelet[2397]: E0212 19:45:58.222052 2397 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3510.3.2-a-e615f4b643\" already exists" pod="kube-system/kube-apiserver-ci-3510.3.2-a-e615f4b643" Feb 12 19:45:58.240833 kubelet[2397]: I0212 19:45:58.240802 2397 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3510.3.2-a-e615f4b643" podStartSLOduration=2.240760809 podCreationTimestamp="2024-02-12 19:45:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 19:45:58.231037079 +0000 UTC m=+1.379159988" watchObservedRunningTime="2024-02-12 19:45:58.240760809 +0000 UTC m=+1.388883718" Feb 12 19:45:58.255835 kubelet[2397]: I0212 19:45:58.255805 2397 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3510.3.2-a-e615f4b643" podStartSLOduration=1.255753408 podCreationTimestamp="2024-02-12 19:45:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 19:45:58.255555106 +0000 UTC m=+1.403677915" watchObservedRunningTime="2024-02-12 19:45:58.255753408 +0000 UTC m=+1.403876317" Feb 12 19:45:58.256026 kubelet[2397]: I0212 19:45:58.255901 2397 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3510.3.2-a-e615f4b643" podStartSLOduration=3.25587981 podCreationTimestamp="2024-02-12 19:45:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 
19:45:58.241251015 +0000 UTC m=+1.389373824" watchObservedRunningTime="2024-02-12 19:45:58.25587981 +0000 UTC m=+1.404002619" Feb 12 19:45:59.593822 sudo[1633]: pam_unix(sudo:session): session closed for user root Feb 12 19:45:59.693217 sshd[1630]: pam_unix(sshd:session): session closed for user core Feb 12 19:45:59.696750 systemd[1]: sshd@4-10.200.8.24:22-10.200.12.6:35470.service: Deactivated successfully. Feb 12 19:45:59.697950 systemd[1]: session-7.scope: Deactivated successfully. Feb 12 19:45:59.698198 systemd[1]: session-7.scope: Consumed 3.645s CPU time. Feb 12 19:45:59.698866 systemd-logind[1297]: Session 7 logged out. Waiting for processes to exit. Feb 12 19:45:59.699950 systemd-logind[1297]: Removed session 7. Feb 12 19:46:10.442492 kubelet[2397]: I0212 19:46:10.442456 2397 kuberuntime_manager.go:1463] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 12 19:46:10.442987 env[1314]: time="2024-02-12T19:46:10.442884257Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Feb 12 19:46:10.443307 kubelet[2397]: I0212 19:46:10.443084 2397 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 12 19:46:11.242907 kubelet[2397]: I0212 19:46:11.242864 2397 topology_manager.go:215] "Topology Admit Handler" podUID="604b6b63-2594-47c1-9a4b-9a4c5dd4075b" podNamespace="kube-system" podName="kube-proxy-d29pb" Feb 12 19:46:11.249176 systemd[1]: Created slice kubepods-besteffort-pod604b6b63_2594_47c1_9a4b_9a4c5dd4075b.slice. Feb 12 19:46:11.275516 kubelet[2397]: I0212 19:46:11.275475 2397 topology_manager.go:215] "Topology Admit Handler" podUID="243f6fa3-2f04-4547-82e7-36876982dd48" podNamespace="kube-system" podName="cilium-sw5mk" Feb 12 19:46:11.281456 systemd[1]: Created slice kubepods-burstable-pod243f6fa3_2f04_4547_82e7_36876982dd48.slice. 
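The "Updating Pod CIDR" entries show this node being assigned 192.168.0.0/24 and the kubelet pushing that range to the container runtime, which hands out pod IPs from it via CNI. A tiny stdlib sketch of what that CIDR provides.

package main

import (
	"fmt"
	"net"
)

func main() {
	// PodCIDR assigned to this node, as logged by kubelet_network.go.
	ip, ipnet, err := net.ParseCIDR("192.168.0.0/24")
	if err != nil {
		panic(err)
	}
	ones, bits := ipnet.Mask.Size()
	// Subtract network and broadcast addresses for a rough usable count.
	fmt.Printf("network=%v base=%v usable≈%d\n", ipnet, ip, (1<<(bits-ones))-2)
}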
Feb 12 19:46:11.282387 kubelet[2397]: W0212 19:46:11.281910 2397 reflector.go:535] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ci-3510.3.2-a-e615f4b643" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.2-a-e615f4b643' and this object Feb 12 19:46:11.282387 kubelet[2397]: E0212 19:46:11.281951 2397 reflector.go:147] object-"kube-system"/"cilium-clustermesh": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ci-3510.3.2-a-e615f4b643" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.2-a-e615f4b643' and this object Feb 12 19:46:11.282781 kubelet[2397]: W0212 19:46:11.282758 2397 reflector.go:535] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ci-3510.3.2-a-e615f4b643" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.2-a-e615f4b643' and this object Feb 12 19:46:11.282874 kubelet[2397]: E0212 19:46:11.282790 2397 reflector.go:147] object-"kube-system"/"hubble-server-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ci-3510.3.2-a-e615f4b643" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.2-a-e615f4b643' and this object Feb 12 19:46:11.286533 kubelet[2397]: W0212 19:46:11.286513 2397 reflector.go:535] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ci-3510.3.2-a-e615f4b643" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.2-a-e615f4b643' and this object Feb 12 19:46:11.286640 kubelet[2397]: E0212 19:46:11.286542 2397 reflector.go:147] object-"kube-system"/"cilium-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ci-3510.3.2-a-e615f4b643" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.2-a-e615f4b643' and this object Feb 12 19:46:11.346309 kubelet[2397]: I0212 19:46:11.346282 2397 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/604b6b63-2594-47c1-9a4b-9a4c5dd4075b-kube-proxy\") pod \"kube-proxy-d29pb\" (UID: \"604b6b63-2594-47c1-9a4b-9a4c5dd4075b\") " pod="kube-system/kube-proxy-d29pb" Feb 12 19:46:11.346600 kubelet[2397]: I0212 19:46:11.346582 2397 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/243f6fa3-2f04-4547-82e7-36876982dd48-bpf-maps\") pod \"cilium-sw5mk\" (UID: \"243f6fa3-2f04-4547-82e7-36876982dd48\") " pod="kube-system/cilium-sw5mk" Feb 12 19:46:11.346776 kubelet[2397]: I0212 19:46:11.346762 2397 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/243f6fa3-2f04-4547-82e7-36876982dd48-cilium-config-path\") pod \"cilium-sw5mk\" (UID: 
\"243f6fa3-2f04-4547-82e7-36876982dd48\") " pod="kube-system/cilium-sw5mk" Feb 12 19:46:11.346912 kubelet[2397]: I0212 19:46:11.346892 2397 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/243f6fa3-2f04-4547-82e7-36876982dd48-etc-cni-netd\") pod \"cilium-sw5mk\" (UID: \"243f6fa3-2f04-4547-82e7-36876982dd48\") " pod="kube-system/cilium-sw5mk" Feb 12 19:46:11.347003 kubelet[2397]: I0212 19:46:11.346942 2397 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/243f6fa3-2f04-4547-82e7-36876982dd48-cni-path\") pod \"cilium-sw5mk\" (UID: \"243f6fa3-2f04-4547-82e7-36876982dd48\") " pod="kube-system/cilium-sw5mk" Feb 12 19:46:11.347003 kubelet[2397]: I0212 19:46:11.346971 2397 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/604b6b63-2594-47c1-9a4b-9a4c5dd4075b-lib-modules\") pod \"kube-proxy-d29pb\" (UID: \"604b6b63-2594-47c1-9a4b-9a4c5dd4075b\") " pod="kube-system/kube-proxy-d29pb" Feb 12 19:46:11.347102 kubelet[2397]: I0212 19:46:11.347013 2397 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2h98g\" (UniqueName: \"kubernetes.io/projected/243f6fa3-2f04-4547-82e7-36876982dd48-kube-api-access-2h98g\") pod \"cilium-sw5mk\" (UID: \"243f6fa3-2f04-4547-82e7-36876982dd48\") " pod="kube-system/cilium-sw5mk" Feb 12 19:46:11.347102 kubelet[2397]: I0212 19:46:11.347043 2397 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/243f6fa3-2f04-4547-82e7-36876982dd48-cilium-run\") pod \"cilium-sw5mk\" (UID: \"243f6fa3-2f04-4547-82e7-36876982dd48\") " pod="kube-system/cilium-sw5mk" Feb 12 19:46:11.347102 kubelet[2397]: I0212 19:46:11.347074 2397 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/243f6fa3-2f04-4547-82e7-36876982dd48-hostproc\") pod \"cilium-sw5mk\" (UID: \"243f6fa3-2f04-4547-82e7-36876982dd48\") " pod="kube-system/cilium-sw5mk" Feb 12 19:46:11.347229 kubelet[2397]: I0212 19:46:11.347133 2397 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/243f6fa3-2f04-4547-82e7-36876982dd48-cilium-cgroup\") pod \"cilium-sw5mk\" (UID: \"243f6fa3-2f04-4547-82e7-36876982dd48\") " pod="kube-system/cilium-sw5mk" Feb 12 19:46:11.347229 kubelet[2397]: I0212 19:46:11.347183 2397 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/243f6fa3-2f04-4547-82e7-36876982dd48-lib-modules\") pod \"cilium-sw5mk\" (UID: \"243f6fa3-2f04-4547-82e7-36876982dd48\") " pod="kube-system/cilium-sw5mk" Feb 12 19:46:11.347229 kubelet[2397]: I0212 19:46:11.347212 2397 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/243f6fa3-2f04-4547-82e7-36876982dd48-hubble-tls\") pod \"cilium-sw5mk\" (UID: \"243f6fa3-2f04-4547-82e7-36876982dd48\") " pod="kube-system/cilium-sw5mk" Feb 12 19:46:11.347353 kubelet[2397]: I0212 19:46:11.347257 2397 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/243f6fa3-2f04-4547-82e7-36876982dd48-host-proc-sys-net\") pod \"cilium-sw5mk\" (UID: \"243f6fa3-2f04-4547-82e7-36876982dd48\") " pod="kube-system/cilium-sw5mk" Feb 12 19:46:11.347353 kubelet[2397]: I0212 19:46:11.347288 2397 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/604b6b63-2594-47c1-9a4b-9a4c5dd4075b-xtables-lock\") pod \"kube-proxy-d29pb\" (UID: \"604b6b63-2594-47c1-9a4b-9a4c5dd4075b\") " pod="kube-system/kube-proxy-d29pb" Feb 12 19:46:11.347353 kubelet[2397]: I0212 19:46:11.347337 2397 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6rvxv\" (UniqueName: \"kubernetes.io/projected/604b6b63-2594-47c1-9a4b-9a4c5dd4075b-kube-api-access-6rvxv\") pod \"kube-proxy-d29pb\" (UID: \"604b6b63-2594-47c1-9a4b-9a4c5dd4075b\") " pod="kube-system/kube-proxy-d29pb" Feb 12 19:46:11.347474 kubelet[2397]: I0212 19:46:11.347373 2397 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/243f6fa3-2f04-4547-82e7-36876982dd48-xtables-lock\") pod \"cilium-sw5mk\" (UID: \"243f6fa3-2f04-4547-82e7-36876982dd48\") " pod="kube-system/cilium-sw5mk" Feb 12 19:46:11.347474 kubelet[2397]: I0212 19:46:11.347420 2397 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/243f6fa3-2f04-4547-82e7-36876982dd48-host-proc-sys-kernel\") pod \"cilium-sw5mk\" (UID: \"243f6fa3-2f04-4547-82e7-36876982dd48\") " pod="kube-system/cilium-sw5mk" Feb 12 19:46:11.347474 kubelet[2397]: I0212 19:46:11.347450 2397 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/243f6fa3-2f04-4547-82e7-36876982dd48-clustermesh-secrets\") pod \"cilium-sw5mk\" (UID: \"243f6fa3-2f04-4547-82e7-36876982dd48\") " pod="kube-system/cilium-sw5mk" Feb 12 19:46:11.359637 kubelet[2397]: I0212 19:46:11.359608 2397 topology_manager.go:215] "Topology Admit Handler" podUID="a3d8eaa0-70ca-4068-b415-dbf716ca7cd9" podNamespace="kube-system" podName="cilium-operator-6bc8ccdb58-hhglf" Feb 12 19:46:11.365561 systemd[1]: Created slice kubepods-besteffort-poda3d8eaa0_70ca_4068_b415_dbf716ca7cd9.slice. 
Feb 12 19:46:11.448499 kubelet[2397]: I0212 19:46:11.448458 2397 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a3d8eaa0-70ca-4068-b415-dbf716ca7cd9-cilium-config-path\") pod \"cilium-operator-6bc8ccdb58-hhglf\" (UID: \"a3d8eaa0-70ca-4068-b415-dbf716ca7cd9\") " pod="kube-system/cilium-operator-6bc8ccdb58-hhglf" Feb 12 19:46:11.449450 kubelet[2397]: I0212 19:46:11.449419 2397 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j6thh\" (UniqueName: \"kubernetes.io/projected/a3d8eaa0-70ca-4068-b415-dbf716ca7cd9-kube-api-access-j6thh\") pod \"cilium-operator-6bc8ccdb58-hhglf\" (UID: \"a3d8eaa0-70ca-4068-b415-dbf716ca7cd9\") " pod="kube-system/cilium-operator-6bc8ccdb58-hhglf" Feb 12 19:46:11.558606 env[1314]: time="2024-02-12T19:46:11.556630019Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-d29pb,Uid:604b6b63-2594-47c1-9a4b-9a4c5dd4075b,Namespace:kube-system,Attempt:0,}" Feb 12 19:46:11.597734 env[1314]: time="2024-02-12T19:46:11.597627826Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 19:46:11.597920 env[1314]: time="2024-02-12T19:46:11.597679826Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 19:46:11.597920 env[1314]: time="2024-02-12T19:46:11.597734827Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 19:46:11.598050 env[1314]: time="2024-02-12T19:46:11.597934029Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d7a0560b0fe124ed7f74ee4925e6253bb41e14ddc1c7adfd7015216e0ca82758 pid=2474 runtime=io.containerd.runc.v2 Feb 12 19:46:11.623034 systemd[1]: Started cri-containerd-d7a0560b0fe124ed7f74ee4925e6253bb41e14ddc1c7adfd7015216e0ca82758.scope. Feb 12 19:46:11.646500 env[1314]: time="2024-02-12T19:46:11.646463710Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-d29pb,Uid:604b6b63-2594-47c1-9a4b-9a4c5dd4075b,Namespace:kube-system,Attempt:0,} returns sandbox id \"d7a0560b0fe124ed7f74ee4925e6253bb41e14ddc1c7adfd7015216e0ca82758\"" Feb 12 19:46:11.650653 env[1314]: time="2024-02-12T19:46:11.649330638Z" level=info msg="CreateContainer within sandbox \"d7a0560b0fe124ed7f74ee4925e6253bb41e14ddc1c7adfd7015216e0ca82758\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 12 19:46:11.687629 env[1314]: time="2024-02-12T19:46:11.687573717Z" level=info msg="CreateContainer within sandbox \"d7a0560b0fe124ed7f74ee4925e6253bb41e14ddc1c7adfd7015216e0ca82758\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"b5fff26e78afe162534dc2bdae039b9c2686b0e37a31b90e020f53d3137a53dc\"" Feb 12 19:46:11.689952 env[1314]: time="2024-02-12T19:46:11.688241224Z" level=info msg="StartContainer for \"b5fff26e78afe162534dc2bdae039b9c2686b0e37a31b90e020f53d3137a53dc\"" Feb 12 19:46:11.706835 systemd[1]: Started cri-containerd-b5fff26e78afe162534dc2bdae039b9c2686b0e37a31b90e020f53d3137a53dc.scope. 
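The RunPodSandbox → CreateContainer → StartContainer sequence above is the standard CRI exchange between kubelet and containerd: the sandbox call returns an id (d7a0560b... here), the container is created inside it, then started. The sketch below is a minimal client-side reconstruction against the CRI v1 gRPC API, not kubelet's actual code; the socket path and the kube-proxy image tag are assumptions.

```go
package main

import (
	"context"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	pb "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// containerd's default CRI endpoint (assumed path).
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	rt := pb.NewRuntimeServiceClient(conn)
	ctx := context.Background()

	sandboxCfg := &pb.PodSandboxConfig{
		Metadata: &pb.PodSandboxMetadata{
			Name:      "kube-proxy-d29pb",
			Uid:       "604b6b63-2594-47c1-9a4b-9a4c5dd4075b",
			Namespace: "kube-system",
		},
	}

	// 1. RunPodSandbox returns the sandbox id ("d7a0560b..." in the log).
	sb, err := rt.RunPodSandbox(ctx, &pb.RunPodSandboxRequest{Config: sandboxCfg})
	if err != nil {
		log.Fatal(err)
	}

	// 2. CreateContainer inside that sandbox; the image tag is an assumption.
	c, err := rt.CreateContainer(ctx, &pb.CreateContainerRequest{
		PodSandboxId: sb.PodSandboxId,
		Config: &pb.ContainerConfig{
			Metadata: &pb.ContainerMetadata{Name: "kube-proxy"},
			Image:    &pb.ImageSpec{Image: "registry.k8s.io/kube-proxy:v1.28.4"},
		},
		SandboxConfig: sandboxCfg,
	})
	if err != nil {
		log.Fatal(err)
	}

	// 3. StartContainer; on success the log shows "returns successfully".
	if _, err := rt.StartContainer(ctx, &pb.StartContainerRequest{ContainerId: c.ContainerId}); err != nil {
		log.Fatal(err)
	}
	log.Println("started", c.ContainerId)
}
```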
Feb 12 19:46:11.738720 env[1314]: time="2024-02-12T19:46:11.738648824Z" level=info msg="StartContainer for \"b5fff26e78afe162534dc2bdae039b9c2686b0e37a31b90e020f53d3137a53dc\" returns successfully" Feb 12 19:46:12.252421 kubelet[2397]: I0212 19:46:12.252387 2397 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-d29pb" podStartSLOduration=1.252346565 podCreationTimestamp="2024-02-12 19:46:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 19:46:12.252248364 +0000 UTC m=+15.400371173" watchObservedRunningTime="2024-02-12 19:46:12.252346565 +0000 UTC m=+15.400469474" Feb 12 19:46:12.450340 kubelet[2397]: E0212 19:46:12.450290 2397 configmap.go:199] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition Feb 12 19:46:12.450944 kubelet[2397]: E0212 19:46:12.450417 2397 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/243f6fa3-2f04-4547-82e7-36876982dd48-cilium-config-path podName:243f6fa3-2f04-4547-82e7-36876982dd48 nodeName:}" failed. No retries permitted until 2024-02-12 19:46:12.950386187 +0000 UTC m=+16.098508996 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/243f6fa3-2f04-4547-82e7-36876982dd48-cilium-config-path") pod "cilium-sw5mk" (UID: "243f6fa3-2f04-4547-82e7-36876982dd48") : failed to sync configmap cache: timed out waiting for the condition Feb 12 19:46:12.550908 kubelet[2397]: E0212 19:46:12.550776 2397 configmap.go:199] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition Feb 12 19:46:12.550908 kubelet[2397]: E0212 19:46:12.550876 2397 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a3d8eaa0-70ca-4068-b415-dbf716ca7cd9-cilium-config-path podName:a3d8eaa0-70ca-4068-b415-dbf716ca7cd9 nodeName:}" failed. No retries permitted until 2024-02-12 19:46:13.050850362 +0000 UTC m=+16.198973171 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/a3d8eaa0-70ca-4068-b415-dbf716ca7cd9-cilium-config-path") pod "cilium-operator-6bc8ccdb58-hhglf" (UID: "a3d8eaa0-70ca-4068-b415-dbf716ca7cd9") : failed to sync configmap cache: timed out waiting for the condition Feb 12 19:46:13.085712 env[1314]: time="2024-02-12T19:46:13.085651237Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-sw5mk,Uid:243f6fa3-2f04-4547-82e7-36876982dd48,Namespace:kube-system,Attempt:0,}" Feb 12 19:46:13.125290 env[1314]: time="2024-02-12T19:46:13.125223513Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 19:46:13.125440 env[1314]: time="2024-02-12T19:46:13.125257013Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 19:46:13.125440 env[1314]: time="2024-02-12T19:46:13.125271213Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 19:46:13.125712 env[1314]: time="2024-02-12T19:46:13.125651717Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/13736367e903a80980ff9cf37a2e6920f102eb9b2b8e99ab4bb0b5f47c5d601b pid=2672 runtime=io.containerd.runc.v2 Feb 12 19:46:13.141474 systemd[1]: Started cri-containerd-13736367e903a80980ff9cf37a2e6920f102eb9b2b8e99ab4bb0b5f47c5d601b.scope. Feb 12 19:46:13.146722 systemd[1]: run-containerd-runc-k8s.io-13736367e903a80980ff9cf37a2e6920f102eb9b2b8e99ab4bb0b5f47c5d601b-runc.pqUZAZ.mount: Deactivated successfully. Feb 12 19:46:13.170209 env[1314]: time="2024-02-12T19:46:13.170167140Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6bc8ccdb58-hhglf,Uid:a3d8eaa0-70ca-4068-b415-dbf716ca7cd9,Namespace:kube-system,Attempt:0,}" Feb 12 19:46:13.180013 env[1314]: time="2024-02-12T19:46:13.179970233Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-sw5mk,Uid:243f6fa3-2f04-4547-82e7-36876982dd48,Namespace:kube-system,Attempt:0,} returns sandbox id \"13736367e903a80980ff9cf37a2e6920f102eb9b2b8e99ab4bb0b5f47c5d601b\"" Feb 12 19:46:13.182593 env[1314]: time="2024-02-12T19:46:13.181736950Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Feb 12 19:46:13.210655 env[1314]: time="2024-02-12T19:46:13.210589224Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 19:46:13.210655 env[1314]: time="2024-02-12T19:46:13.210628925Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 19:46:13.210886 env[1314]: time="2024-02-12T19:46:13.210836627Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 19:46:13.211131 env[1314]: time="2024-02-12T19:46:13.211079329Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8566e887dc2dc01f39eb15b295494c886bd1eb3ad3de823cb40ca48ac6524746 pid=2714 runtime=io.containerd.runc.v2 Feb 12 19:46:13.223608 systemd[1]: Started cri-containerd-8566e887dc2dc01f39eb15b295494c886bd1eb3ad3de823cb40ca48ac6524746.scope. Feb 12 19:46:13.265881 env[1314]: time="2024-02-12T19:46:13.265217644Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6bc8ccdb58-hhglf,Uid:a3d8eaa0-70ca-4068-b415-dbf716ca7cd9,Namespace:kube-system,Attempt:0,} returns sandbox id \"8566e887dc2dc01f39eb15b295494c886bd1eb3ad3de823cb40ca48ac6524746\"" Feb 12 19:46:18.835543 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4274535097.mount: Deactivated successfully. 
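The cilium-config-path MountVolume.SetUp failures earlier ("No retries permitted until ... durationBeforeRetry 500ms") resolved here on retry: by 19:46:13 both sandboxes come up. That pattern is kubelet's per-operation exponential backoff while the configmap informer cache syncs. Below is a minimal sketch of the same retry shape using apimachinery's wait package; the failure count, factor, and cap are illustrative, only the 500ms initial delay comes from the log.

```go
package main

import (
	"errors"
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

var errCacheNotSynced = errors.New("failed to sync configmap cache: timed out waiting for the condition")

// mountConfigMapVolume stands in for MountVolume.SetUp; here it fails until
// the third attempt (an assumption) to mimic the cache catching up.
func mountConfigMapVolume(attempt *int) error {
	*attempt++
	if *attempt < 3 {
		return errCacheNotSynced
	}
	return nil
}

func main() {
	attempt := 0
	// 500ms matches the "durationBeforeRetry 500ms" entries above.
	backoff := wait.Backoff{Duration: 500 * time.Millisecond, Factor: 2.0, Steps: 5}
	err := wait.ExponentialBackoff(backoff, func() (bool, error) {
		if mErr := mountConfigMapVolume(&attempt); mErr != nil {
			fmt.Println("retrying:", mErr)
			return false, nil // not done; back off and try again
		}
		return true, nil
	})
	fmt.Println("mount result:", err, "after", attempt, "attempts")
}
```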
Feb 12 19:46:21.578912 env[1314]: time="2024-02-12T19:46:21.578859121Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:46:21.589555 env[1314]: time="2024-02-12T19:46:21.589516608Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:46:21.595352 env[1314]: time="2024-02-12T19:46:21.595315355Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:46:21.595952 env[1314]: time="2024-02-12T19:46:21.595920760Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Feb 12 19:46:21.597381 env[1314]: time="2024-02-12T19:46:21.596988168Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Feb 12 19:46:21.598797 env[1314]: time="2024-02-12T19:46:21.598753383Z" level=info msg="CreateContainer within sandbox \"13736367e903a80980ff9cf37a2e6920f102eb9b2b8e99ab4bb0b5f47c5d601b\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 12 19:46:21.644867 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount628040392.mount: Deactivated successfully. Feb 12 19:46:21.651527 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1477725636.mount: Deactivated successfully. Feb 12 19:46:21.664447 env[1314]: time="2024-02-12T19:46:21.664401715Z" level=info msg="CreateContainer within sandbox \"13736367e903a80980ff9cf37a2e6920f102eb9b2b8e99ab4bb0b5f47c5d601b\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"75442138ca56287bfa78ab6c08ad0c648aeac4433d5c17a83032ec086556a7f6\"" Feb 12 19:46:21.665094 env[1314]: time="2024-02-12T19:46:21.665067820Z" level=info msg="StartContainer for \"75442138ca56287bfa78ab6c08ad0c648aeac4433d5c17a83032ec086556a7f6\"" Feb 12 19:46:21.684825 systemd[1]: Started cri-containerd-75442138ca56287bfa78ab6c08ad0c648aeac4433d5c17a83032ec086556a7f6.scope. Feb 12 19:46:21.714709 env[1314]: time="2024-02-12T19:46:21.713852415Z" level=info msg="StartContainer for \"75442138ca56287bfa78ab6c08ad0c648aeac4433d5c17a83032ec086556a7f6\" returns successfully" Feb 12 19:46:21.720786 systemd[1]: cri-containerd-75442138ca56287bfa78ab6c08ad0c648aeac4433d5c17a83032ec086556a7f6.scope: Deactivated successfully. Feb 12 19:46:22.642360 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-75442138ca56287bfa78ab6c08ad0c648aeac4433d5c17a83032ec086556a7f6-rootfs.mount: Deactivated successfully. 
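The PullImage entries above resolve a tag-plus-digest reference (quay.io/cilium/cilium:v1.12.5@sha256:06ce...) to a local image id (sha256:3e35...): once a digest is present it pins the content, and the tag is effectively informational. A hedged sketch of the same pull through the CRI image service follows; the socket path is assumed, and this is a reconstruction, not kubelet's pull path.

```go
package main

import (
	"context"
	"fmt"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	pb "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	img := pb.NewImageServiceClient(conn)

	// Tag-plus-digest reference exactly as logged above.
	ref := "quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5"
	resp, err := img.PullImage(context.Background(), &pb.PullImageRequest{
		Image: &pb.ImageSpec{Image: ref},
	})
	if err != nil {
		log.Fatal(err)
	}
	// resp.ImageRef corresponds to the "returns image reference sha256:3e35..." line.
	fmt.Println("pulled:", resp.ImageRef)
}
```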
Feb 12 19:46:25.413645 env[1314]: time="2024-02-12T19:46:25.413585071Z" level=info msg="shim disconnected" id=75442138ca56287bfa78ab6c08ad0c648aeac4433d5c17a83032ec086556a7f6 Feb 12 19:46:25.413645 env[1314]: time="2024-02-12T19:46:25.413640072Z" level=warning msg="cleaning up after shim disconnected" id=75442138ca56287bfa78ab6c08ad0c648aeac4433d5c17a83032ec086556a7f6 namespace=k8s.io Feb 12 19:46:25.413645 env[1314]: time="2024-02-12T19:46:25.413652572Z" level=info msg="cleaning up dead shim" Feb 12 19:46:25.421957 env[1314]: time="2024-02-12T19:46:25.421915434Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:46:25Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2799 runtime=io.containerd.runc.v2\n" Feb 12 19:46:25.998045 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount886758943.mount: Deactivated successfully. Feb 12 19:46:26.292144 env[1314]: time="2024-02-12T19:46:26.292023144Z" level=info msg="CreateContainer within sandbox \"13736367e903a80980ff9cf37a2e6920f102eb9b2b8e99ab4bb0b5f47c5d601b\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 12 19:46:26.323364 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3960088125.mount: Deactivated successfully. Feb 12 19:46:26.337653 env[1314]: time="2024-02-12T19:46:26.337603681Z" level=info msg="CreateContainer within sandbox \"13736367e903a80980ff9cf37a2e6920f102eb9b2b8e99ab4bb0b5f47c5d601b\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"445a0bc0e194be60e949266ae5230a82dd97540557c2f6c71e93d370df26b4ee\"" Feb 12 19:46:26.340317 env[1314]: time="2024-02-12T19:46:26.340244300Z" level=info msg="StartContainer for \"445a0bc0e194be60e949266ae5230a82dd97540557c2f6c71e93d370df26b4ee\"" Feb 12 19:46:26.376684 systemd[1]: Started cri-containerd-445a0bc0e194be60e949266ae5230a82dd97540557c2f6c71e93d370df26b4ee.scope. Feb 12 19:46:26.424208 env[1314]: time="2024-02-12T19:46:26.424163121Z" level=info msg="StartContainer for \"445a0bc0e194be60e949266ae5230a82dd97540557c2f6c71e93d370df26b4ee\" returns successfully" Feb 12 19:46:26.432123 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 12 19:46:26.432411 systemd[1]: Stopped systemd-sysctl.service. Feb 12 19:46:26.433047 systemd[1]: Stopping systemd-sysctl.service... Feb 12 19:46:26.435369 systemd[1]: Starting systemd-sysctl.service... Feb 12 19:46:26.444040 systemd[1]: cri-containerd-445a0bc0e194be60e949266ae5230a82dd97540557c2f6c71e93d370df26b4ee.scope: Deactivated successfully. Feb 12 19:46:26.450510 systemd[1]: Finished systemd-sysctl.service. Feb 12 19:46:26.571498 env[1314]: time="2024-02-12T19:46:26.571442410Z" level=info msg="shim disconnected" id=445a0bc0e194be60e949266ae5230a82dd97540557c2f6c71e93d370df26b4ee Feb 12 19:46:26.571498 env[1314]: time="2024-02-12T19:46:26.571496210Z" level=warning msg="cleaning up after shim disconnected" id=445a0bc0e194be60e949266ae5230a82dd97540557c2f6c71e93d370df26b4ee namespace=k8s.io Feb 12 19:46:26.571832 env[1314]: time="2024-02-12T19:46:26.571509710Z" level=info msg="cleaning up dead shim" Feb 12 19:46:26.589168 env[1314]: time="2024-02-12T19:46:26.589127740Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:46:26Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2863 runtime=io.containerd.runc.v2\n" Feb 12 19:46:26.993129 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2805027690.mount: Deactivated successfully. 
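The apply-sysctl-overwrites init container above runs to completion, and systemd-sysctl is restarted in the same breath, which suggests it adjusts kernel parameters that sysctl.d would otherwise clobber. Judging by the container's name and typical Cilium behavior, that means writing under /proc/sys; the sketch below shows the mechanism, but the specific key and value are assumptions, not taken from this log.

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// writeSysctl writes a value under /proc/sys, the same mechanism an init
// container like apply-sysctl-overwrites would use (requires privileges).
func writeSysctl(key, value string) error {
	path := filepath.Join("/proc/sys", strings.ReplaceAll(key, ".", "/"))
	return os.WriteFile(path, []byte(value), 0o644)
}

func main() {
	// rp_filter=0 is a typical Cilium requirement; the exact keys this
	// particular init container touches are an assumption.
	if err := writeSysctl("net.ipv4.conf.all.rp_filter", "0"); err != nil {
		fmt.Fprintln(os.Stderr, "sysctl write failed:", err)
		os.Exit(1)
	}
	fmt.Println("sysctl applied")
}
```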
Feb 12 19:46:27.035111 env[1314]: time="2024-02-12T19:46:27.035065433Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:46:27.040868 env[1314]: time="2024-02-12T19:46:27.040829375Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:46:27.044731 env[1314]: time="2024-02-12T19:46:27.044678303Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 19:46:27.045264 env[1314]: time="2024-02-12T19:46:27.045222606Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Feb 12 19:46:27.047485 env[1314]: time="2024-02-12T19:46:27.047451223Z" level=info msg="CreateContainer within sandbox \"8566e887dc2dc01f39eb15b295494c886bd1eb3ad3de823cb40ca48ac6524746\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Feb 12 19:46:27.068240 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3114584912.mount: Deactivated successfully. Feb 12 19:46:27.074926 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2400405742.mount: Deactivated successfully. Feb 12 19:46:27.084002 env[1314]: time="2024-02-12T19:46:27.083960488Z" level=info msg="CreateContainer within sandbox \"8566e887dc2dc01f39eb15b295494c886bd1eb3ad3de823cb40ca48ac6524746\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"7f9b9fa2de6ff50b8da7eabaed57e17b4f0eed954ba9ae7460def29e1c5cb1a4\"" Feb 12 19:46:27.086440 env[1314]: time="2024-02-12T19:46:27.084647493Z" level=info msg="StartContainer for \"7f9b9fa2de6ff50b8da7eabaed57e17b4f0eed954ba9ae7460def29e1c5cb1a4\"" Feb 12 19:46:27.102454 systemd[1]: Started cri-containerd-7f9b9fa2de6ff50b8da7eabaed57e17b4f0eed954ba9ae7460def29e1c5cb1a4.scope. Feb 12 19:46:27.138308 env[1314]: time="2024-02-12T19:46:27.138258882Z" level=info msg="StartContainer for \"7f9b9fa2de6ff50b8da7eabaed57e17b4f0eed954ba9ae7460def29e1c5cb1a4\" returns successfully" Feb 12 19:46:27.290007 env[1314]: time="2024-02-12T19:46:27.289317780Z" level=info msg="CreateContainer within sandbox \"13736367e903a80980ff9cf37a2e6920f102eb9b2b8e99ab4bb0b5f47c5d601b\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 12 19:46:27.333159 env[1314]: time="2024-02-12T19:46:27.333101398Z" level=info msg="CreateContainer within sandbox \"13736367e903a80980ff9cf37a2e6920f102eb9b2b8e99ab4bb0b5f47c5d601b\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"335020dc675fbb0fb90fefd40a889fda6066d47dcb1d780ec378adf7be58c679\"" Feb 12 19:46:27.334111 env[1314]: time="2024-02-12T19:46:27.334072605Z" level=info msg="StartContainer for \"335020dc675fbb0fb90fefd40a889fda6066d47dcb1d780ec378adf7be58c679\"" Feb 12 19:46:27.365129 systemd[1]: Started cri-containerd-335020dc675fbb0fb90fefd40a889fda6066d47dcb1d780ec378adf7be58c679.scope. 
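Concurrently with the operator starting, the mount-bpf-fs init container launches. Its job, per its name, is to mount the BPF filesystem on the host so pinned eBPF maps survive agent restarts. A minimal sketch of that mount via golang.org/x/sys/unix, under the assumption it is the standard `mount -t bpf bpffs /sys/fs/bpf`:

```go
package main

import (
	"fmt"
	"log"

	"golang.org/x/sys/unix"
)

func main() {
	// Equivalent of `mount -t bpf bpffs /sys/fs/bpf`; needs CAP_SYS_ADMIN.
	// A real init container would first check /proc/self/mounts to avoid
	// stacking a second bpffs mount on the same target.
	target := "/sys/fs/bpf"
	if err := unix.Mount("bpffs", target, "bpf", 0, ""); err != nil {
		log.Fatal(err)
	}
	fmt.Println("bpffs mounted at", target)
}
```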
Feb 12 19:46:27.426164 env[1314]: time="2024-02-12T19:46:27.426114673Z" level=info msg="StartContainer for \"335020dc675fbb0fb90fefd40a889fda6066d47dcb1d780ec378adf7be58c679\" returns successfully" Feb 12 19:46:27.428677 systemd[1]: cri-containerd-335020dc675fbb0fb90fefd40a889fda6066d47dcb1d780ec378adf7be58c679.scope: Deactivated successfully. Feb 12 19:46:27.815913 env[1314]: time="2024-02-12T19:46:27.815860704Z" level=info msg="shim disconnected" id=335020dc675fbb0fb90fefd40a889fda6066d47dcb1d780ec378adf7be58c679 Feb 12 19:46:27.816174 env[1314]: time="2024-02-12T19:46:27.815918905Z" level=warning msg="cleaning up after shim disconnected" id=335020dc675fbb0fb90fefd40a889fda6066d47dcb1d780ec378adf7be58c679 namespace=k8s.io Feb 12 19:46:27.816174 env[1314]: time="2024-02-12T19:46:27.815931105Z" level=info msg="cleaning up dead shim" Feb 12 19:46:27.831265 env[1314]: time="2024-02-12T19:46:27.831213416Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:46:27Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2957 runtime=io.containerd.runc.v2\n" Feb 12 19:46:28.314101 env[1314]: time="2024-02-12T19:46:28.314057584Z" level=info msg="CreateContainer within sandbox \"13736367e903a80980ff9cf37a2e6920f102eb9b2b8e99ab4bb0b5f47c5d601b\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 12 19:46:28.326757 kubelet[2397]: I0212 19:46:28.326723 2397 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-6bc8ccdb58-hhglf" podStartSLOduration=3.548071326 podCreationTimestamp="2024-02-12 19:46:11 +0000 UTC" firstStartedPulling="2024-02-12 19:46:13.266995761 +0000 UTC m=+16.415118570" lastFinishedPulling="2024-02-12 19:46:27.045591009 +0000 UTC m=+30.193713818" observedRunningTime="2024-02-12 19:46:27.339829346 +0000 UTC m=+30.487952255" watchObservedRunningTime="2024-02-12 19:46:28.326666574 +0000 UTC m=+31.474789483" Feb 12 19:46:28.349816 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1964472974.mount: Deactivated successfully. Feb 12 19:46:28.361493 env[1314]: time="2024-02-12T19:46:28.361438222Z" level=info msg="CreateContainer within sandbox \"13736367e903a80980ff9cf37a2e6920f102eb9b2b8e99ab4bb0b5f47c5d601b\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"46fed81b611dcfce38638842d07c9ad4ba44ca1cad8f08203e424cebed81eb48\"" Feb 12 19:46:28.362005 env[1314]: time="2024-02-12T19:46:28.361971126Z" level=info msg="StartContainer for \"46fed81b611dcfce38638842d07c9ad4ba44ca1cad8f08203e424cebed81eb48\"" Feb 12 19:46:28.381851 systemd[1]: Started cri-containerd-46fed81b611dcfce38638842d07c9ad4ba44ca1cad8f08203e424cebed81eb48.scope. Feb 12 19:46:28.412227 systemd[1]: cri-containerd-46fed81b611dcfce38638842d07c9ad4ba44ca1cad8f08203e424cebed81eb48.scope: Deactivated successfully. 
Feb 12 19:46:28.414523 env[1314]: time="2024-02-12T19:46:28.414401700Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod243f6fa3_2f04_4547_82e7_36876982dd48.slice/cri-containerd-46fed81b611dcfce38638842d07c9ad4ba44ca1cad8f08203e424cebed81eb48.scope/memory.events\": no such file or directory" Feb 12 19:46:28.418782 env[1314]: time="2024-02-12T19:46:28.418747231Z" level=info msg="StartContainer for \"46fed81b611dcfce38638842d07c9ad4ba44ca1cad8f08203e424cebed81eb48\" returns successfully" Feb 12 19:46:28.444876 env[1314]: time="2024-02-12T19:46:28.444832917Z" level=info msg="shim disconnected" id=46fed81b611dcfce38638842d07c9ad4ba44ca1cad8f08203e424cebed81eb48 Feb 12 19:46:28.444876 env[1314]: time="2024-02-12T19:46:28.444876018Z" level=warning msg="cleaning up after shim disconnected" id=46fed81b611dcfce38638842d07c9ad4ba44ca1cad8f08203e424cebed81eb48 namespace=k8s.io Feb 12 19:46:28.445320 env[1314]: time="2024-02-12T19:46:28.444887018Z" level=info msg="cleaning up dead shim" Feb 12 19:46:28.452183 env[1314]: time="2024-02-12T19:46:28.452149770Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:46:28Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3010 runtime=io.containerd.runc.v2\n" Feb 12 19:46:29.307453 env[1314]: time="2024-02-12T19:46:29.307403938Z" level=info msg="CreateContainer within sandbox \"13736367e903a80980ff9cf37a2e6920f102eb9b2b8e99ab4bb0b5f47c5d601b\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 12 19:46:29.343712 env[1314]: time="2024-02-12T19:46:29.343645292Z" level=info msg="CreateContainer within sandbox \"13736367e903a80980ff9cf37a2e6920f102eb9b2b8e99ab4bb0b5f47c5d601b\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"3443439b0e1b514a66a4992441408114f4f32df41a0101589c3d9ab7018d5a9f\"" Feb 12 19:46:29.344435 env[1314]: time="2024-02-12T19:46:29.344402998Z" level=info msg="StartContainer for \"3443439b0e1b514a66a4992441408114f4f32df41a0101589c3d9ab7018d5a9f\"" Feb 12 19:46:29.374683 systemd[1]: Started cri-containerd-3443439b0e1b514a66a4992441408114f4f32df41a0101589c3d9ab7018d5a9f.scope. Feb 12 19:46:29.412518 env[1314]: time="2024-02-12T19:46:29.412472675Z" level=info msg="StartContainer for \"3443439b0e1b514a66a4992441408114f4f32df41a0101589c3d9ab7018d5a9f\" returns successfully" Feb 12 19:46:29.534445 kubelet[2397]: I0212 19:46:29.533509 2397 kubelet_node_status.go:493] "Fast updating node status as it just became ready" Feb 12 19:46:29.556109 kubelet[2397]: I0212 19:46:29.556062 2397 topology_manager.go:215] "Topology Admit Handler" podUID="dc1daa51-266d-48c5-8105-fa9a4ce0b741" podNamespace="kube-system" podName="coredns-5dd5756b68-pnzps" Feb 12 19:46:29.562881 systemd[1]: Created slice kubepods-burstable-poddc1daa51_266d_48c5_8105_fa9a4ce0b741.slice. Feb 12 19:46:29.564970 kubelet[2397]: I0212 19:46:29.564899 2397 topology_manager.go:215] "Topology Admit Handler" podUID="35335533-b6f6-46f1-ba4d-1dce47ead46e" podNamespace="kube-system" podName="coredns-5dd5756b68-9bvpm" Feb 12 19:46:29.571987 systemd[1]: Created slice kubepods-burstable-pod35335533_b6f6_46f1_ba4d_1dce47ead46e.slice. 
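The "failed to add inotify watch ... memory.events: no such file or directory" warning above is benign: clean-cilium-state exits so quickly that its cgroup scope is gone before the shim can watch its memory.events file. For reference, memory.events in the cgroup v2 tree is a flat key/value file; a small reader is sketched below, with the path shortened to the kubepods.slice root for illustration (the shim watches the per-container cri-containerd-<id>.scope path shown in the log).

```go
package main

import (
	"bufio"
	"fmt"
	"log"
	"os"
	"strings"
)

func main() {
	// Illustrative path; per the log, containers live under
	// kubepods.slice/.../cri-containerd-<id>.scope in the cgroup v2 tree.
	f, err := os.Open("/sys/fs/cgroup/kubepods.slice/memory.events")
	if err != nil {
		log.Fatal(err) // the shim hit ENOENT because the scope was already gone
	}
	defer f.Close()

	// memory.events holds "key value" lines: low, high, max, oom, oom_kill.
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) == 2 {
			fmt.Printf("%s = %s\n", fields[0], fields[1])
		}
	}
	if err := sc.Err(); err != nil {
		log.Fatal(err)
	}
}
```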
Feb 12 19:46:29.577675 kubelet[2397]: I0212 19:46:29.577650 2397 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bw5lj\" (UniqueName: \"kubernetes.io/projected/dc1daa51-266d-48c5-8105-fa9a4ce0b741-kube-api-access-bw5lj\") pod \"coredns-5dd5756b68-pnzps\" (UID: \"dc1daa51-266d-48c5-8105-fa9a4ce0b741\") " pod="kube-system/coredns-5dd5756b68-pnzps" Feb 12 19:46:29.577930 kubelet[2397]: I0212 19:46:29.577907 2397 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dc1daa51-266d-48c5-8105-fa9a4ce0b741-config-volume\") pod \"coredns-5dd5756b68-pnzps\" (UID: \"dc1daa51-266d-48c5-8105-fa9a4ce0b741\") " pod="kube-system/coredns-5dd5756b68-pnzps" Feb 12 19:46:29.578074 kubelet[2397]: I0212 19:46:29.578061 2397 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/35335533-b6f6-46f1-ba4d-1dce47ead46e-config-volume\") pod \"coredns-5dd5756b68-9bvpm\" (UID: \"35335533-b6f6-46f1-ba4d-1dce47ead46e\") " pod="kube-system/coredns-5dd5756b68-9bvpm" Feb 12 19:46:29.578223 kubelet[2397]: I0212 19:46:29.578207 2397 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jdfxz\" (UniqueName: \"kubernetes.io/projected/35335533-b6f6-46f1-ba4d-1dce47ead46e-kube-api-access-jdfxz\") pod \"coredns-5dd5756b68-9bvpm\" (UID: \"35335533-b6f6-46f1-ba4d-1dce47ead46e\") " pod="kube-system/coredns-5dd5756b68-9bvpm" Feb 12 19:46:29.867995 env[1314]: time="2024-02-12T19:46:29.867889471Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-pnzps,Uid:dc1daa51-266d-48c5-8105-fa9a4ce0b741,Namespace:kube-system,Attempt:0,}" Feb 12 19:46:29.877314 env[1314]: time="2024-02-12T19:46:29.877274737Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-9bvpm,Uid:35335533-b6f6-46f1-ba4d-1dce47ead46e,Namespace:kube-system,Attempt:0,}" Feb 12 19:46:30.000478 systemd[1]: run-containerd-runc-k8s.io-3443439b0e1b514a66a4992441408114f4f32df41a0101589c3d9ab7018d5a9f-runc.CX2j0u.mount: Deactivated successfully. 
Feb 12 19:46:31.513409 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready Feb 12 19:46:31.518727 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Feb 12 19:46:31.523443 systemd-networkd[1448]: cilium_host: Link UP Feb 12 19:46:31.523626 systemd-networkd[1448]: cilium_net: Link UP Feb 12 19:46:31.523847 systemd-networkd[1448]: cilium_net: Gained carrier Feb 12 19:46:31.524091 systemd-networkd[1448]: cilium_host: Gained carrier Feb 12 19:46:31.634537 systemd-networkd[1448]: cilium_vxlan: Link UP Feb 12 19:46:31.634546 systemd-networkd[1448]: cilium_vxlan: Gained carrier Feb 12 19:46:31.757836 systemd-networkd[1448]: cilium_net: Gained IPv6LL Feb 12 19:46:31.848723 kernel: NET: Registered PF_ALG protocol family Feb 12 19:46:32.491219 systemd-networkd[1448]: lxc_health: Link UP Feb 12 19:46:32.508872 systemd-networkd[1448]: cilium_host: Gained IPv6LL Feb 12 19:46:32.523858 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Feb 12 19:46:32.523132 systemd-networkd[1448]: lxc_health: Gained carrier Feb 12 19:46:32.944686 systemd-networkd[1448]: lxc1cc6e7b65418: Link UP Feb 12 19:46:32.952717 kernel: eth0: renamed from tmpa035c Feb 12 19:46:32.964714 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc1cc6e7b65418: link becomes ready Feb 12 19:46:32.964869 systemd-networkd[1448]: lxc1cc6e7b65418: Gained carrier Feb 12 19:46:32.977036 systemd-networkd[1448]: lxc1e6c7572d815: Link UP Feb 12 19:46:32.987722 kernel: eth0: renamed from tmp58502 Feb 12 19:46:32.997875 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc1e6c7572d815: link becomes ready Feb 12 19:46:32.997681 systemd-networkd[1448]: lxc1e6c7572d815: Gained carrier Feb 12 19:46:33.020937 systemd-networkd[1448]: cilium_vxlan: Gained IPv6LL Feb 12 19:46:33.118546 kubelet[2397]: I0212 19:46:33.118494 2397 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-sw5mk" podStartSLOduration=13.703232121 podCreationTimestamp="2024-02-12 19:46:11 +0000 UTC" firstStartedPulling="2024-02-12 19:46:13.181255046 +0000 UTC m=+16.329377955" lastFinishedPulling="2024-02-12 19:46:21.596473364 +0000 UTC m=+24.744596273" observedRunningTime="2024-02-12 19:46:30.325596045 +0000 UTC m=+33.473718854" watchObservedRunningTime="2024-02-12 19:46:33.118450439 +0000 UTC m=+36.266573248" Feb 12 19:46:34.108841 systemd-networkd[1448]: lxc1e6c7572d815: Gained IPv6LL Feb 12 19:46:34.236833 systemd-networkd[1448]: lxc1cc6e7b65418: Gained IPv6LL Feb 12 19:46:34.492824 systemd-networkd[1448]: lxc_health: Gained IPv6LL Feb 12 19:46:36.709788 env[1314]: time="2024-02-12T19:46:36.709721510Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 19:46:36.710378 env[1314]: time="2024-02-12T19:46:36.710308714Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 19:46:36.710378 env[1314]: time="2024-02-12T19:46:36.710350214Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 19:46:36.711086 env[1314]: time="2024-02-12T19:46:36.710770617Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a035c82884c49318d92c6023076e49d38b5dab12fce9d331f6324d76d60b99a2 pid=3559 runtime=io.containerd.runc.v2 Feb 12 19:46:36.714420 env[1314]: time="2024-02-12T19:46:36.714297939Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 19:46:36.714420 env[1314]: time="2024-02-12T19:46:36.714386539Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 19:46:36.714420 env[1314]: time="2024-02-12T19:46:36.714415239Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 19:46:36.714615 env[1314]: time="2024-02-12T19:46:36.714573340Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/58502255a382fb5731c239ac332122eb9330bbc27b0e6a83cb475727e9da4922 pid=3575 runtime=io.containerd.runc.v2 Feb 12 19:46:36.766549 systemd[1]: Started cri-containerd-a035c82884c49318d92c6023076e49d38b5dab12fce9d331f6324d76d60b99a2.scope. Feb 12 19:46:36.787626 systemd[1]: Started cri-containerd-58502255a382fb5731c239ac332122eb9330bbc27b0e6a83cb475727e9da4922.scope. Feb 12 19:46:36.876456 env[1314]: time="2024-02-12T19:46:36.876393955Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-9bvpm,Uid:35335533-b6f6-46f1-ba4d-1dce47ead46e,Namespace:kube-system,Attempt:0,} returns sandbox id \"58502255a382fb5731c239ac332122eb9330bbc27b0e6a83cb475727e9da4922\"" Feb 12 19:46:36.883026 env[1314]: time="2024-02-12T19:46:36.882989296Z" level=info msg="CreateContainer within sandbox \"58502255a382fb5731c239ac332122eb9330bbc27b0e6a83cb475727e9da4922\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 12 19:46:36.892157 env[1314]: time="2024-02-12T19:46:36.892104153Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-pnzps,Uid:dc1daa51-266d-48c5-8105-fa9a4ce0b741,Namespace:kube-system,Attempt:0,} returns sandbox id \"a035c82884c49318d92c6023076e49d38b5dab12fce9d331f6324d76d60b99a2\"" Feb 12 19:46:36.897912 env[1314]: time="2024-02-12T19:46:36.897879289Z" level=info msg="CreateContainer within sandbox \"a035c82884c49318d92c6023076e49d38b5dab12fce9d331f6324d76d60b99a2\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 12 19:46:36.938920 env[1314]: time="2024-02-12T19:46:36.938865746Z" level=info msg="CreateContainer within sandbox \"58502255a382fb5731c239ac332122eb9330bbc27b0e6a83cb475727e9da4922\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8a36be16efa79a1dbbc3070b3547a956cc669ce7be4780553a8e09741e5bf6cf\"" Feb 12 19:46:36.939750 env[1314]: time="2024-02-12T19:46:36.939709352Z" level=info msg="StartContainer for \"8a36be16efa79a1dbbc3070b3547a956cc669ce7be4780553a8e09741e5bf6cf\"" Feb 12 19:46:36.967712 env[1314]: time="2024-02-12T19:46:36.966773021Z" level=info msg="CreateContainer within sandbox \"a035c82884c49318d92c6023076e49d38b5dab12fce9d331f6324d76d60b99a2\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f0d4ab99964f3ddb74d0bd4479d5e9a747d3938ff59d50643ec754b32308f0b3\"" Feb 12 19:46:36.967712 env[1314]: time="2024-02-12T19:46:36.967516626Z" level=info 
msg="StartContainer for \"f0d4ab99964f3ddb74d0bd4479d5e9a747d3938ff59d50643ec754b32308f0b3\"" Feb 12 19:46:36.978230 systemd[1]: Started cri-containerd-8a36be16efa79a1dbbc3070b3547a956cc669ce7be4780553a8e09741e5bf6cf.scope. Feb 12 19:46:37.009020 systemd[1]: Started cri-containerd-f0d4ab99964f3ddb74d0bd4479d5e9a747d3938ff59d50643ec754b32308f0b3.scope. Feb 12 19:46:37.053980 env[1314]: time="2024-02-12T19:46:37.053924363Z" level=info msg="StartContainer for \"8a36be16efa79a1dbbc3070b3547a956cc669ce7be4780553a8e09741e5bf6cf\" returns successfully" Feb 12 19:46:37.073759 env[1314]: time="2024-02-12T19:46:37.073680985Z" level=info msg="StartContainer for \"f0d4ab99964f3ddb74d0bd4479d5e9a747d3938ff59d50643ec754b32308f0b3\" returns successfully" Feb 12 19:46:37.348383 kubelet[2397]: I0212 19:46:37.348345 2397 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-pnzps" podStartSLOduration=26.34830498 podCreationTimestamp="2024-02-12 19:46:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 19:46:37.346999372 +0000 UTC m=+40.495122181" watchObservedRunningTime="2024-02-12 19:46:37.34830498 +0000 UTC m=+40.496427789" Feb 12 19:46:37.348825 kubelet[2397]: I0212 19:46:37.348440 2397 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-9bvpm" podStartSLOduration=26.348419681 podCreationTimestamp="2024-02-12 19:46:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 19:46:37.337294212 +0000 UTC m=+40.485417021" watchObservedRunningTime="2024-02-12 19:46:37.348419681 +0000 UTC m=+40.496542590" Feb 12 19:46:37.722276 systemd[1]: run-containerd-runc-k8s.io-58502255a382fb5731c239ac332122eb9330bbc27b0e6a83cb475727e9da4922-runc.P8Uilc.mount: Deactivated successfully. Feb 12 19:46:41.432718 kubelet[2397]: I0212 19:46:41.432480 2397 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 12 19:48:24.371054 systemd[1]: Started sshd@5-10.200.8.24:22-10.200.12.6:36000.service. Feb 12 19:48:25.009926 sshd[3727]: Accepted publickey for core from 10.200.12.6 port 36000 ssh2: RSA SHA256:s7YymQosdnJ6BBn11oTaBnKtgbkZHlGvzOt+RffOmrs Feb 12 19:48:25.011444 sshd[3727]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:48:25.016226 systemd-logind[1297]: New session 8 of user core. Feb 12 19:48:25.017262 systemd[1]: Started session-8.scope. Feb 12 19:48:25.592893 sshd[3727]: pam_unix(sshd:session): session closed for user core Feb 12 19:48:25.596460 systemd[1]: sshd@5-10.200.8.24:22-10.200.12.6:36000.service: Deactivated successfully. Feb 12 19:48:25.597433 systemd[1]: session-8.scope: Deactivated successfully. Feb 12 19:48:25.597942 systemd-logind[1297]: Session 8 logged out. Waiting for processes to exit. Feb 12 19:48:25.598851 systemd-logind[1297]: Removed session 8. Feb 12 19:48:30.699372 systemd[1]: Started sshd@6-10.200.8.24:22-10.200.12.6:34256.service. Feb 12 19:48:31.313719 sshd[3740]: Accepted publickey for core from 10.200.12.6 port 34256 ssh2: RSA SHA256:s7YymQosdnJ6BBn11oTaBnKtgbkZHlGvzOt+RffOmrs Feb 12 19:48:31.315404 sshd[3740]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:48:31.321353 systemd[1]: Started session-9.scope. Feb 12 19:48:31.322214 systemd-logind[1297]: New session 9 of user core. 
Feb 12 19:48:31.802973 sshd[3740]: pam_unix(sshd:session): session closed for user core Feb 12 19:48:31.807726 systemd[1]: sshd@6-10.200.8.24:22-10.200.12.6:34256.service: Deactivated successfully. Feb 12 19:48:31.808989 systemd[1]: session-9.scope: Deactivated successfully. Feb 12 19:48:31.809904 systemd-logind[1297]: Session 9 logged out. Waiting for processes to exit. Feb 12 19:48:31.810784 systemd-logind[1297]: Removed session 9. Feb 12 19:48:36.907155 systemd[1]: Started sshd@7-10.200.8.24:22-10.200.12.6:34268.service. Feb 12 19:48:37.516305 sshd[3753]: Accepted publickey for core from 10.200.12.6 port 34268 ssh2: RSA SHA256:s7YymQosdnJ6BBn11oTaBnKtgbkZHlGvzOt+RffOmrs Feb 12 19:48:37.518067 sshd[3753]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:48:37.522995 systemd-logind[1297]: New session 10 of user core. Feb 12 19:48:37.523999 systemd[1]: Started session-10.scope. Feb 12 19:48:38.008419 sshd[3753]: pam_unix(sshd:session): session closed for user core Feb 12 19:48:38.011760 systemd[1]: sshd@7-10.200.8.24:22-10.200.12.6:34268.service: Deactivated successfully. Feb 12 19:48:38.012921 systemd[1]: session-10.scope: Deactivated successfully. Feb 12 19:48:38.013806 systemd-logind[1297]: Session 10 logged out. Waiting for processes to exit. Feb 12 19:48:38.014771 systemd-logind[1297]: Removed session 10. Feb 12 19:48:43.113271 systemd[1]: Started sshd@8-10.200.8.24:22-10.200.12.6:59720.service. Feb 12 19:48:43.733759 sshd[3769]: Accepted publickey for core from 10.200.12.6 port 59720 ssh2: RSA SHA256:s7YymQosdnJ6BBn11oTaBnKtgbkZHlGvzOt+RffOmrs Feb 12 19:48:43.735236 sshd[3769]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:48:43.740389 systemd-logind[1297]: New session 11 of user core. Feb 12 19:48:43.741138 systemd[1]: Started session-11.scope. Feb 12 19:48:44.223060 sshd[3769]: pam_unix(sshd:session): session closed for user core Feb 12 19:48:44.226336 systemd[1]: sshd@8-10.200.8.24:22-10.200.12.6:59720.service: Deactivated successfully. Feb 12 19:48:44.227531 systemd[1]: session-11.scope: Deactivated successfully. Feb 12 19:48:44.228533 systemd-logind[1297]: Session 11 logged out. Waiting for processes to exit. Feb 12 19:48:44.229543 systemd-logind[1297]: Removed session 11. Feb 12 19:48:49.327162 systemd[1]: Started sshd@9-10.200.8.24:22-10.200.12.6:57190.service. Feb 12 19:48:49.936125 sshd[3782]: Accepted publickey for core from 10.200.12.6 port 57190 ssh2: RSA SHA256:s7YymQosdnJ6BBn11oTaBnKtgbkZHlGvzOt+RffOmrs Feb 12 19:48:49.938106 sshd[3782]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:48:49.943339 systemd[1]: Started session-12.scope. Feb 12 19:48:49.943796 systemd-logind[1297]: New session 12 of user core. Feb 12 19:48:50.423577 sshd[3782]: pam_unix(sshd:session): session closed for user core Feb 12 19:48:50.427114 systemd[1]: sshd@9-10.200.8.24:22-10.200.12.6:57190.service: Deactivated successfully. Feb 12 19:48:50.428381 systemd[1]: session-12.scope: Deactivated successfully. Feb 12 19:48:50.429133 systemd-logind[1297]: Session 12 logged out. Waiting for processes to exit. Feb 12 19:48:50.429987 systemd-logind[1297]: Removed session 12. Feb 12 19:48:55.530633 systemd[1]: Started sshd@10-10.200.8.24:22-10.200.12.6:57194.service. 
Feb 12 19:48:56.181352 sshd[3796]: Accepted publickey for core from 10.200.12.6 port 57194 ssh2: RSA SHA256:s7YymQosdnJ6BBn11oTaBnKtgbkZHlGvzOt+RffOmrs Feb 12 19:48:56.182776 sshd[3796]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:48:56.187517 systemd-logind[1297]: New session 13 of user core. Feb 12 19:48:56.187613 systemd[1]: Started session-13.scope. Feb 12 19:48:56.678014 sshd[3796]: pam_unix(sshd:session): session closed for user core Feb 12 19:48:56.681257 systemd[1]: sshd@10-10.200.8.24:22-10.200.12.6:57194.service: Deactivated successfully. Feb 12 19:48:56.682154 systemd[1]: session-13.scope: Deactivated successfully. Feb 12 19:48:56.682930 systemd-logind[1297]: Session 13 logged out. Waiting for processes to exit. Feb 12 19:48:56.683718 systemd-logind[1297]: Removed session 13. Feb 12 19:49:01.781532 systemd[1]: Started sshd@11-10.200.8.24:22-10.200.12.6:41038.service. Feb 12 19:49:02.393170 sshd[3810]: Accepted publickey for core from 10.200.12.6 port 41038 ssh2: RSA SHA256:s7YymQosdnJ6BBn11oTaBnKtgbkZHlGvzOt+RffOmrs Feb 12 19:49:02.394820 sshd[3810]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:49:02.400969 systemd-logind[1297]: New session 14 of user core. Feb 12 19:49:02.401548 systemd[1]: Started session-14.scope. Feb 12 19:49:02.900059 sshd[3810]: pam_unix(sshd:session): session closed for user core Feb 12 19:49:02.903156 systemd[1]: sshd@11-10.200.8.24:22-10.200.12.6:41038.service: Deactivated successfully. Feb 12 19:49:02.904136 systemd[1]: session-14.scope: Deactivated successfully. Feb 12 19:49:02.904953 systemd-logind[1297]: Session 14 logged out. Waiting for processes to exit. Feb 12 19:49:02.905773 systemd-logind[1297]: Removed session 14. Feb 12 19:49:03.006645 systemd[1]: Started sshd@12-10.200.8.24:22-10.200.12.6:41046.service. Feb 12 19:49:03.624221 sshd[3823]: Accepted publickey for core from 10.200.12.6 port 41046 ssh2: RSA SHA256:s7YymQosdnJ6BBn11oTaBnKtgbkZHlGvzOt+RffOmrs Feb 12 19:49:03.625951 sshd[3823]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:49:03.631129 systemd[1]: Started session-15.scope. Feb 12 19:49:03.631590 systemd-logind[1297]: New session 15 of user core. Feb 12 19:49:04.767524 sshd[3823]: pam_unix(sshd:session): session closed for user core Feb 12 19:49:04.771267 systemd[1]: sshd@12-10.200.8.24:22-10.200.12.6:41046.service: Deactivated successfully. Feb 12 19:49:04.772462 systemd[1]: session-15.scope: Deactivated successfully. Feb 12 19:49:04.773354 systemd-logind[1297]: Session 15 logged out. Waiting for processes to exit. Feb 12 19:49:04.774302 systemd-logind[1297]: Removed session 15. Feb 12 19:49:04.872004 systemd[1]: Started sshd@13-10.200.8.24:22-10.200.12.6:41058.service. Feb 12 19:49:05.487281 sshd[3833]: Accepted publickey for core from 10.200.12.6 port 41058 ssh2: RSA SHA256:s7YymQosdnJ6BBn11oTaBnKtgbkZHlGvzOt+RffOmrs Feb 12 19:49:05.488680 sshd[3833]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:49:05.493444 systemd-logind[1297]: New session 16 of user core. Feb 12 19:49:05.493984 systemd[1]: Started session-16.scope. Feb 12 19:49:05.981144 sshd[3833]: pam_unix(sshd:session): session closed for user core Feb 12 19:49:05.984452 systemd[1]: sshd@13-10.200.8.24:22-10.200.12.6:41058.service: Deactivated successfully. Feb 12 19:49:05.985381 systemd[1]: session-16.scope: Deactivated successfully. Feb 12 19:49:05.986118 systemd-logind[1297]: Session 16 logged out. 
Waiting for processes to exit. Feb 12 19:49:05.986959 systemd-logind[1297]: Removed session 16. Feb 12 19:49:11.087645 systemd[1]: Started sshd@14-10.200.8.24:22-10.200.12.6:52684.service. Feb 12 19:49:11.707676 sshd[3845]: Accepted publickey for core from 10.200.12.6 port 52684 ssh2: RSA SHA256:s7YymQosdnJ6BBn11oTaBnKtgbkZHlGvzOt+RffOmrs Feb 12 19:49:11.709180 sshd[3845]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:49:11.714225 systemd[1]: Started session-17.scope. Feb 12 19:49:11.714667 systemd-logind[1297]: New session 17 of user core. Feb 12 19:49:12.205449 sshd[3845]: pam_unix(sshd:session): session closed for user core Feb 12 19:49:12.208984 systemd[1]: sshd@14-10.200.8.24:22-10.200.12.6:52684.service: Deactivated successfully. Feb 12 19:49:12.210173 systemd[1]: session-17.scope: Deactivated successfully. Feb 12 19:49:12.210964 systemd-logind[1297]: Session 17 logged out. Waiting for processes to exit. Feb 12 19:49:12.211823 systemd-logind[1297]: Removed session 17. Feb 12 19:49:12.311209 systemd[1]: Started sshd@15-10.200.8.24:22-10.200.12.6:52696.service. Feb 12 19:49:12.931249 sshd[3859]: Accepted publickey for core from 10.200.12.6 port 52696 ssh2: RSA SHA256:s7YymQosdnJ6BBn11oTaBnKtgbkZHlGvzOt+RffOmrs Feb 12 19:49:12.932986 sshd[3859]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:49:12.938339 systemd[1]: Started session-18.scope. Feb 12 19:49:12.939278 systemd-logind[1297]: New session 18 of user core. Feb 12 19:49:13.494614 sshd[3859]: pam_unix(sshd:session): session closed for user core Feb 12 19:49:13.497718 systemd[1]: sshd@15-10.200.8.24:22-10.200.12.6:52696.service: Deactivated successfully. Feb 12 19:49:13.498712 systemd[1]: session-18.scope: Deactivated successfully. Feb 12 19:49:13.499393 systemd-logind[1297]: Session 18 logged out. Waiting for processes to exit. Feb 12 19:49:13.500279 systemd-logind[1297]: Removed session 18. Feb 12 19:49:13.598147 systemd[1]: Started sshd@16-10.200.8.24:22-10.200.12.6:52702.service. Feb 12 19:49:14.231505 sshd[3869]: Accepted publickey for core from 10.200.12.6 port 52702 ssh2: RSA SHA256:s7YymQosdnJ6BBn11oTaBnKtgbkZHlGvzOt+RffOmrs Feb 12 19:49:14.233075 sshd[3869]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:49:14.237350 systemd-logind[1297]: New session 19 of user core. Feb 12 19:49:14.238955 systemd[1]: Started session-19.scope. Feb 12 19:49:15.699672 sshd[3869]: pam_unix(sshd:session): session closed for user core Feb 12 19:49:15.703244 systemd[1]: sshd@16-10.200.8.24:22-10.200.12.6:52702.service: Deactivated successfully. Feb 12 19:49:15.704230 systemd[1]: session-19.scope: Deactivated successfully. Feb 12 19:49:15.704930 systemd-logind[1297]: Session 19 logged out. Waiting for processes to exit. Feb 12 19:49:15.705966 systemd-logind[1297]: Removed session 19. Feb 12 19:49:15.805561 systemd[1]: Started sshd@17-10.200.8.24:22-10.200.12.6:52718.service. Feb 12 19:49:16.422454 sshd[3886]: Accepted publickey for core from 10.200.12.6 port 52718 ssh2: RSA SHA256:s7YymQosdnJ6BBn11oTaBnKtgbkZHlGvzOt+RffOmrs Feb 12 19:49:16.424137 sshd[3886]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:49:16.429297 systemd-logind[1297]: New session 20 of user core. Feb 12 19:49:16.430048 systemd[1]: Started session-20.scope. 
Feb 12 19:49:17.096979 sshd[3886]: pam_unix(sshd:session): session closed for user core Feb 12 19:49:17.100434 systemd[1]: sshd@17-10.200.8.24:22-10.200.12.6:52718.service: Deactivated successfully. Feb 12 19:49:17.101630 systemd[1]: session-20.scope: Deactivated successfully. Feb 12 19:49:17.102185 systemd-logind[1297]: Session 20 logged out. Waiting for processes to exit. Feb 12 19:49:17.103110 systemd-logind[1297]: Removed session 20. Feb 12 19:49:17.203197 systemd[1]: Started sshd@18-10.200.8.24:22-10.200.12.6:40464.service. Feb 12 19:49:17.817924 sshd[3896]: Accepted publickey for core from 10.200.12.6 port 40464 ssh2: RSA SHA256:s7YymQosdnJ6BBn11oTaBnKtgbkZHlGvzOt+RffOmrs Feb 12 19:49:17.819713 sshd[3896]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:49:17.825485 systemd[1]: Started session-21.scope. Feb 12 19:49:17.826077 systemd-logind[1297]: New session 21 of user core. Feb 12 19:49:18.306055 sshd[3896]: pam_unix(sshd:session): session closed for user core Feb 12 19:49:18.310021 systemd[1]: sshd@18-10.200.8.24:22-10.200.12.6:40464.service: Deactivated successfully. Feb 12 19:49:18.311177 systemd[1]: session-21.scope: Deactivated successfully. Feb 12 19:49:18.312051 systemd-logind[1297]: Session 21 logged out. Waiting for processes to exit. Feb 12 19:49:18.313103 systemd-logind[1297]: Removed session 21. Feb 12 19:49:23.413317 systemd[1]: Started sshd@19-10.200.8.24:22-10.200.12.6:40466.service. Feb 12 19:49:24.029084 sshd[3913]: Accepted publickey for core from 10.200.12.6 port 40466 ssh2: RSA SHA256:s7YymQosdnJ6BBn11oTaBnKtgbkZHlGvzOt+RffOmrs Feb 12 19:49:24.030573 sshd[3913]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:49:24.036956 systemd[1]: Started session-22.scope. Feb 12 19:49:24.040449 systemd-logind[1297]: New session 22 of user core. Feb 12 19:49:24.518736 sshd[3913]: pam_unix(sshd:session): session closed for user core Feb 12 19:49:24.522054 systemd[1]: sshd@19-10.200.8.24:22-10.200.12.6:40466.service: Deactivated successfully. Feb 12 19:49:24.523261 systemd[1]: session-22.scope: Deactivated successfully. Feb 12 19:49:24.524173 systemd-logind[1297]: Session 22 logged out. Waiting for processes to exit. Feb 12 19:49:24.524989 systemd-logind[1297]: Removed session 22. Feb 12 19:49:29.624319 systemd[1]: Started sshd@20-10.200.8.24:22-10.200.12.6:36574.service. Feb 12 19:49:30.241613 sshd[3928]: Accepted publickey for core from 10.200.12.6 port 36574 ssh2: RSA SHA256:s7YymQosdnJ6BBn11oTaBnKtgbkZHlGvzOt+RffOmrs Feb 12 19:49:30.243347 sshd[3928]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:49:30.248033 systemd-logind[1297]: New session 23 of user core. Feb 12 19:49:30.250121 systemd[1]: Started session-23.scope. Feb 12 19:49:30.739385 sshd[3928]: pam_unix(sshd:session): session closed for user core Feb 12 19:49:30.744347 systemd-logind[1297]: Session 23 logged out. Waiting for processes to exit. Feb 12 19:49:30.746390 systemd[1]: sshd@20-10.200.8.24:22-10.200.12.6:36574.service: Deactivated successfully. Feb 12 19:49:30.747401 systemd[1]: session-23.scope: Deactivated successfully. Feb 12 19:49:30.749015 systemd-logind[1297]: Removed session 23. Feb 12 19:49:35.845143 systemd[1]: Started sshd@21-10.200.8.24:22-10.200.12.6:36590.service. 
Feb 12 19:49:36.471066 sshd[3945]: Accepted publickey for core from 10.200.12.6 port 36590 ssh2: RSA SHA256:s7YymQosdnJ6BBn11oTaBnKtgbkZHlGvzOt+RffOmrs Feb 12 19:49:36.472779 sshd[3945]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:49:36.478251 systemd[1]: Started session-24.scope. Feb 12 19:49:36.478729 systemd-logind[1297]: New session 24 of user core. Feb 12 19:49:36.970818 sshd[3945]: pam_unix(sshd:session): session closed for user core Feb 12 19:49:36.974246 systemd[1]: sshd@21-10.200.8.24:22-10.200.12.6:36590.service: Deactivated successfully. Feb 12 19:49:36.975429 systemd[1]: session-24.scope: Deactivated successfully. Feb 12 19:49:36.976373 systemd-logind[1297]: Session 24 logged out. Waiting for processes to exit. Feb 12 19:49:36.977415 systemd-logind[1297]: Removed session 24. Feb 12 19:49:37.075799 systemd[1]: Started sshd@22-10.200.8.24:22-10.200.12.6:59220.service. Feb 12 19:49:37.691987 sshd[3957]: Accepted publickey for core from 10.200.12.6 port 59220 ssh2: RSA SHA256:s7YymQosdnJ6BBn11oTaBnKtgbkZHlGvzOt+RffOmrs Feb 12 19:49:37.693564 sshd[3957]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:49:37.698577 systemd[1]: Started session-25.scope. Feb 12 19:49:37.699036 systemd-logind[1297]: New session 25 of user core. Feb 12 19:49:39.322483 env[1314]: time="2024-02-12T19:49:39.322438362Z" level=info msg="StopContainer for \"7f9b9fa2de6ff50b8da7eabaed57e17b4f0eed954ba9ae7460def29e1c5cb1a4\" with timeout 30 (s)" Feb 12 19:49:39.323524 env[1314]: time="2024-02-12T19:49:39.323488866Z" level=info msg="Stop container \"7f9b9fa2de6ff50b8da7eabaed57e17b4f0eed954ba9ae7460def29e1c5cb1a4\" with signal terminated" Feb 12 19:49:39.346971 systemd[1]: cri-containerd-7f9b9fa2de6ff50b8da7eabaed57e17b4f0eed954ba9ae7460def29e1c5cb1a4.scope: Deactivated successfully. Feb 12 19:49:39.366141 env[1314]: time="2024-02-12T19:49:39.366066843Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 12 19:49:39.377859 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7f9b9fa2de6ff50b8da7eabaed57e17b4f0eed954ba9ae7460def29e1c5cb1a4-rootfs.mount: Deactivated successfully. Feb 12 19:49:39.379306 env[1314]: time="2024-02-12T19:49:39.379119098Z" level=info msg="StopContainer for \"3443439b0e1b514a66a4992441408114f4f32df41a0101589c3d9ab7018d5a9f\" with timeout 2 (s)" Feb 12 19:49:39.379855 env[1314]: time="2024-02-12T19:49:39.379825700Z" level=info msg="Stop container \"3443439b0e1b514a66a4992441408114f4f32df41a0101589c3d9ab7018d5a9f\" with signal terminated" Feb 12 19:49:39.389849 systemd-networkd[1448]: lxc_health: Link DOWN Feb 12 19:49:39.389857 systemd-networkd[1448]: lxc_health: Lost carrier Feb 12 19:49:39.410027 systemd[1]: cri-containerd-3443439b0e1b514a66a4992441408114f4f32df41a0101589c3d9ab7018d5a9f.scope: Deactivated successfully. Feb 12 19:49:39.410304 systemd[1]: cri-containerd-3443439b0e1b514a66a4992441408114f4f32df41a0101589c3d9ab7018d5a9f.scope: Consumed 7.219s CPU time. 
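The teardown beginning at 19:49:39 follows the CRI stop contract: "StopContainer ... with timeout 30 (s)" then "Stop container ... with signal terminated" means the runtime delivers SIGTERM and escalates to SIGKILL only if the grace period expires. A minimal sketch of that call against the CRI v1 API, using the operator container id from the log (socket path assumed, not kubelet's actual code):

```go
package main

import (
	"context"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	pb "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	rt := pb.NewRuntimeServiceClient(conn)

	// Timeout is the grace period in seconds: the runtime sends SIGTERM
	// ("signal terminated" above) and escalates to SIGKILL when it expires.
	_, err = rt.StopContainer(context.Background(), &pb.StopContainerRequest{
		ContainerId: "7f9b9fa2de6ff50b8da7eabaed57e17b4f0eed954ba9ae7460def29e1c5cb1a4",
		Timeout:     30,
	})
	if err != nil {
		log.Fatal(err)
	}
}
```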
Feb 12 19:49:39.413322 env[1314]: time="2024-02-12T19:49:39.413279739Z" level=info msg="shim disconnected" id=7f9b9fa2de6ff50b8da7eabaed57e17b4f0eed954ba9ae7460def29e1c5cb1a4
Feb 12 19:49:39.413611 env[1314]: time="2024-02-12T19:49:39.413591841Z" level=warning msg="cleaning up after shim disconnected" id=7f9b9fa2de6ff50b8da7eabaed57e17b4f0eed954ba9ae7460def29e1c5cb1a4 namespace=k8s.io
Feb 12 19:49:39.413740 env[1314]: time="2024-02-12T19:49:39.413723441Z" level=info msg="cleaning up dead shim"
Feb 12 19:49:39.427767 env[1314]: time="2024-02-12T19:49:39.427742199Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:49:39Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4016 runtime=io.containerd.runc.v2\n"
Feb 12 19:49:39.431661 env[1314]: time="2024-02-12T19:49:39.431633116Z" level=info msg="StopContainer for \"7f9b9fa2de6ff50b8da7eabaed57e17b4f0eed954ba9ae7460def29e1c5cb1a4\" returns successfully"
Feb 12 19:49:39.432604 env[1314]: time="2024-02-12T19:49:39.432568320Z" level=info msg="StopPodSandbox for \"8566e887dc2dc01f39eb15b295494c886bd1eb3ad3de823cb40ca48ac6524746\""
Feb 12 19:49:39.432814 env[1314]: time="2024-02-12T19:49:39.432780420Z" level=info msg="Container to stop \"7f9b9fa2de6ff50b8da7eabaed57e17b4f0eed954ba9ae7460def29e1c5cb1a4\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 12 19:49:39.435740 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8566e887dc2dc01f39eb15b295494c886bd1eb3ad3de823cb40ca48ac6524746-shm.mount: Deactivated successfully.
Feb 12 19:49:39.445308 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3443439b0e1b514a66a4992441408114f4f32df41a0101589c3d9ab7018d5a9f-rootfs.mount: Deactivated successfully.
Feb 12 19:49:39.445893 systemd[1]: cri-containerd-8566e887dc2dc01f39eb15b295494c886bd1eb3ad3de823cb40ca48ac6524746.scope: Deactivated successfully.
Feb 12 19:49:39.463869 env[1314]: time="2024-02-12T19:49:39.463823849Z" level=info msg="shim disconnected" id=3443439b0e1b514a66a4992441408114f4f32df41a0101589c3d9ab7018d5a9f
Feb 12 19:49:39.464156 env[1314]: time="2024-02-12T19:49:39.464138051Z" level=warning msg="cleaning up after shim disconnected" id=3443439b0e1b514a66a4992441408114f4f32df41a0101589c3d9ab7018d5a9f namespace=k8s.io
Feb 12 19:49:39.464255 env[1314]: time="2024-02-12T19:49:39.464242051Z" level=info msg="cleaning up dead shim"
Feb 12 19:49:39.472131 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8566e887dc2dc01f39eb15b295494c886bd1eb3ad3de823cb40ca48ac6524746-rootfs.mount: Deactivated successfully.
Feb 12 19:49:39.481670 env[1314]: time="2024-02-12T19:49:39.481642723Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:49:39Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4060 runtime=io.containerd.runc.v2\n"
Feb 12 19:49:39.483631 env[1314]: time="2024-02-12T19:49:39.483601532Z" level=info msg="shim disconnected" id=8566e887dc2dc01f39eb15b295494c886bd1eb3ad3de823cb40ca48ac6524746
Feb 12 19:49:39.484551 env[1314]: time="2024-02-12T19:49:39.484416035Z" level=warning msg="cleaning up after shim disconnected" id=8566e887dc2dc01f39eb15b295494c886bd1eb3ad3de823cb40ca48ac6524746 namespace=k8s.io
Feb 12 19:49:39.484551 env[1314]: time="2024-02-12T19:49:39.484435535Z" level=info msg="cleaning up dead shim"
Feb 12 19:49:39.486136 env[1314]: time="2024-02-12T19:49:39.486100542Z" level=info msg="StopContainer for \"3443439b0e1b514a66a4992441408114f4f32df41a0101589c3d9ab7018d5a9f\" returns successfully"
Feb 12 19:49:39.486716 env[1314]: time="2024-02-12T19:49:39.486667344Z" level=info msg="StopPodSandbox for \"13736367e903a80980ff9cf37a2e6920f102eb9b2b8e99ab4bb0b5f47c5d601b\""
Feb 12 19:49:39.486807 env[1314]: time="2024-02-12T19:49:39.486745845Z" level=info msg="Container to stop \"445a0bc0e194be60e949266ae5230a82dd97540557c2f6c71e93d370df26b4ee\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 12 19:49:39.486807 env[1314]: time="2024-02-12T19:49:39.486767845Z" level=info msg="Container to stop \"46fed81b611dcfce38638842d07c9ad4ba44ca1cad8f08203e424cebed81eb48\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 12 19:49:39.486807 env[1314]: time="2024-02-12T19:49:39.486784245Z" level=info msg="Container to stop \"335020dc675fbb0fb90fefd40a889fda6066d47dcb1d780ec378adf7be58c679\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 12 19:49:39.486807 env[1314]: time="2024-02-12T19:49:39.486799245Z" level=info msg="Container to stop \"3443439b0e1b514a66a4992441408114f4f32df41a0101589c3d9ab7018d5a9f\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 12 19:49:39.487007 env[1314]: time="2024-02-12T19:49:39.486814345Z" level=info msg="Container to stop \"75442138ca56287bfa78ab6c08ad0c648aeac4433d5c17a83032ec086556a7f6\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 12 19:49:39.492946 systemd[1]: cri-containerd-13736367e903a80980ff9cf37a2e6920f102eb9b2b8e99ab4bb0b5f47c5d601b.scope: Deactivated successfully.
Feb 12 19:49:39.499785 env[1314]: time="2024-02-12T19:49:39.499755299Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:49:39Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4074 runtime=io.containerd.runc.v2\n"
Feb 12 19:49:39.500473 env[1314]: time="2024-02-12T19:49:39.500441001Z" level=info msg="TearDown network for sandbox \"8566e887dc2dc01f39eb15b295494c886bd1eb3ad3de823cb40ca48ac6524746\" successfully"
Feb 12 19:49:39.500637 env[1314]: time="2024-02-12T19:49:39.500610602Z" level=info msg="StopPodSandbox for \"8566e887dc2dc01f39eb15b295494c886bd1eb3ad3de823cb40ca48ac6524746\" returns successfully"
Feb 12 19:49:39.534317 env[1314]: time="2024-02-12T19:49:39.534252142Z" level=info msg="shim disconnected" id=13736367e903a80980ff9cf37a2e6920f102eb9b2b8e99ab4bb0b5f47c5d601b
Feb 12 19:49:39.534533 env[1314]: time="2024-02-12T19:49:39.534306442Z" level=warning msg="cleaning up after shim disconnected" id=13736367e903a80980ff9cf37a2e6920f102eb9b2b8e99ab4bb0b5f47c5d601b namespace=k8s.io
Feb 12 19:49:39.534533 env[1314]: time="2024-02-12T19:49:39.534330342Z" level=info msg="cleaning up dead shim"
Feb 12 19:49:39.542447 env[1314]: time="2024-02-12T19:49:39.542410976Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:49:39Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4106 runtime=io.containerd.runc.v2\n"
Feb 12 19:49:39.542756 env[1314]: time="2024-02-12T19:49:39.542727177Z" level=info msg="TearDown network for sandbox \"13736367e903a80980ff9cf37a2e6920f102eb9b2b8e99ab4bb0b5f47c5d601b\" successfully"
Feb 12 19:49:39.542842 env[1314]: time="2024-02-12T19:49:39.542756077Z" level=info msg="StopPodSandbox for \"13736367e903a80980ff9cf37a2e6920f102eb9b2b8e99ab4bb0b5f47c5d601b\" returns successfully"
Feb 12 19:49:39.696938 kubelet[2397]: I0212 19:49:39.692877 2397 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/243f6fa3-2f04-4547-82e7-36876982dd48-xtables-lock\") pod \"243f6fa3-2f04-4547-82e7-36876982dd48\" (UID: \"243f6fa3-2f04-4547-82e7-36876982dd48\") "
Feb 12 19:49:39.696938 kubelet[2397]: I0212 19:49:39.692929 2397 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/243f6fa3-2f04-4547-82e7-36876982dd48-host-proc-sys-net\") pod \"243f6fa3-2f04-4547-82e7-36876982dd48\" (UID: \"243f6fa3-2f04-4547-82e7-36876982dd48\") "
Feb 12 19:49:39.696938 kubelet[2397]: I0212 19:49:39.692975 2397 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/243f6fa3-2f04-4547-82e7-36876982dd48-clustermesh-secrets\") pod \"243f6fa3-2f04-4547-82e7-36876982dd48\" (UID: \"243f6fa3-2f04-4547-82e7-36876982dd48\") "
Feb 12 19:49:39.696938 kubelet[2397]: I0212 19:49:39.693005 2397 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/243f6fa3-2f04-4547-82e7-36876982dd48-cilium-run\") pod \"243f6fa3-2f04-4547-82e7-36876982dd48\" (UID: \"243f6fa3-2f04-4547-82e7-36876982dd48\") "
Feb 12 19:49:39.696938 kubelet[2397]: I0212 19:49:39.693033 2397 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/243f6fa3-2f04-4547-82e7-36876982dd48-hostproc\") pod \"243f6fa3-2f04-4547-82e7-36876982dd48\" (UID: \"243f6fa3-2f04-4547-82e7-36876982dd48\") "
Feb 12 19:49:39.696938 kubelet[2397]: I0212 19:49:39.693042 2397 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/243f6fa3-2f04-4547-82e7-36876982dd48-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "243f6fa3-2f04-4547-82e7-36876982dd48" (UID: "243f6fa3-2f04-4547-82e7-36876982dd48"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 19:49:39.697598 kubelet[2397]: I0212 19:49:39.693071 2397 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a3d8eaa0-70ca-4068-b415-dbf716ca7cd9-cilium-config-path\") pod \"a3d8eaa0-70ca-4068-b415-dbf716ca7cd9\" (UID: \"a3d8eaa0-70ca-4068-b415-dbf716ca7cd9\") "
Feb 12 19:49:39.697598 kubelet[2397]: I0212 19:49:39.693103 2397 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/243f6fa3-2f04-4547-82e7-36876982dd48-bpf-maps\") pod \"243f6fa3-2f04-4547-82e7-36876982dd48\" (UID: \"243f6fa3-2f04-4547-82e7-36876982dd48\") "
Feb 12 19:49:39.697598 kubelet[2397]: I0212 19:49:39.693136 2397 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/243f6fa3-2f04-4547-82e7-36876982dd48-cni-path\") pod \"243f6fa3-2f04-4547-82e7-36876982dd48\" (UID: \"243f6fa3-2f04-4547-82e7-36876982dd48\") "
Feb 12 19:49:39.697598 kubelet[2397]: I0212 19:49:39.693166 2397 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/243f6fa3-2f04-4547-82e7-36876982dd48-cilium-cgroup\") pod \"243f6fa3-2f04-4547-82e7-36876982dd48\" (UID: \"243f6fa3-2f04-4547-82e7-36876982dd48\") "
Feb 12 19:49:39.697598 kubelet[2397]: I0212 19:49:39.693201 2397 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/243f6fa3-2f04-4547-82e7-36876982dd48-cilium-config-path\") pod \"243f6fa3-2f04-4547-82e7-36876982dd48\" (UID: \"243f6fa3-2f04-4547-82e7-36876982dd48\") "
Feb 12 19:49:39.697598 kubelet[2397]: I0212 19:49:39.693234 2397 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/243f6fa3-2f04-4547-82e7-36876982dd48-host-proc-sys-kernel\") pod \"243f6fa3-2f04-4547-82e7-36876982dd48\" (UID: \"243f6fa3-2f04-4547-82e7-36876982dd48\") "
Feb 12 19:49:39.698057 kubelet[2397]: I0212 19:49:39.693269 2397 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j6thh\" (UniqueName: \"kubernetes.io/projected/a3d8eaa0-70ca-4068-b415-dbf716ca7cd9-kube-api-access-j6thh\") pod \"a3d8eaa0-70ca-4068-b415-dbf716ca7cd9\" (UID: \"a3d8eaa0-70ca-4068-b415-dbf716ca7cd9\") "
Feb 12 19:49:39.698057 kubelet[2397]: I0212 19:49:39.693302 2397 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/243f6fa3-2f04-4547-82e7-36876982dd48-etc-cni-netd\") pod \"243f6fa3-2f04-4547-82e7-36876982dd48\" (UID: \"243f6fa3-2f04-4547-82e7-36876982dd48\") "
Feb 12 19:49:39.698057 kubelet[2397]: I0212 19:49:39.693337 2397 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2h98g\" (UniqueName: \"kubernetes.io/projected/243f6fa3-2f04-4547-82e7-36876982dd48-kube-api-access-2h98g\") pod \"243f6fa3-2f04-4547-82e7-36876982dd48\" (UID: \"243f6fa3-2f04-4547-82e7-36876982dd48\") "
Feb 12 19:49:39.698057 kubelet[2397]: I0212 19:49:39.693370 2397 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/243f6fa3-2f04-4547-82e7-36876982dd48-lib-modules\") pod \"243f6fa3-2f04-4547-82e7-36876982dd48\" (UID: \"243f6fa3-2f04-4547-82e7-36876982dd48\") "
Feb 12 19:49:39.698057 kubelet[2397]: I0212 19:49:39.693407 2397 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/243f6fa3-2f04-4547-82e7-36876982dd48-hubble-tls\") pod \"243f6fa3-2f04-4547-82e7-36876982dd48\" (UID: \"243f6fa3-2f04-4547-82e7-36876982dd48\") "
Feb 12 19:49:39.698057 kubelet[2397]: I0212 19:49:39.693463 2397 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/243f6fa3-2f04-4547-82e7-36876982dd48-xtables-lock\") on node \"ci-3510.3.2-a-e615f4b643\" DevicePath \"\""
Feb 12 19:49:39.698310 kubelet[2397]: I0212 19:49:39.693573 2397 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/243f6fa3-2f04-4547-82e7-36876982dd48-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "243f6fa3-2f04-4547-82e7-36876982dd48" (UID: "243f6fa3-2f04-4547-82e7-36876982dd48"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 19:49:39.698310 kubelet[2397]: I0212 19:49:39.693606 2397 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/243f6fa3-2f04-4547-82e7-36876982dd48-hostproc" (OuterVolumeSpecName: "hostproc") pod "243f6fa3-2f04-4547-82e7-36876982dd48" (UID: "243f6fa3-2f04-4547-82e7-36876982dd48"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 19:49:39.698310 kubelet[2397]: I0212 19:49:39.694833 2397 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/243f6fa3-2f04-4547-82e7-36876982dd48-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "243f6fa3-2f04-4547-82e7-36876982dd48" (UID: "243f6fa3-2f04-4547-82e7-36876982dd48"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 19:49:39.698310 kubelet[2397]: I0212 19:49:39.694896 2397 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/243f6fa3-2f04-4547-82e7-36876982dd48-cni-path" (OuterVolumeSpecName: "cni-path") pod "243f6fa3-2f04-4547-82e7-36876982dd48" (UID: "243f6fa3-2f04-4547-82e7-36876982dd48"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 19:49:39.698310 kubelet[2397]: I0212 19:49:39.694924 2397 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/243f6fa3-2f04-4547-82e7-36876982dd48-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "243f6fa3-2f04-4547-82e7-36876982dd48" (UID: "243f6fa3-2f04-4547-82e7-36876982dd48"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 19:49:39.698523 kubelet[2397]: I0212 19:49:39.698374 2397 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/243f6fa3-2f04-4547-82e7-36876982dd48-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "243f6fa3-2f04-4547-82e7-36876982dd48" (UID: "243f6fa3-2f04-4547-82e7-36876982dd48"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 12 19:49:39.698523 kubelet[2397]: I0212 19:49:39.698441 2397 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/243f6fa3-2f04-4547-82e7-36876982dd48-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "243f6fa3-2f04-4547-82e7-36876982dd48" (UID: "243f6fa3-2f04-4547-82e7-36876982dd48"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 19:49:39.698667 kubelet[2397]: I0212 19:49:39.698648 2397 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/243f6fa3-2f04-4547-82e7-36876982dd48-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "243f6fa3-2f04-4547-82e7-36876982dd48" (UID: "243f6fa3-2f04-4547-82e7-36876982dd48"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 19:49:39.700637 kubelet[2397]: I0212 19:49:39.700609 2397 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a3d8eaa0-70ca-4068-b415-dbf716ca7cd9-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "a3d8eaa0-70ca-4068-b415-dbf716ca7cd9" (UID: "a3d8eaa0-70ca-4068-b415-dbf716ca7cd9"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 12 19:49:39.701337 kubelet[2397]: I0212 19:49:39.701312 2397 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/243f6fa3-2f04-4547-82e7-36876982dd48-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "243f6fa3-2f04-4547-82e7-36876982dd48" (UID: "243f6fa3-2f04-4547-82e7-36876982dd48"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 19:49:39.702415 kubelet[2397]: I0212 19:49:39.702390 2397 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/243f6fa3-2f04-4547-82e7-36876982dd48-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "243f6fa3-2f04-4547-82e7-36876982dd48" (UID: "243f6fa3-2f04-4547-82e7-36876982dd48"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 12 19:49:39.702543 kubelet[2397]: I0212 19:49:39.702454 2397 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/243f6fa3-2f04-4547-82e7-36876982dd48-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "243f6fa3-2f04-4547-82e7-36876982dd48" (UID: "243f6fa3-2f04-4547-82e7-36876982dd48"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 19:49:39.705330 kubelet[2397]: I0212 19:49:39.705312 2397 scope.go:117] "RemoveContainer" containerID="3443439b0e1b514a66a4992441408114f4f32df41a0101589c3d9ab7018d5a9f"
Feb 12 19:49:39.707564 kubelet[2397]: I0212 19:49:39.707540 2397 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a3d8eaa0-70ca-4068-b415-dbf716ca7cd9-kube-api-access-j6thh" (OuterVolumeSpecName: "kube-api-access-j6thh") pod "a3d8eaa0-70ca-4068-b415-dbf716ca7cd9" (UID: "a3d8eaa0-70ca-4068-b415-dbf716ca7cd9"). InnerVolumeSpecName "kube-api-access-j6thh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 12 19:49:39.710116 env[1314]: time="2024-02-12T19:49:39.710076172Z" level=info msg="RemoveContainer for \"3443439b0e1b514a66a4992441408114f4f32df41a0101589c3d9ab7018d5a9f\""
Feb 12 19:49:39.716030 kubelet[2397]: I0212 19:49:39.715986 2397 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/243f6fa3-2f04-4547-82e7-36876982dd48-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "243f6fa3-2f04-4547-82e7-36876982dd48" (UID: "243f6fa3-2f04-4547-82e7-36876982dd48"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 12 19:49:39.718708 kubelet[2397]: I0212 19:49:39.717563 2397 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/243f6fa3-2f04-4547-82e7-36876982dd48-kube-api-access-2h98g" (OuterVolumeSpecName: "kube-api-access-2h98g") pod "243f6fa3-2f04-4547-82e7-36876982dd48" (UID: "243f6fa3-2f04-4547-82e7-36876982dd48"). InnerVolumeSpecName "kube-api-access-2h98g". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 12 19:49:39.722718 env[1314]: time="2024-02-12T19:49:39.722641924Z" level=info msg="RemoveContainer for \"3443439b0e1b514a66a4992441408114f4f32df41a0101589c3d9ab7018d5a9f\" returns successfully"
Feb 12 19:49:39.726632 kubelet[2397]: I0212 19:49:39.726549 2397 scope.go:117] "RemoveContainer" containerID="46fed81b611dcfce38638842d07c9ad4ba44ca1cad8f08203e424cebed81eb48"
Feb 12 19:49:39.728417 env[1314]: time="2024-02-12T19:49:39.727729846Z" level=info msg="RemoveContainer for \"46fed81b611dcfce38638842d07c9ad4ba44ca1cad8f08203e424cebed81eb48\""
Feb 12 19:49:39.728099 systemd[1]: Removed slice kubepods-besteffort-poda3d8eaa0_70ca_4068_b415_dbf716ca7cd9.slice.
Feb 12 19:49:39.736766 env[1314]: time="2024-02-12T19:49:39.736736083Z" level=info msg="RemoveContainer for \"46fed81b611dcfce38638842d07c9ad4ba44ca1cad8f08203e424cebed81eb48\" returns successfully"
Feb 12 19:49:39.736960 kubelet[2397]: I0212 19:49:39.736940 2397 scope.go:117] "RemoveContainer" containerID="335020dc675fbb0fb90fefd40a889fda6066d47dcb1d780ec378adf7be58c679"
Feb 12 19:49:39.737968 env[1314]: time="2024-02-12T19:49:39.737941488Z" level=info msg="RemoveContainer for \"335020dc675fbb0fb90fefd40a889fda6066d47dcb1d780ec378adf7be58c679\""
Feb 12 19:49:39.745594 env[1314]: time="2024-02-12T19:49:39.745507519Z" level=info msg="RemoveContainer for \"335020dc675fbb0fb90fefd40a889fda6066d47dcb1d780ec378adf7be58c679\" returns successfully"
Feb 12 19:49:39.746130 kubelet[2397]: I0212 19:49:39.746023 2397 scope.go:117] "RemoveContainer" containerID="445a0bc0e194be60e949266ae5230a82dd97540557c2f6c71e93d370df26b4ee"
Feb 12 19:49:39.747444 env[1314]: time="2024-02-12T19:49:39.747414727Z" level=info msg="RemoveContainer for \"445a0bc0e194be60e949266ae5230a82dd97540557c2f6c71e93d370df26b4ee\""
Feb 12 19:49:39.753588 env[1314]: time="2024-02-12T19:49:39.753553753Z" level=info msg="RemoveContainer for \"445a0bc0e194be60e949266ae5230a82dd97540557c2f6c71e93d370df26b4ee\" returns successfully"
Feb 12 19:49:39.753814 kubelet[2397]: I0212 19:49:39.753778 2397 scope.go:117] "RemoveContainer" containerID="75442138ca56287bfa78ab6c08ad0c648aeac4433d5c17a83032ec086556a7f6"
Feb 12 19:49:39.754894 env[1314]: time="2024-02-12T19:49:39.754846958Z" level=info msg="RemoveContainer for \"75442138ca56287bfa78ab6c08ad0c648aeac4433d5c17a83032ec086556a7f6\""
Feb 12 19:49:39.763053 env[1314]: time="2024-02-12T19:49:39.763018692Z" level=info msg="RemoveContainer for \"75442138ca56287bfa78ab6c08ad0c648aeac4433d5c17a83032ec086556a7f6\" returns successfully"
Feb 12 19:49:39.763252 kubelet[2397]: I0212 19:49:39.763224 2397 scope.go:117] "RemoveContainer" containerID="3443439b0e1b514a66a4992441408114f4f32df41a0101589c3d9ab7018d5a9f"
Feb 12 19:49:39.763469 env[1314]: time="2024-02-12T19:49:39.763396594Z" level=error msg="ContainerStatus for \"3443439b0e1b514a66a4992441408114f4f32df41a0101589c3d9ab7018d5a9f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3443439b0e1b514a66a4992441408114f4f32df41a0101589c3d9ab7018d5a9f\": not found"
Feb 12 19:49:39.763612 kubelet[2397]: E0212 19:49:39.763592 2397 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3443439b0e1b514a66a4992441408114f4f32df41a0101589c3d9ab7018d5a9f\": not found" containerID="3443439b0e1b514a66a4992441408114f4f32df41a0101589c3d9ab7018d5a9f"
Feb 12 19:49:39.763746 kubelet[2397]: I0212 19:49:39.763727 2397 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3443439b0e1b514a66a4992441408114f4f32df41a0101589c3d9ab7018d5a9f"} err="failed to get container status \"3443439b0e1b514a66a4992441408114f4f32df41a0101589c3d9ab7018d5a9f\": rpc error: code = NotFound desc = an error occurred when try to find container \"3443439b0e1b514a66a4992441408114f4f32df41a0101589c3d9ab7018d5a9f\": not found"
Feb 12 19:49:39.763827 kubelet[2397]: I0212 19:49:39.763749 2397 scope.go:117] "RemoveContainer" containerID="46fed81b611dcfce38638842d07c9ad4ba44ca1cad8f08203e424cebed81eb48"
Feb 12 19:49:39.763974 env[1314]: time="2024-02-12T19:49:39.763917996Z" level=error msg="ContainerStatus for \"46fed81b611dcfce38638842d07c9ad4ba44ca1cad8f08203e424cebed81eb48\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"46fed81b611dcfce38638842d07c9ad4ba44ca1cad8f08203e424cebed81eb48\": not found"
Feb 12 19:49:39.764095 kubelet[2397]: E0212 19:49:39.764076 2397 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"46fed81b611dcfce38638842d07c9ad4ba44ca1cad8f08203e424cebed81eb48\": not found" containerID="46fed81b611dcfce38638842d07c9ad4ba44ca1cad8f08203e424cebed81eb48"
Feb 12 19:49:39.764170 kubelet[2397]: I0212 19:49:39.764111 2397 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"46fed81b611dcfce38638842d07c9ad4ba44ca1cad8f08203e424cebed81eb48"} err="failed to get container status \"46fed81b611dcfce38638842d07c9ad4ba44ca1cad8f08203e424cebed81eb48\": rpc error: code = NotFound desc = an error occurred when try to find container \"46fed81b611dcfce38638842d07c9ad4ba44ca1cad8f08203e424cebed81eb48\": not found"
Feb 12 19:49:39.764170 kubelet[2397]: I0212 19:49:39.764124 2397 scope.go:117] "RemoveContainer" containerID="335020dc675fbb0fb90fefd40a889fda6066d47dcb1d780ec378adf7be58c679"
Feb 12 19:49:39.764376 env[1314]: time="2024-02-12T19:49:39.764326598Z" level=error msg="ContainerStatus for \"335020dc675fbb0fb90fefd40a889fda6066d47dcb1d780ec378adf7be58c679\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"335020dc675fbb0fb90fefd40a889fda6066d47dcb1d780ec378adf7be58c679\": not found"
Feb 12 19:49:39.764498 kubelet[2397]: E0212 19:49:39.764479 2397 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"335020dc675fbb0fb90fefd40a889fda6066d47dcb1d780ec378adf7be58c679\": not found" containerID="335020dc675fbb0fb90fefd40a889fda6066d47dcb1d780ec378adf7be58c679"
Feb 12 19:49:39.764571 kubelet[2397]: I0212 19:49:39.764514 2397 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"335020dc675fbb0fb90fefd40a889fda6066d47dcb1d780ec378adf7be58c679"} err="failed to get container status \"335020dc675fbb0fb90fefd40a889fda6066d47dcb1d780ec378adf7be58c679\": rpc error: code = NotFound desc = an error occurred when try to find container \"335020dc675fbb0fb90fefd40a889fda6066d47dcb1d780ec378adf7be58c679\": not found"
Feb 12 19:49:39.764571 kubelet[2397]: I0212 19:49:39.764526 2397 scope.go:117] "RemoveContainer" containerID="445a0bc0e194be60e949266ae5230a82dd97540557c2f6c71e93d370df26b4ee"
Feb 12 19:49:39.764762 env[1314]: time="2024-02-12T19:49:39.764701099Z" level=error msg="ContainerStatus for \"445a0bc0e194be60e949266ae5230a82dd97540557c2f6c71e93d370df26b4ee\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"445a0bc0e194be60e949266ae5230a82dd97540557c2f6c71e93d370df26b4ee\": not found"
Feb 12 19:49:39.764906 kubelet[2397]: E0212 19:49:39.764887 2397 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"445a0bc0e194be60e949266ae5230a82dd97540557c2f6c71e93d370df26b4ee\": not found" containerID="445a0bc0e194be60e949266ae5230a82dd97540557c2f6c71e93d370df26b4ee"
Feb 12 19:49:39.764988 kubelet[2397]: I0212 19:49:39.764917 2397 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"445a0bc0e194be60e949266ae5230a82dd97540557c2f6c71e93d370df26b4ee"} err="failed to get container status \"445a0bc0e194be60e949266ae5230a82dd97540557c2f6c71e93d370df26b4ee\": rpc error: code = NotFound desc = an error occurred when try to find container \"445a0bc0e194be60e949266ae5230a82dd97540557c2f6c71e93d370df26b4ee\": not found"
Feb 12 19:49:39.764988 kubelet[2397]: I0212 19:49:39.764930 2397 scope.go:117] "RemoveContainer" containerID="75442138ca56287bfa78ab6c08ad0c648aeac4433d5c17a83032ec086556a7f6"
Feb 12 19:49:39.765141 env[1314]: time="2024-02-12T19:49:39.765091801Z" level=error msg="ContainerStatus for \"75442138ca56287bfa78ab6c08ad0c648aeac4433d5c17a83032ec086556a7f6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"75442138ca56287bfa78ab6c08ad0c648aeac4433d5c17a83032ec086556a7f6\": not found"
Feb 12 19:49:39.765263 kubelet[2397]: E0212 19:49:39.765245 2397 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"75442138ca56287bfa78ab6c08ad0c648aeac4433d5c17a83032ec086556a7f6\": not found" containerID="75442138ca56287bfa78ab6c08ad0c648aeac4433d5c17a83032ec086556a7f6"
Feb 12 19:49:39.765335 kubelet[2397]: I0212 19:49:39.765277 2397 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"75442138ca56287bfa78ab6c08ad0c648aeac4433d5c17a83032ec086556a7f6"} err="failed to get container status \"75442138ca56287bfa78ab6c08ad0c648aeac4433d5c17a83032ec086556a7f6\": rpc error: code = NotFound desc = an error occurred when try to find container \"75442138ca56287bfa78ab6c08ad0c648aeac4433d5c17a83032ec086556a7f6\": not found"
Feb 12 19:49:39.765335 kubelet[2397]: I0212 19:49:39.765289 2397 scope.go:117] "RemoveContainer" containerID="7f9b9fa2de6ff50b8da7eabaed57e17b4f0eed954ba9ae7460def29e1c5cb1a4"
Feb 12 19:49:39.766245 env[1314]: time="2024-02-12T19:49:39.766211605Z" level=info msg="RemoveContainer for \"7f9b9fa2de6ff50b8da7eabaed57e17b4f0eed954ba9ae7460def29e1c5cb1a4\""
Feb 12 19:49:39.774280 env[1314]: time="2024-02-12T19:49:39.773949938Z" level=info msg="RemoveContainer for \"7f9b9fa2de6ff50b8da7eabaed57e17b4f0eed954ba9ae7460def29e1c5cb1a4\" returns successfully"
Feb 12 19:49:39.776325 kubelet[2397]: I0212 19:49:39.776306 2397 scope.go:117] "RemoveContainer" containerID="7f9b9fa2de6ff50b8da7eabaed57e17b4f0eed954ba9ae7460def29e1c5cb1a4"
Feb 12 19:49:39.777543 env[1314]: time="2024-02-12T19:49:39.777482452Z" level=error msg="ContainerStatus for \"7f9b9fa2de6ff50b8da7eabaed57e17b4f0eed954ba9ae7460def29e1c5cb1a4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7f9b9fa2de6ff50b8da7eabaed57e17b4f0eed954ba9ae7460def29e1c5cb1a4\": not found"
Feb 12 19:49:39.777659 kubelet[2397]: E0212 19:49:39.777642 2397 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7f9b9fa2de6ff50b8da7eabaed57e17b4f0eed954ba9ae7460def29e1c5cb1a4\": not found" containerID="7f9b9fa2de6ff50b8da7eabaed57e17b4f0eed954ba9ae7460def29e1c5cb1a4"
Feb 12 19:49:39.777810 kubelet[2397]: I0212 19:49:39.777681 2397 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7f9b9fa2de6ff50b8da7eabaed57e17b4f0eed954ba9ae7460def29e1c5cb1a4"} err="failed to get container status \"7f9b9fa2de6ff50b8da7eabaed57e17b4f0eed954ba9ae7460def29e1c5cb1a4\": rpc error: code = NotFound desc = an error occurred when try to find container \"7f9b9fa2de6ff50b8da7eabaed57e17b4f0eed954ba9ae7460def29e1c5cb1a4\": not found"
Feb 12 19:49:39.794457 kubelet[2397]: I0212 19:49:39.794308 2397 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/243f6fa3-2f04-4547-82e7-36876982dd48-bpf-maps\") on node \"ci-3510.3.2-a-e615f4b643\" DevicePath \"\""
Feb 12 19:49:39.794552 kubelet[2397]: I0212 19:49:39.794463 2397 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/243f6fa3-2f04-4547-82e7-36876982dd48-cni-path\") on node \"ci-3510.3.2-a-e615f4b643\" DevicePath \"\""
Feb 12 19:49:39.794552 kubelet[2397]: I0212 19:49:39.794479 2397 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/243f6fa3-2f04-4547-82e7-36876982dd48-cilium-cgroup\") on node \"ci-3510.3.2-a-e615f4b643\" DevicePath \"\""
Feb 12 19:49:39.794552 kubelet[2397]: I0212 19:49:39.794495 2397 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/243f6fa3-2f04-4547-82e7-36876982dd48-host-proc-sys-kernel\") on node \"ci-3510.3.2-a-e615f4b643\" DevicePath \"\""
Feb 12 19:49:39.794552 kubelet[2397]: I0212 19:49:39.794509 2397 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-j6thh\" (UniqueName: \"kubernetes.io/projected/a3d8eaa0-70ca-4068-b415-dbf716ca7cd9-kube-api-access-j6thh\") on node \"ci-3510.3.2-a-e615f4b643\" DevicePath \"\""
Feb 12 19:49:39.794552 kubelet[2397]: I0212 19:49:39.794525 2397 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/243f6fa3-2f04-4547-82e7-36876982dd48-cilium-config-path\") on node \"ci-3510.3.2-a-e615f4b643\" DevicePath \"\""
Feb 12 19:49:39.794552 kubelet[2397]: I0212 19:49:39.794539 2397 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/243f6fa3-2f04-4547-82e7-36876982dd48-etc-cni-netd\") on node \"ci-3510.3.2-a-e615f4b643\" DevicePath \"\""
Feb 12 19:49:39.794552 kubelet[2397]: I0212 19:49:39.794553 2397 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/243f6fa3-2f04-4547-82e7-36876982dd48-lib-modules\") on node \"ci-3510.3.2-a-e615f4b643\" DevicePath \"\""
Feb 12 19:49:39.794814 kubelet[2397]: I0212 19:49:39.794567 2397 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/243f6fa3-2f04-4547-82e7-36876982dd48-hubble-tls\") on node \"ci-3510.3.2-a-e615f4b643\" DevicePath \"\""
Feb 12 19:49:39.794814 kubelet[2397]: I0212 19:49:39.794580 2397 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-2h98g\" (UniqueName: \"kubernetes.io/projected/243f6fa3-2f04-4547-82e7-36876982dd48-kube-api-access-2h98g\") on node \"ci-3510.3.2-a-e615f4b643\" DevicePath \"\""
Feb 12 19:49:39.794814 kubelet[2397]: I0212 19:49:39.794595 2397 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/243f6fa3-2f04-4547-82e7-36876982dd48-host-proc-sys-net\") on node \"ci-3510.3.2-a-e615f4b643\" DevicePath \"\""
Feb 12 19:49:39.794814 kubelet[2397]: I0212 19:49:39.794612 2397 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/243f6fa3-2f04-4547-82e7-36876982dd48-clustermesh-secrets\") on node \"ci-3510.3.2-a-e615f4b643\" DevicePath \"\""
Feb 12 19:49:39.794814 kubelet[2397]: I0212 19:49:39.794625 2397 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/243f6fa3-2f04-4547-82e7-36876982dd48-hostproc\") on node \"ci-3510.3.2-a-e615f4b643\" DevicePath \"\""
Feb 12 19:49:39.794814 kubelet[2397]: I0212 19:49:39.794640 2397 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a3d8eaa0-70ca-4068-b415-dbf716ca7cd9-cilium-config-path\") on node \"ci-3510.3.2-a-e615f4b643\" DevicePath \"\""
Feb 12 19:49:39.794814 kubelet[2397]: I0212 19:49:39.794654 2397 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/243f6fa3-2f04-4547-82e7-36876982dd48-cilium-run\") on node \"ci-3510.3.2-a-e615f4b643\" DevicePath \"\""
Feb 12 19:49:40.011219 systemd[1]: Removed slice kubepods-burstable-pod243f6fa3_2f04_4547_82e7_36876982dd48.slice.
Feb 12 19:49:40.011369 systemd[1]: kubepods-burstable-pod243f6fa3_2f04_4547_82e7_36876982dd48.slice: Consumed 7.335s CPU time.
Feb 12 19:49:40.334450 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-13736367e903a80980ff9cf37a2e6920f102eb9b2b8e99ab4bb0b5f47c5d601b-rootfs.mount: Deactivated successfully.
Feb 12 19:49:40.334564 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-13736367e903a80980ff9cf37a2e6920f102eb9b2b8e99ab4bb0b5f47c5d601b-shm.mount: Deactivated successfully.
Feb 12 19:49:40.334647 systemd[1]: var-lib-kubelet-pods-243f6fa3\x2d2f04\x2d4547\x2d82e7\x2d36876982dd48-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Feb 12 19:49:40.334734 systemd[1]: var-lib-kubelet-pods-243f6fa3\x2d2f04\x2d4547\x2d82e7\x2d36876982dd48-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Feb 12 19:49:40.334823 systemd[1]: var-lib-kubelet-pods-a3d8eaa0\x2d70ca\x2d4068\x2db415\x2ddbf716ca7cd9-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dj6thh.mount: Deactivated successfully.
Feb 12 19:49:40.334911 systemd[1]: var-lib-kubelet-pods-243f6fa3\x2d2f04\x2d4547\x2d82e7\x2d36876982dd48-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d2h98g.mount: Deactivated successfully.
Feb 12 19:49:41.198715 kubelet[2397]: I0212 19:49:41.198651 2397 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="243f6fa3-2f04-4547-82e7-36876982dd48" path="/var/lib/kubelet/pods/243f6fa3-2f04-4547-82e7-36876982dd48/volumes"
Feb 12 19:49:41.199566 kubelet[2397]: I0212 19:49:41.199532 2397 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="a3d8eaa0-70ca-4068-b415-dbf716ca7cd9" path="/var/lib/kubelet/pods/a3d8eaa0-70ca-4068-b415-dbf716ca7cd9/volumes"
Feb 12 19:49:41.374886 sshd[3957]: pam_unix(sshd:session): session closed for user core
Feb 12 19:49:41.379005 systemd[1]: sshd@22-10.200.8.24:22-10.200.12.6:59220.service: Deactivated successfully.
Feb 12 19:49:41.380076 systemd[1]: session-25.scope: Deactivated successfully.
Feb 12 19:49:41.380953 systemd-logind[1297]: Session 25 logged out. Waiting for processes to exit.
Feb 12 19:49:41.381951 systemd-logind[1297]: Removed session 25.
Feb 12 19:49:41.479849 systemd[1]: Started sshd@23-10.200.8.24:22-10.200.12.6:59232.service.
Feb 12 19:49:42.099682 sshd[4127]: Accepted publickey for core from 10.200.12.6 port 59232 ssh2: RSA SHA256:s7YymQosdnJ6BBn11oTaBnKtgbkZHlGvzOt+RffOmrs
Feb 12 19:49:42.101435 sshd[4127]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 19:49:42.106533 systemd[1]: Started session-26.scope.
Feb 12 19:49:42.107173 systemd-logind[1297]: New session 26 of user core.
Feb 12 19:49:42.286337 kubelet[2397]: E0212 19:49:42.286305 2397 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 12 19:49:43.010437 kubelet[2397]: I0212 19:49:43.010394 2397 topology_manager.go:215] "Topology Admit Handler" podUID="7d849691-b14f-4c7d-b1fd-ab035e9b5b66" podNamespace="kube-system" podName="cilium-275qp"
Feb 12 19:49:43.010620 kubelet[2397]: E0212 19:49:43.010480 2397 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="243f6fa3-2f04-4547-82e7-36876982dd48" containerName="mount-cgroup"
Feb 12 19:49:43.010620 kubelet[2397]: E0212 19:49:43.010493 2397 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="243f6fa3-2f04-4547-82e7-36876982dd48" containerName="apply-sysctl-overwrites"
Feb 12 19:49:43.010620 kubelet[2397]: E0212 19:49:43.010504 2397 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="243f6fa3-2f04-4547-82e7-36876982dd48" containerName="clean-cilium-state"
Feb 12 19:49:43.010620 kubelet[2397]: E0212 19:49:43.010513 2397 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a3d8eaa0-70ca-4068-b415-dbf716ca7cd9" containerName="cilium-operator"
Feb 12 19:49:43.010620 kubelet[2397]: E0212 19:49:43.010532 2397 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="243f6fa3-2f04-4547-82e7-36876982dd48" containerName="mount-bpf-fs"
Feb 12 19:49:43.010620 kubelet[2397]: E0212 19:49:43.010541 2397 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="243f6fa3-2f04-4547-82e7-36876982dd48" containerName="cilium-agent"
Feb 12 19:49:43.010620 kubelet[2397]: I0212 19:49:43.010570 2397 memory_manager.go:346] "RemoveStaleState removing state" podUID="243f6fa3-2f04-4547-82e7-36876982dd48" containerName="cilium-agent"
Feb 12 19:49:43.010620 kubelet[2397]: I0212 19:49:43.010579 2397 memory_manager.go:346] "RemoveStaleState removing state" podUID="a3d8eaa0-70ca-4068-b415-dbf716ca7cd9" containerName="cilium-operator"
Feb 12 19:49:43.017719 systemd[1]: Created slice kubepods-burstable-pod7d849691_b14f_4c7d_b1fd_ab035e9b5b66.slice.
Feb 12 19:49:43.133285 sshd[4127]: pam_unix(sshd:session): session closed for user core
Feb 12 19:49:43.136902 systemd[1]: sshd@23-10.200.8.24:22-10.200.12.6:59232.service: Deactivated successfully.
Feb 12 19:49:43.137832 systemd[1]: session-26.scope: Deactivated successfully.
Feb 12 19:49:43.138539 systemd-logind[1297]: Session 26 logged out. Waiting for processes to exit.
Feb 12 19:49:43.139458 systemd-logind[1297]: Removed session 26.
Feb 12 19:49:43.209949 kubelet[2397]: I0212 19:49:43.209915 2397 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7d849691-b14f-4c7d-b1fd-ab035e9b5b66-xtables-lock\") pod \"cilium-275qp\" (UID: \"7d849691-b14f-4c7d-b1fd-ab035e9b5b66\") " pod="kube-system/cilium-275qp"
Feb 12 19:49:43.210185 kubelet[2397]: I0212 19:49:43.210158 2397 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7d849691-b14f-4c7d-b1fd-ab035e9b5b66-cilium-cgroup\") pod \"cilium-275qp\" (UID: \"7d849691-b14f-4c7d-b1fd-ab035e9b5b66\") " pod="kube-system/cilium-275qp"
Feb 12 19:49:43.210292 kubelet[2397]: I0212 19:49:43.210195 2397 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7d849691-b14f-4c7d-b1fd-ab035e9b5b66-cni-path\") pod \"cilium-275qp\" (UID: \"7d849691-b14f-4c7d-b1fd-ab035e9b5b66\") " pod="kube-system/cilium-275qp"
Feb 12 19:49:43.210292 kubelet[2397]: I0212 19:49:43.210224 2397 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7d849691-b14f-4c7d-b1fd-ab035e9b5b66-clustermesh-secrets\") pod \"cilium-275qp\" (UID: \"7d849691-b14f-4c7d-b1fd-ab035e9b5b66\") " pod="kube-system/cilium-275qp"
Feb 12 19:49:43.210292 kubelet[2397]: I0212 19:49:43.210251 2397 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7d849691-b14f-4c7d-b1fd-ab035e9b5b66-host-proc-sys-net\") pod \"cilium-275qp\" (UID: \"7d849691-b14f-4c7d-b1fd-ab035e9b5b66\") " pod="kube-system/cilium-275qp"
Feb 12 19:49:43.210292 kubelet[2397]: I0212 19:49:43.210282 2397 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nm7jc\" (UniqueName: \"kubernetes.io/projected/7d849691-b14f-4c7d-b1fd-ab035e9b5b66-kube-api-access-nm7jc\") pod \"cilium-275qp\" (UID: \"7d849691-b14f-4c7d-b1fd-ab035e9b5b66\") " pod="kube-system/cilium-275qp"
Feb 12 19:49:43.210473 kubelet[2397]: I0212 19:49:43.210309 2397 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7d849691-b14f-4c7d-b1fd-ab035e9b5b66-hubble-tls\") pod \"cilium-275qp\" (UID: \"7d849691-b14f-4c7d-b1fd-ab035e9b5b66\") " pod="kube-system/cilium-275qp"
Feb 12 19:49:43.210473 kubelet[2397]: I0212 19:49:43.210339 2397 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7d849691-b14f-4c7d-b1fd-ab035e9b5b66-cilium-run\") pod \"cilium-275qp\" (UID: \"7d849691-b14f-4c7d-b1fd-ab035e9b5b66\") " pod="kube-system/cilium-275qp"
Feb 12 19:49:43.210473 kubelet[2397]: I0212 19:49:43.210369 2397 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/7d849691-b14f-4c7d-b1fd-ab035e9b5b66-cilium-ipsec-secrets\") pod \"cilium-275qp\" (UID: \"7d849691-b14f-4c7d-b1fd-ab035e9b5b66\") " pod="kube-system/cilium-275qp"
Feb 12 19:49:43.210473 kubelet[2397]: I0212 19:49:43.210399 2397 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7d849691-b14f-4c7d-b1fd-ab035e9b5b66-bpf-maps\") pod \"cilium-275qp\" (UID: \"7d849691-b14f-4c7d-b1fd-ab035e9b5b66\") " pod="kube-system/cilium-275qp"
Feb 12 19:49:43.210473 kubelet[2397]: I0212 19:49:43.210426 2397 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7d849691-b14f-4c7d-b1fd-ab035e9b5b66-hostproc\") pod \"cilium-275qp\" (UID: \"7d849691-b14f-4c7d-b1fd-ab035e9b5b66\") " pod="kube-system/cilium-275qp"
Feb 12 19:49:43.210473 kubelet[2397]: I0212 19:49:43.210457 2397 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7d849691-b14f-4c7d-b1fd-ab035e9b5b66-etc-cni-netd\") pod \"cilium-275qp\" (UID: \"7d849691-b14f-4c7d-b1fd-ab035e9b5b66\") " pod="kube-system/cilium-275qp"
Feb 12 19:49:43.210733 kubelet[2397]: I0212 19:49:43.210489 2397 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7d849691-b14f-4c7d-b1fd-ab035e9b5b66-lib-modules\") pod \"cilium-275qp\" (UID: \"7d849691-b14f-4c7d-b1fd-ab035e9b5b66\") " pod="kube-system/cilium-275qp"
Feb 12 19:49:43.210733 kubelet[2397]: I0212 19:49:43.210523 2397 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7d849691-b14f-4c7d-b1fd-ab035e9b5b66-cilium-config-path\") pod \"cilium-275qp\" (UID: \"7d849691-b14f-4c7d-b1fd-ab035e9b5b66\") " pod="kube-system/cilium-275qp"
Feb 12 19:49:43.210733 kubelet[2397]: I0212 19:49:43.210559 2397 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7d849691-b14f-4c7d-b1fd-ab035e9b5b66-host-proc-sys-kernel\") pod \"cilium-275qp\" (UID: \"7d849691-b14f-4c7d-b1fd-ab035e9b5b66\") " pod="kube-system/cilium-275qp"
Feb 12 19:49:43.239119 systemd[1]: Started sshd@24-10.200.8.24:22-10.200.12.6:59246.service.
Feb 12 19:49:43.622375 env[1314]: time="2024-02-12T19:49:43.622305661Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-275qp,Uid:7d849691-b14f-4c7d-b1fd-ab035e9b5b66,Namespace:kube-system,Attempt:0,}"
Feb 12 19:49:43.658635 env[1314]: time="2024-02-12T19:49:43.658558710Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 12 19:49:43.658635 env[1314]: time="2024-02-12T19:49:43.658593811Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 12 19:49:43.658635 env[1314]: time="2024-02-12T19:49:43.658608611Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 12 19:49:43.659102 env[1314]: time="2024-02-12T19:49:43.659053913Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a8b34579573fbee060d35126bee277db0b6304215436265394eb559b7f6266de pid=4152 runtime=io.containerd.runc.v2
Feb 12 19:49:43.671175 systemd[1]: Started cri-containerd-a8b34579573fbee060d35126bee277db0b6304215436265394eb559b7f6266de.scope.
Feb 12 19:49:43.698199 env[1314]: time="2024-02-12T19:49:43.698158374Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-275qp,Uid:7d849691-b14f-4c7d-b1fd-ab035e9b5b66,Namespace:kube-system,Attempt:0,} returns sandbox id \"a8b34579573fbee060d35126bee277db0b6304215436265394eb559b7f6266de\""
Feb 12 19:49:43.701592 env[1314]: time="2024-02-12T19:49:43.701552888Z" level=info msg="CreateContainer within sandbox \"a8b34579573fbee060d35126bee277db0b6304215436265394eb559b7f6266de\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Feb 12 19:49:43.739792 env[1314]: time="2024-02-12T19:49:43.739740545Z" level=info msg="CreateContainer within sandbox \"a8b34579573fbee060d35126bee277db0b6304215436265394eb559b7f6266de\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"bd3e332315c94fa6aa19fdf0a75a9d0d725bfb0e0961ae4194ba72784e961d6c\""
Feb 12 19:49:43.740457 env[1314]: time="2024-02-12T19:49:43.740424848Z" level=info msg="StartContainer for \"bd3e332315c94fa6aa19fdf0a75a9d0d725bfb0e0961ae4194ba72784e961d6c\""
Feb 12 19:49:43.761245 systemd[1]: Started cri-containerd-bd3e332315c94fa6aa19fdf0a75a9d0d725bfb0e0961ae4194ba72784e961d6c.scope.
Feb 12 19:49:43.774001 systemd[1]: cri-containerd-bd3e332315c94fa6aa19fdf0a75a9d0d725bfb0e0961ae4194ba72784e961d6c.scope: Deactivated successfully.
Feb 12 19:49:43.852568 env[1314]: time="2024-02-12T19:49:43.852508610Z" level=info msg="shim disconnected" id=bd3e332315c94fa6aa19fdf0a75a9d0d725bfb0e0961ae4194ba72784e961d6c
Feb 12 19:49:43.852568 env[1314]: time="2024-02-12T19:49:43.852564311Z" level=warning msg="cleaning up after shim disconnected" id=bd3e332315c94fa6aa19fdf0a75a9d0d725bfb0e0961ae4194ba72784e961d6c namespace=k8s.io
Feb 12 19:49:43.852568 env[1314]: time="2024-02-12T19:49:43.852575611Z" level=info msg="cleaning up dead shim"
Feb 12 19:49:43.858358 sshd[4139]: Accepted publickey for core from 10.200.12.6 port 59246 ssh2: RSA SHA256:s7YymQosdnJ6BBn11oTaBnKtgbkZHlGvzOt+RffOmrs
Feb 12 19:49:43.859239 sshd[4139]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 19:49:43.863965 env[1314]: time="2024-02-12T19:49:43.863924058Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:49:43Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4210 runtime=io.containerd.runc.v2\ntime=\"2024-02-12T19:49:43Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/bd3e332315c94fa6aa19fdf0a75a9d0d725bfb0e0961ae4194ba72784e961d6c/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n"
Feb 12 19:49:43.864455 env[1314]: time="2024-02-12T19:49:43.864335459Z" level=error msg="copy shim log" error="read /proc/self/fd/39: file already closed"
Feb 12 19:49:43.865328 systemd[1]: Started session-27.scope.
Feb 12 19:49:43.866080 systemd-logind[1297]: New session 27 of user core.
Feb 12 19:49:43.868882 env[1314]: time="2024-02-12T19:49:43.868580077Z" level=error msg="Failed to pipe stdout of container \"bd3e332315c94fa6aa19fdf0a75a9d0d725bfb0e0961ae4194ba72784e961d6c\"" error="reading from a closed fifo"
Feb 12 19:49:43.869282 env[1314]: time="2024-02-12T19:49:43.869051479Z" level=error msg="Failed to pipe stderr of container \"bd3e332315c94fa6aa19fdf0a75a9d0d725bfb0e0961ae4194ba72784e961d6c\"" error="reading from a closed fifo"
Feb 12 19:49:43.875754 env[1314]: time="2024-02-12T19:49:43.875632606Z" level=error msg="StartContainer for \"bd3e332315c94fa6aa19fdf0a75a9d0d725bfb0e0961ae4194ba72784e961d6c\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown"
Feb 12 19:49:43.876666 kubelet[2397]: E0212 19:49:43.875908 2397 remote_runtime.go:343] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="bd3e332315c94fa6aa19fdf0a75a9d0d725bfb0e0961ae4194ba72784e961d6c"
Feb 12 19:49:43.876666 kubelet[2397]: E0212 19:49:43.876103 2397 kuberuntime_manager.go:1209] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount;
Feb 12 19:49:43.876666 kubelet[2397]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT;
Feb 12 19:49:43.876666 kubelet[2397]: rm /hostbin/cilium-mount
Feb 12 19:49:43.877172 kubelet[2397]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-nm7jc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-275qp_kube-system(7d849691-b14f-4c7d-b1fd-ab035e9b5b66): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown
Feb 12 19:49:43.877286 kubelet[2397]: E0212 19:49:43.876162 2397 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-275qp" podUID="7d849691-b14f-4c7d-b1fd-ab035e9b5b66"
Feb 12 19:49:44.360023 sshd[4139]: pam_unix(sshd:session): session closed for user core
Feb 12 19:49:44.363404 systemd[1]: sshd@24-10.200.8.24:22-10.200.12.6:59246.service: Deactivated successfully.
Feb 12 19:49:44.364708 systemd[1]: session-27.scope: Deactivated successfully.
Feb 12 19:49:44.365580 systemd-logind[1297]: Session 27 logged out. Waiting for processes to exit.
Feb 12 19:49:44.366620 systemd-logind[1297]: Removed session 27.
Feb 12 19:49:44.465091 systemd[1]: Started sshd@25-10.200.8.24:22-10.200.12.6:59262.service.
Feb 12 19:49:44.730592 env[1314]: time="2024-02-12T19:49:44.730469027Z" level=info msg="CreateContainer within sandbox \"a8b34579573fbee060d35126bee277db0b6304215436265394eb559b7f6266de\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:1,}"
Feb 12 19:49:44.759949 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3293335119.mount: Deactivated successfully.
Feb 12 19:49:44.767926 env[1314]: time="2024-02-12T19:49:44.767877781Z" level=info msg="CreateContainer within sandbox \"a8b34579573fbee060d35126bee277db0b6304215436265394eb559b7f6266de\" for &ContainerMetadata{Name:mount-cgroup,Attempt:1,} returns container id \"0c89b23f53d78d7d88879e4fe969da636228b62b78a5963784e1df9436915e08\""
Feb 12 19:49:44.768803 env[1314]: time="2024-02-12T19:49:44.768734084Z" level=info msg="StartContainer for \"0c89b23f53d78d7d88879e4fe969da636228b62b78a5963784e1df9436915e08\""
Feb 12 19:49:44.795659 systemd[1]: Started cri-containerd-0c89b23f53d78d7d88879e4fe969da636228b62b78a5963784e1df9436915e08.scope.
Feb 12 19:49:44.815023 systemd[1]: cri-containerd-0c89b23f53d78d7d88879e4fe969da636228b62b78a5963784e1df9436915e08.scope: Deactivated successfully.
Feb 12 19:49:44.832929 env[1314]: time="2024-02-12T19:49:44.832871648Z" level=info msg="shim disconnected" id=0c89b23f53d78d7d88879e4fe969da636228b62b78a5963784e1df9436915e08 Feb 12 19:49:44.833159 env[1314]: time="2024-02-12T19:49:44.832930249Z" level=warning msg="cleaning up after shim disconnected" id=0c89b23f53d78d7d88879e4fe969da636228b62b78a5963784e1df9436915e08 namespace=k8s.io Feb 12 19:49:44.833159 env[1314]: time="2024-02-12T19:49:44.832943949Z" level=info msg="cleaning up dead shim" Feb 12 19:49:44.840193 env[1314]: time="2024-02-12T19:49:44.840154078Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:49:44Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4258 runtime=io.containerd.runc.v2\ntime=\"2024-02-12T19:49:44Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/0c89b23f53d78d7d88879e4fe969da636228b62b78a5963784e1df9436915e08/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Feb 12 19:49:44.840469 env[1314]: time="2024-02-12T19:49:44.840409979Z" level=error msg="copy shim log" error="read /proc/self/fd/39: file already closed" Feb 12 19:49:44.840788 env[1314]: time="2024-02-12T19:49:44.840744081Z" level=error msg="Failed to pipe stdout of container \"0c89b23f53d78d7d88879e4fe969da636228b62b78a5963784e1df9436915e08\"" error="reading from a closed fifo" Feb 12 19:49:44.840862 env[1314]: time="2024-02-12T19:49:44.840747181Z" level=error msg="Failed to pipe stderr of container \"0c89b23f53d78d7d88879e4fe969da636228b62b78a5963784e1df9436915e08\"" error="reading from a closed fifo" Feb 12 19:49:44.845995 env[1314]: time="2024-02-12T19:49:44.845947202Z" level=error msg="StartContainer for \"0c89b23f53d78d7d88879e4fe969da636228b62b78a5963784e1df9436915e08\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Feb 12 19:49:44.846235 kubelet[2397]: E0212 19:49:44.846212 2397 remote_runtime.go:343] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="0c89b23f53d78d7d88879e4fe969da636228b62b78a5963784e1df9436915e08" Feb 12 19:49:44.846354 kubelet[2397]: E0212 19:49:44.846344 2397 kuberuntime_manager.go:1209] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Feb 12 19:49:44.846354 kubelet[2397]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Feb 12 19:49:44.846354 kubelet[2397]: rm /hostbin/cilium-mount Feb 12 19:49:44.846471 kubelet[2397]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-nm7jc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-275qp_kube-system(7d849691-b14f-4c7d-b1fd-ab035e9b5b66): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Feb 12 19:49:44.846471 kubelet[2397]: E0212 19:49:44.846393 2397 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-275qp" podUID="7d849691-b14f-4c7d-b1fd-ab035e9b5b66" Feb 12 19:49:45.080827 sshd[4234]: Accepted publickey for core from 10.200.12.6 port 59262 ssh2: RSA SHA256:s7YymQosdnJ6BBn11oTaBnKtgbkZHlGvzOt+RffOmrs Feb 12 19:49:45.082338 sshd[4234]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 19:49:45.087475 systemd[1]: Started session-28.scope. Feb 12 19:49:45.087972 systemd-logind[1297]: New session 28 of user core. Feb 12 19:49:45.321622 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0c89b23f53d78d7d88879e4fe969da636228b62b78a5963784e1df9436915e08-rootfs.mount: Deactivated successfully. 
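Both StartContainer attempts above (bd3e3323... and 0c89b23f...) die on the same write: during container init the runtime applies the SELinux context from the dumped SecurityContext (Type:spc_t) to /proc/self/attr/keycreate, and the kernel rejects it with EINVAL because no matching SELinux policy is usable on this host. A minimal Go sketch of that failing step, assuming the runtime's labeling reduces to this single write (an illustration, not runc's actual code):

    // keycreate_sketch.go -- sketch of the container-init step that fails twice above.
    package main

    import (
        "fmt"
        "os"
    )

    // setKeyCreateLabel writes an SELinux context to /proc/self/attr/keycreate so
    // that kernel keyrings created afterwards inherit that label. On a kernel
    // without a usable SELinux policy for the requested type (spc_t here), the
    // write fails with EINVAL -- the "invalid argument" that aborts StartContainer.
    func setKeyCreateLabel(label string) error {
        if err := os.WriteFile("/proc/self/attr/keycreate", []byte(label), 0); err != nil {
            return fmt.Errorf("write /proc/self/attr/keycreate: %w", err)
        }
        return nil
    }

    func main() {
        // Context assembled from the SELinuxOptions dumped in the log above.
        if err := setKeyCreateLabel("system_u:system_r:spc_t:s0"); err != nil {
            fmt.Fprintln(os.Stderr, err) // expected outcome on a host like this one
        }
    }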
Feb 12 19:49:45.731962 kubelet[2397]: I0212 19:49:45.731923 2397 scope.go:117] "RemoveContainer" containerID="bd3e332315c94fa6aa19fdf0a75a9d0d725bfb0e0961ae4194ba72784e961d6c" Feb 12 19:49:45.732640 env[1314]: time="2024-02-12T19:49:45.732597248Z" level=info msg="StopPodSandbox for \"a8b34579573fbee060d35126bee277db0b6304215436265394eb559b7f6266de\"" Feb 12 19:49:45.733018 env[1314]: time="2024-02-12T19:49:45.732666048Z" level=info msg="Container to stop \"0c89b23f53d78d7d88879e4fe969da636228b62b78a5963784e1df9436915e08\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 12 19:49:45.733018 env[1314]: time="2024-02-12T19:49:45.732686048Z" level=info msg="Container to stop \"bd3e332315c94fa6aa19fdf0a75a9d0d725bfb0e0961ae4194ba72784e961d6c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 12 19:49:45.735053 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a8b34579573fbee060d35126bee277db0b6304215436265394eb559b7f6266de-shm.mount: Deactivated successfully. Feb 12 19:49:45.738501 env[1314]: time="2024-02-12T19:49:45.738466772Z" level=info msg="RemoveContainer for \"bd3e332315c94fa6aa19fdf0a75a9d0d725bfb0e0961ae4194ba72784e961d6c\"" Feb 12 19:49:45.747161 env[1314]: time="2024-02-12T19:49:45.747129508Z" level=info msg="RemoveContainer for \"bd3e332315c94fa6aa19fdf0a75a9d0d725bfb0e0961ae4194ba72784e961d6c\" returns successfully" Feb 12 19:49:45.753342 systemd[1]: cri-containerd-a8b34579573fbee060d35126bee277db0b6304215436265394eb559b7f6266de.scope: Deactivated successfully. Feb 12 19:49:45.803955 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a8b34579573fbee060d35126bee277db0b6304215436265394eb559b7f6266de-rootfs.mount: Deactivated successfully. Feb 12 19:49:45.821314 env[1314]: time="2024-02-12T19:49:45.821245812Z" level=info msg="shim disconnected" id=a8b34579573fbee060d35126bee277db0b6304215436265394eb559b7f6266de Feb 12 19:49:45.821314 env[1314]: time="2024-02-12T19:49:45.821309413Z" level=warning msg="cleaning up after shim disconnected" id=a8b34579573fbee060d35126bee277db0b6304215436265394eb559b7f6266de namespace=k8s.io Feb 12 19:49:45.821314 env[1314]: time="2024-02-12T19:49:45.821321113Z" level=info msg="cleaning up dead shim" Feb 12 19:49:45.832682 env[1314]: time="2024-02-12T19:49:45.832625159Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:49:45Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4295 runtime=io.containerd.runc.v2\n" Feb 12 19:49:45.833012 env[1314]: time="2024-02-12T19:49:45.832978361Z" level=info msg="TearDown network for sandbox \"a8b34579573fbee060d35126bee277db0b6304215436265394eb559b7f6266de\" successfully" Feb 12 19:49:45.833097 env[1314]: time="2024-02-12T19:49:45.833014961Z" level=info msg="StopPodSandbox for \"a8b34579573fbee060d35126bee277db0b6304215436265394eb559b7f6266de\" returns successfully" Feb 12 19:49:46.025676 kubelet[2397]: I0212 19:49:46.025538 2397 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7d849691-b14f-4c7d-b1fd-ab035e9b5b66-host-proc-sys-kernel\") pod \"7d849691-b14f-4c7d-b1fd-ab035e9b5b66\" (UID: \"7d849691-b14f-4c7d-b1fd-ab035e9b5b66\") " Feb 12 19:49:46.025676 kubelet[2397]: I0212 19:49:46.025618 2397 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7d849691-b14f-4c7d-b1fd-ab035e9b5b66-clustermesh-secrets\") pod 
\"7d849691-b14f-4c7d-b1fd-ab035e9b5b66\" (UID: \"7d849691-b14f-4c7d-b1fd-ab035e9b5b66\") " Feb 12 19:49:46.025676 kubelet[2397]: I0212 19:49:46.025642 2397 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7d849691-b14f-4c7d-b1fd-ab035e9b5b66-host-proc-sys-net\") pod \"7d849691-b14f-4c7d-b1fd-ab035e9b5b66\" (UID: \"7d849691-b14f-4c7d-b1fd-ab035e9b5b66\") " Feb 12 19:49:46.025993 kubelet[2397]: I0212 19:49:46.025686 2397 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/7d849691-b14f-4c7d-b1fd-ab035e9b5b66-cilium-ipsec-secrets\") pod \"7d849691-b14f-4c7d-b1fd-ab035e9b5b66\" (UID: \"7d849691-b14f-4c7d-b1fd-ab035e9b5b66\") " Feb 12 19:49:46.025993 kubelet[2397]: I0212 19:49:46.025737 2397 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7d849691-b14f-4c7d-b1fd-ab035e9b5b66-hostproc\") pod \"7d849691-b14f-4c7d-b1fd-ab035e9b5b66\" (UID: \"7d849691-b14f-4c7d-b1fd-ab035e9b5b66\") " Feb 12 19:49:46.025993 kubelet[2397]: I0212 19:49:46.025767 2397 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7d849691-b14f-4c7d-b1fd-ab035e9b5b66-cilium-config-path\") pod \"7d849691-b14f-4c7d-b1fd-ab035e9b5b66\" (UID: \"7d849691-b14f-4c7d-b1fd-ab035e9b5b66\") " Feb 12 19:49:46.025993 kubelet[2397]: I0212 19:49:46.025803 2397 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7d849691-b14f-4c7d-b1fd-ab035e9b5b66-xtables-lock\") pod \"7d849691-b14f-4c7d-b1fd-ab035e9b5b66\" (UID: \"7d849691-b14f-4c7d-b1fd-ab035e9b5b66\") " Feb 12 19:49:46.025993 kubelet[2397]: I0212 19:49:46.025838 2397 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7d849691-b14f-4c7d-b1fd-ab035e9b5b66-cilium-cgroup\") pod \"7d849691-b14f-4c7d-b1fd-ab035e9b5b66\" (UID: \"7d849691-b14f-4c7d-b1fd-ab035e9b5b66\") " Feb 12 19:49:46.025993 kubelet[2397]: I0212 19:49:46.025869 2397 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7d849691-b14f-4c7d-b1fd-ab035e9b5b66-cilium-run\") pod \"7d849691-b14f-4c7d-b1fd-ab035e9b5b66\" (UID: \"7d849691-b14f-4c7d-b1fd-ab035e9b5b66\") " Feb 12 19:49:46.025993 kubelet[2397]: I0212 19:49:46.025892 2397 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7d849691-b14f-4c7d-b1fd-ab035e9b5b66-etc-cni-netd\") pod \"7d849691-b14f-4c7d-b1fd-ab035e9b5b66\" (UID: \"7d849691-b14f-4c7d-b1fd-ab035e9b5b66\") " Feb 12 19:49:46.025993 kubelet[2397]: I0212 19:49:46.025921 2397 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7d849691-b14f-4c7d-b1fd-ab035e9b5b66-hubble-tls\") pod \"7d849691-b14f-4c7d-b1fd-ab035e9b5b66\" (UID: \"7d849691-b14f-4c7d-b1fd-ab035e9b5b66\") " Feb 12 19:49:46.025993 kubelet[2397]: I0212 19:49:46.025960 2397 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7d849691-b14f-4c7d-b1fd-ab035e9b5b66-bpf-maps\") pod \"7d849691-b14f-4c7d-b1fd-ab035e9b5b66\" (UID: 
\"7d849691-b14f-4c7d-b1fd-ab035e9b5b66\") " Feb 12 19:49:46.025993 kubelet[2397]: I0212 19:49:46.025986 2397 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7d849691-b14f-4c7d-b1fd-ab035e9b5b66-cni-path\") pod \"7d849691-b14f-4c7d-b1fd-ab035e9b5b66\" (UID: \"7d849691-b14f-4c7d-b1fd-ab035e9b5b66\") " Feb 12 19:49:46.026381 kubelet[2397]: I0212 19:49:46.026010 2397 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7d849691-b14f-4c7d-b1fd-ab035e9b5b66-lib-modules\") pod \"7d849691-b14f-4c7d-b1fd-ab035e9b5b66\" (UID: \"7d849691-b14f-4c7d-b1fd-ab035e9b5b66\") " Feb 12 19:49:46.026381 kubelet[2397]: I0212 19:49:46.026053 2397 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nm7jc\" (UniqueName: \"kubernetes.io/projected/7d849691-b14f-4c7d-b1fd-ab035e9b5b66-kube-api-access-nm7jc\") pod \"7d849691-b14f-4c7d-b1fd-ab035e9b5b66\" (UID: \"7d849691-b14f-4c7d-b1fd-ab035e9b5b66\") " Feb 12 19:49:46.026571 kubelet[2397]: I0212 19:49:46.026532 2397 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7d849691-b14f-4c7d-b1fd-ab035e9b5b66-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "7d849691-b14f-4c7d-b1fd-ab035e9b5b66" (UID: "7d849691-b14f-4c7d-b1fd-ab035e9b5b66"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:49:46.026703 kubelet[2397]: I0212 19:49:46.026675 2397 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7d849691-b14f-4c7d-b1fd-ab035e9b5b66-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "7d849691-b14f-4c7d-b1fd-ab035e9b5b66" (UID: "7d849691-b14f-4c7d-b1fd-ab035e9b5b66"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:49:46.026817 kubelet[2397]: I0212 19:49:46.026793 2397 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7d849691-b14f-4c7d-b1fd-ab035e9b5b66-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "7d849691-b14f-4c7d-b1fd-ab035e9b5b66" (UID: "7d849691-b14f-4c7d-b1fd-ab035e9b5b66"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:49:46.026886 kubelet[2397]: I0212 19:49:46.026849 2397 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7d849691-b14f-4c7d-b1fd-ab035e9b5b66-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "7d849691-b14f-4c7d-b1fd-ab035e9b5b66" (UID: "7d849691-b14f-4c7d-b1fd-ab035e9b5b66"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:49:46.026886 kubelet[2397]: I0212 19:49:46.026875 2397 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7d849691-b14f-4c7d-b1fd-ab035e9b5b66-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "7d849691-b14f-4c7d-b1fd-ab035e9b5b66" (UID: "7d849691-b14f-4c7d-b1fd-ab035e9b5b66"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:49:46.028609 kubelet[2397]: I0212 19:49:46.027170 2397 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7d849691-b14f-4c7d-b1fd-ab035e9b5b66-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "7d849691-b14f-4c7d-b1fd-ab035e9b5b66" (UID: "7d849691-b14f-4c7d-b1fd-ab035e9b5b66"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:49:46.028780 kubelet[2397]: I0212 19:49:46.027566 2397 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7d849691-b14f-4c7d-b1fd-ab035e9b5b66-hostproc" (OuterVolumeSpecName: "hostproc") pod "7d849691-b14f-4c7d-b1fd-ab035e9b5b66" (UID: "7d849691-b14f-4c7d-b1fd-ab035e9b5b66"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:49:46.029274 kubelet[2397]: I0212 19:49:46.029243 2397 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7d849691-b14f-4c7d-b1fd-ab035e9b5b66-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "7d849691-b14f-4c7d-b1fd-ab035e9b5b66" (UID: "7d849691-b14f-4c7d-b1fd-ab035e9b5b66"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:49:46.029374 kubelet[2397]: I0212 19:49:46.029299 2397 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7d849691-b14f-4c7d-b1fd-ab035e9b5b66-cni-path" (OuterVolumeSpecName: "cni-path") pod "7d849691-b14f-4c7d-b1fd-ab035e9b5b66" (UID: "7d849691-b14f-4c7d-b1fd-ab035e9b5b66"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:49:46.029374 kubelet[2397]: I0212 19:49:46.029328 2397 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7d849691-b14f-4c7d-b1fd-ab035e9b5b66-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "7d849691-b14f-4c7d-b1fd-ab035e9b5b66" (UID: "7d849691-b14f-4c7d-b1fd-ab035e9b5b66"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 19:49:46.031155 kubelet[2397]: I0212 19:49:46.031108 2397 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7d849691-b14f-4c7d-b1fd-ab035e9b5b66-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "7d849691-b14f-4c7d-b1fd-ab035e9b5b66" (UID: "7d849691-b14f-4c7d-b1fd-ab035e9b5b66"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 12 19:49:46.034204 systemd[1]: var-lib-kubelet-pods-7d849691\x2db14f\x2d4c7d\x2db1fd\x2dab035e9b5b66-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 12 19:49:46.036092 kubelet[2397]: I0212 19:49:46.036066 2397 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d849691-b14f-4c7d-b1fd-ab035e9b5b66-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "7d849691-b14f-4c7d-b1fd-ab035e9b5b66" (UID: "7d849691-b14f-4c7d-b1fd-ab035e9b5b66"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 12 19:49:46.038205 systemd[1]: var-lib-kubelet-pods-7d849691\x2db14f\x2d4c7d\x2db1fd\x2dab035e9b5b66-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dnm7jc.mount: Deactivated successfully. 
Feb 12 19:49:46.038964 kubelet[2397]: I0212 19:49:46.038834 2397 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d849691-b14f-4c7d-b1fd-ab035e9b5b66-kube-api-access-nm7jc" (OuterVolumeSpecName: "kube-api-access-nm7jc") pod "7d849691-b14f-4c7d-b1fd-ab035e9b5b66" (UID: "7d849691-b14f-4c7d-b1fd-ab035e9b5b66"). InnerVolumeSpecName "kube-api-access-nm7jc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 12 19:49:46.042020 kubelet[2397]: I0212 19:49:46.041990 2397 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d849691-b14f-4c7d-b1fd-ab035e9b5b66-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "7d849691-b14f-4c7d-b1fd-ab035e9b5b66" (UID: "7d849691-b14f-4c7d-b1fd-ab035e9b5b66"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 12 19:49:46.042239 kubelet[2397]: I0212 19:49:46.042217 2397 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d849691-b14f-4c7d-b1fd-ab035e9b5b66-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "7d849691-b14f-4c7d-b1fd-ab035e9b5b66" (UID: "7d849691-b14f-4c7d-b1fd-ab035e9b5b66"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 12 19:49:46.127314 kubelet[2397]: I0212 19:49:46.126667 2397 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7d849691-b14f-4c7d-b1fd-ab035e9b5b66-cni-path\") on node \"ci-3510.3.2-a-e615f4b643\" DevicePath \"\"" Feb 12 19:49:46.127314 kubelet[2397]: I0212 19:49:46.126739 2397 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7d849691-b14f-4c7d-b1fd-ab035e9b5b66-lib-modules\") on node \"ci-3510.3.2-a-e615f4b643\" DevicePath \"\"" Feb 12 19:49:46.127314 kubelet[2397]: I0212 19:49:46.126764 2397 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-nm7jc\" (UniqueName: \"kubernetes.io/projected/7d849691-b14f-4c7d-b1fd-ab035e9b5b66-kube-api-access-nm7jc\") on node \"ci-3510.3.2-a-e615f4b643\" DevicePath \"\"" Feb 12 19:49:46.127314 kubelet[2397]: I0212 19:49:46.126787 2397 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7d849691-b14f-4c7d-b1fd-ab035e9b5b66-host-proc-sys-kernel\") on node \"ci-3510.3.2-a-e615f4b643\" DevicePath \"\"" Feb 12 19:49:46.127314 kubelet[2397]: I0212 19:49:46.126808 2397 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7d849691-b14f-4c7d-b1fd-ab035e9b5b66-clustermesh-secrets\") on node \"ci-3510.3.2-a-e615f4b643\" DevicePath \"\"" Feb 12 19:49:46.127314 kubelet[2397]: I0212 19:49:46.126827 2397 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7d849691-b14f-4c7d-b1fd-ab035e9b5b66-host-proc-sys-net\") on node \"ci-3510.3.2-a-e615f4b643\" DevicePath \"\"" Feb 12 19:49:46.127314 kubelet[2397]: I0212 19:49:46.126845 2397 reconciler_common.go:300] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/7d849691-b14f-4c7d-b1fd-ab035e9b5b66-cilium-ipsec-secrets\") on node \"ci-3510.3.2-a-e615f4b643\" DevicePath \"\"" Feb 12 19:49:46.127314 kubelet[2397]: I0212 19:49:46.126864 2397 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: 
\"kubernetes.io/host-path/7d849691-b14f-4c7d-b1fd-ab035e9b5b66-hostproc\") on node \"ci-3510.3.2-a-e615f4b643\" DevicePath \"\"" Feb 12 19:49:46.127314 kubelet[2397]: I0212 19:49:46.126882 2397 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7d849691-b14f-4c7d-b1fd-ab035e9b5b66-cilium-config-path\") on node \"ci-3510.3.2-a-e615f4b643\" DevicePath \"\"" Feb 12 19:49:46.127314 kubelet[2397]: I0212 19:49:46.126900 2397 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7d849691-b14f-4c7d-b1fd-ab035e9b5b66-xtables-lock\") on node \"ci-3510.3.2-a-e615f4b643\" DevicePath \"\"" Feb 12 19:49:46.127314 kubelet[2397]: I0212 19:49:46.126917 2397 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7d849691-b14f-4c7d-b1fd-ab035e9b5b66-cilium-cgroup\") on node \"ci-3510.3.2-a-e615f4b643\" DevicePath \"\"" Feb 12 19:49:46.127314 kubelet[2397]: I0212 19:49:46.126937 2397 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7d849691-b14f-4c7d-b1fd-ab035e9b5b66-cilium-run\") on node \"ci-3510.3.2-a-e615f4b643\" DevicePath \"\"" Feb 12 19:49:46.127314 kubelet[2397]: I0212 19:49:46.126954 2397 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7d849691-b14f-4c7d-b1fd-ab035e9b5b66-etc-cni-netd\") on node \"ci-3510.3.2-a-e615f4b643\" DevicePath \"\"" Feb 12 19:49:46.127314 kubelet[2397]: I0212 19:49:46.126971 2397 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7d849691-b14f-4c7d-b1fd-ab035e9b5b66-hubble-tls\") on node \"ci-3510.3.2-a-e615f4b643\" DevicePath \"\"" Feb 12 19:49:46.127314 kubelet[2397]: I0212 19:49:46.126985 2397 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7d849691-b14f-4c7d-b1fd-ab035e9b5b66-bpf-maps\") on node \"ci-3510.3.2-a-e615f4b643\" DevicePath \"\"" Feb 12 19:49:46.321699 systemd[1]: var-lib-kubelet-pods-7d849691\x2db14f\x2d4c7d\x2db1fd\x2dab035e9b5b66-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Feb 12 19:49:46.321833 systemd[1]: var-lib-kubelet-pods-7d849691\x2db14f\x2d4c7d\x2db1fd\x2dab035e9b5b66-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 12 19:49:46.737025 kubelet[2397]: I0212 19:49:46.736675 2397 scope.go:117] "RemoveContainer" containerID="0c89b23f53d78d7d88879e4fe969da636228b62b78a5963784e1df9436915e08" Feb 12 19:49:46.741969 env[1314]: time="2024-02-12T19:49:46.741223789Z" level=info msg="RemoveContainer for \"0c89b23f53d78d7d88879e4fe969da636228b62b78a5963784e1df9436915e08\"" Feb 12 19:49:46.743267 systemd[1]: Removed slice kubepods-burstable-pod7d849691_b14f_4c7d_b1fd_ab035e9b5b66.slice. 
Feb 12 19:49:46.749504 env[1314]: time="2024-02-12T19:49:46.749397123Z" level=info msg="RemoveContainer for \"0c89b23f53d78d7d88879e4fe969da636228b62b78a5963784e1df9436915e08\" returns successfully" Feb 12 19:49:46.857322 kubelet[2397]: I0212 19:49:46.857285 2397 topology_manager.go:215] "Topology Admit Handler" podUID="91310292-50f3-42ef-8d25-eacbdf880207" podNamespace="kube-system" podName="cilium-vcdnb" Feb 12 19:49:46.857666 kubelet[2397]: E0212 19:49:46.857648 2397 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7d849691-b14f-4c7d-b1fd-ab035e9b5b66" containerName="mount-cgroup" Feb 12 19:49:46.857824 kubelet[2397]: I0212 19:49:46.857812 2397 memory_manager.go:346] "RemoveStaleState removing state" podUID="7d849691-b14f-4c7d-b1fd-ab035e9b5b66" containerName="mount-cgroup" Feb 12 19:49:46.857924 kubelet[2397]: I0212 19:49:46.857914 2397 memory_manager.go:346] "RemoveStaleState removing state" podUID="7d849691-b14f-4c7d-b1fd-ab035e9b5b66" containerName="mount-cgroup" Feb 12 19:49:46.858032 kubelet[2397]: E0212 19:49:46.858022 2397 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7d849691-b14f-4c7d-b1fd-ab035e9b5b66" containerName="mount-cgroup" Feb 12 19:49:46.864673 systemd[1]: Created slice kubepods-burstable-pod91310292_50f3_42ef_8d25_eacbdf880207.slice. Feb 12 19:49:46.957594 kubelet[2397]: W0212 19:49:46.957541 2397 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7d849691_b14f_4c7d_b1fd_ab035e9b5b66.slice/cri-containerd-bd3e332315c94fa6aa19fdf0a75a9d0d725bfb0e0961ae4194ba72784e961d6c.scope WatchSource:0}: container "bd3e332315c94fa6aa19fdf0a75a9d0d725bfb0e0961ae4194ba72784e961d6c" in namespace "k8s.io": not found Feb 12 19:49:47.031484 kubelet[2397]: I0212 19:49:47.031361 2397 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/91310292-50f3-42ef-8d25-eacbdf880207-clustermesh-secrets\") pod \"cilium-vcdnb\" (UID: \"91310292-50f3-42ef-8d25-eacbdf880207\") " pod="kube-system/cilium-vcdnb" Feb 12 19:49:47.031780 kubelet[2397]: I0212 19:49:47.031759 2397 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/91310292-50f3-42ef-8d25-eacbdf880207-bpf-maps\") pod \"cilium-vcdnb\" (UID: \"91310292-50f3-42ef-8d25-eacbdf880207\") " pod="kube-system/cilium-vcdnb" Feb 12 19:49:47.031941 kubelet[2397]: I0212 19:49:47.031928 2397 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/91310292-50f3-42ef-8d25-eacbdf880207-xtables-lock\") pod \"cilium-vcdnb\" (UID: \"91310292-50f3-42ef-8d25-eacbdf880207\") " pod="kube-system/cilium-vcdnb" Feb 12 19:49:47.032065 kubelet[2397]: I0212 19:49:47.032054 2397 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/91310292-50f3-42ef-8d25-eacbdf880207-host-proc-sys-net\") pod \"cilium-vcdnb\" (UID: \"91310292-50f3-42ef-8d25-eacbdf880207\") " pod="kube-system/cilium-vcdnb" Feb 12 19:49:47.032184 kubelet[2397]: I0212 19:49:47.032174 2397 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/91310292-50f3-42ef-8d25-eacbdf880207-etc-cni-netd\") pod \"cilium-vcdnb\" 
(UID: \"91310292-50f3-42ef-8d25-eacbdf880207\") " pod="kube-system/cilium-vcdnb" Feb 12 19:49:47.032477 kubelet[2397]: I0212 19:49:47.032340 2397 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/91310292-50f3-42ef-8d25-eacbdf880207-lib-modules\") pod \"cilium-vcdnb\" (UID: \"91310292-50f3-42ef-8d25-eacbdf880207\") " pod="kube-system/cilium-vcdnb" Feb 12 19:49:47.032477 kubelet[2397]: I0212 19:49:47.032387 2397 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-skmjk\" (UniqueName: \"kubernetes.io/projected/91310292-50f3-42ef-8d25-eacbdf880207-kube-api-access-skmjk\") pod \"cilium-vcdnb\" (UID: \"91310292-50f3-42ef-8d25-eacbdf880207\") " pod="kube-system/cilium-vcdnb" Feb 12 19:49:47.032721 kubelet[2397]: I0212 19:49:47.032534 2397 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/91310292-50f3-42ef-8d25-eacbdf880207-hostproc\") pod \"cilium-vcdnb\" (UID: \"91310292-50f3-42ef-8d25-eacbdf880207\") " pod="kube-system/cilium-vcdnb" Feb 12 19:49:47.032721 kubelet[2397]: I0212 19:49:47.032598 2397 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/91310292-50f3-42ef-8d25-eacbdf880207-host-proc-sys-kernel\") pod \"cilium-vcdnb\" (UID: \"91310292-50f3-42ef-8d25-eacbdf880207\") " pod="kube-system/cilium-vcdnb" Feb 12 19:49:47.032721 kubelet[2397]: I0212 19:49:47.032674 2397 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/91310292-50f3-42ef-8d25-eacbdf880207-cilium-config-path\") pod \"cilium-vcdnb\" (UID: \"91310292-50f3-42ef-8d25-eacbdf880207\") " pod="kube-system/cilium-vcdnb" Feb 12 19:49:47.032928 kubelet[2397]: I0212 19:49:47.032747 2397 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/91310292-50f3-42ef-8d25-eacbdf880207-hubble-tls\") pod \"cilium-vcdnb\" (UID: \"91310292-50f3-42ef-8d25-eacbdf880207\") " pod="kube-system/cilium-vcdnb" Feb 12 19:49:47.032928 kubelet[2397]: I0212 19:49:47.032842 2397 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/91310292-50f3-42ef-8d25-eacbdf880207-cilium-cgroup\") pod \"cilium-vcdnb\" (UID: \"91310292-50f3-42ef-8d25-eacbdf880207\") " pod="kube-system/cilium-vcdnb" Feb 12 19:49:47.032928 kubelet[2397]: I0212 19:49:47.032910 2397 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/91310292-50f3-42ef-8d25-eacbdf880207-cni-path\") pod \"cilium-vcdnb\" (UID: \"91310292-50f3-42ef-8d25-eacbdf880207\") " pod="kube-system/cilium-vcdnb" Feb 12 19:49:47.033105 kubelet[2397]: I0212 19:49:47.032963 2397 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/91310292-50f3-42ef-8d25-eacbdf880207-cilium-run\") pod \"cilium-vcdnb\" (UID: \"91310292-50f3-42ef-8d25-eacbdf880207\") " pod="kube-system/cilium-vcdnb" Feb 12 19:49:47.033105 kubelet[2397]: I0212 19:49:47.033010 2397 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/91310292-50f3-42ef-8d25-eacbdf880207-cilium-ipsec-secrets\") pod \"cilium-vcdnb\" (UID: \"91310292-50f3-42ef-8d25-eacbdf880207\") " pod="kube-system/cilium-vcdnb" Feb 12 19:49:47.168602 env[1314]: time="2024-02-12T19:49:47.168211040Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-vcdnb,Uid:91310292-50f3-42ef-8d25-eacbdf880207,Namespace:kube-system,Attempt:0,}" Feb 12 19:49:47.198914 kubelet[2397]: I0212 19:49:47.198881 2397 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="7d849691-b14f-4c7d-b1fd-ab035e9b5b66" path="/var/lib/kubelet/pods/7d849691-b14f-4c7d-b1fd-ab035e9b5b66/volumes" Feb 12 19:49:47.200309 env[1314]: time="2024-02-12T19:49:47.200239772Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 19:49:47.200309 env[1314]: time="2024-02-12T19:49:47.200281072Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 19:49:47.200501 env[1314]: time="2024-02-12T19:49:47.200294772Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 19:49:47.200751 env[1314]: time="2024-02-12T19:49:47.200683773Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/885f112342592e3c6decab3fbc9fada7cf6460e0390d2fd65e698bc9c5e4b4a8 pid=4322 runtime=io.containerd.runc.v2 Feb 12 19:49:47.213006 systemd[1]: Started cri-containerd-885f112342592e3c6decab3fbc9fada7cf6460e0390d2fd65e698bc9c5e4b4a8.scope. Feb 12 19:49:47.238936 env[1314]: time="2024-02-12T19:49:47.238884530Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-vcdnb,Uid:91310292-50f3-42ef-8d25-eacbdf880207,Namespace:kube-system,Attempt:0,} returns sandbox id \"885f112342592e3c6decab3fbc9fada7cf6460e0390d2fd65e698bc9c5e4b4a8\"" Feb 12 19:49:47.242172 env[1314]: time="2024-02-12T19:49:47.242136143Z" level=info msg="CreateContainer within sandbox \"885f112342592e3c6decab3fbc9fada7cf6460e0390d2fd65e698bc9c5e4b4a8\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 12 19:49:47.267143 env[1314]: time="2024-02-12T19:49:47.267107246Z" level=info msg="CreateContainer within sandbox \"885f112342592e3c6decab3fbc9fada7cf6460e0390d2fd65e698bc9c5e4b4a8\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"194f5a71be8e20dddc5d4444b53b33be1723f13381c318696088c21037e1ea7b\"" Feb 12 19:49:47.268757 env[1314]: time="2024-02-12T19:49:47.267721248Z" level=info msg="StartContainer for \"194f5a71be8e20dddc5d4444b53b33be1723f13381c318696088c21037e1ea7b\"" Feb 12 19:49:47.283732 systemd[1]: Started cri-containerd-194f5a71be8e20dddc5d4444b53b33be1723f13381c318696088c21037e1ea7b.scope. 
Feb 12 19:49:47.287607 kubelet[2397]: E0212 19:49:47.287561 2397 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 12 19:49:47.323716 env[1314]: time="2024-02-12T19:49:47.322867574Z" level=info msg="StartContainer for \"194f5a71be8e20dddc5d4444b53b33be1723f13381c318696088c21037e1ea7b\" returns successfully" Feb 12 19:49:47.330075 systemd[1]: cri-containerd-194f5a71be8e20dddc5d4444b53b33be1723f13381c318696088c21037e1ea7b.scope: Deactivated successfully. Feb 12 19:49:47.356497 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-194f5a71be8e20dddc5d4444b53b33be1723f13381c318696088c21037e1ea7b-rootfs.mount: Deactivated successfully. Feb 12 19:49:47.413845 env[1314]: time="2024-02-12T19:49:47.413769147Z" level=info msg="shim disconnected" id=194f5a71be8e20dddc5d4444b53b33be1723f13381c318696088c21037e1ea7b Feb 12 19:49:47.414138 env[1314]: time="2024-02-12T19:49:47.414105548Z" level=warning msg="cleaning up after shim disconnected" id=194f5a71be8e20dddc5d4444b53b33be1723f13381c318696088c21037e1ea7b namespace=k8s.io Feb 12 19:49:47.414138 env[1314]: time="2024-02-12T19:49:47.414129948Z" level=info msg="cleaning up dead shim" Feb 12 19:49:47.422393 env[1314]: time="2024-02-12T19:49:47.422355382Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:49:47Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4401 runtime=io.containerd.runc.v2\n" Feb 12 19:49:47.756269 env[1314]: time="2024-02-12T19:49:47.756222750Z" level=info msg="CreateContainer within sandbox \"885f112342592e3c6decab3fbc9fada7cf6460e0390d2fd65e698bc9c5e4b4a8\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 12 19:49:47.779630 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2574826161.mount: Deactivated successfully. Feb 12 19:49:47.788864 env[1314]: time="2024-02-12T19:49:47.788803583Z" level=info msg="CreateContainer within sandbox \"885f112342592e3c6decab3fbc9fada7cf6460e0390d2fd65e698bc9c5e4b4a8\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"beb951a2a6d09c8464d7611831eccbe0e096830bf5ce1d669b59410498f31462\"" Feb 12 19:49:47.789645 env[1314]: time="2024-02-12T19:49:47.789429986Z" level=info msg="StartContainer for \"beb951a2a6d09c8464d7611831eccbe0e096830bf5ce1d669b59410498f31462\"" Feb 12 19:49:47.806222 systemd[1]: Started cri-containerd-beb951a2a6d09c8464d7611831eccbe0e096830bf5ce1d669b59410498f31462.scope. Feb 12 19:49:47.837583 env[1314]: time="2024-02-12T19:49:47.837541683Z" level=info msg="StartContainer for \"beb951a2a6d09c8464d7611831eccbe0e096830bf5ce1d669b59410498f31462\" returns successfully" Feb 12 19:49:47.844079 systemd[1]: cri-containerd-beb951a2a6d09c8464d7611831eccbe0e096830bf5ce1d669b59410498f31462.scope: Deactivated successfully. 
Feb 12 19:49:47.871448 env[1314]: time="2024-02-12T19:49:47.871399222Z" level=info msg="shim disconnected" id=beb951a2a6d09c8464d7611831eccbe0e096830bf5ce1d669b59410498f31462 Feb 12 19:49:47.871448 env[1314]: time="2024-02-12T19:49:47.871446422Z" level=warning msg="cleaning up after shim disconnected" id=beb951a2a6d09c8464d7611831eccbe0e096830bf5ce1d669b59410498f31462 namespace=k8s.io Feb 12 19:49:47.871747 env[1314]: time="2024-02-12T19:49:47.871457522Z" level=info msg="cleaning up dead shim" Feb 12 19:49:47.879640 env[1314]: time="2024-02-12T19:49:47.879604455Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:49:47Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4468 runtime=io.containerd.runc.v2\n" Feb 12 19:49:48.758417 env[1314]: time="2024-02-12T19:49:48.758372551Z" level=info msg="CreateContainer within sandbox \"885f112342592e3c6decab3fbc9fada7cf6460e0390d2fd65e698bc9c5e4b4a8\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 12 19:49:48.796718 env[1314]: time="2024-02-12T19:49:48.796643007Z" level=info msg="CreateContainer within sandbox \"885f112342592e3c6decab3fbc9fada7cf6460e0390d2fd65e698bc9c5e4b4a8\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"3277d10eb02aad46bfc48cbff8c6f8ff96959a9c5ab163dff8dc78f6838b7f48\"" Feb 12 19:49:48.797444 env[1314]: time="2024-02-12T19:49:48.797393410Z" level=info msg="StartContainer for \"3277d10eb02aad46bfc48cbff8c6f8ff96959a9c5ab163dff8dc78f6838b7f48\"" Feb 12 19:49:48.823942 systemd[1]: Started cri-containerd-3277d10eb02aad46bfc48cbff8c6f8ff96959a9c5ab163dff8dc78f6838b7f48.scope. Feb 12 19:49:48.860537 systemd[1]: cri-containerd-3277d10eb02aad46bfc48cbff8c6f8ff96959a9c5ab163dff8dc78f6838b7f48.scope: Deactivated successfully. Feb 12 19:49:48.861487 env[1314]: time="2024-02-12T19:49:48.861450072Z" level=info msg="StartContainer for \"3277d10eb02aad46bfc48cbff8c6f8ff96959a9c5ab163dff8dc78f6838b7f48\" returns successfully" Feb 12 19:49:48.889371 env[1314]: time="2024-02-12T19:49:48.889318286Z" level=info msg="shim disconnected" id=3277d10eb02aad46bfc48cbff8c6f8ff96959a9c5ab163dff8dc78f6838b7f48 Feb 12 19:49:48.889371 env[1314]: time="2024-02-12T19:49:48.889367087Z" level=warning msg="cleaning up after shim disconnected" id=3277d10eb02aad46bfc48cbff8c6f8ff96959a9c5ab163dff8dc78f6838b7f48 namespace=k8s.io Feb 12 19:49:48.889673 env[1314]: time="2024-02-12T19:49:48.889379087Z" level=info msg="cleaning up dead shim" Feb 12 19:49:48.896542 env[1314]: time="2024-02-12T19:49:48.896504316Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:49:48Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4525 runtime=io.containerd.runc.v2\n" Feb 12 19:49:49.321529 systemd[1]: run-containerd-runc-k8s.io-3277d10eb02aad46bfc48cbff8c6f8ff96959a9c5ab163dff8dc78f6838b7f48-runc.V4B0hE.mount: Deactivated successfully. Feb 12 19:49:49.321652 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3277d10eb02aad46bfc48cbff8c6f8ff96959a9c5ab163dff8dc78f6838b7f48-rootfs.mount: Deactivated successfully. 
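From 19:49:47 onward the replacement pod cilium-vcdnb walks its init chain in strict order: mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs, then clean-cilium-state, and only after all of them exit successfully does the long-lived cilium-agent start. A sketch of that sequencing contract, each step gating the next (a simplification of kubelet behavior, not its implementation):

    // initchain_sketch.go -- init containers run one at a time, in order; a
    // failure (like mount-cgroup's keycreate error earlier) blocks the chain
    // and the pod is retried.
    package main

    import "fmt"

    func runInitChain(steps []string, run func(string) error) error {
        for _, s := range steps {
            fmt.Println("StartContainer for", s)
            if err := run(s); err != nil {
                return fmt.Errorf("init container %s: %w", s, err)
            }
        }
        return nil
    }

    func main() {
        steps := []string{"mount-cgroup", "apply-sysctl-overwrites", "mount-bpf-fs", "clean-cilium-state"}
        if err := runInitChain(steps, func(string) error { return nil }); err != nil {
            fmt.Println(err)
            return
        }
        fmt.Println("all init containers done; starting cilium-agent")
    }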
Feb 12 19:49:49.762909 env[1314]: time="2024-02-12T19:49:49.762789954Z" level=info msg="CreateContainer within sandbox \"885f112342592e3c6decab3fbc9fada7cf6460e0390d2fd65e698bc9c5e4b4a8\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 12 19:49:49.800032 env[1314]: time="2024-02-12T19:49:49.799989006Z" level=info msg="CreateContainer within sandbox \"885f112342592e3c6decab3fbc9fada7cf6460e0390d2fd65e698bc9c5e4b4a8\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"7e573b5d83badaa0cb194f632c2ea104a8c81d7354e925f8cd094e11af366f79\"" Feb 12 19:49:49.800779 env[1314]: time="2024-02-12T19:49:49.800744709Z" level=info msg="StartContainer for \"7e573b5d83badaa0cb194f632c2ea104a8c81d7354e925f8cd094e11af366f79\"" Feb 12 19:49:49.828146 systemd[1]: Started cri-containerd-7e573b5d83badaa0cb194f632c2ea104a8c81d7354e925f8cd094e11af366f79.scope. Feb 12 19:49:49.854304 systemd[1]: cri-containerd-7e573b5d83badaa0cb194f632c2ea104a8c81d7354e925f8cd094e11af366f79.scope: Deactivated successfully. Feb 12 19:49:49.860097 env[1314]: time="2024-02-12T19:49:49.860048352Z" level=info msg="StartContainer for \"7e573b5d83badaa0cb194f632c2ea104a8c81d7354e925f8cd094e11af366f79\" returns successfully" Feb 12 19:49:49.894538 env[1314]: time="2024-02-12T19:49:49.894485392Z" level=info msg="shim disconnected" id=7e573b5d83badaa0cb194f632c2ea104a8c81d7354e925f8cd094e11af366f79 Feb 12 19:49:49.894538 env[1314]: time="2024-02-12T19:49:49.894535392Z" level=warning msg="cleaning up after shim disconnected" id=7e573b5d83badaa0cb194f632c2ea104a8c81d7354e925f8cd094e11af366f79 namespace=k8s.io Feb 12 19:49:49.894873 env[1314]: time="2024-02-12T19:49:49.894546293Z" level=info msg="cleaning up dead shim" Feb 12 19:49:49.902065 env[1314]: time="2024-02-12T19:49:49.902026923Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:49:49Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4578 runtime=io.containerd.runc.v2\n" Feb 12 19:49:50.321876 systemd[1]: run-containerd-runc-k8s.io-7e573b5d83badaa0cb194f632c2ea104a8c81d7354e925f8cd094e11af366f79-runc.X4pkd3.mount: Deactivated successfully. Feb 12 19:49:50.322020 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7e573b5d83badaa0cb194f632c2ea104a8c81d7354e925f8cd094e11af366f79-rootfs.mount: Deactivated successfully. Feb 12 19:49:50.768619 env[1314]: time="2024-02-12T19:49:50.768496757Z" level=info msg="CreateContainer within sandbox \"885f112342592e3c6decab3fbc9fada7cf6460e0390d2fd65e698bc9c5e4b4a8\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 12 19:49:50.817156 env[1314]: time="2024-02-12T19:49:50.817091955Z" level=info msg="CreateContainer within sandbox \"885f112342592e3c6decab3fbc9fada7cf6460e0390d2fd65e698bc9c5e4b4a8\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"b4cb0d189c283d3f9157f296ed30fc8ae63b82bfc01c3c735b01cc7544ba22b3\"" Feb 12 19:49:50.817854 env[1314]: time="2024-02-12T19:49:50.817809958Z" level=info msg="StartContainer for \"b4cb0d189c283d3f9157f296ed30fc8ae63b82bfc01c3c735b01cc7544ba22b3\"" Feb 12 19:49:50.849402 systemd[1]: Started cri-containerd-b4cb0d189c283d3f9157f296ed30fc8ae63b82bfc01c3c735b01cc7544ba22b3.scope. 
Feb 12 19:49:50.882018 env[1314]: time="2024-02-12T19:49:50.881967620Z" level=info msg="StartContainer for \"b4cb0d189c283d3f9157f296ed30fc8ae63b82bfc01c3c735b01cc7544ba22b3\" returns successfully" Feb 12 19:49:51.218726 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Feb 12 19:49:53.653296 systemd[1]: run-containerd-runc-k8s.io-b4cb0d189c283d3f9157f296ed30fc8ae63b82bfc01c3c735b01cc7544ba22b3-runc.Y5F4qM.mount: Deactivated successfully. Feb 12 19:49:53.815067 systemd-networkd[1448]: lxc_health: Link UP Feb 12 19:49:53.826509 systemd-networkd[1448]: lxc_health: Gained carrier Feb 12 19:49:53.826845 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Feb 12 19:49:55.004885 systemd-networkd[1448]: lxc_health: Gained IPv6LL Feb 12 19:49:55.193137 kubelet[2397]: I0212 19:49:55.193093 2397 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-vcdnb" podStartSLOduration=9.193052031 podCreationTimestamp="2024-02-12 19:49:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 19:49:51.793992834 +0000 UTC m=+234.942115643" watchObservedRunningTime="2024-02-12 19:49:55.193052031 +0000 UTC m=+238.341174940" Feb 12 19:49:55.855874 systemd[1]: run-containerd-runc-k8s.io-b4cb0d189c283d3f9157f296ed30fc8ae63b82bfc01c3c735b01cc7544ba22b3-runc.uTah3o.mount: Deactivated successfully. Feb 12 19:49:57.199031 env[1314]: time="2024-02-12T19:49:57.198790641Z" level=info msg="StopPodSandbox for \"8566e887dc2dc01f39eb15b295494c886bd1eb3ad3de823cb40ca48ac6524746\"" Feb 12 19:49:57.199031 env[1314]: time="2024-02-12T19:49:57.198918941Z" level=info msg="TearDown network for sandbox \"8566e887dc2dc01f39eb15b295494c886bd1eb3ad3de823cb40ca48ac6524746\" successfully" Feb 12 19:49:57.199031 env[1314]: time="2024-02-12T19:49:57.198961741Z" level=info msg="StopPodSandbox for \"8566e887dc2dc01f39eb15b295494c886bd1eb3ad3de823cb40ca48ac6524746\" returns successfully" Feb 12 19:49:57.201739 env[1314]: time="2024-02-12T19:49:57.200243646Z" level=info msg="RemovePodSandbox for \"8566e887dc2dc01f39eb15b295494c886bd1eb3ad3de823cb40ca48ac6524746\"" Feb 12 19:49:57.201739 env[1314]: time="2024-02-12T19:49:57.200282647Z" level=info msg="Forcibly stopping sandbox \"8566e887dc2dc01f39eb15b295494c886bd1eb3ad3de823cb40ca48ac6524746\"" Feb 12 19:49:57.201739 env[1314]: time="2024-02-12T19:49:57.200370447Z" level=info msg="TearDown network for sandbox \"8566e887dc2dc01f39eb15b295494c886bd1eb3ad3de823cb40ca48ac6524746\" successfully" Feb 12 19:49:57.212723 env[1314]: time="2024-02-12T19:49:57.209670785Z" level=info msg="RemovePodSandbox \"8566e887dc2dc01f39eb15b295494c886bd1eb3ad3de823cb40ca48ac6524746\" returns successfully" Feb 12 19:49:57.216430 env[1314]: time="2024-02-12T19:49:57.216400412Z" level=info msg="StopPodSandbox for \"a8b34579573fbee060d35126bee277db0b6304215436265394eb559b7f6266de\"" Feb 12 19:49:57.216537 env[1314]: time="2024-02-12T19:49:57.216485912Z" level=info msg="TearDown network for sandbox \"a8b34579573fbee060d35126bee277db0b6304215436265394eb559b7f6266de\" successfully" Feb 12 19:49:57.216537 env[1314]: time="2024-02-12T19:49:57.216529712Z" level=info msg="StopPodSandbox for \"a8b34579573fbee060d35126bee277db0b6304215436265394eb559b7f6266de\" returns successfully" Feb 12 19:49:57.216875 env[1314]: time="2024-02-12T19:49:57.216839513Z" level=info msg="RemovePodSandbox for \"a8b34579573fbee060d35126bee277db0b6304215436265394eb559b7f6266de\"" 
Feb 12 19:49:57.216968 env[1314]: time="2024-02-12T19:49:57.216874314Z" level=info msg="Forcibly stopping sandbox \"a8b34579573fbee060d35126bee277db0b6304215436265394eb559b7f6266de\"" Feb 12 19:49:57.216968 env[1314]: time="2024-02-12T19:49:57.216952914Z" level=info msg="TearDown network for sandbox \"a8b34579573fbee060d35126bee277db0b6304215436265394eb559b7f6266de\" successfully" Feb 12 19:49:57.225118 env[1314]: time="2024-02-12T19:49:57.225065947Z" level=info msg="RemovePodSandbox \"a8b34579573fbee060d35126bee277db0b6304215436265394eb559b7f6266de\" returns successfully" Feb 12 19:49:57.225417 env[1314]: time="2024-02-12T19:49:57.225390348Z" level=info msg="StopPodSandbox for \"13736367e903a80980ff9cf37a2e6920f102eb9b2b8e99ab4bb0b5f47c5d601b\"" Feb 12 19:49:57.225534 env[1314]: time="2024-02-12T19:49:57.225491648Z" level=info msg="TearDown network for sandbox \"13736367e903a80980ff9cf37a2e6920f102eb9b2b8e99ab4bb0b5f47c5d601b\" successfully" Feb 12 19:49:57.225603 env[1314]: time="2024-02-12T19:49:57.225536349Z" level=info msg="StopPodSandbox for \"13736367e903a80980ff9cf37a2e6920f102eb9b2b8e99ab4bb0b5f47c5d601b\" returns successfully" Feb 12 19:49:57.225861 env[1314]: time="2024-02-12T19:49:57.225837050Z" level=info msg="RemovePodSandbox for \"13736367e903a80980ff9cf37a2e6920f102eb9b2b8e99ab4bb0b5f47c5d601b\"" Feb 12 19:49:57.225949 env[1314]: time="2024-02-12T19:49:57.225866850Z" level=info msg="Forcibly stopping sandbox \"13736367e903a80980ff9cf37a2e6920f102eb9b2b8e99ab4bb0b5f47c5d601b\"" Feb 12 19:49:57.226007 env[1314]: time="2024-02-12T19:49:57.225954550Z" level=info msg="TearDown network for sandbox \"13736367e903a80980ff9cf37a2e6920f102eb9b2b8e99ab4bb0b5f47c5d601b\" successfully" Feb 12 19:49:57.234200 env[1314]: time="2024-02-12T19:49:57.234170383Z" level=info msg="RemovePodSandbox \"13736367e903a80980ff9cf37a2e6920f102eb9b2b8e99ab4bb0b5f47c5d601b\" returns successfully" Feb 12 19:49:58.068988 systemd[1]: run-containerd-runc-k8s.io-b4cb0d189c283d3f9157f296ed30fc8ae63b82bfc01c3c735b01cc7544ba22b3-runc.oGa2XP.mount: Deactivated successfully. Feb 12 19:50:00.314313 systemd[1]: run-containerd-runc-k8s.io-b4cb0d189c283d3f9157f296ed30fc8ae63b82bfc01c3c735b01cc7544ba22b3-runc.ie4PCQ.mount: Deactivated successfully. Feb 12 19:50:00.462913 sshd[4234]: pam_unix(sshd:session): session closed for user core Feb 12 19:50:00.466627 systemd[1]: sshd@25-10.200.8.24:22-10.200.12.6:59262.service: Deactivated successfully. Feb 12 19:50:00.467736 systemd[1]: session-28.scope: Deactivated successfully. Feb 12 19:50:00.468595 systemd-logind[1297]: Session 28 logged out. Waiting for processes to exit. Feb 12 19:50:00.469741 systemd-logind[1297]: Removed session 28. Feb 12 19:50:14.243427 systemd[1]: cri-containerd-efd492eb27aa280e7e393c21e320d75aee6847b2742ebd33793080ee5e32c1ec.scope: Deactivated successfully. Feb 12 19:50:14.243755 systemd[1]: cri-containerd-efd492eb27aa280e7e393c21e320d75aee6847b2742ebd33793080ee5e32c1ec.scope: Consumed 3.309s CPU time. Feb 12 19:50:14.264293 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-efd492eb27aa280e7e393c21e320d75aee6847b2742ebd33793080ee5e32c1ec-rootfs.mount: Deactivated successfully. 
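The kube-controller-manager container (efd492eb..., deactivated above after 3.309s of CPU) is handled by kubelet's standard restart path: remove the dead container, then recreate it in the same sandbox with the attempt counter bumped, which is the Attempt:1 CreateContainer that follows. Schematically, with invented names:

    // restart_sketch.go -- the remove-and-recreate step kubelet applies to a
    // crashed container; a schematic, not kubelet's code.
    package main

    import "fmt"

    type containerMeta struct {
        Name    string
        Attempt int
    }

    // restart drops the dead container and requests a replacement with the
    // attempt counter incremented, matching the log's Attempt:1 metadata.
    func restart(dead containerMeta) containerMeta {
        fmt.Printf("RemoveContainer for %q\n", dead.Name)
        next := containerMeta{Name: dead.Name, Attempt: dead.Attempt + 1}
        fmt.Printf("CreateContainer &ContainerMetadata{Name:%s,Attempt:%d,}\n", next.Name, next.Attempt)
        return next
    }

    func main() {
        _ = restart(containerMeta{Name: "kube-controller-manager", Attempt: 0})
    }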
Feb 12 19:50:14.312205 env[1314]: time="2024-02-12T19:50:14.312147867Z" level=info msg="shim disconnected" id=efd492eb27aa280e7e393c21e320d75aee6847b2742ebd33793080ee5e32c1ec Feb 12 19:50:14.312205 env[1314]: time="2024-02-12T19:50:14.312200667Z" level=warning msg="cleaning up after shim disconnected" id=efd492eb27aa280e7e393c21e320d75aee6847b2742ebd33793080ee5e32c1ec namespace=k8s.io Feb 12 19:50:14.312205 env[1314]: time="2024-02-12T19:50:14.312214867Z" level=info msg="cleaning up dead shim" Feb 12 19:50:14.321570 env[1314]: time="2024-02-12T19:50:14.321504504Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:50:14Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5275 runtime=io.containerd.runc.v2\n" Feb 12 19:50:14.825174 kubelet[2397]: I0212 19:50:14.825141 2397 scope.go:117] "RemoveContainer" containerID="efd492eb27aa280e7e393c21e320d75aee6847b2742ebd33793080ee5e32c1ec" Feb 12 19:50:14.827553 env[1314]: time="2024-02-12T19:50:14.827489703Z" level=info msg="CreateContainer within sandbox \"a837b7b98938f263ada7f61b4c97aa788b25b9a702119875cedd929f788d29ce\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Feb 12 19:50:14.859070 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount963171056.mount: Deactivated successfully. Feb 12 19:50:14.869263 env[1314]: time="2024-02-12T19:50:14.869221968Z" level=info msg="CreateContainer within sandbox \"a837b7b98938f263ada7f61b4c97aa788b25b9a702119875cedd929f788d29ce\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"dc3431360499df621b2ac819c146c9a14f67c3aada32ec53aea2790671b7f1ec\"" Feb 12 19:50:14.869773 env[1314]: time="2024-02-12T19:50:14.869728670Z" level=info msg="StartContainer for \"dc3431360499df621b2ac819c146c9a14f67c3aada32ec53aea2790671b7f1ec\"" Feb 12 19:50:14.890504 systemd[1]: Started cri-containerd-dc3431360499df621b2ac819c146c9a14f67c3aada32ec53aea2790671b7f1ec.scope. Feb 12 19:50:14.943984 env[1314]: time="2024-02-12T19:50:14.943926663Z" level=info msg="StartContainer for \"dc3431360499df621b2ac819c146c9a14f67c3aada32ec53aea2790671b7f1ec\" returns successfully" Feb 12 19:50:19.091614 systemd[1]: cri-containerd-aaef3e27fddf5af154bea775d1e7c904f85a675af15935f7ba12c22b3b34099c.scope: Deactivated successfully. Feb 12 19:50:19.091970 systemd[1]: cri-containerd-aaef3e27fddf5af154bea775d1e7c904f85a675af15935f7ba12c22b3b34099c.scope: Consumed 1.778s CPU time. Feb 12 19:50:19.113382 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-aaef3e27fddf5af154bea775d1e7c904f85a675af15935f7ba12c22b3b34099c-rootfs.mount: Deactivated successfully. 
Feb 12 19:50:19.143189 env[1314]: time="2024-02-12T19:50:19.143145208Z" level=info msg="shim disconnected" id=aaef3e27fddf5af154bea775d1e7c904f85a675af15935f7ba12c22b3b34099c Feb 12 19:50:19.143708 env[1314]: time="2024-02-12T19:50:19.143190808Z" level=warning msg="cleaning up after shim disconnected" id=aaef3e27fddf5af154bea775d1e7c904f85a675af15935f7ba12c22b3b34099c namespace=k8s.io Feb 12 19:50:19.143708 env[1314]: time="2024-02-12T19:50:19.143202908Z" level=info msg="cleaning up dead shim" Feb 12 19:50:19.150110 env[1314]: time="2024-02-12T19:50:19.150078635Z" level=warning msg="cleanup warnings time=\"2024-02-12T19:50:19Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5336 runtime=io.containerd.runc.v2\n" Feb 12 19:50:19.200898 kubelet[2397]: E0212 19:50:19.199002 2397 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-apiserver-ci-3510.3.2-a-e615f4b643.17b33568fd491bf9", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"kube-apiserver-ci-3510.3.2-a-e615f4b643", UID:"a5b62034a604b7da83ebedadf8c27328", APIVersion:"v1", ResourceVersion:"", FieldPath:"spec.containers{kube-apiserver}"}, Reason:"Unhealthy", Message:"Readiness probe failed: HTTP probe failed with statuscode: 500", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-a-e615f4b643"}, FirstTimestamp:time.Date(2024, time.February, 12, 19, 50, 8, 745847801, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 19, 50, 8, 745847801, time.Local), Count:1, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"ci-3510.3.2-a-e615f4b643"}': 'rpc error: code = Unavailable desc = error reading from server: read tcp 10.200.8.24:60860->10.200.8.16:2379: read: connection timed out' (will not retry!) Feb 12 19:50:19.327379 kubelet[2397]: E0212 19:50:19.327339 2397 controller.go:193] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.200.8.24:32816->10.200.8.16:2379: read: connection timed out" Feb 12 19:50:19.840039 kubelet[2397]: I0212 19:50:19.840001 2397 scope.go:117] "RemoveContainer" containerID="aaef3e27fddf5af154bea775d1e7c904f85a675af15935f7ba12c22b3b34099c" Feb 12 19:50:19.842453 env[1314]: time="2024-02-12T19:50:19.842401056Z" level=info msg="CreateContainer within sandbox \"86f196b7ec1d7eccf994cbc5bb23f07cb276356f861c35db0b454f637934c79a\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Feb 12 19:50:19.869883 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount931142253.mount: Deactivated successfully. Feb 12 19:50:19.878306 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1949364753.mount: Deactivated successfully. 
Feb 12 19:50:19.888279 env[1314]: time="2024-02-12T19:50:19.888203636Z" level=info msg="CreateContainer within sandbox \"86f196b7ec1d7eccf994cbc5bb23f07cb276356f861c35db0b454f637934c79a\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"aed162021264ec6834c3c263e6e53e5d706e58c2fb29a4ca0f12bb9b524d2832\"" Feb 12 19:50:19.888742 env[1314]: time="2024-02-12T19:50:19.888677238Z" level=info msg="StartContainer for \"aed162021264ec6834c3c263e6e53e5d706e58c2fb29a4ca0f12bb9b524d2832\"" Feb 12 19:50:19.909522 systemd[1]: Started cri-containerd-aed162021264ec6834c3c263e6e53e5d706e58c2fb29a4ca0f12bb9b524d2832.scope. Feb 12 19:50:19.957375 env[1314]: time="2024-02-12T19:50:19.957329807Z" level=info msg="StartContainer for \"aed162021264ec6834c3c263e6e53e5d706e58c2fb29a4ca0f12bb9b524d2832\" returns successfully" Feb 12 19:50:25.087465 kubelet[2397]: I0212 19:50:25.087420 2397 status_manager.go:853] "Failed to get status for pod" podUID="a5b62034a604b7da83ebedadf8c27328" pod="kube-system/kube-apiserver-ci-3510.3.2-a-e615f4b643" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.200.8.24:60974->10.200.8.16:2379: read: connection timed out" Feb 12 19:50:29.328099 kubelet[2397]: E0212 19:50:29.328053 2397 controller.go:193] "Failed to update lease" err="Put \"https://10.200.8.24:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-a-e615f4b643?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
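The closing errors are one symptom seen three ways: etcd reads through the apiserver (10.200.8.16:2379) are timing out, so event posting, pod status fetches, and the node's lease renewal all fail, with the last line showing the lease PUT exceeding its 10-second client timeout. A minimal sketch of that timeout behavior using a plain HTTP client, assuming only the URL from the log (the kubelet itself uses client-go, not this):

    // lease_sketch.go -- why "Failed to update lease" surfaces as
    // "context deadline exceeded" when the apiserver's backing store stalls.
    package main

    import (
        "context"
        "fmt"
        "net/http"
        "strings"
        "time"
    )

    func renewLease(ctx context.Context, url string) error {
        ctx, cancel := context.WithTimeout(ctx, 10*time.Second) // the ?timeout=10s in the log
        defer cancel()
        req, err := http.NewRequestWithContext(ctx, http.MethodPut, url,
            strings.NewReader(`{"spec":{"renewTime":"..."}}`)) // payload elided
        if err != nil {
            return err
        }
        resp, err := http.DefaultClient.Do(req) // deadline exceeded when the apiserver stalls on etcd
        if err != nil {
            return fmt.Errorf("failed to update lease: %w", err)
        }
        return resp.Body.Close()
    }

    func main() {
        err := renewLease(context.Background(),
            "https://10.200.8.24:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-a-e615f4b643")
        fmt.Println(err)
    }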