Feb 9 19:01:09.090862 kernel: Linux version 5.15.148-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Fri Feb 9 17:23:38 -00 2024 Feb 9 19:01:09.090897 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=4dbf910aaff679d18007a871aba359cc2cf6cb85992bb7598afad40271debbd6 Feb 9 19:01:09.090913 kernel: BIOS-provided physical RAM map: Feb 9 19:01:09.090923 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Feb 9 19:01:09.090933 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Feb 9 19:01:09.090944 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Feb 9 19:01:09.090960 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007d9e9fff] usable Feb 9 19:01:09.090972 kernel: BIOS-e820: [mem 0x000000007d9ea000-0x000000007fffffff] reserved Feb 9 19:01:09.090983 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000e03fffff] reserved Feb 9 19:01:09.090994 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Feb 9 19:01:09.091005 kernel: NX (Execute Disable) protection: active Feb 9 19:01:09.091015 kernel: SMBIOS 2.7 present. Feb 9 19:01:09.091027 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017 Feb 9 19:01:09.091038 kernel: Hypervisor detected: KVM Feb 9 19:01:09.091057 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Feb 9 19:01:09.091070 kernel: kvm-clock: cpu 0, msr 36faa001, primary cpu clock Feb 9 19:01:09.091082 kernel: kvm-clock: using sched offset of 7603775561 cycles Feb 9 19:01:09.091096 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Feb 9 19:01:09.091109 kernel: tsc: Detected 2499.996 MHz processor Feb 9 19:01:09.091122 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Feb 9 19:01:09.091137 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Feb 9 19:01:09.091150 kernel: last_pfn = 0x7d9ea max_arch_pfn = 0x400000000 Feb 9 19:01:09.091162 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Feb 9 19:01:09.091175 kernel: Using GB pages for direct mapping Feb 9 19:01:09.091188 kernel: ACPI: Early table checksum verification disabled Feb 9 19:01:09.091200 kernel: ACPI: RSDP 0x00000000000F8F40 000014 (v00 AMAZON) Feb 9 19:01:09.091213 kernel: ACPI: RSDT 0x000000007D9EE350 000044 (v01 AMAZON AMZNRSDT 00000001 AMZN 00000001) Feb 9 19:01:09.091226 kernel: ACPI: FACP 0x000000007D9EFF80 000074 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001) Feb 9 19:01:09.091239 kernel: ACPI: DSDT 0x000000007D9EE3A0 0010E9 (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001) Feb 9 19:01:09.091254 kernel: ACPI: FACS 0x000000007D9EFF40 000040 Feb 9 19:01:09.091267 kernel: ACPI: SSDT 0x000000007D9EF6C0 00087A (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001) Feb 9 19:01:09.091280 kernel: ACPI: APIC 0x000000007D9EF5D0 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001) Feb 9 19:01:09.091293 kernel: ACPI: SRAT 0x000000007D9EF530 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001) Feb 9 19:01:09.091306 kernel: ACPI: SLIT 0x000000007D9EF4C0 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001) Feb 9 19:01:09.091319 kernel: ACPI: WAET 
0x000000007D9EF490 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001) Feb 9 19:01:09.091332 kernel: ACPI: HPET 0x00000000000C9000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001) Feb 9 19:01:09.091345 kernel: ACPI: SSDT 0x00000000000C9040 00007B (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001) Feb 9 19:01:09.091380 kernel: ACPI: Reserving FACP table memory at [mem 0x7d9eff80-0x7d9efff3] Feb 9 19:01:09.091393 kernel: ACPI: Reserving DSDT table memory at [mem 0x7d9ee3a0-0x7d9ef488] Feb 9 19:01:09.091407 kernel: ACPI: Reserving FACS table memory at [mem 0x7d9eff40-0x7d9eff7f] Feb 9 19:01:09.091425 kernel: ACPI: Reserving SSDT table memory at [mem 0x7d9ef6c0-0x7d9eff39] Feb 9 19:01:09.091439 kernel: ACPI: Reserving APIC table memory at [mem 0x7d9ef5d0-0x7d9ef645] Feb 9 19:01:09.091454 kernel: ACPI: Reserving SRAT table memory at [mem 0x7d9ef530-0x7d9ef5cf] Feb 9 19:01:09.091468 kernel: ACPI: Reserving SLIT table memory at [mem 0x7d9ef4c0-0x7d9ef52b] Feb 9 19:01:09.091485 kernel: ACPI: Reserving WAET table memory at [mem 0x7d9ef490-0x7d9ef4b7] Feb 9 19:01:09.091499 kernel: ACPI: Reserving HPET table memory at [mem 0xc9000-0xc9037] Feb 9 19:01:09.091513 kernel: ACPI: Reserving SSDT table memory at [mem 0xc9040-0xc90ba] Feb 9 19:01:09.091527 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Feb 9 19:01:09.091541 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Feb 9 19:01:09.091554 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff] Feb 9 19:01:09.091733 kernel: NUMA: Initialized distance table, cnt=1 Feb 9 19:01:09.091753 kernel: NODE_DATA(0) allocated [mem 0x7d9e3000-0x7d9e8fff] Feb 9 19:01:09.091775 kernel: Zone ranges: Feb 9 19:01:09.091789 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Feb 9 19:01:09.091803 kernel: DMA32 [mem 0x0000000001000000-0x000000007d9e9fff] Feb 9 19:01:09.091817 kernel: Normal empty Feb 9 19:01:09.091830 kernel: Movable zone start for each node Feb 9 19:01:09.091842 kernel: Early memory node ranges Feb 9 19:01:09.091854 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Feb 9 19:01:09.091867 kernel: node 0: [mem 0x0000000000100000-0x000000007d9e9fff] Feb 9 19:01:09.091881 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007d9e9fff] Feb 9 19:01:09.091897 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Feb 9 19:01:09.091911 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Feb 9 19:01:09.091931 kernel: On node 0, zone DMA32: 9750 pages in unavailable ranges Feb 9 19:01:09.091944 kernel: ACPI: PM-Timer IO Port: 0xb008 Feb 9 19:01:09.091958 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Feb 9 19:01:09.091972 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23 Feb 9 19:01:09.091986 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Feb 9 19:01:09.092001 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Feb 9 19:01:09.092014 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Feb 9 19:01:09.092031 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Feb 9 19:01:09.092044 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Feb 9 19:01:09.092057 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Feb 9 19:01:09.092071 kernel: TSC deadline timer available Feb 9 19:01:09.092085 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Feb 9 19:01:09.092098 kernel: [mem 0x80000000-0xdfffffff] available for PCI devices Feb 9 19:01:09.092111 kernel: Booting paravirtualized kernel on KVM Feb 9 19:01:09.092124 kernel: clocksource: 
refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Feb 9 19:01:09.092137 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:2 nr_node_ids:1 Feb 9 19:01:09.092155 kernel: percpu: Embedded 55 pages/cpu s185624 r8192 d31464 u1048576 Feb 9 19:01:09.092168 kernel: pcpu-alloc: s185624 r8192 d31464 u1048576 alloc=1*2097152 Feb 9 19:01:09.092182 kernel: pcpu-alloc: [0] 0 1 Feb 9 19:01:09.092201 kernel: kvm-guest: stealtime: cpu 0, msr 7b61c0c0 Feb 9 19:01:09.092214 kernel: kvm-guest: PV spinlocks enabled Feb 9 19:01:09.092229 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Feb 9 19:01:09.092242 kernel: Built 1 zonelists, mobility grouping on. Total pages: 506242 Feb 9 19:01:09.092255 kernel: Policy zone: DMA32 Feb 9 19:01:09.092271 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=4dbf910aaff679d18007a871aba359cc2cf6cb85992bb7598afad40271debbd6 Feb 9 19:01:09.092288 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Feb 9 19:01:09.092301 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Feb 9 19:01:09.092314 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Feb 9 19:01:09.092326 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Feb 9 19:01:09.092340 kernel: Memory: 1936476K/2057760K available (12294K kernel code, 2275K rwdata, 13700K rodata, 45496K init, 4048K bss, 121024K reserved, 0K cma-reserved) Feb 9 19:01:09.092372 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Feb 9 19:01:09.092385 kernel: Kernel/User page tables isolation: enabled Feb 9 19:01:09.092398 kernel: ftrace: allocating 34475 entries in 135 pages Feb 9 19:01:09.092414 kernel: ftrace: allocated 135 pages with 4 groups Feb 9 19:01:09.092578 kernel: rcu: Hierarchical RCU implementation. Feb 9 19:01:09.092607 kernel: rcu: RCU event tracing is enabled. Feb 9 19:01:09.092621 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Feb 9 19:01:09.092634 kernel: Rude variant of Tasks RCU enabled. Feb 9 19:01:09.092647 kernel: Tracing variant of Tasks RCU enabled. Feb 9 19:01:09.092661 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Feb 9 19:01:09.092674 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Feb 9 19:01:09.092688 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Feb 9 19:01:09.092705 kernel: random: crng init done Feb 9 19:01:09.092719 kernel: Console: colour VGA+ 80x25 Feb 9 19:01:09.092733 kernel: printk: console [ttyS0] enabled Feb 9 19:01:09.092745 kernel: ACPI: Core revision 20210730 Feb 9 19:01:09.092758 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns Feb 9 19:01:09.092772 kernel: APIC: Switch to symmetric I/O mode setup Feb 9 19:01:09.092786 kernel: x2apic enabled Feb 9 19:01:09.092800 kernel: Switched APIC routing to physical x2apic. 
Feb 9 19:01:09.092813 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns Feb 9 19:01:09.092831 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499996) Feb 9 19:01:09.092844 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Feb 9 19:01:09.092858 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4 Feb 9 19:01:09.092872 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Feb 9 19:01:09.092894 kernel: Spectre V2 : Mitigation: Retpolines Feb 9 19:01:09.092913 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Feb 9 19:01:09.092928 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Feb 9 19:01:09.092943 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! Feb 9 19:01:09.092956 kernel: RETBleed: Vulnerable Feb 9 19:01:09.092969 kernel: Speculative Store Bypass: Vulnerable Feb 9 19:01:09.092983 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode Feb 9 19:01:09.092996 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Feb 9 19:01:09.093010 kernel: GDS: Unknown: Dependent on hypervisor status Feb 9 19:01:09.093023 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Feb 9 19:01:09.093041 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Feb 9 19:01:09.093056 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Feb 9 19:01:09.093072 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers' Feb 9 19:01:09.093087 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR' Feb 9 19:01:09.093102 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Feb 9 19:01:09.093118 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Feb 9 19:01:09.093134 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Feb 9 19:01:09.093150 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers' Feb 9 19:01:09.093171 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Feb 9 19:01:09.093187 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64 Feb 9 19:01:09.093202 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64 Feb 9 19:01:09.093218 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64 Feb 9 19:01:09.093234 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512 Feb 9 19:01:09.093248 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024 Feb 9 19:01:09.093516 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8 Feb 9 19:01:09.093536 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format. Feb 9 19:01:09.093549 kernel: Freeing SMP alternatives memory: 32K Feb 9 19:01:09.093711 kernel: pid_max: default: 32768 minimum: 301 Feb 9 19:01:09.093733 kernel: LSM: Security Framework initializing Feb 9 19:01:09.093749 kernel: SELinux: Initializing. Feb 9 19:01:09.093765 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Feb 9 19:01:09.093811 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Feb 9 19:01:09.093824 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7) Feb 9 19:01:09.093837 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only. 
Feb 9 19:01:09.093852 kernel: signal: max sigframe size: 3632 Feb 9 19:01:09.093895 kernel: rcu: Hierarchical SRCU implementation. Feb 9 19:01:09.093911 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Feb 9 19:01:09.093930 kernel: smp: Bringing up secondary CPUs ... Feb 9 19:01:09.093945 kernel: x86: Booting SMP configuration: Feb 9 19:01:09.093988 kernel: .... node #0, CPUs: #1 Feb 9 19:01:09.094003 kernel: kvm-clock: cpu 1, msr 36faa041, secondary cpu clock Feb 9 19:01:09.094018 kernel: kvm-guest: stealtime: cpu 1, msr 7b71c0c0 Feb 9 19:01:09.094033 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details. Feb 9 19:01:09.094074 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. Feb 9 19:01:09.094190 kernel: smp: Brought up 1 node, 2 CPUs Feb 9 19:01:09.094205 kernel: smpboot: Max logical packages: 1 Feb 9 19:01:09.094225 kernel: smpboot: Total of 2 processors activated (9999.98 BogoMIPS) Feb 9 19:01:09.094241 kernel: devtmpfs: initialized Feb 9 19:01:09.094282 kernel: x86/mm: Memory block size: 128MB Feb 9 19:01:09.094296 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Feb 9 19:01:09.094309 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Feb 9 19:01:09.094324 kernel: pinctrl core: initialized pinctrl subsystem Feb 9 19:01:09.094373 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Feb 9 19:01:09.094389 kernel: audit: initializing netlink subsys (disabled) Feb 9 19:01:09.094404 kernel: audit: type=2000 audit(1707505267.790:1): state=initialized audit_enabled=0 res=1 Feb 9 19:01:09.094504 kernel: thermal_sys: Registered thermal governor 'step_wise' Feb 9 19:01:09.094522 kernel: thermal_sys: Registered thermal governor 'user_space' Feb 9 19:01:09.094538 kernel: cpuidle: using governor menu Feb 9 19:01:09.094581 kernel: ACPI: bus type PCI registered Feb 9 19:01:09.094597 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Feb 9 19:01:09.094613 kernel: dca service started, version 1.12.1 Feb 9 19:01:09.094628 kernel: PCI: Using configuration type 1 for base access Feb 9 19:01:09.094669 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Feb 9 19:01:09.094684 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages Feb 9 19:01:09.094704 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Feb 9 19:01:09.094719 kernel: ACPI: Added _OSI(Module Device) Feb 9 19:01:09.094760 kernel: ACPI: Added _OSI(Processor Device) Feb 9 19:01:09.094776 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Feb 9 19:01:09.094791 kernel: ACPI: Added _OSI(Processor Aggregator Device) Feb 9 19:01:09.094806 kernel: ACPI: Added _OSI(Linux-Dell-Video) Feb 9 19:01:09.094937 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Feb 9 19:01:09.094954 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Feb 9 19:01:09.094969 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded Feb 9 19:01:09.094986 kernel: ACPI: Interpreter enabled Feb 9 19:01:09.095027 kernel: ACPI: PM: (supports S0 S5) Feb 9 19:01:09.095042 kernel: ACPI: Using IOAPIC for interrupt routing Feb 9 19:01:09.095057 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Feb 9 19:01:09.095072 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F Feb 9 19:01:09.095112 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Feb 9 19:01:09.095591 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Feb 9 19:01:09.096106 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge. Feb 9 19:01:09.096196 kernel: acpiphp: Slot [3] registered Feb 9 19:01:09.096265 kernel: acpiphp: Slot [4] registered Feb 9 19:01:09.096284 kernel: acpiphp: Slot [5] registered Feb 9 19:01:09.096298 kernel: acpiphp: Slot [6] registered Feb 9 19:01:09.096375 kernel: acpiphp: Slot [7] registered Feb 9 19:01:09.096388 kernel: acpiphp: Slot [8] registered Feb 9 19:01:09.096462 kernel: acpiphp: Slot [9] registered Feb 9 19:01:09.096477 kernel: acpiphp: Slot [10] registered Feb 9 19:01:09.096490 kernel: acpiphp: Slot [11] registered Feb 9 19:01:09.096688 kernel: acpiphp: Slot [12] registered Feb 9 19:01:09.096761 kernel: acpiphp: Slot [13] registered Feb 9 19:01:09.096779 kernel: acpiphp: Slot [14] registered Feb 9 19:01:09.096792 kernel: acpiphp: Slot [15] registered Feb 9 19:01:09.097092 kernel: acpiphp: Slot [16] registered Feb 9 19:01:09.097137 kernel: acpiphp: Slot [17] registered Feb 9 19:01:09.097151 kernel: acpiphp: Slot [18] registered Feb 9 19:01:09.097164 kernel: acpiphp: Slot [19] registered Feb 9 19:01:09.097178 kernel: acpiphp: Slot [20] registered Feb 9 19:01:09.097485 kernel: acpiphp: Slot [21] registered Feb 9 19:01:09.097506 kernel: acpiphp: Slot [22] registered Feb 9 19:01:09.097519 kernel: acpiphp: Slot [23] registered Feb 9 19:01:09.097532 kernel: acpiphp: Slot [24] registered Feb 9 19:01:09.097599 kernel: acpiphp: Slot [25] registered Feb 9 19:01:09.097616 kernel: acpiphp: Slot [26] registered Feb 9 19:01:09.097866 kernel: acpiphp: Slot [27] registered Feb 9 19:01:09.097886 kernel: acpiphp: Slot [28] registered Feb 9 19:01:09.097900 kernel: acpiphp: Slot [29] registered Feb 9 19:01:09.097939 kernel: acpiphp: Slot [30] registered Feb 9 19:01:09.097958 kernel: acpiphp: Slot [31] registered Feb 9 19:01:09.097971 kernel: PCI host bridge to bus 0000:00 Feb 9 19:01:09.098151 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Feb 9 19:01:09.098296 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Feb 9 19:01:09.098445 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Feb 9 
19:01:09.098555 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window] Feb 9 19:01:09.098660 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Feb 9 19:01:09.098896 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Feb 9 19:01:09.099036 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 Feb 9 19:01:09.099174 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000 Feb 9 19:01:09.099456 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI Feb 9 19:01:09.099620 kernel: pci 0000:00:01.3: quirk: [io 0xb100-0xb10f] claimed by PIIX4 SMB Feb 9 19:01:09.099747 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff Feb 9 19:01:09.099924 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff Feb 9 19:01:09.100175 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff Feb 9 19:01:09.100377 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff Feb 9 19:01:09.100502 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff Feb 9 19:01:09.100683 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff Feb 9 19:01:09.100824 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000 Feb 9 19:01:09.101322 kernel: pci 0000:00:03.0: reg 0x10: [mem 0xfe400000-0xfe7fffff pref] Feb 9 19:01:09.101546 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref] Feb 9 19:01:09.101683 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Feb 9 19:01:09.101816 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 Feb 9 19:01:09.101992 kernel: pci 0000:00:04.0: reg 0x10: [mem 0xfebf0000-0xfebf3fff] Feb 9 19:01:09.102139 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 Feb 9 19:01:09.102267 kernel: pci 0000:00:05.0: reg 0x10: [mem 0xfebf4000-0xfebf7fff] Feb 9 19:01:09.102287 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Feb 9 19:01:09.102306 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Feb 9 19:01:09.102322 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Feb 9 19:01:09.102337 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Feb 9 19:01:09.102363 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Feb 9 19:01:09.102378 kernel: iommu: Default domain type: Translated Feb 9 19:01:09.102393 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Feb 9 19:01:09.102519 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device Feb 9 19:01:09.102647 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Feb 9 19:01:09.102773 kernel: pci 0000:00:03.0: vgaarb: bridge control possible Feb 9 19:01:09.102795 kernel: vgaarb: loaded Feb 9 19:01:09.102811 kernel: pps_core: LinuxPPS API ver. 1 registered Feb 9 19:01:09.102826 kernel: pps_core: Software ver. 
5.3.6 - Copyright 2005-2007 Rodolfo Giometti Feb 9 19:01:09.102841 kernel: PTP clock support registered Feb 9 19:01:09.102856 kernel: PCI: Using ACPI for IRQ routing Feb 9 19:01:09.102872 kernel: PCI: pci_cache_line_size set to 64 bytes Feb 9 19:01:09.102887 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Feb 9 19:01:09.102901 kernel: e820: reserve RAM buffer [mem 0x7d9ea000-0x7fffffff] Feb 9 19:01:09.102919 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0 Feb 9 19:01:09.102934 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter Feb 9 19:01:09.102949 kernel: clocksource: Switched to clocksource kvm-clock Feb 9 19:01:09.102964 kernel: VFS: Disk quotas dquot_6.6.0 Feb 9 19:01:09.102979 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Feb 9 19:01:09.102994 kernel: pnp: PnP ACPI init Feb 9 19:01:09.103009 kernel: pnp: PnP ACPI: found 5 devices Feb 9 19:01:09.103077 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Feb 9 19:01:09.103093 kernel: NET: Registered PF_INET protocol family Feb 9 19:01:09.103112 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Feb 9 19:01:09.103126 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Feb 9 19:01:09.103141 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Feb 9 19:01:09.103156 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Feb 9 19:01:09.103171 kernel: TCP bind hash table entries: 16384 (order: 6, 262144 bytes, linear) Feb 9 19:01:09.103186 kernel: TCP: Hash tables configured (established 16384 bind 16384) Feb 9 19:01:09.103200 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Feb 9 19:01:09.103214 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Feb 9 19:01:09.103230 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Feb 9 19:01:09.103285 kernel: NET: Registered PF_XDP protocol family Feb 9 19:01:09.103438 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Feb 9 19:01:09.103604 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Feb 9 19:01:09.103763 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Feb 9 19:01:09.103878 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window] Feb 9 19:01:09.104012 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Feb 9 19:01:09.104207 kernel: pci 0000:00:01.0: Activating ISA DMA hang workarounds Feb 9 19:01:09.104234 kernel: PCI: CLS 0 bytes, default 64 Feb 9 19:01:09.104250 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Feb 9 19:01:09.104267 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns Feb 9 19:01:09.104282 kernel: clocksource: Switched to clocksource tsc Feb 9 19:01:09.104295 kernel: Initialise system trusted keyrings Feb 9 19:01:09.104305 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Feb 9 19:01:09.104319 kernel: Key type asymmetric registered Feb 9 19:01:09.104331 kernel: Asymmetric key parser 'x509' registered Feb 9 19:01:09.104343 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Feb 9 19:01:09.104369 kernel: io scheduler mq-deadline registered Feb 9 19:01:09.104381 kernel: io scheduler kyber registered Feb 9 19:01:09.104392 kernel: io scheduler bfq registered Feb 9 19:01:09.104404 kernel: ioatdma: Intel(R) QuickData 
Technology Driver 5.00 Feb 9 19:01:09.104418 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Feb 9 19:01:09.104432 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Feb 9 19:01:09.104445 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Feb 9 19:01:09.104458 kernel: i8042: Warning: Keylock active Feb 9 19:01:09.104471 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Feb 9 19:01:09.104488 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Feb 9 19:01:09.104685 kernel: rtc_cmos 00:00: RTC can wake from S4 Feb 9 19:01:09.104803 kernel: rtc_cmos 00:00: registered as rtc0 Feb 9 19:01:09.104999 kernel: rtc_cmos 00:00: setting system clock to 2024-02-09T19:01:08 UTC (1707505268) Feb 9 19:01:09.105113 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram Feb 9 19:01:09.105129 kernel: intel_pstate: CPU model not supported Feb 9 19:01:09.105143 kernel: NET: Registered PF_INET6 protocol family Feb 9 19:01:09.105156 kernel: Segment Routing with IPv6 Feb 9 19:01:09.105221 kernel: In-situ OAM (IOAM) with IPv6 Feb 9 19:01:09.105236 kernel: NET: Registered PF_PACKET protocol family Feb 9 19:01:09.105250 kernel: Key type dns_resolver registered Feb 9 19:01:09.105262 kernel: IPI shorthand broadcast: enabled Feb 9 19:01:09.105276 kernel: sched_clock: Marking stable (476649394, 325512749)->(972994581, -170832438) Feb 9 19:01:09.105288 kernel: registered taskstats version 1 Feb 9 19:01:09.105302 kernel: Loading compiled-in X.509 certificates Feb 9 19:01:09.105315 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.148-flatcar: 56154408a02b3bd349a9e9180c9bd837fd1d636a' Feb 9 19:01:09.105328 kernel: Key type .fscrypt registered Feb 9 19:01:09.105344 kernel: Key type fscrypt-provisioning registered Feb 9 19:01:09.105367 kernel: ima: No TPM chip found, activating TPM-bypass! Feb 9 19:01:09.105381 kernel: ima: Allocated hash algorithm: sha1 Feb 9 19:01:09.105393 kernel: ima: No architecture policies found Feb 9 19:01:09.105405 kernel: Freeing unused kernel image (initmem) memory: 45496K Feb 9 19:01:09.105417 kernel: Write protecting the kernel read-only data: 28672k Feb 9 19:01:09.105430 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Feb 9 19:01:09.105443 kernel: Freeing unused kernel image (rodata/data gap) memory: 636K Feb 9 19:01:09.105456 kernel: Run /init as init process Feb 9 19:01:09.105473 kernel: with arguments: Feb 9 19:01:09.105488 kernel: /init Feb 9 19:01:09.105506 kernel: with environment: Feb 9 19:01:09.105523 kernel: HOME=/ Feb 9 19:01:09.105540 kernel: TERM=linux Feb 9 19:01:09.105555 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Feb 9 19:01:09.105572 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Feb 9 19:01:09.105593 systemd[1]: Detected virtualization amazon. Feb 9 19:01:09.105609 systemd[1]: Detected architecture x86-64. Feb 9 19:01:09.105624 systemd[1]: Running in initrd. Feb 9 19:01:09.105638 systemd[1]: No hostname configured, using default hostname. Feb 9 19:01:09.105654 systemd[1]: Hostname set to <localhost>. Feb 9 19:01:09.105687 systemd[1]: Initializing machine ID from VM UUID. Feb 9 19:01:09.105706 systemd[1]: Queued start job for default target initrd.target.
Feb 9 19:01:09.105722 systemd[1]: Started systemd-ask-password-console.path. Feb 9 19:01:09.105738 systemd[1]: Reached target cryptsetup.target. Feb 9 19:01:09.105753 systemd[1]: Reached target paths.target. Feb 9 19:01:09.105769 systemd[1]: Reached target slices.target. Feb 9 19:01:09.105784 systemd[1]: Reached target swap.target. Feb 9 19:01:09.105800 systemd[1]: Reached target timers.target. Feb 9 19:01:09.105816 systemd[1]: Listening on iscsid.socket. Feb 9 19:01:09.105884 systemd[1]: Listening on iscsiuio.socket. Feb 9 19:01:09.105901 systemd[1]: Listening on systemd-journald-audit.socket. Feb 9 19:01:09.105918 systemd[1]: Listening on systemd-journald-dev-log.socket. Feb 9 19:01:09.105933 systemd[1]: Listening on systemd-journald.socket. Feb 9 19:01:09.106055 systemd[1]: Listening on systemd-networkd.socket. Feb 9 19:01:09.106072 systemd[1]: Listening on systemd-udevd-control.socket. Feb 9 19:01:09.106089 systemd[1]: Listening on systemd-udevd-kernel.socket. Feb 9 19:01:09.106105 systemd[1]: Reached target sockets.target. Feb 9 19:01:09.106121 systemd[1]: Starting kmod-static-nodes.service... Feb 9 19:01:09.106140 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Feb 9 19:01:09.106231 systemd[1]: Finished network-cleanup.service. Feb 9 19:01:09.106294 systemd[1]: Starting systemd-fsck-usr.service... Feb 9 19:01:09.106312 systemd[1]: Starting systemd-journald.service... Feb 9 19:01:09.106328 systemd[1]: Starting systemd-modules-load.service... Feb 9 19:01:09.106344 systemd[1]: Starting systemd-resolved.service... Feb 9 19:01:09.106371 systemd[1]: Starting systemd-vconsole-setup.service... Feb 9 19:01:09.106387 systemd[1]: Finished kmod-static-nodes.service. Feb 9 19:01:09.106403 systemd[1]: Finished systemd-fsck-usr.service. Feb 9 19:01:09.106424 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Feb 9 19:01:09.106446 systemd-journald[185]: Journal started Feb 9 19:01:09.106526 systemd-journald[185]: Runtime Journal (/run/log/journal/ec2478105797fc2f35e712abc455b6b6) is 4.8M, max 38.7M, 33.9M free. Feb 9 19:01:09.094180 systemd-resolved[187]: Positive Trust Anchors: Feb 9 19:01:09.275622 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Feb 9 19:01:09.275736 kernel: Bridge firewalling registered Feb 9 19:01:09.275757 kernel: SCSI subsystem initialized Feb 9 19:01:09.275858 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Feb 9 19:01:09.275882 kernel: device-mapper: uevent: version 1.0.3 Feb 9 19:01:09.275909 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Feb 9 19:01:09.094197 systemd-resolved[187]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 9 19:01:09.094290 systemd-resolved[187]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Feb 9 19:01:09.113693 systemd-modules-load[186]: Inserted module 'overlay' Feb 9 19:01:09.136950 systemd-resolved[187]: Defaulting to hostname 'linux'. 
Feb 9 19:01:09.170040 systemd-modules-load[186]: Inserted module 'br_netfilter' Feb 9 19:01:09.294229 systemd[1]: Started systemd-journald.service. Feb 9 19:01:09.294264 kernel: audit: type=1130 audit(1707505269.287:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:09.287000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:09.218013 systemd-modules-load[186]: Inserted module 'dm_multipath' Feb 9 19:01:09.301242 kernel: audit: type=1130 audit(1707505269.294:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:09.294000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:09.289068 systemd[1]: Started systemd-resolved.service. Feb 9 19:01:09.301652 systemd[1]: Finished systemd-modules-load.service. Feb 9 19:01:09.309366 kernel: audit: type=1130 audit(1707505269.302:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:09.302000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:09.309619 systemd[1]: Finished systemd-vconsole-setup.service. Feb 9 19:01:09.311000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:09.312392 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Feb 9 19:01:09.324643 kernel: audit: type=1130 audit(1707505269.311:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:09.323000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:09.324859 systemd[1]: Reached target nss-lookup.target. Feb 9 19:01:09.342603 kernel: audit: type=1130 audit(1707505269.323:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:09.347512 systemd[1]: Starting dracut-cmdline-ask.service... Feb 9 19:01:09.351710 systemd[1]: Starting systemd-sysctl.service... Feb 9 19:01:09.363000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:09.363654 systemd[1]: Finished systemd-sysctl.service. Feb 9 19:01:09.375055 kernel: audit: type=1130 audit(1707505269.363:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Feb 9 19:01:09.379024 systemd[1]: Finished dracut-cmdline-ask.service. Feb 9 19:01:09.387364 kernel: audit: type=1130 audit(1707505269.379:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:09.379000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:09.384902 systemd[1]: Starting dracut-cmdline.service... Feb 9 19:01:09.397852 dracut-cmdline[206]: dracut-dracut-053 Feb 9 19:01:09.401151 dracut-cmdline[206]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=4dbf910aaff679d18007a871aba359cc2cf6cb85992bb7598afad40271debbd6 Feb 9 19:01:09.507385 kernel: Loading iSCSI transport class v2.0-870. Feb 9 19:01:09.527385 kernel: iscsi: registered transport (tcp) Feb 9 19:01:09.564150 kernel: iscsi: registered transport (qla4xxx) Feb 9 19:01:09.564226 kernel: QLogic iSCSI HBA Driver Feb 9 19:01:09.629400 systemd[1]: Finished dracut-cmdline.service. Feb 9 19:01:09.630968 systemd[1]: Starting dracut-pre-udev.service... Feb 9 19:01:09.641660 kernel: audit: type=1130 audit(1707505269.628:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:09.628000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:09.712424 kernel: raid6: avx512x4 gen() 15140 MB/s Feb 9 19:01:09.727442 kernel: raid6: avx512x4 xor() 4726 MB/s Feb 9 19:01:09.744379 kernel: raid6: avx512x2 gen() 16642 MB/s Feb 9 19:01:09.765398 kernel: raid6: avx512x2 xor() 22187 MB/s Feb 9 19:01:09.782422 kernel: raid6: avx512x1 gen() 18035 MB/s Feb 9 19:01:09.800387 kernel: raid6: avx512x1 xor() 21816 MB/s Feb 9 19:01:09.821445 kernel: raid6: avx2x4 gen() 16744 MB/s Feb 9 19:01:09.835396 kernel: raid6: avx2x4 xor() 4718 MB/s Feb 9 19:01:09.853492 kernel: raid6: avx2x2 gen() 7395 MB/s Feb 9 19:01:09.875784 kernel: raid6: avx2x2 xor() 8731 MB/s Feb 9 19:01:09.890390 kernel: raid6: avx2x1 gen() 12338 MB/s Feb 9 19:01:09.907432 kernel: raid6: avx2x1 xor() 13395 MB/s Feb 9 19:01:09.924388 kernel: raid6: sse2x4 gen() 8093 MB/s Feb 9 19:01:09.941412 kernel: raid6: sse2x4 xor() 5271 MB/s Feb 9 19:01:09.958402 kernel: raid6: sse2x2 gen() 8786 MB/s Feb 9 19:01:09.975403 kernel: raid6: sse2x2 xor() 5467 MB/s Feb 9 19:01:09.992399 kernel: raid6: sse2x1 gen() 8825 MB/s Feb 9 19:01:10.010634 kernel: raid6: sse2x1 xor() 4182 MB/s Feb 9 19:01:10.010709 kernel: raid6: using algorithm avx512x1 gen() 18035 MB/s Feb 9 19:01:10.010727 kernel: raid6: .... 
xor() 21816 MB/s, rmw enabled Feb 9 19:01:10.011623 kernel: raid6: using avx512x2 recovery algorithm Feb 9 19:01:10.028383 kernel: xor: automatically using best checksumming function avx Feb 9 19:01:10.144378 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Feb 9 19:01:10.155212 systemd[1]: Finished dracut-pre-udev.service. Feb 9 19:01:10.154000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:10.158070 systemd[1]: Starting systemd-udevd.service... Feb 9 19:01:10.155000 audit: BPF prog-id=7 op=LOAD Feb 9 19:01:10.155000 audit: BPF prog-id=8 op=LOAD Feb 9 19:01:10.164335 kernel: audit: type=1130 audit(1707505270.154:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:10.177822 systemd-udevd[384]: Using default interface naming scheme 'v252'. Feb 9 19:01:10.183873 systemd[1]: Started systemd-udevd.service. Feb 9 19:01:10.183000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:10.186294 systemd[1]: Starting dracut-pre-trigger.service... Feb 9 19:01:10.213302 dracut-pre-trigger[388]: rd.md=0: removing MD RAID activation Feb 9 19:01:10.258432 systemd[1]: Finished dracut-pre-trigger.service. Feb 9 19:01:10.259637 systemd[1]: Starting systemd-udev-trigger.service... Feb 9 19:01:10.257000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:10.317300 systemd[1]: Finished systemd-udev-trigger.service. Feb 9 19:01:10.317000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:10.375119 kernel: ena 0000:00:05.0: ENA device version: 0.10 Feb 9 19:01:10.375378 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Feb 9 19:01:10.386375 kernel: cryptd: max_cpu_qlen set to 1000 Feb 9 19:01:10.401370 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy. Feb 9 19:01:10.420369 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem febf4000, mac addr 06:73:ee:8b:b1:05 Feb 9 19:01:10.421743 (udev-worker)[430]: Network interface NamePolicy= disabled on kernel command line. Feb 9 19:01:10.435050 kernel: AVX2 version of gcm_enc/dec engaged. Feb 9 19:01:10.435132 kernel: AES CTR mode by8 optimization enabled Feb 9 19:01:10.460324 kernel: nvme nvme0: pci function 0000:00:04.0 Feb 9 19:01:10.465118 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 Feb 9 19:01:10.472379 kernel: nvme nvme0: 2/0/0 default/read/poll queues Feb 9 19:01:10.478370 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Feb 9 19:01:10.478433 kernel: GPT:9289727 != 16777215 Feb 9 19:01:10.478450 kernel: GPT:Alternate GPT header not at the end of the disk. Feb 9 19:01:10.478475 kernel: GPT:9289727 != 16777215 Feb 9 19:01:10.478492 kernel: GPT: Use GNU Parted to correct GPT errors. 
Feb 9 19:01:10.478508 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Feb 9 19:01:10.569373 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by (udev-worker) (435) Feb 9 19:01:10.644258 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Feb 9 19:01:10.692735 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Feb 9 19:01:10.701523 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Feb 9 19:01:10.711939 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Feb 9 19:01:10.714670 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Feb 9 19:01:10.719181 systemd[1]: Starting disk-uuid.service... Feb 9 19:01:10.732154 disk-uuid[593]: Primary Header is updated. Feb 9 19:01:10.732154 disk-uuid[593]: Secondary Entries is updated. Feb 9 19:01:10.732154 disk-uuid[593]: Secondary Header is updated. Feb 9 19:01:10.741580 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Feb 9 19:01:10.748887 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Feb 9 19:01:10.756375 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Feb 9 19:01:11.760253 disk-uuid[594]: The operation has completed successfully. Feb 9 19:01:11.763506 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Feb 9 19:01:11.885911 systemd[1]: disk-uuid.service: Deactivated successfully. Feb 9 19:01:11.884000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:11.884000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:11.886023 systemd[1]: Finished disk-uuid.service. Feb 9 19:01:11.896327 systemd[1]: Starting verity-setup.service... Feb 9 19:01:11.916373 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Feb 9 19:01:12.021491 systemd[1]: Found device dev-mapper-usr.device. Feb 9 19:01:12.024340 systemd[1]: Mounting sysusr-usr.mount... Feb 9 19:01:12.028452 systemd[1]: Finished verity-setup.service. Feb 9 19:01:12.027000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:12.133407 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Feb 9 19:01:12.133555 systemd[1]: Mounted sysusr-usr.mount. Feb 9 19:01:12.135454 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Feb 9 19:01:12.137964 systemd[1]: Starting ignition-setup.service... Feb 9 19:01:12.150276 systemd[1]: Starting parse-ip-for-networkd.service... Feb 9 19:01:12.177284 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Feb 9 19:01:12.177375 kernel: BTRFS info (device nvme0n1p6): using free space tree Feb 9 19:01:12.177396 kernel: BTRFS info (device nvme0n1p6): has skinny extents Feb 9 19:01:12.188381 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Feb 9 19:01:12.203446 systemd[1]: mnt-oem.mount: Deactivated successfully. Feb 9 19:01:12.245587 systemd[1]: Finished parse-ip-for-networkd.service. Feb 9 19:01:12.246000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:01:12.248000 audit: BPF prog-id=9 op=LOAD Feb 9 19:01:12.250090 systemd[1]: Starting systemd-networkd.service... Feb 9 19:01:12.261650 systemd[1]: Finished ignition-setup.service. Feb 9 19:01:12.262000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:12.265545 systemd[1]: Starting ignition-fetch-offline.service... Feb 9 19:01:12.291093 systemd-networkd[1103]: lo: Link UP Feb 9 19:01:12.291519 systemd-networkd[1103]: lo: Gained carrier Feb 9 19:01:12.292166 systemd-networkd[1103]: Enumeration completed Feb 9 19:01:12.292446 systemd-networkd[1103]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 9 19:01:12.295477 systemd[1]: Started systemd-networkd.service. Feb 9 19:01:12.297000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:12.297837 systemd-networkd[1103]: eth0: Link UP Feb 9 19:01:12.297843 systemd-networkd[1103]: eth0: Gained carrier Feb 9 19:01:12.299073 systemd[1]: Reached target network.target. Feb 9 19:01:12.303905 systemd[1]: Starting iscsiuio.service... Feb 9 19:01:12.315511 systemd[1]: Started iscsiuio.service. Feb 9 19:01:12.317850 systemd-networkd[1103]: eth0: DHCPv4 address 172.31.31.36/20, gateway 172.31.16.1 acquired from 172.31.16.1 Feb 9 19:01:12.318000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:12.321032 systemd[1]: Starting iscsid.service... Feb 9 19:01:12.327500 iscsid[1110]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Feb 9 19:01:12.327500 iscsid[1110]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Feb 9 19:01:12.327500 iscsid[1110]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Feb 9 19:01:12.327500 iscsid[1110]: If using hardware iscsi like qla4xxx this message can be ignored. Feb 9 19:01:12.327500 iscsid[1110]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Feb 9 19:01:12.327500 iscsid[1110]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Feb 9 19:01:12.340000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:12.338619 systemd[1]: Started iscsid.service. Feb 9 19:01:12.344485 systemd[1]: Starting dracut-initqueue.service... Feb 9 19:01:12.369785 systemd[1]: Finished dracut-initqueue.service. Feb 9 19:01:12.368000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:12.370139 systemd[1]: Reached target remote-fs-pre.target. Feb 9 19:01:12.372791 systemd[1]: Reached target remote-cryptsetup.target.
Feb 9 19:01:12.374253 systemd[1]: Reached target remote-fs.target. Feb 9 19:01:12.383323 systemd[1]: Starting dracut-pre-mount.service... Feb 9 19:01:12.407836 systemd[1]: Finished dracut-pre-mount.service. Feb 9 19:01:12.410000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:12.950972 ignition[1105]: Ignition 2.14.0 Feb 9 19:01:12.950990 ignition[1105]: Stage: fetch-offline Feb 9 19:01:12.951238 ignition[1105]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 19:01:12.951286 ignition[1105]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Feb 9 19:01:12.975067 ignition[1105]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Feb 9 19:01:12.975568 ignition[1105]: Ignition finished successfully Feb 9 19:01:12.978423 systemd[1]: Finished ignition-fetch-offline.service. Feb 9 19:01:12.978000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:12.980886 systemd[1]: Starting ignition-fetch.service... Feb 9 19:01:12.999534 ignition[1129]: Ignition 2.14.0 Feb 9 19:01:13.000653 ignition[1129]: Stage: fetch Feb 9 19:01:13.001599 ignition[1129]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 19:01:13.001630 ignition[1129]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Feb 9 19:01:13.011461 ignition[1129]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Feb 9 19:01:13.012777 ignition[1129]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Feb 9 19:01:13.052157 ignition[1129]: INFO : PUT result: OK Feb 9 19:01:13.056139 ignition[1129]: DEBUG : parsed url from cmdline: "" Feb 9 19:01:13.056139 ignition[1129]: INFO : no config URL provided Feb 9 19:01:13.056139 ignition[1129]: INFO : reading system config file "/usr/lib/ignition/user.ign" Feb 9 19:01:13.056139 ignition[1129]: INFO : no config at "/usr/lib/ignition/user.ign" Feb 9 19:01:13.066750 ignition[1129]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Feb 9 19:01:13.066750 ignition[1129]: INFO : PUT result: OK Feb 9 19:01:13.066750 ignition[1129]: INFO : GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Feb 9 19:01:13.070505 ignition[1129]: INFO : GET result: OK Feb 9 19:01:13.071564 ignition[1129]: DEBUG : parsing config with SHA512: 04fef909ccd062c2c7a1f8827b9925e0049cc95dd4ec018311d94360cede8e8d8fb5626cf65fa840a65c2414f273411fb552c18ac246bf4a9fa23c8160e7faf4 Feb 9 19:01:13.111135 unknown[1129]: fetched base config from "system" Feb 9 19:01:13.111151 unknown[1129]: fetched base config from "system" Feb 9 19:01:13.112268 ignition[1129]: fetch: fetch complete Feb 9 19:01:13.111159 unknown[1129]: fetched user config from "aws" Feb 9 19:01:13.112276 ignition[1129]: fetch: fetch passed Feb 9 19:01:13.112325 ignition[1129]: Ignition finished successfully Feb 9 19:01:13.120784 systemd[1]: Finished ignition-fetch.service. 
Feb 9 19:01:13.126447 kernel: kauditd_printk_skb: 17 callbacks suppressed Feb 9 19:01:13.126471 kernel: audit: type=1130 audit(1707505273.121:28): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:13.121000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:13.124233 systemd[1]: Starting ignition-kargs.service... Feb 9 19:01:13.141842 ignition[1135]: Ignition 2.14.0 Feb 9 19:01:13.141852 ignition[1135]: Stage: kargs Feb 9 19:01:13.141998 ignition[1135]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 19:01:13.142021 ignition[1135]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Feb 9 19:01:13.150891 ignition[1135]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Feb 9 19:01:13.152746 ignition[1135]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Feb 9 19:01:13.154960 ignition[1135]: INFO : PUT result: OK Feb 9 19:01:13.157986 ignition[1135]: kargs: kargs passed Feb 9 19:01:13.158044 ignition[1135]: Ignition finished successfully Feb 9 19:01:13.163766 systemd[1]: Finished ignition-kargs.service. Feb 9 19:01:13.165234 systemd[1]: Starting ignition-disks.service... Feb 9 19:01:13.162000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:13.172456 kernel: audit: type=1130 audit(1707505273.162:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:13.179581 ignition[1141]: Ignition 2.14.0 Feb 9 19:01:13.179593 ignition[1141]: Stage: disks Feb 9 19:01:13.179754 ignition[1141]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 19:01:13.179774 ignition[1141]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Feb 9 19:01:13.189057 ignition[1141]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Feb 9 19:01:13.190974 ignition[1141]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Feb 9 19:01:13.194001 ignition[1141]: INFO : PUT result: OK Feb 9 19:01:13.198178 ignition[1141]: disks: disks passed Feb 9 19:01:13.198236 ignition[1141]: Ignition finished successfully Feb 9 19:01:13.202747 systemd[1]: Finished ignition-disks.service. Feb 9 19:01:13.204000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:13.206102 systemd[1]: Reached target initrd-root-device.target. Feb 9 19:01:13.214608 kernel: audit: type=1130 audit(1707505273.204:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:13.215444 systemd[1]: Reached target local-fs-pre.target. Feb 9 19:01:13.221455 systemd[1]: Reached target local-fs.target. 
Feb 9 19:01:13.226172 systemd[1]: Reached target sysinit.target. Feb 9 19:01:13.228844 systemd[1]: Reached target basic.target. Feb 9 19:01:13.235175 systemd[1]: Starting systemd-fsck-root.service... Feb 9 19:01:13.295625 systemd-fsck[1149]: ROOT: clean, 602/553520 files, 56014/553472 blocks Feb 9 19:01:13.301837 systemd[1]: Finished systemd-fsck-root.service. Feb 9 19:01:13.304654 systemd[1]: Mounting sysroot.mount... Feb 9 19:01:13.301000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:13.314367 kernel: audit: type=1130 audit(1707505273.301:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:13.328372 kernel: EXT4-fs (nvme0n1p9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Feb 9 19:01:13.330646 systemd[1]: Mounted sysroot.mount. Feb 9 19:01:13.333223 systemd[1]: Reached target initrd-root-fs.target. Feb 9 19:01:13.348845 systemd[1]: Mounting sysroot-usr.mount... Feb 9 19:01:13.364430 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Feb 9 19:01:13.364681 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Feb 9 19:01:13.364723 systemd[1]: Reached target ignition-diskful.target. Feb 9 19:01:13.383286 systemd[1]: Mounted sysroot-usr.mount. Feb 9 19:01:13.395964 systemd[1]: Mounting sysroot-usr-share-oem.mount... Feb 9 19:01:13.407269 systemd[1]: Starting initrd-setup-root.service... Feb 9 19:01:13.421396 initrd-setup-root[1171]: cut: /sysroot/etc/passwd: No such file or directory Feb 9 19:01:13.431463 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1166) Feb 9 19:01:13.431523 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Feb 9 19:01:13.435002 kernel: BTRFS info (device nvme0n1p6): using free space tree Feb 9 19:01:13.435092 kernel: BTRFS info (device nvme0n1p6): has skinny extents Feb 9 19:01:13.441020 initrd-setup-root[1195]: cut: /sysroot/etc/group: No such file or directory Feb 9 19:01:13.451372 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Feb 9 19:01:13.457549 systemd[1]: Mounted sysroot-usr-share-oem.mount. Feb 9 19:01:13.463411 initrd-setup-root[1205]: cut: /sysroot/etc/shadow: No such file or directory Feb 9 19:01:13.473323 initrd-setup-root[1213]: cut: /sysroot/etc/gshadow: No such file or directory Feb 9 19:01:13.616489 systemd-networkd[1103]: eth0: Gained IPv6LL Feb 9 19:01:13.668553 systemd[1]: Finished initrd-setup-root.service. Feb 9 19:01:13.678385 kernel: audit: type=1130 audit(1707505273.670:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:13.670000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:13.672222 systemd[1]: Starting ignition-mount.service... Feb 9 19:01:13.680005 systemd[1]: Starting sysroot-boot.service... Feb 9 19:01:13.688336 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. 
Feb 9 19:01:13.688481 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. Feb 9 19:01:13.719285 ignition[1232]: INFO : Ignition 2.14.0 Feb 9 19:01:13.719285 ignition[1232]: INFO : Stage: mount Feb 9 19:01:13.719285 ignition[1232]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 19:01:13.719285 ignition[1232]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Feb 9 19:01:13.723913 systemd[1]: Finished sysroot-boot.service. Feb 9 19:01:13.732103 ignition[1232]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Feb 9 19:01:13.739091 kernel: audit: type=1130 audit(1707505273.730:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:13.730000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:13.739289 ignition[1232]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Feb 9 19:01:13.739289 ignition[1232]: INFO : PUT result: OK Feb 9 19:01:13.746811 ignition[1232]: INFO : mount: mount passed Feb 9 19:01:13.748105 ignition[1232]: INFO : Ignition finished successfully Feb 9 19:01:13.750178 systemd[1]: Finished ignition-mount.service. Feb 9 19:01:13.750000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:13.753246 systemd[1]: Starting ignition-files.service... Feb 9 19:01:13.758289 kernel: audit: type=1130 audit(1707505273.750:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:13.762523 systemd[1]: Mounting sysroot-usr-share-oem.mount... Feb 9 19:01:13.777377 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/nvme0n1p6 scanned by mount (1241) Feb 9 19:01:13.781371 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Feb 9 19:01:13.781433 kernel: BTRFS info (device nvme0n1p6): using free space tree Feb 9 19:01:13.781451 kernel: BTRFS info (device nvme0n1p6): has skinny extents Feb 9 19:01:13.789370 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Feb 9 19:01:13.792228 systemd[1]: Mounted sysroot-usr-share-oem.mount. 
Feb 9 19:01:13.807742 ignition[1260]: INFO : Ignition 2.14.0 Feb 9 19:01:13.807742 ignition[1260]: INFO : Stage: files Feb 9 19:01:13.807742 ignition[1260]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 19:01:13.807742 ignition[1260]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Feb 9 19:01:13.822756 ignition[1260]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Feb 9 19:01:13.825448 ignition[1260]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Feb 9 19:01:13.829188 ignition[1260]: INFO : PUT result: OK Feb 9 19:01:13.833467 ignition[1260]: DEBUG : files: compiled without relabeling support, skipping Feb 9 19:01:13.838600 ignition[1260]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Feb 9 19:01:13.838600 ignition[1260]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Feb 9 19:01:13.871802 ignition[1260]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Feb 9 19:01:13.874133 ignition[1260]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Feb 9 19:01:13.876936 unknown[1260]: wrote ssh authorized keys file for user: core Feb 9 19:01:13.878855 ignition[1260]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Feb 9 19:01:13.881270 ignition[1260]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.1.1.tgz" Feb 9 19:01:13.884604 ignition[1260]: INFO : GET https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-amd64-v1.1.1.tgz: attempt #1 Feb 9 19:01:14.364185 ignition[1260]: INFO : GET result: OK Feb 9 19:01:14.628572 ignition[1260]: DEBUG : file matches expected sum of: 4d0ed0abb5951b9cf83cba938ef84bdc5b681f4ac869da8143974f6a53a3ff30c666389fa462b9d14d30af09bf03f6cdf77598c572f8fb3ea00cecdda467a48d Feb 9 19:01:14.631472 ignition[1260]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.1.1.tgz" Feb 9 19:01:14.631472 ignition[1260]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Feb 9 19:01:14.631472 ignition[1260]: INFO : GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Feb 9 19:01:14.675858 ignition[1260]: INFO : GET result: OK Feb 9 19:01:14.796271 ignition[1260]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Feb 9 19:01:14.799191 ignition[1260]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/crictl-v1.26.0-linux-amd64.tar.gz" Feb 9 19:01:14.799191 ignition[1260]: INFO : GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.26.0/crictl-v1.26.0-linux-amd64.tar.gz: attempt #1 Feb 9 19:01:15.187860 ignition[1260]: INFO : GET result: OK Feb 9 19:01:15.338550 ignition[1260]: DEBUG : file matches expected sum of: a3a2c02a90b008686c20babaf272e703924db2a3e2a0d4e2a7c81d994cbc68c47458a4a354ecc243af095b390815c7f203348b9749351ae817bd52a522300449 Feb 9 19:01:15.342812 ignition[1260]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/crictl-v1.26.0-linux-amd64.tar.gz" Feb 9 19:01:15.342812 ignition[1260]: INFO : files: 
createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/bin/kubectl" Feb 9 19:01:15.347750 ignition[1260]: INFO : GET https://dl.k8s.io/release/v1.26.5/bin/linux/amd64/kubectl: attempt #1 Feb 9 19:01:15.462693 ignition[1260]: INFO : GET result: OK Feb 9 19:01:15.765614 ignition[1260]: DEBUG : file matches expected sum of: 97840854134909d75a1a2563628cc4ba632067369ce7fc8a8a1e90a387d32dd7bfd73f4f5b5a82ef842088e7470692951eb7fc869c5f297dd740f855672ee628 Feb 9 19:01:15.765614 ignition[1260]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/bin/kubectl" Feb 9 19:01:15.773162 ignition[1260]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/etc/docker/daemon.json" Feb 9 19:01:15.773162 ignition[1260]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/etc/docker/daemon.json" Feb 9 19:01:15.773162 ignition[1260]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/eks/bootstrap.sh" Feb 9 19:01:15.773162 ignition[1260]: INFO : oem config not found in "/usr/share/oem", looking on oem partition Feb 9 19:01:15.790428 ignition[1260]: INFO : op(1): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1196488186" Feb 9 19:01:15.794492 kernel: BTRFS info: devid 1 device path /dev/nvme0n1p6 changed to /dev/disk/by-label/OEM scanned by ignition (1263) Feb 9 19:01:15.794540 ignition[1260]: CRITICAL : op(1): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1196488186": device or resource busy Feb 9 19:01:15.794540 ignition[1260]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem1196488186", trying btrfs: device or resource busy Feb 9 19:01:15.794540 ignition[1260]: INFO : op(2): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1196488186" Feb 9 19:01:15.803272 ignition[1260]: INFO : op(2): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1196488186" Feb 9 19:01:15.816387 ignition[1260]: INFO : op(3): [started] unmounting "/mnt/oem1196488186" Feb 9 19:01:15.818013 ignition[1260]: INFO : op(3): [finished] unmounting "/mnt/oem1196488186" Feb 9 19:01:15.818013 ignition[1260]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/eks/bootstrap.sh" Feb 9 19:01:15.818013 ignition[1260]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/opt/bin/kubeadm" Feb 9 19:01:15.818013 ignition[1260]: INFO : GET https://dl.k8s.io/release/v1.26.5/bin/linux/amd64/kubeadm: attempt #1 Feb 9 19:01:15.839957 systemd[1]: mnt-oem1196488186.mount: Deactivated successfully. 
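
Each artifact in the files stage (cni-plugins, helm, crictl, kubectl, and so on) is downloaded and checked against a pinned SHA512 before being written under /sysroot; that is the "GET result: OK" followed by "file matches expected sum of: ..." pattern above. A hedged sketch of the verify-then-write step (the function name and the retry-free behavior are illustrative, not Ignition's actual code):

    import hashlib
    import urllib.request

    def fetch_verified(url, dest, expected_sha512):
        data = urllib.request.urlopen(url, timeout=30).read()
        actual = hashlib.sha512(data).hexdigest()
        if actual != expected_sha512:
            raise ValueError(f"checksum mismatch for {url}: {actual}")
        with open(dest, "wb") as f:
            f.write(data)
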
Feb 9 19:01:15.883797 ignition[1260]: INFO : GET result: OK Feb 9 19:01:16.135411 ignition[1260]: DEBUG : file matches expected sum of: 1c324cd645a7bf93d19d24c87498d9a17878eb1cc927e2680200ffeab2f85051ddec47d85b79b8e774042dc6726299ad3d7caf52c060701f00deba30dc33f660 Feb 9 19:01:16.142428 ignition[1260]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/opt/bin/kubeadm" Feb 9 19:01:16.142428 ignition[1260]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/bin/kubelet" Feb 9 19:01:16.142428 ignition[1260]: INFO : GET https://dl.k8s.io/release/v1.26.5/bin/linux/amd64/kubelet: attempt #1 Feb 9 19:01:16.209071 ignition[1260]: INFO : GET result: OK Feb 9 19:01:16.886833 ignition[1260]: DEBUG : file matches expected sum of: 40daf2a9b9e666c14b10e627da931bd79978628b1f23ef6429c1cb4fcba261f86ccff440c0dbb0070ee760fe55772b4fd279c4582dfbb17fa30bc94b7f00126b Feb 9 19:01:16.890231 ignition[1260]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/bin/kubelet" Feb 9 19:01:16.894660 ignition[1260]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/etc/flatcar/update.conf" Feb 9 19:01:16.896784 ignition[1260]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/etc/flatcar/update.conf" Feb 9 19:01:16.896784 ignition[1260]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 9 19:01:16.901521 ignition[1260]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 9 19:01:16.901521 ignition[1260]: INFO : files: createFilesystemsFiles: createFiles: op(d): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 9 19:01:16.906475 ignition[1260]: INFO : files: createFilesystemsFiles: createFiles: op(d): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 9 19:01:16.906475 ignition[1260]: INFO : files: createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Feb 9 19:01:16.906475 ignition[1260]: INFO : GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Feb 9 19:01:17.307229 ignition[1260]: INFO : GET result: OK Feb 9 19:01:17.466055 ignition[1260]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Feb 9 19:01:17.468985 ignition[1260]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/home/core/install.sh" Feb 9 19:01:17.471231 ignition[1260]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/home/core/install.sh" Feb 9 19:01:17.471231 ignition[1260]: INFO : files: createFilesystemsFiles: createFiles: op(10): [started] writing file "/sysroot/home/core/nginx.yaml" Feb 9 19:01:17.477240 ignition[1260]: INFO : files: createFilesystemsFiles: createFiles: op(10): [finished] writing file "/sysroot/home/core/nginx.yaml" Feb 9 19:01:17.479736 ignition[1260]: INFO : files: createFilesystemsFiles: createFiles: op(11): [started] writing file "/sysroot/etc/systemd/system/nvidia.service" Feb 9 19:01:17.482197 ignition[1260]: INFO : oem config not found in "/usr/share/oem", looking on oem partition Feb 9 19:01:17.490744 ignition[1260]: INFO : op(4): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1378275666" Feb 9 19:01:17.490744 
ignition[1260]: CRITICAL : op(4): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1378275666": device or resource busy Feb 9 19:01:17.490744 ignition[1260]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem1378275666", trying btrfs: device or resource busy Feb 9 19:01:17.490744 ignition[1260]: INFO : op(5): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1378275666" Feb 9 19:01:17.503024 ignition[1260]: INFO : op(5): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1378275666" Feb 9 19:01:17.503024 ignition[1260]: INFO : op(6): [started] unmounting "/mnt/oem1378275666" Feb 9 19:01:17.503024 ignition[1260]: INFO : op(6): [finished] unmounting "/mnt/oem1378275666" Feb 9 19:01:17.503024 ignition[1260]: INFO : files: createFilesystemsFiles: createFiles: op(11): [finished] writing file "/sysroot/etc/systemd/system/nvidia.service" Feb 9 19:01:17.503024 ignition[1260]: INFO : files: createFilesystemsFiles: createFiles: op(12): [started] writing file "/sysroot/etc/amazon/ssm/seelog.xml" Feb 9 19:01:17.503024 ignition[1260]: INFO : oem config not found in "/usr/share/oem", looking on oem partition Feb 9 19:01:17.495861 systemd[1]: mnt-oem1378275666.mount: Deactivated successfully. Feb 9 19:01:17.522605 ignition[1260]: INFO : op(7): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem559682490" Feb 9 19:01:17.524322 ignition[1260]: CRITICAL : op(7): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem559682490": device or resource busy Feb 9 19:01:17.524322 ignition[1260]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem559682490", trying btrfs: device or resource busy Feb 9 19:01:17.524322 ignition[1260]: INFO : op(8): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem559682490" Feb 9 19:01:17.534948 ignition[1260]: INFO : op(8): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem559682490" Feb 9 19:01:17.536936 ignition[1260]: INFO : op(9): [started] unmounting "/mnt/oem559682490" Feb 9 19:01:17.537291 systemd[1]: mnt-oem559682490.mount: Deactivated successfully. 
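
Files destined for the OEM partition follow a recurring pattern in the entries above: Ignition mounts /dev/disk/by-label/OEM at a throwaway /mnt/oemXXXXXXXXXX directory, trying ext4 first and falling back to btrfs (hence the one logged CRITICAL ext4 failure before every successful btrfs mount), writes the file, then unmounts. A rough sketch of that fallback loop via mount(8); the temp-dir naming is only approximated:

    import subprocess
    import tempfile

    def mount_oem(device="/dev/disk/by-label/OEM"):
        mnt = tempfile.mkdtemp(prefix="oem", dir="/mnt")
        for fstype in ("ext4", "btrfs"):
            # Matches the log's order: an ext4 attempt, then btrfs on failure.
            if subprocess.run(["mount", "-t", fstype, device, mnt]).returncode == 0:
                return mnt
        raise RuntimeError(f"could not mount {device}")
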
Feb 9 19:01:17.540220 ignition[1260]: INFO : op(9): [finished] unmounting "/mnt/oem559682490" Feb 9 19:01:17.540220 ignition[1260]: INFO : files: createFilesystemsFiles: createFiles: op(12): [finished] writing file "/sysroot/etc/amazon/ssm/seelog.xml" Feb 9 19:01:17.540220 ignition[1260]: INFO : files: createFilesystemsFiles: createFiles: op(13): [started] writing file "/sysroot/etc/amazon/ssm/amazon-ssm-agent.json" Feb 9 19:01:17.540220 ignition[1260]: INFO : oem config not found in "/usr/share/oem", looking on oem partition Feb 9 19:01:17.571252 ignition[1260]: INFO : op(a): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3311046930" Feb 9 19:01:17.571252 ignition[1260]: CRITICAL : op(a): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3311046930": device or resource busy Feb 9 19:01:17.571252 ignition[1260]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem3311046930", trying btrfs: device or resource busy Feb 9 19:01:17.571252 ignition[1260]: INFO : op(b): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3311046930" Feb 9 19:01:17.583411 ignition[1260]: INFO : op(b): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3311046930" Feb 9 19:01:17.585721 ignition[1260]: INFO : op(c): [started] unmounting "/mnt/oem3311046930" Feb 9 19:01:17.589451 ignition[1260]: INFO : op(c): [finished] unmounting "/mnt/oem3311046930" Feb 9 19:01:17.596944 ignition[1260]: INFO : files: createFilesystemsFiles: createFiles: op(13): [finished] writing file "/sysroot/etc/amazon/ssm/amazon-ssm-agent.json" Feb 9 19:01:17.596944 ignition[1260]: INFO : files: op(14): [started] processing unit "nvidia.service" Feb 9 19:01:17.596944 ignition[1260]: INFO : files: op(14): [finished] processing unit "nvidia.service" Feb 9 19:01:17.596944 ignition[1260]: INFO : files: op(15): [started] processing unit "coreos-metadata-sshkeys@.service" Feb 9 19:01:17.596944 ignition[1260]: INFO : files: op(15): [finished] processing unit "coreos-metadata-sshkeys@.service" Feb 9 19:01:17.596944 ignition[1260]: INFO : files: op(16): [started] processing unit "amazon-ssm-agent.service" Feb 9 19:01:17.596944 ignition[1260]: INFO : files: op(16): op(17): [started] writing unit "amazon-ssm-agent.service" at "/sysroot/etc/systemd/system/amazon-ssm-agent.service" Feb 9 19:01:17.596944 ignition[1260]: INFO : files: op(16): op(17): [finished] writing unit "amazon-ssm-agent.service" at "/sysroot/etc/systemd/system/amazon-ssm-agent.service" Feb 9 19:01:17.596944 ignition[1260]: INFO : files: op(16): [finished] processing unit "amazon-ssm-agent.service" Feb 9 19:01:17.596944 ignition[1260]: INFO : files: op(18): [started] processing unit "prepare-helm.service" Feb 9 19:01:17.596944 ignition[1260]: INFO : files: op(18): op(19): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 9 19:01:17.596944 ignition[1260]: INFO : files: op(18): op(19): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 9 19:01:17.596944 ignition[1260]: INFO : files: op(18): [finished] processing unit "prepare-helm.service" Feb 9 19:01:17.596944 ignition[1260]: INFO : files: op(1a): [started] processing unit "prepare-cni-plugins.service" Feb 9 19:01:17.596944 ignition[1260]: INFO : files: op(1a): op(1b): [started] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Feb 9 19:01:17.596944 ignition[1260]: INFO : files: op(1a): op(1b): [finished] writing unit 
"prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Feb 9 19:01:17.596944 ignition[1260]: INFO : files: op(1a): [finished] processing unit "prepare-cni-plugins.service" Feb 9 19:01:17.596944 ignition[1260]: INFO : files: op(1c): [started] processing unit "prepare-critools.service" Feb 9 19:01:17.596944 ignition[1260]: INFO : files: op(1c): op(1d): [started] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Feb 9 19:01:17.667216 kernel: audit: type=1130 audit(1707505277.644:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:17.644000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:17.667382 ignition[1260]: INFO : files: op(1c): op(1d): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Feb 9 19:01:17.667382 ignition[1260]: INFO : files: op(1c): [finished] processing unit "prepare-critools.service" Feb 9 19:01:17.667382 ignition[1260]: INFO : files: op(1e): [started] setting preset to enabled for "nvidia.service" Feb 9 19:01:17.667382 ignition[1260]: INFO : files: op(1e): [finished] setting preset to enabled for "nvidia.service" Feb 9 19:01:17.667382 ignition[1260]: INFO : files: op(1f): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service " Feb 9 19:01:17.667382 ignition[1260]: INFO : files: op(1f): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service " Feb 9 19:01:17.667382 ignition[1260]: INFO : files: op(20): [started] setting preset to enabled for "amazon-ssm-agent.service" Feb 9 19:01:17.667382 ignition[1260]: INFO : files: op(20): [finished] setting preset to enabled for "amazon-ssm-agent.service" Feb 9 19:01:17.667382 ignition[1260]: INFO : files: op(21): [started] setting preset to enabled for "prepare-helm.service" Feb 9 19:01:17.667382 ignition[1260]: INFO : files: op(21): [finished] setting preset to enabled for "prepare-helm.service" Feb 9 19:01:17.667382 ignition[1260]: INFO : files: op(22): [started] setting preset to enabled for "prepare-cni-plugins.service" Feb 9 19:01:17.667382 ignition[1260]: INFO : files: op(22): [finished] setting preset to enabled for "prepare-cni-plugins.service" Feb 9 19:01:17.667382 ignition[1260]: INFO : files: op(23): [started] setting preset to enabled for "prepare-critools.service" Feb 9 19:01:17.667382 ignition[1260]: INFO : files: op(23): [finished] setting preset to enabled for "prepare-critools.service" Feb 9 19:01:17.667382 ignition[1260]: INFO : files: createResultFile: createFiles: op(24): [started] writing file "/sysroot/etc/.ignition-result.json" Feb 9 19:01:17.667382 ignition[1260]: INFO : files: createResultFile: createFiles: op(24): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 9 19:01:17.667382 ignition[1260]: INFO : files: files passed Feb 9 19:01:17.667382 ignition[1260]: INFO : Ignition finished successfully Feb 9 19:01:17.635210 systemd[1]: Finished ignition-files.service. Feb 9 19:01:17.661407 systemd[1]: Starting initrd-setup-root-after-ignition.service... 
Feb 9 19:01:17.720298 initrd-setup-root-after-ignition[1283]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 9 19:01:17.678153 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Feb 9 19:01:17.736143 kernel: audit: type=1130 audit(1707505277.726:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:17.726000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:17.696578 systemd[1]: Starting ignition-quench.service... Feb 9 19:01:17.743418 kernel: audit: type=1130 audit(1707505277.736:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:17.736000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:17.736000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:17.720234 systemd[1]: Finished initrd-setup-root-after-ignition.service. Feb 9 19:01:17.736440 systemd[1]: ignition-quench.service: Deactivated successfully. Feb 9 19:01:17.736531 systemd[1]: Finished ignition-quench.service. Feb 9 19:01:17.743491 systemd[1]: Reached target ignition-complete.target. Feb 9 19:01:17.749780 systemd[1]: Starting initrd-parse-etc.service... Feb 9 19:01:17.771398 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 9 19:01:17.771508 systemd[1]: Finished initrd-parse-etc.service. Feb 9 19:01:17.777000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:17.777000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:17.781942 systemd[1]: Reached target initrd-fs.target. Feb 9 19:01:17.784004 systemd[1]: Reached target initrd.target. Feb 9 19:01:17.787670 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Feb 9 19:01:17.792054 systemd[1]: Starting dracut-pre-pivot.service... Feb 9 19:01:17.810674 systemd[1]: Finished dracut-pre-pivot.service. Feb 9 19:01:17.810000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:17.812741 systemd[1]: Starting initrd-cleanup.service... Feb 9 19:01:17.824149 systemd[1]: Stopped target nss-lookup.target. Feb 9 19:01:17.827614 systemd[1]: Stopped target remote-cryptsetup.target. Feb 9 19:01:17.827812 systemd[1]: Stopped target timers.target. Feb 9 19:01:17.832648 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 9 19:01:17.834397 systemd[1]: Stopped dracut-pre-pivot.service. 
Feb 9 19:01:17.835000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:17.836877 systemd[1]: Stopped target initrd.target. Feb 9 19:01:17.838598 systemd[1]: Stopped target basic.target. Feb 9 19:01:17.847848 systemd[1]: Stopped target ignition-complete.target. Feb 9 19:01:17.855557 systemd[1]: Stopped target ignition-diskful.target. Feb 9 19:01:17.860534 systemd[1]: Stopped target initrd-root-device.target. Feb 9 19:01:17.866651 systemd[1]: Stopped target remote-fs.target. Feb 9 19:01:17.872704 systemd[1]: Stopped target remote-fs-pre.target. Feb 9 19:01:17.876677 systemd[1]: Stopped target sysinit.target. Feb 9 19:01:17.878820 systemd[1]: Stopped target local-fs.target. Feb 9 19:01:17.884825 systemd[1]: Stopped target local-fs-pre.target. Feb 9 19:01:17.887589 systemd[1]: Stopped target swap.target. Feb 9 19:01:17.890032 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 9 19:01:17.890000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:17.890209 systemd[1]: Stopped dracut-pre-mount.service. Feb 9 19:01:17.892153 systemd[1]: Stopped target cryptsetup.target. Feb 9 19:01:17.896412 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 9 19:01:17.897804 systemd[1]: Stopped dracut-initqueue.service. Feb 9 19:01:17.898000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:17.900281 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 9 19:01:17.902268 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Feb 9 19:01:17.904000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:17.905490 systemd[1]: ignition-files.service: Deactivated successfully. Feb 9 19:01:17.906894 systemd[1]: Stopped ignition-files.service. Feb 9 19:01:17.907000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:17.909820 systemd[1]: Stopping ignition-mount.service... Feb 9 19:01:17.924000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:17.912830 systemd[1]: Stopping sysroot-boot.service... Feb 9 19:01:17.915906 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 9 19:01:17.916128 systemd[1]: Stopped systemd-udev-trigger.service. Feb 9 19:01:17.926096 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. 
Feb 9 19:01:17.936996 ignition[1298]: INFO : Ignition 2.14.0 Feb 9 19:01:17.936996 ignition[1298]: INFO : Stage: umount Feb 9 19:01:17.936996 ignition[1298]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 19:01:17.936996 ignition[1298]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Feb 9 19:01:17.942000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:17.926706 systemd[1]: Stopped dracut-pre-trigger.service. Feb 9 19:01:17.947787 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 9 19:01:17.947905 systemd[1]: Finished initrd-cleanup.service. Feb 9 19:01:17.953000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:17.953000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:17.959101 ignition[1298]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Feb 9 19:01:17.961032 ignition[1298]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Feb 9 19:01:17.968310 ignition[1298]: INFO : PUT result: OK Feb 9 19:01:17.973527 ignition[1298]: INFO : umount: umount passed Feb 9 19:01:17.974689 ignition[1298]: INFO : Ignition finished successfully Feb 9 19:01:17.977755 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 9 19:01:17.977875 systemd[1]: Stopped ignition-mount.service. Feb 9 19:01:17.981000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:17.983251 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 9 19:01:17.985033 systemd[1]: Stopped ignition-disks.service. Feb 9 19:01:17.986000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:17.987677 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 9 19:01:17.987735 systemd[1]: Stopped ignition-kargs.service. Feb 9 19:01:17.988000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:17.990233 systemd[1]: ignition-fetch.service: Deactivated successfully. Feb 9 19:01:17.989000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:17.990274 systemd[1]: Stopped ignition-fetch.service. Feb 9 19:01:17.992292 systemd[1]: Stopped target network.target. Feb 9 19:01:17.993259 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 9 19:01:17.994000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:01:17.994059 systemd[1]: Stopped ignition-fetch-offline.service. Feb 9 19:01:17.995936 systemd[1]: Stopped target paths.target. Feb 9 19:01:17.997913 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 9 19:01:18.001554 systemd[1]: Stopped systemd-ask-password-console.path. Feb 9 19:01:18.003041 systemd[1]: Stopped target slices.target. Feb 9 19:01:18.005700 systemd[1]: Stopped target sockets.target. Feb 9 19:01:18.007781 systemd[1]: iscsid.socket: Deactivated successfully. Feb 9 19:01:18.008925 systemd[1]: Closed iscsid.socket. Feb 9 19:01:18.010671 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 9 19:01:18.010710 systemd[1]: Closed iscsiuio.socket. Feb 9 19:01:18.014404 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 9 19:01:18.019000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:18.018240 systemd[1]: Stopped ignition-setup.service. Feb 9 19:01:18.020662 systemd[1]: Stopping systemd-networkd.service... Feb 9 19:01:18.022905 systemd[1]: Stopping systemd-resolved.service... Feb 9 19:01:18.029000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:18.027695 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 9 19:01:18.033000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:18.027806 systemd[1]: Stopped sysroot-boot.service. Feb 9 19:01:18.028632 systemd-networkd[1103]: eth0: DHCPv6 lease lost Feb 9 19:01:18.039000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:18.030788 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 9 19:01:18.040000 audit: BPF prog-id=9 op=UNLOAD Feb 9 19:01:18.040000 audit: BPF prog-id=6 op=UNLOAD Feb 9 19:01:18.030883 systemd[1]: Stopped systemd-networkd.service. Feb 9 19:01:18.038160 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 9 19:01:18.049000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:18.038276 systemd[1]: Stopped systemd-resolved.service. Feb 9 19:01:18.042345 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 9 19:01:18.042401 systemd[1]: Closed systemd-networkd.socket. Feb 9 19:01:18.046218 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 9 19:01:18.046297 systemd[1]: Stopped initrd-setup-root.service. Feb 9 19:01:18.061832 systemd[1]: Stopping network-cleanup.service... Feb 9 19:01:18.063000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:18.062926 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. 
Feb 9 19:01:18.065000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:18.062991 systemd[1]: Stopped parse-ip-for-networkd.service. Feb 9 19:01:18.065293 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 9 19:01:18.070000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:18.065344 systemd[1]: Stopped systemd-sysctl.service. Feb 9 19:01:18.068455 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 9 19:01:18.069932 systemd[1]: Stopped systemd-modules-load.service. Feb 9 19:01:18.074794 systemd[1]: Stopping systemd-udevd.service... Feb 9 19:01:18.082883 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 9 19:01:18.084207 systemd[1]: Stopped systemd-udevd.service. Feb 9 19:01:18.084000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:18.086997 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 9 19:01:18.087071 systemd[1]: Closed systemd-udevd-control.socket. Feb 9 19:01:18.088258 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 9 19:01:18.091000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:18.095000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:18.088313 systemd[1]: Closed systemd-udevd-kernel.socket. Feb 9 19:01:18.099000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:18.090720 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 9 19:01:18.090784 systemd[1]: Stopped dracut-pre-udev.service. Feb 9 19:01:18.092686 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 9 19:01:18.092740 systemd[1]: Stopped dracut-cmdline.service. Feb 9 19:01:18.096592 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 9 19:01:18.096638 systemd[1]: Stopped dracut-cmdline-ask.service. Feb 9 19:01:18.111000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:18.101616 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Feb 9 19:01:18.110534 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Feb 9 19:01:18.110606 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. Feb 9 19:01:18.118000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:18.114238 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 9 19:01:18.116178 systemd[1]: Stopped kmod-static-nodes.service. 
Feb 9 19:01:18.121565 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 9 19:01:18.124266 systemd[1]: Stopped systemd-vconsole-setup.service. Feb 9 19:01:18.132321 kernel: kauditd_printk_skb: 34 callbacks suppressed Feb 9 19:01:18.132402 kernel: audit: type=1131 audit(1707505278.125:72): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:18.125000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:18.127820 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 9 19:01:18.134000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:18.127917 systemd[1]: Stopped network-cleanup.service. Feb 9 19:01:18.145459 kernel: audit: type=1131 audit(1707505278.134:73): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:18.136215 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 9 19:01:18.155968 kernel: audit: type=1130 audit(1707505278.143:74): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:18.156004 kernel: audit: type=1131 audit(1707505278.144:75): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:18.143000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:18.144000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:18.136341 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Feb 9 19:01:18.151585 systemd[1]: Reached target initrd-switch-root.target. Feb 9 19:01:18.162201 systemd[1]: Starting initrd-switch-root.service... Feb 9 19:01:18.175641 systemd[1]: Switching root. Feb 9 19:01:18.209044 iscsid[1110]: iscsid shutting down. Feb 9 19:01:18.209981 systemd-journald[185]: Received SIGTERM from PID 1 (n/a). Feb 9 19:01:18.210049 systemd-journald[185]: Journal stopped Feb 9 19:01:24.090856 kernel: SELinux: Class mctp_socket not defined in policy. Feb 9 19:01:24.090936 kernel: SELinux: Class anon_inode not defined in policy. 
Feb 9 19:01:24.090956 kernel: SELinux: the above unknown classes and permissions will be allowed Feb 9 19:01:24.090984 kernel: SELinux: policy capability network_peer_controls=1 Feb 9 19:01:24.091002 kernel: SELinux: policy capability open_perms=1 Feb 9 19:01:24.091019 kernel: SELinux: policy capability extended_socket_class=1 Feb 9 19:01:24.091042 kernel: SELinux: policy capability always_check_network=0 Feb 9 19:01:24.091059 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 9 19:01:24.091081 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 9 19:01:24.091099 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 9 19:01:24.091117 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 9 19:01:24.091137 kernel: audit: type=1403 audit(1707505278.944:76): auid=4294967295 ses=4294967295 lsm=selinux res=1 Feb 9 19:01:24.091157 systemd[1]: Successfully loaded SELinux policy in 106.186ms. Feb 9 19:01:24.091193 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.490ms. Feb 9 19:01:24.091213 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Feb 9 19:01:24.091233 systemd[1]: Detected virtualization amazon. Feb 9 19:01:24.091253 systemd[1]: Detected architecture x86-64. Feb 9 19:01:24.091271 systemd[1]: Detected first boot. Feb 9 19:01:24.091290 systemd[1]: Initializing machine ID from VM UUID. Feb 9 19:01:24.091311 kernel: audit: type=1400 audit(1707505279.151:77): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 9 19:01:24.091329 kernel: audit: type=1400 audit(1707505279.151:78): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 9 19:01:24.091346 kernel: audit: type=1334 audit(1707505279.156:79): prog-id=10 op=LOAD Feb 9 19:01:24.091390 kernel: audit: type=1334 audit(1707505279.156:80): prog-id=10 op=UNLOAD Feb 9 19:01:24.091406 kernel: audit: type=1334 audit(1707505279.160:81): prog-id=11 op=LOAD Feb 9 19:01:24.091424 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Feb 9 19:01:24.091442 systemd[1]: Populated /etc with preset unit settings. Feb 9 19:01:24.091463 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 19:01:24.091489 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 19:01:24.091513 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
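
The "SELinux: policy capability X=Y" lines above report flags baked into the policy the kernel just loaded; once selinuxfs is mounted, the kernel exposes the same flags as one file per capability. A small sketch that reprints them (assumes selinuxfs is mounted at the usual /sys/fs/selinux):

    import os

    CAPS_DIR = "/sys/fs/selinux/policy_capabilities"

    for name in sorted(os.listdir(CAPS_DIR)):
        with open(os.path.join(CAPS_DIR, name)) as f:
            print(f"SELinux: policy capability {name}={f.read().strip()}")
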
Feb 9 19:01:24.091531 kernel: kauditd_printk_skb: 10 callbacks suppressed Feb 9 19:01:24.091548 kernel: audit: type=1334 audit(1707505283.743:85): prog-id=12 op=LOAD Feb 9 19:01:24.091564 kernel: audit: type=1334 audit(1707505283.743:86): prog-id=3 op=UNLOAD Feb 9 19:01:24.091581 kernel: audit: type=1334 audit(1707505283.744:87): prog-id=13 op=LOAD Feb 9 19:01:24.091599 kernel: audit: type=1334 audit(1707505283.745:88): prog-id=14 op=LOAD Feb 9 19:01:24.091738 kernel: audit: type=1334 audit(1707505283.745:89): prog-id=4 op=UNLOAD Feb 9 19:01:24.091762 kernel: audit: type=1334 audit(1707505283.745:90): prog-id=5 op=UNLOAD Feb 9 19:01:24.091780 kernel: audit: type=1334 audit(1707505283.749:91): prog-id=15 op=LOAD Feb 9 19:01:24.091800 kernel: audit: type=1334 audit(1707505283.749:92): prog-id=12 op=UNLOAD Feb 9 19:01:24.091817 kernel: audit: type=1334 audit(1707505283.751:93): prog-id=16 op=LOAD Feb 9 19:01:24.091834 kernel: audit: type=1334 audit(1707505283.754:94): prog-id=17 op=LOAD Feb 9 19:01:24.091853 systemd[1]: iscsiuio.service: Deactivated successfully. Feb 9 19:01:24.091872 systemd[1]: Stopped iscsiuio.service. Feb 9 19:01:24.091894 systemd[1]: iscsid.service: Deactivated successfully. Feb 9 19:01:24.091914 systemd[1]: Stopped iscsid.service. Feb 9 19:01:24.091932 systemd[1]: initrd-switch-root.service: Deactivated successfully. Feb 9 19:01:24.091954 systemd[1]: Stopped initrd-switch-root.service. Feb 9 19:01:24.091973 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Feb 9 19:01:24.091992 systemd[1]: Created slice system-addon\x2dconfig.slice. Feb 9 19:01:24.092010 systemd[1]: Created slice system-addon\x2drun.slice. Feb 9 19:01:24.092029 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice. Feb 9 19:01:24.092048 systemd[1]: Created slice system-getty.slice. Feb 9 19:01:24.092069 systemd[1]: Created slice system-modprobe.slice. Feb 9 19:01:24.092096 systemd[1]: Created slice system-serial\x2dgetty.slice. Feb 9 19:01:24.092117 systemd[1]: Created slice system-system\x2dcloudinit.slice. Feb 9 19:01:24.092135 systemd[1]: Created slice system-systemd\x2dfsck.slice. Feb 9 19:01:24.092156 systemd[1]: Created slice user.slice. Feb 9 19:01:24.092176 systemd[1]: Started systemd-ask-password-console.path. Feb 9 19:01:24.092194 systemd[1]: Started systemd-ask-password-wall.path. Feb 9 19:01:24.092214 systemd[1]: Set up automount boot.automount. Feb 9 19:01:24.092233 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Feb 9 19:01:24.092252 systemd[1]: Stopped target initrd-switch-root.target. Feb 9 19:01:24.092271 systemd[1]: Stopped target initrd-fs.target. Feb 9 19:01:24.092290 systemd[1]: Stopped target initrd-root-fs.target. Feb 9 19:01:24.092309 systemd[1]: Reached target integritysetup.target. Feb 9 19:01:24.092328 systemd[1]: Reached target remote-cryptsetup.target. Feb 9 19:01:24.092783 systemd[1]: Reached target remote-fs.target. Feb 9 19:01:24.092806 systemd[1]: Reached target slices.target. Feb 9 19:01:24.092825 systemd[1]: Reached target swap.target. Feb 9 19:01:24.092844 systemd[1]: Reached target torcx.target. Feb 9 19:01:24.092861 systemd[1]: Reached target veritysetup.target. Feb 9 19:01:24.092879 systemd[1]: Listening on systemd-coredump.socket. Feb 9 19:01:24.092898 systemd[1]: Listening on systemd-initctl.socket. Feb 9 19:01:24.092917 systemd[1]: Listening on systemd-networkd.socket. Feb 9 19:01:24.092935 systemd[1]: Listening on systemd-udevd-control.socket. 
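
Slice names such as system-addon\x2dconfig.slice look odd only because systemd escapes "-" inside a unit-name component as \x2d (and "/" as "-"); the slice is simply "addon-config" parented under system.slice. A tiny decoder for that escaping:

    import re

    def unescape_unit(name):
        # Reverse systemd's \xNN escaping of special characters.
        return re.sub(r"\\x([0-9a-fA-F]{2})",
                      lambda m: chr(int(m.group(1), 16)), name)

    print(unescape_unit(r"system-addon\x2dconfig.slice"))
    # -> system-addon-config.slice
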
Feb 9 19:01:24.092958 systemd[1]: Listening on systemd-udevd-kernel.socket. Feb 9 19:01:24.092976 systemd[1]: Listening on systemd-userdbd.socket. Feb 9 19:01:24.092995 systemd[1]: Mounting dev-hugepages.mount... Feb 9 19:01:24.093013 systemd[1]: Mounting dev-mqueue.mount... Feb 9 19:01:24.093030 systemd[1]: Mounting media.mount... Feb 9 19:01:24.093051 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 9 19:01:24.093070 systemd[1]: Mounting sys-kernel-debug.mount... Feb 9 19:01:24.093088 systemd[1]: Mounting sys-kernel-tracing.mount... Feb 9 19:01:24.093107 systemd[1]: Mounting tmp.mount... Feb 9 19:01:24.093129 systemd[1]: Starting flatcar-tmpfiles.service... Feb 9 19:01:24.093148 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Feb 9 19:01:24.093166 systemd[1]: Starting kmod-static-nodes.service... Feb 9 19:01:24.093185 systemd[1]: Starting modprobe@configfs.service... Feb 9 19:01:24.093203 systemd[1]: Starting modprobe@dm_mod.service... Feb 9 19:01:24.093221 systemd[1]: Starting modprobe@drm.service... Feb 9 19:01:24.093239 systemd[1]: Starting modprobe@efi_pstore.service... Feb 9 19:01:24.093258 systemd[1]: Starting modprobe@fuse.service... Feb 9 19:01:24.093277 systemd[1]: Starting modprobe@loop.service... Feb 9 19:01:24.093300 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 9 19:01:24.093320 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Feb 9 19:01:24.093338 systemd[1]: Stopped systemd-fsck-root.service. Feb 9 19:01:24.094215 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Feb 9 19:01:24.094241 systemd[1]: Stopped systemd-fsck-usr.service. Feb 9 19:01:24.094259 systemd[1]: Stopped systemd-journald.service. Feb 9 19:01:24.094277 systemd[1]: Starting systemd-journald.service... Feb 9 19:01:24.094295 systemd[1]: Starting systemd-modules-load.service... Feb 9 19:01:24.094319 systemd[1]: Starting systemd-network-generator.service... Feb 9 19:01:24.094343 systemd[1]: Starting systemd-remount-fs.service... Feb 9 19:01:24.094683 systemd[1]: Starting systemd-udev-trigger.service... Feb 9 19:01:24.094708 systemd[1]: verity-setup.service: Deactivated successfully. Feb 9 19:01:24.094729 systemd[1]: Stopped verity-setup.service. Feb 9 19:01:24.094749 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 9 19:01:24.094771 systemd[1]: Mounted dev-hugepages.mount. Feb 9 19:01:24.094792 systemd[1]: Mounted dev-mqueue.mount. Feb 9 19:01:24.094813 systemd[1]: Mounted media.mount. Feb 9 19:01:24.094835 systemd[1]: Mounted sys-kernel-debug.mount. Feb 9 19:01:24.094859 systemd[1]: Mounted sys-kernel-tracing.mount. Feb 9 19:01:24.094880 systemd[1]: Mounted tmp.mount. Feb 9 19:01:24.094902 systemd[1]: Finished kmod-static-nodes.service. Feb 9 19:01:24.094924 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 9 19:01:24.094944 systemd[1]: Finished modprobe@configfs.service. Feb 9 19:01:24.094968 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 9 19:01:24.094989 systemd[1]: Finished modprobe@dm_mod.service. Feb 9 19:01:24.095010 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 9 19:01:24.095030 kernel: loop: module loaded Feb 9 19:01:24.095051 systemd[1]: Finished modprobe@drm.service. 
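
The audit backlog that follows includes PROCTITLE records (for example the torcx-generator entries further below) whose value is the process argv, hex-encoded with NUL bytes separating the arguments. A decoder sketch, applied to a shortened prefix of that value:

    def decode_proctitle(hexstr):
        # argv entries are NUL-separated inside the hex-encoded blob.
        return [a.decode() for a in bytes.fromhex(hexstr).split(b"\x00")]

    print(decode_proctitle(
        "2F7573722F6C69622F73797374656D642F73797374656D2D67656E65"
        "7261746F72732F746F7263782D67656E657261746F72"))
    # -> ['/usr/lib/systemd/system-generators/torcx-generator']
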
Feb 9 19:01:24.095076 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 9 19:01:24.095098 systemd[1]: Finished modprobe@efi_pstore.service. Feb 9 19:01:24.095119 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 9 19:01:24.095139 systemd[1]: Finished modprobe@loop.service. Feb 9 19:01:24.095160 systemd[1]: Finished systemd-modules-load.service. Feb 9 19:01:24.095180 systemd[1]: Finished systemd-network-generator.service. Feb 9 19:01:24.095206 systemd-journald[1406]: Journal started Feb 9 19:01:24.095280 systemd-journald[1406]: Runtime Journal (/run/log/journal/ec2478105797fc2f35e712abc455b6b6) is 4.8M, max 38.7M, 33.9M free. Feb 9 19:01:18.944000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Feb 9 19:01:19.151000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 9 19:01:19.151000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 9 19:01:19.156000 audit: BPF prog-id=10 op=LOAD Feb 9 19:01:19.156000 audit: BPF prog-id=10 op=UNLOAD Feb 9 19:01:19.160000 audit: BPF prog-id=11 op=LOAD Feb 9 19:01:19.160000 audit: BPF prog-id=11 op=UNLOAD Feb 9 19:01:19.431000 audit[1331]: AVC avc: denied { associate } for pid=1331 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Feb 9 19:01:19.431000 audit[1331]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c0001058e2 a1=c00002ae40 a2=c000029100 a3=32 items=0 ppid=1314 pid=1331 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:01:19.431000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Feb 9 19:01:19.433000 audit[1331]: AVC avc: denied { associate } for pid=1331 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Feb 9 19:01:19.433000 audit[1331]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c0001059b9 a2=1ed a3=0 items=2 ppid=1314 pid=1331 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:01:19.433000 audit: CWD cwd="/" Feb 9 19:01:19.433000 audit: PATH item=0 name=(null) inode=2 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:01:19.433000 audit: PATH item=1 name=(null) inode=3 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:01:19.433000 audit: PROCTITLE 
proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Feb 9 19:01:23.743000 audit: BPF prog-id=12 op=LOAD Feb 9 19:01:23.743000 audit: BPF prog-id=3 op=UNLOAD Feb 9 19:01:23.744000 audit: BPF prog-id=13 op=LOAD Feb 9 19:01:23.745000 audit: BPF prog-id=14 op=LOAD Feb 9 19:01:23.745000 audit: BPF prog-id=4 op=UNLOAD Feb 9 19:01:23.745000 audit: BPF prog-id=5 op=UNLOAD Feb 9 19:01:23.749000 audit: BPF prog-id=15 op=LOAD Feb 9 19:01:23.749000 audit: BPF prog-id=12 op=UNLOAD Feb 9 19:01:23.751000 audit: BPF prog-id=16 op=LOAD Feb 9 19:01:23.754000 audit: BPF prog-id=17 op=LOAD Feb 9 19:01:23.754000 audit: BPF prog-id=13 op=UNLOAD Feb 9 19:01:23.754000 audit: BPF prog-id=14 op=UNLOAD Feb 9 19:01:23.756000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:23.762000 audit: BPF prog-id=15 op=UNLOAD Feb 9 19:01:23.765000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:23.768000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:23.772000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:23.772000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:23.969000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:23.978000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:23.981000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:23.981000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:23.985000 audit: BPF prog-id=18 op=LOAD Feb 9 19:01:23.985000 audit: BPF prog-id=19 op=LOAD Feb 9 19:01:23.985000 audit: BPF prog-id=20 op=LOAD Feb 9 19:01:23.985000 audit: BPF prog-id=16 op=UNLOAD Feb 9 19:01:23.985000 audit: BPF prog-id=17 op=UNLOAD Feb 9 19:01:24.022000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Feb 9 19:01:24.053000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:24.059000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:24.059000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:24.064000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:24.065000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:24.073000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:24.073000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:24.077000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:24.078000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:24.082000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:24.082000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:24.083000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Feb 9 19:01:24.083000 audit[1406]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7fffd004ed50 a2=4000 a3=7fffd004edec items=0 ppid=1 pid=1406 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:01:24.083000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Feb 9 19:01:24.089000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:01:24.093000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:19.427009 /usr/lib/systemd/system-generators/torcx-generator[1331]: time="2024-02-09T19:01:19Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 19:01:23.742483 systemd[1]: Queued start job for default target multi-user.target. Feb 9 19:01:24.107599 systemd[1]: Finished systemd-remount-fs.service. Feb 9 19:01:24.107646 systemd[1]: Started systemd-journald.service. Feb 9 19:01:24.102000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:19.430753 /usr/lib/systemd/system-generators/torcx-generator[1331]: time="2024-02-09T19:01:19Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Feb 9 19:01:23.757197 systemd[1]: systemd-journald.service: Deactivated successfully. Feb 9 19:01:19.430780 /usr/lib/systemd/system-generators/torcx-generator[1331]: time="2024-02-09T19:01:19Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Feb 9 19:01:19.430813 /usr/lib/systemd/system-generators/torcx-generator[1331]: time="2024-02-09T19:01:19Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Feb 9 19:01:19.430824 /usr/lib/systemd/system-generators/torcx-generator[1331]: time="2024-02-09T19:01:19Z" level=debug msg="skipped missing lower profile" missing profile=oem Feb 9 19:01:19.430862 /usr/lib/systemd/system-generators/torcx-generator[1331]: time="2024-02-09T19:01:19Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Feb 9 19:01:19.430876 /usr/lib/systemd/system-generators/torcx-generator[1331]: time="2024-02-09T19:01:19Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Feb 9 19:01:19.431067 /usr/lib/systemd/system-generators/torcx-generator[1331]: time="2024-02-09T19:01:19Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Feb 9 19:01:19.431107 /usr/lib/systemd/system-generators/torcx-generator[1331]: time="2024-02-09T19:01:19Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Feb 9 19:01:19.431120 /usr/lib/systemd/system-generators/torcx-generator[1331]: time="2024-02-09T19:01:19Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Feb 9 19:01:19.431760 /usr/lib/systemd/system-generators/torcx-generator[1331]: time="2024-02-09T19:01:19Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Feb 9 19:01:19.431800 /usr/lib/systemd/system-generators/torcx-generator[1331]: time="2024-02-09T19:01:19Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Feb 9 19:01:19.431819 
/usr/lib/systemd/system-generators/torcx-generator[1331]: time="2024-02-09T19:01:19Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.2: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.2 Feb 9 19:01:19.431833 /usr/lib/systemd/system-generators/torcx-generator[1331]: time="2024-02-09T19:01:19Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Feb 9 19:01:19.431849 /usr/lib/systemd/system-generators/torcx-generator[1331]: time="2024-02-09T19:01:19Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.2: no such file or directory" path=/var/lib/torcx/store/3510.3.2 Feb 9 19:01:19.431863 /usr/lib/systemd/system-generators/torcx-generator[1331]: time="2024-02-09T19:01:19Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Feb 9 19:01:23.187159 /usr/lib/systemd/system-generators/torcx-generator[1331]: time="2024-02-09T19:01:23Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 9 19:01:23.187424 /usr/lib/systemd/system-generators/torcx-generator[1331]: time="2024-02-09T19:01:23Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 9 19:01:24.108000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:24.111013 systemd[1]: Reached target network-pre.target. Feb 9 19:01:23.187576 /usr/lib/systemd/system-generators/torcx-generator[1331]: time="2024-02-09T19:01:23Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 9 19:01:24.117859 kernel: fuse: init (API version 7.34) Feb 9 19:01:23.188052 /usr/lib/systemd/system-generators/torcx-generator[1331]: time="2024-02-09T19:01:23Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 9 19:01:24.113861 systemd[1]: Mounting sys-kernel-config.mount... Feb 9 19:01:23.188115 /usr/lib/systemd/system-generators/torcx-generator[1331]: time="2024-02-09T19:01:23Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Feb 9 19:01:24.114889 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 9 19:01:23.188185 /usr/lib/systemd/system-generators/torcx-generator[1331]: time="2024-02-09T19:01:23Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Feb 9 19:01:24.118982 systemd[1]: Starting systemd-hwdb-update.service... 
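Annotation: the interleaved torcx-generator lines above end with the vendor profile applied: the docker image unpacked to /run/torcx/unpack, its binaries propagated into /run/torcx/bin, and the sealed state written to /run/metadata/torcx. Since the sealed file is plain KEY="value" pairs (as logged in the "system state sealed" entry), it can be sourced from a shell on the booted host to confirm what torcx applied; a sketch:

    . /run/metadata/torcx                 # exports TORCX_PROFILE_PATH, TORCX_BINDIR, ...
    echo "$TORCX_LOWER_PROFILES -> $TORCX_PROFILE_PATH"
    ls /run/torcx/bin                     # docker, dockerd, containerd, runc, ...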
Feb 9 19:01:24.127055 systemd[1]: Starting systemd-journal-flush.service... Feb 9 19:01:24.128416 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 9 19:01:24.130449 systemd[1]: Starting systemd-random-seed.service... Feb 9 19:01:24.131599 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Feb 9 19:01:24.133078 systemd[1]: Starting systemd-sysctl.service... Feb 9 19:01:24.136176 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 9 19:01:24.136487 systemd[1]: Finished modprobe@fuse.service. Feb 9 19:01:24.136000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:24.136000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:24.138447 systemd[1]: Mounted sys-kernel-config.mount. Feb 9 19:01:24.141237 systemd[1]: Mounting sys-fs-fuse-connections.mount... Feb 9 19:01:24.150000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:24.153091 systemd-journald[1406]: Time spent on flushing to /var/log/journal/ec2478105797fc2f35e712abc455b6b6 is 52.484ms for 1235 entries. Feb 9 19:01:24.153091 systemd-journald[1406]: System Journal (/var/log/journal/ec2478105797fc2f35e712abc455b6b6) is 8.0M, max 195.6M, 187.6M free. Feb 9 19:01:24.218090 systemd-journald[1406]: Received client request to flush runtime journal. Feb 9 19:01:24.209000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:24.147188 systemd[1]: Mounted sys-fs-fuse-connections.mount. Feb 9 19:01:24.150775 systemd[1]: Finished systemd-random-seed.service. Feb 9 19:01:24.152244 systemd[1]: Reached target first-boot-complete.target. Feb 9 19:01:24.210019 systemd[1]: Finished systemd-sysctl.service. Feb 9 19:01:24.219711 systemd[1]: Finished systemd-journal-flush.service. Feb 9 19:01:24.220000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:24.233454 systemd[1]: Finished flatcar-tmpfiles.service. Feb 9 19:01:24.233000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:24.237108 systemd[1]: Starting systemd-sysusers.service... Feb 9 19:01:24.249656 systemd[1]: Finished systemd-udev-trigger.service. Feb 9 19:01:24.249000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:24.253009 systemd[1]: Starting systemd-udev-settle.service... 
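Annotation: the journald lines above account for the flush from the 4.8M runtime journal (/run/log/journal) into the 8.0M persistent one (/var/log/journal): 52.484ms for 1235 entries. The same accounting can be re-checked at any time with stock journalctl switches (sizes will differ from what is logged here; a sketch):

    journalctl --disk-usage                           # combined size of active and archived journals
    journalctl --header -D /var/log/journal | head    # header of the persistent journal files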
Feb 9 19:01:24.271062 udevadm[1446]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Feb 9 19:01:24.458683 systemd[1]: Finished systemd-sysusers.service. Feb 9 19:01:24.458000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:24.461618 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Feb 9 19:01:24.566018 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Feb 9 19:01:24.566000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:24.981375 systemd[1]: Finished systemd-hwdb-update.service. Feb 9 19:01:24.981000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:24.981000 audit: BPF prog-id=21 op=LOAD Feb 9 19:01:24.982000 audit: BPF prog-id=22 op=LOAD Feb 9 19:01:24.982000 audit: BPF prog-id=7 op=UNLOAD Feb 9 19:01:24.982000 audit: BPF prog-id=8 op=UNLOAD Feb 9 19:01:24.984310 systemd[1]: Starting systemd-udevd.service... Feb 9 19:01:25.006525 systemd-udevd[1449]: Using default interface naming scheme 'v252'. Feb 9 19:01:25.070178 systemd[1]: Started systemd-udevd.service. Feb 9 19:01:25.070000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:25.071000 audit: BPF prog-id=23 op=LOAD Feb 9 19:01:25.074150 systemd[1]: Starting systemd-networkd.service... Feb 9 19:01:25.097000 audit: BPF prog-id=24 op=LOAD Feb 9 19:01:25.098000 audit: BPF prog-id=25 op=LOAD Feb 9 19:01:25.098000 audit: BPF prog-id=26 op=LOAD Feb 9 19:01:25.101066 systemd[1]: Starting systemd-userdbd.service... Feb 9 19:01:25.162000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:25.162211 systemd[1]: Started systemd-userdbd.service. Feb 9 19:01:25.169279 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped. Feb 9 19:01:25.234679 (udev-worker)[1465]: Network interface NamePolicy= disabled on kernel command line. Feb 9 19:01:25.292125 systemd-networkd[1455]: lo: Link UP Feb 9 19:01:25.293416 systemd-networkd[1455]: lo: Gained carrier Feb 9 19:01:25.294895 systemd-networkd[1455]: Enumeration completed Feb 9 19:01:25.295288 systemd[1]: Started systemd-networkd.service. Feb 9 19:01:25.295000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:25.300015 systemd[1]: Starting systemd-networkd-wait-online.service... Feb 9 19:01:25.303041 systemd-networkd[1455]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
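Annotation: eth0 has no dedicated .network file, so it falls through to the zz-default.network catch-all shipped in the image, which on Flatcar amounts to DHCP on any otherwise unmatched interface. Inspecting the match and the resulting lease is straightforward (the commented file body below is illustrative, not read from this image):

    networkctl status eth0                            # link state, DHCP address, gateway
    cat /usr/lib/systemd/network/zz-default.network   # roughly: [Match] Name=* then [Network] DHCP=yes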
Feb 9 19:01:25.313378 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 9 19:01:25.313522 systemd-networkd[1455]: eth0: Link UP Feb 9 19:01:25.313834 systemd-networkd[1455]: eth0: Gained carrier Feb 9 19:01:25.324530 systemd-networkd[1455]: eth0: DHCPv4 address 172.31.31.36/20, gateway 172.31.16.1 acquired from 172.31.16.1 Feb 9 19:01:25.340000 audit[1452]: AVC avc: denied { confidentiality } for pid=1452 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Feb 9 19:01:25.360377 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Feb 9 19:01:25.340000 audit[1452]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=5643a4788ad0 a1=32194 a2=7fbb36165bc5 a3=5 items=108 ppid=1449 pid=1452 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:01:25.340000 audit: CWD cwd="/" Feb 9 19:01:25.340000 audit: PATH item=0 name=(null) inode=1042 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:01:25.340000 audit: PATH item=1 name=(null) inode=13923 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:01:25.340000 audit: PATH item=2 name=(null) inode=13923 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:01:25.340000 audit: PATH item=3 name=(null) inode=13924 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:01:25.340000 audit: PATH item=4 name=(null) inode=13923 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:01:25.340000 audit: PATH item=5 name=(null) inode=13925 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:01:25.340000 audit: PATH item=6 name=(null) inode=13923 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:01:25.340000 audit: PATH item=7 name=(null) inode=13926 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:01:25.340000 audit: PATH item=8 name=(null) inode=13926 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:01:25.340000 audit: PATH item=9 name=(null) inode=13927 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:01:25.340000 audit: PATH item=10 name=(null) inode=13926 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:01:25.340000 audit: PATH item=11 name=(null) inode=13928 dev=00:0b mode=0100440 ouid=0 ogid=0 
rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:01:25.340000 audit: PATH item=12 name=(null) inode=13926 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:01:25.340000 audit: PATH item=13 name=(null) inode=13929 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:01:25.340000 audit: PATH item=14 name=(null) inode=13926 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:01:25.340000 audit: PATH item=15 name=(null) inode=13930 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:01:25.340000 audit: PATH item=16 name=(null) inode=13926 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:01:25.340000 audit: PATH item=17 name=(null) inode=13931 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:01:25.340000 audit: PATH item=18 name=(null) inode=13923 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:01:25.340000 audit: PATH item=19 name=(null) inode=13932 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:01:25.340000 audit: PATH item=20 name=(null) inode=13932 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:01:25.340000 audit: PATH item=21 name=(null) inode=13933 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:01:25.340000 audit: PATH item=22 name=(null) inode=13932 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:01:25.340000 audit: PATH item=23 name=(null) inode=13934 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:01:25.340000 audit: PATH item=24 name=(null) inode=13932 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:01:25.340000 audit: PATH item=25 name=(null) inode=13935 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:01:25.340000 audit: PATH item=26 name=(null) inode=13932 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:01:25.340000 audit: PATH item=27 name=(null) inode=13936 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 
cap_fver=0 cap_frootid=0 Feb 9 19:01:25.340000 audit: PATH item=28 name=(null) inode=13932 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:01:25.340000 audit: PATH item=29 name=(null) inode=13937 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:01:25.340000 audit: PATH item=30 name=(null) inode=13923 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:01:25.340000 audit: PATH item=31 name=(null) inode=13938 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:01:25.340000 audit: PATH item=32 name=(null) inode=13938 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:01:25.340000 audit: PATH item=33 name=(null) inode=13939 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:01:25.340000 audit: PATH item=34 name=(null) inode=13938 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:01:25.340000 audit: PATH item=35 name=(null) inode=13940 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:01:25.340000 audit: PATH item=36 name=(null) inode=13938 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:01:25.340000 audit: PATH item=37 name=(null) inode=13941 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:01:25.340000 audit: PATH item=38 name=(null) inode=13938 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:01:25.340000 audit: PATH item=39 name=(null) inode=13942 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:01:25.340000 audit: PATH item=40 name=(null) inode=13938 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:01:25.340000 audit: PATH item=41 name=(null) inode=13943 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:01:25.340000 audit: PATH item=42 name=(null) inode=13923 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:01:25.340000 audit: PATH item=43 name=(null) inode=13944 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:01:25.340000 audit: PATH item=44 name=(null) inode=13944 
dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:01:25.340000 audit: PATH item=45 name=(null) inode=13945 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:01:25.340000 audit: PATH item=46 name=(null) inode=13944 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:01:25.340000 audit: PATH item=47 name=(null) inode=13946 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:01:25.340000 audit: PATH item=48 name=(null) inode=13944 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:01:25.340000 audit: PATH item=49 name=(null) inode=13947 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:01:25.340000 audit: PATH item=50 name=(null) inode=13944 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:01:25.340000 audit: PATH item=51 name=(null) inode=13948 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:01:25.340000 audit: PATH item=52 name=(null) inode=13944 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:01:25.340000 audit: PATH item=53 name=(null) inode=13949 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:01:25.340000 audit: PATH item=54 name=(null) inode=1042 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:01:25.340000 audit: PATH item=55 name=(null) inode=13950 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:01:25.340000 audit: PATH item=56 name=(null) inode=13950 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:01:25.340000 audit: PATH item=57 name=(null) inode=13951 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:01:25.340000 audit: PATH item=58 name=(null) inode=13950 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:01:25.340000 audit: PATH item=59 name=(null) inode=13952 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:01:25.340000 audit: PATH item=60 name=(null) inode=13950 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:01:25.340000 audit: PATH item=61 name=(null) inode=13953 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:01:25.340000 audit: PATH item=62 name=(null) inode=13953 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:01:25.340000 audit: PATH item=63 name=(null) inode=13954 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:01:25.340000 audit: PATH item=64 name=(null) inode=13953 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:01:25.340000 audit: PATH item=65 name=(null) inode=13955 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:01:25.340000 audit: PATH item=66 name=(null) inode=13953 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:01:25.340000 audit: PATH item=67 name=(null) inode=13956 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:01:25.340000 audit: PATH item=68 name=(null) inode=13953 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:01:25.340000 audit: PATH item=69 name=(null) inode=13957 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:01:25.340000 audit: PATH item=70 name=(null) inode=13953 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:01:25.340000 audit: PATH item=71 name=(null) inode=13958 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:01:25.340000 audit: PATH item=72 name=(null) inode=13950 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:01:25.340000 audit: PATH item=73 name=(null) inode=13959 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:01:25.340000 audit: PATH item=74 name=(null) inode=13959 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:01:25.340000 audit: PATH item=75 name=(null) inode=13960 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:01:25.340000 audit: PATH item=76 name=(null) inode=13959 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:01:25.340000 
audit: PATH item=77 name=(null) inode=13961 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:01:25.340000 audit: PATH item=78 name=(null) inode=13959 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:01:25.340000 audit: PATH item=79 name=(null) inode=13962 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:01:25.340000 audit: PATH item=80 name=(null) inode=13959 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:01:25.340000 audit: PATH item=81 name=(null) inode=13963 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:01:25.340000 audit: PATH item=82 name=(null) inode=13959 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:01:25.340000 audit: PATH item=83 name=(null) inode=13964 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:01:25.340000 audit: PATH item=84 name=(null) inode=13950 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:01:25.340000 audit: PATH item=85 name=(null) inode=13965 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:01:25.340000 audit: PATH item=86 name=(null) inode=13965 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:01:25.340000 audit: PATH item=87 name=(null) inode=13966 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:01:25.340000 audit: PATH item=88 name=(null) inode=13965 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:01:25.340000 audit: PATH item=89 name=(null) inode=13967 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:01:25.340000 audit: PATH item=90 name=(null) inode=13965 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:01:25.340000 audit: PATH item=91 name=(null) inode=13968 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:01:25.340000 audit: PATH item=92 name=(null) inode=13965 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:01:25.340000 audit: PATH item=93 name=(null) inode=13969 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:01:25.340000 audit: PATH item=94 name=(null) inode=13965 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:01:25.340000 audit: PATH item=95 name=(null) inode=13970 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:01:25.340000 audit: PATH item=96 name=(null) inode=13950 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:01:25.340000 audit: PATH item=97 name=(null) inode=13971 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:01:25.340000 audit: PATH item=98 name=(null) inode=13971 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:01:25.340000 audit: PATH item=99 name=(null) inode=13972 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:01:25.340000 audit: PATH item=100 name=(null) inode=13971 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:01:25.340000 audit: PATH item=101 name=(null) inode=13973 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:01:25.340000 audit: PATH item=102 name=(null) inode=13971 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:01:25.340000 audit: PATH item=103 name=(null) inode=13974 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:01:25.340000 audit: PATH item=104 name=(null) inode=13971 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:01:25.340000 audit: PATH item=105 name=(null) inode=13975 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:01:25.340000 audit: PATH item=106 name=(null) inode=13971 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:01:25.340000 audit: PATH item=107 name=(null) inode=13976 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:01:25.340000 audit: PROCTITLE proctitle="(udev-worker)" Feb 9 19:01:25.370376 kernel: ACPI: button: Power Button [PWRF] Feb 9 19:01:25.379454 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input3 Feb 9 19:01:25.386181 kernel: ACPI: button: Sleep Button [SLPF] Feb 9 19:01:25.410412 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0xb100, revision 255 Feb 9 
19:01:25.419377 kernel: BTRFS info: devid 1 device path /dev/disk/by-label/OEM changed to /dev/nvme0n1p6 scanned by (udev-worker) (1459) Feb 9 19:01:25.423372 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input4 Feb 9 19:01:25.435416 kernel: mousedev: PS/2 mouse device common for all mice Feb 9 19:01:25.601855 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Feb 9 19:01:25.712142 systemd[1]: Finished systemd-udev-settle.service. Feb 9 19:01:25.712000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:25.715138 systemd[1]: Starting lvm2-activation-early.service... Feb 9 19:01:25.795925 lvm[1563]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 9 19:01:25.827521 systemd[1]: Finished lvm2-activation-early.service. Feb 9 19:01:25.827000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:25.829305 systemd[1]: Reached target cryptsetup.target. Feb 9 19:01:25.834966 systemd[1]: Starting lvm2-activation.service... Feb 9 19:01:25.844836 lvm[1564]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 9 19:01:25.889079 systemd[1]: Finished lvm2-activation.service. Feb 9 19:01:25.890000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:25.891593 systemd[1]: Reached target local-fs-pre.target. Feb 9 19:01:25.893092 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 9 19:01:25.893128 systemd[1]: Reached target local-fs.target. Feb 9 19:01:25.894161 systemd[1]: Reached target machines.target. Feb 9 19:01:25.896685 systemd[1]: Starting ldconfig.service... Feb 9 19:01:25.898219 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Feb 9 19:01:25.898304 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 9 19:01:25.899555 systemd[1]: Starting systemd-boot-update.service... Feb 9 19:01:25.902228 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Feb 9 19:01:25.905060 systemd[1]: Starting systemd-machine-id-commit.service... Feb 9 19:01:25.906373 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met. Feb 9 19:01:25.906469 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met. Feb 9 19:01:25.907897 systemd[1]: Starting systemd-tmpfiles-setup.service... Feb 9 19:01:25.927124 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1566 (bootctl) Feb 9 19:01:25.929733 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Feb 9 19:01:25.942456 systemd-tmpfiles[1569]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Feb 9 19:01:25.945768 systemd-tmpfiles[1569]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. 
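Annotation: the "Duplicate line ... ignoring" warnings from systemd-tmpfiles are benign. Snippets from /etc/tmpfiles.d, /run/tmpfiles.d and /usr/lib/tmpfiles.d are merged, and the first line claiming a path wins; later claims are dropped with exactly this message. The merged configuration, including which entry won for a given path, can be dumped like so (a sketch using the paths from the warnings above):

    systemd-tmpfiles --cat-config | grep -n -e '/run/lock' -e '/root' -e '/var/lib/systemd'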
Feb 9 19:01:25.950116 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Feb 9 19:01:25.948000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:25.960233 systemd-tmpfiles[1569]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 9 19:01:26.158400 systemd-fsck[1575]: fsck.fat 4.2 (2021-01-31) Feb 9 19:01:26.158400 systemd-fsck[1575]: /dev/nvme0n1p1: 789 files, 115339/258078 clusters Feb 9 19:01:26.161504 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Feb 9 19:01:26.161000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:26.164971 systemd[1]: Mounting boot.mount... Feb 9 19:01:26.186443 systemd[1]: Mounted boot.mount. Feb 9 19:01:26.210635 systemd[1]: Finished systemd-boot-update.service. Feb 9 19:01:26.210000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:26.284571 systemd[1]: Finished systemd-tmpfiles-setup.service. Feb 9 19:01:26.284000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:26.294000 audit: BPF prog-id=27 op=LOAD Feb 9 19:01:26.298000 audit: BPF prog-id=28 op=LOAD Feb 9 19:01:26.287154 systemd[1]: Starting audit-rules.service... Feb 9 19:01:26.289867 systemd[1]: Starting clean-ca-certificates.service... Feb 9 19:01:26.292938 systemd[1]: Starting systemd-journal-catalog-update.service... Feb 9 19:01:26.297511 systemd[1]: Starting systemd-resolved.service... Feb 9 19:01:26.302482 systemd[1]: Starting systemd-timesyncd.service... Feb 9 19:01:26.305037 systemd[1]: Starting systemd-update-utmp.service... Feb 9 19:01:26.323087 systemd[1]: Finished clean-ca-certificates.service. Feb 9 19:01:26.323000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:26.324584 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 9 19:01:26.331000 audit[1595]: SYSTEM_BOOT pid=1595 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Feb 9 19:01:26.335718 systemd[1]: Finished systemd-update-utmp.service. Feb 9 19:01:26.335000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:26.361087 systemd[1]: Finished systemd-journal-catalog-update.service. 
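Annotation: systemd-fsck@ instantiates one service per block device and dispatches to the filesystem-specific checker, here fsck.fat 4.2 for the vfat EFI system partition on /dev/nvme0n1p1 (789 files, 115339/258078 clusters). The same check can be repeated read-only by hand, using the device name from the log:

    fsck.fat -n /dev/nvme0n1p1    # -n: no-op check, report but change nothing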
Feb 9 19:01:26.361000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:26.438099 systemd[1]: Started systemd-timesyncd.service. Feb 9 19:01:26.437000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:01:26.439420 systemd[1]: Reached target time-set.target. Feb 9 19:01:26.452782 systemd-resolved[1592]: Positive Trust Anchors: Feb 9 19:01:26.453171 systemd-resolved[1592]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 9 19:01:26.453268 systemd-resolved[1592]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Feb 9 19:01:26.465000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Feb 9 19:01:26.465000 audit[1610]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffecbd17e30 a2=420 a3=0 items=0 ppid=1589 pid=1610 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:01:26.465000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Feb 9 19:01:26.468410 systemd[1]: Finished audit-rules.service. Feb 9 19:01:26.468879 augenrules[1610]: No rules Feb 9 19:01:26.488869 systemd-resolved[1592]: Defaulting to hostname 'linux'. Feb 9 19:01:26.492179 systemd[1]: Started systemd-resolved.service. Feb 9 19:01:26.494295 systemd[1]: Reached target network.target. Feb 9 19:01:26.496084 systemd[1]: Reached target nss-lookup.target. Feb 9 19:01:26.523059 systemd-timesyncd[1593]: Contacted time server 147.182.158.78:123 (0.flatcar.pool.ntp.org). Feb 9 19:01:26.523524 systemd-timesyncd[1593]: Initial clock synchronization to Fri 2024-02-09 19:01:26.710853 UTC. Feb 9 19:01:26.928630 systemd-networkd[1455]: eth0: Gained IPv6LL Feb 9 19:01:26.932041 systemd[1]: Finished systemd-networkd-wait-online.service. Feb 9 19:01:26.934431 systemd[1]: Reached target network-online.target. Feb 9 19:01:27.212769 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 9 19:01:27.213643 systemd[1]: Finished systemd-machine-id-commit.service. Feb 9 19:01:27.465292 ldconfig[1565]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 9 19:01:27.482549 systemd[1]: Finished ldconfig.service. Feb 9 19:01:27.485206 systemd[1]: Starting systemd-update-done.service... Feb 9 19:01:27.505963 systemd[1]: Finished systemd-update-done.service. Feb 9 19:01:27.507526 systemd[1]: Reached target sysinit.target. Feb 9 19:01:27.508693 systemd[1]: Started motdgen.path. Feb 9 19:01:27.509907 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. 
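Annotation: unit names like user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path encode a filesystem path in systemd's escaping scheme: "/" turns into "-" and a literal "-" into "\x2d". The mapping is reversible with systemd-escape:

    systemd-escape -p /var/lib/flatcar-install/user_data
    # -> var-lib-flatcar\x2dinstall-user_data
    systemd-escape -u -p 'var-lib-flatcar\x2dinstall-user_data'
    # -> /var/lib/flatcar-install/user_data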
Feb 9 19:01:27.512462 systemd[1]: Started logrotate.timer. Feb 9 19:01:27.514089 systemd[1]: Started mdadm.timer. Feb 9 19:01:27.515043 systemd[1]: Started systemd-tmpfiles-clean.timer. Feb 9 19:01:27.516494 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 9 19:01:27.516534 systemd[1]: Reached target paths.target. Feb 9 19:01:27.518565 systemd[1]: Reached target timers.target. Feb 9 19:01:27.520787 systemd[1]: Listening on dbus.socket. Feb 9 19:01:27.524615 systemd[1]: Starting docker.socket... Feb 9 19:01:27.529187 systemd[1]: Listening on sshd.socket. Feb 9 19:01:27.530423 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 9 19:01:27.530911 systemd[1]: Listening on docker.socket. Feb 9 19:01:27.532043 systemd[1]: Reached target sockets.target. Feb 9 19:01:27.533194 systemd[1]: Reached target basic.target. Feb 9 19:01:27.537966 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 9 19:01:27.538008 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 9 19:01:27.539693 systemd[1]: Started amazon-ssm-agent.service. Feb 9 19:01:27.542785 systemd[1]: Starting containerd.service... Feb 9 19:01:27.548903 systemd[1]: Starting coreos-metadata-sshkeys@core.service... Feb 9 19:01:27.552197 systemd[1]: Starting dbus.service... Feb 9 19:01:27.554965 systemd[1]: Starting enable-oem-cloudinit.service... Feb 9 19:01:27.561119 systemd[1]: Starting extend-filesystems.service... Feb 9 19:01:27.562442 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Feb 9 19:01:27.564939 systemd[1]: Starting motdgen.service... Feb 9 19:01:27.569918 systemd[1]: Started nvidia.service. Feb 9 19:01:27.575047 systemd[1]: Starting prepare-cni-plugins.service... Feb 9 19:01:27.582318 systemd[1]: Starting prepare-critools.service... Feb 9 19:01:27.588289 systemd[1]: Starting prepare-helm.service... Feb 9 19:01:27.591925 systemd[1]: Starting ssh-key-proc-cmdline.service... Feb 9 19:01:27.596760 systemd[1]: Starting sshd-keygen.service... Feb 9 19:01:27.608845 systemd[1]: Starting systemd-logind.service... Feb 9 19:01:27.610269 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 9 19:01:27.610352 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 9 19:01:27.611444 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 9 19:01:27.613938 systemd[1]: Starting update-engine.service... Feb 9 19:01:27.666459 jq[1625]: false Feb 9 19:01:27.619353 systemd[1]: Starting update-ssh-keys-after-ignition.service... Feb 9 19:01:27.639805 systemd[1]: Created slice system-sshd.slice. Feb 9 19:01:27.672824 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 9 19:01:27.673134 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. 
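Annotation: note the recurring pattern in this stretch: units such as tcsd.service (ConditionPathExists=/dev/tpm0) and the pcrphase services are skipped on condition checks, not failed; a skip leaves the unit inactive with a recorded verdict. That verdict can be queried without starting anything, and newer systemd can evaluate a condition expression ad hoc (the analyze verb needs systemd >= 245, an assumption about this build):

    systemctl show -p ConditionResult -p ConditionTimestamp tcsd.service
    systemd-analyze condition 'ConditionPathExists=/dev/tpm0'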
Feb 9 19:01:27.693540 jq[1636]: true Feb 9 19:01:27.700963 tar[1638]: crictl Feb 9 19:01:27.724724 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 9 19:01:27.724963 systemd[1]: Finished ssh-key-proc-cmdline.service. Feb 9 19:01:27.751390 tar[1643]: ./ Feb 9 19:01:27.751390 tar[1643]: ./macvlan Feb 9 19:01:27.770409 tar[1640]: linux-amd64/helm Feb 9 19:01:27.824442 extend-filesystems[1626]: Found nvme0n1 Feb 9 19:01:27.840651 dbus-daemon[1624]: [system] SELinux support is enabled Feb 9 19:01:27.840856 systemd[1]: Started dbus.service. Feb 9 19:01:27.845542 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 9 19:01:27.845577 systemd[1]: Reached target system-config.target. Feb 9 19:01:27.847164 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 9 19:01:27.847190 systemd[1]: Reached target user-config.target. Feb 9 19:01:27.849945 dbus-daemon[1624]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1455 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Feb 9 19:01:27.850291 jq[1649]: true Feb 9 19:01:27.850882 extend-filesystems[1626]: Found nvme0n1p1 Feb 9 19:01:27.852526 extend-filesystems[1626]: Found nvme0n1p2 Feb 9 19:01:27.869594 extend-filesystems[1626]: Found nvme0n1p3 Feb 9 19:01:27.882994 extend-filesystems[1626]: Found usr Feb 9 19:01:27.883002 dbus-daemon[1624]: [system] Successfully activated service 'org.freedesktop.systemd1' Feb 9 19:01:27.889396 systemd[1]: Starting systemd-hostnamed.service... Feb 9 19:01:27.896921 systemd[1]: motdgen.service: Deactivated successfully. Feb 9 19:01:27.897208 systemd[1]: Finished motdgen.service. Feb 9 19:01:27.904903 extend-filesystems[1626]: Found nvme0n1p4 Feb 9 19:01:27.907212 extend-filesystems[1626]: Found nvme0n1p6 Feb 9 19:01:27.907212 extend-filesystems[1626]: Found nvme0n1p7 Feb 9 19:01:27.907212 extend-filesystems[1626]: Found nvme0n1p9 Feb 9 19:01:27.907212 extend-filesystems[1626]: Checking size of /dev/nvme0n1p9 Feb 9 19:01:28.017508 update_engine[1635]: I0209 19:01:28.016887 1635 main.cc:92] Flatcar Update Engine starting Feb 9 19:01:28.025021 amazon-ssm-agent[1621]: 2024/02/09 19:01:28 Failed to load instance info from vault. RegistrationKey does not exist. Feb 9 19:01:28.030743 systemd[1]: Started update-engine.service. Feb 9 19:01:28.032067 update_engine[1635]: I0209 19:01:28.031005 1635 update_check_scheduler.cc:74] Next update check in 8m1s Feb 9 19:01:28.035452 systemd[1]: Started locksmithd.service. Feb 9 19:01:28.073830 extend-filesystems[1626]: Resized partition /dev/nvme0n1p9 Feb 9 19:01:28.075861 amazon-ssm-agent[1621]: Initializing new seelog logger Feb 9 19:01:28.079773 amazon-ssm-agent[1621]: New Seelog Logger Creation Complete Feb 9 19:01:28.082320 amazon-ssm-agent[1621]: 2024/02/09 19:01:28 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Feb 9 19:01:28.082501 amazon-ssm-agent[1621]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
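extend-filesystems then grows /dev/nvme0n1p9; the resize2fs output just below reports 553472 -> 1489915 blocks. With the 4 KiB block size the kernel logs, that works out as follows (a quick check, not from the log):

```python
# EXT4 on-line resize reported below: 553472 -> 1489915 blocks of 4 KiB.
BLOCK = 4096
before, after = 553_472, 1_489_915
to_gib = lambda blocks: blocks * BLOCK / 2**30
print(f"{to_gib(before):.2f} GiB -> {to_gib(after):.2f} GiB "
      f"(+{to_gib(after - before):.2f} GiB)")
# -> 2.11 GiB -> 5.68 GiB (+3.57 GiB)
```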
Feb 9 19:01:28.082851 amazon-ssm-agent[1621]: 2024/02/09 19:01:28 processing appconfig overrides Feb 9 19:01:28.084679 extend-filesystems[1691]: resize2fs 1.46.5 (30-Dec-2021) Feb 9 19:01:28.090391 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Feb 9 19:01:28.150394 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Feb 9 19:01:28.193049 extend-filesystems[1691]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Feb 9 19:01:28.193049 extend-filesystems[1691]: old_desc_blocks = 1, new_desc_blocks = 1 Feb 9 19:01:28.193049 extend-filesystems[1691]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Feb 9 19:01:28.191158 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 9 19:01:28.198987 extend-filesystems[1626]: Resized filesystem in /dev/nvme0n1p9 Feb 9 19:01:28.191419 systemd[1]: Finished extend-filesystems.service. Feb 9 19:01:28.219972 bash[1700]: Updated "/home/core/.ssh/authorized_keys" Feb 9 19:01:28.221105 systemd[1]: Finished update-ssh-keys-after-ignition.service. Feb 9 19:01:28.230889 tar[1643]: ./static Feb 9 19:01:28.236775 env[1647]: time="2024-02-09T19:01:28.236698357Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Feb 9 19:01:28.252656 systemd-logind[1634]: Watching system buttons on /dev/input/event1 (Power Button) Feb 9 19:01:28.253557 systemd[1]: nvidia.service: Deactivated successfully. Feb 9 19:01:28.269645 systemd-logind[1634]: Watching system buttons on /dev/input/event2 (Sleep Button) Feb 9 19:01:28.269936 systemd-logind[1634]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Feb 9 19:01:28.270239 systemd-logind[1634]: New seat seat0. Feb 9 19:01:28.281021 systemd[1]: Started systemd-logind.service. Feb 9 19:01:28.465128 tar[1643]: ./vlan Feb 9 19:01:28.472489 env[1647]: time="2024-02-09T19:01:28.472423785Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 9 19:01:28.472675 env[1647]: time="2024-02-09T19:01:28.472649153Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 9 19:01:28.480832 env[1647]: time="2024-02-09T19:01:28.480774406Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.148-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 9 19:01:28.491478 env[1647]: time="2024-02-09T19:01:28.491431249Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 9 19:01:28.493551 env[1647]: time="2024-02-09T19:01:28.493510391Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 9 19:01:28.494137 env[1647]: time="2024-02-09T19:01:28.494110096Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
type=io.containerd.snapshotter.v1 Feb 9 19:01:28.494564 env[1647]: time="2024-02-09T19:01:28.494536816Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Feb 9 19:01:28.494713 env[1647]: time="2024-02-09T19:01:28.494653476Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 9 19:01:28.495604 env[1647]: time="2024-02-09T19:01:28.495576589Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 9 19:01:28.497965 dbus-daemon[1624]: [system] Successfully activated service 'org.freedesktop.hostname1' Feb 9 19:01:28.498365 systemd[1]: Started systemd-hostnamed.service. Feb 9 19:01:28.499689 dbus-daemon[1624]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1670 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Feb 9 19:01:28.500446 env[1647]: time="2024-02-09T19:01:28.500414936Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 9 19:01:28.505466 systemd[1]: Starting polkit.service... Feb 9 19:01:28.506838 env[1647]: time="2024-02-09T19:01:28.506788110Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 9 19:01:28.507670 env[1647]: time="2024-02-09T19:01:28.507042572Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 9 19:01:28.507895 env[1647]: time="2024-02-09T19:01:28.507871545Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Feb 9 19:01:28.508004 env[1647]: time="2024-02-09T19:01:28.507983837Z" level=info msg="metadata content store policy set" policy=shared Feb 9 19:01:28.527852 env[1647]: time="2024-02-09T19:01:28.527747918Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 9 19:01:28.528130 env[1647]: time="2024-02-09T19:01:28.528102505Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 9 19:01:28.528259 env[1647]: time="2024-02-09T19:01:28.528241878Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 9 19:01:28.528496 env[1647]: time="2024-02-09T19:01:28.528465983Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 9 19:01:28.528574 env[1647]: time="2024-02-09T19:01:28.528505569Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 9 19:01:28.528574 env[1647]: time="2024-02-09T19:01:28.528527768Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 9 19:01:28.528574 env[1647]: time="2024-02-09T19:01:28.528548494Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 9 19:01:28.528574 env[1647]: time="2024-02-09T19:01:28.528569709Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." 
type=io.containerd.service.v1 Feb 9 19:01:28.528734 env[1647]: time="2024-02-09T19:01:28.528589130Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Feb 9 19:01:28.528734 env[1647]: time="2024-02-09T19:01:28.528612009Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 9 19:01:28.528734 env[1647]: time="2024-02-09T19:01:28.528631883Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 9 19:01:28.528734 env[1647]: time="2024-02-09T19:01:28.528651442Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 9 19:01:28.528876 env[1647]: time="2024-02-09T19:01:28.528802543Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 9 19:01:28.528926 env[1647]: time="2024-02-09T19:01:28.528903318Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 9 19:01:28.529434 env[1647]: time="2024-02-09T19:01:28.529405938Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 9 19:01:28.529511 env[1647]: time="2024-02-09T19:01:28.529453291Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 9 19:01:28.529511 env[1647]: time="2024-02-09T19:01:28.529474852Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 9 19:01:28.529593 env[1647]: time="2024-02-09T19:01:28.529554070Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 9 19:01:28.529593 env[1647]: time="2024-02-09T19:01:28.529576675Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 9 19:01:28.529681 env[1647]: time="2024-02-09T19:01:28.529658549Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 9 19:01:28.529723 env[1647]: time="2024-02-09T19:01:28.529679976Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 9 19:01:28.529723 env[1647]: time="2024-02-09T19:01:28.529700614Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 9 19:01:28.529974 env[1647]: time="2024-02-09T19:01:28.529721233Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 9 19:01:28.529974 env[1647]: time="2024-02-09T19:01:28.529739628Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 9 19:01:28.529974 env[1647]: time="2024-02-09T19:01:28.529758470Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 9 19:01:28.529974 env[1647]: time="2024-02-09T19:01:28.529922652Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 9 19:01:28.530146 env[1647]: time="2024-02-09T19:01:28.530088573Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 9 19:01:28.530146 env[1647]: time="2024-02-09T19:01:28.530110901Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." 
type=io.containerd.grpc.v1 Feb 9 19:01:28.530146 env[1647]: time="2024-02-09T19:01:28.530130802Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 9 19:01:28.530264 env[1647]: time="2024-02-09T19:01:28.530149329Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 9 19:01:28.530264 env[1647]: time="2024-02-09T19:01:28.530171934Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Feb 9 19:01:28.530264 env[1647]: time="2024-02-09T19:01:28.530193592Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 9 19:01:28.530264 env[1647]: time="2024-02-09T19:01:28.530219443Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Feb 9 19:01:28.530432 env[1647]: time="2024-02-09T19:01:28.530265354Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Feb 9 19:01:28.530709 env[1647]: time="2024-02-09T19:01:28.530633029Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 9 19:01:28.533306 env[1647]: time="2024-02-09T19:01:28.530723808Z" level=info msg="Connect containerd service" Feb 9 19:01:28.533306 env[1647]: time="2024-02-09T19:01:28.530770210Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 9 
19:01:28.533306 env[1647]: time="2024-02-09T19:01:28.532138895Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 9 19:01:28.533306 env[1647]: time="2024-02-09T19:01:28.532435542Z" level=info msg="Start subscribing containerd event" Feb 9 19:01:28.533306 env[1647]: time="2024-02-09T19:01:28.532472096Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 9 19:01:28.533306 env[1647]: time="2024-02-09T19:01:28.532507350Z" level=info msg="Start recovering state" Feb 9 19:01:28.533306 env[1647]: time="2024-02-09T19:01:28.532523386Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 9 19:01:28.533306 env[1647]: time="2024-02-09T19:01:28.532597957Z" level=info msg="Start event monitor" Feb 9 19:01:28.533306 env[1647]: time="2024-02-09T19:01:28.532613135Z" level=info msg="Start snapshots syncer" Feb 9 19:01:28.532679 systemd[1]: Started containerd.service. Feb 9 19:01:28.533801 env[1647]: time="2024-02-09T19:01:28.533780411Z" level=info msg="Start cni network conf syncer for default" Feb 9 19:01:28.533969 env[1647]: time="2024-02-09T19:01:28.533951044Z" level=info msg="Start streaming server" Feb 9 19:01:28.537014 env[1647]: time="2024-02-09T19:01:28.536978866Z" level=info msg="containerd successfully booted in 0.361666s" Feb 9 19:01:28.539367 polkitd[1721]: Started polkitd version 121 Feb 9 19:01:28.574388 polkitd[1721]: Loading rules from directory /etc/polkit-1/rules.d Feb 9 19:01:28.574755 polkitd[1721]: Loading rules from directory /usr/share/polkit-1/rules.d Feb 9 19:01:28.595697 polkitd[1721]: Finished loading, compiling and executing 2 rules Feb 9 19:01:28.596532 dbus-daemon[1624]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Feb 9 19:01:28.596728 systemd[1]: Started polkit.service. Feb 9 19:01:28.620846 polkitd[1721]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Feb 9 19:01:28.645987 systemd-hostnamed[1670]: Hostname set to (transient) Feb 9 19:01:28.646115 systemd-resolved[1592]: System hostname changed to 'ip-172-31-31-36'. Feb 9 19:01:28.731450 tar[1643]: ./portmap Feb 9 19:01:28.889116 coreos-metadata[1623]: Feb 09 19:01:28.889 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Feb 9 19:01:28.898143 tar[1643]: ./host-local Feb 9 19:01:28.898844 coreos-metadata[1623]: Feb 09 19:01:28.898 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-keys: Attempt #1 Feb 9 19:01:28.901565 coreos-metadata[1623]: Feb 09 19:01:28.901 INFO Fetch successful Feb 9 19:01:28.901656 coreos-metadata[1623]: Feb 09 19:01:28.901 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-keys/0/openssh-key: Attempt #1 Feb 9 19:01:28.904031 coreos-metadata[1623]: Feb 09 19:01:28.904 INFO Fetch successful Feb 9 19:01:28.907081 unknown[1623]: wrote ssh authorized keys file for user: core Feb 9 19:01:28.948749 update-ssh-keys[1778]: Updated "/home/core/.ssh/authorized_keys" Feb 9 19:01:28.948568 systemd[1]: Finished coreos-metadata-sshkeys@core.service. 
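The containerd CRI plugin logged above that no network config was found in /etc/cni/net.d, so CNI stays uninitialized until something installs a config there (the prepare-cni-plugins run unpacks only the plugin binaries into /opt/cni/bin). A sketch of the lookup it performs, using NetworkPluginConfDir from the config dump (with NetworkPluginMaxConfNum:1, only the first config counts):

```python
from pathlib import Path

# NetworkPluginConfDir from the CRI config dump above; the plugin looks for
# *.conf/*.conflist files and warns, as logged, when none exist yet.
conf_dir = Path("/etc/cni/net.d")
configs = sorted(p.name for p in conf_dir.glob("*.conf*")) if conf_dir.is_dir() else []
print(configs or "no network config found in /etc/cni/net.d")
```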
Feb 9 19:01:29.033861 amazon-ssm-agent[1621]: 2024-02-09 19:01:29 INFO Create new startup processor Feb 9 19:01:29.034964 amazon-ssm-agent[1621]: 2024-02-09 19:01:29 INFO [LongRunningPluginsManager] registered plugins: {} Feb 9 19:01:29.034964 amazon-ssm-agent[1621]: 2024-02-09 19:01:29 INFO Initializing bookkeeping folders Feb 9 19:01:29.035095 amazon-ssm-agent[1621]: 2024-02-09 19:01:29 INFO removing the completed state files Feb 9 19:01:29.035095 amazon-ssm-agent[1621]: 2024-02-09 19:01:29 INFO Initializing bookkeeping folders for long running plugins Feb 9 19:01:29.035095 amazon-ssm-agent[1621]: 2024-02-09 19:01:29 INFO Initializing replies folder for MDS reply requests that couldn't reach the service Feb 9 19:01:29.035095 amazon-ssm-agent[1621]: 2024-02-09 19:01:29 INFO Initializing healthcheck folders for long running plugins Feb 9 19:01:29.035095 amazon-ssm-agent[1621]: 2024-02-09 19:01:29 INFO Initializing locations for inventory plugin Feb 9 19:01:29.035095 amazon-ssm-agent[1621]: 2024-02-09 19:01:29 INFO Initializing default location for custom inventory Feb 9 19:01:29.035095 amazon-ssm-agent[1621]: 2024-02-09 19:01:29 INFO Initializing default location for file inventory Feb 9 19:01:29.035095 amazon-ssm-agent[1621]: 2024-02-09 19:01:29 INFO Initializing default location for role inventory Feb 9 19:01:29.035095 amazon-ssm-agent[1621]: 2024-02-09 19:01:29 INFO Init the cloudwatchlogs publisher Feb 9 19:01:29.035095 amazon-ssm-agent[1621]: 2024-02-09 19:01:29 INFO [instanceID=i-099c9a9f5c45d929a] Successfully loaded platform independent plugin aws:softwareInventory Feb 9 19:01:29.035095 amazon-ssm-agent[1621]: 2024-02-09 19:01:29 INFO [instanceID=i-099c9a9f5c45d929a] Successfully loaded platform independent plugin aws:runPowerShellScript Feb 9 19:01:29.035095 amazon-ssm-agent[1621]: 2024-02-09 19:01:29 INFO [instanceID=i-099c9a9f5c45d929a] Successfully loaded platform independent plugin aws:configurePackage Feb 9 19:01:29.035570 amazon-ssm-agent[1621]: 2024-02-09 19:01:29 INFO [instanceID=i-099c9a9f5c45d929a] Successfully loaded platform independent plugin aws:downloadContent Feb 9 19:01:29.035570 amazon-ssm-agent[1621]: 2024-02-09 19:01:29 INFO [instanceID=i-099c9a9f5c45d929a] Successfully loaded platform independent plugin aws:updateSsmAgent Feb 9 19:01:29.035570 amazon-ssm-agent[1621]: 2024-02-09 19:01:29 INFO [instanceID=i-099c9a9f5c45d929a] Successfully loaded platform independent plugin aws:configureDocker Feb 9 19:01:29.035570 amazon-ssm-agent[1621]: 2024-02-09 19:01:29 INFO [instanceID=i-099c9a9f5c45d929a] Successfully loaded platform independent plugin aws:runDockerAction Feb 9 19:01:29.035570 amazon-ssm-agent[1621]: 2024-02-09 19:01:29 INFO [instanceID=i-099c9a9f5c45d929a] Successfully loaded platform independent plugin aws:refreshAssociation Feb 9 19:01:29.035570 amazon-ssm-agent[1621]: 2024-02-09 19:01:29 INFO [instanceID=i-099c9a9f5c45d929a] Successfully loaded platform independent plugin aws:runDocument Feb 9 19:01:29.035570 amazon-ssm-agent[1621]: 2024-02-09 19:01:29 INFO [instanceID=i-099c9a9f5c45d929a] Successfully loaded platform dependent plugin aws:runShellScript Feb 9 19:01:29.035570 amazon-ssm-agent[1621]: 2024-02-09 19:01:29 INFO Starting Agent: amazon-ssm-agent - v2.3.1319.0 Feb 9 19:01:29.035570 amazon-ssm-agent[1621]: 2024-02-09 19:01:29 INFO OS: linux, Arch: amd64 Feb 9 19:01:29.036633 amazon-ssm-agent[1621]: 2024-02-09 19:01:29 INFO [MessagingDeliveryService] Starting document processing engine... 
Feb 9 19:01:29.036741 amazon-ssm-agent[1621]: datastore file /var/lib/amazon/ssm/i-099c9a9f5c45d929a/longrunningplugins/datastore/store doesn't exist - no long running plugins to execute Feb 9 19:01:29.044668 tar[1643]: ./vrf Feb 9 19:01:29.134626 amazon-ssm-agent[1621]: 2024-02-09 19:01:29 INFO [MessagingDeliveryService] [EngineProcessor] Starting Feb 9 19:01:29.202963 tar[1643]: ./bridge Feb 9 19:01:29.229722 amazon-ssm-agent[1621]: 2024-02-09 19:01:29 INFO [MessagingDeliveryService] [EngineProcessor] Initial processing Feb 9 19:01:29.324243 amazon-ssm-agent[1621]: 2024-02-09 19:01:29 INFO [MessagingDeliveryService] Starting message polling Feb 9 19:01:29.333986 tar[1643]: ./tuning Feb 9 19:01:29.418794 tar[1643]: ./firewall Feb 9 19:01:29.419235 amazon-ssm-agent[1621]: 2024-02-09 19:01:29 INFO [MessagingDeliveryService] Starting send replies to MDS Feb 9 19:01:29.514111 amazon-ssm-agent[1621]: 2024-02-09 19:01:29 INFO [instanceID=i-099c9a9f5c45d929a] Starting association polling Feb 9 19:01:29.533819 tar[1643]: ./host-device Feb 9 19:01:29.609144 amazon-ssm-agent[1621]: 2024-02-09 19:01:29 INFO [MessagingDeliveryService] [Association] [EngineProcessor] Starting Feb 9 19:01:29.621013 tar[1643]: ./sbr Feb 9 19:01:29.703298 tar[1643]: ./loopback Feb 9 19:01:29.704454 amazon-ssm-agent[1621]: 2024-02-09 19:01:29 INFO [MessagingDeliveryService] [Association] Launching response handler Feb 9 19:01:29.787447 tar[1643]: ./dhcp Feb 9 19:01:29.799959 amazon-ssm-agent[1621]: 2024-02-09 19:01:29 INFO [MessagingDeliveryService] [Association] [EngineProcessor] Initial processing Feb 9 19:01:29.896296 amazon-ssm-agent[1621]: 2024-02-09 19:01:29 INFO [MessagingDeliveryService] [Association] Initializing association scheduling service Feb 9 19:01:29.920931 tar[1640]: linux-amd64/LICENSE Feb 9 19:01:29.923705 tar[1640]: linux-amd64/README.md Feb 9 19:01:29.953718 systemd[1]: Finished prepare-helm.service. Feb 9 19:01:29.992438 amazon-ssm-agent[1621]: 2024-02-09 19:01:29 INFO [MessagingDeliveryService] [Association] Association scheduling service initialized Feb 9 19:01:30.073981 tar[1643]: ./ptp Feb 9 19:01:30.088584 amazon-ssm-agent[1621]: 2024-02-09 19:01:29 INFO [MessageGatewayService] Starting session document processing engine... Feb 9 19:01:30.144133 systemd[1]: Finished prepare-critools.service. Feb 9 19:01:30.161209 tar[1643]: ./ipvlan Feb 9 19:01:30.184872 amazon-ssm-agent[1621]: 2024-02-09 19:01:29 INFO [MessageGatewayService] [EngineProcessor] Starting Feb 9 19:01:30.216310 tar[1643]: ./bandwidth Feb 9 19:01:30.284162 amazon-ssm-agent[1621]: 2024-02-09 19:01:29 INFO [MessageGatewayService] SSM Agent is trying to setup control channel for Session Manager module. Feb 9 19:01:30.293958 systemd[1]: Finished prepare-cni-plugins.service. Feb 9 19:01:30.381001 amazon-ssm-agent[1621]: 2024-02-09 19:01:29 INFO [MessageGatewayService] Setting up websocket for controlchannel for instance: i-099c9a9f5c45d929a, requestId: ed294b82-4c36-40a2-974e-368f23a80de4 Feb 9 19:01:30.431762 locksmithd[1679]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 9 19:01:30.478290 amazon-ssm-agent[1621]: 2024-02-09 19:01:29 INFO [OfflineService] Starting document processing engine... 
Feb 9 19:01:30.575465 amazon-ssm-agent[1621]: 2024-02-09 19:01:29 INFO [OfflineService] [EngineProcessor] Starting Feb 9 19:01:30.673110 amazon-ssm-agent[1621]: 2024-02-09 19:01:29 INFO [OfflineService] [EngineProcessor] Initial processing Feb 9 19:01:30.771281 amazon-ssm-agent[1621]: 2024-02-09 19:01:29 INFO [MessageGatewayService] listening reply. Feb 9 19:01:30.835767 sshd_keygen[1664]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 9 19:01:30.859480 systemd[1]: Finished sshd-keygen.service. Feb 9 19:01:30.862793 systemd[1]: Starting issuegen.service... Feb 9 19:01:30.865405 systemd[1]: Started sshd@0-172.31.31.36:22-139.178.68.195:33332.service. Feb 9 19:01:30.868998 amazon-ssm-agent[1621]: 2024-02-09 19:01:29 INFO [LongRunningPluginsManager] starting long running plugin manager Feb 9 19:01:30.871256 systemd[1]: issuegen.service: Deactivated successfully. Feb 9 19:01:30.871595 systemd[1]: Finished issuegen.service. Feb 9 19:01:30.874249 systemd[1]: Starting systemd-user-sessions.service... Feb 9 19:01:30.882350 systemd[1]: Finished systemd-user-sessions.service. Feb 9 19:01:30.884985 systemd[1]: Started getty@tty1.service. Feb 9 19:01:30.893882 systemd[1]: Started serial-getty@ttyS0.service. Feb 9 19:01:30.897267 systemd[1]: Reached target getty.target. Feb 9 19:01:30.899578 systemd[1]: Reached target multi-user.target. Feb 9 19:01:30.908211 systemd[1]: Starting systemd-update-utmp-runlevel.service... Feb 9 19:01:30.920703 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Feb 9 19:01:30.920935 systemd[1]: Finished systemd-update-utmp-runlevel.service. Feb 9 19:01:30.922573 systemd[1]: Startup finished in 749ms (kernel) + 10.037s (initrd) + 12.099s (userspace) = 22.886s. Feb 9 19:01:30.966809 amazon-ssm-agent[1621]: 2024-02-09 19:01:29 INFO [LongRunningPluginsManager] there aren't any long running plugin to execute Feb 9 19:01:31.064881 amazon-ssm-agent[1621]: 2024-02-09 19:01:29 INFO [HealthCheck] HealthCheck reporting agent health. Feb 9 19:01:31.163398 amazon-ssm-agent[1621]: 2024-02-09 19:01:29 INFO [LongRunningPluginsManager] There are no long running plugins currently getting executed - skipping their healthcheck Feb 9 19:01:31.261910 amazon-ssm-agent[1621]: 2024-02-09 19:01:29 INFO [OfflineService] Starting message polling Feb 9 19:01:31.280689 sshd[1833]: Accepted publickey for core from 139.178.68.195 port 33332 ssh2: RSA SHA256:kZCGRB9AT+jVFxeaX4/tO2T0hB3bd3sNSBeK3Rz6bcg Feb 9 19:01:31.283220 sshd[1833]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:01:31.297232 systemd[1]: Created slice user-500.slice. Feb 9 19:01:31.299220 systemd[1]: Starting user-runtime-dir@500.service... Feb 9 19:01:31.306754 systemd-logind[1634]: New session 1 of user core. Feb 9 19:01:31.316931 systemd[1]: Finished user-runtime-dir@500.service. Feb 9 19:01:31.319006 systemd[1]: Starting user@500.service... Feb 9 19:01:31.324230 (systemd)[1842]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:01:31.360572 amazon-ssm-agent[1621]: 2024-02-09 19:01:29 INFO [OfflineService] Starting send replies to MDS Feb 9 19:01:31.458127 systemd[1842]: Queued start job for default target default.target. Feb 9 19:01:31.459482 amazon-ssm-agent[1621]: 2024-02-09 19:01:29 INFO [StartupProcessor] Executing startup processor tasks Feb 9 19:01:31.459595 systemd[1842]: Reached target paths.target. Feb 9 19:01:31.459630 systemd[1842]: Reached target sockets.target. 
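A quick consistency check on the "Startup finished" line above: the three phases should sum to the reported total, and the 1 ms difference is just rounding of the per-phase values:

```python
# Phases from "Startup finished" above, in seconds.
kernel, initrd, userspace = 0.749, 10.037, 12.099
print(f"{kernel + initrd + userspace:.3f}s")
# -> 22.885s vs. the logged 22.886s (per-phase values are rounded)
```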
Feb 9 19:01:31.459652 systemd[1842]: Reached target timers.target. Feb 9 19:01:31.459671 systemd[1842]: Reached target basic.target. Feb 9 19:01:31.459795 systemd[1]: Started user@500.service. Feb 9 19:01:31.461578 systemd[1]: Started session-1.scope. Feb 9 19:01:31.462565 systemd[1842]: Reached target default.target. Feb 9 19:01:31.462853 systemd[1842]: Startup finished in 129ms. Feb 9 19:01:31.558643 amazon-ssm-agent[1621]: 2024-02-09 19:01:29 INFO [StartupProcessor] Write to serial port: Amazon SSM Agent v2.3.1319.0 is running Feb 9 19:01:31.617534 systemd[1]: Started sshd@1-172.31.31.36:22-139.178.68.195:33344.service. Feb 9 19:01:31.657919 amazon-ssm-agent[1621]: 2024-02-09 19:01:29 INFO [StartupProcessor] Write to serial port: OsProductName: Flatcar Container Linux by Kinvolk Feb 9 19:01:31.757379 amazon-ssm-agent[1621]: 2024-02-09 19:01:29 INFO [StartupProcessor] Write to serial port: OsVersion: 3510.3.2 Feb 9 19:01:31.799638 sshd[1851]: Accepted publickey for core from 139.178.68.195 port 33344 ssh2: RSA SHA256:kZCGRB9AT+jVFxeaX4/tO2T0hB3bd3sNSBeK3Rz6bcg Feb 9 19:01:31.801105 sshd[1851]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:01:31.807035 systemd-logind[1634]: New session 2 of user core. Feb 9 19:01:31.807704 systemd[1]: Started session-2.scope. Feb 9 19:01:31.857574 amazon-ssm-agent[1621]: 2024-02-09 19:01:29 INFO [MessageGatewayService] Opening websocket connection to: wss://ssmmessages.us-west-2.amazonaws.com/v1/control-channel/i-099c9a9f5c45d929a?role=subscribe&stream=input Feb 9 19:01:31.941002 sshd[1851]: pam_unix(sshd:session): session closed for user core Feb 9 19:01:31.944434 systemd[1]: sshd@1-172.31.31.36:22-139.178.68.195:33344.service: Deactivated successfully. Feb 9 19:01:31.945360 systemd[1]: session-2.scope: Deactivated successfully. Feb 9 19:01:31.946506 systemd-logind[1634]: Session 2 logged out. Waiting for processes to exit. Feb 9 19:01:31.947981 systemd-logind[1634]: Removed session 2. Feb 9 19:01:31.957227 amazon-ssm-agent[1621]: 2024-02-09 19:01:29 INFO [MessageGatewayService] Successfully opened websocket connection to: wss://ssmmessages.us-west-2.amazonaws.com/v1/control-channel/i-099c9a9f5c45d929a?role=subscribe&stream=input Feb 9 19:01:31.969709 systemd[1]: Started sshd@2-172.31.31.36:22-139.178.68.195:33354.service. Feb 9 19:01:32.057566 amazon-ssm-agent[1621]: 2024-02-09 19:01:29 INFO [MessageGatewayService] Starting receiving message from control channel Feb 9 19:01:32.138072 sshd[1857]: Accepted publickey for core from 139.178.68.195 port 33354 ssh2: RSA SHA256:kZCGRB9AT+jVFxeaX4/tO2T0hB3bd3sNSBeK3Rz6bcg Feb 9 19:01:32.140735 sshd[1857]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:01:32.151310 systemd[1]: Started session-3.scope. Feb 9 19:01:32.152205 systemd-logind[1634]: New session 3 of user core. Feb 9 19:01:32.160690 amazon-ssm-agent[1621]: 2024-02-09 19:01:29 INFO [MessageGatewayService] [EngineProcessor] Initial processing Feb 9 19:01:32.275287 sshd[1857]: pam_unix(sshd:session): session closed for user core Feb 9 19:01:32.278790 systemd[1]: sshd@2-172.31.31.36:22-139.178.68.195:33354.service: Deactivated successfully. Feb 9 19:01:32.279923 systemd[1]: session-3.scope: Deactivated successfully. Feb 9 19:01:32.280851 systemd-logind[1634]: Session 3 logged out. Waiting for processes to exit. Feb 9 19:01:32.282188 systemd-logind[1634]: Removed session 3. Feb 9 19:01:32.303429 systemd[1]: Started sshd@3-172.31.31.36:22-139.178.68.195:33358.service. 
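Every "Accepted publickey" line above carries the same SHA256:kZCG... fingerprint, i.e. one client key across all these sessions. For reference, OpenSSH computes that fingerprint as the unpadded base64 of SHA-256 over the raw public-key blob; a sketch, where pubkey_b64 is the base64 field of an authorized_keys line (hypothetical input, not a key from this log):

```python
import base64
import hashlib

def ssh_fingerprint(pubkey_b64: str) -> str:
    """OpenSSH-style SHA256 fingerprint of a public-key blob."""
    blob = base64.b64decode(pubkey_b64)   # raw key blob from an authorized_keys entry
    digest = hashlib.sha256(blob).digest()
    return "SHA256:" + base64.b64encode(digest).decode().rstrip("=")
```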
Feb 9 19:01:32.494206 sshd[1863]: Accepted publickey for core from 139.178.68.195 port 33358 ssh2: RSA SHA256:kZCGRB9AT+jVFxeaX4/tO2T0hB3bd3sNSBeK3Rz6bcg Feb 9 19:01:32.496479 sshd[1863]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:01:32.506503 systemd-logind[1634]: New session 4 of user core. Feb 9 19:01:32.507759 systemd[1]: Started session-4.scope. Feb 9 19:01:32.636100 sshd[1863]: pam_unix(sshd:session): session closed for user core Feb 9 19:01:32.640537 systemd[1]: sshd@3-172.31.31.36:22-139.178.68.195:33358.service: Deactivated successfully. Feb 9 19:01:32.641343 systemd[1]: session-4.scope: Deactivated successfully. Feb 9 19:01:32.642841 systemd-logind[1634]: Session 4 logged out. Waiting for processes to exit. Feb 9 19:01:32.643744 systemd-logind[1634]: Removed session 4. Feb 9 19:01:32.664153 systemd[1]: Started sshd@4-172.31.31.36:22-139.178.68.195:33366.service. Feb 9 19:01:32.838990 sshd[1869]: Accepted publickey for core from 139.178.68.195 port 33366 ssh2: RSA SHA256:kZCGRB9AT+jVFxeaX4/tO2T0hB3bd3sNSBeK3Rz6bcg Feb 9 19:01:32.841384 sshd[1869]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:01:32.847135 systemd[1]: Started session-5.scope. Feb 9 19:01:32.847859 systemd-logind[1634]: New session 5 of user core. Feb 9 19:01:32.995762 sudo[1872]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 9 19:01:32.996233 sudo[1872]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Feb 9 19:01:33.667997 systemd[1]: Starting docker.service... Feb 9 19:01:33.714898 env[1887]: time="2024-02-09T19:01:33.714855907Z" level=info msg="Starting up" Feb 9 19:01:33.716417 env[1887]: time="2024-02-09T19:01:33.716393861Z" level=info msg="parsed scheme: \"unix\"" module=grpc Feb 9 19:01:33.716527 env[1887]: time="2024-02-09T19:01:33.716515502Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Feb 9 19:01:33.716589 env[1887]: time="2024-02-09T19:01:33.716576801Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Feb 9 19:01:33.716634 env[1887]: time="2024-02-09T19:01:33.716625922Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Feb 9 19:01:33.718717 env[1887]: time="2024-02-09T19:01:33.718685080Z" level=info msg="parsed scheme: \"unix\"" module=grpc Feb 9 19:01:33.718717 env[1887]: time="2024-02-09T19:01:33.718706353Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Feb 9 19:01:33.718831 env[1887]: time="2024-02-09T19:01:33.718721152Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Feb 9 19:01:33.718831 env[1887]: time="2024-02-09T19:01:33.718729837Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Feb 9 19:01:33.724803 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport3181953887-merged.mount: Deactivated successfully. Feb 9 19:01:34.194061 env[1887]: time="2024-02-09T19:01:34.194012464Z" level=info msg="Loading containers: start." Feb 9 19:01:34.377414 kernel: Initializing XFRM netlink socket Feb 9 19:01:34.465243 env[1887]: time="2024-02-09T19:01:34.465082941Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. 
Daemon option --bip can be used to set a preferred IP address" Feb 9 19:01:34.466633 (udev-worker)[1898]: Network interface NamePolicy= disabled on kernel command line. Feb 9 19:01:34.622383 systemd-networkd[1455]: docker0: Link UP Feb 9 19:01:34.637454 env[1887]: time="2024-02-09T19:01:34.637413853Z" level=info msg="Loading containers: done." Feb 9 19:01:34.649615 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1436668467-merged.mount: Deactivated successfully. Feb 9 19:01:34.662579 env[1887]: time="2024-02-09T19:01:34.662532221Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 9 19:01:34.663209 env[1887]: time="2024-02-09T19:01:34.663181039Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Feb 9 19:01:34.663340 env[1887]: time="2024-02-09T19:01:34.663314482Z" level=info msg="Daemon has completed initialization" Feb 9 19:01:34.721527 systemd[1]: Started docker.service. Feb 9 19:01:34.727168 env[1887]: time="2024-02-09T19:01:34.727115752Z" level=info msg="API listen on /run/docker.sock" Feb 9 19:01:34.747731 systemd[1]: Reloading. Feb 9 19:01:34.871728 /usr/lib/systemd/system-generators/torcx-generator[2025]: time="2024-02-09T19:01:34Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 19:01:34.874495 /usr/lib/systemd/system-generators/torcx-generator[2025]: time="2024-02-09T19:01:34Z" level=info msg="torcx already run" Feb 9 19:01:34.966144 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 19:01:34.966169 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 19:01:34.987891 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 19:01:35.085716 systemd[1]: Started kubelet.service. Feb 9 19:01:35.170725 kubelet[2076]: E0209 19:01:35.170654 2076 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set" Feb 9 19:01:35.173110 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 9 19:01:35.173282 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 9 19:01:35.990321 env[1647]: time="2024-02-09T19:01:35.990267807Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.26.13\"" Feb 9 19:01:36.664505 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1298429169.mount: Deactivated successfully. 
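The docker daemon above fell back to its default address pool for docker0 (--bip would override it). The numbers implied by 172.17.0.0/16, as a quick check; by default docker assigns the first host address to the bridge itself:

```python
import ipaddress

# Default bridge pool from the log above.
net = ipaddress.ip_network("172.17.0.0/16")
gateway = next(net.hosts())   # docker0 takes the first host address
print(f"{net.num_addresses - 2} usable addresses, bridge at {gateway}")
# -> 65534 usable addresses, bridge at 172.17.0.1
```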
Feb 9 19:01:39.569510 env[1647]: time="2024-02-09T19:01:39.569449901Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:01:39.622563 env[1647]: time="2024-02-09T19:01:39.622516712Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:84900298406b2df97ade16b73c49c2b73265ded8735ac19a4e20c2a4ad65853f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:01:39.645367 env[1647]: time="2024-02-09T19:01:39.645295889Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:01:39.691160 env[1647]: time="2024-02-09T19:01:39.691113309Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:2f28bed4096abd572a56595ac0304238bdc271dcfe22c650707c09bf97ec16fd,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:01:39.692081 env[1647]: time="2024-02-09T19:01:39.692033043Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.26.13\" returns image reference \"sha256:84900298406b2df97ade16b73c49c2b73265ded8735ac19a4e20c2a4ad65853f\"" Feb 9 19:01:39.704854 env[1647]: time="2024-02-09T19:01:39.704815042Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.26.13\"" Feb 9 19:01:42.670463 env[1647]: time="2024-02-09T19:01:42.670414443Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:01:42.688031 env[1647]: time="2024-02-09T19:01:42.687975973Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:921f237b560bdb02300f82d3606635d395b20635512fab10f0191cff42079486,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:01:42.691776 env[1647]: time="2024-02-09T19:01:42.691732801Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:01:42.694234 env[1647]: time="2024-02-09T19:01:42.694191643Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:fda420c6c15cdd01c4eba3404f0662fe486a9c7f38fa13c741a21334673841a2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:01:42.695015 env[1647]: time="2024-02-09T19:01:42.694974376Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.26.13\" returns image reference \"sha256:921f237b560bdb02300f82d3606635d395b20635512fab10f0191cff42079486\"" Feb 9 19:01:42.707346 env[1647]: time="2024-02-09T19:01:42.707305735Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.26.13\"" Feb 9 19:01:43.225246 env[1647]: time="2024-02-09T19:01:43.225113342Z" level=error msg="PullImage \"registry.k8s.io/kube-scheduler:v1.26.13\" failed" error="failed to pull and unpack image \"registry.k8s.io/kube-scheduler:v1.26.13\": failed to copy: httpReadSeeker: failed open: failed to do request: Get \"https://prod-registry-k8s-io-us-west-2.s3.dualstack.us-west-2.amazonaws.com/containers/images/sha256:4fe82b56f06250b6b7eb3d5a879cd2cfabf41cb3e45b24af6059eadbc3b8026e\": dial tcp: lookup prod-registry-k8s-io-us-west-2.s3.dualstack.us-west-2.amazonaws.com: no such host" Feb 9 
19:01:43.239945 env[1647]: time="2024-02-09T19:01:43.239886263Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.26.13\"" Feb 9 19:01:44.038981 amazon-ssm-agent[1621]: 2024-02-09 19:01:44 INFO [HealthCheck] HealthCheck reporting agent health. Feb 9 19:01:44.607605 env[1647]: time="2024-02-09T19:01:44.607559136Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:01:44.610695 env[1647]: time="2024-02-09T19:01:44.610648757Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:4fe82b56f06250b6b7eb3d5a879cd2cfabf41cb3e45b24af6059eadbc3b8026e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:01:44.613633 env[1647]: time="2024-02-09T19:01:44.613579930Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:01:44.616240 env[1647]: time="2024-02-09T19:01:44.616194554Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:c3c7303ee6d01c8e5a769db28661cf854b55175aa72c67e9b6a7b9d47ac42af3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:01:44.617220 env[1647]: time="2024-02-09T19:01:44.617183513Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.26.13\" returns image reference \"sha256:4fe82b56f06250b6b7eb3d5a879cd2cfabf41cb3e45b24af6059eadbc3b8026e\"" Feb 9 19:01:44.629026 env[1647]: time="2024-02-09T19:01:44.628979364Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.13\"" Feb 9 19:01:45.343725 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 9 19:01:45.344002 systemd[1]: Stopped kubelet.service. Feb 9 19:01:45.350331 systemd[1]: Started kubelet.service. Feb 9 19:01:45.461965 kubelet[2117]: E0209 19:01:45.461910 2117 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set" Feb 9 19:01:45.467577 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 9 19:01:45.467704 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 9 19:01:45.859421 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount371174597.mount: Deactivated successfully. 
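kubelet has now failed twice on the same flag-validation error: no --container-runtime-endpoint. The CRI config dump earlier in the log shows the endpoint this host actually serves (/run/containerd/containerd.sock, which containerd reported "serving..." on). A minimal probe of that socket, as a sketch, assuming kubelet would then be pointed at unix:///run/containerd/containerd.sock:

```python
import socket

# Endpoint from the containerd config dump earlier in the log.
ENDPOINT = "/run/containerd/containerd.sock"

s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
try:
    s.connect(ENDPOINT)   # succeeds once containerd is up, as logged above
    print("containerd socket reachable; pass it to kubelet as "
          f"--container-runtime-endpoint=unix://{ENDPOINT}")
finally:
    s.close()
```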
Feb 9 19:01:46.682581 env[1647]: time="2024-02-09T19:01:46.682528856Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:01:46.687280 env[1647]: time="2024-02-09T19:01:46.687229427Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5a7325fa2b6e8d712e4a770abb4a5a5852e87b6de8df34552d67853e9bfb9f9f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:01:46.691857 env[1647]: time="2024-02-09T19:01:46.691815314Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:01:46.694489 env[1647]: time="2024-02-09T19:01:46.694448884Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:f6e0de32a002b910b9b2e0e8d769e2d7b05208240559c745ce4781082ab15f22,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:01:46.695013 env[1647]: time="2024-02-09T19:01:46.694969782Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.13\" returns image reference \"sha256:5a7325fa2b6e8d712e4a770abb4a5a5852e87b6de8df34552d67853e9bfb9f9f\"" Feb 9 19:01:46.714060 env[1647]: time="2024-02-09T19:01:46.714013988Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Feb 9 19:01:47.288053 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount569109367.mount: Deactivated successfully. Feb 9 19:01:47.312415 env[1647]: time="2024-02-09T19:01:47.312341639Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:01:47.316003 env[1647]: time="2024-02-09T19:01:47.315942162Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:01:47.319343 env[1647]: time="2024-02-09T19:01:47.319303842Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:01:47.324345 env[1647]: time="2024-02-09T19:01:47.324297727Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:01:47.325404 env[1647]: time="2024-02-09T19:01:47.325342793Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Feb 9 19:01:47.339129 env[1647]: time="2024-02-09T19:01:47.339087876Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.6-0\"" Feb 9 19:01:48.457088 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount267443568.mount: Deactivated successfully. 
Feb 9 19:01:53.442509 env[1647]: time="2024-02-09T19:01:53.442451610Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.6-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:01:53.481041 env[1647]: time="2024-02-09T19:01:53.480989379Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fce326961ae2d51a5f726883fd59d2a8c2ccc3e45d3bb859882db58e422e59e7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:01:53.484159 env[1647]: time="2024-02-09T19:01:53.484119019Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.6-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:01:53.486544 env[1647]: time="2024-02-09T19:01:53.486500470Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:01:53.487521 env[1647]: time="2024-02-09T19:01:53.487404911Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.6-0\" returns image reference \"sha256:fce326961ae2d51a5f726883fd59d2a8c2ccc3e45d3bb859882db58e422e59e7\"" Feb 9 19:01:53.500960 env[1647]: time="2024-02-09T19:01:53.500915765Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.9.3\"" Feb 9 19:01:54.120515 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2348988598.mount: Deactivated successfully. Feb 9 19:01:54.260140 amazon-ssm-agent[1621]: 2024-02-09 19:01:54 INFO [MessagingDeliveryService] [Association] No associations on boot. Requerying for associations after 30 seconds. Feb 9 19:01:55.024231 env[1647]: time="2024-02-09T19:01:55.024176723Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.9.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:01:55.028185 env[1647]: time="2024-02-09T19:01:55.028132805Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:01:55.031144 env[1647]: time="2024-02-09T19:01:55.031099160Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.9.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:01:55.037964 env[1647]: time="2024-02-09T19:01:55.037915611Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:01:55.039077 env[1647]: time="2024-02-09T19:01:55.039030581Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.9.3\" returns image reference \"sha256:5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a\"" Feb 9 19:01:55.560669 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Feb 9 19:01:55.561488 systemd[1]: Stopped kubelet.service. Feb 9 19:01:55.563404 systemd[1]: Started kubelet.service. 
Feb 9 19:01:55.712793 kubelet[2161]: E0209 19:01:55.712726 2161 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set" Feb 9 19:01:55.715671 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 9 19:01:55.715845 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 9 19:01:57.823739 systemd[1]: Stopped kubelet.service. Feb 9 19:01:57.843735 systemd[1]: Reloading. Feb 9 19:01:57.920018 /usr/lib/systemd/system-generators/torcx-generator[2227]: time="2024-02-09T19:01:57Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 19:01:57.920061 /usr/lib/systemd/system-generators/torcx-generator[2227]: time="2024-02-09T19:01:57Z" level=info msg="torcx already run" Feb 9 19:01:58.038475 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 19:01:58.038501 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 19:01:58.060396 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 19:01:58.242914 systemd[1]: Started kubelet.service. Feb 9 19:01:58.316215 kubelet[2279]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 9 19:01:58.316215 kubelet[2279]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 19:01:58.316683 kubelet[2279]: I0209 19:01:58.316272 2279 server.go:198] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 9 19:01:58.318284 kubelet[2279]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 9 19:01:58.318284 kubelet[2279]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 19:01:58.678786 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
Feb 9 19:01:58.692965 kubelet[2279]: I0209 19:01:58.692938 2279 server.go:412] "Kubelet version" kubeletVersion="v1.26.5" Feb 9 19:01:58.693210 kubelet[2279]: I0209 19:01:58.693197 2279 server.go:414] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 9 19:01:58.693606 kubelet[2279]: I0209 19:01:58.693591 2279 server.go:836] "Client rotation is on, will bootstrap in background" Feb 9 19:01:58.697968 kubelet[2279]: E0209 19:01:58.697939 2279 certificate_manager.go:471] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.31.36:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.31.36:6443: connect: connection refused Feb 9 19:01:58.698106 kubelet[2279]: I0209 19:01:58.697999 2279 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 9 19:01:58.704906 kubelet[2279]: I0209 19:01:58.704853 2279 server.go:659] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Feb 9 19:01:58.705398 kubelet[2279]: I0209 19:01:58.705369 2279 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 9 19:01:58.705520 kubelet[2279]: I0209 19:01:58.705507 2279 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:} {Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]} Feb 9 19:01:58.705641 kubelet[2279]: I0209 19:01:58.705544 2279 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Feb 9 19:01:58.705641 kubelet[2279]: I0209 19:01:58.705566 2279 container_manager_linux.go:308] "Creating device plugin manager" Feb 9 19:01:58.705738 kubelet[2279]: I0209 19:01:58.705700 2279 state_mem.go:36] "Initialized new in-memory state store" Feb 9 19:01:58.713599 kubelet[2279]: I0209 19:01:58.713152 2279 kubelet.go:398] "Attempting to sync node with API server" Feb 9 19:01:58.713746 kubelet[2279]: I0209 19:01:58.713614 2279 kubelet.go:286] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 9 19:01:58.713746 kubelet[2279]: I0209 19:01:58.713650 2279 kubelet.go:297] "Adding apiserver pod source" Feb 9 19:01:58.713746 kubelet[2279]: I0209 19:01:58.713670 2279 
apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 9 19:01:58.715494 kubelet[2279]: W0209 19:01:58.715272 2279 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://172.31.31.36:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.31.36:6443: connect: connection refused Feb 9 19:01:58.715494 kubelet[2279]: E0209 19:01:58.715338 2279 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.31.36:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.31.36:6443: connect: connection refused Feb 9 19:01:58.715494 kubelet[2279]: W0209 19:01:58.715434 2279 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://172.31.31.36:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-31-36&limit=500&resourceVersion=0": dial tcp 172.31.31.36:6443: connect: connection refused Feb 9 19:01:58.715494 kubelet[2279]: E0209 19:01:58.715485 2279 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.31.36:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-31-36&limit=500&resourceVersion=0": dial tcp 172.31.31.36:6443: connect: connection refused Feb 9 19:01:58.715943 kubelet[2279]: I0209 19:01:58.715568 2279 kuberuntime_manager.go:244] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 9 19:01:58.716095 kubelet[2279]: W0209 19:01:58.716071 2279 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Feb 9 19:01:58.717093 kubelet[2279]: I0209 19:01:58.717038 2279 server.go:1186] "Started kubelet" Feb 9 19:01:58.724212 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). 
Feb 9 19:01:58.724505 kubelet[2279]: I0209 19:01:58.724478 2279 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 9 19:01:58.729443 kubelet[2279]: E0209 19:01:58.728573 2279 event.go:276] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ip-172-31-31-36.17b2470a66378990", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ip-172-31-31-36", UID:"ip-172-31-31-36", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"ip-172-31-31-36"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 1, 58, 717000080, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 1, 58, 717000080, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Post "https://172.31.31.36:6443/api/v1/namespaces/default/events": dial tcp 172.31.31.36:6443: connect: connection refused'(may retry after sleeping) Feb 9 19:01:58.731159 kubelet[2279]: I0209 19:01:58.731130 2279 server.go:161] "Starting to listen" address="0.0.0.0" port=10250 Feb 9 19:01:58.732532 kubelet[2279]: I0209 19:01:58.732508 2279 server.go:451] "Adding debug handlers to kubelet server" Feb 9 19:01:58.733189 kubelet[2279]: I0209 19:01:58.733170 2279 volume_manager.go:293] "Starting Kubelet Volume Manager" Feb 9 19:01:58.736081 kubelet[2279]: E0209 19:01:58.734260 2279 controller.go:146] failed to ensure lease exists, will retry in 200ms, error: Get "https://172.31.31.36:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-31-36?timeout=10s": dial tcp 172.31.31.36:6443: connect: connection refused Feb 9 19:01:58.736081 kubelet[2279]: E0209 19:01:58.734550 2279 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 9 19:01:58.736081 kubelet[2279]: E0209 19:01:58.734576 2279 kubelet.go:1386] "Image garbage collection failed once. 
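Note the lease controller's retry interval: 200ms here, then 400ms, 800ms, 1.6s and 3.2s in later entries, i.e. plain exponential backoff. Once the control plane is reachable, the node lease can be inspected directly (sketch):

    kubectl -n kube-node-lease get lease ip-172-31-31-36 -o yaml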
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 9 19:01:58.736081 kubelet[2279]: I0209 19:01:58.735532 2279 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 9 19:01:58.736081 kubelet[2279]: W0209 19:01:58.735941 2279 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://172.31.31.36:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.31.36:6443: connect: connection refused Feb 9 19:01:58.736081 kubelet[2279]: E0209 19:01:58.735994 2279 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.31.36:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.31.36:6443: connect: connection refused Feb 9 19:01:58.776162 kubelet[2279]: I0209 19:01:58.776119 2279 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 9 19:01:58.776162 kubelet[2279]: I0209 19:01:58.776144 2279 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 9 19:01:58.776162 kubelet[2279]: I0209 19:01:58.776168 2279 state_mem.go:36] "Initialized new in-memory state store" Feb 9 19:01:58.789878 kubelet[2279]: I0209 19:01:58.789837 2279 policy_none.go:49] "None policy: Start" Feb 9 19:01:58.790858 kubelet[2279]: I0209 19:01:58.790823 2279 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 9 19:01:58.790858 kubelet[2279]: I0209 19:01:58.790853 2279 state_mem.go:35] "Initializing new in-memory state store" Feb 9 19:01:58.799674 systemd[1]: Created slice kubepods.slice. Feb 9 19:01:58.805294 systemd[1]: Created slice kubepods-burstable.slice. Feb 9 19:01:58.808972 systemd[1]: Created slice kubepods-besteffort.slice. Feb 9 19:01:58.821382 kubelet[2279]: I0209 19:01:58.821291 2279 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 9 19:01:58.822477 kubelet[2279]: I0209 19:01:58.821927 2279 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 9 19:01:58.824762 kubelet[2279]: E0209 19:01:58.824747 2279 eviction_manager.go:261] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-31-36\" not found" Feb 9 19:01:58.835552 kubelet[2279]: I0209 19:01:58.835520 2279 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-31-36" Feb 9 19:01:58.835919 kubelet[2279]: E0209 19:01:58.835900 2279 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://172.31.31.36:6443/api/v1/nodes\": dial tcp 172.31.31.36:6443: connect: connection refused" node="ip-172-31-31-36" Feb 9 19:01:58.850532 kubelet[2279]: I0209 19:01:58.850510 2279 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4 Feb 9 19:01:58.890103 kubelet[2279]: I0209 19:01:58.890080 2279 kubelet_network_linux.go:63] "Initialized iptables rules." 
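The kubepods.slice, kubepods-burstable.slice and kubepods-besteffort.slice units created here are the systemd cgroup hierarchy for the pod QoS classes, matching CgroupDriver:systemd in the config dump above. The resulting tree can be browsed with (sketch):

    systemctl status kubepods.slice
    systemd-cgls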
protocol=IPv6 Feb 9 19:01:58.890289 kubelet[2279]: I0209 19:01:58.890277 2279 status_manager.go:176] "Starting to sync pod status with apiserver" Feb 9 19:01:58.890391 kubelet[2279]: I0209 19:01:58.890381 2279 kubelet.go:2113] "Starting kubelet main sync loop" Feb 9 19:01:58.890561 kubelet[2279]: E0209 19:01:58.890548 2279 kubelet.go:2137] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Feb 9 19:01:58.892018 kubelet[2279]: W0209 19:01:58.891983 2279 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://172.31.31.36:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.31.36:6443: connect: connection refused Feb 9 19:01:58.894428 kubelet[2279]: E0209 19:01:58.892106 2279 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.31.36:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.31.36:6443: connect: connection refused Feb 9 19:01:58.935431 kubelet[2279]: E0209 19:01:58.935286 2279 controller.go:146] failed to ensure lease exists, will retry in 400ms, error: Get "https://172.31.31.36:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-31-36?timeout=10s": dial tcp 172.31.31.36:6443: connect: connection refused Feb 9 19:01:58.990814 kubelet[2279]: I0209 19:01:58.990771 2279 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:01:58.992301 kubelet[2279]: I0209 19:01:58.992281 2279 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:01:59.005401 kubelet[2279]: I0209 19:01:58.995764 2279 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:01:59.005401 kubelet[2279]: I0209 19:01:58.996436 2279 status_manager.go:698] "Failed to get status for pod" podUID=6f91cedcdda95569bc08ae526ad771d4 pod="kube-system/kube-scheduler-ip-172-31-31-36" err="Get \"https://172.31.31.36:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-ip-172-31-31-36\": dial tcp 172.31.31.36:6443: connect: connection refused" Feb 9 19:01:59.005401 kubelet[2279]: I0209 19:01:58.997528 2279 status_manager.go:698] "Failed to get status for pod" podUID=0080cafc1879fba515f3425e81e63996 pod="kube-system/kube-apiserver-ip-172-31-31-36" err="Get \"https://172.31.31.36:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-ip-172-31-31-36\": dial tcp 172.31.31.36:6443: connect: connection refused" Feb 9 19:01:59.005401 kubelet[2279]: I0209 19:01:59.000011 2279 status_manager.go:698] "Failed to get status for pod" podUID=235ccb8a9ea76651c27bf4649dbe9e36 pod="kube-system/kube-controller-manager-ip-172-31-31-36" err="Get \"https://172.31.31.36:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ip-172-31-31-36\": dial tcp 172.31.31.36:6443: connect: connection refused" Feb 9 19:01:59.003522 systemd[1]: Created slice kubepods-burstable-pod6f91cedcdda95569bc08ae526ad771d4.slice. Feb 9 19:01:59.027576 systemd[1]: Created slice kubepods-burstable-pod0080cafc1879fba515f3425e81e63996.slice. 
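The three "Topology Admit Handler" entries are the control plane static pods picked up from /etc/kubernetes/manifests; the accompanying "Failed to get status for pod" errors are expected, since their statuses cannot be posted to an API server that is itself one of those pods. Judging from the admitted pods, the directory holds something like (sketch; file names assumed):

    ls /etc/kubernetes/manifests
    kube-apiserver.yaml  kube-controller-manager.yaml  kube-scheduler.yaml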
Feb 9 19:01:59.048938 kubelet[2279]: I0209 19:01:59.048908 2279 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-31-36" Feb 9 19:01:59.051734 kubelet[2279]: E0209 19:01:59.051712 2279 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://172.31.31.36:6443/api/v1/nodes\": dial tcp 172.31.31.36:6443: connect: connection refused" node="ip-172-31-31-36" Feb 9 19:01:59.058039 systemd[1]: Created slice kubepods-burstable-pod235ccb8a9ea76651c27bf4649dbe9e36.slice. Feb 9 19:01:59.141434 kubelet[2279]: I0209 19:01:59.141383 2279 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6f91cedcdda95569bc08ae526ad771d4-kubeconfig\") pod \"kube-scheduler-ip-172-31-31-36\" (UID: \"6f91cedcdda95569bc08ae526ad771d4\") " pod="kube-system/kube-scheduler-ip-172-31-31-36" Feb 9 19:01:59.141434 kubelet[2279]: I0209 19:01:59.141440 2279 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0080cafc1879fba515f3425e81e63996-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-31-36\" (UID: \"0080cafc1879fba515f3425e81e63996\") " pod="kube-system/kube-apiserver-ip-172-31-31-36" Feb 9 19:01:59.156790 kubelet[2279]: I0209 19:01:59.141469 2279 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0080cafc1879fba515f3425e81e63996-ca-certs\") pod \"kube-apiserver-ip-172-31-31-36\" (UID: \"0080cafc1879fba515f3425e81e63996\") " pod="kube-system/kube-apiserver-ip-172-31-31-36" Feb 9 19:01:59.156790 kubelet[2279]: I0209 19:01:59.141494 2279 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0080cafc1879fba515f3425e81e63996-k8s-certs\") pod \"kube-apiserver-ip-172-31-31-36\" (UID: \"0080cafc1879fba515f3425e81e63996\") " pod="kube-system/kube-apiserver-ip-172-31-31-36" Feb 9 19:01:59.156790 kubelet[2279]: I0209 19:01:59.141528 2279 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/235ccb8a9ea76651c27bf4649dbe9e36-ca-certs\") pod \"kube-controller-manager-ip-172-31-31-36\" (UID: \"235ccb8a9ea76651c27bf4649dbe9e36\") " pod="kube-system/kube-controller-manager-ip-172-31-31-36" Feb 9 19:01:59.156790 kubelet[2279]: I0209 19:01:59.141559 2279 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/235ccb8a9ea76651c27bf4649dbe9e36-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-31-36\" (UID: \"235ccb8a9ea76651c27bf4649dbe9e36\") " pod="kube-system/kube-controller-manager-ip-172-31-31-36" Feb 9 19:01:59.156790 kubelet[2279]: I0209 19:01:59.141589 2279 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/235ccb8a9ea76651c27bf4649dbe9e36-k8s-certs\") pod \"kube-controller-manager-ip-172-31-31-36\" (UID: \"235ccb8a9ea76651c27bf4649dbe9e36\") " pod="kube-system/kube-controller-manager-ip-172-31-31-36" Feb 9 19:01:59.156943 kubelet[2279]: I0209 19:01:59.141620 2279 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/235ccb8a9ea76651c27bf4649dbe9e36-kubeconfig\") pod \"kube-controller-manager-ip-172-31-31-36\" (UID: \"235ccb8a9ea76651c27bf4649dbe9e36\") " pod="kube-system/kube-controller-manager-ip-172-31-31-36" Feb 9 19:01:59.156943 kubelet[2279]: I0209 19:01:59.141668 2279 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/235ccb8a9ea76651c27bf4649dbe9e36-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-31-36\" (UID: \"235ccb8a9ea76651c27bf4649dbe9e36\") " pod="kube-system/kube-controller-manager-ip-172-31-31-36" Feb 9 19:01:59.325967 env[1647]: time="2024-02-09T19:01:59.325851420Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-31-36,Uid:6f91cedcdda95569bc08ae526ad771d4,Namespace:kube-system,Attempt:0,}" Feb 9 19:01:59.336731 kubelet[2279]: E0209 19:01:59.336676 2279 controller.go:146] failed to ensure lease exists, will retry in 800ms, error: Get "https://172.31.31.36:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-31-36?timeout=10s": dial tcp 172.31.31.36:6443: connect: connection refused Feb 9 19:01:59.344611 env[1647]: time="2024-02-09T19:01:59.344473883Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-31-36,Uid:0080cafc1879fba515f3425e81e63996,Namespace:kube-system,Attempt:0,}" Feb 9 19:01:59.366342 env[1647]: time="2024-02-09T19:01:59.366288150Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-31-36,Uid:235ccb8a9ea76651c27bf4649dbe9e36,Namespace:kube-system,Attempt:0,}" Feb 9 19:01:59.453432 kubelet[2279]: I0209 19:01:59.453400 2279 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-31-36" Feb 9 19:01:59.453862 kubelet[2279]: E0209 19:01:59.453831 2279 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://172.31.31.36:6443/api/v1/nodes\": dial tcp 172.31.31.36:6443: connect: connection refused" node="ip-172-31-31-36" Feb 9 19:01:59.547829 kubelet[2279]: W0209 19:01:59.547776 2279 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://172.31.31.36:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.31.36:6443: connect: connection refused Feb 9 19:01:59.547829 kubelet[2279]: E0209 19:01:59.547833 2279 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.31.36:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.31.36:6443: connect: connection refused Feb 9 19:01:59.594668 kubelet[2279]: W0209 19:01:59.594332 2279 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://172.31.31.36:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.31.36:6443: connect: connection refused Feb 9 19:01:59.594668 kubelet[2279]: E0209 19:01:59.594412 2279 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.31.36:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.31.36:6443: connect: connection refused Feb 9 19:01:59.627209 kubelet[2279]: W0209 19:01:59.627156 2279 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get 
"https://172.31.31.36:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-31-36&limit=500&resourceVersion=0": dial tcp 172.31.31.36:6443: connect: connection refused Feb 9 19:01:59.627209 kubelet[2279]: E0209 19:01:59.627217 2279 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.31.36:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-31-36&limit=500&resourceVersion=0": dial tcp 172.31.31.36:6443: connect: connection refused Feb 9 19:01:59.845260 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3051745572.mount: Deactivated successfully. Feb 9 19:01:59.858070 env[1647]: time="2024-02-09T19:01:59.858018334Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:01:59.859878 env[1647]: time="2024-02-09T19:01:59.859832895Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:01:59.885082 env[1647]: time="2024-02-09T19:01:59.884998998Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:01:59.887617 env[1647]: time="2024-02-09T19:01:59.887561519Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:01:59.906409 env[1647]: time="2024-02-09T19:01:59.906300683Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:01:59.910465 env[1647]: time="2024-02-09T19:01:59.910426929Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:01:59.914175 env[1647]: time="2024-02-09T19:01:59.914128892Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:01:59.916891 env[1647]: time="2024-02-09T19:01:59.916848584Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:01:59.922818 env[1647]: time="2024-02-09T19:01:59.922763172Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:01:59.926608 env[1647]: time="2024-02-09T19:01:59.926564395Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:01:59.930622 env[1647]: time="2024-02-09T19:01:59.930583801Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:01:59.941369 env[1647]: time="2024-02-09T19:01:59.941314281Z" 
level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:02:00.027164 env[1647]: time="2024-02-09T19:02:00.026912715Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:02:00.027164 env[1647]: time="2024-02-09T19:02:00.026962972Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:02:00.027164 env[1647]: time="2024-02-09T19:02:00.026980107Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:02:00.027626 env[1647]: time="2024-02-09T19:02:00.027543630Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7adc4866df2aaabdfcf748511e8b43fbcfcb66e7a1d1e020f535d6eef39973ff pid=2359 runtime=io.containerd.runc.v2 Feb 9 19:02:00.057481 env[1647]: time="2024-02-09T19:02:00.057380606Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:02:00.057822 env[1647]: time="2024-02-09T19:02:00.057777169Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:02:00.058887 env[1647]: time="2024-02-09T19:02:00.057944430Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:02:00.061070 env[1647]: time="2024-02-09T19:02:00.060990659Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/532d6beae932767e8c06fab3f43f9cb460eeb787c5202b752d957f3c8d22d92c pid=2382 runtime=io.containerd.runc.v2 Feb 9 19:02:00.066958 env[1647]: time="2024-02-09T19:02:00.066819145Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:02:00.067134 env[1647]: time="2024-02-09T19:02:00.066919209Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:02:00.067134 env[1647]: time="2024-02-09T19:02:00.066973543Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:02:00.067659 env[1647]: time="2024-02-09T19:02:00.067556591Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/985f2fdd1961610ea3c5a9f8e570e1985981134df20f5a7a52e8807c431d51fa pid=2389 runtime=io.containerd.runc.v2 Feb 9 19:02:00.100739 systemd[1]: Started cri-containerd-7adc4866df2aaabdfcf748511e8b43fbcfcb66e7a1d1e020f535d6eef39973ff.scope. Feb 9 19:02:00.114895 systemd[1]: Started cri-containerd-532d6beae932767e8c06fab3f43f9cb460eeb787c5202b752d957f3c8d22d92c.scope. 
Feb 9 19:02:00.137427 kubelet[2279]: E0209 19:02:00.137346 2279 controller.go:146] failed to ensure lease exists, will retry in 1.6s, error: Get "https://172.31.31.36:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-31-36?timeout=10s": dial tcp 172.31.31.36:6443: connect: connection refused Feb 9 19:02:00.159275 systemd[1]: Started cri-containerd-985f2fdd1961610ea3c5a9f8e570e1985981134df20f5a7a52e8807c431d51fa.scope. Feb 9 19:02:00.258494 kubelet[2279]: I0209 19:02:00.257971 2279 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-31-36" Feb 9 19:02:00.258494 kubelet[2279]: E0209 19:02:00.258452 2279 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://172.31.31.36:6443/api/v1/nodes\": dial tcp 172.31.31.36:6443: connect: connection refused" node="ip-172-31-31-36" Feb 9 19:02:00.281029 env[1647]: time="2024-02-09T19:02:00.280953973Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-31-36,Uid:0080cafc1879fba515f3425e81e63996,Namespace:kube-system,Attempt:0,} returns sandbox id \"7adc4866df2aaabdfcf748511e8b43fbcfcb66e7a1d1e020f535d6eef39973ff\"" Feb 9 19:02:00.286394 env[1647]: time="2024-02-09T19:02:00.286334476Z" level=info msg="CreateContainer within sandbox \"7adc4866df2aaabdfcf748511e8b43fbcfcb66e7a1d1e020f535d6eef39973ff\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 9 19:02:00.289198 kubelet[2279]: W0209 19:02:00.289156 2279 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://172.31.31.36:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.31.36:6443: connect: connection refused Feb 9 19:02:00.289523 kubelet[2279]: E0209 19:02:00.289506 2279 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.31.36:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.31.36:6443: connect: connection refused Feb 9 19:02:00.317820 env[1647]: time="2024-02-09T19:02:00.317762304Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-31-36,Uid:235ccb8a9ea76651c27bf4649dbe9e36,Namespace:kube-system,Attempt:0,} returns sandbox id \"985f2fdd1961610ea3c5a9f8e570e1985981134df20f5a7a52e8807c431d51fa\"" Feb 9 19:02:00.341647 env[1647]: time="2024-02-09T19:02:00.341521767Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-31-36,Uid:6f91cedcdda95569bc08ae526ad771d4,Namespace:kube-system,Attempt:0,} returns sandbox id \"532d6beae932767e8c06fab3f43f9cb460eeb787c5202b752d957f3c8d22d92c\"" Feb 9 19:02:00.357610 env[1647]: time="2024-02-09T19:02:00.357119705Z" level=info msg="CreateContainer within sandbox \"532d6beae932767e8c06fab3f43f9cb460eeb787c5202b752d957f3c8d22d92c\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 9 19:02:00.360260 env[1647]: time="2024-02-09T19:02:00.358504758Z" level=info msg="CreateContainer within sandbox \"985f2fdd1961610ea3c5a9f8e570e1985981134df20f5a7a52e8807c431d51fa\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 9 19:02:00.421098 env[1647]: time="2024-02-09T19:02:00.421044698Z" level=info msg="CreateContainer within sandbox \"7adc4866df2aaabdfcf748511e8b43fbcfcb66e7a1d1e020f535d6eef39973ff\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id 
\"c10dd1a2ddbd625b9cf94aa7cd74a587d525c5f44513133078cbdfb015883cd2\"" Feb 9 19:02:00.421869 env[1647]: time="2024-02-09T19:02:00.421834714Z" level=info msg="StartContainer for \"c10dd1a2ddbd625b9cf94aa7cd74a587d525c5f44513133078cbdfb015883cd2\"" Feb 9 19:02:00.431853 env[1647]: time="2024-02-09T19:02:00.431801175Z" level=info msg="CreateContainer within sandbox \"532d6beae932767e8c06fab3f43f9cb460eeb787c5202b752d957f3c8d22d92c\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"f3f340bf2d713d49a1f4635fd81f382433bec02ec6f057683136f475fb677087\"" Feb 9 19:02:00.436375 env[1647]: time="2024-02-09T19:02:00.436313319Z" level=info msg="StartContainer for \"f3f340bf2d713d49a1f4635fd81f382433bec02ec6f057683136f475fb677087\"" Feb 9 19:02:00.441393 env[1647]: time="2024-02-09T19:02:00.441317588Z" level=info msg="CreateContainer within sandbox \"985f2fdd1961610ea3c5a9f8e570e1985981134df20f5a7a52e8807c431d51fa\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"95830507a5281595ae7390238a6cd77a4cc5200b5605af70c9c09e46116a10db\"" Feb 9 19:02:00.442214 env[1647]: time="2024-02-09T19:02:00.442177756Z" level=info msg="StartContainer for \"95830507a5281595ae7390238a6cd77a4cc5200b5605af70c9c09e46116a10db\"" Feb 9 19:02:00.473085 systemd[1]: Started cri-containerd-c10dd1a2ddbd625b9cf94aa7cd74a587d525c5f44513133078cbdfb015883cd2.scope. Feb 9 19:02:00.485543 systemd[1]: Started cri-containerd-f3f340bf2d713d49a1f4635fd81f382433bec02ec6f057683136f475fb677087.scope. Feb 9 19:02:00.536460 systemd[1]: Started cri-containerd-95830507a5281595ae7390238a6cd77a4cc5200b5605af70c9c09e46116a10db.scope. Feb 9 19:02:00.625096 env[1647]: time="2024-02-09T19:02:00.624896520Z" level=info msg="StartContainer for \"f3f340bf2d713d49a1f4635fd81f382433bec02ec6f057683136f475fb677087\" returns successfully" Feb 9 19:02:00.627804 env[1647]: time="2024-02-09T19:02:00.627752986Z" level=info msg="StartContainer for \"c10dd1a2ddbd625b9cf94aa7cd74a587d525c5f44513133078cbdfb015883cd2\" returns successfully" Feb 9 19:02:00.664681 env[1647]: time="2024-02-09T19:02:00.664611768Z" level=info msg="StartContainer for \"95830507a5281595ae7390238a6cd77a4cc5200b5605af70c9c09e46116a10db\" returns successfully" Feb 9 19:02:00.724344 kubelet[2279]: E0209 19:02:00.724306 2279 certificate_manager.go:471] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.31.36:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.31.36:6443: connect: connection refused Feb 9 19:02:00.915855 kubelet[2279]: I0209 19:02:00.915721 2279 status_manager.go:698] "Failed to get status for pod" podUID=0080cafc1879fba515f3425e81e63996 pod="kube-system/kube-apiserver-ip-172-31-31-36" err="Get \"https://172.31.31.36:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-ip-172-31-31-36\": dial tcp 172.31.31.36:6443: connect: connection refused" Feb 9 19:02:00.921503 kubelet[2279]: I0209 19:02:00.921476 2279 status_manager.go:698] "Failed to get status for pod" podUID=235ccb8a9ea76651c27bf4649dbe9e36 pod="kube-system/kube-controller-manager-ip-172-31-31-36" err="Get \"https://172.31.31.36:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ip-172-31-31-36\": dial tcp 172.31.31.36:6443: connect: connection refused" Feb 9 19:02:00.925524 kubelet[2279]: I0209 19:02:00.925488 2279 status_manager.go:698] "Failed to get status for pod" 
podUID=6f91cedcdda95569bc08ae526ad771d4 pod="kube-system/kube-scheduler-ip-172-31-31-36" err="Get \"https://172.31.31.36:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-ip-172-31-31-36\": dial tcp 172.31.31.36:6443: connect: connection refused" Feb 9 19:02:01.528216 kubelet[2279]: W0209 19:02:01.528175 2279 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://172.31.31.36:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.31.36:6443: connect: connection refused Feb 9 19:02:01.528632 kubelet[2279]: E0209 19:02:01.528618 2279 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.31.36:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.31.36:6443: connect: connection refused Feb 9 19:02:01.595606 kubelet[2279]: E0209 19:02:01.595487 2279 event.go:276] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ip-172-31-31-36.17b2470a66378990", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ip-172-31-31-36", UID:"ip-172-31-31-36", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"ip-172-31-31-36"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 1, 58, 717000080, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 1, 58, 717000080, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Post "https://172.31.31.36:6443/api/v1/namespaces/default/events": dial tcp 172.31.31.36:6443: connect: connection refused'(may retry after sleeping) Feb 9 19:02:01.742455 kubelet[2279]: E0209 19:02:01.742405 2279 controller.go:146] failed to ensure lease exists, will retry in 3.2s, error: Get "https://172.31.31.36:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-31-36?timeout=10s": dial tcp 172.31.31.36:6443: connect: connection refused Feb 9 19:02:01.860152 kubelet[2279]: I0209 19:02:01.860054 2279 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-31-36" Feb 9 19:02:01.860616 kubelet[2279]: E0209 19:02:01.860442 2279 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://172.31.31.36:6443/api/v1/nodes\": dial tcp 172.31.31.36:6443: connect: connection refused" node="ip-172-31-31-36" Feb 9 19:02:02.076153 kubelet[2279]: W0209 19:02:02.076106 2279 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://172.31.31.36:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-31-36&limit=500&resourceVersion=0": dial tcp 172.31.31.36:6443: connect: connection refused Feb 9 19:02:02.076153 kubelet[2279]: E0209 19:02:02.076159 2279 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to 
watch *v1.Node: failed to list *v1.Node: Get "https://172.31.31.36:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-31-36&limit=500&resourceVersion=0": dial tcp 172.31.31.36:6443: connect: connection refused Feb 9 19:02:02.098942 kubelet[2279]: W0209 19:02:02.098893 2279 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://172.31.31.36:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.31.36:6443: connect: connection refused Feb 9 19:02:02.099131 kubelet[2279]: E0209 19:02:02.098954 2279 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.31.36:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.31.36:6443: connect: connection refused Feb 9 19:02:05.064343 kubelet[2279]: I0209 19:02:05.064305 2279 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-31-36" Feb 9 19:02:05.577413 kubelet[2279]: E0209 19:02:05.577372 2279 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-31-36\" not found" node="ip-172-31-31-36" Feb 9 19:02:05.714612 kubelet[2279]: I0209 19:02:05.714561 2279 kubelet_node_status.go:73] "Successfully registered node" node="ip-172-31-31-36" Feb 9 19:02:05.741262 kubelet[2279]: E0209 19:02:05.741227 2279 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ip-172-31-31-36\" not found" Feb 9 19:02:05.841804 kubelet[2279]: E0209 19:02:05.841682 2279 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ip-172-31-31-36\" not found" Feb 9 19:02:05.942573 kubelet[2279]: E0209 19:02:05.942529 2279 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ip-172-31-31-36\" not found" Feb 9 19:02:06.043762 kubelet[2279]: E0209 19:02:06.043711 2279 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ip-172-31-31-36\" not found" Feb 9 19:02:06.144466 kubelet[2279]: E0209 19:02:06.144324 2279 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ip-172-31-31-36\" not found" Feb 9 19:02:06.245061 kubelet[2279]: E0209 19:02:06.245018 2279 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ip-172-31-31-36\" not found" Feb 9 19:02:06.345979 kubelet[2279]: E0209 19:02:06.345939 2279 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ip-172-31-31-36\" not found" Feb 9 19:02:06.447160 kubelet[2279]: E0209 19:02:06.446959 2279 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ip-172-31-31-36\" not found" Feb 9 19:02:06.719882 kubelet[2279]: I0209 19:02:06.719838 2279 apiserver.go:52] "Watching apiserver" Feb 9 19:02:06.736818 kubelet[2279]: I0209 19:02:06.736756 2279 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 9 19:02:06.805828 kubelet[2279]: I0209 19:02:06.805790 2279 reconciler.go:41] "Reconciler: start to sync state" Feb 9 19:02:08.414894 systemd[1]: Reloading. 
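Registration finally succeeds at 19:02:05, once kube-apiserver is serving; the "node not found" errors in between are the normal race between registering and the informer caches syncing. With admin credentials the result would be visible as (sketch):

    kubectl get node ip-172-31-31-36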
Feb 9 19:02:08.567791 /usr/lib/systemd/system-generators/torcx-generator[2611]: time="2024-02-09T19:02:08Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 19:02:08.569621 /usr/lib/systemd/system-generators/torcx-generator[2611]: time="2024-02-09T19:02:08Z" level=info msg="torcx already run" Feb 9 19:02:08.689050 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 19:02:08.689074 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 19:02:08.717485 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 19:02:08.882599 systemd[1]: Stopping kubelet.service... Feb 9 19:02:08.883393 kubelet[2279]: I0209 19:02:08.883093 2279 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 9 19:02:08.903849 systemd[1]: kubelet.service: Deactivated successfully. Feb 9 19:02:08.904120 systemd[1]: Stopped kubelet.service. Feb 9 19:02:08.906892 systemd[1]: Started kubelet.service. Feb 9 19:02:09.021249 kubelet[2662]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 9 19:02:09.021249 kubelet[2662]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 19:02:09.022381 kubelet[2662]: I0209 19:02:09.021792 2662 server.go:198] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 9 19:02:09.024415 kubelet[2662]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 9 19:02:09.024415 kubelet[2662]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 19:02:09.028728 kubelet[2662]: I0209 19:02:09.028691 2662 server.go:412] "Kubelet version" kubeletVersion="v1.26.5" Feb 9 19:02:09.028728 kubelet[2662]: I0209 19:02:09.028718 2662 server.go:414] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 9 19:02:09.029147 kubelet[2662]: I0209 19:02:09.029124 2662 server.go:836] "Client rotation is on, will bootstrap in background" Feb 9 19:02:09.030431 kubelet[2662]: I0209 19:02:09.030405 2662 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 9 19:02:09.031440 kubelet[2662]: I0209 19:02:09.031419 2662 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 9 19:02:09.035757 kubelet[2662]: I0209 19:02:09.035729 2662 server.go:659] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 9 19:02:09.036010 kubelet[2662]: I0209 19:02:09.035983 2662 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 9 19:02:09.036099 kubelet[2662]: I0209 19:02:09.036073 2662 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]} Feb 9 19:02:09.036229 kubelet[2662]: I0209 19:02:09.036098 2662 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Feb 9 19:02:09.036229 kubelet[2662]: I0209 19:02:09.036115 2662 container_manager_linux.go:308] "Creating device plugin manager" Feb 9 19:02:09.036229 kubelet[2662]: I0209 19:02:09.036162 2662 state_mem.go:36] "Initialized new in-memory state store" Feb 9 19:02:09.041252 kubelet[2662]: I0209 19:02:09.041223 2662 kubelet.go:398] "Attempting to sync node with API server" Feb 9 19:02:09.041252 kubelet[2662]: I0209 19:02:09.041255 2662 kubelet.go:286] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 9 19:02:09.041482 kubelet[2662]: I0209 19:02:09.041283 2662 kubelet.go:297] "Adding apiserver pod source" Feb 9 19:02:09.041482 kubelet[2662]: I0209 19:02:09.041313 2662 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 9 19:02:09.068661 sudo[2674]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Feb 9 19:02:09.069216 sudo[2674]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Feb 9 19:02:09.070177 kubelet[2662]: I0209 19:02:09.069727 2662 kuberuntime_manager.go:244] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 9 19:02:09.070637 kubelet[2662]: I0209 19:02:09.070421 2662 server.go:1186] "Started kubelet" Feb 9 19:02:09.072534 kubelet[2662]: I0209 19:02:09.072216 2662 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 9 19:02:09.073499 kubelet[2662]: I0209 19:02:09.072776 2662 server.go:161] "Starting to listen" address="0.0.0.0" port=10250 Feb 9 19:02:09.075467 kubelet[2662]: I0209 19:02:09.074126 2662 server.go:451] "Adding debug handlers to kubelet server" Feb 9 19:02:09.101444 kubelet[2662]: E0209 19:02:09.101409 2662 cri_stats_provider.go:455] "Failed to get the info of the filesystem with 
mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 9 19:02:09.101596 kubelet[2662]: E0209 19:02:09.101467 2662 kubelet.go:1386] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 9 19:02:09.109233 kubelet[2662]: I0209 19:02:09.109200 2662 volume_manager.go:293] "Starting Kubelet Volume Manager" Feb 9 19:02:09.109426 kubelet[2662]: I0209 19:02:09.109277 2662 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 9 19:02:09.218034 kubelet[2662]: I0209 19:02:09.218001 2662 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-31-36" Feb 9 19:02:09.231865 kubelet[2662]: I0209 19:02:09.225583 2662 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 9 19:02:09.231865 kubelet[2662]: I0209 19:02:09.225606 2662 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 9 19:02:09.231865 kubelet[2662]: I0209 19:02:09.225633 2662 state_mem.go:36] "Initialized new in-memory state store" Feb 9 19:02:09.231865 kubelet[2662]: I0209 19:02:09.225851 2662 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 9 19:02:09.231865 kubelet[2662]: I0209 19:02:09.225876 2662 state_mem.go:96] "Updated CPUSet assignments" assignments=map[] Feb 9 19:02:09.231865 kubelet[2662]: I0209 19:02:09.225886 2662 policy_none.go:49] "None policy: Start" Feb 9 19:02:09.239095 kubelet[2662]: I0209 19:02:09.239062 2662 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 9 19:02:09.239248 kubelet[2662]: I0209 19:02:09.239104 2662 state_mem.go:35] "Initializing new in-memory state store" Feb 9 19:02:09.239371 kubelet[2662]: I0209 19:02:09.239344 2662 state_mem.go:75] "Updated machine memory state" Feb 9 19:02:09.249228 kubelet[2662]: I0209 19:02:09.248662 2662 kubelet_node_status.go:108] "Node was previously registered" node="ip-172-31-31-36" Feb 9 19:02:09.249228 kubelet[2662]: I0209 19:02:09.248782 2662 kubelet_node_status.go:73] "Successfully registered node" node="ip-172-31-31-36" Feb 9 19:02:09.274597 kubelet[2662]: I0209 19:02:09.272716 2662 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4 Feb 9 19:02:09.286332 kubelet[2662]: I0209 19:02:09.286293 2662 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 9 19:02:09.286672 kubelet[2662]: I0209 19:02:09.286583 2662 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 9 19:02:09.335610 kubelet[2662]: I0209 19:02:09.335583 2662 kubelet_network_linux.go:63] "Initialized iptables rules." 
protocol=IPv6 Feb 9 19:02:09.335829 kubelet[2662]: I0209 19:02:09.335817 2662 status_manager.go:176] "Starting to sync pod status with apiserver" Feb 9 19:02:09.336127 kubelet[2662]: I0209 19:02:09.336114 2662 kubelet.go:2113] "Starting kubelet main sync loop" Feb 9 19:02:09.336318 kubelet[2662]: E0209 19:02:09.336300 2662 kubelet.go:2137] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Feb 9 19:02:09.437535 kubelet[2662]: I0209 19:02:09.437498 2662 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:02:09.437787 kubelet[2662]: I0209 19:02:09.437775 2662 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:02:09.437891 kubelet[2662]: I0209 19:02:09.437882 2662 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:02:09.513329 kubelet[2662]: I0209 19:02:09.513292 2662 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/235ccb8a9ea76651c27bf4649dbe9e36-k8s-certs\") pod \"kube-controller-manager-ip-172-31-31-36\" (UID: \"235ccb8a9ea76651c27bf4649dbe9e36\") " pod="kube-system/kube-controller-manager-ip-172-31-31-36" Feb 9 19:02:09.513523 kubelet[2662]: I0209 19:02:09.513370 2662 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/235ccb8a9ea76651c27bf4649dbe9e36-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-31-36\" (UID: \"235ccb8a9ea76651c27bf4649dbe9e36\") " pod="kube-system/kube-controller-manager-ip-172-31-31-36" Feb 9 19:02:09.513523 kubelet[2662]: I0209 19:02:09.513406 2662 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0080cafc1879fba515f3425e81e63996-ca-certs\") pod \"kube-apiserver-ip-172-31-31-36\" (UID: \"0080cafc1879fba515f3425e81e63996\") " pod="kube-system/kube-apiserver-ip-172-31-31-36" Feb 9 19:02:09.513523 kubelet[2662]: I0209 19:02:09.513436 2662 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0080cafc1879fba515f3425e81e63996-k8s-certs\") pod \"kube-apiserver-ip-172-31-31-36\" (UID: \"0080cafc1879fba515f3425e81e63996\") " pod="kube-system/kube-apiserver-ip-172-31-31-36" Feb 9 19:02:09.513523 kubelet[2662]: I0209 19:02:09.513466 2662 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0080cafc1879fba515f3425e81e63996-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-31-36\" (UID: \"0080cafc1879fba515f3425e81e63996\") " pod="kube-system/kube-apiserver-ip-172-31-31-36" Feb 9 19:02:09.513523 kubelet[2662]: I0209 19:02:09.513495 2662 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/235ccb8a9ea76651c27bf4649dbe9e36-ca-certs\") pod \"kube-controller-manager-ip-172-31-31-36\" (UID: \"235ccb8a9ea76651c27bf4649dbe9e36\") " pod="kube-system/kube-controller-manager-ip-172-31-31-36" Feb 9 19:02:09.513751 kubelet[2662]: I0209 19:02:09.513522 2662 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/235ccb8a9ea76651c27bf4649dbe9e36-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-31-36\" 
(UID: \"235ccb8a9ea76651c27bf4649dbe9e36\") " pod="kube-system/kube-controller-manager-ip-172-31-31-36" Feb 9 19:02:09.513751 kubelet[2662]: I0209 19:02:09.513556 2662 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/235ccb8a9ea76651c27bf4649dbe9e36-kubeconfig\") pod \"kube-controller-manager-ip-172-31-31-36\" (UID: \"235ccb8a9ea76651c27bf4649dbe9e36\") " pod="kube-system/kube-controller-manager-ip-172-31-31-36" Feb 9 19:02:09.513751 kubelet[2662]: I0209 19:02:09.513592 2662 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6f91cedcdda95569bc08ae526ad771d4-kubeconfig\") pod \"kube-scheduler-ip-172-31-31-36\" (UID: \"6f91cedcdda95569bc08ae526ad771d4\") " pod="kube-system/kube-scheduler-ip-172-31-31-36" Feb 9 19:02:09.944611 sudo[2674]: pam_unix(sudo:session): session closed for user root Feb 9 19:02:10.053345 kubelet[2662]: I0209 19:02:10.053303 2662 apiserver.go:52] "Watching apiserver" Feb 9 19:02:10.110519 kubelet[2662]: I0209 19:02:10.110482 2662 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 9 19:02:10.116508 kubelet[2662]: I0209 19:02:10.116473 2662 reconciler.go:41] "Reconciler: start to sync state" Feb 9 19:02:10.356835 kubelet[2662]: E0209 19:02:10.356810 2662 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ip-172-31-31-36\" already exists" pod="kube-system/kube-apiserver-ip-172-31-31-36" Feb 9 19:02:10.458523 kubelet[2662]: E0209 19:02:10.458485 2662 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ip-172-31-31-36\" already exists" pod="kube-system/kube-scheduler-ip-172-31-31-36" Feb 9 19:02:10.651503 kubelet[2662]: E0209 19:02:10.651332 2662 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ip-172-31-31-36\" already exists" pod="kube-system/kube-controller-manager-ip-172-31-31-36" Feb 9 19:02:11.450734 kubelet[2662]: I0209 19:02:11.450698 2662 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-31-36" podStartSLOduration=2.449816927 pod.CreationTimestamp="2024-02-09 19:02:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:02:11.080410126 +0000 UTC m=+2.168331898" watchObservedRunningTime="2024-02-09 19:02:11.449816927 +0000 UTC m=+2.537738667" Feb 9 19:02:11.451500 kubelet[2662]: I0209 19:02:11.451479 2662 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-31-36" podStartSLOduration=2.451425747 pod.CreationTimestamp="2024-02-09 19:02:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:02:11.451371479 +0000 UTC m=+2.539293222" watchObservedRunningTime="2024-02-09 19:02:11.451425747 +0000 UTC m=+2.539347492" Feb 9 19:02:11.814535 sudo[1872]: pam_unix(sudo:session): session closed for user root Feb 9 19:02:11.839000 sshd[1869]: pam_unix(sshd:session): session closed for user core Feb 9 19:02:11.842579 systemd[1]: sshd@4-172.31.31.36:22-139.178.68.195:33366.service: Deactivated successfully. Feb 9 19:02:11.843689 systemd[1]: session-5.scope: Deactivated successfully. 
Feb 9 19:02:11.843892 systemd[1]: session-5.scope: Consumed 4.032s CPU time. Feb 9 19:02:11.845453 systemd-logind[1634]: Session 5 logged out. Waiting for processes to exit. Feb 9 19:02:11.847604 systemd-logind[1634]: Removed session 5. Feb 9 19:02:13.037861 update_engine[1635]: I0209 19:02:13.037806 1635 update_attempter.cc:509] Updating boot flags... Feb 9 19:02:15.373552 kubelet[2662]: I0209 19:02:15.373517 2662 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-31-36" podStartSLOduration=6.373474068 pod.CreationTimestamp="2024-02-09 19:02:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:02:11.851806728 +0000 UTC m=+2.939728476" watchObservedRunningTime="2024-02-09 19:02:15.373474068 +0000 UTC m=+6.461395815" Feb 9 19:02:22.629161 kubelet[2662]: I0209 19:02:22.629132 2662 kuberuntime_manager.go:1114] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 9 19:02:22.630500 env[1647]: time="2024-02-09T19:02:22.630459819Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Feb 9 19:02:22.631250 kubelet[2662]: I0209 19:02:22.631228 2662 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 9 19:02:23.364757 kubelet[2662]: I0209 19:02:23.364725 2662 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:02:23.371698 systemd[1]: Created slice kubepods-besteffort-podb7c4ea88_0b81_4059_8802_d650b48dad55.slice. Feb 9 19:02:23.375413 kubelet[2662]: W0209 19:02:23.375387 2662 reflector.go:424] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:ip-172-31-31-36" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-31-36' and this object Feb 9 19:02:23.375620 kubelet[2662]: E0209 19:02:23.375607 2662 reflector.go:140] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:ip-172-31-31-36" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-31-36' and this object Feb 9 19:02:23.375862 kubelet[2662]: W0209 19:02:23.375847 2662 reflector.go:424] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ip-172-31-31-36" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-31-36' and this object Feb 9 19:02:23.375975 kubelet[2662]: E0209 19:02:23.375965 2662 reflector.go:140] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ip-172-31-31-36" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-31-36' and this object Feb 9 19:02:23.395989 kubelet[2662]: I0209 19:02:23.395935 2662 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:02:23.403245 systemd[1]: Created slice kubepods-burstable-podedc7c2b6_12e1_4a6d_a9cd_a691fd2c04fe.slice. 
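Two things happen in this stretch: the node is assigned pod CIDR 192.168.0.0/24 and pushes it to the runtime, and kube-proxy's configmap reads are rejected by the node authorizer ("no relationship found between node ... and this object") because a kubelet may only read a configmap once a pod mounting it is bound to the node; the errors clear as soon as the kube-proxy pod is scheduled. The assigned CIDR can be confirmed with (sketch):

    kubectl get node ip-172-31-31-36 -o jsonpath='{.spec.podCIDR}'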
Feb 9 19:02:23.425753 kubelet[2662]: I0209 19:02:23.425581 2662 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/edc7c2b6-12e1-4a6d-a9cd-a691fd2c04fe-cilium-cgroup\") pod \"cilium-p28cf\" (UID: \"edc7c2b6-12e1-4a6d-a9cd-a691fd2c04fe\") " pod="kube-system/cilium-p28cf" Feb 9 19:02:23.426038 kubelet[2662]: I0209 19:02:23.426022 2662 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dbslf\" (UniqueName: \"kubernetes.io/projected/b7c4ea88-0b81-4059-8802-d650b48dad55-kube-api-access-dbslf\") pod \"kube-proxy-kct7n\" (UID: \"b7c4ea88-0b81-4059-8802-d650b48dad55\") " pod="kube-system/kube-proxy-kct7n" Feb 9 19:02:23.426166 kubelet[2662]: I0209 19:02:23.426155 2662 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/edc7c2b6-12e1-4a6d-a9cd-a691fd2c04fe-clustermesh-secrets\") pod \"cilium-p28cf\" (UID: \"edc7c2b6-12e1-4a6d-a9cd-a691fd2c04fe\") " pod="kube-system/cilium-p28cf" Feb 9 19:02:23.426330 kubelet[2662]: I0209 19:02:23.426318 2662 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/edc7c2b6-12e1-4a6d-a9cd-a691fd2c04fe-cilium-config-path\") pod \"cilium-p28cf\" (UID: \"edc7c2b6-12e1-4a6d-a9cd-a691fd2c04fe\") " pod="kube-system/cilium-p28cf" Feb 9 19:02:23.426469 kubelet[2662]: I0209 19:02:23.426458 2662 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/edc7c2b6-12e1-4a6d-a9cd-a691fd2c04fe-host-proc-sys-net\") pod \"cilium-p28cf\" (UID: \"edc7c2b6-12e1-4a6d-a9cd-a691fd2c04fe\") " pod="kube-system/cilium-p28cf" Feb 9 19:02:23.426572 kubelet[2662]: I0209 19:02:23.426563 2662 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/edc7c2b6-12e1-4a6d-a9cd-a691fd2c04fe-bpf-maps\") pod \"cilium-p28cf\" (UID: \"edc7c2b6-12e1-4a6d-a9cd-a691fd2c04fe\") " pod="kube-system/cilium-p28cf" Feb 9 19:02:23.426671 kubelet[2662]: I0209 19:02:23.426662 2662 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/edc7c2b6-12e1-4a6d-a9cd-a691fd2c04fe-hostproc\") pod \"cilium-p28cf\" (UID: \"edc7c2b6-12e1-4a6d-a9cd-a691fd2c04fe\") " pod="kube-system/cilium-p28cf" Feb 9 19:02:23.426884 kubelet[2662]: I0209 19:02:23.426870 2662 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b7c4ea88-0b81-4059-8802-d650b48dad55-kube-proxy\") pod \"kube-proxy-kct7n\" (UID: \"b7c4ea88-0b81-4059-8802-d650b48dad55\") " pod="kube-system/kube-proxy-kct7n" Feb 9 19:02:23.427011 kubelet[2662]: I0209 19:02:23.426999 2662 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/edc7c2b6-12e1-4a6d-a9cd-a691fd2c04fe-host-proc-sys-kernel\") pod \"cilium-p28cf\" (UID: \"edc7c2b6-12e1-4a6d-a9cd-a691fd2c04fe\") " pod="kube-system/cilium-p28cf" Feb 9 19:02:23.427111 kubelet[2662]: I0209 19:02:23.427099 2662 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b7c4ea88-0b81-4059-8802-d650b48dad55-xtables-lock\") pod \"kube-proxy-kct7n\" (UID: \"b7c4ea88-0b81-4059-8802-d650b48dad55\") " pod="kube-system/kube-proxy-kct7n" Feb 9 19:02:23.427217 kubelet[2662]: I0209 19:02:23.427206 2662 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/edc7c2b6-12e1-4a6d-a9cd-a691fd2c04fe-etc-cni-netd\") pod \"cilium-p28cf\" (UID: \"edc7c2b6-12e1-4a6d-a9cd-a691fd2c04fe\") " pod="kube-system/cilium-p28cf" Feb 9 19:02:23.427321 kubelet[2662]: I0209 19:02:23.427311 2662 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/edc7c2b6-12e1-4a6d-a9cd-a691fd2c04fe-lib-modules\") pod \"cilium-p28cf\" (UID: \"edc7c2b6-12e1-4a6d-a9cd-a691fd2c04fe\") " pod="kube-system/cilium-p28cf" Feb 9 19:02:23.427556 kubelet[2662]: I0209 19:02:23.427544 2662 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/edc7c2b6-12e1-4a6d-a9cd-a691fd2c04fe-hubble-tls\") pod \"cilium-p28cf\" (UID: \"edc7c2b6-12e1-4a6d-a9cd-a691fd2c04fe\") " pod="kube-system/cilium-p28cf" Feb 9 19:02:23.427684 kubelet[2662]: I0209 19:02:23.427673 2662 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b7c4ea88-0b81-4059-8802-d650b48dad55-lib-modules\") pod \"kube-proxy-kct7n\" (UID: \"b7c4ea88-0b81-4059-8802-d650b48dad55\") " pod="kube-system/kube-proxy-kct7n" Feb 9 19:02:23.427784 kubelet[2662]: I0209 19:02:23.427774 2662 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/edc7c2b6-12e1-4a6d-a9cd-a691fd2c04fe-cilium-run\") pod \"cilium-p28cf\" (UID: \"edc7c2b6-12e1-4a6d-a9cd-a691fd2c04fe\") " pod="kube-system/cilium-p28cf" Feb 9 19:02:23.427881 kubelet[2662]: I0209 19:02:23.427871 2662 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/edc7c2b6-12e1-4a6d-a9cd-a691fd2c04fe-xtables-lock\") pod \"cilium-p28cf\" (UID: \"edc7c2b6-12e1-4a6d-a9cd-a691fd2c04fe\") " pod="kube-system/cilium-p28cf" Feb 9 19:02:23.427973 kubelet[2662]: I0209 19:02:23.427964 2662 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/edc7c2b6-12e1-4a6d-a9cd-a691fd2c04fe-cni-path\") pod \"cilium-p28cf\" (UID: \"edc7c2b6-12e1-4a6d-a9cd-a691fd2c04fe\") " pod="kube-system/cilium-p28cf" Feb 9 19:02:23.428073 kubelet[2662]: I0209 19:02:23.428062 2662 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shfjf\" (UniqueName: \"kubernetes.io/projected/edc7c2b6-12e1-4a6d-a9cd-a691fd2c04fe-kube-api-access-shfjf\") pod \"cilium-p28cf\" (UID: \"edc7c2b6-12e1-4a6d-a9cd-a691fd2c04fe\") " pod="kube-system/cilium-p28cf" Feb 9 19:02:23.545416 kubelet[2662]: I0209 19:02:23.542919 2662 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:02:23.549502 systemd[1]: Created slice kubepods-besteffort-podfb7e20d8_3f8c_4f65_a0ab_bd3e293632ec.slice. 
Feb 9 19:02:23.630886 kubelet[2662]: I0209 19:02:23.630776 2662 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fb7e20d8-3f8c-4f65-a0ab-bd3e293632ec-cilium-config-path\") pod \"cilium-operator-f59cbd8c6-nmqtv\" (UID: \"fb7e20d8-3f8c-4f65-a0ab-bd3e293632ec\") " pod="kube-system/cilium-operator-f59cbd8c6-nmqtv" Feb 9 19:02:23.631383 kubelet[2662]: I0209 19:02:23.631344 2662 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gjvgn\" (UniqueName: \"kubernetes.io/projected/fb7e20d8-3f8c-4f65-a0ab-bd3e293632ec-kube-api-access-gjvgn\") pod \"cilium-operator-f59cbd8c6-nmqtv\" (UID: \"fb7e20d8-3f8c-4f65-a0ab-bd3e293632ec\") " pod="kube-system/cilium-operator-f59cbd8c6-nmqtv" Feb 9 19:02:24.288540 amazon-ssm-agent[1621]: 2024-02-09 19:02:24 INFO [MessagingDeliveryService] [Association] Schedule manager refreshed with 0 associations, 0 new associations associated Feb 9 19:02:24.541973 kubelet[2662]: E0209 19:02:24.541847 2662 configmap.go:199] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition Feb 9 19:02:24.542218 kubelet[2662]: E0209 19:02:24.542117 2662 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b7c4ea88-0b81-4059-8802-d650b48dad55-kube-proxy podName:b7c4ea88-0b81-4059-8802-d650b48dad55 nodeName:}" failed. No retries permitted until 2024-02-09 19:02:25.041939294 +0000 UTC m=+16.129861043 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/b7c4ea88-0b81-4059-8802-d650b48dad55-kube-proxy") pod "kube-proxy-kct7n" (UID: "b7c4ea88-0b81-4059-8802-d650b48dad55") : failed to sync configmap cache: timed out waiting for the condition Feb 9 19:02:24.610738 env[1647]: time="2024-02-09T19:02:24.610689104Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-p28cf,Uid:edc7c2b6-12e1-4a6d-a9cd-a691fd2c04fe,Namespace:kube-system,Attempt:0,}" Feb 9 19:02:24.633543 env[1647]: time="2024-02-09T19:02:24.633295353Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:02:24.633543 env[1647]: time="2024-02-09T19:02:24.633391371Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:02:24.633543 env[1647]: time="2024-02-09T19:02:24.633409236Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:02:24.633996 env[1647]: time="2024-02-09T19:02:24.633914858Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/78b51809e3d2e4a147c89eac1a10801e81f7195d34c9c4b2b32e9e68676ba28a pid=3032 runtime=io.containerd.runc.v2 Feb 9 19:02:24.666996 systemd[1]: Started cri-containerd-78b51809e3d2e4a147c89eac1a10801e81f7195d34c9c4b2b32e9e68676ba28a.scope. 
Feb 9 19:02:24.694119 env[1647]: time="2024-02-09T19:02:24.694080802Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-p28cf,Uid:edc7c2b6-12e1-4a6d-a9cd-a691fd2c04fe,Namespace:kube-system,Attempt:0,} returns sandbox id \"78b51809e3d2e4a147c89eac1a10801e81f7195d34c9c4b2b32e9e68676ba28a\"" Feb 9 19:02:24.698732 env[1647]: time="2024-02-09T19:02:24.698701533Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Feb 9 19:02:24.755554 env[1647]: time="2024-02-09T19:02:24.755508433Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-f59cbd8c6-nmqtv,Uid:fb7e20d8-3f8c-4f65-a0ab-bd3e293632ec,Namespace:kube-system,Attempt:0,}" Feb 9 19:02:24.783862 env[1647]: time="2024-02-09T19:02:24.783774574Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:02:24.783862 env[1647]: time="2024-02-09T19:02:24.783817071Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:02:24.784180 env[1647]: time="2024-02-09T19:02:24.783834326Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:02:24.784180 env[1647]: time="2024-02-09T19:02:24.784046207Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/56c44e9f9b8f76f9f6e0c1414753147d3ace259e52085fa5340984947564a6aa pid=3073 runtime=io.containerd.runc.v2 Feb 9 19:02:24.807693 systemd[1]: Started cri-containerd-56c44e9f9b8f76f9f6e0c1414753147d3ace259e52085fa5340984947564a6aa.scope. Feb 9 19:02:24.862417 env[1647]: time="2024-02-09T19:02:24.862343043Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-f59cbd8c6-nmqtv,Uid:fb7e20d8-3f8c-4f65-a0ab-bd3e293632ec,Namespace:kube-system,Attempt:0,} returns sandbox id \"56c44e9f9b8f76f9f6e0c1414753147d3ace259e52085fa5340984947564a6aa\"" Feb 9 19:02:25.181619 env[1647]: time="2024-02-09T19:02:25.181499264Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-kct7n,Uid:b7c4ea88-0b81-4059-8802-d650b48dad55,Namespace:kube-system,Attempt:0,}" Feb 9 19:02:25.210269 env[1647]: time="2024-02-09T19:02:25.210184760Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:02:25.210504 env[1647]: time="2024-02-09T19:02:25.210239327Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:02:25.210504 env[1647]: time="2024-02-09T19:02:25.210254791Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:02:25.210710 env[1647]: time="2024-02-09T19:02:25.210478864Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/1219a4d04f7dbf218e7a864d77bbe91f58931264ae6127cf1da5763fe70f1489 pid=3113 runtime=io.containerd.runc.v2 Feb 9 19:02:25.228790 systemd[1]: Started cri-containerd-1219a4d04f7dbf218e7a864d77bbe91f58931264ae6127cf1da5763fe70f1489.scope. 
Feb 9 19:02:25.276752 env[1647]: time="2024-02-09T19:02:25.276711070Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-kct7n,Uid:b7c4ea88-0b81-4059-8802-d650b48dad55,Namespace:kube-system,Attempt:0,} returns sandbox id \"1219a4d04f7dbf218e7a864d77bbe91f58931264ae6127cf1da5763fe70f1489\"" Feb 9 19:02:25.282231 env[1647]: time="2024-02-09T19:02:25.282180478Z" level=info msg="CreateContainer within sandbox \"1219a4d04f7dbf218e7a864d77bbe91f58931264ae6127cf1da5763fe70f1489\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 9 19:02:25.318757 env[1647]: time="2024-02-09T19:02:25.318711914Z" level=info msg="CreateContainer within sandbox \"1219a4d04f7dbf218e7a864d77bbe91f58931264ae6127cf1da5763fe70f1489\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"9d25cc652b4ae8dcb9f499b737a4add3ee355028b82c1504e6e3b45d8ac48e29\"" Feb 9 19:02:25.321737 env[1647]: time="2024-02-09T19:02:25.321698140Z" level=info msg="StartContainer for \"9d25cc652b4ae8dcb9f499b737a4add3ee355028b82c1504e6e3b45d8ac48e29\"" Feb 9 19:02:25.355181 systemd[1]: Started cri-containerd-9d25cc652b4ae8dcb9f499b737a4add3ee355028b82c1504e6e3b45d8ac48e29.scope. Feb 9 19:02:25.404066 env[1647]: time="2024-02-09T19:02:25.404018913Z" level=info msg="StartContainer for \"9d25cc652b4ae8dcb9f499b737a4add3ee355028b82c1504e6e3b45d8ac48e29\" returns successfully" Feb 9 19:02:26.464391 kubelet[2662]: I0209 19:02:26.463791 2662 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-kct7n" podStartSLOduration=3.463729936 pod.CreationTimestamp="2024-02-09 19:02:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:02:26.463397387 +0000 UTC m=+17.551319132" watchObservedRunningTime="2024-02-09 19:02:26.463729936 +0000 UTC m=+17.551651683" Feb 9 19:02:31.873062 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2337315845.mount: Deactivated successfully. 
Feb 9 19:02:37.723871 env[1647]: time="2024-02-09T19:02:37.723821557Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:02:37.788635 env[1647]: time="2024-02-09T19:02:37.788581773Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:02:37.796868 env[1647]: time="2024-02-09T19:02:37.796410482Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:02:37.797188 env[1647]: time="2024-02-09T19:02:37.797149148Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Feb 9 19:02:37.800705 env[1647]: time="2024-02-09T19:02:37.798629517Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Feb 9 19:02:37.800705 env[1647]: time="2024-02-09T19:02:37.800313186Z" level=info msg="CreateContainer within sandbox \"78b51809e3d2e4a147c89eac1a10801e81f7195d34c9c4b2b32e9e68676ba28a\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 9 19:02:38.178108 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3090524603.mount: Deactivated successfully. Feb 9 19:02:38.376918 env[1647]: time="2024-02-09T19:02:38.376858316Z" level=info msg="CreateContainer within sandbox \"78b51809e3d2e4a147c89eac1a10801e81f7195d34c9c4b2b32e9e68676ba28a\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"d33b674dac2deafc141f8c31ec41c8fb124977d812f16dcbac8a7d903e741fbc\"" Feb 9 19:02:38.381884 env[1647]: time="2024-02-09T19:02:38.377442055Z" level=info msg="StartContainer for \"d33b674dac2deafc141f8c31ec41c8fb124977d812f16dcbac8a7d903e741fbc\"" Feb 9 19:02:38.455126 systemd[1]: Started cri-containerd-d33b674dac2deafc141f8c31ec41c8fb124977d812f16dcbac8a7d903e741fbc.scope. Feb 9 19:02:38.515714 systemd[1]: cri-containerd-d33b674dac2deafc141f8c31ec41c8fb124977d812f16dcbac8a7d903e741fbc.scope: Deactivated successfully. 
Feb 9 19:02:38.557519 env[1647]: time="2024-02-09T19:02:38.557459122Z" level=info msg="StartContainer for \"d33b674dac2deafc141f8c31ec41c8fb124977d812f16dcbac8a7d903e741fbc\" returns successfully" Feb 9 19:02:39.126637 env[1647]: time="2024-02-09T19:02:39.126528035Z" level=info msg="shim disconnected" id=d33b674dac2deafc141f8c31ec41c8fb124977d812f16dcbac8a7d903e741fbc Feb 9 19:02:39.126637 env[1647]: time="2024-02-09T19:02:39.126626508Z" level=warning msg="cleaning up after shim disconnected" id=d33b674dac2deafc141f8c31ec41c8fb124977d812f16dcbac8a7d903e741fbc namespace=k8s.io Feb 9 19:02:39.126637 env[1647]: time="2024-02-09T19:02:39.126643882Z" level=info msg="cleaning up dead shim" Feb 9 19:02:39.152567 env[1647]: time="2024-02-09T19:02:39.152426843Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:02:39Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3340 runtime=io.containerd.runc.v2\n" Feb 9 19:02:39.174039 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d33b674dac2deafc141f8c31ec41c8fb124977d812f16dcbac8a7d903e741fbc-rootfs.mount: Deactivated successfully. Feb 9 19:02:39.508385 env[1647]: time="2024-02-09T19:02:39.505096482Z" level=info msg="CreateContainer within sandbox \"78b51809e3d2e4a147c89eac1a10801e81f7195d34c9c4b2b32e9e68676ba28a\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 9 19:02:39.549320 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3942299063.mount: Deactivated successfully. Feb 9 19:02:39.561304 env[1647]: time="2024-02-09T19:02:39.561253166Z" level=info msg="CreateContainer within sandbox \"78b51809e3d2e4a147c89eac1a10801e81f7195d34c9c4b2b32e9e68676ba28a\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"2c80512e3d8b7fadffd7bf4d071f1ec03e5046f652870dec3929d24ea7542e61\"" Feb 9 19:02:39.563982 env[1647]: time="2024-02-09T19:02:39.562056045Z" level=info msg="StartContainer for \"2c80512e3d8b7fadffd7bf4d071f1ec03e5046f652870dec3929d24ea7542e61\"" Feb 9 19:02:39.589525 systemd[1]: Started cri-containerd-2c80512e3d8b7fadffd7bf4d071f1ec03e5046f652870dec3929d24ea7542e61.scope. Feb 9 19:02:39.640094 env[1647]: time="2024-02-09T19:02:39.639995953Z" level=info msg="StartContainer for \"2c80512e3d8b7fadffd7bf4d071f1ec03e5046f652870dec3929d24ea7542e61\" returns successfully" Feb 9 19:02:39.655820 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 9 19:02:39.657381 systemd[1]: Stopped systemd-sysctl.service. Feb 9 19:02:39.657920 systemd[1]: Stopping systemd-sysctl.service... Feb 9 19:02:39.666279 systemd[1]: Starting systemd-sysctl.service... Feb 9 19:02:39.667849 systemd[1]: cri-containerd-2c80512e3d8b7fadffd7bf4d071f1ec03e5046f652870dec3929d24ea7542e61.scope: Deactivated successfully. Feb 9 19:02:39.702241 systemd[1]: Finished systemd-sysctl.service. 
Feb 9 19:02:39.723613 env[1647]: time="2024-02-09T19:02:39.723555803Z" level=info msg="shim disconnected" id=2c80512e3d8b7fadffd7bf4d071f1ec03e5046f652870dec3929d24ea7542e61 Feb 9 19:02:39.723613 env[1647]: time="2024-02-09T19:02:39.723611555Z" level=warning msg="cleaning up after shim disconnected" id=2c80512e3d8b7fadffd7bf4d071f1ec03e5046f652870dec3929d24ea7542e61 namespace=k8s.io Feb 9 19:02:39.725075 env[1647]: time="2024-02-09T19:02:39.723624371Z" level=info msg="cleaning up dead shim" Feb 9 19:02:39.743976 env[1647]: time="2024-02-09T19:02:39.743919991Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:02:39Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3405 runtime=io.containerd.runc.v2\n" Feb 9 19:02:40.184335 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2c80512e3d8b7fadffd7bf4d071f1ec03e5046f652870dec3929d24ea7542e61-rootfs.mount: Deactivated successfully. Feb 9 19:02:40.309745 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3003378235.mount: Deactivated successfully. Feb 9 19:02:40.521702 env[1647]: time="2024-02-09T19:02:40.521647789Z" level=info msg="CreateContainer within sandbox \"78b51809e3d2e4a147c89eac1a10801e81f7195d34c9c4b2b32e9e68676ba28a\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 9 19:02:40.609076 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1233507036.mount: Deactivated successfully. Feb 9 19:02:40.665919 env[1647]: time="2024-02-09T19:02:40.665857753Z" level=info msg="CreateContainer within sandbox \"78b51809e3d2e4a147c89eac1a10801e81f7195d34c9c4b2b32e9e68676ba28a\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"ded5b40b05e08b4b85eb9a7e5126b1beb0300a9ae67edb5578b7fcbbb8cd7eea\"" Feb 9 19:02:40.682217 env[1647]: time="2024-02-09T19:02:40.682165070Z" level=info msg="StartContainer for \"ded5b40b05e08b4b85eb9a7e5126b1beb0300a9ae67edb5578b7fcbbb8cd7eea\"" Feb 9 19:02:40.732506 systemd[1]: Started cri-containerd-ded5b40b05e08b4b85eb9a7e5126b1beb0300a9ae67edb5578b7fcbbb8cd7eea.scope. Feb 9 19:02:40.834726 systemd[1]: cri-containerd-ded5b40b05e08b4b85eb9a7e5126b1beb0300a9ae67edb5578b7fcbbb8cd7eea.scope: Deactivated successfully. Feb 9 19:02:41.039522 env[1647]: time="2024-02-09T19:02:41.039457817Z" level=info msg="StartContainer for \"ded5b40b05e08b4b85eb9a7e5126b1beb0300a9ae67edb5578b7fcbbb8cd7eea\" returns successfully" Feb 9 19:02:41.127762 env[1647]: time="2024-02-09T19:02:41.127093931Z" level=info msg="shim disconnected" id=ded5b40b05e08b4b85eb9a7e5126b1beb0300a9ae67edb5578b7fcbbb8cd7eea Feb 9 19:02:41.127762 env[1647]: time="2024-02-09T19:02:41.127205589Z" level=warning msg="cleaning up after shim disconnected" id=ded5b40b05e08b4b85eb9a7e5126b1beb0300a9ae67edb5578b7fcbbb8cd7eea namespace=k8s.io Feb 9 19:02:41.127762 env[1647]: time="2024-02-09T19:02:41.127222130Z" level=info msg="cleaning up dead shim" Feb 9 19:02:41.140000 env[1647]: time="2024-02-09T19:02:41.139938455Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:02:41Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3462 runtime=io.containerd.runc.v2\n" Feb 9 19:02:41.533927 env[1647]: time="2024-02-09T19:02:41.533876174Z" level=info msg="CreateContainer within sandbox \"78b51809e3d2e4a147c89eac1a10801e81f7195d34c9c4b2b32e9e68676ba28a\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 9 19:02:41.577514 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2504474021.mount: Deactivated successfully. 
Feb 9 19:02:41.596970 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount702984130.mount: Deactivated successfully. Feb 9 19:02:41.633633 env[1647]: time="2024-02-09T19:02:41.633216052Z" level=info msg="CreateContainer within sandbox \"78b51809e3d2e4a147c89eac1a10801e81f7195d34c9c4b2b32e9e68676ba28a\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"1dcc2c6d955d4594cd6ece2c85d50f499c3c136fa3ab508a2b47884f48207642\"" Feb 9 19:02:41.645286 env[1647]: time="2024-02-09T19:02:41.645240273Z" level=info msg="StartContainer for \"1dcc2c6d955d4594cd6ece2c85d50f499c3c136fa3ab508a2b47884f48207642\"" Feb 9 19:02:41.688429 systemd[1]: Started cri-containerd-1dcc2c6d955d4594cd6ece2c85d50f499c3c136fa3ab508a2b47884f48207642.scope. Feb 9 19:02:41.750369 systemd[1]: cri-containerd-1dcc2c6d955d4594cd6ece2c85d50f499c3c136fa3ab508a2b47884f48207642.scope: Deactivated successfully. Feb 9 19:02:41.754264 env[1647]: time="2024-02-09T19:02:41.754192437Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podedc7c2b6_12e1_4a6d_a9cd_a691fd2c04fe.slice/cri-containerd-1dcc2c6d955d4594cd6ece2c85d50f499c3c136fa3ab508a2b47884f48207642.scope/memory.events\": no such file or directory" Feb 9 19:02:41.757969 env[1647]: time="2024-02-09T19:02:41.757920715Z" level=info msg="StartContainer for \"1dcc2c6d955d4594cd6ece2c85d50f499c3c136fa3ab508a2b47884f48207642\" returns successfully" Feb 9 19:02:43.436894 env[1647]: time="2024-02-09T19:02:43.436827110Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:02:43.441540 env[1647]: time="2024-02-09T19:02:43.441468903Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:02:43.445883 env[1647]: time="2024-02-09T19:02:43.444709071Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:02:43.445883 env[1647]: time="2024-02-09T19:02:43.445631960Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Feb 9 19:02:43.449673 env[1647]: time="2024-02-09T19:02:43.449511413Z" level=info msg="CreateContainer within sandbox \"56c44e9f9b8f76f9f6e0c1414753147d3ace259e52085fa5340984947564a6aa\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Feb 9 19:02:43.520116 env[1647]: time="2024-02-09T19:02:43.520059326Z" level=info msg="shim disconnected" id=1dcc2c6d955d4594cd6ece2c85d50f499c3c136fa3ab508a2b47884f48207642 Feb 9 19:02:43.520116 env[1647]: time="2024-02-09T19:02:43.520117584Z" level=warning msg="cleaning up after shim disconnected" id=1dcc2c6d955d4594cd6ece2c85d50f499c3c136fa3ab508a2b47884f48207642 namespace=k8s.io Feb 9 19:02:43.520579 env[1647]: time="2024-02-09T19:02:43.520130501Z" level=info msg="cleaning up dead shim" Feb 9 19:02:43.530173 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount2865875260.mount: Deactivated successfully. Feb 9 19:02:43.542535 env[1647]: time="2024-02-09T19:02:43.542485251Z" level=info msg="CreateContainer within sandbox \"56c44e9f9b8f76f9f6e0c1414753147d3ace259e52085fa5340984947564a6aa\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"a00368b6cc56c31904c273a4f86ac10f4df5e5345250fefe271cfd0efe81be14\"" Feb 9 19:02:43.547171 env[1647]: time="2024-02-09T19:02:43.547128876Z" level=info msg="StartContainer for \"a00368b6cc56c31904c273a4f86ac10f4df5e5345250fefe271cfd0efe81be14\"" Feb 9 19:02:43.548727 env[1647]: time="2024-02-09T19:02:43.548681411Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:02:43Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3522 runtime=io.containerd.runc.v2\n" Feb 9 19:02:43.593557 systemd[1]: Started cri-containerd-a00368b6cc56c31904c273a4f86ac10f4df5e5345250fefe271cfd0efe81be14.scope. Feb 9 19:02:43.660478 env[1647]: time="2024-02-09T19:02:43.660390017Z" level=info msg="StartContainer for \"a00368b6cc56c31904c273a4f86ac10f4df5e5345250fefe271cfd0efe81be14\" returns successfully" Feb 9 19:02:44.534892 env[1647]: time="2024-02-09T19:02:44.534804477Z" level=info msg="CreateContainer within sandbox \"78b51809e3d2e4a147c89eac1a10801e81f7195d34c9c4b2b32e9e68676ba28a\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 9 19:02:44.610765 env[1647]: time="2024-02-09T19:02:44.610708148Z" level=info msg="CreateContainer within sandbox \"78b51809e3d2e4a147c89eac1a10801e81f7195d34c9c4b2b32e9e68676ba28a\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"6184820d6c0d012855bb21a8de450ae817822dc2970098456db1fef3ebdd32de\"" Feb 9 19:02:44.611911 env[1647]: time="2024-02-09T19:02:44.611645780Z" level=info msg="StartContainer for \"6184820d6c0d012855bb21a8de450ae817822dc2970098456db1fef3ebdd32de\"" Feb 9 19:02:44.688728 systemd[1]: Started cri-containerd-6184820d6c0d012855bb21a8de450ae817822dc2970098456db1fef3ebdd32de.scope. Feb 9 19:02:44.694117 systemd[1]: run-containerd-runc-k8s.io-6184820d6c0d012855bb21a8de450ae817822dc2970098456db1fef3ebdd32de-runc.NMqgpr.mount: Deactivated successfully. Feb 9 19:02:44.760622 env[1647]: time="2024-02-09T19:02:44.760574410Z" level=info msg="StartContainer for \"6184820d6c0d012855bb21a8de450ae817822dc2970098456db1fef3ebdd32de\" returns successfully" Feb 9 19:02:44.989595 kubelet[2662]: I0209 19:02:44.988340 2662 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-f59cbd8c6-nmqtv" podStartSLOduration=-9.223372014866627e+09 pod.CreationTimestamp="2024-02-09 19:02:23 +0000 UTC" firstStartedPulling="2024-02-09 19:02:24.864127765 +0000 UTC m=+15.952049489" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:02:44.69288632 +0000 UTC m=+35.780808068" watchObservedRunningTime="2024-02-09 19:02:44.988149246 +0000 UTC m=+36.076070993" Feb 9 19:02:45.122987 kubelet[2662]: I0209 19:02:45.122960 2662 kubelet_node_status.go:493] "Fast updating node status as it just became ready" Feb 9 19:02:45.370249 kubelet[2662]: I0209 19:02:45.370132 2662 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:02:45.379603 systemd[1]: Created slice kubepods-burstable-pod65860731_9f69_4a23_afce_1c0a77839e67.slice. 
Feb 9 19:02:45.387257 kubelet[2662]: I0209 19:02:45.387221 2662 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:02:45.401629 systemd[1]: Created slice kubepods-burstable-podc5951592_3c9b_43af_ae8e_e933bee2f4a3.slice. Feb 9 19:02:45.425200 kubelet[2662]: W0209 19:02:45.425167 2662 reflector.go:424] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:ip-172-31-31-36" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-31-36' and this object Feb 9 19:02:45.425200 kubelet[2662]: E0209 19:02:45.425208 2662 reflector.go:140] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:ip-172-31-31-36" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-31-36' and this object Feb 9 19:02:45.473283 kubelet[2662]: I0209 19:02:45.473247 2662 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/65860731-9f69-4a23-afce-1c0a77839e67-config-volume\") pod \"coredns-787d4945fb-bwx97\" (UID: \"65860731-9f69-4a23-afce-1c0a77839e67\") " pod="kube-system/coredns-787d4945fb-bwx97" Feb 9 19:02:45.474586 kubelet[2662]: I0209 19:02:45.473309 2662 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c5951592-3c9b-43af-ae8e-e933bee2f4a3-config-volume\") pod \"coredns-787d4945fb-gq7c9\" (UID: \"c5951592-3c9b-43af-ae8e-e933bee2f4a3\") " pod="kube-system/coredns-787d4945fb-gq7c9" Feb 9 19:02:45.474586 kubelet[2662]: I0209 19:02:45.473342 2662 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rdlhv\" (UniqueName: \"kubernetes.io/projected/c5951592-3c9b-43af-ae8e-e933bee2f4a3-kube-api-access-rdlhv\") pod \"coredns-787d4945fb-gq7c9\" (UID: \"c5951592-3c9b-43af-ae8e-e933bee2f4a3\") " pod="kube-system/coredns-787d4945fb-gq7c9" Feb 9 19:02:45.474586 kubelet[2662]: I0209 19:02:45.473386 2662 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xmxpb\" (UniqueName: \"kubernetes.io/projected/65860731-9f69-4a23-afce-1c0a77839e67-kube-api-access-xmxpb\") pod \"coredns-787d4945fb-bwx97\" (UID: \"65860731-9f69-4a23-afce-1c0a77839e67\") " pod="kube-system/coredns-787d4945fb-bwx97" Feb 9 19:02:45.905604 kubelet[2662]: I0209 19:02:45.905565 2662 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-p28cf" podStartSLOduration=-9.223372013949266e+09 pod.CreationTimestamp="2024-02-09 19:02:23 +0000 UTC" firstStartedPulling="2024-02-09 19:02:24.69627225 +0000 UTC m=+15.784193980" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:02:45.90340947 +0000 UTC m=+36.991331216" watchObservedRunningTime="2024-02-09 19:02:45.905509141 +0000 UTC m=+36.993430888" Feb 9 19:02:46.584886 env[1647]: time="2024-02-09T19:02:46.584818823Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-bwx97,Uid:65860731-9f69-4a23-afce-1c0a77839e67,Namespace:kube-system,Attempt:0,}" Feb 9 19:02:46.755222 env[1647]: time="2024-02-09T19:02:46.754711409Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-787d4945fb-gq7c9,Uid:c5951592-3c9b-43af-ae8e-e933bee2f4a3,Namespace:kube-system,Attempt:0,}" Feb 9 19:02:48.779251 (udev-worker)[3659]: Network interface NamePolicy= disabled on kernel command line. Feb 9 19:02:48.781355 (udev-worker)[3721]: Network interface NamePolicy= disabled on kernel command line. Feb 9 19:02:48.785503 systemd-networkd[1455]: cilium_host: Link UP Feb 9 19:02:48.787032 systemd-networkd[1455]: cilium_net: Link UP Feb 9 19:02:48.787994 systemd-networkd[1455]: cilium_net: Gained carrier Feb 9 19:02:48.789954 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready Feb 9 19:02:48.790054 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Feb 9 19:02:48.790404 systemd-networkd[1455]: cilium_host: Gained carrier Feb 9 19:02:48.992133 (udev-worker)[3732]: Network interface NamePolicy= disabled on kernel command line. Feb 9 19:02:49.005744 systemd-networkd[1455]: cilium_vxlan: Link UP Feb 9 19:02:49.005754 systemd-networkd[1455]: cilium_vxlan: Gained carrier Feb 9 19:02:49.016572 systemd-networkd[1455]: cilium_host: Gained IPv6LL Feb 9 19:02:49.136640 systemd-networkd[1455]: cilium_net: Gained IPv6LL Feb 9 19:02:49.782431 kernel: NET: Registered PF_ALG protocol family Feb 9 19:02:50.256580 systemd-networkd[1455]: cilium_vxlan: Gained IPv6LL Feb 9 19:02:50.802322 systemd-networkd[1455]: lxc_health: Link UP Feb 9 19:02:50.818247 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Feb 9 19:02:50.817156 systemd-networkd[1455]: lxc_health: Gained carrier Feb 9 19:02:51.406486 systemd-networkd[1455]: lxc9568d430203d: Link UP Feb 9 19:02:51.414624 (udev-worker)[3731]: Network interface NamePolicy= disabled on kernel command line. Feb 9 19:02:51.422826 kernel: eth0: renamed from tmpe1046 Feb 9 19:02:51.439437 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc9568d430203d: link becomes ready Feb 9 19:02:51.437526 systemd-networkd[1455]: lxcec7b1bafa122: Link UP Feb 9 19:02:51.446601 kernel: eth0: renamed from tmp033f8 Feb 9 19:02:51.444806 systemd-networkd[1455]: lxc9568d430203d: Gained carrier Feb 9 19:02:51.456426 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcec7b1bafa122: link becomes ready Feb 9 19:02:51.456691 systemd-networkd[1455]: lxcec7b1bafa122: Gained carrier Feb 9 19:02:52.487559 systemd-networkd[1455]: lxc_health: Gained IPv6LL Feb 9 19:02:53.136547 systemd-networkd[1455]: lxcec7b1bafa122: Gained IPv6LL Feb 9 19:02:53.136958 systemd-networkd[1455]: lxc9568d430203d: Gained IPv6LL Feb 9 19:02:57.869949 env[1647]: time="2024-02-09T19:02:57.869651438Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:02:57.869949 env[1647]: time="2024-02-09T19:02:57.869781818Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:02:57.869949 env[1647]: time="2024-02-09T19:02:57.869815042Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:02:57.870572 env[1647]: time="2024-02-09T19:02:57.870058680Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/033f81a233fecc91355b3cf63cfbba982e5bb48b589e41d2c2ddad32ad03add0 pid=4102 runtime=io.containerd.runc.v2 Feb 9 19:02:57.906687 systemd[1]: Started cri-containerd-033f81a233fecc91355b3cf63cfbba982e5bb48b589e41d2c2ddad32ad03add0.scope. 
Feb 9 19:02:57.923371 env[1647]: time="2024-02-09T19:02:57.923193674Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:02:57.923371 env[1647]: time="2024-02-09T19:02:57.923321258Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:02:57.923735 env[1647]: time="2024-02-09T19:02:57.923372434Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:02:57.924631 env[1647]: time="2024-02-09T19:02:57.923723560Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e1046f418b8bf7cfaa086937b7bf6fa3c17260e8a54404214bfdf58c1a39b916 pid=4132 runtime=io.containerd.runc.v2 Feb 9 19:02:57.972500 systemd[1]: Started cri-containerd-e1046f418b8bf7cfaa086937b7bf6fa3c17260e8a54404214bfdf58c1a39b916.scope. Feb 9 19:02:57.990979 systemd[1]: run-containerd-runc-k8s.io-e1046f418b8bf7cfaa086937b7bf6fa3c17260e8a54404214bfdf58c1a39b916-runc.o5oW0D.mount: Deactivated successfully. Feb 9 19:02:58.117436 env[1647]: time="2024-02-09T19:02:58.117389917Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-bwx97,Uid:65860731-9f69-4a23-afce-1c0a77839e67,Namespace:kube-system,Attempt:0,} returns sandbox id \"033f81a233fecc91355b3cf63cfbba982e5bb48b589e41d2c2ddad32ad03add0\"" Feb 9 19:02:58.128891 env[1647]: time="2024-02-09T19:02:58.127608276Z" level=info msg="CreateContainer within sandbox \"033f81a233fecc91355b3cf63cfbba982e5bb48b589e41d2c2ddad32ad03add0\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 9 19:02:58.183939 env[1647]: time="2024-02-09T19:02:58.183894287Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-gq7c9,Uid:c5951592-3c9b-43af-ae8e-e933bee2f4a3,Namespace:kube-system,Attempt:0,} returns sandbox id \"e1046f418b8bf7cfaa086937b7bf6fa3c17260e8a54404214bfdf58c1a39b916\"" Feb 9 19:02:58.189924 env[1647]: time="2024-02-09T19:02:58.186875925Z" level=info msg="CreateContainer within sandbox \"033f81a233fecc91355b3cf63cfbba982e5bb48b589e41d2c2ddad32ad03add0\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4149023bb6aae2105ddc538efaac9258a30bcdf98e69b561c1fc4334e0572a15\"" Feb 9 19:02:58.194612 env[1647]: time="2024-02-09T19:02:58.194274862Z" level=info msg="StartContainer for \"4149023bb6aae2105ddc538efaac9258a30bcdf98e69b561c1fc4334e0572a15\"" Feb 9 19:02:58.204866 env[1647]: time="2024-02-09T19:02:58.204815403Z" level=info msg="CreateContainer within sandbox \"e1046f418b8bf7cfaa086937b7bf6fa3c17260e8a54404214bfdf58c1a39b916\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 9 19:02:58.231811 env[1647]: time="2024-02-09T19:02:58.231764739Z" level=info msg="CreateContainer within sandbox \"e1046f418b8bf7cfaa086937b7bf6fa3c17260e8a54404214bfdf58c1a39b916\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"aa2538f44dabbc33fa9172ee3cd6b950fb9b5d11aee48d292d4f63462a93f1bb\"" Feb 9 19:02:58.232745 systemd[1]: Started cri-containerd-4149023bb6aae2105ddc538efaac9258a30bcdf98e69b561c1fc4334e0572a15.scope. 
Feb 9 19:02:58.240143 env[1647]: time="2024-02-09T19:02:58.240101834Z" level=info msg="StartContainer for \"aa2538f44dabbc33fa9172ee3cd6b950fb9b5d11aee48d292d4f63462a93f1bb\"" Feb 9 19:02:58.277675 systemd[1]: Started cri-containerd-aa2538f44dabbc33fa9172ee3cd6b950fb9b5d11aee48d292d4f63462a93f1bb.scope. Feb 9 19:02:58.330201 env[1647]: time="2024-02-09T19:02:58.330145582Z" level=info msg="StartContainer for \"4149023bb6aae2105ddc538efaac9258a30bcdf98e69b561c1fc4334e0572a15\" returns successfully" Feb 9 19:02:58.360635 env[1647]: time="2024-02-09T19:02:58.360583529Z" level=info msg="StartContainer for \"aa2538f44dabbc33fa9172ee3cd6b950fb9b5d11aee48d292d4f63462a93f1bb\" returns successfully" Feb 9 19:02:58.591213 kubelet[2662]: I0209 19:02:58.591076 2662 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-787d4945fb-bwx97" podStartSLOduration=35.591013461 pod.CreationTimestamp="2024-02-09 19:02:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:02:58.587495607 +0000 UTC m=+49.675417353" watchObservedRunningTime="2024-02-09 19:02:58.591013461 +0000 UTC m=+49.678935208" Feb 9 19:02:59.599447 kubelet[2662]: I0209 19:02:59.599415 2662 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-787d4945fb-gq7c9" podStartSLOduration=36.599365865 pod.CreationTimestamp="2024-02-09 19:02:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:02:58.625189889 +0000 UTC m=+49.713111639" watchObservedRunningTime="2024-02-09 19:02:59.599365865 +0000 UTC m=+50.687287608" Feb 9 19:03:08.370196 systemd[1]: Started sshd@5-172.31.31.36:22-139.178.68.195:57974.service. Feb 9 19:03:08.565423 sshd[4334]: Accepted publickey for core from 139.178.68.195 port 57974 ssh2: RSA SHA256:kZCGRB9AT+jVFxeaX4/tO2T0hB3bd3sNSBeK3Rz6bcg Feb 9 19:03:08.569570 sshd[4334]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:03:08.577167 systemd[1]: Started session-6.scope. Feb 9 19:03:08.578403 systemd-logind[1634]: New session 6 of user core. Feb 9 19:03:08.887722 sshd[4334]: pam_unix(sshd:session): session closed for user core Feb 9 19:03:08.894325 systemd[1]: sshd@5-172.31.31.36:22-139.178.68.195:57974.service: Deactivated successfully. Feb 9 19:03:08.895490 systemd[1]: session-6.scope: Deactivated successfully. Feb 9 19:03:08.896584 systemd-logind[1634]: Session 6 logged out. Waiting for processes to exit. Feb 9 19:03:08.898164 systemd-logind[1634]: Removed session 6. Feb 9 19:03:13.917387 systemd[1]: Started sshd@6-172.31.31.36:22-139.178.68.195:57984.service. Feb 9 19:03:14.108609 sshd[4349]: Accepted publickey for core from 139.178.68.195 port 57984 ssh2: RSA SHA256:kZCGRB9AT+jVFxeaX4/tO2T0hB3bd3sNSBeK3Rz6bcg Feb 9 19:03:14.110765 sshd[4349]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:03:14.118099 systemd[1]: Started session-7.scope. Feb 9 19:03:14.119132 systemd-logind[1634]: New session 7 of user core. Feb 9 19:03:14.387299 sshd[4349]: pam_unix(sshd:session): session closed for user core Feb 9 19:03:14.391747 systemd[1]: sshd@6-172.31.31.36:22-139.178.68.195:57984.service: Deactivated successfully. Feb 9 19:03:14.392922 systemd[1]: session-7.scope: Deactivated successfully. Feb 9 19:03:14.394769 systemd-logind[1634]: Session 7 logged out. Waiting for processes to exit. 
Feb 9 19:03:14.398999 systemd-logind[1634]: Removed session 7. Feb 9 19:03:19.415736 systemd[1]: Started sshd@7-172.31.31.36:22-139.178.68.195:43730.service. Feb 9 19:03:19.587750 sshd[4363]: Accepted publickey for core from 139.178.68.195 port 43730 ssh2: RSA SHA256:kZCGRB9AT+jVFxeaX4/tO2T0hB3bd3sNSBeK3Rz6bcg Feb 9 19:03:19.591208 sshd[4363]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:03:19.611454 systemd[1]: Started session-8.scope. Feb 9 19:03:19.613330 systemd-logind[1634]: New session 8 of user core. Feb 9 19:03:19.899066 sshd[4363]: pam_unix(sshd:session): session closed for user core Feb 9 19:03:19.904541 systemd[1]: sshd@7-172.31.31.36:22-139.178.68.195:43730.service: Deactivated successfully. Feb 9 19:03:19.905621 systemd[1]: session-8.scope: Deactivated successfully. Feb 9 19:03:19.906622 systemd-logind[1634]: Session 8 logged out. Waiting for processes to exit. Feb 9 19:03:19.907991 systemd-logind[1634]: Removed session 8. Feb 9 19:03:24.930280 systemd[1]: Started sshd@8-172.31.31.36:22-139.178.68.195:43734.service. Feb 9 19:03:25.100079 sshd[4377]: Accepted publickey for core from 139.178.68.195 port 43734 ssh2: RSA SHA256:kZCGRB9AT+jVFxeaX4/tO2T0hB3bd3sNSBeK3Rz6bcg Feb 9 19:03:25.101871 sshd[4377]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:03:25.109686 systemd[1]: Started session-9.scope. Feb 9 19:03:25.110670 systemd-logind[1634]: New session 9 of user core. Feb 9 19:03:25.383103 sshd[4377]: pam_unix(sshd:session): session closed for user core Feb 9 19:03:25.386930 systemd-logind[1634]: Session 9 logged out. Waiting for processes to exit. Feb 9 19:03:25.387900 systemd[1]: sshd@8-172.31.31.36:22-139.178.68.195:43734.service: Deactivated successfully. Feb 9 19:03:25.388874 systemd[1]: session-9.scope: Deactivated successfully. Feb 9 19:03:25.390218 systemd-logind[1634]: Removed session 9. Feb 9 19:03:30.411124 systemd[1]: Started sshd@9-172.31.31.36:22-139.178.68.195:39068.service. Feb 9 19:03:30.584277 sshd[4392]: Accepted publickey for core from 139.178.68.195 port 39068 ssh2: RSA SHA256:kZCGRB9AT+jVFxeaX4/tO2T0hB3bd3sNSBeK3Rz6bcg Feb 9 19:03:30.586393 sshd[4392]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:03:30.593056 systemd[1]: Started session-10.scope. Feb 9 19:03:30.594404 systemd-logind[1634]: New session 10 of user core. Feb 9 19:03:30.856825 sshd[4392]: pam_unix(sshd:session): session closed for user core Feb 9 19:03:30.860750 systemd-logind[1634]: Session 10 logged out. Waiting for processes to exit. Feb 9 19:03:30.860970 systemd[1]: sshd@9-172.31.31.36:22-139.178.68.195:39068.service: Deactivated successfully. Feb 9 19:03:30.862035 systemd[1]: session-10.scope: Deactivated successfully. Feb 9 19:03:30.863080 systemd-logind[1634]: Removed session 10. Feb 9 19:03:35.888942 systemd[1]: Started sshd@10-172.31.31.36:22-139.178.68.195:39072.service. Feb 9 19:03:36.065082 sshd[4404]: Accepted publickey for core from 139.178.68.195 port 39072 ssh2: RSA SHA256:kZCGRB9AT+jVFxeaX4/tO2T0hB3bd3sNSBeK3Rz6bcg Feb 9 19:03:36.066689 sshd[4404]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:03:36.079067 systemd[1]: Started session-11.scope. Feb 9 19:03:36.079888 systemd-logind[1634]: New session 11 of user core. Feb 9 19:03:36.286469 sshd[4404]: pam_unix(sshd:session): session closed for user core Feb 9 19:03:36.289820 systemd[1]: sshd@10-172.31.31.36:22-139.178.68.195:39072.service: Deactivated successfully. 
Feb 9 19:03:36.290908 systemd[1]: session-11.scope: Deactivated successfully. Feb 9 19:03:36.292000 systemd-logind[1634]: Session 11 logged out. Waiting for processes to exit. Feb 9 19:03:36.292920 systemd-logind[1634]: Removed session 11. Feb 9 19:03:41.313923 systemd[1]: Started sshd@11-172.31.31.36:22-139.178.68.195:57012.service. Feb 9 19:03:41.485410 sshd[4417]: Accepted publickey for core from 139.178.68.195 port 57012 ssh2: RSA SHA256:kZCGRB9AT+jVFxeaX4/tO2T0hB3bd3sNSBeK3Rz6bcg Feb 9 19:03:41.487053 sshd[4417]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:03:41.497616 systemd[1]: Started session-12.scope. Feb 9 19:03:41.498156 systemd-logind[1634]: New session 12 of user core. Feb 9 19:03:41.709797 sshd[4417]: pam_unix(sshd:session): session closed for user core Feb 9 19:03:41.715066 systemd[1]: sshd@11-172.31.31.36:22-139.178.68.195:57012.service: Deactivated successfully. Feb 9 19:03:41.717129 systemd[1]: session-12.scope: Deactivated successfully. Feb 9 19:03:41.717771 systemd-logind[1634]: Session 12 logged out. Waiting for processes to exit. Feb 9 19:03:41.718769 systemd-logind[1634]: Removed session 12. Feb 9 19:03:46.740780 systemd[1]: Started sshd@12-172.31.31.36:22-139.178.68.195:43050.service. Feb 9 19:03:46.911889 sshd[4430]: Accepted publickey for core from 139.178.68.195 port 43050 ssh2: RSA SHA256:kZCGRB9AT+jVFxeaX4/tO2T0hB3bd3sNSBeK3Rz6bcg Feb 9 19:03:46.913817 sshd[4430]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:03:46.927527 systemd-logind[1634]: New session 13 of user core. Feb 9 19:03:46.928833 systemd[1]: Started session-13.scope. Feb 9 19:03:47.204076 sshd[4430]: pam_unix(sshd:session): session closed for user core Feb 9 19:03:47.207921 systemd[1]: sshd@12-172.31.31.36:22-139.178.68.195:43050.service: Deactivated successfully. Feb 9 19:03:47.208964 systemd[1]: session-13.scope: Deactivated successfully. Feb 9 19:03:47.209537 systemd-logind[1634]: Session 13 logged out. Waiting for processes to exit. Feb 9 19:03:47.210554 systemd-logind[1634]: Removed session 13. Feb 9 19:03:52.234268 systemd[1]: Started sshd@13-172.31.31.36:22-139.178.68.195:43054.service. Feb 9 19:03:52.412017 sshd[4442]: Accepted publickey for core from 139.178.68.195 port 43054 ssh2: RSA SHA256:kZCGRB9AT+jVFxeaX4/tO2T0hB3bd3sNSBeK3Rz6bcg Feb 9 19:03:52.413765 sshd[4442]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:03:52.419500 systemd[1]: Started session-14.scope. Feb 9 19:03:52.420220 systemd-logind[1634]: New session 14 of user core. Feb 9 19:03:52.635721 sshd[4442]: pam_unix(sshd:session): session closed for user core Feb 9 19:03:52.640781 systemd[1]: sshd@13-172.31.31.36:22-139.178.68.195:43054.service: Deactivated successfully. Feb 9 19:03:52.641866 systemd[1]: session-14.scope: Deactivated successfully. Feb 9 19:03:52.642521 systemd-logind[1634]: Session 14 logged out. Waiting for processes to exit. Feb 9 19:03:52.644077 systemd-logind[1634]: Removed session 14. Feb 9 19:03:52.662318 systemd[1]: Started sshd@14-172.31.31.36:22-139.178.68.195:43056.service. Feb 9 19:03:52.830723 sshd[4454]: Accepted publickey for core from 139.178.68.195 port 43056 ssh2: RSA SHA256:kZCGRB9AT+jVFxeaX4/tO2T0hB3bd3sNSBeK3Rz6bcg Feb 9 19:03:52.832233 sshd[4454]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:03:52.837897 systemd[1]: Started session-15.scope. Feb 9 19:03:52.838409 systemd-logind[1634]: New session 15 of user core. 
Feb 9 19:03:54.203481 sshd[4454]: pam_unix(sshd:session): session closed for user core Feb 9 19:03:54.213926 systemd[1]: sshd@14-172.31.31.36:22-139.178.68.195:43056.service: Deactivated successfully. Feb 9 19:03:54.215233 systemd[1]: session-15.scope: Deactivated successfully. Feb 9 19:03:54.217201 systemd-logind[1634]: Session 15 logged out. Waiting for processes to exit. Feb 9 19:03:54.218498 systemd-logind[1634]: Removed session 15. Feb 9 19:03:54.231229 systemd[1]: Started sshd@15-172.31.31.36:22-139.178.68.195:43060.service. Feb 9 19:03:54.423893 sshd[4466]: Accepted publickey for core from 139.178.68.195 port 43060 ssh2: RSA SHA256:kZCGRB9AT+jVFxeaX4/tO2T0hB3bd3sNSBeK3Rz6bcg Feb 9 19:03:54.428873 sshd[4466]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:03:54.437978 systemd[1]: Started session-16.scope. Feb 9 19:03:54.438650 systemd-logind[1634]: New session 16 of user core. Feb 9 19:03:54.741541 sshd[4466]: pam_unix(sshd:session): session closed for user core Feb 9 19:03:54.746463 systemd-logind[1634]: Session 16 logged out. Waiting for processes to exit. Feb 9 19:03:54.746617 systemd[1]: sshd@15-172.31.31.36:22-139.178.68.195:43060.service: Deactivated successfully. Feb 9 19:03:54.747889 systemd[1]: session-16.scope: Deactivated successfully. Feb 9 19:03:54.750112 systemd-logind[1634]: Removed session 16. Feb 9 19:03:59.769580 systemd[1]: Started sshd@16-172.31.31.36:22-139.178.68.195:43958.service. Feb 9 19:03:59.947045 sshd[4481]: Accepted publickey for core from 139.178.68.195 port 43958 ssh2: RSA SHA256:kZCGRB9AT+jVFxeaX4/tO2T0hB3bd3sNSBeK3Rz6bcg Feb 9 19:03:59.948683 sshd[4481]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:03:59.955017 systemd[1]: Started session-17.scope. Feb 9 19:03:59.955872 systemd-logind[1634]: New session 17 of user core. Feb 9 19:04:00.173966 sshd[4481]: pam_unix(sshd:session): session closed for user core Feb 9 19:04:00.178645 systemd[1]: sshd@16-172.31.31.36:22-139.178.68.195:43958.service: Deactivated successfully. Feb 9 19:04:00.180332 systemd[1]: session-17.scope: Deactivated successfully. Feb 9 19:04:00.181709 systemd-logind[1634]: Session 17 logged out. Waiting for processes to exit. Feb 9 19:04:00.182983 systemd-logind[1634]: Removed session 17. Feb 9 19:04:05.206642 systemd[1]: Started sshd@17-172.31.31.36:22-139.178.68.195:43970.service. Feb 9 19:04:05.388325 sshd[4492]: Accepted publickey for core from 139.178.68.195 port 43970 ssh2: RSA SHA256:kZCGRB9AT+jVFxeaX4/tO2T0hB3bd3sNSBeK3Rz6bcg Feb 9 19:04:05.390627 sshd[4492]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:04:05.399408 systemd-logind[1634]: New session 18 of user core. Feb 9 19:04:05.400982 systemd[1]: Started session-18.scope. Feb 9 19:04:05.634962 sshd[4492]: pam_unix(sshd:session): session closed for user core Feb 9 19:04:05.639781 systemd-logind[1634]: Session 18 logged out. Waiting for processes to exit. Feb 9 19:04:05.639984 systemd[1]: sshd@17-172.31.31.36:22-139.178.68.195:43970.service: Deactivated successfully. Feb 9 19:04:05.640988 systemd[1]: session-18.scope: Deactivated successfully. Feb 9 19:04:05.642616 systemd-logind[1634]: Removed session 18. Feb 9 19:04:10.664531 systemd[1]: Started sshd@18-172.31.31.36:22-139.178.68.195:43216.service. 
Feb 9 19:04:10.836974 sshd[4506]: Accepted publickey for core from 139.178.68.195 port 43216 ssh2: RSA SHA256:kZCGRB9AT+jVFxeaX4/tO2T0hB3bd3sNSBeK3Rz6bcg Feb 9 19:04:10.839204 sshd[4506]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:04:10.855873 systemd-logind[1634]: New session 19 of user core. Feb 9 19:04:10.858072 systemd[1]: Started session-19.scope. Feb 9 19:04:11.107562 sshd[4506]: pam_unix(sshd:session): session closed for user core Feb 9 19:04:11.111038 systemd[1]: sshd@18-172.31.31.36:22-139.178.68.195:43216.service: Deactivated successfully. Feb 9 19:04:11.112101 systemd[1]: session-19.scope: Deactivated successfully. Feb 9 19:04:11.113078 systemd-logind[1634]: Session 19 logged out. Waiting for processes to exit. Feb 9 19:04:11.114060 systemd-logind[1634]: Removed session 19. Feb 9 19:04:11.145212 systemd[1]: Started sshd@19-172.31.31.36:22-139.178.68.195:43222.service. Feb 9 19:04:11.314628 sshd[4519]: Accepted publickey for core from 139.178.68.195 port 43222 ssh2: RSA SHA256:kZCGRB9AT+jVFxeaX4/tO2T0hB3bd3sNSBeK3Rz6bcg Feb 9 19:04:11.316983 sshd[4519]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:04:11.325146 systemd[1]: Started session-20.scope. Feb 9 19:04:11.326033 systemd-logind[1634]: New session 20 of user core. Feb 9 19:04:12.039093 sshd[4519]: pam_unix(sshd:session): session closed for user core Feb 9 19:04:12.044696 systemd-logind[1634]: Session 20 logged out. Waiting for processes to exit. Feb 9 19:04:12.044936 systemd[1]: sshd@19-172.31.31.36:22-139.178.68.195:43222.service: Deactivated successfully. Feb 9 19:04:12.046187 systemd[1]: session-20.scope: Deactivated successfully. Feb 9 19:04:12.047922 systemd-logind[1634]: Removed session 20. Feb 9 19:04:12.073926 systemd[1]: Started sshd@20-172.31.31.36:22-139.178.68.195:43224.service. Feb 9 19:04:12.264164 sshd[4529]: Accepted publickey for core from 139.178.68.195 port 43224 ssh2: RSA SHA256:kZCGRB9AT+jVFxeaX4/tO2T0hB3bd3sNSBeK3Rz6bcg Feb 9 19:04:12.265755 sshd[4529]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:04:12.275574 systemd-logind[1634]: New session 21 of user core. Feb 9 19:04:12.275995 systemd[1]: Started session-21.scope. Feb 9 19:04:13.716327 sshd[4529]: pam_unix(sshd:session): session closed for user core Feb 9 19:04:13.729839 systemd[1]: sshd@20-172.31.31.36:22-139.178.68.195:43224.service: Deactivated successfully. Feb 9 19:04:13.731059 systemd[1]: session-21.scope: Deactivated successfully. Feb 9 19:04:13.732640 systemd-logind[1634]: Session 21 logged out. Waiting for processes to exit. Feb 9 19:04:13.734290 systemd-logind[1634]: Removed session 21. Feb 9 19:04:13.742670 systemd[1]: Started sshd@21-172.31.31.36:22-139.178.68.195:43238.service. Feb 9 19:04:13.940415 sshd[4555]: Accepted publickey for core from 139.178.68.195 port 43238 ssh2: RSA SHA256:kZCGRB9AT+jVFxeaX4/tO2T0hB3bd3sNSBeK3Rz6bcg Feb 9 19:04:13.941606 sshd[4555]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:04:13.963819 systemd-logind[1634]: New session 22 of user core. Feb 9 19:04:13.965328 systemd[1]: Started session-22.scope. Feb 9 19:04:14.458708 sshd[4555]: pam_unix(sshd:session): session closed for user core Feb 9 19:04:14.462319 systemd[1]: sshd@21-172.31.31.36:22-139.178.68.195:43238.service: Deactivated successfully. Feb 9 19:04:14.464452 systemd[1]: session-22.scope: Deactivated successfully. Feb 9 19:04:14.465508 systemd-logind[1634]: Session 22 logged out. 
Waiting for processes to exit. Feb 9 19:04:14.467553 systemd-logind[1634]: Removed session 22. Feb 9 19:04:14.486908 systemd[1]: Started sshd@22-172.31.31.36:22-139.178.68.195:43240.service. Feb 9 19:04:14.656385 sshd[4604]: Accepted publickey for core from 139.178.68.195 port 43240 ssh2: RSA SHA256:kZCGRB9AT+jVFxeaX4/tO2T0hB3bd3sNSBeK3Rz6bcg Feb 9 19:04:14.658034 sshd[4604]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:04:14.665129 systemd[1]: Started session-23.scope. Feb 9 19:04:14.667965 systemd-logind[1634]: New session 23 of user core. Feb 9 19:04:14.893109 sshd[4604]: pam_unix(sshd:session): session closed for user core Feb 9 19:04:14.909112 systemd[1]: sshd@22-172.31.31.36:22-139.178.68.195:43240.service: Deactivated successfully. Feb 9 19:04:14.910249 systemd[1]: session-23.scope: Deactivated successfully. Feb 9 19:04:14.912147 systemd-logind[1634]: Session 23 logged out. Waiting for processes to exit. Feb 9 19:04:14.913444 systemd-logind[1634]: Removed session 23. Feb 9 19:04:19.927821 systemd[1]: Started sshd@23-172.31.31.36:22-139.178.68.195:59966.service. Feb 9 19:04:20.100521 sshd[4616]: Accepted publickey for core from 139.178.68.195 port 59966 ssh2: RSA SHA256:kZCGRB9AT+jVFxeaX4/tO2T0hB3bd3sNSBeK3Rz6bcg Feb 9 19:04:20.102381 sshd[4616]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:04:20.113791 systemd-logind[1634]: New session 24 of user core. Feb 9 19:04:20.114951 systemd[1]: Started session-24.scope. Feb 9 19:04:20.390669 sshd[4616]: pam_unix(sshd:session): session closed for user core Feb 9 19:04:20.402668 systemd-logind[1634]: Session 24 logged out. Waiting for processes to exit. Feb 9 19:04:20.406016 systemd[1]: sshd@23-172.31.31.36:22-139.178.68.195:59966.service: Deactivated successfully. Feb 9 19:04:20.407372 systemd[1]: session-24.scope: Deactivated successfully. Feb 9 19:04:20.414907 systemd-logind[1634]: Removed session 24. Feb 9 19:04:25.418275 systemd[1]: Started sshd@24-172.31.31.36:22-139.178.68.195:59978.service. Feb 9 19:04:25.595463 sshd[4655]: Accepted publickey for core from 139.178.68.195 port 59978 ssh2: RSA SHA256:kZCGRB9AT+jVFxeaX4/tO2T0hB3bd3sNSBeK3Rz6bcg Feb 9 19:04:25.597342 sshd[4655]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:04:25.604548 systemd-logind[1634]: New session 25 of user core. Feb 9 19:04:25.605707 systemd[1]: Started session-25.scope. Feb 9 19:04:25.847006 sshd[4655]: pam_unix(sshd:session): session closed for user core Feb 9 19:04:25.850456 systemd[1]: sshd@24-172.31.31.36:22-139.178.68.195:59978.service: Deactivated successfully. Feb 9 19:04:25.851398 systemd[1]: session-25.scope: Deactivated successfully. Feb 9 19:04:25.852457 systemd-logind[1634]: Session 25 logged out. Waiting for processes to exit. Feb 9 19:04:25.853475 systemd-logind[1634]: Removed session 25. Feb 9 19:04:30.884182 systemd[1]: Started sshd@25-172.31.31.36:22-139.178.68.195:37068.service. Feb 9 19:04:31.050771 sshd[4669]: Accepted publickey for core from 139.178.68.195 port 37068 ssh2: RSA SHA256:kZCGRB9AT+jVFxeaX4/tO2T0hB3bd3sNSBeK3Rz6bcg Feb 9 19:04:31.052636 sshd[4669]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:04:31.062934 systemd-logind[1634]: New session 26 of user core. Feb 9 19:04:31.064504 systemd[1]: Started session-26.scope. Feb 9 19:04:31.321645 sshd[4669]: pam_unix(sshd:session): session closed for user core Feb 9 19:04:31.327635 systemd-logind[1634]: Session 26 logged out. 
Waiting for processes to exit. Feb 9 19:04:31.332004 systemd[1]: sshd@25-172.31.31.36:22-139.178.68.195:37068.service: Deactivated successfully. Feb 9 19:04:31.333684 systemd[1]: session-26.scope: Deactivated successfully. Feb 9 19:04:31.335967 systemd-logind[1634]: Removed session 26. Feb 9 19:04:36.352970 systemd[1]: Started sshd@26-172.31.31.36:22-139.178.68.195:33528.service. Feb 9 19:04:36.534121 sshd[4681]: Accepted publickey for core from 139.178.68.195 port 33528 ssh2: RSA SHA256:kZCGRB9AT+jVFxeaX4/tO2T0hB3bd3sNSBeK3Rz6bcg Feb 9 19:04:36.536142 sshd[4681]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:04:36.543453 systemd-logind[1634]: New session 27 of user core. Feb 9 19:04:36.545552 systemd[1]: Started session-27.scope. Feb 9 19:04:36.788333 sshd[4681]: pam_unix(sshd:session): session closed for user core Feb 9 19:04:36.792974 systemd[1]: sshd@26-172.31.31.36:22-139.178.68.195:33528.service: Deactivated successfully. Feb 9 19:04:36.795870 systemd[1]: session-27.scope: Deactivated successfully. Feb 9 19:04:36.798758 systemd-logind[1634]: Session 27 logged out. Waiting for processes to exit. Feb 9 19:04:36.825248 systemd-logind[1634]: Removed session 27. Feb 9 19:04:36.829908 systemd[1]: Started sshd@27-172.31.31.36:22-139.178.68.195:33536.service. Feb 9 19:04:36.992727 sshd[4695]: Accepted publickey for core from 139.178.68.195 port 33536 ssh2: RSA SHA256:kZCGRB9AT+jVFxeaX4/tO2T0hB3bd3sNSBeK3Rz6bcg Feb 9 19:04:36.994281 sshd[4695]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:04:37.002210 systemd[1]: Started session-28.scope. Feb 9 19:04:37.003514 systemd-logind[1634]: New session 28 of user core. Feb 9 19:04:39.157010 env[1647]: time="2024-02-09T19:04:39.156956261Z" level=info msg="StopContainer for \"a00368b6cc56c31904c273a4f86ac10f4df5e5345250fefe271cfd0efe81be14\" with timeout 30 (s)" Feb 9 19:04:39.161438 env[1647]: time="2024-02-09T19:04:39.158125462Z" level=info msg="Stop container \"a00368b6cc56c31904c273a4f86ac10f4df5e5345250fefe271cfd0efe81be14\" with signal terminated" Feb 9 19:04:39.168220 systemd[1]: run-containerd-runc-k8s.io-6184820d6c0d012855bb21a8de450ae817822dc2970098456db1fef3ebdd32de-runc.hFZvli.mount: Deactivated successfully. Feb 9 19:04:39.194984 systemd[1]: cri-containerd-a00368b6cc56c31904c273a4f86ac10f4df5e5345250fefe271cfd0efe81be14.scope: Deactivated successfully. Feb 9 19:04:39.245592 env[1647]: time="2024-02-09T19:04:39.245480526Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 9 19:04:39.250390 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a00368b6cc56c31904c273a4f86ac10f4df5e5345250fefe271cfd0efe81be14-rootfs.mount: Deactivated successfully. 
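The "failed to reload cni configuration after receiving fs change event(... REMOVE)" entry comes from containerd's CRI plugin watching /etc/cni/net.d: removing 05-cilium.conf leaves no network config behind, so the reload fails and the node's runtime network goes not-ready. A sketch of the same watch pattern using the fsnotify library; the directory path is taken from the log, everything else is illustrative:

```go
package main

import (
	"log"

	"github.com/fsnotify/fsnotify"
)

func main() {
	w, err := fsnotify.NewWatcher()
	if err != nil {
		log.Fatal(err)
	}
	defer w.Close()

	// containerd's CRI plugin watches this directory and reloads the CNI
	// config on every change; a REMOVE of the only conf file is what
	// produces the "no network config found" error above.
	if err := w.Add("/etc/cni/net.d"); err != nil {
		log.Fatal(err)
	}
	for {
		select {
		case ev := <-w.Events:
			if ev.Op&fsnotify.Remove != 0 {
				log.Printf("fs change event(%q: REMOVE)", ev.Name)
			}
		case err := <-w.Errors:
			log.Println("watch error:", err)
		}
	}
}
```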
Feb 9 19:04:39.255754 env[1647]: time="2024-02-09T19:04:39.255715651Z" level=info msg="StopContainer for \"6184820d6c0d012855bb21a8de450ae817822dc2970098456db1fef3ebdd32de\" with timeout 1 (s)" Feb 9 19:04:39.256129 env[1647]: time="2024-02-09T19:04:39.255973548Z" level=info msg="Stop container \"6184820d6c0d012855bb21a8de450ae817822dc2970098456db1fef3ebdd32de\" with signal terminated" Feb 9 19:04:39.266165 systemd-networkd[1455]: lxc_health: Link DOWN Feb 9 19:04:39.266180 systemd-networkd[1455]: lxc_health: Lost carrier Feb 9 19:04:39.275523 env[1647]: time="2024-02-09T19:04:39.275454691Z" level=info msg="shim disconnected" id=a00368b6cc56c31904c273a4f86ac10f4df5e5345250fefe271cfd0efe81be14 Feb 9 19:04:39.275523 env[1647]: time="2024-02-09T19:04:39.275509179Z" level=warning msg="cleaning up after shim disconnected" id=a00368b6cc56c31904c273a4f86ac10f4df5e5345250fefe271cfd0efe81be14 namespace=k8s.io Feb 9 19:04:39.275523 env[1647]: time="2024-02-09T19:04:39.275522578Z" level=info msg="cleaning up dead shim" Feb 9 19:04:39.294333 env[1647]: time="2024-02-09T19:04:39.294291581Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:04:39Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4750 runtime=io.containerd.runc.v2\n" Feb 9 19:04:39.300049 env[1647]: time="2024-02-09T19:04:39.300008214Z" level=info msg="StopContainer for \"a00368b6cc56c31904c273a4f86ac10f4df5e5345250fefe271cfd0efe81be14\" returns successfully" Feb 9 19:04:39.301927 env[1647]: time="2024-02-09T19:04:39.301896057Z" level=info msg="StopPodSandbox for \"56c44e9f9b8f76f9f6e0c1414753147d3ace259e52085fa5340984947564a6aa\"" Feb 9 19:04:39.308569 env[1647]: time="2024-02-09T19:04:39.302314396Z" level=info msg="Container to stop \"a00368b6cc56c31904c273a4f86ac10f4df5e5345250fefe271cfd0efe81be14\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 19:04:39.307390 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-56c44e9f9b8f76f9f6e0c1414753147d3ace259e52085fa5340984947564a6aa-shm.mount: Deactivated successfully. Feb 9 19:04:39.328324 kubelet[2662]: E0209 19:04:39.328221 2662 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 9 19:04:39.398623 systemd[1]: cri-containerd-6184820d6c0d012855bb21a8de450ae817822dc2970098456db1fef3ebdd32de.scope: Deactivated successfully. Feb 9 19:04:39.399013 systemd[1]: cri-containerd-6184820d6c0d012855bb21a8de450ae817822dc2970098456db1fef3ebdd32de.scope: Consumed 8.993s CPU time. Feb 9 19:04:39.412144 systemd[1]: cri-containerd-56c44e9f9b8f76f9f6e0c1414753147d3ace259e52085fa5340984947564a6aa.scope: Deactivated successfully. Feb 9 19:04:39.440990 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6184820d6c0d012855bb21a8de450ae817822dc2970098456db1fef3ebdd32de-rootfs.mount: Deactivated successfully. 
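The StopContainer entries map directly onto the CRI RuntimeService API: the runtime sends SIGTERM ("with signal terminated"), waits up to the given timeout ("with timeout 1 (s)"), then SIGKILLs, after which the scope unit is deactivated and the shim disconnects. A hedged sketch of the same call against containerd's CRI socket (socket path is the containerd default, not shown in the log; container ID copied from the entries above):

```go
package main

import (
	"context"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	rt := runtimeapi.NewRuntimeServiceClient(conn)

	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	// "StopContainer ... with timeout 1 (s)": SIGTERM first, SIGKILL once
	// the timeout elapses.
	_, err = rt.StopContainer(ctx, &runtimeapi.StopContainerRequest{
		ContainerId: "6184820d6c0d012855bb21a8de450ae817822dc2970098456db1fef3ebdd32de",
		Timeout:     1,
	})
	if err != nil {
		log.Fatal(err)
	}
}
```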
Feb 9 19:04:39.458347 env[1647]: time="2024-02-09T19:04:39.458295142Z" level=info msg="shim disconnected" id=56c44e9f9b8f76f9f6e0c1414753147d3ace259e52085fa5340984947564a6aa Feb 9 19:04:39.458862 env[1647]: time="2024-02-09T19:04:39.458767948Z" level=warning msg="cleaning up after shim disconnected" id=56c44e9f9b8f76f9f6e0c1414753147d3ace259e52085fa5340984947564a6aa namespace=k8s.io Feb 9 19:04:39.459013 env[1647]: time="2024-02-09T19:04:39.458994486Z" level=info msg="cleaning up dead shim" Feb 9 19:04:39.459487 env[1647]: time="2024-02-09T19:04:39.458297490Z" level=info msg="shim disconnected" id=6184820d6c0d012855bb21a8de450ae817822dc2970098456db1fef3ebdd32de Feb 9 19:04:39.460094 env[1647]: time="2024-02-09T19:04:39.459645308Z" level=warning msg="cleaning up after shim disconnected" id=6184820d6c0d012855bb21a8de450ae817822dc2970098456db1fef3ebdd32de namespace=k8s.io Feb 9 19:04:39.460094 env[1647]: time="2024-02-09T19:04:39.459664191Z" level=info msg="cleaning up dead shim" Feb 9 19:04:39.471395 env[1647]: time="2024-02-09T19:04:39.471320521Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:04:39Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4799 runtime=io.containerd.runc.v2\n" Feb 9 19:04:39.472012 env[1647]: time="2024-02-09T19:04:39.471970631Z" level=info msg="TearDown network for sandbox \"56c44e9f9b8f76f9f6e0c1414753147d3ace259e52085fa5340984947564a6aa\" successfully" Feb 9 19:04:39.472190 env[1647]: time="2024-02-09T19:04:39.472160131Z" level=info msg="StopPodSandbox for \"56c44e9f9b8f76f9f6e0c1414753147d3ace259e52085fa5340984947564a6aa\" returns successfully" Feb 9 19:04:39.480070 env[1647]: time="2024-02-09T19:04:39.480030804Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:04:39Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4800 runtime=io.containerd.runc.v2\n" Feb 9 19:04:39.483059 env[1647]: time="2024-02-09T19:04:39.483009482Z" level=info msg="StopContainer for \"6184820d6c0d012855bb21a8de450ae817822dc2970098456db1fef3ebdd32de\" returns successfully" Feb 9 19:04:39.483475 env[1647]: time="2024-02-09T19:04:39.483427098Z" level=info msg="StopPodSandbox for \"78b51809e3d2e4a147c89eac1a10801e81f7195d34c9c4b2b32e9e68676ba28a\"" Feb 9 19:04:39.483577 env[1647]: time="2024-02-09T19:04:39.483504835Z" level=info msg="Container to stop \"2c80512e3d8b7fadffd7bf4d071f1ec03e5046f652870dec3929d24ea7542e61\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 19:04:39.483577 env[1647]: time="2024-02-09T19:04:39.483524688Z" level=info msg="Container to stop \"ded5b40b05e08b4b85eb9a7e5126b1beb0300a9ae67edb5578b7fcbbb8cd7eea\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 19:04:39.483577 env[1647]: time="2024-02-09T19:04:39.483542442Z" level=info msg="Container to stop \"6184820d6c0d012855bb21a8de450ae817822dc2970098456db1fef3ebdd32de\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 19:04:39.483577 env[1647]: time="2024-02-09T19:04:39.483559098Z" level=info msg="Container to stop \"d33b674dac2deafc141f8c31ec41c8fb124977d812f16dcbac8a7d903e741fbc\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 19:04:39.483773 env[1647]: time="2024-02-09T19:04:39.483573967Z" level=info msg="Container to stop \"1dcc2c6d955d4594cd6ece2c85d50f499c3c136fa3ab508a2b47884f48207642\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 19:04:39.492219 systemd[1]: 
cri-containerd-78b51809e3d2e4a147c89eac1a10801e81f7195d34c9c4b2b32e9e68676ba28a.scope: Deactivated successfully. Feb 9 19:04:39.539937 env[1647]: time="2024-02-09T19:04:39.539885800Z" level=info msg="shim disconnected" id=78b51809e3d2e4a147c89eac1a10801e81f7195d34c9c4b2b32e9e68676ba28a Feb 9 19:04:39.540504 env[1647]: time="2024-02-09T19:04:39.540468294Z" level=warning msg="cleaning up after shim disconnected" id=78b51809e3d2e4a147c89eac1a10801e81f7195d34c9c4b2b32e9e68676ba28a namespace=k8s.io Feb 9 19:04:39.540718 env[1647]: time="2024-02-09T19:04:39.540697382Z" level=info msg="cleaning up dead shim" Feb 9 19:04:39.553797 env[1647]: time="2024-02-09T19:04:39.553751696Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:04:39Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4845 runtime=io.containerd.runc.v2\n" Feb 9 19:04:39.555533 env[1647]: time="2024-02-09T19:04:39.554304452Z" level=info msg="TearDown network for sandbox \"78b51809e3d2e4a147c89eac1a10801e81f7195d34c9c4b2b32e9e68676ba28a\" successfully" Feb 9 19:04:39.555533 env[1647]: time="2024-02-09T19:04:39.554339000Z" level=info msg="StopPodSandbox for \"78b51809e3d2e4a147c89eac1a10801e81f7195d34c9c4b2b32e9e68676ba28a\" returns successfully" Feb 9 19:04:39.660385 kubelet[2662]: I0209 19:04:39.659802 2662 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/edc7c2b6-12e1-4a6d-a9cd-a691fd2c04fe-hostproc\") pod \"edc7c2b6-12e1-4a6d-a9cd-a691fd2c04fe\" (UID: \"edc7c2b6-12e1-4a6d-a9cd-a691fd2c04fe\") " Feb 9 19:04:39.660385 kubelet[2662]: I0209 19:04:39.659855 2662 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/edc7c2b6-12e1-4a6d-a9cd-a691fd2c04fe-cilium-run\") pod \"edc7c2b6-12e1-4a6d-a9cd-a691fd2c04fe\" (UID: \"edc7c2b6-12e1-4a6d-a9cd-a691fd2c04fe\") " Feb 9 19:04:39.660385 kubelet[2662]: I0209 19:04:39.659891 2662 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gjvgn\" (UniqueName: \"kubernetes.io/projected/fb7e20d8-3f8c-4f65-a0ab-bd3e293632ec-kube-api-access-gjvgn\") pod \"fb7e20d8-3f8c-4f65-a0ab-bd3e293632ec\" (UID: \"fb7e20d8-3f8c-4f65-a0ab-bd3e293632ec\") " Feb 9 19:04:39.660385 kubelet[2662]: I0209 19:04:39.659920 2662 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/edc7c2b6-12e1-4a6d-a9cd-a691fd2c04fe-cilium-cgroup\") pod \"edc7c2b6-12e1-4a6d-a9cd-a691fd2c04fe\" (UID: \"edc7c2b6-12e1-4a6d-a9cd-a691fd2c04fe\") " Feb 9 19:04:39.660385 kubelet[2662]: I0209 19:04:39.659949 2662 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/edc7c2b6-12e1-4a6d-a9cd-a691fd2c04fe-clustermesh-secrets\") pod \"edc7c2b6-12e1-4a6d-a9cd-a691fd2c04fe\" (UID: \"edc7c2b6-12e1-4a6d-a9cd-a691fd2c04fe\") " Feb 9 19:04:39.660385 kubelet[2662]: I0209 19:04:39.659968 2662 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/edc7c2b6-12e1-4a6d-a9cd-a691fd2c04fe-bpf-maps\") pod \"edc7c2b6-12e1-4a6d-a9cd-a691fd2c04fe\" (UID: \"edc7c2b6-12e1-4a6d-a9cd-a691fd2c04fe\") " Feb 9 19:04:39.660921 kubelet[2662]: I0209 19:04:39.659996 2662 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/fb7e20d8-3f8c-4f65-a0ab-bd3e293632ec-cilium-config-path\") pod \"fb7e20d8-3f8c-4f65-a0ab-bd3e293632ec\" (UID: \"fb7e20d8-3f8c-4f65-a0ab-bd3e293632ec\") " Feb 9 19:04:39.660921 kubelet[2662]: I0209 19:04:39.660013 2662 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/edc7c2b6-12e1-4a6d-a9cd-a691fd2c04fe-etc-cni-netd\") pod \"edc7c2b6-12e1-4a6d-a9cd-a691fd2c04fe\" (UID: \"edc7c2b6-12e1-4a6d-a9cd-a691fd2c04fe\") " Feb 9 19:04:39.660921 kubelet[2662]: I0209 19:04:39.660029 2662 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/edc7c2b6-12e1-4a6d-a9cd-a691fd2c04fe-cni-path\") pod \"edc7c2b6-12e1-4a6d-a9cd-a691fd2c04fe\" (UID: \"edc7c2b6-12e1-4a6d-a9cd-a691fd2c04fe\") " Feb 9 19:04:39.660921 kubelet[2662]: I0209 19:04:39.660047 2662 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/edc7c2b6-12e1-4a6d-a9cd-a691fd2c04fe-cilium-config-path\") pod \"edc7c2b6-12e1-4a6d-a9cd-a691fd2c04fe\" (UID: \"edc7c2b6-12e1-4a6d-a9cd-a691fd2c04fe\") " Feb 9 19:04:39.660921 kubelet[2662]: I0209 19:04:39.660072 2662 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/edc7c2b6-12e1-4a6d-a9cd-a691fd2c04fe-lib-modules\") pod \"edc7c2b6-12e1-4a6d-a9cd-a691fd2c04fe\" (UID: \"edc7c2b6-12e1-4a6d-a9cd-a691fd2c04fe\") " Feb 9 19:04:39.660921 kubelet[2662]: I0209 19:04:39.660185 2662 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/edc7c2b6-12e1-4a6d-a9cd-a691fd2c04fe-hubble-tls\") pod \"edc7c2b6-12e1-4a6d-a9cd-a691fd2c04fe\" (UID: \"edc7c2b6-12e1-4a6d-a9cd-a691fd2c04fe\") " Feb 9 19:04:39.661169 kubelet[2662]: I0209 19:04:39.660206 2662 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/edc7c2b6-12e1-4a6d-a9cd-a691fd2c04fe-host-proc-sys-kernel\") pod \"edc7c2b6-12e1-4a6d-a9cd-a691fd2c04fe\" (UID: \"edc7c2b6-12e1-4a6d-a9cd-a691fd2c04fe\") " Feb 9 19:04:39.661169 kubelet[2662]: I0209 19:04:39.660223 2662 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/edc7c2b6-12e1-4a6d-a9cd-a691fd2c04fe-xtables-lock\") pod \"edc7c2b6-12e1-4a6d-a9cd-a691fd2c04fe\" (UID: \"edc7c2b6-12e1-4a6d-a9cd-a691fd2c04fe\") " Feb 9 19:04:39.661169 kubelet[2662]: I0209 19:04:39.660254 2662 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-shfjf\" (UniqueName: \"kubernetes.io/projected/edc7c2b6-12e1-4a6d-a9cd-a691fd2c04fe-kube-api-access-shfjf\") pod \"edc7c2b6-12e1-4a6d-a9cd-a691fd2c04fe\" (UID: \"edc7c2b6-12e1-4a6d-a9cd-a691fd2c04fe\") " Feb 9 19:04:39.661169 kubelet[2662]: I0209 19:04:39.660274 2662 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/edc7c2b6-12e1-4a6d-a9cd-a691fd2c04fe-host-proc-sys-net\") pod \"edc7c2b6-12e1-4a6d-a9cd-a691fd2c04fe\" (UID: \"edc7c2b6-12e1-4a6d-a9cd-a691fd2c04fe\") " Feb 9 19:04:39.662301 kubelet[2662]: I0209 19:04:39.661295 2662 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/edc7c2b6-12e1-4a6d-a9cd-a691fd2c04fe-host-proc-sys-net" 
(OuterVolumeSpecName: "host-proc-sys-net") pod "edc7c2b6-12e1-4a6d-a9cd-a691fd2c04fe" (UID: "edc7c2b6-12e1-4a6d-a9cd-a691fd2c04fe"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:04:39.662301 kubelet[2662]: I0209 19:04:39.662039 2662 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/edc7c2b6-12e1-4a6d-a9cd-a691fd2c04fe-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "edc7c2b6-12e1-4a6d-a9cd-a691fd2c04fe" (UID: "edc7c2b6-12e1-4a6d-a9cd-a691fd2c04fe"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:04:39.662301 kubelet[2662]: I0209 19:04:39.660654 2662 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/edc7c2b6-12e1-4a6d-a9cd-a691fd2c04fe-hostproc" (OuterVolumeSpecName: "hostproc") pod "edc7c2b6-12e1-4a6d-a9cd-a691fd2c04fe" (UID: "edc7c2b6-12e1-4a6d-a9cd-a691fd2c04fe"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:04:39.662301 kubelet[2662]: I0209 19:04:39.662069 2662 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/edc7c2b6-12e1-4a6d-a9cd-a691fd2c04fe-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "edc7c2b6-12e1-4a6d-a9cd-a691fd2c04fe" (UID: "edc7c2b6-12e1-4a6d-a9cd-a691fd2c04fe"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:04:39.662301 kubelet[2662]: I0209 19:04:39.662069 2662 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/edc7c2b6-12e1-4a6d-a9cd-a691fd2c04fe-cni-path" (OuterVolumeSpecName: "cni-path") pod "edc7c2b6-12e1-4a6d-a9cd-a691fd2c04fe" (UID: "edc7c2b6-12e1-4a6d-a9cd-a691fd2c04fe"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:04:39.664122 kubelet[2662]: W0209 19:04:39.663921 2662 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/edc7c2b6-12e1-4a6d-a9cd-a691fd2c04fe/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Feb 9 19:04:39.666626 kubelet[2662]: I0209 19:04:39.666585 2662 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/edc7c2b6-12e1-4a6d-a9cd-a691fd2c04fe-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "edc7c2b6-12e1-4a6d-a9cd-a691fd2c04fe" (UID: "edc7c2b6-12e1-4a6d-a9cd-a691fd2c04fe"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:04:39.667232 kubelet[2662]: I0209 19:04:39.667195 2662 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/edc7c2b6-12e1-4a6d-a9cd-a691fd2c04fe-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "edc7c2b6-12e1-4a6d-a9cd-a691fd2c04fe" (UID: "edc7c2b6-12e1-4a6d-a9cd-a691fd2c04fe"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:04:39.667911 kubelet[2662]: I0209 19:04:39.667640 2662 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/edc7c2b6-12e1-4a6d-a9cd-a691fd2c04fe-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "edc7c2b6-12e1-4a6d-a9cd-a691fd2c04fe" (UID: "edc7c2b6-12e1-4a6d-a9cd-a691fd2c04fe"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:04:39.667911 kubelet[2662]: I0209 19:04:39.667675 2662 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/edc7c2b6-12e1-4a6d-a9cd-a691fd2c04fe-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "edc7c2b6-12e1-4a6d-a9cd-a691fd2c04fe" (UID: "edc7c2b6-12e1-4a6d-a9cd-a691fd2c04fe"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:04:39.668053 kubelet[2662]: I0209 19:04:39.667998 2662 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/edc7c2b6-12e1-4a6d-a9cd-a691fd2c04fe-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "edc7c2b6-12e1-4a6d-a9cd-a691fd2c04fe" (UID: "edc7c2b6-12e1-4a6d-a9cd-a691fd2c04fe"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:04:39.668326 kubelet[2662]: W0209 19:04:39.668289 2662 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/fb7e20d8-3f8c-4f65-a0ab-bd3e293632ec/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Feb 9 19:04:39.672903 kubelet[2662]: I0209 19:04:39.672858 2662 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/edc7c2b6-12e1-4a6d-a9cd-a691fd2c04fe-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "edc7c2b6-12e1-4a6d-a9cd-a691fd2c04fe" (UID: "edc7c2b6-12e1-4a6d-a9cd-a691fd2c04fe"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 9 19:04:39.673557 kubelet[2662]: I0209 19:04:39.673524 2662 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fb7e20d8-3f8c-4f65-a0ab-bd3e293632ec-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "fb7e20d8-3f8c-4f65-a0ab-bd3e293632ec" (UID: "fb7e20d8-3f8c-4f65-a0ab-bd3e293632ec"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 9 19:04:39.679935 kubelet[2662]: I0209 19:04:39.679895 2662 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/edc7c2b6-12e1-4a6d-a9cd-a691fd2c04fe-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "edc7c2b6-12e1-4a6d-a9cd-a691fd2c04fe" (UID: "edc7c2b6-12e1-4a6d-a9cd-a691fd2c04fe"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 9 19:04:39.680388 kubelet[2662]: I0209 19:04:39.680217 2662 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb7e20d8-3f8c-4f65-a0ab-bd3e293632ec-kube-api-access-gjvgn" (OuterVolumeSpecName: "kube-api-access-gjvgn") pod "fb7e20d8-3f8c-4f65-a0ab-bd3e293632ec" (UID: "fb7e20d8-3f8c-4f65-a0ab-bd3e293632ec"). InnerVolumeSpecName "kube-api-access-gjvgn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 19:04:39.681289 kubelet[2662]: I0209 19:04:39.681260 2662 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/edc7c2b6-12e1-4a6d-a9cd-a691fd2c04fe-kube-api-access-shfjf" (OuterVolumeSpecName: "kube-api-access-shfjf") pod "edc7c2b6-12e1-4a6d-a9cd-a691fd2c04fe" (UID: "edc7c2b6-12e1-4a6d-a9cd-a691fd2c04fe"). InnerVolumeSpecName "kube-api-access-shfjf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 19:04:39.682641 kubelet[2662]: I0209 19:04:39.682604 2662 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/edc7c2b6-12e1-4a6d-a9cd-a691fd2c04fe-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "edc7c2b6-12e1-4a6d-a9cd-a691fd2c04fe" (UID: "edc7c2b6-12e1-4a6d-a9cd-a691fd2c04fe"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 19:04:39.760943 kubelet[2662]: I0209 19:04:39.760892 2662 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/edc7c2b6-12e1-4a6d-a9cd-a691fd2c04fe-host-proc-sys-kernel\") on node \"ip-172-31-31-36\" DevicePath \"\"" Feb 9 19:04:39.760943 kubelet[2662]: I0209 19:04:39.760939 2662 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-shfjf\" (UniqueName: \"kubernetes.io/projected/edc7c2b6-12e1-4a6d-a9cd-a691fd2c04fe-kube-api-access-shfjf\") on node \"ip-172-31-31-36\" DevicePath \"\"" Feb 9 19:04:39.761422 kubelet[2662]: I0209 19:04:39.760985 2662 reconciler_common.go:295] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/edc7c2b6-12e1-4a6d-a9cd-a691fd2c04fe-xtables-lock\") on node \"ip-172-31-31-36\" DevicePath \"\"" Feb 9 19:04:39.761422 kubelet[2662]: I0209 19:04:39.761043 2662 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/edc7c2b6-12e1-4a6d-a9cd-a691fd2c04fe-host-proc-sys-net\") on node \"ip-172-31-31-36\" DevicePath \"\"" Feb 9 19:04:39.761422 kubelet[2662]: I0209 19:04:39.761059 2662 reconciler_common.go:295] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/edc7c2b6-12e1-4a6d-a9cd-a691fd2c04fe-hostproc\") on node \"ip-172-31-31-36\" DevicePath \"\"" Feb 9 19:04:39.761422 kubelet[2662]: I0209 19:04:39.761072 2662 reconciler_common.go:295] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/edc7c2b6-12e1-4a6d-a9cd-a691fd2c04fe-cilium-run\") on node \"ip-172-31-31-36\" DevicePath \"\"" Feb 9 19:04:39.761422 kubelet[2662]: I0209 19:04:39.761165 2662 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-gjvgn\" (UniqueName: \"kubernetes.io/projected/fb7e20d8-3f8c-4f65-a0ab-bd3e293632ec-kube-api-access-gjvgn\") on node \"ip-172-31-31-36\" DevicePath \"\"" Feb 9 19:04:39.761422 kubelet[2662]: I0209 19:04:39.761213 2662 reconciler_common.go:295] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/edc7c2b6-12e1-4a6d-a9cd-a691fd2c04fe-clustermesh-secrets\") on node \"ip-172-31-31-36\" DevicePath \"\"" Feb 9 19:04:39.761422 kubelet[2662]: I0209 19:04:39.761233 2662 reconciler_common.go:295] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/edc7c2b6-12e1-4a6d-a9cd-a691fd2c04fe-bpf-maps\") on node \"ip-172-31-31-36\" DevicePath \"\"" Feb 9 19:04:39.761422 kubelet[2662]: I0209 19:04:39.761246 2662 reconciler_common.go:295] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/edc7c2b6-12e1-4a6d-a9cd-a691fd2c04fe-cilium-cgroup\") on node \"ip-172-31-31-36\" DevicePath \"\"" Feb 9 19:04:39.761752 kubelet[2662]: I0209 19:04:39.761262 2662 reconciler_common.go:295] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fb7e20d8-3f8c-4f65-a0ab-bd3e293632ec-cilium-config-path\") on node \"ip-172-31-31-36\" DevicePath \"\"" Feb 9 
19:04:39.761752 kubelet[2662]: I0209 19:04:39.761308 2662 reconciler_common.go:295] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/edc7c2b6-12e1-4a6d-a9cd-a691fd2c04fe-cilium-config-path\") on node \"ip-172-31-31-36\" DevicePath \"\"" Feb 9 19:04:39.761752 kubelet[2662]: I0209 19:04:39.761322 2662 reconciler_common.go:295] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/edc7c2b6-12e1-4a6d-a9cd-a691fd2c04fe-etc-cni-netd\") on node \"ip-172-31-31-36\" DevicePath \"\"" Feb 9 19:04:39.761752 kubelet[2662]: I0209 19:04:39.761336 2662 reconciler_common.go:295] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/edc7c2b6-12e1-4a6d-a9cd-a691fd2c04fe-cni-path\") on node \"ip-172-31-31-36\" DevicePath \"\"" Feb 9 19:04:39.761752 kubelet[2662]: I0209 19:04:39.761402 2662 reconciler_common.go:295] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/edc7c2b6-12e1-4a6d-a9cd-a691fd2c04fe-lib-modules\") on node \"ip-172-31-31-36\" DevicePath \"\"" Feb 9 19:04:39.761752 kubelet[2662]: I0209 19:04:39.761422 2662 reconciler_common.go:295] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/edc7c2b6-12e1-4a6d-a9cd-a691fd2c04fe-hubble-tls\") on node \"ip-172-31-31-36\" DevicePath \"\"" Feb 9 19:04:39.905613 kubelet[2662]: I0209 19:04:39.905582 2662 scope.go:115] "RemoveContainer" containerID="a00368b6cc56c31904c273a4f86ac10f4df5e5345250fefe271cfd0efe81be14" Feb 9 19:04:39.909669 env[1647]: time="2024-02-09T19:04:39.909270784Z" level=info msg="RemoveContainer for \"a00368b6cc56c31904c273a4f86ac10f4df5e5345250fefe271cfd0efe81be14\"" Feb 9 19:04:39.917137 systemd[1]: Removed slice kubepods-besteffort-podfb7e20d8_3f8c_4f65_a0ab_bd3e293632ec.slice. Feb 9 19:04:39.926441 systemd[1]: Removed slice kubepods-burstable-podedc7c2b6_12e1_4a6d_a9cd_a691fd2c04fe.slice. Feb 9 19:04:39.926574 systemd[1]: kubepods-burstable-podedc7c2b6_12e1_4a6d_a9cd_a691fd2c04fe.slice: Consumed 9.110s CPU time. 
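The UnmountVolume sequence above tears down the cilium pod's mounts: the host-path volumes (hostproc, cilium-run, bpf-maps, cni-path, etc-cni-netd, lib-modules, xtables-lock, ...) detach trivially, while the configmap, secret, and projected volumes have real on-disk state to clean up (hence the clearQuota warnings). The log records only the volume names; a sketch of how such host-path volumes are declared, with paths filled in from the usual Cilium chart defaults (an assumption, since the paths themselves never appear in the log):

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// hostPathVolume builds one entry of the kind the reconciler is tearing
// down above. Volume names come from the log; paths are assumed defaults.
func hostPathVolume(name, path string) corev1.Volume {
	return corev1.Volume{
		Name: name,
		VolumeSource: corev1.VolumeSource{
			HostPath: &corev1.HostPathVolumeSource{Path: path},
		},
	}
}

func main() {
	vols := []corev1.Volume{
		hostPathVolume("hostproc", "/proc"),
		hostPathVolume("cilium-run", "/var/run/cilium"),
		hostPathVolume("bpf-maps", "/sys/fs/bpf"),
		hostPathVolume("cni-path", "/opt/cni/bin"),
		hostPathVolume("etc-cni-netd", "/etc/cni/net.d"),
		hostPathVolume("lib-modules", "/lib/modules"),
		hostPathVolume("xtables-lock", "/run/xtables.lock"),
	}
	for _, v := range vols {
		fmt.Println(v.Name, "->", v.HostPath.Path)
	}
}
```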
Feb 9 19:04:39.970764 env[1647]: time="2024-02-09T19:04:39.970616879Z" level=info msg="RemoveContainer for \"a00368b6cc56c31904c273a4f86ac10f4df5e5345250fefe271cfd0efe81be14\" returns successfully" Feb 9 19:04:39.971479 kubelet[2662]: I0209 19:04:39.971457 2662 scope.go:115] "RemoveContainer" containerID="a00368b6cc56c31904c273a4f86ac10f4df5e5345250fefe271cfd0efe81be14" Feb 9 19:04:39.972252 env[1647]: time="2024-02-09T19:04:39.972156145Z" level=error msg="ContainerStatus for \"a00368b6cc56c31904c273a4f86ac10f4df5e5345250fefe271cfd0efe81be14\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a00368b6cc56c31904c273a4f86ac10f4df5e5345250fefe271cfd0efe81be14\": not found" Feb 9 19:04:39.975520 kubelet[2662]: E0209 19:04:39.975497 2662 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a00368b6cc56c31904c273a4f86ac10f4df5e5345250fefe271cfd0efe81be14\": not found" containerID="a00368b6cc56c31904c273a4f86ac10f4df5e5345250fefe271cfd0efe81be14" Feb 9 19:04:39.976320 kubelet[2662]: I0209 19:04:39.976305 2662 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:a00368b6cc56c31904c273a4f86ac10f4df5e5345250fefe271cfd0efe81be14} err="failed to get container status \"a00368b6cc56c31904c273a4f86ac10f4df5e5345250fefe271cfd0efe81be14\": rpc error: code = NotFound desc = an error occurred when try to find container \"a00368b6cc56c31904c273a4f86ac10f4df5e5345250fefe271cfd0efe81be14\": not found" Feb 9 19:04:39.976591 kubelet[2662]: I0209 19:04:39.976577 2662 scope.go:115] "RemoveContainer" containerID="6184820d6c0d012855bb21a8de450ae817822dc2970098456db1fef3ebdd32de" Feb 9 19:04:39.978253 env[1647]: time="2024-02-09T19:04:39.978214207Z" level=info msg="RemoveContainer for \"6184820d6c0d012855bb21a8de450ae817822dc2970098456db1fef3ebdd32de\"" Feb 9 19:04:39.990041 env[1647]: time="2024-02-09T19:04:39.989992403Z" level=info msg="RemoveContainer for \"6184820d6c0d012855bb21a8de450ae817822dc2970098456db1fef3ebdd32de\" returns successfully" Feb 9 19:04:39.990426 kubelet[2662]: I0209 19:04:39.990407 2662 scope.go:115] "RemoveContainer" containerID="1dcc2c6d955d4594cd6ece2c85d50f499c3c136fa3ab508a2b47884f48207642" Feb 9 19:04:39.994015 env[1647]: time="2024-02-09T19:04:39.993973190Z" level=info msg="RemoveContainer for \"1dcc2c6d955d4594cd6ece2c85d50f499c3c136fa3ab508a2b47884f48207642\"" Feb 9 19:04:40.005981 env[1647]: time="2024-02-09T19:04:40.005427471Z" level=info msg="RemoveContainer for \"1dcc2c6d955d4594cd6ece2c85d50f499c3c136fa3ab508a2b47884f48207642\" returns successfully" Feb 9 19:04:40.013715 kubelet[2662]: I0209 19:04:40.013250 2662 scope.go:115] "RemoveContainer" containerID="ded5b40b05e08b4b85eb9a7e5126b1beb0300a9ae67edb5578b7fcbbb8cd7eea" Feb 9 19:04:40.016072 env[1647]: time="2024-02-09T19:04:40.016030545Z" level=info msg="RemoveContainer for \"ded5b40b05e08b4b85eb9a7e5126b1beb0300a9ae67edb5578b7fcbbb8cd7eea\"" Feb 9 19:04:40.024844 env[1647]: time="2024-02-09T19:04:40.024781403Z" level=info msg="RemoveContainer for \"ded5b40b05e08b4b85eb9a7e5126b1beb0300a9ae67edb5578b7fcbbb8cd7eea\" returns successfully" Feb 9 19:04:40.025480 kubelet[2662]: I0209 19:04:40.025458 2662 scope.go:115] "RemoveContainer" containerID="2c80512e3d8b7fadffd7bf4d071f1ec03e5046f652870dec3929d24ea7542e61" Feb 9 19:04:40.029244 env[1647]: time="2024-02-09T19:04:40.029206134Z" level=info msg="RemoveContainer for 
\"2c80512e3d8b7fadffd7bf4d071f1ec03e5046f652870dec3929d24ea7542e61\"" Feb 9 19:04:40.034520 env[1647]: time="2024-02-09T19:04:40.034473621Z" level=info msg="RemoveContainer for \"2c80512e3d8b7fadffd7bf4d071f1ec03e5046f652870dec3929d24ea7542e61\" returns successfully" Feb 9 19:04:40.035289 kubelet[2662]: I0209 19:04:40.035162 2662 scope.go:115] "RemoveContainer" containerID="d33b674dac2deafc141f8c31ec41c8fb124977d812f16dcbac8a7d903e741fbc" Feb 9 19:04:40.037099 env[1647]: time="2024-02-09T19:04:40.036975823Z" level=info msg="RemoveContainer for \"d33b674dac2deafc141f8c31ec41c8fb124977d812f16dcbac8a7d903e741fbc\"" Feb 9 19:04:40.043549 env[1647]: time="2024-02-09T19:04:40.043508144Z" level=info msg="RemoveContainer for \"d33b674dac2deafc141f8c31ec41c8fb124977d812f16dcbac8a7d903e741fbc\" returns successfully" Feb 9 19:04:40.044498 kubelet[2662]: I0209 19:04:40.044258 2662 scope.go:115] "RemoveContainer" containerID="6184820d6c0d012855bb21a8de450ae817822dc2970098456db1fef3ebdd32de" Feb 9 19:04:40.045177 env[1647]: time="2024-02-09T19:04:40.045102635Z" level=error msg="ContainerStatus for \"6184820d6c0d012855bb21a8de450ae817822dc2970098456db1fef3ebdd32de\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6184820d6c0d012855bb21a8de450ae817822dc2970098456db1fef3ebdd32de\": not found" Feb 9 19:04:40.045321 kubelet[2662]: E0209 19:04:40.045306 2662 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6184820d6c0d012855bb21a8de450ae817822dc2970098456db1fef3ebdd32de\": not found" containerID="6184820d6c0d012855bb21a8de450ae817822dc2970098456db1fef3ebdd32de" Feb 9 19:04:40.045434 kubelet[2662]: I0209 19:04:40.045358 2662 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:6184820d6c0d012855bb21a8de450ae817822dc2970098456db1fef3ebdd32de} err="failed to get container status \"6184820d6c0d012855bb21a8de450ae817822dc2970098456db1fef3ebdd32de\": rpc error: code = NotFound desc = an error occurred when try to find container \"6184820d6c0d012855bb21a8de450ae817822dc2970098456db1fef3ebdd32de\": not found" Feb 9 19:04:40.045434 kubelet[2662]: I0209 19:04:40.045376 2662 scope.go:115] "RemoveContainer" containerID="1dcc2c6d955d4594cd6ece2c85d50f499c3c136fa3ab508a2b47884f48207642" Feb 9 19:04:40.045786 env[1647]: time="2024-02-09T19:04:40.045713641Z" level=error msg="ContainerStatus for \"1dcc2c6d955d4594cd6ece2c85d50f499c3c136fa3ab508a2b47884f48207642\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1dcc2c6d955d4594cd6ece2c85d50f499c3c136fa3ab508a2b47884f48207642\": not found" Feb 9 19:04:40.045919 kubelet[2662]: E0209 19:04:40.045900 2662 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1dcc2c6d955d4594cd6ece2c85d50f499c3c136fa3ab508a2b47884f48207642\": not found" containerID="1dcc2c6d955d4594cd6ece2c85d50f499c3c136fa3ab508a2b47884f48207642" Feb 9 19:04:40.046039 kubelet[2662]: I0209 19:04:40.045937 2662 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:1dcc2c6d955d4594cd6ece2c85d50f499c3c136fa3ab508a2b47884f48207642} err="failed to get container status \"1dcc2c6d955d4594cd6ece2c85d50f499c3c136fa3ab508a2b47884f48207642\": rpc error: code = NotFound desc = an error occurred when try to find container 
\"1dcc2c6d955d4594cd6ece2c85d50f499c3c136fa3ab508a2b47884f48207642\": not found" Feb 9 19:04:40.046039 kubelet[2662]: I0209 19:04:40.045952 2662 scope.go:115] "RemoveContainer" containerID="ded5b40b05e08b4b85eb9a7e5126b1beb0300a9ae67edb5578b7fcbbb8cd7eea" Feb 9 19:04:40.046425 env[1647]: time="2024-02-09T19:04:40.046370139Z" level=error msg="ContainerStatus for \"ded5b40b05e08b4b85eb9a7e5126b1beb0300a9ae67edb5578b7fcbbb8cd7eea\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ded5b40b05e08b4b85eb9a7e5126b1beb0300a9ae67edb5578b7fcbbb8cd7eea\": not found" Feb 9 19:04:40.046558 kubelet[2662]: E0209 19:04:40.046548 2662 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ded5b40b05e08b4b85eb9a7e5126b1beb0300a9ae67edb5578b7fcbbb8cd7eea\": not found" containerID="ded5b40b05e08b4b85eb9a7e5126b1beb0300a9ae67edb5578b7fcbbb8cd7eea" Feb 9 19:04:40.046642 kubelet[2662]: I0209 19:04:40.046579 2662 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:ded5b40b05e08b4b85eb9a7e5126b1beb0300a9ae67edb5578b7fcbbb8cd7eea} err="failed to get container status \"ded5b40b05e08b4b85eb9a7e5126b1beb0300a9ae67edb5578b7fcbbb8cd7eea\": rpc error: code = NotFound desc = an error occurred when try to find container \"ded5b40b05e08b4b85eb9a7e5126b1beb0300a9ae67edb5578b7fcbbb8cd7eea\": not found" Feb 9 19:04:40.046642 kubelet[2662]: I0209 19:04:40.046593 2662 scope.go:115] "RemoveContainer" containerID="2c80512e3d8b7fadffd7bf4d071f1ec03e5046f652870dec3929d24ea7542e61" Feb 9 19:04:40.047596 kubelet[2662]: E0209 19:04:40.047098 2662 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2c80512e3d8b7fadffd7bf4d071f1ec03e5046f652870dec3929d24ea7542e61\": not found" containerID="2c80512e3d8b7fadffd7bf4d071f1ec03e5046f652870dec3929d24ea7542e61" Feb 9 19:04:40.047596 kubelet[2662]: I0209 19:04:40.047124 2662 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:2c80512e3d8b7fadffd7bf4d071f1ec03e5046f652870dec3929d24ea7542e61} err="failed to get container status \"2c80512e3d8b7fadffd7bf4d071f1ec03e5046f652870dec3929d24ea7542e61\": rpc error: code = NotFound desc = an error occurred when try to find container \"2c80512e3d8b7fadffd7bf4d071f1ec03e5046f652870dec3929d24ea7542e61\": not found" Feb 9 19:04:40.047596 kubelet[2662]: I0209 19:04:40.047133 2662 scope.go:115] "RemoveContainer" containerID="d33b674dac2deafc141f8c31ec41c8fb124977d812f16dcbac8a7d903e741fbc" Feb 9 19:04:40.047596 kubelet[2662]: E0209 19:04:40.047586 2662 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d33b674dac2deafc141f8c31ec41c8fb124977d812f16dcbac8a7d903e741fbc\": not found" containerID="d33b674dac2deafc141f8c31ec41c8fb124977d812f16dcbac8a7d903e741fbc" Feb 9 19:04:40.047999 env[1647]: time="2024-02-09T19:04:40.046885940Z" level=error msg="ContainerStatus for \"2c80512e3d8b7fadffd7bf4d071f1ec03e5046f652870dec3929d24ea7542e61\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2c80512e3d8b7fadffd7bf4d071f1ec03e5046f652870dec3929d24ea7542e61\": not found" Feb 9 19:04:40.047999 env[1647]: time="2024-02-09T19:04:40.047414447Z" level=error msg="ContainerStatus for 
\"d33b674dac2deafc141f8c31ec41c8fb124977d812f16dcbac8a7d903e741fbc\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d33b674dac2deafc141f8c31ec41c8fb124977d812f16dcbac8a7d903e741fbc\": not found" Feb 9 19:04:40.048513 kubelet[2662]: I0209 19:04:40.047614 2662 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:d33b674dac2deafc141f8c31ec41c8fb124977d812f16dcbac8a7d903e741fbc} err="failed to get container status \"d33b674dac2deafc141f8c31ec41c8fb124977d812f16dcbac8a7d903e741fbc\": rpc error: code = NotFound desc = an error occurred when try to find container \"d33b674dac2deafc141f8c31ec41c8fb124977d812f16dcbac8a7d903e741fbc\": not found" Feb 9 19:04:40.158777 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-56c44e9f9b8f76f9f6e0c1414753147d3ace259e52085fa5340984947564a6aa-rootfs.mount: Deactivated successfully. Feb 9 19:04:40.158918 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-78b51809e3d2e4a147c89eac1a10801e81f7195d34c9c4b2b32e9e68676ba28a-rootfs.mount: Deactivated successfully. Feb 9 19:04:40.159002 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-78b51809e3d2e4a147c89eac1a10801e81f7195d34c9c4b2b32e9e68676ba28a-shm.mount: Deactivated successfully. Feb 9 19:04:40.159086 systemd[1]: var-lib-kubelet-pods-edc7c2b6\x2d12e1\x2d4a6d\x2da9cd\x2da691fd2c04fe-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dshfjf.mount: Deactivated successfully. Feb 9 19:04:40.159176 systemd[1]: var-lib-kubelet-pods-fb7e20d8\x2d3f8c\x2d4f65\x2da0ab\x2dbd3e293632ec-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dgjvgn.mount: Deactivated successfully. Feb 9 19:04:40.159267 systemd[1]: var-lib-kubelet-pods-edc7c2b6\x2d12e1\x2d4a6d\x2da9cd\x2da691fd2c04fe-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 9 19:04:40.159392 systemd[1]: var-lib-kubelet-pods-edc7c2b6\x2d12e1\x2d4a6d\x2da9cd\x2da691fd2c04fe-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 9 19:04:41.035093 sshd[4695]: pam_unix(sshd:session): session closed for user core Feb 9 19:04:41.050160 systemd[1]: sshd@27-172.31.31.36:22-139.178.68.195:33536.service: Deactivated successfully. Feb 9 19:04:41.052449 systemd[1]: session-28.scope: Deactivated successfully. Feb 9 19:04:41.053388 systemd-logind[1634]: Session 28 logged out. Waiting for processes to exit. Feb 9 19:04:41.054718 systemd-logind[1634]: Removed session 28. Feb 9 19:04:41.062903 systemd[1]: Started sshd@28-172.31.31.36:22-139.178.68.195:33540.service. Feb 9 19:04:41.257964 sshd[4865]: Accepted publickey for core from 139.178.68.195 port 33540 ssh2: RSA SHA256:kZCGRB9AT+jVFxeaX4/tO2T0hB3bd3sNSBeK3Rz6bcg Feb 9 19:04:41.259684 sshd[4865]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:04:41.268548 systemd[1]: Started session-29.scope. Feb 9 19:04:41.271107 systemd-logind[1634]: New session 29 of user core. 
Feb 9 19:04:41.342187 env[1647]: time="2024-02-09T19:04:41.340716339Z" level=info msg="StopPodSandbox for \"56c44e9f9b8f76f9f6e0c1414753147d3ace259e52085fa5340984947564a6aa\"" Feb 9 19:04:41.342187 env[1647]: time="2024-02-09T19:04:41.340908859Z" level=info msg="TearDown network for sandbox \"56c44e9f9b8f76f9f6e0c1414753147d3ace259e52085fa5340984947564a6aa\" successfully" Feb 9 19:04:41.342187 env[1647]: time="2024-02-09T19:04:41.340962265Z" level=info msg="StopPodSandbox for \"56c44e9f9b8f76f9f6e0c1414753147d3ace259e52085fa5340984947564a6aa\" returns successfully" Feb 9 19:04:41.342187 env[1647]: time="2024-02-09T19:04:41.340982116Z" level=info msg="StopContainer for \"6184820d6c0d012855bb21a8de450ae817822dc2970098456db1fef3ebdd32de\" with timeout 1 (s)" Feb 9 19:04:41.342187 env[1647]: time="2024-02-09T19:04:41.341111227Z" level=error msg="StopContainer for \"6184820d6c0d012855bb21a8de450ae817822dc2970098456db1fef3ebdd32de\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6184820d6c0d012855bb21a8de450ae817822dc2970098456db1fef3ebdd32de\": not found" Feb 9 19:04:41.342981 kubelet[2662]: E0209 19:04:41.342960 2662 remote_runtime.go:349] "StopContainer from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6184820d6c0d012855bb21a8de450ae817822dc2970098456db1fef3ebdd32de\": not found" containerID="6184820d6c0d012855bb21a8de450ae817822dc2970098456db1fef3ebdd32de" Feb 9 19:04:41.344148 env[1647]: time="2024-02-09T19:04:41.343768153Z" level=info msg="StopPodSandbox for \"78b51809e3d2e4a147c89eac1a10801e81f7195d34c9c4b2b32e9e68676ba28a\"" Feb 9 19:04:41.344148 env[1647]: time="2024-02-09T19:04:41.344052593Z" level=info msg="TearDown network for sandbox \"78b51809e3d2e4a147c89eac1a10801e81f7195d34c9c4b2b32e9e68676ba28a\" successfully" Feb 9 19:04:41.344148 env[1647]: time="2024-02-09T19:04:41.344102201Z" level=info msg="StopPodSandbox for \"78b51809e3d2e4a147c89eac1a10801e81f7195d34c9c4b2b32e9e68676ba28a\" returns successfully" Feb 9 19:04:41.345664 kubelet[2662]: I0209 19:04:41.345638 2662 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=edc7c2b6-12e1-4a6d-a9cd-a691fd2c04fe path="/var/lib/kubelet/pods/edc7c2b6-12e1-4a6d-a9cd-a691fd2c04fe/volumes" Feb 9 19:04:41.347526 kubelet[2662]: I0209 19:04:41.347500 2662 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=fb7e20d8-3f8c-4f65-a0ab-bd3e293632ec path="/var/lib/kubelet/pods/fb7e20d8-3f8c-4f65-a0ab-bd3e293632ec/volumes" Feb 9 19:04:42.020498 sshd[4865]: pam_unix(sshd:session): session closed for user core Feb 9 19:04:42.026066 systemd-logind[1634]: Session 29 logged out. Waiting for processes to exit. Feb 9 19:04:42.029228 systemd[1]: sshd@28-172.31.31.36:22-139.178.68.195:33540.service: Deactivated successfully. Feb 9 19:04:42.030297 systemd[1]: session-29.scope: Deactivated successfully. Feb 9 19:04:42.032760 systemd-logind[1634]: Removed session 29. Feb 9 19:04:42.047063 systemd[1]: Started sshd@29-172.31.31.36:22-139.178.68.195:33548.service. 
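The "Cleaned up orphaned pod volumes dir" entries above are the kubelet's orphan sweep: after the API objects are gone, any /var/lib/kubelet/pods/<uid>/volumes directory still on disk is removed. A simplified sketch of that scan (the "is this UID still active" check is stubbed out; the real kubelet consults its pod manager):

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	base := "/var/lib/kubelet/pods"
	entries, err := os.ReadDir(base)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	active := map[string]bool{} // would be filled from the pod manager
	for _, e := range entries {
		if e.IsDir() && !active[e.Name()] {
			fmt.Println("orphan candidate:", filepath.Join(base, e.Name(), "volumes"))
		}
	}
}
```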
Feb 9 19:04:42.072474 kubelet[2662]: I0209 19:04:42.072427 2662 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:04:42.072759 kubelet[2662]: E0209 19:04:42.072741 2662 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="fb7e20d8-3f8c-4f65-a0ab-bd3e293632ec" containerName="cilium-operator" Feb 9 19:04:42.072884 kubelet[2662]: E0209 19:04:42.072872 2662 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="edc7c2b6-12e1-4a6d-a9cd-a691fd2c04fe" containerName="cilium-agent" Feb 9 19:04:42.073097 kubelet[2662]: E0209 19:04:42.073084 2662 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="edc7c2b6-12e1-4a6d-a9cd-a691fd2c04fe" containerName="mount-cgroup" Feb 9 19:04:42.073194 kubelet[2662]: E0209 19:04:42.073185 2662 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="edc7c2b6-12e1-4a6d-a9cd-a691fd2c04fe" containerName="mount-bpf-fs" Feb 9 19:04:42.073287 kubelet[2662]: E0209 19:04:42.073277 2662 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="edc7c2b6-12e1-4a6d-a9cd-a691fd2c04fe" containerName="clean-cilium-state" Feb 9 19:04:42.073386 kubelet[2662]: E0209 19:04:42.073376 2662 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="edc7c2b6-12e1-4a6d-a9cd-a691fd2c04fe" containerName="apply-sysctl-overwrites" Feb 9 19:04:42.073527 kubelet[2662]: I0209 19:04:42.073516 2662 memory_manager.go:346] "RemoveStaleState removing state" podUID="edc7c2b6-12e1-4a6d-a9cd-a691fd2c04fe" containerName="cilium-agent" Feb 9 19:04:42.073624 kubelet[2662]: I0209 19:04:42.073613 2662 memory_manager.go:346] "RemoveStaleState removing state" podUID="fb7e20d8-3f8c-4f65-a0ab-bd3e293632ec" containerName="cilium-operator" Feb 9 19:04:42.087201 systemd[1]: Created slice kubepods-burstable-pod2a46a6c7_00be_4f9d_b5b1_a16b0085c086.slice. 
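The slice names tie the cgroup tree back to pod UIDs: kubepods-burstable-pod2a46a6c7_00be_4f9d_b5b1_a16b0085c086.slice is the new pod's UID 2a46a6c7-00be-4f9d-b5b1-a16b0085c086 with dashes mapped to underscores, nested under its QoS class. A small sketch of that mapping (helper name invented; the output matches the Created/Removed slice entries in this log):

```go
package main

import (
	"fmt"
	"strings"
)

// sliceName reproduces the naming visible in the log: the pod UID with
// dashes mapped to underscores, embedded in a QoS-class slice unit.
func sliceName(qos, uid string) string {
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, strings.ReplaceAll(uid, "-", "_"))
}

func main() {
	// "Created slice kubepods-burstable-pod2a46a6c7_00be_4f9d_b5b1_a16b0085c086.slice"
	fmt.Println(sliceName("burstable", "2a46a6c7-00be-4f9d-b5b1-a16b0085c086"))
	// "Removed slice kubepods-besteffort-podfb7e20d8_3f8c_4f65_a0ab_bd3e293632ec.slice"
	fmt.Println(sliceName("besteffort", "fb7e20d8-3f8c-4f65-a0ab-bd3e293632ec"))
}
```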
Feb 9 19:04:42.177085 kubelet[2662]: I0209 19:04:42.177020 2662 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2a46a6c7-00be-4f9d-b5b1-a16b0085c086-host-proc-sys-net\") pod \"cilium-l9q9b\" (UID: \"2a46a6c7-00be-4f9d-b5b1-a16b0085c086\") " pod="kube-system/cilium-l9q9b" Feb 9 19:04:42.177326 kubelet[2662]: I0209 19:04:42.177311 2662 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ncqsx\" (UniqueName: \"kubernetes.io/projected/2a46a6c7-00be-4f9d-b5b1-a16b0085c086-kube-api-access-ncqsx\") pod \"cilium-l9q9b\" (UID: \"2a46a6c7-00be-4f9d-b5b1-a16b0085c086\") " pod="kube-system/cilium-l9q9b" Feb 9 19:04:42.177462 kubelet[2662]: I0209 19:04:42.177450 2662 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2a46a6c7-00be-4f9d-b5b1-a16b0085c086-cilium-run\") pod \"cilium-l9q9b\" (UID: \"2a46a6c7-00be-4f9d-b5b1-a16b0085c086\") " pod="kube-system/cilium-l9q9b" Feb 9 19:04:42.177559 kubelet[2662]: I0209 19:04:42.177550 2662 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2a46a6c7-00be-4f9d-b5b1-a16b0085c086-hostproc\") pod \"cilium-l9q9b\" (UID: \"2a46a6c7-00be-4f9d-b5b1-a16b0085c086\") " pod="kube-system/cilium-l9q9b" Feb 9 19:04:42.177804 kubelet[2662]: I0209 19:04:42.177787 2662 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/2a46a6c7-00be-4f9d-b5b1-a16b0085c086-cilium-ipsec-secrets\") pod \"cilium-l9q9b\" (UID: \"2a46a6c7-00be-4f9d-b5b1-a16b0085c086\") " pod="kube-system/cilium-l9q9b" Feb 9 19:04:42.177950 kubelet[2662]: I0209 19:04:42.177938 2662 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2a46a6c7-00be-4f9d-b5b1-a16b0085c086-lib-modules\") pod \"cilium-l9q9b\" (UID: \"2a46a6c7-00be-4f9d-b5b1-a16b0085c086\") " pod="kube-system/cilium-l9q9b" Feb 9 19:04:42.178048 kubelet[2662]: I0209 19:04:42.178039 2662 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2a46a6c7-00be-4f9d-b5b1-a16b0085c086-cni-path\") pod \"cilium-l9q9b\" (UID: \"2a46a6c7-00be-4f9d-b5b1-a16b0085c086\") " pod="kube-system/cilium-l9q9b" Feb 9 19:04:42.178143 kubelet[2662]: I0209 19:04:42.178134 2662 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2a46a6c7-00be-4f9d-b5b1-a16b0085c086-bpf-maps\") pod \"cilium-l9q9b\" (UID: \"2a46a6c7-00be-4f9d-b5b1-a16b0085c086\") " pod="kube-system/cilium-l9q9b" Feb 9 19:04:42.178233 kubelet[2662]: I0209 19:04:42.178224 2662 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2a46a6c7-00be-4f9d-b5b1-a16b0085c086-xtables-lock\") pod \"cilium-l9q9b\" (UID: \"2a46a6c7-00be-4f9d-b5b1-a16b0085c086\") " pod="kube-system/cilium-l9q9b" Feb 9 19:04:42.178339 kubelet[2662]: I0209 19:04:42.178328 2662 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: 
\"kubernetes.io/host-path/2a46a6c7-00be-4f9d-b5b1-a16b0085c086-cilium-cgroup\") pod \"cilium-l9q9b\" (UID: \"2a46a6c7-00be-4f9d-b5b1-a16b0085c086\") " pod="kube-system/cilium-l9q9b" Feb 9 19:04:42.178565 kubelet[2662]: I0209 19:04:42.178549 2662 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2a46a6c7-00be-4f9d-b5b1-a16b0085c086-host-proc-sys-kernel\") pod \"cilium-l9q9b\" (UID: \"2a46a6c7-00be-4f9d-b5b1-a16b0085c086\") " pod="kube-system/cilium-l9q9b" Feb 9 19:04:42.178672 kubelet[2662]: I0209 19:04:42.178661 2662 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2a46a6c7-00be-4f9d-b5b1-a16b0085c086-hubble-tls\") pod \"cilium-l9q9b\" (UID: \"2a46a6c7-00be-4f9d-b5b1-a16b0085c086\") " pod="kube-system/cilium-l9q9b" Feb 9 19:04:42.178764 kubelet[2662]: I0209 19:04:42.178754 2662 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2a46a6c7-00be-4f9d-b5b1-a16b0085c086-etc-cni-netd\") pod \"cilium-l9q9b\" (UID: \"2a46a6c7-00be-4f9d-b5b1-a16b0085c086\") " pod="kube-system/cilium-l9q9b" Feb 9 19:04:42.178892 kubelet[2662]: I0209 19:04:42.178881 2662 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2a46a6c7-00be-4f9d-b5b1-a16b0085c086-cilium-config-path\") pod \"cilium-l9q9b\" (UID: \"2a46a6c7-00be-4f9d-b5b1-a16b0085c086\") " pod="kube-system/cilium-l9q9b" Feb 9 19:04:42.179027 kubelet[2662]: I0209 19:04:42.179015 2662 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2a46a6c7-00be-4f9d-b5b1-a16b0085c086-clustermesh-secrets\") pod \"cilium-l9q9b\" (UID: \"2a46a6c7-00be-4f9d-b5b1-a16b0085c086\") " pod="kube-system/cilium-l9q9b" Feb 9 19:04:42.205470 kubelet[2662]: I0209 19:04:42.205446 2662 setters.go:548] "Node became not ready" node="ip-172-31-31-36" condition={Type:Ready Status:False LastHeartbeatTime:2024-02-09 19:04:42.205239212 +0000 UTC m=+153.293160950 LastTransitionTime:2024-02-09 19:04:42.205239212 +0000 UTC m=+153.293160950 Reason:KubeletNotReady Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized} Feb 9 19:04:42.220439 sshd[4878]: Accepted publickey for core from 139.178.68.195 port 33548 ssh2: RSA SHA256:kZCGRB9AT+jVFxeaX4/tO2T0hB3bd3sNSBeK3Rz6bcg Feb 9 19:04:42.223029 sshd[4878]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:04:42.232763 systemd-logind[1634]: New session 30 of user core. Feb 9 19:04:42.233954 systemd[1]: Started session-30.scope. Feb 9 19:04:42.419381 env[1647]: time="2024-02-09T19:04:42.419250353Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-l9q9b,Uid:2a46a6c7-00be-4f9d-b5b1-a16b0085c086,Namespace:kube-system,Attempt:0,}" Feb 9 19:04:42.447521 env[1647]: time="2024-02-09T19:04:42.447434466Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:04:42.447723 env[1647]: time="2024-02-09T19:04:42.447697465Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:04:42.447862 env[1647]: time="2024-02-09T19:04:42.447841113Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:04:42.448132 env[1647]: time="2024-02-09T19:04:42.448102576Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e9960d40bb2291e05986f8eff47600ce7cfeac986993e0c1ba49e6c7cf6764b7 pid=4897 runtime=io.containerd.runc.v2 Feb 9 19:04:42.476445 systemd[1]: Started cri-containerd-e9960d40bb2291e05986f8eff47600ce7cfeac986993e0c1ba49e6c7cf6764b7.scope. Feb 9 19:04:42.516488 sshd[4878]: pam_unix(sshd:session): session closed for user core Feb 9 19:04:42.524182 systemd[1]: sshd@29-172.31.31.36:22-139.178.68.195:33548.service: Deactivated successfully. Feb 9 19:04:42.526046 systemd[1]: session-30.scope: Deactivated successfully. Feb 9 19:04:42.526330 systemd-logind[1634]: Session 30 logged out. Waiting for processes to exit. Feb 9 19:04:42.528104 systemd-logind[1634]: Removed session 30. Feb 9 19:04:42.549420 systemd[1]: Started sshd@30-172.31.31.36:22-139.178.68.195:33564.service. Feb 9 19:04:42.582534 env[1647]: time="2024-02-09T19:04:42.582485732Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-l9q9b,Uid:2a46a6c7-00be-4f9d-b5b1-a16b0085c086,Namespace:kube-system,Attempt:0,} returns sandbox id \"e9960d40bb2291e05986f8eff47600ce7cfeac986993e0c1ba49e6c7cf6764b7\"" Feb 9 19:04:42.588655 env[1647]: time="2024-02-09T19:04:42.588540102Z" level=info msg="CreateContainer within sandbox \"e9960d40bb2291e05986f8eff47600ce7cfeac986993e0c1ba49e6c7cf6764b7\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 9 19:04:42.605399 env[1647]: time="2024-02-09T19:04:42.605328047Z" level=info msg="CreateContainer within sandbox \"e9960d40bb2291e05986f8eff47600ce7cfeac986993e0c1ba49e6c7cf6764b7\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"124997f25ada15ae84fad9e4dde112878abdab7212657dbb384af81418f9a030\"" Feb 9 19:04:42.606326 env[1647]: time="2024-02-09T19:04:42.606261988Z" level=info msg="StartContainer for \"124997f25ada15ae84fad9e4dde112878abdab7212657dbb384af81418f9a030\"" Feb 9 19:04:42.630697 systemd[1]: Started cri-containerd-124997f25ada15ae84fad9e4dde112878abdab7212657dbb384af81418f9a030.scope. Feb 9 19:04:42.644384 systemd[1]: cri-containerd-124997f25ada15ae84fad9e4dde112878abdab7212657dbb384af81418f9a030.scope: Deactivated successfully. 
Feb 9 19:04:42.713518 env[1647]: time="2024-02-09T19:04:42.713409121Z" level=info msg="shim disconnected" id=124997f25ada15ae84fad9e4dde112878abdab7212657dbb384af81418f9a030 Feb 9 19:04:42.713518 env[1647]: time="2024-02-09T19:04:42.713516363Z" level=warning msg="cleaning up after shim disconnected" id=124997f25ada15ae84fad9e4dde112878abdab7212657dbb384af81418f9a030 namespace=k8s.io Feb 9 19:04:42.713847 env[1647]: time="2024-02-09T19:04:42.713529279Z" level=info msg="cleaning up dead shim" Feb 9 19:04:42.724704 env[1647]: time="2024-02-09T19:04:42.724652590Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:04:42Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4962 runtime=io.containerd.runc.v2\ntime=\"2024-02-09T19:04:42Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/124997f25ada15ae84fad9e4dde112878abdab7212657dbb384af81418f9a030/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Feb 9 19:04:42.725059 env[1647]: time="2024-02-09T19:04:42.724938702Z" level=error msg="copy shim log" error="read /proc/self/fd/30: file already closed" Feb 9 19:04:42.726516 env[1647]: time="2024-02-09T19:04:42.726465962Z" level=error msg="Failed to pipe stdout of container \"124997f25ada15ae84fad9e4dde112878abdab7212657dbb384af81418f9a030\"" error="reading from a closed fifo" Feb 9 19:04:42.726723 env[1647]: time="2024-02-09T19:04:42.726681891Z" level=error msg="Failed to pipe stderr of container \"124997f25ada15ae84fad9e4dde112878abdab7212657dbb384af81418f9a030\"" error="reading from a closed fifo" Feb 9 19:04:42.726974 sshd[4928]: Accepted publickey for core from 139.178.68.195 port 33564 ssh2: RSA SHA256:kZCGRB9AT+jVFxeaX4/tO2T0hB3bd3sNSBeK3Rz6bcg Feb 9 19:04:42.728948 sshd[4928]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:04:42.731881 env[1647]: time="2024-02-09T19:04:42.731194785Z" level=error msg="StartContainer for \"124997f25ada15ae84fad9e4dde112878abdab7212657dbb384af81418f9a030\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Feb 9 19:04:42.731992 kubelet[2662]: E0209 19:04:42.731786 2662 remote_runtime.go:326] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="124997f25ada15ae84fad9e4dde112878abdab7212657dbb384af81418f9a030" Feb 9 19:04:42.734342 kubelet[2662]: E0209 19:04:42.734196 2662 kuberuntime_manager.go:872] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Feb 9 19:04:42.734342 kubelet[2662]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Feb 9 19:04:42.734342 kubelet[2662]: rm /hostbin/cilium-mount Feb 9 19:04:42.734342 kubelet[2662]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-ncqsx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-l9q9b_kube-system(2a46a6c7-00be-4f9d-b5b1-a16b0085c086): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Feb 9 19:04:42.735052 kubelet[2662]: E0209 19:04:42.734799 2662 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-l9q9b" podUID=2a46a6c7-00be-4f9d-b5b1-a16b0085c086 Feb 9 19:04:42.738791 systemd[1]: Started session-31.scope. Feb 9 19:04:42.739283 systemd-logind[1634]: New session 31 of user core. Feb 9 19:04:42.924182 env[1647]: time="2024-02-09T19:04:42.924139843Z" level=info msg="StopPodSandbox for \"e9960d40bb2291e05986f8eff47600ce7cfeac986993e0c1ba49e6c7cf6764b7\"" Feb 9 19:04:42.924384 env[1647]: time="2024-02-09T19:04:42.924213534Z" level=info msg="Container to stop \"124997f25ada15ae84fad9e4dde112878abdab7212657dbb384af81418f9a030\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 19:04:42.933566 systemd[1]: cri-containerd-e9960d40bb2291e05986f8eff47600ce7cfeac986993e0c1ba49e6c7cf6764b7.scope: Deactivated successfully. 
Feb 9 19:04:42.983040 env[1647]: time="2024-02-09T19:04:42.982122075Z" level=info msg="shim disconnected" id=e9960d40bb2291e05986f8eff47600ce7cfeac986993e0c1ba49e6c7cf6764b7 Feb 9 19:04:42.983040 env[1647]: time="2024-02-09T19:04:42.982177881Z" level=warning msg="cleaning up after shim disconnected" id=e9960d40bb2291e05986f8eff47600ce7cfeac986993e0c1ba49e6c7cf6764b7 namespace=k8s.io Feb 9 19:04:42.983040 env[1647]: time="2024-02-09T19:04:42.982190461Z" level=info msg="cleaning up dead shim" Feb 9 19:04:42.992791 env[1647]: time="2024-02-09T19:04:42.992735999Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:04:42Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5000 runtime=io.containerd.runc.v2\n" Feb 9 19:04:42.993108 env[1647]: time="2024-02-09T19:04:42.993074098Z" level=info msg="TearDown network for sandbox \"e9960d40bb2291e05986f8eff47600ce7cfeac986993e0c1ba49e6c7cf6764b7\" successfully" Feb 9 19:04:42.993202 env[1647]: time="2024-02-09T19:04:42.993106704Z" level=info msg="StopPodSandbox for \"e9960d40bb2291e05986f8eff47600ce7cfeac986993e0c1ba49e6c7cf6764b7\" returns successfully" Feb 9 19:04:43.187505 kubelet[2662]: I0209 19:04:43.186614 2662 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2a46a6c7-00be-4f9d-b5b1-a16b0085c086-hubble-tls\") pod \"2a46a6c7-00be-4f9d-b5b1-a16b0085c086\" (UID: \"2a46a6c7-00be-4f9d-b5b1-a16b0085c086\") " Feb 9 19:04:43.187505 kubelet[2662]: I0209 19:04:43.186674 2662 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/2a46a6c7-00be-4f9d-b5b1-a16b0085c086-cilium-ipsec-secrets\") pod \"2a46a6c7-00be-4f9d-b5b1-a16b0085c086\" (UID: \"2a46a6c7-00be-4f9d-b5b1-a16b0085c086\") " Feb 9 19:04:43.187505 kubelet[2662]: I0209 19:04:43.186704 2662 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2a46a6c7-00be-4f9d-b5b1-a16b0085c086-lib-modules\") pod \"2a46a6c7-00be-4f9d-b5b1-a16b0085c086\" (UID: \"2a46a6c7-00be-4f9d-b5b1-a16b0085c086\") " Feb 9 19:04:43.187505 kubelet[2662]: I0209 19:04:43.186731 2662 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2a46a6c7-00be-4f9d-b5b1-a16b0085c086-cni-path\") pod \"2a46a6c7-00be-4f9d-b5b1-a16b0085c086\" (UID: \"2a46a6c7-00be-4f9d-b5b1-a16b0085c086\") " Feb 9 19:04:43.187505 kubelet[2662]: I0209 19:04:43.186761 2662 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2a46a6c7-00be-4f9d-b5b1-a16b0085c086-etc-cni-netd\") pod \"2a46a6c7-00be-4f9d-b5b1-a16b0085c086\" (UID: \"2a46a6c7-00be-4f9d-b5b1-a16b0085c086\") " Feb 9 19:04:43.187505 kubelet[2662]: I0209 19:04:43.186798 2662 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2a46a6c7-00be-4f9d-b5b1-a16b0085c086-host-proc-sys-kernel\") pod \"2a46a6c7-00be-4f9d-b5b1-a16b0085c086\" (UID: \"2a46a6c7-00be-4f9d-b5b1-a16b0085c086\") " Feb 9 19:04:43.187929 kubelet[2662]: I0209 19:04:43.186838 2662 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ncqsx\" (UniqueName: \"kubernetes.io/projected/2a46a6c7-00be-4f9d-b5b1-a16b0085c086-kube-api-access-ncqsx\") pod \"2a46a6c7-00be-4f9d-b5b1-a16b0085c086\" (UID: 
\"2a46a6c7-00be-4f9d-b5b1-a16b0085c086\") " Feb 9 19:04:43.187929 kubelet[2662]: I0209 19:04:43.186869 2662 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2a46a6c7-00be-4f9d-b5b1-a16b0085c086-cilium-config-path\") pod \"2a46a6c7-00be-4f9d-b5b1-a16b0085c086\" (UID: \"2a46a6c7-00be-4f9d-b5b1-a16b0085c086\") " Feb 9 19:04:43.187929 kubelet[2662]: I0209 19:04:43.186900 2662 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2a46a6c7-00be-4f9d-b5b1-a16b0085c086-clustermesh-secrets\") pod \"2a46a6c7-00be-4f9d-b5b1-a16b0085c086\" (UID: \"2a46a6c7-00be-4f9d-b5b1-a16b0085c086\") " Feb 9 19:04:43.187929 kubelet[2662]: I0209 19:04:43.186927 2662 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2a46a6c7-00be-4f9d-b5b1-a16b0085c086-hostproc\") pod \"2a46a6c7-00be-4f9d-b5b1-a16b0085c086\" (UID: \"2a46a6c7-00be-4f9d-b5b1-a16b0085c086\") " Feb 9 19:04:43.187929 kubelet[2662]: I0209 19:04:43.186954 2662 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2a46a6c7-00be-4f9d-b5b1-a16b0085c086-xtables-lock\") pod \"2a46a6c7-00be-4f9d-b5b1-a16b0085c086\" (UID: \"2a46a6c7-00be-4f9d-b5b1-a16b0085c086\") " Feb 9 19:04:43.187929 kubelet[2662]: I0209 19:04:43.186981 2662 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2a46a6c7-00be-4f9d-b5b1-a16b0085c086-cilium-cgroup\") pod \"2a46a6c7-00be-4f9d-b5b1-a16b0085c086\" (UID: \"2a46a6c7-00be-4f9d-b5b1-a16b0085c086\") " Feb 9 19:04:43.188285 kubelet[2662]: I0209 19:04:43.187010 2662 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2a46a6c7-00be-4f9d-b5b1-a16b0085c086-host-proc-sys-net\") pod \"2a46a6c7-00be-4f9d-b5b1-a16b0085c086\" (UID: \"2a46a6c7-00be-4f9d-b5b1-a16b0085c086\") " Feb 9 19:04:43.188285 kubelet[2662]: I0209 19:04:43.187037 2662 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2a46a6c7-00be-4f9d-b5b1-a16b0085c086-bpf-maps\") pod \"2a46a6c7-00be-4f9d-b5b1-a16b0085c086\" (UID: \"2a46a6c7-00be-4f9d-b5b1-a16b0085c086\") " Feb 9 19:04:43.188285 kubelet[2662]: I0209 19:04:43.187065 2662 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2a46a6c7-00be-4f9d-b5b1-a16b0085c086-cilium-run\") pod \"2a46a6c7-00be-4f9d-b5b1-a16b0085c086\" (UID: \"2a46a6c7-00be-4f9d-b5b1-a16b0085c086\") " Feb 9 19:04:43.188285 kubelet[2662]: I0209 19:04:43.187135 2662 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2a46a6c7-00be-4f9d-b5b1-a16b0085c086-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "2a46a6c7-00be-4f9d-b5b1-a16b0085c086" (UID: "2a46a6c7-00be-4f9d-b5b1-a16b0085c086"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:04:43.188285 kubelet[2662]: W0209 19:04:43.187592 2662 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/2a46a6c7-00be-4f9d-b5b1-a16b0085c086/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Feb 9 19:04:43.189884 kubelet[2662]: I0209 19:04:43.189845 2662 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2a46a6c7-00be-4f9d-b5b1-a16b0085c086-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "2a46a6c7-00be-4f9d-b5b1-a16b0085c086" (UID: "2a46a6c7-00be-4f9d-b5b1-a16b0085c086"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:04:43.190013 kubelet[2662]: I0209 19:04:43.189912 2662 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2a46a6c7-00be-4f9d-b5b1-a16b0085c086-cni-path" (OuterVolumeSpecName: "cni-path") pod "2a46a6c7-00be-4f9d-b5b1-a16b0085c086" (UID: "2a46a6c7-00be-4f9d-b5b1-a16b0085c086"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:04:43.190013 kubelet[2662]: I0209 19:04:43.189938 2662 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2a46a6c7-00be-4f9d-b5b1-a16b0085c086-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "2a46a6c7-00be-4f9d-b5b1-a16b0085c086" (UID: "2a46a6c7-00be-4f9d-b5b1-a16b0085c086"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:04:43.190013 kubelet[2662]: I0209 19:04:43.189962 2662 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2a46a6c7-00be-4f9d-b5b1-a16b0085c086-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "2a46a6c7-00be-4f9d-b5b1-a16b0085c086" (UID: "2a46a6c7-00be-4f9d-b5b1-a16b0085c086"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:04:43.190210 kubelet[2662]: I0209 19:04:43.190185 2662 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2a46a6c7-00be-4f9d-b5b1-a16b0085c086-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "2a46a6c7-00be-4f9d-b5b1-a16b0085c086" (UID: "2a46a6c7-00be-4f9d-b5b1-a16b0085c086"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:04:43.190272 kubelet[2662]: I0209 19:04:43.190228 2662 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2a46a6c7-00be-4f9d-b5b1-a16b0085c086-hostproc" (OuterVolumeSpecName: "hostproc") pod "2a46a6c7-00be-4f9d-b5b1-a16b0085c086" (UID: "2a46a6c7-00be-4f9d-b5b1-a16b0085c086"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:04:43.190272 kubelet[2662]: I0209 19:04:43.190257 2662 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2a46a6c7-00be-4f9d-b5b1-a16b0085c086-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "2a46a6c7-00be-4f9d-b5b1-a16b0085c086" (UID: "2a46a6c7-00be-4f9d-b5b1-a16b0085c086"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:04:43.190395 kubelet[2662]: I0209 19:04:43.190283 2662 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2a46a6c7-00be-4f9d-b5b1-a16b0085c086-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "2a46a6c7-00be-4f9d-b5b1-a16b0085c086" (UID: "2a46a6c7-00be-4f9d-b5b1-a16b0085c086"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:04:43.190395 kubelet[2662]: I0209 19:04:43.190310 2662 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2a46a6c7-00be-4f9d-b5b1-a16b0085c086-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "2a46a6c7-00be-4f9d-b5b1-a16b0085c086" (UID: "2a46a6c7-00be-4f9d-b5b1-a16b0085c086"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:04:43.191528 kubelet[2662]: I0209 19:04:43.191485 2662 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2a46a6c7-00be-4f9d-b5b1-a16b0085c086-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "2a46a6c7-00be-4f9d-b5b1-a16b0085c086" (UID: "2a46a6c7-00be-4f9d-b5b1-a16b0085c086"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 9 19:04:43.193157 kubelet[2662]: I0209 19:04:43.193126 2662 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a46a6c7-00be-4f9d-b5b1-a16b0085c086-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "2a46a6c7-00be-4f9d-b5b1-a16b0085c086" (UID: "2a46a6c7-00be-4f9d-b5b1-a16b0085c086"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 9 19:04:43.195115 kubelet[2662]: I0209 19:04:43.195081 2662 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a46a6c7-00be-4f9d-b5b1-a16b0085c086-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "2a46a6c7-00be-4f9d-b5b1-a16b0085c086" (UID: "2a46a6c7-00be-4f9d-b5b1-a16b0085c086"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 19:04:43.198087 kubelet[2662]: I0209 19:04:43.198045 2662 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a46a6c7-00be-4f9d-b5b1-a16b0085c086-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "2a46a6c7-00be-4f9d-b5b1-a16b0085c086" (UID: "2a46a6c7-00be-4f9d-b5b1-a16b0085c086"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 9 19:04:43.199106 kubelet[2662]: I0209 19:04:43.199072 2662 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a46a6c7-00be-4f9d-b5b1-a16b0085c086-kube-api-access-ncqsx" (OuterVolumeSpecName: "kube-api-access-ncqsx") pod "2a46a6c7-00be-4f9d-b5b1-a16b0085c086" (UID: "2a46a6c7-00be-4f9d-b5b1-a16b0085c086"). InnerVolumeSpecName "kube-api-access-ncqsx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 19:04:43.288431 kubelet[2662]: I0209 19:04:43.287474 2662 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2a46a6c7-00be-4f9d-b5b1-a16b0085c086-host-proc-sys-net\") on node \"ip-172-31-31-36\" DevicePath \"\"" Feb 9 19:04:43.288431 kubelet[2662]: I0209 19:04:43.287511 2662 reconciler_common.go:295] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2a46a6c7-00be-4f9d-b5b1-a16b0085c086-bpf-maps\") on node \"ip-172-31-31-36\" DevicePath \"\"" Feb 9 19:04:43.288431 kubelet[2662]: I0209 19:04:43.287527 2662 reconciler_common.go:295] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2a46a6c7-00be-4f9d-b5b1-a16b0085c086-cilium-run\") on node \"ip-172-31-31-36\" DevicePath \"\"" Feb 9 19:04:43.288431 kubelet[2662]: I0209 19:04:43.287541 2662 reconciler_common.go:295] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2a46a6c7-00be-4f9d-b5b1-a16b0085c086-hubble-tls\") on node \"ip-172-31-31-36\" DevicePath \"\"" Feb 9 19:04:43.288431 kubelet[2662]: I0209 19:04:43.287559 2662 reconciler_common.go:295] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/2a46a6c7-00be-4f9d-b5b1-a16b0085c086-cilium-ipsec-secrets\") on node \"ip-172-31-31-36\" DevicePath \"\"" Feb 9 19:04:43.288431 kubelet[2662]: I0209 19:04:43.287575 2662 reconciler_common.go:295] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2a46a6c7-00be-4f9d-b5b1-a16b0085c086-lib-modules\") on node \"ip-172-31-31-36\" DevicePath \"\"" Feb 9 19:04:43.288431 kubelet[2662]: I0209 19:04:43.287588 2662 reconciler_common.go:295] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2a46a6c7-00be-4f9d-b5b1-a16b0085c086-cni-path\") on node \"ip-172-31-31-36\" DevicePath \"\"" Feb 9 19:04:43.288431 kubelet[2662]: I0209 19:04:43.287601 2662 reconciler_common.go:295] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2a46a6c7-00be-4f9d-b5b1-a16b0085c086-etc-cni-netd\") on node \"ip-172-31-31-36\" DevicePath \"\"" Feb 9 19:04:43.289302 kubelet[2662]: I0209 19:04:43.287616 2662 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2a46a6c7-00be-4f9d-b5b1-a16b0085c086-host-proc-sys-kernel\") on node \"ip-172-31-31-36\" DevicePath \"\"" Feb 9 19:04:43.289302 kubelet[2662]: I0209 19:04:43.287631 2662 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-ncqsx\" (UniqueName: \"kubernetes.io/projected/2a46a6c7-00be-4f9d-b5b1-a16b0085c086-kube-api-access-ncqsx\") on node \"ip-172-31-31-36\" DevicePath \"\"" Feb 9 19:04:43.289302 kubelet[2662]: I0209 19:04:43.287645 2662 reconciler_common.go:295] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2a46a6c7-00be-4f9d-b5b1-a16b0085c086-cilium-config-path\") on node \"ip-172-31-31-36\" DevicePath \"\"" Feb 9 19:04:43.289302 kubelet[2662]: I0209 19:04:43.287657 2662 reconciler_common.go:295] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2a46a6c7-00be-4f9d-b5b1-a16b0085c086-hostproc\") on node \"ip-172-31-31-36\" DevicePath \"\"" Feb 9 19:04:43.289302 kubelet[2662]: I0209 19:04:43.287671 2662 reconciler_common.go:295] "Volume detached for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/2a46a6c7-00be-4f9d-b5b1-a16b0085c086-xtables-lock\") on node \"ip-172-31-31-36\" DevicePath \"\"" Feb 9 19:04:43.289302 kubelet[2662]: I0209 19:04:43.287684 2662 reconciler_common.go:295] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2a46a6c7-00be-4f9d-b5b1-a16b0085c086-cilium-cgroup\") on node \"ip-172-31-31-36\" DevicePath \"\"" Feb 9 19:04:43.289302 kubelet[2662]: I0209 19:04:43.287701 2662 reconciler_common.go:295] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2a46a6c7-00be-4f9d-b5b1-a16b0085c086-clustermesh-secrets\") on node \"ip-172-31-31-36\" DevicePath \"\"" Feb 9 19:04:43.293843 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e9960d40bb2291e05986f8eff47600ce7cfeac986993e0c1ba49e6c7cf6764b7-shm.mount: Deactivated successfully. Feb 9 19:04:43.293980 systemd[1]: var-lib-kubelet-pods-2a46a6c7\x2d00be\x2d4f9d\x2db5b1\x2da16b0085c086-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dncqsx.mount: Deactivated successfully. Feb 9 19:04:43.294309 systemd[1]: var-lib-kubelet-pods-2a46a6c7\x2d00be\x2d4f9d\x2db5b1\x2da16b0085c086-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 9 19:04:43.294426 systemd[1]: var-lib-kubelet-pods-2a46a6c7\x2d00be\x2d4f9d\x2db5b1\x2da16b0085c086-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Feb 9 19:04:43.294504 systemd[1]: var-lib-kubelet-pods-2a46a6c7\x2d00be\x2d4f9d\x2db5b1\x2da16b0085c086-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 9 19:04:43.345195 systemd[1]: Removed slice kubepods-burstable-pod2a46a6c7_00be_4f9d_b5b1_a16b0085c086.slice. Feb 9 19:04:43.927462 kubelet[2662]: I0209 19:04:43.927427 2662 scope.go:115] "RemoveContainer" containerID="124997f25ada15ae84fad9e4dde112878abdab7212657dbb384af81418f9a030" Feb 9 19:04:43.936037 env[1647]: time="2024-02-09T19:04:43.935469567Z" level=info msg="RemoveContainer for \"124997f25ada15ae84fad9e4dde112878abdab7212657dbb384af81418f9a030\"" Feb 9 19:04:43.946322 env[1647]: time="2024-02-09T19:04:43.946276395Z" level=info msg="RemoveContainer for \"124997f25ada15ae84fad9e4dde112878abdab7212657dbb384af81418f9a030\" returns successfully" Feb 9 19:04:43.998773 kubelet[2662]: I0209 19:04:43.998727 2662 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:04:43.998966 kubelet[2662]: E0209 19:04:43.998802 2662 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="2a46a6c7-00be-4f9d-b5b1-a16b0085c086" containerName="mount-cgroup" Feb 9 19:04:43.998966 kubelet[2662]: I0209 19:04:43.998835 2662 memory_manager.go:346] "RemoveStaleState removing state" podUID="2a46a6c7-00be-4f9d-b5b1-a16b0085c086" containerName="mount-cgroup" Feb 9 19:04:44.024384 systemd[1]: Created slice kubepods-burstable-pod7951b8a8_5c91_40ed_b9c6_bcc06bd0c540.slice. 
Feb 9 19:04:44.091970 kubelet[2662]: I0209 19:04:44.091919 2662 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7951b8a8-5c91-40ed-b9c6-bcc06bd0c540-bpf-maps\") pod \"cilium-lpzkz\" (UID: \"7951b8a8-5c91-40ed-b9c6-bcc06bd0c540\") " pod="kube-system/cilium-lpzkz" Feb 9 19:04:44.091970 kubelet[2662]: I0209 19:04:44.091974 2662 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7951b8a8-5c91-40ed-b9c6-bcc06bd0c540-etc-cni-netd\") pod \"cilium-lpzkz\" (UID: \"7951b8a8-5c91-40ed-b9c6-bcc06bd0c540\") " pod="kube-system/cilium-lpzkz" Feb 9 19:04:44.092234 kubelet[2662]: I0209 19:04:44.092016 2662 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zv9ls\" (UniqueName: \"kubernetes.io/projected/7951b8a8-5c91-40ed-b9c6-bcc06bd0c540-kube-api-access-zv9ls\") pod \"cilium-lpzkz\" (UID: \"7951b8a8-5c91-40ed-b9c6-bcc06bd0c540\") " pod="kube-system/cilium-lpzkz" Feb 9 19:04:44.092234 kubelet[2662]: I0209 19:04:44.092046 2662 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7951b8a8-5c91-40ed-b9c6-bcc06bd0c540-lib-modules\") pod \"cilium-lpzkz\" (UID: \"7951b8a8-5c91-40ed-b9c6-bcc06bd0c540\") " pod="kube-system/cilium-lpzkz" Feb 9 19:04:44.092234 kubelet[2662]: I0209 19:04:44.092076 2662 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7951b8a8-5c91-40ed-b9c6-bcc06bd0c540-hubble-tls\") pod \"cilium-lpzkz\" (UID: \"7951b8a8-5c91-40ed-b9c6-bcc06bd0c540\") " pod="kube-system/cilium-lpzkz" Feb 9 19:04:44.092234 kubelet[2662]: I0209 19:04:44.092112 2662 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7951b8a8-5c91-40ed-b9c6-bcc06bd0c540-cilium-cgroup\") pod \"cilium-lpzkz\" (UID: \"7951b8a8-5c91-40ed-b9c6-bcc06bd0c540\") " pod="kube-system/cilium-lpzkz" Feb 9 19:04:44.092234 kubelet[2662]: I0209 19:04:44.092145 2662 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7951b8a8-5c91-40ed-b9c6-bcc06bd0c540-clustermesh-secrets\") pod \"cilium-lpzkz\" (UID: \"7951b8a8-5c91-40ed-b9c6-bcc06bd0c540\") " pod="kube-system/cilium-lpzkz" Feb 9 19:04:44.092234 kubelet[2662]: I0209 19:04:44.092176 2662 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/7951b8a8-5c91-40ed-b9c6-bcc06bd0c540-cilium-ipsec-secrets\") pod \"cilium-lpzkz\" (UID: \"7951b8a8-5c91-40ed-b9c6-bcc06bd0c540\") " pod="kube-system/cilium-lpzkz" Feb 9 19:04:44.092582 kubelet[2662]: I0209 19:04:44.092212 2662 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7951b8a8-5c91-40ed-b9c6-bcc06bd0c540-host-proc-sys-kernel\") pod \"cilium-lpzkz\" (UID: \"7951b8a8-5c91-40ed-b9c6-bcc06bd0c540\") " pod="kube-system/cilium-lpzkz" Feb 9 19:04:44.092582 kubelet[2662]: I0209 19:04:44.092250 2662 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" 
(UniqueName: \"kubernetes.io/host-path/7951b8a8-5c91-40ed-b9c6-bcc06bd0c540-cilium-run\") pod \"cilium-lpzkz\" (UID: \"7951b8a8-5c91-40ed-b9c6-bcc06bd0c540\") " pod="kube-system/cilium-lpzkz" Feb 9 19:04:44.092582 kubelet[2662]: I0209 19:04:44.092290 2662 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7951b8a8-5c91-40ed-b9c6-bcc06bd0c540-cilium-config-path\") pod \"cilium-lpzkz\" (UID: \"7951b8a8-5c91-40ed-b9c6-bcc06bd0c540\") " pod="kube-system/cilium-lpzkz" Feb 9 19:04:44.092582 kubelet[2662]: I0209 19:04:44.092328 2662 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7951b8a8-5c91-40ed-b9c6-bcc06bd0c540-host-proc-sys-net\") pod \"cilium-lpzkz\" (UID: \"7951b8a8-5c91-40ed-b9c6-bcc06bd0c540\") " pod="kube-system/cilium-lpzkz" Feb 9 19:04:44.092582 kubelet[2662]: I0209 19:04:44.092418 2662 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7951b8a8-5c91-40ed-b9c6-bcc06bd0c540-xtables-lock\") pod \"cilium-lpzkz\" (UID: \"7951b8a8-5c91-40ed-b9c6-bcc06bd0c540\") " pod="kube-system/cilium-lpzkz" Feb 9 19:04:44.092582 kubelet[2662]: I0209 19:04:44.092458 2662 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7951b8a8-5c91-40ed-b9c6-bcc06bd0c540-hostproc\") pod \"cilium-lpzkz\" (UID: \"7951b8a8-5c91-40ed-b9c6-bcc06bd0c540\") " pod="kube-system/cilium-lpzkz" Feb 9 19:04:44.092830 kubelet[2662]: I0209 19:04:44.092490 2662 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7951b8a8-5c91-40ed-b9c6-bcc06bd0c540-cni-path\") pod \"cilium-lpzkz\" (UID: \"7951b8a8-5c91-40ed-b9c6-bcc06bd0c540\") " pod="kube-system/cilium-lpzkz" Feb 9 19:04:44.330682 kubelet[2662]: E0209 19:04:44.330651 2662 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 9 19:04:44.338633 env[1647]: time="2024-02-09T19:04:44.338488465Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-lpzkz,Uid:7951b8a8-5c91-40ed-b9c6-bcc06bd0c540,Namespace:kube-system,Attempt:0,}" Feb 9 19:04:44.362126 env[1647]: time="2024-02-09T19:04:44.361911155Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:04:44.362126 env[1647]: time="2024-02-09T19:04:44.362089119Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:04:44.362335 env[1647]: time="2024-02-09T19:04:44.362107254Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:04:44.362757 env[1647]: time="2024-02-09T19:04:44.362709724Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/876c67aaf0f222b6ae59729d175d45e547eb2bfbf1a407ba1cc89a61ccc43da3 pid=5029 runtime=io.containerd.runc.v2 Feb 9 19:04:44.391993 systemd[1]: run-containerd-runc-k8s.io-876c67aaf0f222b6ae59729d175d45e547eb2bfbf1a407ba1cc89a61ccc43da3-runc.9aDs2x.mount: Deactivated successfully. Feb 9 19:04:44.398033 systemd[1]: Started cri-containerd-876c67aaf0f222b6ae59729d175d45e547eb2bfbf1a407ba1cc89a61ccc43da3.scope. Feb 9 19:04:44.437984 env[1647]: time="2024-02-09T19:04:44.437932780Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-lpzkz,Uid:7951b8a8-5c91-40ed-b9c6-bcc06bd0c540,Namespace:kube-system,Attempt:0,} returns sandbox id \"876c67aaf0f222b6ae59729d175d45e547eb2bfbf1a407ba1cc89a61ccc43da3\"" Feb 9 19:04:44.441580 env[1647]: time="2024-02-09T19:04:44.441532063Z" level=info msg="CreateContainer within sandbox \"876c67aaf0f222b6ae59729d175d45e547eb2bfbf1a407ba1cc89a61ccc43da3\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 9 19:04:44.462857 env[1647]: time="2024-02-09T19:04:44.462802845Z" level=info msg="CreateContainer within sandbox \"876c67aaf0f222b6ae59729d175d45e547eb2bfbf1a407ba1cc89a61ccc43da3\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"3a7add6994123e5f7db753bd630452bd91a82749381d7e897fcccbaae1ace5ad\"" Feb 9 19:04:44.465212 env[1647]: time="2024-02-09T19:04:44.463585527Z" level=info msg="StartContainer for \"3a7add6994123e5f7db753bd630452bd91a82749381d7e897fcccbaae1ace5ad\"" Feb 9 19:04:44.483950 systemd[1]: Started cri-containerd-3a7add6994123e5f7db753bd630452bd91a82749381d7e897fcccbaae1ace5ad.scope. Feb 9 19:04:44.519284 env[1647]: time="2024-02-09T19:04:44.519230021Z" level=info msg="StartContainer for \"3a7add6994123e5f7db753bd630452bd91a82749381d7e897fcccbaae1ace5ad\" returns successfully" Feb 9 19:04:44.538140 systemd[1]: cri-containerd-3a7add6994123e5f7db753bd630452bd91a82749381d7e897fcccbaae1ace5ad.scope: Deactivated successfully. 
Feb 9 19:04:44.689935 env[1647]: time="2024-02-09T19:04:44.689803048Z" level=info msg="shim disconnected" id=3a7add6994123e5f7db753bd630452bd91a82749381d7e897fcccbaae1ace5ad Feb 9 19:04:44.690376 env[1647]: time="2024-02-09T19:04:44.690335754Z" level=warning msg="cleaning up after shim disconnected" id=3a7add6994123e5f7db753bd630452bd91a82749381d7e897fcccbaae1ace5ad namespace=k8s.io Feb 9 19:04:44.692696 env[1647]: time="2024-02-09T19:04:44.692512583Z" level=info msg="cleaning up dead shim" Feb 9 19:04:44.721883 env[1647]: time="2024-02-09T19:04:44.721828275Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:04:44Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5113 runtime=io.containerd.runc.v2\n" Feb 9 19:04:44.941465 env[1647]: time="2024-02-09T19:04:44.940721605Z" level=info msg="CreateContainer within sandbox \"876c67aaf0f222b6ae59729d175d45e547eb2bfbf1a407ba1cc89a61ccc43da3\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 9 19:04:44.986422 env[1647]: time="2024-02-09T19:04:44.986372629Z" level=info msg="CreateContainer within sandbox \"876c67aaf0f222b6ae59729d175d45e547eb2bfbf1a407ba1cc89a61ccc43da3\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"fa9f35252c002226b1060867741652ef492d8a77d251f73492291af1bb4ca3c4\"" Feb 9 19:04:44.987975 env[1647]: time="2024-02-09T19:04:44.987877817Z" level=info msg="StartContainer for \"fa9f35252c002226b1060867741652ef492d8a77d251f73492291af1bb4ca3c4\"" Feb 9 19:04:45.100512 systemd[1]: Started cri-containerd-fa9f35252c002226b1060867741652ef492d8a77d251f73492291af1bb4ca3c4.scope. Feb 9 19:04:45.251623 env[1647]: time="2024-02-09T19:04:45.251567751Z" level=info msg="StartContainer for \"fa9f35252c002226b1060867741652ef492d8a77d251f73492291af1bb4ca3c4\" returns successfully" Feb 9 19:04:45.281571 systemd[1]: cri-containerd-fa9f35252c002226b1060867741652ef492d8a77d251f73492291af1bb4ca3c4.scope: Deactivated successfully. 
Feb 9 19:04:45.318021 env[1647]: time="2024-02-09T19:04:45.317959558Z" level=info msg="shim disconnected" id=fa9f35252c002226b1060867741652ef492d8a77d251f73492291af1bb4ca3c4 Feb 9 19:04:45.318021 env[1647]: time="2024-02-09T19:04:45.318011172Z" level=warning msg="cleaning up after shim disconnected" id=fa9f35252c002226b1060867741652ef492d8a77d251f73492291af1bb4ca3c4 namespace=k8s.io Feb 9 19:04:45.318021 env[1647]: time="2024-02-09T19:04:45.318024086Z" level=info msg="cleaning up dead shim" Feb 9 19:04:45.330016 env[1647]: time="2024-02-09T19:04:45.329945191Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:04:45Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5176 runtime=io.containerd.runc.v2\n" Feb 9 19:04:45.342693 kubelet[2662]: I0209 19:04:45.342665 2662 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=2a46a6c7-00be-4f9d-b5b1-a16b0085c086 path="/var/lib/kubelet/pods/2a46a6c7-00be-4f9d-b5b1-a16b0085c086/volumes" Feb 9 19:04:45.829280 kubelet[2662]: W0209 19:04:45.829230 2662 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2a46a6c7_00be_4f9d_b5b1_a16b0085c086.slice/cri-containerd-124997f25ada15ae84fad9e4dde112878abdab7212657dbb384af81418f9a030.scope WatchSource:0}: container "124997f25ada15ae84fad9e4dde112878abdab7212657dbb384af81418f9a030" in namespace "k8s.io": not found Feb 9 19:04:45.948791 env[1647]: time="2024-02-09T19:04:45.948725478Z" level=info msg="CreateContainer within sandbox \"876c67aaf0f222b6ae59729d175d45e547eb2bfbf1a407ba1cc89a61ccc43da3\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 9 19:04:45.973775 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3538776085.mount: Deactivated successfully. Feb 9 19:04:45.984831 env[1647]: time="2024-02-09T19:04:45.984775101Z" level=info msg="CreateContainer within sandbox \"876c67aaf0f222b6ae59729d175d45e547eb2bfbf1a407ba1cc89a61ccc43da3\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"354464ea17e619898e3e4c1382fc00efdb83c7167041f982b2326eabcbf1604d\"" Feb 9 19:04:45.988090 env[1647]: time="2024-02-09T19:04:45.987938304Z" level=info msg="StartContainer for \"354464ea17e619898e3e4c1382fc00efdb83c7167041f982b2326eabcbf1604d\"" Feb 9 19:04:46.052949 systemd[1]: Started cri-containerd-354464ea17e619898e3e4c1382fc00efdb83c7167041f982b2326eabcbf1604d.scope. Feb 9 19:04:46.120500 systemd[1]: cri-containerd-354464ea17e619898e3e4c1382fc00efdb83c7167041f982b2326eabcbf1604d.scope: Deactivated successfully. 
Feb 9 19:04:46.125925 env[1647]: time="2024-02-09T19:04:46.125877469Z" level=info msg="StartContainer for \"354464ea17e619898e3e4c1382fc00efdb83c7167041f982b2326eabcbf1604d\" returns successfully" Feb 9 19:04:46.176861 env[1647]: time="2024-02-09T19:04:46.176740825Z" level=info msg="shim disconnected" id=354464ea17e619898e3e4c1382fc00efdb83c7167041f982b2326eabcbf1604d Feb 9 19:04:46.176861 env[1647]: time="2024-02-09T19:04:46.176859115Z" level=warning msg="cleaning up after shim disconnected" id=354464ea17e619898e3e4c1382fc00efdb83c7167041f982b2326eabcbf1604d namespace=k8s.io Feb 9 19:04:46.177375 env[1647]: time="2024-02-09T19:04:46.176874515Z" level=info msg="cleaning up dead shim" Feb 9 19:04:46.188108 env[1647]: time="2024-02-09T19:04:46.188048276Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:04:46Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5236 runtime=io.containerd.runc.v2\n" Feb 9 19:04:46.354501 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-354464ea17e619898e3e4c1382fc00efdb83c7167041f982b2326eabcbf1604d-rootfs.mount: Deactivated successfully. Feb 9 19:04:46.958849 env[1647]: time="2024-02-09T19:04:46.953568141Z" level=info msg="CreateContainer within sandbox \"876c67aaf0f222b6ae59729d175d45e547eb2bfbf1a407ba1cc89a61ccc43da3\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 9 19:04:46.993537 env[1647]: time="2024-02-09T19:04:46.993477624Z" level=info msg="CreateContainer within sandbox \"876c67aaf0f222b6ae59729d175d45e547eb2bfbf1a407ba1cc89a61ccc43da3\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"234c58db747a1aa9fe651dc6017837900014c451e51fbcba2a1587428f0d0e9c\"" Feb 9 19:04:46.994450 env[1647]: time="2024-02-09T19:04:46.994418012Z" level=info msg="StartContainer for \"234c58db747a1aa9fe651dc6017837900014c451e51fbcba2a1587428f0d0e9c\"" Feb 9 19:04:47.070599 systemd[1]: Started cri-containerd-234c58db747a1aa9fe651dc6017837900014c451e51fbcba2a1587428f0d0e9c.scope. Feb 9 19:04:47.120856 systemd[1]: cri-containerd-234c58db747a1aa9fe651dc6017837900014c451e51fbcba2a1587428f0d0e9c.scope: Deactivated successfully. 
Feb 9 19:04:47.123976 env[1647]: time="2024-02-09T19:04:47.123650082Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7951b8a8_5c91_40ed_b9c6_bcc06bd0c540.slice/cri-containerd-234c58db747a1aa9fe651dc6017837900014c451e51fbcba2a1587428f0d0e9c.scope/memory.events\": no such file or directory" Feb 9 19:04:47.129774 env[1647]: time="2024-02-09T19:04:47.129714063Z" level=info msg="StartContainer for \"234c58db747a1aa9fe651dc6017837900014c451e51fbcba2a1587428f0d0e9c\" returns successfully" Feb 9 19:04:47.268207 env[1647]: time="2024-02-09T19:04:47.268033957Z" level=info msg="shim disconnected" id=234c58db747a1aa9fe651dc6017837900014c451e51fbcba2a1587428f0d0e9c Feb 9 19:04:47.268207 env[1647]: time="2024-02-09T19:04:47.268094831Z" level=warning msg="cleaning up after shim disconnected" id=234c58db747a1aa9fe651dc6017837900014c451e51fbcba2a1587428f0d0e9c namespace=k8s.io Feb 9 19:04:47.268207 env[1647]: time="2024-02-09T19:04:47.268107734Z" level=info msg="cleaning up dead shim" Feb 9 19:04:47.279841 env[1647]: time="2024-02-09T19:04:47.279793488Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:04:47Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5294 runtime=io.containerd.runc.v2\n" Feb 9 19:04:47.355176 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-234c58db747a1aa9fe651dc6017837900014c451e51fbcba2a1587428f0d0e9c-rootfs.mount: Deactivated successfully. Feb 9 19:04:47.962235 env[1647]: time="2024-02-09T19:04:47.962190061Z" level=info msg="CreateContainer within sandbox \"876c67aaf0f222b6ae59729d175d45e547eb2bfbf1a407ba1cc89a61ccc43da3\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 9 19:04:47.988983 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2665414918.mount: Deactivated successfully. Feb 9 19:04:48.005543 env[1647]: time="2024-02-09T19:04:48.005325435Z" level=info msg="CreateContainer within sandbox \"876c67aaf0f222b6ae59729d175d45e547eb2bfbf1a407ba1cc89a61ccc43da3\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"cf29feb3da12e92f6a7a5edf65d8e804bf619110fb814003d641dead27aaa6b5\"" Feb 9 19:04:48.008646 env[1647]: time="2024-02-09T19:04:48.006429085Z" level=info msg="StartContainer for \"cf29feb3da12e92f6a7a5edf65d8e804bf619110fb814003d641dead27aaa6b5\"" Feb 9 19:04:48.042751 systemd[1]: Started cri-containerd-cf29feb3da12e92f6a7a5edf65d8e804bf619110fb814003d641dead27aaa6b5.scope. Feb 9 19:04:48.243195 env[1647]: time="2024-02-09T19:04:48.243087787Z" level=info msg="StartContainer for \"cf29feb3da12e92f6a7a5edf65d8e804bf619110fb814003d641dead27aaa6b5\" returns successfully" Feb 9 19:04:48.944063 kubelet[2662]: W0209 19:04:48.944018 2662 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7951b8a8_5c91_40ed_b9c6_bcc06bd0c540.slice/cri-containerd-3a7add6994123e5f7db753bd630452bd91a82749381d7e897fcccbaae1ace5ad.scope WatchSource:0}: task 3a7add6994123e5f7db753bd630452bd91a82749381d7e897fcccbaae1ace5ad not found: not found Feb 9 19:04:48.983387 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Feb 9 19:04:49.396559 systemd[1]: run-containerd-runc-k8s.io-cf29feb3da12e92f6a7a5edf65d8e804bf619110fb814003d641dead27aaa6b5-runc.eGBop5.mount: Deactivated successfully. 
Feb 9 19:04:51.663601 systemd[1]: run-containerd-runc-k8s.io-cf29feb3da12e92f6a7a5edf65d8e804bf619110fb814003d641dead27aaa6b5-runc.ebjtJa.mount: Deactivated successfully. Feb 9 19:04:52.066129 kubelet[2662]: W0209 19:04:52.066068 2662 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7951b8a8_5c91_40ed_b9c6_bcc06bd0c540.slice/cri-containerd-fa9f35252c002226b1060867741652ef492d8a77d251f73492291af1bb4ca3c4.scope WatchSource:0}: task fa9f35252c002226b1060867741652ef492d8a77d251f73492291af1bb4ca3c4 not found: not found Feb 9 19:04:52.661143 systemd-networkd[1455]: lxc_health: Link UP Feb 9 19:04:52.674933 (udev-worker)[5885]: Network interface NamePolicy= disabled on kernel command line. Feb 9 19:04:52.678727 systemd-networkd[1455]: lxc_health: Gained carrier Feb 9 19:04:52.679510 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Feb 9 19:04:53.993532 systemd[1]: run-containerd-runc-k8s.io-cf29feb3da12e92f6a7a5edf65d8e804bf619110fb814003d641dead27aaa6b5-runc.Yr830n.mount: Deactivated successfully. Feb 9 19:04:54.121513 kubelet[2662]: E0209 19:04:54.120113 2662 upgradeaware.go:440] Error proxying data from backend to client: read tcp 127.0.0.1:53286->127.0.0.1:37171: read: connection reset by peer Feb 9 19:04:54.354233 systemd-networkd[1455]: lxc_health: Gained IPv6LL Feb 9 19:04:54.433088 kubelet[2662]: I0209 19:04:54.433052 2662 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-lpzkz" podStartSLOduration=11.432922848 pod.CreationTimestamp="2024-02-09 19:04:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:04:48.991981448 +0000 UTC m=+160.079903194" watchObservedRunningTime="2024-02-09 19:04:54.432922848 +0000 UTC m=+165.520844594" Feb 9 19:04:55.182991 kubelet[2662]: W0209 19:04:55.182946 2662 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7951b8a8_5c91_40ed_b9c6_bcc06bd0c540.slice/cri-containerd-354464ea17e619898e3e4c1382fc00efdb83c7167041f982b2326eabcbf1604d.scope WatchSource:0}: task 354464ea17e619898e3e4c1382fc00efdb83c7167041f982b2326eabcbf1604d not found: not found Feb 9 19:04:56.424796 systemd[1]: run-containerd-runc-k8s.io-cf29feb3da12e92f6a7a5edf65d8e804bf619110fb814003d641dead27aaa6b5-runc.SJ0giM.mount: Deactivated successfully. Feb 9 19:04:58.311383 kubelet[2662]: W0209 19:04:58.309475 2662 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7951b8a8_5c91_40ed_b9c6_bcc06bd0c540.slice/cri-containerd-234c58db747a1aa9fe651dc6017837900014c451e51fbcba2a1587428f0d0e9c.scope WatchSource:0}: task 234c58db747a1aa9fe651dc6017837900014c451e51fbcba2a1587428f0d0e9c not found: not found Feb 9 19:04:58.705556 systemd[1]: run-containerd-runc-k8s.io-cf29feb3da12e92f6a7a5edf65d8e804bf619110fb814003d641dead27aaa6b5-runc.txF0Hd.mount: Deactivated successfully. Feb 9 19:05:01.007891 systemd[1]: run-containerd-runc-k8s.io-cf29feb3da12e92f6a7a5edf65d8e804bf619110fb814003d641dead27aaa6b5-runc.QBzqFb.mount: Deactivated successfully. Feb 9 19:05:01.258336 sshd[4928]: pam_unix(sshd:session): session closed for user core Feb 9 19:05:01.280961 systemd[1]: sshd@30-172.31.31.36:22-139.178.68.195:33564.service: Deactivated successfully. Feb 9 19:05:01.282561 systemd[1]: session-31.scope: Deactivated successfully. 
Feb 9 19:05:01.285149 systemd-logind[1634]: Session 31 logged out. Waiting for processes to exit. Feb 9 19:05:01.294448 systemd-logind[1634]: Removed session 31. Feb 9 19:05:09.133404 env[1647]: time="2024-02-09T19:05:09.133342604Z" level=info msg="StopPodSandbox for \"e9960d40bb2291e05986f8eff47600ce7cfeac986993e0c1ba49e6c7cf6764b7\"" Feb 9 19:05:09.134127 env[1647]: time="2024-02-09T19:05:09.133535921Z" level=info msg="TearDown network for sandbox \"e9960d40bb2291e05986f8eff47600ce7cfeac986993e0c1ba49e6c7cf6764b7\" successfully" Feb 9 19:05:09.134127 env[1647]: time="2024-02-09T19:05:09.133588806Z" level=info msg="StopPodSandbox for \"e9960d40bb2291e05986f8eff47600ce7cfeac986993e0c1ba49e6c7cf6764b7\" returns successfully" Feb 9 19:05:09.134914 env[1647]: time="2024-02-09T19:05:09.134879317Z" level=info msg="RemovePodSandbox for \"e9960d40bb2291e05986f8eff47600ce7cfeac986993e0c1ba49e6c7cf6764b7\"" Feb 9 19:05:09.135038 env[1647]: time="2024-02-09T19:05:09.134916080Z" level=info msg="Forcibly stopping sandbox \"e9960d40bb2291e05986f8eff47600ce7cfeac986993e0c1ba49e6c7cf6764b7\"" Feb 9 19:05:09.135038 env[1647]: time="2024-02-09T19:05:09.135015541Z" level=info msg="TearDown network for sandbox \"e9960d40bb2291e05986f8eff47600ce7cfeac986993e0c1ba49e6c7cf6764b7\" successfully" Feb 9 19:05:09.142255 env[1647]: time="2024-02-09T19:05:09.142200501Z" level=info msg="RemovePodSandbox \"e9960d40bb2291e05986f8eff47600ce7cfeac986993e0c1ba49e6c7cf6764b7\" returns successfully" Feb 9 19:05:09.143035 env[1647]: time="2024-02-09T19:05:09.143000703Z" level=info msg="StopPodSandbox for \"78b51809e3d2e4a147c89eac1a10801e81f7195d34c9c4b2b32e9e68676ba28a\"" Feb 9 19:05:09.143297 env[1647]: time="2024-02-09T19:05:09.143247645Z" level=info msg="TearDown network for sandbox \"78b51809e3d2e4a147c89eac1a10801e81f7195d34c9c4b2b32e9e68676ba28a\" successfully" Feb 9 19:05:09.143461 env[1647]: time="2024-02-09T19:05:09.143432245Z" level=info msg="StopPodSandbox for \"78b51809e3d2e4a147c89eac1a10801e81f7195d34c9c4b2b32e9e68676ba28a\" returns successfully" Feb 9 19:05:09.144227 env[1647]: time="2024-02-09T19:05:09.144144096Z" level=info msg="RemovePodSandbox for \"78b51809e3d2e4a147c89eac1a10801e81f7195d34c9c4b2b32e9e68676ba28a\"" Feb 9 19:05:09.144315 env[1647]: time="2024-02-09T19:05:09.144222206Z" level=info msg="Forcibly stopping sandbox \"78b51809e3d2e4a147c89eac1a10801e81f7195d34c9c4b2b32e9e68676ba28a\"" Feb 9 19:05:09.144394 env[1647]: time="2024-02-09T19:05:09.144320606Z" level=info msg="TearDown network for sandbox \"78b51809e3d2e4a147c89eac1a10801e81f7195d34c9c4b2b32e9e68676ba28a\" successfully" Feb 9 19:05:09.149690 env[1647]: time="2024-02-09T19:05:09.149644538Z" level=info msg="RemovePodSandbox \"78b51809e3d2e4a147c89eac1a10801e81f7195d34c9c4b2b32e9e68676ba28a\" returns successfully" Feb 9 19:05:09.150253 env[1647]: time="2024-02-09T19:05:09.150223319Z" level=info msg="StopPodSandbox for \"56c44e9f9b8f76f9f6e0c1414753147d3ace259e52085fa5340984947564a6aa\"" Feb 9 19:05:09.150398 env[1647]: time="2024-02-09T19:05:09.150314387Z" level=info msg="TearDown network for sandbox \"56c44e9f9b8f76f9f6e0c1414753147d3ace259e52085fa5340984947564a6aa\" successfully" Feb 9 19:05:09.150460 env[1647]: time="2024-02-09T19:05:09.150399907Z" level=info msg="StopPodSandbox for \"56c44e9f9b8f76f9f6e0c1414753147d3ace259e52085fa5340984947564a6aa\" returns successfully" Feb 9 19:05:09.150941 env[1647]: time="2024-02-09T19:05:09.150828162Z" level=info msg="RemovePodSandbox for 
\"56c44e9f9b8f76f9f6e0c1414753147d3ace259e52085fa5340984947564a6aa\"" Feb 9 19:05:09.151029 env[1647]: time="2024-02-09T19:05:09.150948015Z" level=info msg="Forcibly stopping sandbox \"56c44e9f9b8f76f9f6e0c1414753147d3ace259e52085fa5340984947564a6aa\"" Feb 9 19:05:09.151089 env[1647]: time="2024-02-09T19:05:09.151042652Z" level=info msg="TearDown network for sandbox \"56c44e9f9b8f76f9f6e0c1414753147d3ace259e52085fa5340984947564a6aa\" successfully" Feb 9 19:05:09.156226 env[1647]: time="2024-02-09T19:05:09.156175331Z" level=info msg="RemovePodSandbox \"56c44e9f9b8f76f9f6e0c1414753147d3ace259e52085fa5340984947564a6aa\" returns successfully" Feb 9 19:05:15.742876 systemd[1]: cri-containerd-95830507a5281595ae7390238a6cd77a4cc5200b5605af70c9c09e46116a10db.scope: Deactivated successfully. Feb 9 19:05:15.743417 systemd[1]: cri-containerd-95830507a5281595ae7390238a6cd77a4cc5200b5605af70c9c09e46116a10db.scope: Consumed 4.729s CPU time. Feb 9 19:05:15.777013 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-95830507a5281595ae7390238a6cd77a4cc5200b5605af70c9c09e46116a10db-rootfs.mount: Deactivated successfully. Feb 9 19:05:15.817193 env[1647]: time="2024-02-09T19:05:15.817133124Z" level=info msg="shim disconnected" id=95830507a5281595ae7390238a6cd77a4cc5200b5605af70c9c09e46116a10db Feb 9 19:05:15.817193 env[1647]: time="2024-02-09T19:05:15.817194331Z" level=warning msg="cleaning up after shim disconnected" id=95830507a5281595ae7390238a6cd77a4cc5200b5605af70c9c09e46116a10db namespace=k8s.io Feb 9 19:05:15.819784 env[1647]: time="2024-02-09T19:05:15.817207120Z" level=info msg="cleaning up dead shim" Feb 9 19:05:15.834245 env[1647]: time="2024-02-09T19:05:15.834195099Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:05:15Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=6024 runtime=io.containerd.runc.v2\n" Feb 9 19:05:16.039204 kubelet[2662]: I0209 19:05:16.037829 2662 scope.go:115] "RemoveContainer" containerID="95830507a5281595ae7390238a6cd77a4cc5200b5605af70c9c09e46116a10db" Feb 9 19:05:16.047908 env[1647]: time="2024-02-09T19:05:16.047865203Z" level=info msg="CreateContainer within sandbox \"985f2fdd1961610ea3c5a9f8e570e1985981134df20f5a7a52e8807c431d51fa\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Feb 9 19:05:16.085312 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3215355675.mount: Deactivated successfully. Feb 9 19:05:16.097314 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2461669357.mount: Deactivated successfully. Feb 9 19:05:16.104638 env[1647]: time="2024-02-09T19:05:16.104442813Z" level=info msg="CreateContainer within sandbox \"985f2fdd1961610ea3c5a9f8e570e1985981134df20f5a7a52e8807c431d51fa\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"f83cbfb742a39c91ccd5771f850739bf7555991a473da151925f054e0c21fed8\"" Feb 9 19:05:16.105943 env[1647]: time="2024-02-09T19:05:16.105895268Z" level=info msg="StartContainer for \"f83cbfb742a39c91ccd5771f850739bf7555991a473da151925f054e0c21fed8\"" Feb 9 19:05:16.159231 systemd[1]: Started cri-containerd-f83cbfb742a39c91ccd5771f850739bf7555991a473da151925f054e0c21fed8.scope. Feb 9 19:05:16.281389 env[1647]: time="2024-02-09T19:05:16.281164143Z" level=info msg="StartContainer for \"f83cbfb742a39c91ccd5771f850739bf7555991a473da151925f054e0c21fed8\" returns successfully" Feb 9 19:05:19.714699 systemd[1]: cri-containerd-f3f340bf2d713d49a1f4635fd81f382433bec02ec6f057683136f475fb677087.scope: Deactivated successfully. 
Feb 9 19:05:19.715715 systemd[1]: cri-containerd-f3f340bf2d713d49a1f4635fd81f382433bec02ec6f057683136f475fb677087.scope: Consumed 2.612s CPU time. Feb 9 19:05:19.785977 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f3f340bf2d713d49a1f4635fd81f382433bec02ec6f057683136f475fb677087-rootfs.mount: Deactivated successfully. Feb 9 19:05:19.819976 env[1647]: time="2024-02-09T19:05:19.819925855Z" level=info msg="shim disconnected" id=f3f340bf2d713d49a1f4635fd81f382433bec02ec6f057683136f475fb677087 Feb 9 19:05:19.819976 env[1647]: time="2024-02-09T19:05:19.819977557Z" level=warning msg="cleaning up after shim disconnected" id=f3f340bf2d713d49a1f4635fd81f382433bec02ec6f057683136f475fb677087 namespace=k8s.io Feb 9 19:05:19.821563 env[1647]: time="2024-02-09T19:05:19.819989021Z" level=info msg="cleaning up dead shim" Feb 9 19:05:19.848536 env[1647]: time="2024-02-09T19:05:19.848479788Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:05:19Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=6085 runtime=io.containerd.runc.v2\n" Feb 9 19:05:20.054407 kubelet[2662]: I0209 19:05:20.053595 2662 scope.go:115] "RemoveContainer" containerID="f3f340bf2d713d49a1f4635fd81f382433bec02ec6f057683136f475fb677087" Feb 9 19:05:20.057646 env[1647]: time="2024-02-09T19:05:20.057601998Z" level=info msg="CreateContainer within sandbox \"532d6beae932767e8c06fab3f43f9cb460eeb787c5202b752d957f3c8d22d92c\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Feb 9 19:05:20.088504 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3015545517.mount: Deactivated successfully. Feb 9 19:05:20.099503 env[1647]: time="2024-02-09T19:05:20.099449703Z" level=info msg="CreateContainer within sandbox \"532d6beae932767e8c06fab3f43f9cb460eeb787c5202b752d957f3c8d22d92c\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"6a4747fd3765372cfd11dd981941e7967d716b8d3f39df0878ebea00630b24c2\"" Feb 9 19:05:20.100331 env[1647]: time="2024-02-09T19:05:20.100288999Z" level=info msg="StartContainer for \"6a4747fd3765372cfd11dd981941e7967d716b8d3f39df0878ebea00630b24c2\"" Feb 9 19:05:20.142391 systemd[1]: Started cri-containerd-6a4747fd3765372cfd11dd981941e7967d716b8d3f39df0878ebea00630b24c2.scope. Feb 9 19:05:20.210728 env[1647]: time="2024-02-09T19:05:20.210671195Z" level=info msg="StartContainer for \"6a4747fd3765372cfd11dd981941e7967d716b8d3f39df0878ebea00630b24c2\" returns successfully" Feb 9 19:05:21.888185 kubelet[2662]: E0209 19:05:21.887454 2662 controller.go:189] failed to update lease, error: Put "https://172.31.31.36:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-31-36?timeout=10s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Feb 9 19:05:31.889180 kubelet[2662]: E0209 19:05:31.889053 2662 controller.go:189] failed to update lease, error: Put "https://172.31.31.36:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-31-36?timeout=10s": net/http: request canceled (Client.Timeout exceeded while awaiting headers)