Apr 12 18:55:36.163215 kernel: Linux version 5.15.154-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Fri Apr 12 17:19:00 -00 2024 Apr 12 18:55:36.163250 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=189121f7c8c0a24098d3bb1e040d34611f7c276be43815ff7fe409fce185edaf Apr 12 18:55:36.163738 kernel: BIOS-provided physical RAM map: Apr 12 18:55:36.163753 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Apr 12 18:55:36.163882 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Apr 12 18:55:36.163894 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Apr 12 18:55:36.163911 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007d9e9fff] usable Apr 12 18:55:36.163923 kernel: BIOS-e820: [mem 0x000000007d9ea000-0x000000007fffffff] reserved Apr 12 18:55:36.163935 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000e03fffff] reserved Apr 12 18:55:36.163947 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Apr 12 18:55:36.164011 kernel: NX (Execute Disable) protection: active Apr 12 18:55:36.164026 kernel: SMBIOS 2.7 present. 
Apr 12 18:55:36.164038 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017 Apr 12 18:55:36.164050 kernel: Hypervisor detected: KVM Apr 12 18:55:36.164068 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Apr 12 18:55:36.164088 kernel: kvm-clock: cpu 0, msr 42191001, primary cpu clock Apr 12 18:55:36.164101 kernel: kvm-clock: using sched offset of 7749888196 cycles Apr 12 18:55:36.164115 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Apr 12 18:55:36.164128 kernel: tsc: Detected 2499.996 MHz processor Apr 12 18:55:36.164141 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Apr 12 18:55:36.164157 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Apr 12 18:55:36.164170 kernel: last_pfn = 0x7d9ea max_arch_pfn = 0x400000000 Apr 12 18:55:36.164182 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Apr 12 18:55:36.164195 kernel: Using GB pages for direct mapping Apr 12 18:55:36.164208 kernel: ACPI: Early table checksum verification disabled Apr 12 18:55:36.164248 kernel: ACPI: RSDP 0x00000000000F8F40 000014 (v00 AMAZON) Apr 12 18:55:36.164261 kernel: ACPI: RSDT 0x000000007D9EE350 000044 (v01 AMAZON AMZNRSDT 00000001 AMZN 00000001) Apr 12 18:55:36.164275 kernel: ACPI: FACP 0x000000007D9EFF80 000074 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001) Apr 12 18:55:36.164287 kernel: ACPI: DSDT 0x000000007D9EE3A0 0010E9 (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001) Apr 12 18:55:36.164303 kernel: ACPI: FACS 0x000000007D9EFF40 000040 Apr 12 18:55:36.164316 kernel: ACPI: SSDT 0x000000007D9EF6C0 00087A (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001) Apr 12 18:55:36.164329 kernel: ACPI: APIC 0x000000007D9EF5D0 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001) Apr 12 18:55:36.164342 kernel: ACPI: SRAT 0x000000007D9EF530 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001) Apr 12 18:55:36.164354 kernel: ACPI: SLIT 0x000000007D9EF4C0 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001) Apr 
12 18:55:36.164365 kernel: ACPI: WAET 0x000000007D9EF490 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001) Apr 12 18:55:36.164376 kernel: ACPI: HPET 0x00000000000C9000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001) Apr 12 18:55:36.164386 kernel: ACPI: SSDT 0x00000000000C9040 00007B (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001) Apr 12 18:55:36.164402 kernel: ACPI: Reserving FACP table memory at [mem 0x7d9eff80-0x7d9efff3] Apr 12 18:55:36.164414 kernel: ACPI: Reserving DSDT table memory at [mem 0x7d9ee3a0-0x7d9ef488] Apr 12 18:55:36.164426 kernel: ACPI: Reserving FACS table memory at [mem 0x7d9eff40-0x7d9eff7f] Apr 12 18:55:36.164443 kernel: ACPI: Reserving SSDT table memory at [mem 0x7d9ef6c0-0x7d9eff39] Apr 12 18:55:36.164457 kernel: ACPI: Reserving APIC table memory at [mem 0x7d9ef5d0-0x7d9ef645] Apr 12 18:55:36.164470 kernel: ACPI: Reserving SRAT table memory at [mem 0x7d9ef530-0x7d9ef5cf] Apr 12 18:55:36.164484 kernel: ACPI: Reserving SLIT table memory at [mem 0x7d9ef4c0-0x7d9ef52b] Apr 12 18:55:36.164501 kernel: ACPI: Reserving WAET table memory at [mem 0x7d9ef490-0x7d9ef4b7] Apr 12 18:55:36.164515 kernel: ACPI: Reserving HPET table memory at [mem 0xc9000-0xc9037] Apr 12 18:55:36.164528 kernel: ACPI: Reserving SSDT table memory at [mem 0xc9040-0xc90ba] Apr 12 18:55:36.164579 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Apr 12 18:55:36.164593 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Apr 12 18:55:36.164643 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff] Apr 12 18:55:36.164657 kernel: NUMA: Initialized distance table, cnt=1 Apr 12 18:55:36.164670 kernel: NODE_DATA(0) allocated [mem 0x7d9e3000-0x7d9e8fff] Apr 12 18:55:36.164687 kernel: Zone ranges: Apr 12 18:55:36.164701 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Apr 12 18:55:36.164715 kernel: DMA32 [mem 0x0000000001000000-0x000000007d9e9fff] Apr 12 18:55:36.164768 kernel: Normal empty Apr 12 18:55:36.164782 kernel: Movable zone start for each node Apr 12 18:55:36.164796 kernel: 
Early memory node ranges Apr 12 18:55:36.164831 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Apr 12 18:55:36.164844 kernel: node 0: [mem 0x0000000000100000-0x000000007d9e9fff] Apr 12 18:55:36.164859 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007d9e9fff] Apr 12 18:55:36.164876 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Apr 12 18:55:36.164890 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Apr 12 18:55:36.164904 kernel: On node 0, zone DMA32: 9750 pages in unavailable ranges Apr 12 18:55:36.164918 kernel: ACPI: PM-Timer IO Port: 0xb008 Apr 12 18:55:36.164932 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Apr 12 18:55:36.164946 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23 Apr 12 18:55:36.164960 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Apr 12 18:55:36.164974 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Apr 12 18:55:36.164988 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Apr 12 18:55:36.165005 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Apr 12 18:55:36.165018 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Apr 12 18:55:36.165033 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Apr 12 18:55:36.165149 kernel: TSC deadline timer available Apr 12 18:55:36.165166 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Apr 12 18:55:36.165180 kernel: [mem 0x80000000-0xdfffffff] available for PCI devices Apr 12 18:55:36.165194 kernel: Booting paravirtualized kernel on KVM Apr 12 18:55:36.165208 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Apr 12 18:55:36.165223 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:2 nr_node_ids:1 Apr 12 18:55:36.165240 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u1048576 Apr 12 18:55:36.165254 kernel: pcpu-alloc: s188696 r8192 d32488 u1048576 
alloc=1*2097152 Apr 12 18:55:36.165268 kernel: pcpu-alloc: [0] 0 1 Apr 12 18:55:36.165281 kernel: kvm-guest: stealtime: cpu 0, msr 7b61c0c0 Apr 12 18:55:36.165295 kernel: kvm-guest: PV spinlocks enabled Apr 12 18:55:36.165309 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Apr 12 18:55:36.165323 kernel: Built 1 zonelists, mobility grouping on. Total pages: 506242 Apr 12 18:55:36.165337 kernel: Policy zone: DMA32 Apr 12 18:55:36.165354 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=189121f7c8c0a24098d3bb1e040d34611f7c276be43815ff7fe409fce185edaf Apr 12 18:55:36.165371 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Apr 12 18:55:36.165384 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Apr 12 18:55:36.165399 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Apr 12 18:55:36.165413 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Apr 12 18:55:36.165427 kernel: Memory: 1934420K/2057760K available (12294K kernel code, 2275K rwdata, 13708K rodata, 47440K init, 4148K bss, 123080K reserved, 0K cma-reserved) Apr 12 18:55:36.165441 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Apr 12 18:55:36.165455 kernel: Kernel/User page tables isolation: enabled Apr 12 18:55:36.165469 kernel: ftrace: allocating 34508 entries in 135 pages Apr 12 18:55:36.165485 kernel: ftrace: allocated 135 pages with 4 groups Apr 12 18:55:36.165499 kernel: rcu: Hierarchical RCU implementation. Apr 12 18:55:36.165511 kernel: rcu: RCU event tracing is enabled. 
Apr 12 18:55:36.165525 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Apr 12 18:55:36.165538 kernel: Rude variant of Tasks RCU enabled. Apr 12 18:55:36.165551 kernel: Tracing variant of Tasks RCU enabled. Apr 12 18:55:36.165565 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Apr 12 18:55:36.165579 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Apr 12 18:55:36.165594 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Apr 12 18:55:36.165610 kernel: random: crng init done Apr 12 18:55:36.165624 kernel: Console: colour VGA+ 80x25 Apr 12 18:55:36.165638 kernel: printk: console [ttyS0] enabled Apr 12 18:55:36.165652 kernel: ACPI: Core revision 20210730 Apr 12 18:55:36.165666 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns Apr 12 18:55:36.165680 kernel: APIC: Switch to symmetric I/O mode setup Apr 12 18:55:36.165694 kernel: x2apic enabled Apr 12 18:55:36.165708 kernel: Switched APIC routing to physical x2apic. Apr 12 18:55:36.165722 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns Apr 12 18:55:36.165739 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499996) Apr 12 18:55:36.165753 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Apr 12 18:55:36.165767 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4 Apr 12 18:55:36.165782 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Apr 12 18:55:36.165861 kernel: Spectre V2 : Mitigation: Retpolines Apr 12 18:55:36.165879 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Apr 12 18:55:36.165894 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Apr 12 18:55:36.165908 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! 
Apr 12 18:55:36.165923 kernel: RETBleed: Vulnerable Apr 12 18:55:36.165938 kernel: Speculative Store Bypass: Vulnerable Apr 12 18:55:36.165953 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode Apr 12 18:55:36.165967 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Apr 12 18:55:36.165982 kernel: GDS: Unknown: Dependent on hypervisor status Apr 12 18:55:36.165996 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Apr 12 18:55:36.166014 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Apr 12 18:55:36.166028 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Apr 12 18:55:36.166042 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers' Apr 12 18:55:36.166057 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR' Apr 12 18:55:36.166072 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Apr 12 18:55:36.166090 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Apr 12 18:55:36.166104 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Apr 12 18:55:36.166119 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers' Apr 12 18:55:36.166134 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Apr 12 18:55:36.166148 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64 Apr 12 18:55:36.166163 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64 Apr 12 18:55:36.166178 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64 Apr 12 18:55:36.166193 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512 Apr 12 18:55:36.166208 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024 Apr 12 18:55:36.166222 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8 Apr 12 18:55:36.166235 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format. 
Apr 12 18:55:36.166249 kernel: Freeing SMP alternatives memory: 32K Apr 12 18:55:36.166266 kernel: pid_max: default: 32768 minimum: 301 Apr 12 18:55:36.166281 kernel: LSM: Security Framework initializing Apr 12 18:55:36.166295 kernel: SELinux: Initializing. Apr 12 18:55:36.166311 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Apr 12 18:55:36.166325 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Apr 12 18:55:36.166341 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7) Apr 12 18:55:36.166356 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only. Apr 12 18:55:36.166370 kernel: signal: max sigframe size: 3632 Apr 12 18:55:36.166386 kernel: rcu: Hierarchical SRCU implementation. Apr 12 18:55:36.166401 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Apr 12 18:55:36.166418 kernel: smp: Bringing up secondary CPUs ... Apr 12 18:55:36.166434 kernel: x86: Booting SMP configuration: Apr 12 18:55:36.166449 kernel: .... node #0, CPUs: #1 Apr 12 18:55:36.166463 kernel: kvm-clock: cpu 1, msr 42191041, secondary cpu clock Apr 12 18:55:36.166478 kernel: kvm-guest: stealtime: cpu 1, msr 7b71c0c0 Apr 12 18:55:36.166493 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details. Apr 12 18:55:36.166509 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. 
Apr 12 18:55:36.166525 kernel: smp: Brought up 1 node, 2 CPUs Apr 12 18:55:36.166540 kernel: smpboot: Max logical packages: 1 Apr 12 18:55:36.166558 kernel: smpboot: Total of 2 processors activated (9999.98 BogoMIPS) Apr 12 18:55:36.166572 kernel: devtmpfs: initialized Apr 12 18:55:36.166587 kernel: x86/mm: Memory block size: 128MB Apr 12 18:55:36.166603 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Apr 12 18:55:36.166618 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Apr 12 18:55:36.166633 kernel: pinctrl core: initialized pinctrl subsystem Apr 12 18:55:36.166648 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Apr 12 18:55:36.166663 kernel: audit: initializing netlink subsys (disabled) Apr 12 18:55:36.166678 kernel: audit: type=2000 audit(1712948135.222:1): state=initialized audit_enabled=0 res=1 Apr 12 18:55:36.166694 kernel: thermal_sys: Registered thermal governor 'step_wise' Apr 12 18:55:36.166709 kernel: thermal_sys: Registered thermal governor 'user_space' Apr 12 18:55:36.166724 kernel: cpuidle: using governor menu Apr 12 18:55:36.166739 kernel: ACPI: bus type PCI registered Apr 12 18:55:36.166754 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Apr 12 18:55:36.166769 kernel: dca service started, version 1.12.1 Apr 12 18:55:36.166784 kernel: PCI: Using configuration type 1 for base access Apr 12 18:55:36.166799 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Apr 12 18:55:36.166833 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages Apr 12 18:55:36.166849 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Apr 12 18:55:36.166863 kernel: ACPI: Added _OSI(Module Device) Apr 12 18:55:36.166876 kernel: ACPI: Added _OSI(Processor Device) Apr 12 18:55:36.166997 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Apr 12 18:55:36.167012 kernel: ACPI: Added _OSI(Processor Aggregator Device) Apr 12 18:55:36.167027 kernel: ACPI: Added _OSI(Linux-Dell-Video) Apr 12 18:55:36.167041 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Apr 12 18:55:36.167055 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Apr 12 18:55:36.167066 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded Apr 12 18:55:36.167084 kernel: ACPI: Interpreter enabled Apr 12 18:55:36.167098 kernel: ACPI: PM: (supports S0 S5) Apr 12 18:55:36.167112 kernel: ACPI: Using IOAPIC for interrupt routing Apr 12 18:55:36.167127 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Apr 12 18:55:36.167141 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F Apr 12 18:55:36.167156 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Apr 12 18:55:36.167395 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Apr 12 18:55:36.167530 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge. 
Apr 12 18:55:36.167553 kernel: acpiphp: Slot [3] registered Apr 12 18:55:36.167569 kernel: acpiphp: Slot [4] registered Apr 12 18:55:36.167584 kernel: acpiphp: Slot [5] registered Apr 12 18:55:36.167599 kernel: acpiphp: Slot [6] registered Apr 12 18:55:36.167614 kernel: acpiphp: Slot [7] registered Apr 12 18:55:36.167628 kernel: acpiphp: Slot [8] registered Apr 12 18:55:36.167643 kernel: acpiphp: Slot [9] registered Apr 12 18:55:36.167657 kernel: acpiphp: Slot [10] registered Apr 12 18:55:36.167672 kernel: acpiphp: Slot [11] registered Apr 12 18:55:36.167689 kernel: acpiphp: Slot [12] registered Apr 12 18:55:36.167703 kernel: acpiphp: Slot [13] registered Apr 12 18:55:36.167718 kernel: acpiphp: Slot [14] registered Apr 12 18:55:36.167732 kernel: acpiphp: Slot [15] registered Apr 12 18:55:36.167747 kernel: acpiphp: Slot [16] registered Apr 12 18:55:36.167762 kernel: acpiphp: Slot [17] registered Apr 12 18:55:36.167776 kernel: acpiphp: Slot [18] registered Apr 12 18:55:36.167790 kernel: acpiphp: Slot [19] registered Apr 12 18:55:36.167805 kernel: acpiphp: Slot [20] registered Apr 12 18:55:36.167851 kernel: acpiphp: Slot [21] registered Apr 12 18:55:36.167862 kernel: acpiphp: Slot [22] registered Apr 12 18:55:36.167873 kernel: acpiphp: Slot [23] registered Apr 12 18:55:36.167884 kernel: acpiphp: Slot [24] registered Apr 12 18:55:36.167896 kernel: acpiphp: Slot [25] registered Apr 12 18:55:36.167908 kernel: acpiphp: Slot [26] registered Apr 12 18:55:36.167919 kernel: acpiphp: Slot [27] registered Apr 12 18:55:36.167931 kernel: acpiphp: Slot [28] registered Apr 12 18:55:36.167944 kernel: acpiphp: Slot [29] registered Apr 12 18:55:36.168154 kernel: acpiphp: Slot [30] registered Apr 12 18:55:36.168178 kernel: acpiphp: Slot [31] registered Apr 12 18:55:36.168190 kernel: PCI host bridge to bus 0000:00 Apr 12 18:55:36.168340 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Apr 12 18:55:36.168451 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff 
window] Apr 12 18:55:36.177392 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Apr 12 18:55:36.177601 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window] Apr 12 18:55:36.177765 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Apr 12 18:55:36.177998 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Apr 12 18:55:36.178144 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 Apr 12 18:55:36.178283 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000 Apr 12 18:55:36.178416 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI Apr 12 18:55:36.179130 kernel: pci 0000:00:01.3: quirk: [io 0xb100-0xb10f] claimed by PIIX4 SMB Apr 12 18:55:36.179341 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff Apr 12 18:55:36.179475 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff Apr 12 18:55:36.179673 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff Apr 12 18:55:36.184747 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff Apr 12 18:55:36.185094 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff Apr 12 18:55:36.185408 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff Apr 12 18:55:36.185778 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000 Apr 12 18:55:36.185941 kernel: pci 0000:00:03.0: reg 0x10: [mem 0xfe400000-0xfe7fffff pref] Apr 12 18:55:36.186133 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref] Apr 12 18:55:36.186290 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Apr 12 18:55:36.186492 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 Apr 12 18:55:36.186803 kernel: pci 0000:00:04.0: reg 0x10: [mem 0xfebf0000-0xfebf3fff] Apr 12 18:55:36.186978 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 Apr 12 18:55:36.187119 kernel: pci 0000:00:05.0: reg 0x10: [mem 0xfebf4000-0xfebf7fff] Apr 12 18:55:36.187140 kernel: ACPI: PCI: 
Interrupt link LNKA configured for IRQ 10 Apr 12 18:55:36.187162 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Apr 12 18:55:36.187178 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Apr 12 18:55:36.187193 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Apr 12 18:55:36.187208 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Apr 12 18:55:36.187223 kernel: iommu: Default domain type: Translated Apr 12 18:55:36.187238 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Apr 12 18:55:36.187369 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device Apr 12 18:55:36.187504 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Apr 12 18:55:36.187641 kernel: pci 0000:00:03.0: vgaarb: bridge control possible Apr 12 18:55:36.187663 kernel: vgaarb: loaded Apr 12 18:55:36.187725 kernel: pps_core: LinuxPPS API ver. 1 registered Apr 12 18:55:36.187742 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Apr 12 18:55:36.187757 kernel: PTP clock support registered Apr 12 18:55:36.187772 kernel: PCI: Using ACPI for IRQ routing Apr 12 18:55:36.187787 kernel: PCI: pci_cache_line_size set to 64 bytes Apr 12 18:55:36.187802 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Apr 12 18:55:36.187830 kernel: e820: reserve RAM buffer [mem 0x7d9ea000-0x7fffffff] Apr 12 18:55:36.187849 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0 Apr 12 18:55:36.187864 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter Apr 12 18:55:36.187879 kernel: clocksource: Switched to clocksource kvm-clock Apr 12 18:55:36.187894 kernel: VFS: Disk quotas dquot_6.6.0 Apr 12 18:55:36.187909 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Apr 12 18:55:36.187923 kernel: pnp: PnP ACPI init Apr 12 18:55:36.187938 kernel: pnp: PnP ACPI: found 5 devices Apr 12 18:55:36.187992 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, 
max_idle_ns: 2085701024 ns Apr 12 18:55:36.188012 kernel: NET: Registered PF_INET protocol family Apr 12 18:55:36.188032 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Apr 12 18:55:36.188047 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Apr 12 18:55:36.188062 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Apr 12 18:55:36.188084 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Apr 12 18:55:36.188099 kernel: TCP bind hash table entries: 16384 (order: 6, 262144 bytes, linear) Apr 12 18:55:36.188115 kernel: TCP: Hash tables configured (established 16384 bind 16384) Apr 12 18:55:36.188130 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Apr 12 18:55:36.188144 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Apr 12 18:55:36.188160 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Apr 12 18:55:36.188177 kernel: NET: Registered PF_XDP protocol family Apr 12 18:55:36.188319 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Apr 12 18:55:36.188673 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Apr 12 18:55:36.188950 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Apr 12 18:55:36.189258 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window] Apr 12 18:55:36.189405 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Apr 12 18:55:36.189549 kernel: pci 0000:00:01.0: Activating ISA DMA hang workarounds Apr 12 18:55:36.189575 kernel: PCI: CLS 0 bytes, default 64 Apr 12 18:55:36.189592 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Apr 12 18:55:36.189678 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns Apr 12 18:55:36.189694 kernel: clocksource: Switched to clocksource tsc Apr 12 18:55:36.189709 kernel: Initialise system 
trusted keyrings Apr 12 18:55:36.189724 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Apr 12 18:55:36.189738 kernel: Key type asymmetric registered Apr 12 18:55:36.189752 kernel: Asymmetric key parser 'x509' registered Apr 12 18:55:36.189767 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Apr 12 18:55:36.189787 kernel: io scheduler mq-deadline registered Apr 12 18:55:36.189802 kernel: io scheduler kyber registered Apr 12 18:55:36.189872 kernel: io scheduler bfq registered Apr 12 18:55:36.189889 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Apr 12 18:55:36.189904 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Apr 12 18:55:36.189919 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Apr 12 18:55:36.189934 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Apr 12 18:55:36.189949 kernel: i8042: Warning: Keylock active Apr 12 18:55:36.189965 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Apr 12 18:55:36.189983 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Apr 12 18:55:36.190141 kernel: rtc_cmos 00:00: RTC can wake from S4 Apr 12 18:55:36.190273 kernel: rtc_cmos 00:00: registered as rtc0 Apr 12 18:55:36.190402 kernel: rtc_cmos 00:00: setting system clock to 2024-04-12T18:55:35 UTC (1712948135) Apr 12 18:55:36.190718 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram Apr 12 18:55:36.190741 kernel: intel_pstate: CPU model not supported Apr 12 18:55:36.190757 kernel: NET: Registered PF_INET6 protocol family Apr 12 18:55:36.190772 kernel: Segment Routing with IPv6 Apr 12 18:55:36.190792 kernel: In-situ OAM (IOAM) with IPv6 Apr 12 18:55:36.190807 kernel: NET: Registered PF_PACKET protocol family Apr 12 18:55:36.190834 kernel: Key type dns_resolver registered Apr 12 18:55:36.190849 kernel: IPI shorthand broadcast: enabled Apr 12 18:55:36.190865 kernel: sched_clock: Marking stable (496407989, 338513185)->(993304667, 
-158383493) Apr 12 18:55:36.190879 kernel: registered taskstats version 1 Apr 12 18:55:36.190894 kernel: Loading compiled-in X.509 certificates Apr 12 18:55:36.191028 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.154-flatcar: 1fa140a38fc6bd27c8b56127e4d1eb4f665c7ec4' Apr 12 18:55:36.191046 kernel: Key type .fscrypt registered Apr 12 18:55:36.191065 kernel: Key type fscrypt-provisioning registered Apr 12 18:55:36.191081 kernel: ima: No TPM chip found, activating TPM-bypass! Apr 12 18:55:36.191096 kernel: ima: Allocated hash algorithm: sha1 Apr 12 18:55:36.191111 kernel: ima: No architecture policies found Apr 12 18:55:36.191127 kernel: Freeing unused kernel image (initmem) memory: 47440K Apr 12 18:55:36.191142 kernel: Write protecting the kernel read-only data: 28672k Apr 12 18:55:36.191158 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Apr 12 18:55:36.191173 kernel: Freeing unused kernel image (rodata/data gap) memory: 628K Apr 12 18:55:36.191188 kernel: Run /init as init process Apr 12 18:55:36.191206 kernel: with arguments: Apr 12 18:55:36.191221 kernel: /init Apr 12 18:55:36.191235 kernel: with environment: Apr 12 18:55:36.191249 kernel: HOME=/ Apr 12 18:55:36.191263 kernel: TERM=linux Apr 12 18:55:36.191278 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Apr 12 18:55:36.191297 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Apr 12 18:55:36.191319 systemd[1]: Detected virtualization amazon. Apr 12 18:55:36.191335 systemd[1]: Detected architecture x86-64. Apr 12 18:55:36.191351 systemd[1]: Running in initrd. Apr 12 18:55:36.191366 systemd[1]: No hostname configured, using default hostname. 
Apr 12 18:55:36.191382 systemd[1]: Hostname set to . Apr 12 18:55:36.191416 systemd[1]: Initializing machine ID from VM UUID. Apr 12 18:55:36.191435 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Apr 12 18:55:36.191452 systemd[1]: Queued start job for default target initrd.target. Apr 12 18:55:36.191468 systemd[1]: Started systemd-ask-password-console.path. Apr 12 18:55:36.191639 systemd[1]: Reached target cryptsetup.target. Apr 12 18:55:36.191694 systemd[1]: Reached target paths.target. Apr 12 18:55:36.191709 systemd[1]: Reached target slices.target. Apr 12 18:55:36.191837 systemd[1]: Reached target swap.target. Apr 12 18:55:36.191856 systemd[1]: Reached target timers.target. Apr 12 18:55:36.191878 systemd[1]: Listening on iscsid.socket. Apr 12 18:55:36.191895 systemd[1]: Listening on iscsiuio.socket. Apr 12 18:55:36.191912 systemd[1]: Listening on systemd-journald-audit.socket. Apr 12 18:55:36.192089 systemd[1]: Listening on systemd-journald-dev-log.socket. Apr 12 18:55:36.192112 systemd[1]: Listening on systemd-journald.socket. Apr 12 18:55:36.192128 systemd[1]: Listening on systemd-networkd.socket. Apr 12 18:55:36.192276 systemd[1]: Listening on systemd-udevd-control.socket. Apr 12 18:55:36.192295 systemd[1]: Listening on systemd-udevd-kernel.socket. Apr 12 18:55:36.192311 systemd[1]: Reached target sockets.target. Apr 12 18:55:36.192331 systemd[1]: Starting kmod-static-nodes.service... Apr 12 18:55:36.192348 systemd[1]: Finished network-cleanup.service. Apr 12 18:55:36.192364 systemd[1]: Starting systemd-fsck-usr.service... Apr 12 18:55:36.192380 systemd[1]: Starting systemd-journald.service... Apr 12 18:55:36.192397 systemd[1]: Starting systemd-modules-load.service... Apr 12 18:55:36.192413 systemd[1]: Starting systemd-resolved.service... 
Apr 12 18:55:36.192436 systemd-journald[185]: Journal started Apr 12 18:55:36.192522 systemd-journald[185]: Runtime Journal (/run/log/journal/ec2a9b693a20ef6ecdec85f44857ad42) is 4.8M, max 38.7M, 33.9M free. Apr 12 18:55:36.201842 systemd[1]: Starting systemd-vconsole-setup.service... Apr 12 18:55:36.201908 systemd[1]: Started systemd-journald.service. Apr 12 18:55:36.206630 systemd[1]: Finished kmod-static-nodes.service. Apr 12 18:55:36.257051 kernel: audit: type=1130 audit(1712948136.204:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:55:36.257089 kernel: audit: type=1130 audit(1712948136.206:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:55:36.257109 kernel: audit: type=1130 audit(1712948136.208:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:55:36.204000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:55:36.206000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:55:36.208000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:55:36.208596 systemd[1]: Finished systemd-fsck-usr.service. Apr 12 18:55:36.461306 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. 
Update your scripts to load br_netfilter if you need this. Apr 12 18:55:36.461352 kernel: Bridge firewalling registered Apr 12 18:55:36.461379 kernel: SCSI subsystem initialized Apr 12 18:55:36.461398 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Apr 12 18:55:36.461414 kernel: device-mapper: uevent: version 1.0.3 Apr 12 18:55:36.461430 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Apr 12 18:55:36.254484 systemd-modules-load[186]: Inserted module 'overlay' Apr 12 18:55:36.264500 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Apr 12 18:55:36.277776 systemd-resolved[187]: Positive Trust Anchors: Apr 12 18:55:36.277795 systemd-resolved[187]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 12 18:55:36.279899 systemd-resolved[187]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Apr 12 18:55:36.486769 kernel: audit: type=1130 audit(1712948136.464:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:55:36.464000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:55:36.294617 systemd-resolved[187]: Defaulting to hostname 'linux'. 
Apr 12 18:55:36.315583 systemd-modules-load[186]: Inserted module 'br_netfilter' Apr 12 18:55:36.369949 systemd-modules-load[186]: Inserted module 'dm_multipath' Apr 12 18:55:36.500555 kernel: audit: type=1130 audit(1712948136.492:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:55:36.492000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:55:36.460372 systemd[1]: Started systemd-resolved.service. Apr 12 18:55:36.486472 systemd[1]: Finished systemd-modules-load.service. Apr 12 18:55:36.501666 systemd[1]: Finished systemd-vconsole-setup.service. Apr 12 18:55:36.504214 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Apr 12 18:55:36.502000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:55:36.506496 systemd[1]: Reached target nss-lookup.target. Apr 12 18:55:36.519075 kernel: audit: type=1130 audit(1712948136.502:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:55:36.519102 kernel: audit: type=1130 audit(1712948136.505:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:55:36.505000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 12 18:55:36.519259 systemd[1]: Starting dracut-cmdline-ask.service... Apr 12 18:55:36.522498 systemd[1]: Starting systemd-sysctl.service... Apr 12 18:55:36.534326 systemd[1]: Finished systemd-sysctl.service. Apr 12 18:55:36.532000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:55:36.542824 kernel: audit: type=1130 audit(1712948136.532:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:55:36.544849 systemd[1]: Finished dracut-cmdline-ask.service. Apr 12 18:55:36.545000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:55:36.555767 kernel: audit: type=1130 audit(1712948136.545:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:55:36.546920 systemd[1]: Starting dracut-cmdline.service... Apr 12 18:55:36.565175 dracut-cmdline[207]: dracut-dracut-053 Apr 12 18:55:36.568286 dracut-cmdline[207]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=189121f7c8c0a24098d3bb1e040d34611f7c276be43815ff7fe409fce185edaf Apr 12 18:55:36.714854 kernel: Loading iSCSI transport class v2.0-870. 
Apr 12 18:55:36.751841 kernel: iscsi: registered transport (tcp) Apr 12 18:55:36.785263 kernel: iscsi: registered transport (qla4xxx) Apr 12 18:55:36.785340 kernel: QLogic iSCSI HBA Driver Apr 12 18:55:36.830764 systemd[1]: Finished dracut-cmdline.service. Apr 12 18:55:36.832406 systemd[1]: Starting dracut-pre-udev.service... Apr 12 18:55:36.829000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:55:36.896869 kernel: raid6: avx512x4 gen() 14145 MB/s Apr 12 18:55:36.914864 kernel: raid6: avx512x4 xor() 6241 MB/s Apr 12 18:55:36.932869 kernel: raid6: avx512x2 gen() 13929 MB/s Apr 12 18:55:36.953195 kernel: raid6: avx512x2 xor() 17096 MB/s Apr 12 18:55:36.972101 kernel: raid6: avx512x1 gen() 11329 MB/s Apr 12 18:55:36.989868 kernel: raid6: avx512x1 xor() 14097 MB/s Apr 12 18:55:37.007867 kernel: raid6: avx2x4 gen() 12398 MB/s Apr 12 18:55:37.025938 kernel: raid6: avx2x4 xor() 5706 MB/s Apr 12 18:55:37.043883 kernel: raid6: avx2x2 gen() 12555 MB/s Apr 12 18:55:37.061865 kernel: raid6: avx2x2 xor() 15301 MB/s Apr 12 18:55:37.079875 kernel: raid6: avx2x1 gen() 11680 MB/s Apr 12 18:55:37.097866 kernel: raid6: avx2x1 xor() 13319 MB/s Apr 12 18:55:37.115852 kernel: raid6: sse2x4 gen() 8408 MB/s Apr 12 18:55:37.134870 kernel: raid6: sse2x4 xor() 1769 MB/s Apr 12 18:55:37.153055 kernel: raid6: sse2x2 gen() 7970 MB/s Apr 12 18:55:37.169866 kernel: raid6: sse2x2 xor() 4253 MB/s Apr 12 18:55:37.187863 kernel: raid6: sse2x1 gen() 7593 MB/s Apr 12 18:55:37.206474 kernel: raid6: sse2x1 xor() 3462 MB/s Apr 12 18:55:37.206538 kernel: raid6: using algorithm avx512x4 gen() 14145 MB/s Apr 12 18:55:37.206551 kernel: raid6: .... 
xor() 6241 MB/s, rmw enabled Apr 12 18:55:37.208433 kernel: raid6: using avx512x2 recovery algorithm Apr 12 18:55:37.238851 kernel: xor: automatically using best checksumming function avx Apr 12 18:55:37.391840 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Apr 12 18:55:37.405034 systemd[1]: Finished dracut-pre-udev.service. Apr 12 18:55:37.409006 systemd[1]: Starting systemd-udevd.service... Apr 12 18:55:37.405000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:55:37.406000 audit: BPF prog-id=7 op=LOAD Apr 12 18:55:37.406000 audit: BPF prog-id=8 op=LOAD Apr 12 18:55:37.430660 systemd-udevd[384]: Using default interface naming scheme 'v252'. Apr 12 18:55:37.439796 systemd[1]: Started systemd-udevd.service. Apr 12 18:55:37.443419 systemd[1]: Starting dracut-pre-trigger.service... Apr 12 18:55:37.440000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:55:37.479836 dracut-pre-trigger[388]: rd.md=0: removing MD RAID activation Apr 12 18:55:37.532249 systemd[1]: Finished dracut-pre-trigger.service. Apr 12 18:55:37.532000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:55:37.535441 systemd[1]: Starting systemd-udev-trigger.service... Apr 12 18:55:37.616657 systemd[1]: Finished systemd-udev-trigger.service. Apr 12 18:55:37.617000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 12 18:55:37.710143 kernel: ena 0000:00:05.0: ENA device version: 0.10 Apr 12 18:55:37.710425 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Apr 12 18:55:37.729864 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy. Apr 12 18:55:37.730092 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem febf4000, mac addr 06:8e:ef:6f:24:61 Apr 12 18:55:37.740759 (udev-worker)[432]: Network interface NamePolicy= disabled on kernel command line. Apr 12 18:55:37.744434 kernel: cryptd: max_cpu_qlen set to 1000 Apr 12 18:55:37.776240 kernel: AVX2 version of gcm_enc/dec engaged. Apr 12 18:55:37.776302 kernel: AES CTR mode by8 optimization enabled Apr 12 18:55:37.783160 kernel: nvme nvme0: pci function 0000:00:04.0 Apr 12 18:55:37.783564 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 Apr 12 18:55:37.797833 kernel: nvme nvme0: 2/0/0 default/read/poll queues Apr 12 18:55:37.801795 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Apr 12 18:55:37.801866 kernel: GPT:9289727 != 16777215 Apr 12 18:55:37.801885 kernel: GPT:Alternate GPT header not at the end of the disk. Apr 12 18:55:37.801904 kernel: GPT:9289727 != 16777215 Apr 12 18:55:37.801920 kernel: GPT: Use GNU Parted to correct GPT errors. Apr 12 18:55:37.801937 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Apr 12 18:55:37.895833 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by (udev-worker) (444) Apr 12 18:55:37.922165 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Apr 12 18:55:38.052194 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Apr 12 18:55:38.080711 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Apr 12 18:55:38.088601 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Apr 12 18:55:38.088807 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. 
Apr 12 18:55:38.094722 systemd[1]: Starting disk-uuid.service... Apr 12 18:55:38.106898 disk-uuid[594]: Primary Header is updated. Apr 12 18:55:38.106898 disk-uuid[594]: Secondary Entries is updated. Apr 12 18:55:38.106898 disk-uuid[594]: Secondary Header is updated. Apr 12 18:55:38.118049 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Apr 12 18:55:38.126620 kernel: GPT:disk_guids don't match. Apr 12 18:55:38.126695 kernel: GPT: Use GNU Parted to correct GPT errors. Apr 12 18:55:38.126713 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Apr 12 18:55:38.135837 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Apr 12 18:55:39.134832 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Apr 12 18:55:39.135106 disk-uuid[595]: The operation has completed successfully. Apr 12 18:55:39.308850 systemd[1]: disk-uuid.service: Deactivated successfully. Apr 12 18:55:39.309000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:55:39.309000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:55:39.308969 systemd[1]: Finished disk-uuid.service. Apr 12 18:55:39.325673 systemd[1]: Starting verity-setup.service... Apr 12 18:55:39.360038 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Apr 12 18:55:39.459138 systemd[1]: Found device dev-mapper-usr.device. Apr 12 18:55:39.461451 systemd[1]: Mounting sysusr-usr.mount... Apr 12 18:55:39.464495 systemd[1]: Finished verity-setup.service. Apr 12 18:55:39.465000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:55:39.587844 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. 
Apr 12 18:55:39.588265 systemd[1]: Mounted sysusr-usr.mount. Apr 12 18:55:39.590012 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Apr 12 18:55:39.593564 systemd[1]: Starting ignition-setup.service... Apr 12 18:55:39.596319 systemd[1]: Starting parse-ip-for-networkd.service... Apr 12 18:55:39.617516 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Apr 12 18:55:39.617577 kernel: BTRFS info (device nvme0n1p6): using free space tree Apr 12 18:55:39.617597 kernel: BTRFS info (device nvme0n1p6): has skinny extents Apr 12 18:55:39.636880 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Apr 12 18:55:39.652891 systemd[1]: mnt-oem.mount: Deactivated successfully. Apr 12 18:55:39.676947 systemd[1]: Finished ignition-setup.service. Apr 12 18:55:39.677000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:55:39.679040 systemd[1]: Starting ignition-fetch-offline.service... Apr 12 18:55:39.708690 systemd[1]: Finished parse-ip-for-networkd.service. Apr 12 18:55:39.707000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:55:39.710000 audit: BPF prog-id=9 op=LOAD Apr 12 18:55:39.712090 systemd[1]: Starting systemd-networkd.service... Apr 12 18:55:39.743207 systemd-networkd[1106]: lo: Link UP Apr 12 18:55:39.743221 systemd-networkd[1106]: lo: Gained carrier Apr 12 18:55:39.743979 systemd-networkd[1106]: Enumeration completed Apr 12 18:55:39.746000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 12 18:55:39.744226 systemd[1]: Started systemd-networkd.service. Apr 12 18:55:39.744756 systemd-networkd[1106]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 12 18:55:39.748161 systemd[1]: Reached target network.target. Apr 12 18:55:39.751356 systemd[1]: Starting iscsiuio.service... Apr 12 18:55:39.762336 systemd-networkd[1106]: eth0: Link UP Apr 12 18:55:39.762979 systemd[1]: Started iscsiuio.service. Apr 12 18:55:39.763020 systemd-networkd[1106]: eth0: Gained carrier Apr 12 18:55:39.770000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:55:39.774655 systemd[1]: Starting iscsid.service... Apr 12 18:55:39.780584 iscsid[1111]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Apr 12 18:55:39.780584 iscsid[1111]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Apr 12 18:55:39.780584 iscsid[1111]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Apr 12 18:55:39.780584 iscsid[1111]: If using hardware iscsi like qla4xxx this message can be ignored. Apr 12 18:55:39.780584 iscsid[1111]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Apr 12 18:55:39.797668 iscsid[1111]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Apr 12 18:55:39.798712 systemd[1]: Started iscsid.service.
Apr 12 18:55:39.802527 systemd-networkd[1106]: eth0: DHCPv4 address 172.31.18.181/20, gateway 172.31.16.1 acquired from 172.31.16.1 Apr 12 18:55:39.801000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:55:39.804006 systemd[1]: Starting dracut-initqueue.service... Apr 12 18:55:39.823384 systemd[1]: Finished dracut-initqueue.service. Apr 12 18:55:39.821000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:55:39.823637 systemd[1]: Reached target remote-fs-pre.target. Apr 12 18:55:39.829238 systemd[1]: Reached target remote-cryptsetup.target. Apr 12 18:55:39.831084 systemd[1]: Reached target remote-fs.target. Apr 12 18:55:39.854001 systemd[1]: Starting dracut-pre-mount.service... Apr 12 18:55:39.880678 systemd[1]: Finished dracut-pre-mount.service. Apr 12 18:55:39.881000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:55:40.363243 ignition[1078]: Ignition 2.14.0 Apr 12 18:55:40.363259 ignition[1078]: Stage: fetch-offline Apr 12 18:55:40.363400 ignition[1078]: reading system config file "/usr/lib/ignition/base.d/base.ign" Apr 12 18:55:40.363442 ignition[1078]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Apr 12 18:55:40.413433 ignition[1078]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Apr 12 18:55:40.415853 ignition[1078]: Ignition finished successfully Apr 12 18:55:40.418244 systemd[1]: Finished ignition-fetch-offline.service. 
Apr 12 18:55:40.442303 kernel: kauditd_printk_skb: 18 callbacks suppressed Apr 12 18:55:40.442434 kernel: audit: type=1130 audit(1712948140.416:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:55:40.416000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:55:40.425065 systemd[1]: Starting ignition-fetch.service... Apr 12 18:55:40.447068 ignition[1130]: Ignition 2.14.0 Apr 12 18:55:40.447081 ignition[1130]: Stage: fetch Apr 12 18:55:40.447372 ignition[1130]: reading system config file "/usr/lib/ignition/base.d/base.ign" Apr 12 18:55:40.447402 ignition[1130]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Apr 12 18:55:40.462932 ignition[1130]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Apr 12 18:55:40.465448 ignition[1130]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Apr 12 18:55:40.474120 ignition[1130]: INFO : PUT result: OK Apr 12 18:55:40.479674 ignition[1130]: DEBUG : parsed url from cmdline: "" Apr 12 18:55:40.479674 ignition[1130]: INFO : no config URL provided Apr 12 18:55:40.479674 ignition[1130]: INFO : reading system config file "/usr/lib/ignition/user.ign" Apr 12 18:55:40.479674 ignition[1130]: INFO : no config at "/usr/lib/ignition/user.ign" Apr 12 18:55:40.489057 ignition[1130]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Apr 12 18:55:40.489057 ignition[1130]: INFO : PUT result: OK Apr 12 18:55:40.489057 ignition[1130]: INFO : GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Apr 12 18:55:40.494029 ignition[1130]: INFO : GET result: OK Apr 12 18:55:40.495414 ignition[1130]: DEBUG : 
parsing config with SHA512: 3df9bac4e9bdd0cc03e7803c6f636b1fb0583d054dfbd8ea19fc9817222287043962622a8384332eee6c41737345d9e6e10243deaad56f20b62156bbee1c810c Apr 12 18:55:40.538567 unknown[1130]: fetched base config from "system" Apr 12 18:55:40.538824 unknown[1130]: fetched base config from "system" Apr 12 18:55:40.538832 unknown[1130]: fetched user config from "aws" Apr 12 18:55:40.541181 ignition[1130]: fetch: fetch complete Apr 12 18:55:40.541187 ignition[1130]: fetch: fetch passed Apr 12 18:55:40.541236 ignition[1130]: Ignition finished successfully Apr 12 18:55:40.550833 systemd[1]: Finished ignition-fetch.service. Apr 12 18:55:40.551000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:55:40.554221 systemd[1]: Starting ignition-kargs.service... Apr 12 18:55:40.568550 kernel: audit: type=1130 audit(1712948140.551:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 12 18:55:40.590207 ignition[1136]: Ignition 2.14.0 Apr 12 18:55:40.590222 ignition[1136]: Stage: kargs Apr 12 18:55:40.590447 ignition[1136]: reading system config file "/usr/lib/ignition/base.d/base.ign" Apr 12 18:55:40.590476 ignition[1136]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Apr 12 18:55:40.609984 ignition[1136]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Apr 12 18:55:40.611656 ignition[1136]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Apr 12 18:55:40.613400 ignition[1136]: INFO : PUT result: OK Apr 12 18:55:40.616743 ignition[1136]: kargs: kargs passed Apr 12 18:55:40.616803 ignition[1136]: Ignition finished successfully Apr 12 18:55:40.619469 systemd[1]: Finished ignition-kargs.service. Apr 12 18:55:40.628764 kernel: audit: type=1130 audit(1712948140.617:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:55:40.617000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:55:40.620781 systemd[1]: Starting ignition-disks.service... 
Apr 12 18:55:40.633117 ignition[1142]: Ignition 2.14.0 Apr 12 18:55:40.633131 ignition[1142]: Stage: disks Apr 12 18:55:40.633332 ignition[1142]: reading system config file "/usr/lib/ignition/base.d/base.ign" Apr 12 18:55:40.633362 ignition[1142]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Apr 12 18:55:40.642714 ignition[1142]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Apr 12 18:55:40.644301 ignition[1142]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Apr 12 18:55:40.645940 ignition[1142]: INFO : PUT result: OK Apr 12 18:55:40.649544 ignition[1142]: disks: disks passed Apr 12 18:55:40.649612 ignition[1142]: Ignition finished successfully Apr 12 18:55:40.650638 systemd[1]: Finished ignition-disks.service. Apr 12 18:55:40.653000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:55:40.655960 systemd[1]: Reached target initrd-root-device.target. Apr 12 18:55:40.666545 kernel: audit: type=1130 audit(1712948140.653:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:55:40.664319 systemd[1]: Reached target local-fs-pre.target. Apr 12 18:55:40.666549 systemd[1]: Reached target local-fs.target. Apr 12 18:55:40.667565 systemd[1]: Reached target sysinit.target. Apr 12 18:55:40.669543 systemd[1]: Reached target basic.target. Apr 12 18:55:40.672553 systemd[1]: Starting systemd-fsck-root.service... Apr 12 18:55:40.715376 systemd-fsck[1150]: ROOT: clean, 612/553520 files, 56019/553472 blocks Apr 12 18:55:40.723622 systemd[1]: Finished systemd-fsck-root.service. 
Apr 12 18:55:40.731422 kernel: audit: type=1130 audit(1712948140.723:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:55:40.723000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:55:40.730738 systemd[1]: Mounting sysroot.mount... Apr 12 18:55:40.747834 kernel: EXT4-fs (nvme0n1p9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Apr 12 18:55:40.749051 systemd[1]: Mounted sysroot.mount. Apr 12 18:55:40.749266 systemd[1]: Reached target initrd-root-fs.target. Apr 12 18:55:40.765130 systemd[1]: Mounting sysroot-usr.mount... Apr 12 18:55:40.772073 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Apr 12 18:55:40.772612 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Apr 12 18:55:40.772658 systemd[1]: Reached target ignition-diskful.target. Apr 12 18:55:40.792676 systemd[1]: Mounted sysroot-usr.mount. Apr 12 18:55:40.810117 systemd[1]: Mounting sysroot-usr-share-oem.mount... Apr 12 18:55:40.825252 systemd[1]: Starting initrd-setup-root.service... 
Apr 12 18:55:40.835286 initrd-setup-root[1172]: cut: /sysroot/etc/passwd: No such file or directory Apr 12 18:55:40.847859 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1167) Apr 12 18:55:40.865635 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Apr 12 18:55:40.865886 kernel: BTRFS info (device nvme0n1p6): using free space tree Apr 12 18:55:40.865910 kernel: BTRFS info (device nvme0n1p6): has skinny extents Apr 12 18:55:40.869377 initrd-setup-root[1196]: cut: /sysroot/etc/group: No such file or directory Apr 12 18:55:40.881843 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Apr 12 18:55:40.885734 systemd[1]: Mounted sysroot-usr-share-oem.mount. Apr 12 18:55:40.887338 initrd-setup-root[1206]: cut: /sysroot/etc/shadow: No such file or directory Apr 12 18:55:40.903008 initrd-setup-root[1214]: cut: /sysroot/etc/gshadow: No such file or directory Apr 12 18:55:41.137832 systemd[1]: Finished initrd-setup-root.service. Apr 12 18:55:41.153751 kernel: audit: type=1130 audit(1712948141.138:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:55:41.138000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:55:41.141384 systemd[1]: Starting ignition-mount.service... Apr 12 18:55:41.153497 systemd[1]: Starting sysroot-boot.service... Apr 12 18:55:41.162724 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. Apr 12 18:55:41.162872 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. Apr 12 18:55:41.216899 systemd[1]: Finished sysroot-boot.service. 
Apr 12 18:55:41.229874 kernel: audit: type=1130 audit(1712948141.215:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:55:41.215000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:55:41.232800 ignition[1234]: INFO : Ignition 2.14.0 Apr 12 18:55:41.232800 ignition[1234]: INFO : Stage: mount Apr 12 18:55:41.235598 ignition[1234]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Apr 12 18:55:41.235598 ignition[1234]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Apr 12 18:55:41.263271 ignition[1234]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Apr 12 18:55:41.269186 ignition[1234]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Apr 12 18:55:41.274721 ignition[1234]: INFO : PUT result: OK Apr 12 18:55:41.278826 ignition[1234]: INFO : mount: mount passed Apr 12 18:55:41.280125 ignition[1234]: INFO : Ignition finished successfully Apr 12 18:55:41.282362 systemd[1]: Finished ignition-mount.service. Apr 12 18:55:41.283642 systemd[1]: Starting ignition-files.service... Apr 12 18:55:41.280000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:55:41.291837 kernel: audit: type=1130 audit(1712948141.280:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:55:41.297004 systemd[1]: Mounting sysroot-usr-share-oem.mount... 
Apr 12 18:55:41.316150 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/nvme0n1p6 scanned by mount (1242)
Apr 12 18:55:41.326522 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Apr 12 18:55:41.326602 kernel: BTRFS info (device nvme0n1p6): using free space tree
Apr 12 18:55:41.326631 kernel: BTRFS info (device nvme0n1p6): has skinny extents
Apr 12 18:55:41.334836 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Apr 12 18:55:41.337317 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Apr 12 18:55:41.378079 ignition[1261]: INFO : Ignition 2.14.0
Apr 12 18:55:41.378079 ignition[1261]: INFO : Stage: files
Apr 12 18:55:41.387432 ignition[1261]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Apr 12 18:55:41.387432 ignition[1261]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Apr 12 18:55:41.416886 ignition[1261]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Apr 12 18:55:41.422752 ignition[1261]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Apr 12 18:55:41.422752 ignition[1261]: INFO : PUT result: OK
Apr 12 18:55:41.429142 ignition[1261]: DEBUG : files: compiled without relabeling support, skipping
Apr 12 18:55:41.441821 ignition[1261]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Apr 12 18:55:41.441821 ignition[1261]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Apr 12 18:55:41.489569 ignition[1261]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Apr 12 18:55:41.491920 ignition[1261]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Apr 12 18:55:41.495436 unknown[1261]: wrote ssh authorized keys file for user: core
Apr 12 18:55:41.498069 ignition[1261]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Apr 12 18:55:41.501161 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.3.0.tgz"
Apr 12 18:55:41.506659 ignition[1261]: INFO : GET https://github.com/containernetworking/plugins/releases/download/v1.3.0/cni-plugins-linux-amd64-v1.3.0.tgz: attempt #1
Apr 12 18:55:41.688000 systemd-networkd[1106]: eth0: Gained IPv6LL
Apr 12 18:55:42.000517 ignition[1261]: INFO : GET result: OK
Apr 12 18:55:42.415775 ignition[1261]: DEBUG : file matches expected sum of: 5d0324ca8a3c90c680b6e1fddb245a2255582fa15949ba1f3c6bb7323df9d3af754dae98d6e40ac9ccafb2999c932df2c4288d418949a4915d928eb23c090540
Apr 12 18:55:42.420769 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.3.0.tgz"
Apr 12 18:55:42.420769 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Apr 12 18:55:42.420769 ignition[1261]: INFO : GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Apr 12 18:55:42.467233 ignition[1261]: INFO : GET result: OK
Apr 12 18:55:42.588034 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Apr 12 18:55:42.590965 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/crictl-v1.27.0-linux-amd64.tar.gz"
Apr 12 18:55:42.590965 ignition[1261]: INFO : GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.27.0/crictl-v1.27.0-linux-amd64.tar.gz: attempt #1
Apr 12 18:55:42.986047 ignition[1261]: INFO : GET result: OK
Apr 12 18:55:43.163934 ignition[1261]: DEBUG : file matches expected sum of: aa622325bf05520939f9e020d7a28ab48ac23e2fae6f47d5a4e52174c88c1ebc31b464853e4fd65bd8f5331f330a6ca96fd370d247d3eeaed042da4ee2d1219a
Apr 12 18:55:43.167630 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/crictl-v1.27.0-linux-amd64.tar.gz"
Apr 12 18:55:43.167630 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/bin/kubectl"
Apr 12 18:55:43.173318 ignition[1261]: INFO : GET https://dl.k8s.io/release/v1.29.2/bin/linux/amd64/kubectl: attempt #1
Apr 12 18:55:43.301743 ignition[1261]: INFO : GET result: OK
Apr 12 18:55:43.720508 ignition[1261]: DEBUG : file matches expected sum of: a2de71807eb4c41f4d70e5c47fac72ecf3c74984be6c08be0597fc58621baeeddc1b5cc6431ab007eee9bd0a98f8628dd21512b06daaeccfac5837e9792a98a7
Apr 12 18:55:43.724249 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/bin/kubectl"
Apr 12 18:55:43.724249 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/etc/docker/daemon.json"
Apr 12 18:55:43.724249 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/etc/docker/daemon.json"
Apr 12 18:55:43.724249 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/eks/bootstrap.sh"
Apr 12 18:55:43.724249 ignition[1261]: INFO : oem config not found in "/usr/share/oem", looking on oem partition
Apr 12 18:55:43.760189 ignition[1261]: INFO : op(1): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3106352515"
Apr 12 18:55:43.769780 kernel: BTRFS info: devid 1 device path /dev/nvme0n1p6 changed to /dev/disk/by-label/OEM scanned by ignition (1264)
Apr 12 18:55:43.770094 ignition[1261]: CRITICAL : op(1): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3106352515": device or resource busy
Apr 12 18:55:43.770094 ignition[1261]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem3106352515", trying btrfs: device or resource busy
Apr 12 18:55:43.770094 ignition[1261]: INFO : op(2): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3106352515"
Apr 12 18:55:43.792680 ignition[1261]: INFO : op(2): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3106352515"
Apr 12 18:55:43.792680 ignition[1261]: INFO : op(3): [started] unmounting "/mnt/oem3106352515"
Apr 12 18:55:43.801514 ignition[1261]: INFO : op(3): [finished] unmounting "/mnt/oem3106352515"
Apr 12 18:55:43.801514 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/eks/bootstrap.sh"
Apr 12 18:55:43.801514 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/opt/bin/kubeadm"
Apr 12 18:55:43.801514 ignition[1261]: INFO : GET https://dl.k8s.io/release/v1.29.2/bin/linux/amd64/kubeadm: attempt #1
Apr 12 18:55:43.815010 systemd[1]: mnt-oem3106352515.mount: Deactivated successfully.
Apr 12 18:55:43.857297 ignition[1261]: INFO : GET result: OK
Apr 12 18:55:44.211018 ignition[1261]: DEBUG : file matches expected sum of: 4261cb0319688a0557b3052cce8df9d754abc38d5fc8e0eeeb63a85a2194895fdca5bad464f8516459ed7b1764d7bbb2304f5f434d42bb35f38764b4b00ce663
Apr 12 18:55:44.215493 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/opt/bin/kubeadm"
Apr 12 18:55:44.215493 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/bin/kubelet"
Apr 12 18:55:44.215493 ignition[1261]: INFO : GET https://dl.k8s.io/release/v1.29.2/bin/linux/amd64/kubelet: attempt #1
Apr 12 18:55:44.276509 ignition[1261]: INFO : GET result: OK
Apr 12 18:55:45.091979 ignition[1261]: DEBUG : file matches expected sum of: d3fef1d4b99415179ecb94d4de953bddb74c0fb0f798265829b899bb031e2ab8c2b60037b79a66405a9b102d3db0d90e9257595f4b11660356de0e2e63744cd7
Apr 12 18:55:45.098287 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/bin/kubelet"
Apr 12 18:55:45.101034 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/etc/flatcar/update.conf"
Apr 12 18:55:45.104424 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Apr 12 18:55:45.107693 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 12 18:55:45.110407 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 12 18:55:45.110407 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(d): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 12 18:55:45.118798 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(d): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 12 18:55:45.123462 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Apr 12 18:55:45.127642 ignition[1261]: INFO : GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Apr 12 18:55:45.560498 ignition[1261]: INFO : GET result: OK
Apr 12 18:55:45.725209 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Apr 12 18:55:45.728924 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/home/core/install.sh"
Apr 12 18:55:45.728924 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/home/core/install.sh"
Apr 12 18:55:45.728924 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(10): [started] writing file "/sysroot/home/core/nginx.yaml"
Apr 12 18:55:45.728924 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(10): [finished] writing file "/sysroot/home/core/nginx.yaml"
Apr 12 18:55:45.752350 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(11): [started] writing file "/sysroot/etc/systemd/system/nvidia.service"
Apr 12 18:55:45.755763 ignition[1261]: INFO : oem config not found in "/usr/share/oem", looking on oem partition
Apr 12 18:55:45.774947 ignition[1261]: INFO : op(4): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2268599775"
Apr 12 18:55:45.779646 ignition[1261]: CRITICAL : op(4): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2268599775": device or resource busy
Apr 12 18:55:45.779646 ignition[1261]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem2268599775", trying btrfs: device or resource busy
Apr 12 18:55:45.792963 ignition[1261]: INFO : op(5): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2268599775"
Apr 12 18:55:45.792963 ignition[1261]: INFO : op(5): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2268599775"
Apr 12 18:55:45.792963 ignition[1261]: INFO : op(6): [started] unmounting "/mnt/oem2268599775"
Apr 12 18:55:45.799335 ignition[1261]: INFO : op(6): [finished] unmounting "/mnt/oem2268599775"
Apr 12 18:55:45.806590 systemd[1]: mnt-oem2268599775.mount: Deactivated successfully.
Apr 12 18:55:45.810421 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(11): [finished] writing file "/sysroot/etc/systemd/system/nvidia.service"
Apr 12 18:55:45.814212 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(12): [started] writing file "/sysroot/etc/amazon/ssm/seelog.xml"
Apr 12 18:55:45.814212 ignition[1261]: INFO : oem config not found in "/usr/share/oem", looking on oem partition
Apr 12 18:55:45.822613 ignition[1261]: INFO : op(7): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3312104426"
Apr 12 18:55:45.824904 ignition[1261]: CRITICAL : op(7): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3312104426": device or resource busy
Apr 12 18:55:45.824904 ignition[1261]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem3312104426", trying btrfs: device or resource busy
Apr 12 18:55:45.824904 ignition[1261]: INFO : op(8): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3312104426"
Apr 12 18:55:45.832669 ignition[1261]: INFO : op(8): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3312104426"
Apr 12 18:55:45.832669 ignition[1261]: INFO : op(9): [started] unmounting "/mnt/oem3312104426"
Apr 12 18:55:45.832669 ignition[1261]: INFO : op(9): [finished] unmounting "/mnt/oem3312104426"
Apr 12 18:55:45.832669 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(12): [finished] writing file "/sysroot/etc/amazon/ssm/seelog.xml"
Apr 12 18:55:45.832669 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(13): [started] writing file "/sysroot/etc/amazon/ssm/amazon-ssm-agent.json"
Apr 12 18:55:45.832669 ignition[1261]: INFO : oem config not found in "/usr/share/oem", looking on oem partition
Apr 12 18:55:45.873020 ignition[1261]: INFO : op(a): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem80711171"
Apr 12 18:55:45.877622 ignition[1261]: CRITICAL : op(a): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem80711171": device or resource busy
Apr 12 18:55:45.877622 ignition[1261]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem80711171", trying btrfs: device or resource busy
Apr 12 18:55:45.877622 ignition[1261]: INFO : op(b): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem80711171"
Apr 12 18:55:45.903358 ignition[1261]: INFO : op(b): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem80711171"
Apr 12 18:55:45.903358 ignition[1261]: INFO : op(c): [started] unmounting "/mnt/oem80711171"
Apr 12 18:55:45.908318 ignition[1261]: INFO : op(c): [finished] unmounting "/mnt/oem80711171"
Apr 12 18:55:45.908318 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(13): [finished] writing file "/sysroot/etc/amazon/ssm/amazon-ssm-agent.json"
Apr 12 18:55:45.908318 ignition[1261]: INFO : files: op(14): [started] processing unit "nvidia.service"
Apr 12 18:55:45.908318 ignition[1261]: INFO : files: op(14): [finished] processing unit "nvidia.service"
Apr 12 18:55:45.908318 ignition[1261]: INFO : files: op(15): [started] processing unit "coreos-metadata-sshkeys@.service"
Apr 12 18:55:45.908318 ignition[1261]: INFO : files: op(15): [finished] processing unit "coreos-metadata-sshkeys@.service"
Apr 12 18:55:45.908318 ignition[1261]: INFO : files: op(16): [started] processing unit "amazon-ssm-agent.service"
Apr 12 18:55:45.908318 ignition[1261]: INFO : files: op(16): op(17): [started] writing unit "amazon-ssm-agent.service" at "/sysroot/etc/systemd/system/amazon-ssm-agent.service"
Apr 12 18:55:45.908318 ignition[1261]: INFO : files: op(16): op(17): [finished] writing unit "amazon-ssm-agent.service" at "/sysroot/etc/systemd/system/amazon-ssm-agent.service"
Apr 12 18:55:45.908318 ignition[1261]: INFO : files: op(16): [finished] processing unit "amazon-ssm-agent.service"
Apr 12 18:55:45.908318 ignition[1261]: INFO : files: op(18): [started] processing unit "prepare-critools.service"
Apr 12 18:55:45.908318 ignition[1261]: INFO : files: op(18): op(19): [started] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service"
Apr 12 18:55:45.908318 ignition[1261]: INFO : files: op(18): op(19): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service"
Apr 12 18:55:45.908318 ignition[1261]: INFO : files: op(18): [finished] processing unit "prepare-critools.service"
Apr 12 18:55:45.908318 ignition[1261]: INFO : files: op(1a): [started] processing unit "prepare-helm.service"
Apr 12 18:55:45.908318 ignition[1261]: INFO : files: op(1a): op(1b): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 12 18:55:45.908318 ignition[1261]: INFO : files: op(1a): op(1b): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 12 18:55:45.908318 ignition[1261]: INFO : files: op(1a): [finished] processing unit "prepare-helm.service"
Apr 12 18:55:45.908318 ignition[1261]: INFO : files: op(1c): [started] processing unit "prepare-cni-plugins.service"
Apr 12 18:55:45.974051 kernel: audit: type=1130 audit(1712948145.947:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:55:45.947000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:55:45.974207 ignition[1261]: INFO : files: op(1c): op(1d): [started] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service"
Apr 12 18:55:45.974207 ignition[1261]: INFO : files: op(1c): op(1d): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service"
Apr 12 18:55:45.974207 ignition[1261]: INFO : files: op(1c): [finished] processing unit "prepare-cni-plugins.service"
Apr 12 18:55:45.974207 ignition[1261]: INFO : files: op(1e): [started] setting preset to enabled for "amazon-ssm-agent.service"
Apr 12 18:55:45.974207 ignition[1261]: INFO : files: op(1e): [finished] setting preset to enabled for "amazon-ssm-agent.service"
Apr 12 18:55:45.974207 ignition[1261]: INFO : files: op(1f): [started] setting preset to enabled for "prepare-critools.service"
Apr 12 18:55:45.974207 ignition[1261]: INFO : files: op(1f): [finished] setting preset to enabled for "prepare-critools.service"
Apr 12 18:55:45.974207 ignition[1261]: INFO : files: op(20): [started] setting preset to enabled for "prepare-helm.service"
Apr 12 18:55:45.974207 ignition[1261]: INFO : files: op(20): [finished] setting preset to enabled for "prepare-helm.service"
Apr 12 18:55:45.974207 ignition[1261]: INFO : files: op(21): [started] setting preset to enabled for "prepare-cni-plugins.service"
Apr 12 18:55:45.974207 ignition[1261]: INFO : files: op(21): [finished] setting preset to enabled for "prepare-cni-plugins.service"
Apr 12 18:55:45.974207 ignition[1261]: INFO : files: op(22): [started] setting preset to enabled for "nvidia.service"
Apr 12 18:55:45.974207 ignition[1261]: INFO : files: op(22): [finished] setting preset to enabled for "nvidia.service"
Apr 12 18:55:45.974207 ignition[1261]: INFO : files: op(23): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service "
Apr 12 18:55:45.974207 ignition[1261]: INFO : files: op(23): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service "
Apr 12 18:55:45.974207 ignition[1261]: INFO : files: createResultFile: createFiles: op(24): [started] writing file "/sysroot/etc/.ignition-result.json"
Apr 12 18:55:45.974207 ignition[1261]: INFO : files: createResultFile: createFiles: op(24): [finished] writing file "/sysroot/etc/.ignition-result.json"
Apr 12 18:55:45.974207 ignition[1261]: INFO : files: files passed
Apr 12 18:55:45.974207 ignition[1261]: INFO : Ignition finished successfully
Apr 12 18:55:46.119166 kernel: audit: type=1130 audit(1712948146.041:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:55:46.119202 kernel: audit: type=1131 audit(1712948146.041:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:55:46.119216 kernel: audit: type=1130 audit(1712948146.061:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:55:46.041000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:55:46.041000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:55:46.061000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:55:45.908607 systemd[1]: mnt-oem80711171.mount: Deactivated successfully.
Apr 12 18:55:45.946437 systemd[1]: Finished ignition-files.service.
Apr 12 18:55:46.125379 initrd-setup-root-after-ignition[1286]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 12 18:55:45.957061 systemd[1]: Starting initrd-setup-root-after-ignition.service...
Apr 12 18:55:45.965377 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
Apr 12 18:55:45.966496 systemd[1]: Starting ignition-quench.service...
Apr 12 18:55:46.021763 systemd[1]: ignition-quench.service: Deactivated successfully.
Apr 12 18:55:46.022600 systemd[1]: Finished ignition-quench.service.
Apr 12 18:55:46.055578 systemd[1]: Finished initrd-setup-root-after-ignition.service.
Apr 12 18:55:46.063145 systemd[1]: Reached target ignition-complete.target.
Apr 12 18:55:46.110042 systemd[1]: Starting initrd-parse-etc.service...
Apr 12 18:55:46.193493 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Apr 12 18:55:46.215312 kernel: audit: type=1130 audit(1712948146.194:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:55:46.216748 kernel: audit: type=1131 audit(1712948146.202:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:55:46.194000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:55:46.202000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:55:46.193740 systemd[1]: Finished initrd-parse-etc.service.
Apr 12 18:55:46.204034 systemd[1]: Reached target initrd-fs.target.
Apr 12 18:55:46.216782 systemd[1]: Reached target initrd.target.
Apr 12 18:55:46.218439 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
Apr 12 18:55:46.219385 systemd[1]: Starting dracut-pre-pivot.service...
Apr 12 18:55:46.270413 systemd[1]: Finished dracut-pre-pivot.service.
Apr 12 18:55:46.271801 systemd[1]: Starting initrd-cleanup.service...
Apr 12 18:55:46.268000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:55:46.281947 kernel: audit: type=1130 audit(1712948146.268:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:55:46.289590 systemd[1]: Stopped target nss-lookup.target.
Apr 12 18:55:46.290942 systemd[1]: Stopped target remote-cryptsetup.target.
Apr 12 18:55:46.294446 systemd[1]: Stopped target timers.target.
Apr 12 18:55:46.302946 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Apr 12 18:55:46.304929 systemd[1]: Stopped dracut-pre-pivot.service.
Apr 12 18:55:46.312000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:55:46.315058 systemd[1]: Stopped target initrd.target.
Apr 12 18:55:46.324847 kernel: audit: type=1131 audit(1712948146.312:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:55:46.327276 systemd[1]: Stopped target basic.target.
Apr 12 18:55:46.327990 systemd[1]: Stopped target ignition-complete.target.
Apr 12 18:55:46.329081 systemd[1]: Stopped target ignition-diskful.target.
Apr 12 18:55:46.333822 systemd[1]: Stopped target initrd-root-device.target.
Apr 12 18:55:46.337509 systemd[1]: Stopped target remote-fs.target.
Apr 12 18:55:46.342236 systemd[1]: Stopped target remote-fs-pre.target.
Apr 12 18:55:46.344211 systemd[1]: Stopped target sysinit.target.
Apr 12 18:55:46.346383 systemd[1]: Stopped target local-fs.target.
Apr 12 18:55:46.353583 systemd[1]: Stopped target local-fs-pre.target.
Apr 12 18:55:46.355139 systemd[1]: Stopped target swap.target.
Apr 12 18:55:46.357873 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Apr 12 18:55:46.367881 kernel: audit: type=1131 audit(1712948146.359:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:55:46.359000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:55:46.358045 systemd[1]: Stopped dracut-pre-mount.service.
Apr 12 18:55:46.367870 systemd[1]: Stopped target cryptsetup.target.
Apr 12 18:55:46.370430 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Apr 12 18:55:46.370717 systemd[1]: Stopped dracut-initqueue.service.
Apr 12 18:55:46.375000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:55:46.376762 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Apr 12 18:55:46.387971 kernel: audit: type=1131 audit(1712948146.375:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:55:46.377076 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
Apr 12 18:55:46.389655 systemd[1]: ignition-files.service: Deactivated successfully.
Apr 12 18:55:46.388000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:55:46.391997 systemd[1]: Stopped ignition-files.service.
Apr 12 18:55:46.397249 systemd[1]: Stopping ignition-mount.service...
Apr 12 18:55:46.393000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:55:46.400372 systemd[1]: Stopping sysroot-boot.service...
Apr 12 18:55:46.417084 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Apr 12 18:55:46.423165 systemd[1]: Stopped systemd-udev-trigger.service.
Apr 12 18:55:46.451723 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Apr 12 18:55:46.447000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:55:46.452238 systemd[1]: Stopped dracut-pre-trigger.service.
Apr 12 18:55:46.457000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:55:46.459913 ignition[1299]: INFO : Ignition 2.14.0
Apr 12 18:55:46.459913 ignition[1299]: INFO : Stage: umount
Apr 12 18:55:46.462860 ignition[1299]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Apr 12 18:55:46.462860 ignition[1299]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Apr 12 18:55:46.465542 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Apr 12 18:55:46.465725 systemd[1]: Finished initrd-cleanup.service.
Apr 12 18:55:46.488000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:55:46.488000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:55:46.512540 ignition[1299]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Apr 12 18:55:46.515610 ignition[1299]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Apr 12 18:55:46.521759 ignition[1299]: INFO : PUT result: OK
Apr 12 18:55:46.527522 ignition[1299]: INFO : umount: umount passed
Apr 12 18:55:46.533672 ignition[1299]: INFO : Ignition finished successfully
Apr 12 18:55:46.536394 systemd[1]: ignition-mount.service: Deactivated successfully.
Apr 12 18:55:46.536498 systemd[1]: Stopped ignition-mount.service.
Apr 12 18:55:46.537000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:55:46.540226 systemd[1]: ignition-disks.service: Deactivated successfully.
Apr 12 18:55:46.540374 systemd[1]: Stopped ignition-disks.service.
Apr 12 18:55:46.541000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:55:46.543703 systemd[1]: ignition-kargs.service: Deactivated successfully.
Apr 12 18:55:46.544000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:55:46.544991 systemd[1]: Stopped ignition-kargs.service.
Apr 12 18:55:46.547324 systemd[1]: ignition-fetch.service: Deactivated successfully.
Apr 12 18:55:46.547000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:55:46.547396 systemd[1]: Stopped ignition-fetch.service.
Apr 12 18:55:46.550642 systemd[1]: Stopped target network.target.
Apr 12 18:55:46.552736 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Apr 12 18:55:46.553760 systemd[1]: Stopped ignition-fetch-offline.service.
Apr 12 18:55:46.554000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:55:46.556675 systemd[1]: Stopped target paths.target.
Apr 12 18:55:46.560142 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Apr 12 18:55:46.564958 systemd[1]: Stopped systemd-ask-password-console.path.
Apr 12 18:55:46.565093 systemd[1]: Stopped target slices.target.
Apr 12 18:55:46.568669 systemd[1]: Stopped target sockets.target.
Apr 12 18:55:46.568805 systemd[1]: iscsid.socket: Deactivated successfully.
Apr 12 18:55:46.568864 systemd[1]: Closed iscsid.socket.
Apr 12 18:55:46.578000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:55:46.571920 systemd[1]: iscsiuio.socket: Deactivated successfully.
Apr 12 18:55:46.571962 systemd[1]: Closed iscsiuio.socket.
Apr 12 18:55:46.574465 systemd[1]: ignition-setup.service: Deactivated successfully.
Apr 12 18:55:46.574643 systemd[1]: Stopped ignition-setup.service.
Apr 12 18:55:46.587000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:55:46.589000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:55:46.580412 systemd[1]: Stopping systemd-networkd.service...
Apr 12 18:55:46.592000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:55:46.586337 systemd[1]: Stopping systemd-resolved.service...
Apr 12 18:55:46.587518 systemd-networkd[1106]: eth0: DHCPv6 lease lost
Apr 12 18:55:46.587941 systemd[1]: sysroot-boot.service: Deactivated successfully.
Apr 12 18:55:46.601000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:55:46.607000 audit: BPF prog-id=9 op=UNLOAD
Apr 12 18:55:46.610000 audit: BPF prog-id=6 op=UNLOAD
Apr 12 18:55:46.588056 systemd[1]: Stopped sysroot-boot.service.
Apr 12 18:55:46.589548 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Apr 12 18:55:46.589597 systemd[1]: Stopped initrd-setup-root.service.
Apr 12 18:55:46.591398 systemd[1]: systemd-networkd.service: Deactivated successfully.
Apr 12 18:55:46.591501 systemd[1]: Stopped systemd-networkd.service.
Apr 12 18:55:46.596197 systemd[1]: systemd-resolved.service: Deactivated successfully.
Apr 12 18:55:46.596305 systemd[1]: Stopped systemd-resolved.service.
Apr 12 18:55:46.611576 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Apr 12 18:55:46.645000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:55:46.611641 systemd[1]: Closed systemd-networkd.socket.
Apr 12 18:55:46.651000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:55:46.641626 systemd[1]: Stopping network-cleanup.service...
Apr 12 18:55:46.645840 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Apr 12 18:55:46.654000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:55:46.645944 systemd[1]: Stopped parse-ip-for-networkd.service.
Apr 12 18:55:46.648591 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Apr 12 18:55:46.648645 systemd[1]: Stopped systemd-sysctl.service.
Apr 12 18:55:46.654134 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Apr 12 18:55:46.654184 systemd[1]: Stopped systemd-modules-load.service.
Apr 12 18:55:46.658059 systemd[1]: Stopping systemd-udevd.service...
Apr 12 18:55:46.667200 systemd[1]: systemd-udevd.service: Deactivated successfully.
Apr 12 18:55:46.672000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:55:46.667389 systemd[1]: Stopped systemd-udevd.service.
Apr 12 18:55:46.676000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:55:46.675751 systemd[1]: network-cleanup.service: Deactivated successfully.
Apr 12 18:55:46.676049 systemd[1]: Stopped network-cleanup.service.
Apr 12 18:55:46.678631 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Apr 12 18:55:46.684000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:55:46.678693 systemd[1]: Closed systemd-udevd-control.socket.
Apr 12 18:55:46.681568 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Apr 12 18:55:46.688000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:55:46.681611 systemd[1]: Closed systemd-udevd-kernel.socket.
Apr 12 18:55:46.684215 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Apr 12 18:55:46.698000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:55:46.684284 systemd[1]: Stopped dracut-pre-udev.service.
Apr 12 18:55:46.687874 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Apr 12 18:55:46.687949 systemd[1]: Stopped dracut-cmdline.service.
Apr 12 18:55:46.692397 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 12 18:55:46.692473 systemd[1]: Stopped dracut-cmdline-ask.service.
Apr 12 18:55:46.707961 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
Apr 12 18:55:46.717925 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Apr 12 18:55:46.718024 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service.
Apr 12 18:55:46.722000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:55:46.723362 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Apr 12 18:55:46.726000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:55:46.730000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:55:46.723416 systemd[1]: Stopped kmod-static-nodes.service.
Apr 12 18:55:46.728557 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 12 18:55:46.739000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:55:46.739000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:55:46.728635 systemd[1]: Stopped systemd-vconsole-setup.service.
Apr 12 18:55:46.736836 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Apr 12 18:55:46.736984 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
Apr 12 18:55:46.741375 systemd[1]: Reached target initrd-switch-root.target.
Apr 12 18:55:46.755155 systemd[1]: Starting initrd-switch-root.service...
Apr 12 18:55:46.771680 systemd[1]: Switching root.
Apr 12 18:55:46.812936 systemd-journald[185]: Journal stopped
Apr 12 18:55:54.135502 systemd-journald[185]: Received SIGTERM from PID 1 (systemd).
Apr 12 18:55:54.135591 kernel: SELinux: Class mctp_socket not defined in policy.
Apr 12 18:55:54.135613 kernel: SELinux: Class anon_inode not defined in policy.
Apr 12 18:55:54.135632 kernel: SELinux: the above unknown classes and permissions will be allowed
Apr 12 18:55:54.135650 kernel: SELinux: policy capability network_peer_controls=1
Apr 12 18:55:54.135668 kernel: SELinux: policy capability open_perms=1
Apr 12 18:55:54.135686 kernel: SELinux: policy capability extended_socket_class=1
Apr 12 18:55:54.135708 kernel: SELinux: policy capability always_check_network=0
Apr 12 18:55:54.135731 kernel: SELinux: policy capability cgroup_seclabel=1
Apr 12 18:55:54.135748 kernel: SELinux: policy capability nnp_nosuid_transition=1
Apr 12 18:55:54.135766 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Apr 12 18:55:54.135783 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Apr 12 18:55:54.135855 systemd[1]: Successfully loaded SELinux policy in 132.356ms.
Apr 12 18:55:54.135894 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 13.027ms.
Apr 12 18:55:54.135945 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Apr 12 18:55:54.135967 systemd[1]: Detected virtualization amazon.
Apr 12 18:55:54.135990 systemd[1]: Detected architecture x86-64.
Apr 12 18:55:54.136008 systemd[1]: Detected first boot.
Apr 12 18:55:54.136027 systemd[1]: Initializing machine ID from VM UUID.
Apr 12 18:55:54.136074 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
Apr 12 18:55:54.136107 systemd[1]: Populated /etc with preset unit settings.
Apr 12 18:55:54.136128 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Apr 12 18:55:54.136161 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Apr 12 18:55:54.136185 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 12 18:55:54.136210 kernel: kauditd_printk_skb: 47 callbacks suppressed
Apr 12 18:55:54.136227 kernel: audit: type=1334 audit(1712948153.687:87): prog-id=12 op=LOAD
Apr 12 18:55:54.136246 kernel: audit: type=1334 audit(1712948153.687:88): prog-id=3 op=UNLOAD
Apr 12 18:55:54.136263 kernel: audit: type=1334 audit(1712948153.690:89): prog-id=13 op=LOAD
Apr 12 18:55:54.136281 kernel: audit: type=1334 audit(1712948153.692:90): prog-id=14 op=LOAD
Apr 12 18:55:54.136299 kernel: audit: type=1334 audit(1712948153.692:91): prog-id=4 op=UNLOAD
Apr 12 18:55:54.136317 kernel: audit: type=1334 audit(1712948153.692:92): prog-id=5 op=UNLOAD
Apr 12 18:55:54.136335 kernel: audit: type=1131 audit(1712948153.694:93): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:55:54.136361 systemd[1]: iscsiuio.service: Deactivated successfully.
Apr 12 18:55:54.136380 systemd[1]: Stopped iscsiuio.service.
Apr 12 18:55:54.136399 systemd[1]: iscsid.service: Deactivated successfully.
Apr 12 18:55:54.136418 systemd[1]: Stopped iscsid.service.
Apr 12 18:55:54.136514 kernel: audit: type=1334 audit(1712948153.711:94): prog-id=12 op=UNLOAD
Apr 12 18:55:54.136537 kernel: audit: type=1131 audit(1712948153.711:95): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:55:54.136556 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Apr 12 18:55:54.136575 kernel: audit: type=1131 audit(1712948153.721:96): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:55:54.136597 systemd[1]: Stopped initrd-switch-root.service.
Apr 12 18:55:54.136617 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Apr 12 18:55:54.136637 systemd[1]: Created slice system-addon\x2dconfig.slice.
Apr 12 18:55:54.136656 systemd[1]: Created slice system-addon\x2drun.slice.
Apr 12 18:55:54.136675 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice.
Apr 12 18:55:54.136693 systemd[1]: Created slice system-getty.slice.
Apr 12 18:55:54.136712 systemd[1]: Created slice system-modprobe.slice.
Apr 12 18:55:54.136740 systemd[1]: Created slice system-serial\x2dgetty.slice.
Apr 12 18:55:54.136762 systemd[1]: Created slice system-system\x2dcloudinit.slice.
Apr 12 18:55:54.136782 systemd[1]: Created slice system-systemd\x2dfsck.slice.
Apr 12 18:55:54.136802 systemd[1]: Created slice user.slice.
Apr 12 18:55:54.136833 systemd[1]: Started systemd-ask-password-console.path.
Apr 12 18:55:54.136856 systemd[1]: Started systemd-ask-password-wall.path.
Apr 12 18:55:54.136875 systemd[1]: Set up automount boot.automount.
Apr 12 18:55:54.136895 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
Apr 12 18:55:54.136914 systemd[1]: Stopped target initrd-switch-root.target.
Apr 12 18:55:54.136932 systemd[1]: Stopped target initrd-fs.target.
Apr 12 18:55:54.136951 systemd[1]: Stopped target initrd-root-fs.target.
Apr 12 18:55:54.136970 systemd[1]: Reached target integritysetup.target.
Apr 12 18:55:54.136988 systemd[1]: Reached target remote-cryptsetup.target.
Apr 12 18:55:54.137006 systemd[1]: Reached target remote-fs.target.
Apr 12 18:55:54.137107 systemd[1]: Reached target slices.target.
Apr 12 18:55:54.137184 systemd[1]: Reached target swap.target.
Apr 12 18:55:54.137210 systemd[1]: Reached target torcx.target.
Apr 12 18:55:54.137231 systemd[1]: Reached target veritysetup.target.
Apr 12 18:55:54.137250 systemd[1]: Listening on systemd-coredump.socket.
Apr 12 18:55:54.137270 systemd[1]: Listening on systemd-initctl.socket.
Apr 12 18:55:54.137289 systemd[1]: Listening on systemd-networkd.socket.
Apr 12 18:55:54.137308 systemd[1]: Listening on systemd-udevd-control.socket.
Apr 12 18:55:54.137327 systemd[1]: Listening on systemd-udevd-kernel.socket.
Apr 12 18:55:54.137346 systemd[1]: Listening on systemd-userdbd.socket.
Apr 12 18:55:54.137369 systemd[1]: Mounting dev-hugepages.mount...
Apr 12 18:55:54.137446 systemd[1]: Mounting dev-mqueue.mount...
Apr 12 18:55:54.137469 systemd[1]: Mounting media.mount...
Apr 12 18:55:54.137489 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 12 18:55:54.137510 systemd[1]: Mounting sys-kernel-debug.mount...
Apr 12 18:55:54.137530 systemd[1]: Mounting sys-kernel-tracing.mount...
Apr 12 18:55:54.137553 systemd[1]: Mounting tmp.mount...
Apr 12 18:55:54.137571 systemd[1]: Starting flatcar-tmpfiles.service...
Apr 12 18:55:54.137590 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Apr 12 18:55:54.137612 systemd[1]: Starting kmod-static-nodes.service...
Apr 12 18:55:54.137631 systemd[1]: Starting modprobe@configfs.service...
Apr 12 18:55:54.137649 systemd[1]: Starting modprobe@dm_mod.service...
Apr 12 18:55:54.137669 systemd[1]: Starting modprobe@drm.service...
Apr 12 18:55:54.137689 systemd[1]: Starting modprobe@efi_pstore.service...
Apr 12 18:55:54.137708 systemd[1]: Starting modprobe@fuse.service...
Apr 12 18:55:54.137726 systemd[1]: Starting modprobe@loop.service...
Apr 12 18:55:54.137746 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Apr 12 18:55:54.137765 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Apr 12 18:55:54.137787 systemd[1]: Stopped systemd-fsck-root.service.
Apr 12 18:55:54.137806 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Apr 12 18:55:54.141905 systemd[1]: Stopped systemd-fsck-usr.service.
Apr 12 18:55:54.142095 systemd[1]: Stopped systemd-journald.service.
Apr 12 18:55:54.142115 systemd[1]: Starting systemd-journald.service...
Apr 12 18:55:54.142143 systemd[1]: Starting systemd-modules-load.service...
Apr 12 18:55:54.142162 systemd[1]: Starting systemd-network-generator.service...
Apr 12 18:55:54.142179 systemd[1]: Starting systemd-remount-fs.service...
Apr 12 18:55:54.142195 systemd[1]: Starting systemd-udev-trigger.service...
Apr 12 18:55:54.142217 kernel: loop: module loaded
Apr 12 18:55:54.142237 systemd[1]: verity-setup.service: Deactivated successfully.
Apr 12 18:55:54.142323 systemd[1]: Stopped verity-setup.service.
Apr 12 18:55:54.142349 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 12 18:55:54.142375 systemd[1]: Mounted dev-hugepages.mount.
Apr 12 18:55:54.142399 systemd[1]: Mounted dev-mqueue.mount.
Apr 12 18:55:54.142422 systemd[1]: Mounted media.mount.
Apr 12 18:55:54.142439 systemd[1]: Mounted sys-kernel-debug.mount.
Apr 12 18:55:54.142458 kernel: fuse: init (API version 7.34)
Apr 12 18:55:54.142483 systemd[1]: Mounted sys-kernel-tracing.mount.
Apr 12 18:55:54.142502 systemd[1]: Mounted tmp.mount.
Apr 12 18:55:54.142518 systemd[1]: Finished kmod-static-nodes.service.
Apr 12 18:55:54.142536 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Apr 12 18:55:54.142553 systemd[1]: Finished modprobe@configfs.service.
Apr 12 18:55:54.142572 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 12 18:55:54.142595 systemd[1]: Finished modprobe@dm_mod.service.
Apr 12 18:55:54.142617 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 12 18:55:54.142635 systemd[1]: Finished modprobe@drm.service.
Apr 12 18:55:54.142653 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 12 18:55:54.142673 systemd[1]: Finished modprobe@efi_pstore.service.
Apr 12 18:55:54.142691 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Apr 12 18:55:54.142714 systemd[1]: Finished modprobe@fuse.service.
Apr 12 18:55:54.142735 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 12 18:55:54.142755 systemd[1]: Finished modprobe@loop.service.
Apr 12 18:55:54.142776 systemd[1]: Finished systemd-modules-load.service.
Apr 12 18:55:54.142796 systemd[1]: Finished systemd-network-generator.service.
Apr 12 18:55:54.143894 systemd[1]: Finished systemd-remount-fs.service.
Apr 12 18:55:54.143927 systemd[1]: Reached target network-pre.target.
Apr 12 18:55:54.143956 systemd-journald[1407]: Journal started
Apr 12 18:55:54.144029 systemd-journald[1407]: Runtime Journal (/run/log/journal/ec2a9b693a20ef6ecdec85f44857ad42) is 4.8M, max 38.7M, 33.9M free.
Apr 12 18:55:47.719000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1
Apr 12 18:55:47.860000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Apr 12 18:55:47.860000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Apr 12 18:55:47.860000 audit: BPF prog-id=10 op=LOAD
Apr 12 18:55:47.860000 audit: BPF prog-id=10 op=UNLOAD
Apr 12 18:55:47.860000 audit: BPF prog-id=11 op=LOAD
Apr 12 18:55:47.860000 audit: BPF prog-id=11 op=UNLOAD
Apr 12 18:55:48.167000 audit[1332]: AVC avc: denied { associate } for pid=1332 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023"
Apr 12 18:55:48.167000 audit[1332]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c00014d8b2 a1=c0000cede0 a2=c0000d70c0 a3=32 items=0 ppid=1315 pid=1332 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Apr 12 18:55:48.167000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Apr 12 18:55:48.170000 audit[1332]: AVC avc: denied { associate } for pid=1332 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1
Apr 12 18:55:48.170000 audit[1332]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c00014d989 a2=1ed a3=0 items=2 ppid=1315 pid=1332 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Apr 12 18:55:48.170000 audit: CWD cwd="/"
Apr 12 18:55:48.170000 audit: PATH item=0 name=(null) inode=2 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 18:55:48.170000 audit: PATH item=1 name=(null) inode=3 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Apr 12 18:55:48.170000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Apr 12 18:55:53.687000 audit: BPF prog-id=12 op=LOAD
Apr 12 18:55:53.687000 audit: BPF prog-id=3 op=UNLOAD
Apr 12 18:55:53.690000 audit: BPF prog-id=13 op=LOAD
Apr 12 18:55:53.692000 audit: BPF prog-id=14 op=LOAD
Apr 12 18:55:53.692000 audit: BPF prog-id=4 op=UNLOAD
Apr 12 18:55:53.692000 audit: BPF prog-id=5 op=UNLOAD
Apr 12 18:55:53.694000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:55:53.711000 audit: BPF prog-id=12 op=UNLOAD
Apr 12 18:55:53.711000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:55:53.721000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:55:53.732000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:55:53.732000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:55:53.961000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:55:53.971000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:55:53.974000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:55:53.974000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:55:53.976000 audit: BPF prog-id=15 op=LOAD
Apr 12 18:55:53.976000 audit: BPF prog-id=16 op=LOAD
Apr 12 18:55:53.976000 audit: BPF prog-id=17 op=LOAD
Apr 12 18:55:53.976000 audit: BPF prog-id=13 op=UNLOAD
Apr 12 18:55:53.976000 audit: BPF prog-id=14 op=UNLOAD
Apr 12 18:55:54.034000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:55:54.077000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:55:54.083000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:55:54.083000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:55:54.093000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:55:54.093000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:55:54.098000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:55:54.098000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:55:54.107000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:55:54.107000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:55:54.113000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:55:54.113000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:55:54.120000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:55:54.120000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:55:54.125000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:55:54.131000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Apr 12 18:55:54.131000 audit[1407]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=4 a1=7fff9990db90 a2=4000 a3=7fff9990dc2c items=0 ppid=1 pid=1407 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Apr 12 18:55:54.131000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
Apr 12 18:55:54.131000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:55:54.136000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Apr 12 18:55:53.685983 systemd[1]: Queued start job for default target multi-user.target.
Apr 12 18:55:54.156136 systemd[1]: Mounting sys-fs-fuse-connections.mount...
Apr 12 18:55:54.156175 systemd[1]: Mounting sys-kernel-config.mount...
Apr 12 18:55:48.150657 /usr/lib/systemd/system-generators/torcx-generator[1332]: time="2024-04-12T18:55:48Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.3 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.3 /var/lib/torcx/store]"
Apr 12 18:55:53.696144 systemd[1]: systemd-journald.service: Deactivated successfully.
Apr 12 18:55:48.151365 /usr/lib/systemd/system-generators/torcx-generator[1332]: time="2024-04-12T18:55:48Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Apr 12 18:55:48.151411 /usr/lib/systemd/system-generators/torcx-generator[1332]: time="2024-04-12T18:55:48Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Apr 12 18:55:48.151459 /usr/lib/systemd/system-generators/torcx-generator[1332]: time="2024-04-12T18:55:48Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12"
Apr 12 18:55:48.151474 /usr/lib/systemd/system-generators/torcx-generator[1332]: time="2024-04-12T18:55:48Z" level=debug msg="skipped missing lower profile" missing profile=oem
Apr 12 18:55:48.151519 /usr/lib/systemd/system-generators/torcx-generator[1332]: time="2024-04-12T18:55:48Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory"
Apr 12 18:55:48.151539 /usr/lib/systemd/system-generators/torcx-generator[1332]: time="2024-04-12T18:55:48Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)=
Apr 12 18:55:48.151798 /usr/lib/systemd/system-generators/torcx-generator[1332]: time="2024-04-12T18:55:48Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack
Apr 12 18:55:48.151880 /usr/lib/systemd/system-generators/torcx-generator[1332]: time="2024-04-12T18:55:48Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Apr 12 18:55:48.151900 /usr/lib/systemd/system-generators/torcx-generator[1332]: time="2024-04-12T18:55:48Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Apr 12 18:55:48.156977 /usr/lib/systemd/system-generators/torcx-generator[1332]: time="2024-04-12T18:55:48Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10
Apr 12 18:55:48.157041 /usr/lib/systemd/system-generators/torcx-generator[1332]: time="2024-04-12T18:55:48Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl
Apr 12 18:55:48.157072 /usr/lib/systemd/system-generators/torcx-generator[1332]: time="2024-04-12T18:55:48Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.3: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.3
Apr 12 18:55:48.157096 /usr/lib/systemd/system-generators/torcx-generator[1332]: time="2024-04-12T18:55:48Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store
Apr 12 18:55:48.157126 /usr/lib/systemd/system-generators/torcx-generator[1332]: time="2024-04-12T18:55:48Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.3: no such file or directory" path=/var/lib/torcx/store/3510.3.3
Apr 12 18:55:48.157149 /usr/lib/systemd/system-generators/torcx-generator[1332]: time="2024-04-12T18:55:48Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store
Apr 12 18:55:52.829478 /usr/lib/systemd/system-generators/torcx-generator[1332]: time="2024-04-12T18:55:52Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Apr 12 18:55:52.829886 /usr/lib/systemd/system-generators/torcx-generator[1332]: time="2024-04-12T18:55:52Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Apr 12 18:55:52.830327 /usr/lib/systemd/system-generators/torcx-generator[1332]: time="2024-04-12T18:55:52Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Apr 12 18:55:52.830570 /usr/lib/systemd/system-generators/torcx-generator[1332]: time="2024-04-12T18:55:52Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Apr 12 18:55:52.830702 /usr/lib/systemd/system-generators/torcx-generator[1332]: time="2024-04-12T18:55:52Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile=
Apr 12 18:55:52.831032 /usr/lib/systemd/system-generators/torcx-generator[1332]: time="2024-04-12T18:55:52Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx
Apr 12 18:55:54.166041 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Apr 12 18:55:54.166124 systemd[1]: Starting systemd-hwdb-update.service...
Apr 12 18:55:54.172843 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 12 18:55:54.180843 systemd[1]: Starting systemd-random-seed.service...
Apr 12 18:55:54.180935 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Apr 12 18:55:54.195891 systemd[1]: Starting systemd-sysctl.service...
Apr 12 18:55:54.201613 systemd[1]: Started systemd-journald.service.
Apr 12 18:55:54.200000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:55:54.203668 systemd[1]: Mounted sys-fs-fuse-connections.mount. Apr 12 18:55:54.205286 systemd[1]: Mounted sys-kernel-config.mount. Apr 12 18:55:54.208900 systemd[1]: Starting systemd-journal-flush.service... Apr 12 18:55:54.211503 systemd[1]: Finished systemd-random-seed.service. Apr 12 18:55:54.211000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:55:54.213623 systemd[1]: Reached target first-boot-complete.target. Apr 12 18:55:54.269308 systemd[1]: Finished flatcar-tmpfiles.service. Apr 12 18:55:54.274000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:55:54.285959 systemd[1]: Starting systemd-sysusers.service... Apr 12 18:55:54.292847 systemd[1]: Finished systemd-sysctl.service. Apr 12 18:55:54.296000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:55:54.306023 systemd-journald[1407]: Time spent on flushing to /var/log/journal/ec2a9b693a20ef6ecdec85f44857ad42 is 48.121ms for 1237 entries. Apr 12 18:55:54.306023 systemd-journald[1407]: System Journal (/var/log/journal/ec2a9b693a20ef6ecdec85f44857ad42) is 8.0M, max 195.6M, 187.6M free. Apr 12 18:55:54.379860 systemd-journald[1407]: Received client request to flush runtime journal. 
Apr 12 18:55:54.324000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:55:54.324593 systemd[1]: Finished systemd-udev-trigger.service. Apr 12 18:55:54.327741 systemd[1]: Starting systemd-udev-settle.service... Apr 12 18:55:54.381518 udevadm[1446]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Apr 12 18:55:54.381000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:55:54.381535 systemd[1]: Finished systemd-journal-flush.service. Apr 12 18:55:54.514000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:55:54.515104 systemd[1]: Finished systemd-sysusers.service. Apr 12 18:55:54.518702 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Apr 12 18:55:54.681000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:55:54.681518 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Apr 12 18:55:55.350063 systemd[1]: Finished systemd-hwdb-update.service. Apr 12 18:55:55.349000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 12 18:55:55.351000 audit: BPF prog-id=18 op=LOAD Apr 12 18:55:55.351000 audit: BPF prog-id=19 op=LOAD Apr 12 18:55:55.351000 audit: BPF prog-id=7 op=UNLOAD Apr 12 18:55:55.351000 audit: BPF prog-id=8 op=UNLOAD Apr 12 18:55:55.354573 systemd[1]: Starting systemd-udevd.service... Apr 12 18:55:55.386250 systemd-udevd[1450]: Using default interface naming scheme 'v252'. Apr 12 18:55:55.460809 systemd[1]: Started systemd-udevd.service. Apr 12 18:55:55.460000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:55:55.463000 audit: BPF prog-id=20 op=LOAD Apr 12 18:55:55.466458 systemd[1]: Starting systemd-networkd.service... Apr 12 18:55:55.503000 audit: BPF prog-id=21 op=LOAD Apr 12 18:55:55.503000 audit: BPF prog-id=22 op=LOAD Apr 12 18:55:55.503000 audit: BPF prog-id=23 op=LOAD Apr 12 18:55:55.506433 systemd[1]: Starting systemd-userdbd.service... Apr 12 18:55:55.583164 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped. Apr 12 18:55:55.595520 (udev-worker)[1456]: Network interface NamePolicy= disabled on kernel command line. Apr 12 18:55:55.600780 systemd[1]: Started systemd-userdbd.service. Apr 12 18:55:55.600000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 12 18:55:55.673839 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Apr 12 18:55:55.684060 kernel: ACPI: button: Power Button [PWRF] Apr 12 18:55:55.684165 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input3 Apr 12 18:55:55.689126 kernel: ACPI: button: Sleep Button [SLPF] Apr 12 18:55:55.725444 systemd-networkd[1459]: lo: Link UP Apr 12 18:55:55.725457 systemd-networkd[1459]: lo: Gained carrier Apr 12 18:55:55.726111 systemd-networkd[1459]: Enumeration completed Apr 12 18:55:55.725000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:55:55.726231 systemd[1]: Started systemd-networkd.service. Apr 12 18:55:55.729103 systemd[1]: Starting systemd-networkd-wait-online.service... Apr 12 18:55:55.731307 systemd-networkd[1459]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Apr 12 18:55:55.735834 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Apr 12 18:55:55.736297 systemd-networkd[1459]: eth0: Link UP Apr 12 18:55:55.736616 systemd-networkd[1459]: eth0: Gained carrier Apr 12 18:55:55.746989 systemd-networkd[1459]: eth0: DHCPv4 address 172.31.18.181/20, gateway 172.31.16.1 acquired from 172.31.16.1 Apr 12 18:55:55.740000 audit[1463]: AVC avc: denied { confidentiality } for pid=1463 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Apr 12 18:55:55.740000 audit[1463]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=560abffc53e0 a1=32194 a2=7f342c1f9bc5 a3=5 items=108 ppid=1450 pid=1463 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 12 18:55:55.740000 audit: CWD cwd="/" Apr 12 18:55:55.740000 audit: PATH item=0 name=(null) inode=1042 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:55:55.740000 audit: PATH item=1 name=(null) inode=15367 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:55:55.740000 audit: PATH item=2 name=(null) inode=15367 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:55:55.740000 audit: PATH item=3 name=(null) inode=15368 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:55:55.740000 audit: PATH item=4 name=(null) inode=15367 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:55:55.740000 audit: PATH item=5 name=(null) inode=15369 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:55:55.740000 audit: PATH item=6 name=(null) inode=15367 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:55:55.740000 audit: PATH item=7 name=(null) inode=15370 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:55:55.740000 audit: PATH item=8 name=(null) inode=15370 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:55:55.740000 audit: PATH item=9 name=(null) inode=15371 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:55:55.740000 audit: PATH item=10 name=(null) inode=15370 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:55:55.740000 audit: PATH item=11 name=(null) inode=15372 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:55:55.740000 audit: PATH item=12 name=(null) inode=15370 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:55:55.740000 audit: PATH item=13 name=(null) inode=15373 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 
cap_fver=0 cap_frootid=0 Apr 12 18:55:55.740000 audit: PATH item=14 name=(null) inode=15370 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:55:55.740000 audit: PATH item=15 name=(null) inode=15374 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:55:55.740000 audit: PATH item=16 name=(null) inode=15370 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:55:55.740000 audit: PATH item=17 name=(null) inode=15375 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:55:55.740000 audit: PATH item=18 name=(null) inode=15367 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:55:55.740000 audit: PATH item=19 name=(null) inode=15376 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:55:55.740000 audit: PATH item=20 name=(null) inode=15376 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:55:55.740000 audit: PATH item=21 name=(null) inode=15377 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:55:55.740000 audit: PATH item=22 name=(null) inode=15376 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 
18:55:55.740000 audit: PATH item=23 name=(null) inode=15378 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:55:55.740000 audit: PATH item=24 name=(null) inode=15376 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:55:55.740000 audit: PATH item=25 name=(null) inode=15379 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:55:55.740000 audit: PATH item=26 name=(null) inode=15376 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:55:55.740000 audit: PATH item=27 name=(null) inode=15380 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:55:55.740000 audit: PATH item=28 name=(null) inode=15376 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:55:55.740000 audit: PATH item=29 name=(null) inode=15381 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:55:55.740000 audit: PATH item=30 name=(null) inode=15367 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:55:55.740000 audit: PATH item=31 name=(null) inode=15382 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:55:55.740000 audit: PATH item=32 
name=(null) inode=15382 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:55:55.740000 audit: PATH item=33 name=(null) inode=15383 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:55:55.740000 audit: PATH item=34 name=(null) inode=15382 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:55:55.740000 audit: PATH item=35 name=(null) inode=15384 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:55:55.740000 audit: PATH item=36 name=(null) inode=15382 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:55:55.740000 audit: PATH item=37 name=(null) inode=15385 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:55:55.740000 audit: PATH item=38 name=(null) inode=15382 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:55:55.740000 audit: PATH item=39 name=(null) inode=15386 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:55:55.740000 audit: PATH item=40 name=(null) inode=15382 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:55:55.740000 audit: PATH item=41 name=(null) inode=15387 dev=00:0b 
mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:55:55.740000 audit: PATH item=42 name=(null) inode=15367 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:55:55.740000 audit: PATH item=43 name=(null) inode=15388 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:55:55.740000 audit: PATH item=44 name=(null) inode=15388 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:55:55.740000 audit: PATH item=45 name=(null) inode=15389 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:55:55.740000 audit: PATH item=46 name=(null) inode=15388 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:55:55.740000 audit: PATH item=47 name=(null) inode=15390 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:55:55.740000 audit: PATH item=48 name=(null) inode=15388 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:55:55.740000 audit: PATH item=49 name=(null) inode=15391 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:55:55.740000 audit: PATH item=50 name=(null) inode=15388 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:55:55.740000 audit: PATH item=51 name=(null) inode=15392 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:55:55.740000 audit: PATH item=52 name=(null) inode=15388 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:55:55.740000 audit: PATH item=53 name=(null) inode=15393 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:55:55.740000 audit: PATH item=54 name=(null) inode=1042 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:55:55.740000 audit: PATH item=55 name=(null) inode=15394 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:55:55.740000 audit: PATH item=56 name=(null) inode=15394 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:55:55.740000 audit: PATH item=57 name=(null) inode=15395 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:55:55.740000 audit: PATH item=58 name=(null) inode=15394 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:55:55.740000 audit: PATH item=59 name=(null) inode=15396 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:55:55.740000 audit: PATH item=60 name=(null) inode=15394 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:55:55.740000 audit: PATH item=61 name=(null) inode=15397 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:55:55.740000 audit: PATH item=62 name=(null) inode=15397 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:55:55.740000 audit: PATH item=63 name=(null) inode=15398 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:55:55.740000 audit: PATH item=64 name=(null) inode=15397 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:55:55.740000 audit: PATH item=65 name=(null) inode=15399 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:55:55.740000 audit: PATH item=66 name=(null) inode=15397 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:55:55.740000 audit: PATH item=67 name=(null) inode=15400 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:55:55.740000 audit: PATH item=68 name=(null) inode=15397 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 
cap_fver=0 cap_frootid=0 Apr 12 18:55:55.740000 audit: PATH item=69 name=(null) inode=15401 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:55:55.740000 audit: PATH item=70 name=(null) inode=15397 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:55:55.740000 audit: PATH item=71 name=(null) inode=15402 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:55:55.740000 audit: PATH item=72 name=(null) inode=15394 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:55:55.740000 audit: PATH item=73 name=(null) inode=15403 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:55:55.740000 audit: PATH item=74 name=(null) inode=15403 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:55:55.740000 audit: PATH item=75 name=(null) inode=15404 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:55:55.740000 audit: PATH item=76 name=(null) inode=15403 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:55:55.740000 audit: PATH item=77 name=(null) inode=15405 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 
18:55:55.740000 audit: PATH item=78 name=(null) inode=15403 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:55:55.740000 audit: PATH item=79 name=(null) inode=15406 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:55:55.740000 audit: PATH item=80 name=(null) inode=15403 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:55:55.740000 audit: PATH item=81 name=(null) inode=15407 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:55:55.740000 audit: PATH item=82 name=(null) inode=15403 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:55:55.740000 audit: PATH item=83 name=(null) inode=15408 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:55:55.740000 audit: PATH item=84 name=(null) inode=15394 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:55:55.740000 audit: PATH item=85 name=(null) inode=15409 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:55:55.740000 audit: PATH item=86 name=(null) inode=15409 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:55:55.740000 audit: PATH item=87 
name=(null) inode=15410 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:55:55.740000 audit: PATH item=88 name=(null) inode=15409 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:55:55.740000 audit: PATH item=89 name=(null) inode=15411 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:55:55.740000 audit: PATH item=90 name=(null) inode=15409 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:55:55.740000 audit: PATH item=91 name=(null) inode=15412 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:55:55.740000 audit: PATH item=92 name=(null) inode=15409 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:55:55.740000 audit: PATH item=93 name=(null) inode=15413 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:55:55.740000 audit: PATH item=94 name=(null) inode=15409 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:55:55.740000 audit: PATH item=95 name=(null) inode=15414 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:55:55.740000 audit: PATH item=96 name=(null) inode=15394 dev=00:0b 
mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:55:55.740000 audit: PATH item=97 name=(null) inode=15415 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:55:55.740000 audit: PATH item=98 name=(null) inode=15415 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:55:55.740000 audit: PATH item=99 name=(null) inode=15416 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:55:55.740000 audit: PATH item=100 name=(null) inode=15415 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:55:55.740000 audit: PATH item=101 name=(null) inode=15417 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:55:55.740000 audit: PATH item=102 name=(null) inode=15415 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:55:55.740000 audit: PATH item=103 name=(null) inode=15418 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:55:55.740000 audit: PATH item=104 name=(null) inode=15415 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:55:55.740000 audit: PATH item=105 name=(null) inode=15419 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:55:55.740000 audit: PATH item=106 name=(null) inode=15415 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:55:55.740000 audit: PATH item=107 name=(null) inode=15420 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Apr 12 18:55:55.740000 audit: PROCTITLE proctitle="(udev-worker)" Apr 12 18:55:55.792883 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input4 Apr 12 18:55:55.798861 kernel: BTRFS info: devid 1 device path /dev/disk/by-label/OEM changed to /dev/nvme0n1p6 scanned by (udev-worker) (1460) Apr 12 18:55:55.804837 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0xb100, revision 255 Apr 12 18:55:55.809836 kernel: mousedev: PS/2 mouse device common for all mice Apr 12 18:55:55.909883 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Apr 12 18:55:56.026291 systemd[1]: Finished systemd-udev-settle.service. Apr 12 18:55:56.025000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:55:56.028736 systemd[1]: Starting lvm2-activation-early.service... Apr 12 18:55:56.092880 lvm[1564]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Apr 12 18:55:56.122063 systemd[1]: Finished lvm2-activation-early.service. Apr 12 18:55:56.121000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 12 18:55:56.123519 systemd[1]: Reached target cryptsetup.target. Apr 12 18:55:56.125941 systemd[1]: Starting lvm2-activation.service... Apr 12 18:55:56.129921 lvm[1565]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Apr 12 18:55:56.159552 systemd[1]: Finished lvm2-activation.service. Apr 12 18:55:56.161247 systemd[1]: Reached target local-fs-pre.target. Apr 12 18:55:56.160000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:55:56.163049 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Apr 12 18:55:56.163072 systemd[1]: Reached target local-fs.target. Apr 12 18:55:56.164464 systemd[1]: Reached target machines.target. Apr 12 18:55:56.166876 systemd[1]: Starting ldconfig.service... Apr 12 18:55:56.168380 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Apr 12 18:55:56.168444 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Apr 12 18:55:56.169783 systemd[1]: Starting systemd-boot-update.service... Apr 12 18:55:56.172418 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Apr 12 18:55:56.175142 systemd[1]: Starting systemd-machine-id-commit.service... Apr 12 18:55:56.176477 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met. Apr 12 18:55:56.176635 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met. Apr 12 18:55:56.177825 systemd[1]: Starting systemd-tmpfiles-setup.service... 
Apr 12 18:55:56.187010 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1567 (bootctl) Apr 12 18:55:56.188640 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Apr 12 18:55:56.198512 systemd-tmpfiles[1570]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Apr 12 18:55:56.204000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:55:56.202881 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Apr 12 18:55:56.204029 systemd-tmpfiles[1570]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Apr 12 18:55:56.205926 systemd-tmpfiles[1570]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Apr 12 18:55:56.438463 systemd-fsck[1575]: fsck.fat 4.2 (2021-01-31) Apr 12 18:55:56.438463 systemd-fsck[1575]: /dev/nvme0n1p1: 789 files, 119240/258078 clusters Apr 12 18:55:56.447905 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Apr 12 18:55:56.447000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:55:56.456636 systemd[1]: Mounting boot.mount... Apr 12 18:55:56.481897 systemd[1]: Mounted boot.mount. Apr 12 18:55:56.505183 systemd[1]: Finished systemd-boot-update.service. Apr 12 18:55:56.504000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Apr 12 18:55:56.602830 systemd[1]: Finished systemd-tmpfiles-setup.service. Apr 12 18:55:56.602000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:55:56.605553 systemd[1]: Starting audit-rules.service... Apr 12 18:55:56.612000 audit: BPF prog-id=24 op=LOAD Apr 12 18:55:56.617000 audit: BPF prog-id=25 op=LOAD Apr 12 18:55:56.609154 systemd[1]: Starting clean-ca-certificates.service... Apr 12 18:55:56.612195 systemd[1]: Starting systemd-journal-catalog-update.service... Apr 12 18:55:56.615511 systemd[1]: Starting systemd-resolved.service... Apr 12 18:55:56.621028 systemd[1]: Starting systemd-timesyncd.service... Apr 12 18:55:56.625130 systemd[1]: Starting systemd-update-utmp.service... Apr 12 18:55:56.632000 audit[1594]: SYSTEM_BOOT pid=1594 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Apr 12 18:55:56.671000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:55:56.665839 systemd[1]: Finished clean-ca-certificates.service. Apr 12 18:55:56.672918 systemd[1]: Finished systemd-update-utmp.service. Apr 12 18:55:56.672000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:55:56.674787 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). 
Apr 12 18:55:56.719526 systemd[1]: Finished systemd-journal-catalog-update.service. Apr 12 18:55:56.719000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Apr 12 18:55:56.771196 systemd[1]: Started systemd-timesyncd.service. Apr 12 18:55:56.770000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Apr 12 18:55:56.770000 audit[1610]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffcedd47860 a2=420 a3=0 items=0 ppid=1589 pid=1610 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Apr 12 18:55:56.770000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Apr 12 18:55:56.773261 systemd[1]: Finished audit-rules.service. Apr 12 18:55:56.774733 augenrules[1610]: No rules Apr 12 18:55:56.776187 systemd[1]: Reached target time-set.target. Apr 12 18:55:56.798654 systemd-resolved[1592]: Positive Trust Anchors: Apr 12 18:55:56.799112 systemd-resolved[1592]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 12 18:55:56.799280 systemd-resolved[1592]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Apr 12 18:55:56.838206 systemd-resolved[1592]: Defaulting to hostname 'linux'. 
Apr 12 18:55:56.846315 systemd[1]: Started systemd-resolved.service. Apr 12 18:55:56.851524 systemd[1]: Reached target network.target. Apr 12 18:55:56.852883 systemd[1]: Reached target nss-lookup.target. Apr 12 18:55:57.529198 systemd-resolved[1592]: Clock change detected. Flushing caches. Apr 12 18:55:57.529379 systemd-timesyncd[1593]: Contacted time server 5.78.89.3:123 (0.flatcar.pool.ntp.org). Apr 12 18:55:57.529537 systemd-timesyncd[1593]: Initial clock synchronization to Fri 2024-04-12 18:55:57.529128 UTC. Apr 12 18:55:58.193754 systemd-networkd[1459]: eth0: Gained IPv6LL Apr 12 18:55:58.195709 systemd[1]: Finished systemd-networkd-wait-online.service. Apr 12 18:55:58.202385 systemd[1]: Reached target network-online.target. Apr 12 18:55:58.263403 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Apr 12 18:55:58.264047 systemd[1]: Finished systemd-machine-id-commit.service. Apr 12 18:55:58.612035 ldconfig[1566]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Apr 12 18:55:58.621509 systemd[1]: Finished ldconfig.service. Apr 12 18:55:58.627636 systemd[1]: Starting systemd-update-done.service... Apr 12 18:55:58.640395 systemd[1]: Finished systemd-update-done.service. Apr 12 18:55:58.642305 systemd[1]: Reached target sysinit.target. Apr 12 18:55:58.643509 systemd[1]: Started motdgen.path. Apr 12 18:55:58.644465 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Apr 12 18:55:58.646189 systemd[1]: Started logrotate.timer. Apr 12 18:55:58.647280 systemd[1]: Started mdadm.timer. Apr 12 18:55:58.648219 systemd[1]: Started systemd-tmpfiles-clean.timer. Apr 12 18:55:58.649683 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Apr 12 18:55:58.649731 systemd[1]: Reached target paths.target. Apr 12 18:55:58.651274 systemd[1]: Reached target timers.target. Apr 12 18:55:58.652903 systemd[1]: Listening on dbus.socket. 
Apr 12 18:55:58.655045 systemd[1]: Starting docker.socket... Apr 12 18:55:58.660717 systemd[1]: Listening on sshd.socket. Apr 12 18:55:58.661907 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Apr 12 18:55:58.662450 systemd[1]: Listening on docker.socket. Apr 12 18:55:58.663464 systemd[1]: Reached target sockets.target. Apr 12 18:55:58.664615 systemd[1]: Reached target basic.target. Apr 12 18:55:58.665916 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Apr 12 18:55:58.665953 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Apr 12 18:55:58.668355 systemd[1]: Started amazon-ssm-agent.service. Apr 12 18:55:58.671924 systemd[1]: Starting containerd.service... Apr 12 18:55:58.678972 systemd[1]: Starting coreos-metadata-sshkeys@core.service... Apr 12 18:55:58.686778 systemd[1]: Starting dbus.service... Apr 12 18:55:58.691200 systemd[1]: Starting enable-oem-cloudinit.service... Apr 12 18:55:58.694956 systemd[1]: Starting extend-filesystems.service... Apr 12 18:55:58.696662 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Apr 12 18:55:58.698647 systemd[1]: Starting motdgen.service... Apr 12 18:55:58.702878 systemd[1]: Started nvidia.service. Apr 12 18:55:58.706421 systemd[1]: Starting prepare-cni-plugins.service... Apr 12 18:55:58.709152 systemd[1]: Starting prepare-critools.service... Apr 12 18:55:58.713180 systemd[1]: Starting prepare-helm.service... Apr 12 18:55:58.716984 systemd[1]: Starting ssh-key-proc-cmdline.service... Apr 12 18:55:58.721071 systemd[1]: Starting sshd-keygen.service... Apr 12 18:55:58.731395 systemd[1]: Starting systemd-logind.service... 
Apr 12 18:55:58.734743 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Apr 12 18:55:58.934335 jq[1627]: false Apr 12 18:55:58.734885 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Apr 12 18:55:58.736785 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Apr 12 18:55:58.739285 systemd[1]: Starting update-engine.service... Apr 12 18:55:58.744773 systemd[1]: Starting update-ssh-keys-after-ignition.service... Apr 12 18:55:58.851360 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Apr 12 18:55:58.953059 jq[1638]: true Apr 12 18:55:58.852147 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Apr 12 18:55:58.858257 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Apr 12 18:55:58.962437 tar[1642]: linux-amd64/helm Apr 12 18:55:58.858547 systemd[1]: Finished ssh-key-proc-cmdline.service. 
Apr 12 18:55:58.964399 tar[1641]: ./ Apr 12 18:55:58.964399 tar[1641]: ./loopback Apr 12 18:55:58.966987 tar[1643]: crictl Apr 12 18:55:59.023359 jq[1657]: true Apr 12 18:55:59.083959 extend-filesystems[1628]: Found nvme0n1 Apr 12 18:55:59.087232 extend-filesystems[1628]: Found nvme0n1p1 Apr 12 18:55:59.087232 extend-filesystems[1628]: Found nvme0n1p2 Apr 12 18:55:59.087232 extend-filesystems[1628]: Found nvme0n1p3 Apr 12 18:55:59.087232 extend-filesystems[1628]: Found usr Apr 12 18:55:59.087232 extend-filesystems[1628]: Found nvme0n1p4 Apr 12 18:55:59.087232 extend-filesystems[1628]: Found nvme0n1p6 Apr 12 18:55:59.087232 extend-filesystems[1628]: Found nvme0n1p7 Apr 12 18:55:59.087232 extend-filesystems[1628]: Found nvme0n1p9 Apr 12 18:55:59.087232 extend-filesystems[1628]: Checking size of /dev/nvme0n1p9 Apr 12 18:55:59.115217 dbus-daemon[1626]: [system] SELinux support is enabled Apr 12 18:55:59.115487 systemd[1]: Started dbus.service. Apr 12 18:55:59.122562 systemd[1]: motdgen.service: Deactivated successfully. Apr 12 18:55:59.123424 systemd[1]: Finished motdgen.service. Apr 12 18:55:59.125106 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Apr 12 18:55:59.125140 systemd[1]: Reached target system-config.target. Apr 12 18:55:59.126547 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Apr 12 18:55:59.126592 systemd[1]: Reached target user-config.target. Apr 12 18:55:59.167309 dbus-daemon[1626]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.2' (uid=244 pid=1459 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Apr 12 18:55:59.175906 systemd[1]: Starting systemd-hostnamed.service... 
Apr 12 18:55:59.235277 amazon-ssm-agent[1623]: 2024/04/12 18:55:59 Failed to load instance info from vault. RegistrationKey does not exist. Apr 12 18:55:59.241018 extend-filesystems[1628]: Resized partition /dev/nvme0n1p9 Apr 12 18:55:59.253061 extend-filesystems[1691]: resize2fs 1.46.5 (30-Dec-2021) Apr 12 18:55:59.259631 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Apr 12 18:55:59.291654 update_engine[1637]: I0412 18:55:59.290964 1637 main.cc:92] Flatcar Update Engine starting Apr 12 18:55:59.362955 update_engine[1637]: I0412 18:55:59.324641 1637 update_check_scheduler.cc:74] Next update check in 6m45s Apr 12 18:55:59.316017 systemd[1]: Started update-engine.service. Apr 12 18:55:59.322535 systemd[1]: Started locksmithd.service. Apr 12 18:55:59.366304 amazon-ssm-agent[1623]: Initializing new seelog logger Apr 12 18:55:59.366304 amazon-ssm-agent[1623]: New Seelog Logger Creation Complete Apr 12 18:55:59.366304 amazon-ssm-agent[1623]: 2024/04/12 18:55:59 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Apr 12 18:55:59.366304 amazon-ssm-agent[1623]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Apr 12 18:55:59.366304 amazon-ssm-agent[1623]: 2024/04/12 18:55:59 processing appconfig overrides Apr 12 18:55:59.390665 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Apr 12 18:55:59.448642 extend-filesystems[1691]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Apr 12 18:55:59.448642 extend-filesystems[1691]: old_desc_blocks = 1, new_desc_blocks = 1 Apr 12 18:55:59.448642 extend-filesystems[1691]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Apr 12 18:55:59.446459 systemd[1]: extend-filesystems.service: Deactivated successfully. 
Apr 12 18:55:59.461558 bash[1695]: Updated "/home/core/.ssh/authorized_keys" Apr 12 18:55:59.461693 extend-filesystems[1628]: Resized filesystem in /dev/nvme0n1p9 Apr 12 18:55:59.469425 env[1645]: time="2024-04-12T18:55:59.457343724Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Apr 12 18:55:59.446702 systemd[1]: Finished extend-filesystems.service. Apr 12 18:55:59.460957 systemd[1]: Finished update-ssh-keys-after-ignition.service. Apr 12 18:55:59.507430 systemd-logind[1636]: Watching system buttons on /dev/input/event1 (Power Button) Apr 12 18:55:59.512678 systemd-logind[1636]: Watching system buttons on /dev/input/event2 (Sleep Button) Apr 12 18:55:59.513652 systemd-logind[1636]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Apr 12 18:55:59.518752 systemd-logind[1636]: New seat seat0. Apr 12 18:55:59.538066 systemd[1]: Started systemd-logind.service. Apr 12 18:55:59.546237 tar[1641]: ./bandwidth Apr 12 18:55:59.685157 systemd[1]: nvidia.service: Deactivated successfully. Apr 12 18:55:59.766876 env[1645]: time="2024-04-12T18:55:59.766762136Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Apr 12 18:55:59.767294 env[1645]: time="2024-04-12T18:55:59.767258639Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Apr 12 18:55:59.791118 env[1645]: time="2024-04-12T18:55:59.791063292Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.154-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Apr 12 18:55:59.792368 env[1645]: time="2024-04-12T18:55:59.792318693Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
type=io.containerd.snapshotter.v1 Apr 12 18:55:59.802914 env[1645]: time="2024-04-12T18:55:59.802860816Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 12 18:55:59.803095 env[1645]: time="2024-04-12T18:55:59.803071944Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Apr 12 18:55:59.803183 env[1645]: time="2024-04-12T18:55:59.803165636Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Apr 12 18:55:59.803257 env[1645]: time="2024-04-12T18:55:59.803241679Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Apr 12 18:55:59.803467 env[1645]: time="2024-04-12T18:55:59.803444980Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Apr 12 18:55:59.804086 env[1645]: time="2024-04-12T18:55:59.804059490Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Apr 12 18:55:59.804432 env[1645]: time="2024-04-12T18:55:59.804401267Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 12 18:55:59.804531 env[1645]: time="2024-04-12T18:55:59.804514056Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Apr 12 18:55:59.804693 env[1645]: time="2024-04-12T18:55:59.804674357Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Apr 12 18:55:59.804794 env[1645]: time="2024-04-12T18:55:59.804778446Z" level=info msg="metadata content store policy set" policy=shared Apr 12 18:55:59.818684 env[1645]: time="2024-04-12T18:55:59.818637014Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Apr 12 18:55:59.818870 env[1645]: time="2024-04-12T18:55:59.818853358Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Apr 12 18:55:59.818991 env[1645]: time="2024-04-12T18:55:59.818931057Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Apr 12 18:55:59.819259 env[1645]: time="2024-04-12T18:55:59.819235820Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Apr 12 18:55:59.819431 env[1645]: time="2024-04-12T18:55:59.819414947Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Apr 12 18:55:59.819509 env[1645]: time="2024-04-12T18:55:59.819495936Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Apr 12 18:55:59.819601 env[1645]: time="2024-04-12T18:55:59.819565804Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Apr 12 18:55:59.819676 env[1645]: time="2024-04-12T18:55:59.819662602Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Apr 12 18:55:59.819754 env[1645]: time="2024-04-12T18:55:59.819739630Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." 
type=io.containerd.service.v1 Apr 12 18:55:59.819823 env[1645]: time="2024-04-12T18:55:59.819810283Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Apr 12 18:55:59.819891 env[1645]: time="2024-04-12T18:55:59.819877484Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Apr 12 18:55:59.819966 env[1645]: time="2024-04-12T18:55:59.819952912Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Apr 12 18:55:59.820383 env[1645]: time="2024-04-12T18:55:59.820352937Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Apr 12 18:55:59.820630 env[1645]: time="2024-04-12T18:55:59.820611141Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Apr 12 18:55:59.821251 env[1645]: time="2024-04-12T18:55:59.821228922Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Apr 12 18:55:59.821651 env[1645]: time="2024-04-12T18:55:59.821627756Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Apr 12 18:55:59.821762 env[1645]: time="2024-04-12T18:55:59.821743412Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Apr 12 18:55:59.821939 env[1645]: time="2024-04-12T18:55:59.821923022Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Apr 12 18:55:59.824078 env[1645]: time="2024-04-12T18:55:59.824049856Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Apr 12 18:55:59.824188 env[1645]: time="2024-04-12T18:55:59.824172047Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." 
type=io.containerd.grpc.v1 Apr 12 18:55:59.824275 env[1645]: time="2024-04-12T18:55:59.824260133Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Apr 12 18:55:59.824693 env[1645]: time="2024-04-12T18:55:59.824670166Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Apr 12 18:55:59.826648 env[1645]: time="2024-04-12T18:55:59.826620462Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Apr 12 18:55:59.826773 env[1645]: time="2024-04-12T18:55:59.826753433Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Apr 12 18:55:59.826970 env[1645]: time="2024-04-12T18:55:59.826953747Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Apr 12 18:55:59.827563 env[1645]: time="2024-04-12T18:55:59.827543225Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Apr 12 18:55:59.827752 tar[1641]: ./ptp Apr 12 18:55:59.827907 env[1645]: time="2024-04-12T18:55:59.827880998Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Apr 12 18:55:59.827963 env[1645]: time="2024-04-12T18:55:59.827916600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Apr 12 18:55:59.827963 env[1645]: time="2024-04-12T18:55:59.827938767Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Apr 12 18:55:59.827963 env[1645]: time="2024-04-12T18:55:59.827958118Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Apr 12 18:55:59.828155 env[1645]: time="2024-04-12T18:55:59.827981179Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Apr 12 18:55:59.828155 env[1645]: time="2024-04-12T18:55:59.827998657Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Apr 12 18:55:59.828155 env[1645]: time="2024-04-12T18:55:59.828024787Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Apr 12 18:55:59.828155 env[1645]: time="2024-04-12T18:55:59.828071754Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Apr 12 18:55:59.828445 env[1645]: time="2024-04-12T18:55:59.828381615Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 
SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Apr 12 18:55:59.832701 env[1645]: time="2024-04-12T18:55:59.828464518Z" level=info msg="Connect containerd service" Apr 12 18:55:59.832701 env[1645]: time="2024-04-12T18:55:59.828517579Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Apr 12 18:55:59.828988 systemd[1]: Started systemd-hostnamed.service. Apr 12 18:55:59.828801 dbus-daemon[1626]: [system] Successfully activated service 'org.freedesktop.hostname1' Apr 12 18:55:59.829664 dbus-daemon[1626]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1680 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Apr 12 18:55:59.836347 systemd[1]: Starting polkit.service... 
Apr 12 18:55:59.847442 coreos-metadata[1625]: Apr 12 18:55:59.846 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Apr 12 18:55:59.856295 env[1645]: time="2024-04-12T18:55:59.856249003Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 12 18:55:59.860729 coreos-metadata[1625]: Apr 12 18:55:59.860 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-keys: Attempt #1 Apr 12 18:55:59.863843 env[1645]: time="2024-04-12T18:55:59.863777130Z" level=info msg="Start subscribing containerd event" Apr 12 18:55:59.864058 coreos-metadata[1625]: Apr 12 18:55:59.863 INFO Fetch successful Apr 12 18:55:59.864211 coreos-metadata[1625]: Apr 12 18:55:59.864 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-keys/0/openssh-key: Attempt #1 Apr 12 18:55:59.867044 coreos-metadata[1625]: Apr 12 18:55:59.866 INFO Fetch successful Apr 12 18:55:59.874232 env[1645]: time="2024-04-12T18:55:59.874186002Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Apr 12 18:55:59.877384 env[1645]: time="2024-04-12T18:55:59.874278138Z" level=info msg=serving... address=/run/containerd/containerd.sock Apr 12 18:55:59.877384 env[1645]: time="2024-04-12T18:55:59.874346549Z" level=info msg="containerd successfully booted in 0.479340s" Apr 12 18:55:59.874443 systemd[1]: Started containerd.service. 
Apr 12 18:55:59.877736 unknown[1625]: wrote ssh authorized keys file for user: core Apr 12 18:55:59.879733 env[1645]: time="2024-04-12T18:55:59.879651429Z" level=info msg="Start recovering state" Apr 12 18:55:59.879846 env[1645]: time="2024-04-12T18:55:59.879810568Z" level=info msg="Start event monitor" Apr 12 18:55:59.879846 env[1645]: time="2024-04-12T18:55:59.879832558Z" level=info msg="Start snapshots syncer" Apr 12 18:55:59.879937 env[1645]: time="2024-04-12T18:55:59.879847579Z" level=info msg="Start cni network conf syncer for default" Apr 12 18:55:59.879937 env[1645]: time="2024-04-12T18:55:59.879859188Z" level=info msg="Start streaming server" Apr 12 18:55:59.892338 polkitd[1764]: Started polkitd version 121 Apr 12 18:55:59.914227 polkitd[1764]: Loading rules from directory /etc/polkit-1/rules.d Apr 12 18:55:59.914556 polkitd[1764]: Loading rules from directory /usr/share/polkit-1/rules.d Apr 12 18:55:59.917178 polkitd[1764]: Finished loading, compiling and executing 2 rules Apr 12 18:55:59.922226 update-ssh-keys[1766]: Updated "/home/core/.ssh/authorized_keys" Apr 12 18:55:59.922646 systemd[1]: Finished coreos-metadata-sshkeys@core.service. Apr 12 18:55:59.931101 dbus-daemon[1626]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Apr 12 18:55:59.931288 systemd[1]: Started polkit.service. Apr 12 18:55:59.934287 polkitd[1764]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Apr 12 18:55:59.991644 systemd-resolved[1592]: System hostname changed to 'ip-172-31-18-181'. 
Apr 12 18:55:59.991750 systemd-hostnamed[1680]: Hostname set to (transient) Apr 12 18:56:00.190489 tar[1641]: ./vlan Apr 12 18:56:00.405286 tar[1641]: ./host-device Apr 12 18:56:00.417264 amazon-ssm-agent[1623]: 2024-04-12 18:56:00 INFO Create new startup processor Apr 12 18:56:00.417264 amazon-ssm-agent[1623]: 2024-04-12 18:56:00 INFO [LongRunningPluginsManager] registered plugins: {} Apr 12 18:56:00.417264 amazon-ssm-agent[1623]: 2024-04-12 18:56:00 INFO Initializing bookkeeping folders Apr 12 18:56:00.417264 amazon-ssm-agent[1623]: 2024-04-12 18:56:00 INFO removing the completed state files Apr 12 18:56:00.417264 amazon-ssm-agent[1623]: 2024-04-12 18:56:00 INFO Initializing bookkeeping folders for long running plugins Apr 12 18:56:00.417264 amazon-ssm-agent[1623]: 2024-04-12 18:56:00 INFO Initializing replies folder for MDS reply requests that couldn't reach the service Apr 12 18:56:00.417264 amazon-ssm-agent[1623]: 2024-04-12 18:56:00 INFO Initializing healthcheck folders for long running plugins Apr 12 18:56:00.417264 amazon-ssm-agent[1623]: 2024-04-12 18:56:00 INFO Initializing locations for inventory plugin Apr 12 18:56:00.417264 amazon-ssm-agent[1623]: 2024-04-12 18:56:00 INFO Initializing default location for custom inventory Apr 12 18:56:00.417264 amazon-ssm-agent[1623]: 2024-04-12 18:56:00 INFO Initializing default location for file inventory Apr 12 18:56:00.417264 amazon-ssm-agent[1623]: 2024-04-12 18:56:00 INFO Initializing default location for role inventory Apr 12 18:56:00.417264 amazon-ssm-agent[1623]: 2024-04-12 18:56:00 INFO Init the cloudwatchlogs publisher Apr 12 18:56:00.417264 amazon-ssm-agent[1623]: 2024-04-12 18:56:00 INFO [instanceID=i-01769c9b13ddd8421] Successfully loaded platform independent plugin aws:updateSsmAgent Apr 12 18:56:00.417264 amazon-ssm-agent[1623]: 2024-04-12 18:56:00 INFO [instanceID=i-01769c9b13ddd8421] Successfully loaded platform independent plugin aws:configureDocker Apr 12 18:56:00.417264 amazon-ssm-agent[1623]: 
2024-04-12 18:56:00 INFO [instanceID=i-01769c9b13ddd8421] Successfully loaded platform independent plugin aws:runDockerAction Apr 12 18:56:00.417264 amazon-ssm-agent[1623]: 2024-04-12 18:56:00 INFO [instanceID=i-01769c9b13ddd8421] Successfully loaded platform independent plugin aws:configurePackage Apr 12 18:56:00.417264 amazon-ssm-agent[1623]: 2024-04-12 18:56:00 INFO [instanceID=i-01769c9b13ddd8421] Successfully loaded platform independent plugin aws:downloadContent Apr 12 18:56:00.417264 amazon-ssm-agent[1623]: 2024-04-12 18:56:00 INFO [instanceID=i-01769c9b13ddd8421] Successfully loaded platform independent plugin aws:runDocument Apr 12 18:56:00.417264 amazon-ssm-agent[1623]: 2024-04-12 18:56:00 INFO [instanceID=i-01769c9b13ddd8421] Successfully loaded platform independent plugin aws:softwareInventory Apr 12 18:56:00.417264 amazon-ssm-agent[1623]: 2024-04-12 18:56:00 INFO [instanceID=i-01769c9b13ddd8421] Successfully loaded platform independent plugin aws:refreshAssociation Apr 12 18:56:00.417264 amazon-ssm-agent[1623]: 2024-04-12 18:56:00 INFO [instanceID=i-01769c9b13ddd8421] Successfully loaded platform independent plugin aws:runPowerShellScript Apr 12 18:56:00.418945 amazon-ssm-agent[1623]: 2024-04-12 18:56:00 INFO [instanceID=i-01769c9b13ddd8421] Successfully loaded platform dependent plugin aws:runShellScript Apr 12 18:56:00.418945 amazon-ssm-agent[1623]: 2024-04-12 18:56:00 INFO Starting Agent: amazon-ssm-agent - v2.3.1319.0 Apr 12 18:56:00.418945 amazon-ssm-agent[1623]: 2024-04-12 18:56:00 INFO OS: linux, Arch: amd64 Apr 12 18:56:00.419766 amazon-ssm-agent[1623]: datastore file /var/lib/amazon/ssm/i-01769c9b13ddd8421/longrunningplugins/datastore/store doesn't exist - no long running plugins to execute Apr 12 18:56:00.522755 amazon-ssm-agent[1623]: 2024-04-12 18:56:00 INFO [MessagingDeliveryService] Starting document processing engine... 
Apr 12 18:56:00.555696 tar[1641]: ./tuning Apr 12 18:56:00.616662 amazon-ssm-agent[1623]: 2024-04-12 18:56:00 INFO [MessagingDeliveryService] [EngineProcessor] Starting Apr 12 18:56:00.670847 tar[1641]: ./vrf Apr 12 18:56:00.710942 amazon-ssm-agent[1623]: 2024-04-12 18:56:00 INFO [MessagingDeliveryService] [EngineProcessor] Initial processing Apr 12 18:56:00.766880 tar[1641]: ./sbr Apr 12 18:56:00.805907 amazon-ssm-agent[1623]: 2024-04-12 18:56:00 INFO [MessagingDeliveryService] Starting message polling Apr 12 18:56:00.900506 amazon-ssm-agent[1623]: 2024-04-12 18:56:00 INFO [MessagingDeliveryService] Starting send replies to MDS Apr 12 18:56:00.909092 tar[1641]: ./tap Apr 12 18:56:00.995373 amazon-ssm-agent[1623]: 2024-04-12 18:56:00 INFO [instanceID=i-01769c9b13ddd8421] Starting association polling Apr 12 18:56:01.049160 tar[1641]: ./dhcp Apr 12 18:56:01.094015 amazon-ssm-agent[1623]: 2024-04-12 18:56:00 INFO [MessagingDeliveryService] [Association] [EngineProcessor] Starting Apr 12 18:56:01.189892 amazon-ssm-agent[1623]: 2024-04-12 18:56:00 INFO [MessagingDeliveryService] [Association] Launching response handler Apr 12 18:56:01.297402 amazon-ssm-agent[1623]: 2024-04-12 18:56:00 INFO [MessagingDeliveryService] [Association] [EngineProcessor] Initial processing Apr 12 18:56:01.393520 amazon-ssm-agent[1623]: 2024-04-12 18:56:00 INFO [MessagingDeliveryService] [Association] Initializing association scheduling service Apr 12 18:56:01.489437 amazon-ssm-agent[1623]: 2024-04-12 18:56:00 INFO [MessagingDeliveryService] [Association] Association scheduling service initialized Apr 12 18:56:01.585506 amazon-ssm-agent[1623]: 2024-04-12 18:56:00 INFO [HealthCheck] HealthCheck reporting agent health. Apr 12 18:56:01.690496 amazon-ssm-agent[1623]: 2024-04-12 18:56:00 INFO [MessageGatewayService] Starting session document processing engine... 
Apr 12 18:56:01.787936 amazon-ssm-agent[1623]: 2024-04-12 18:56:00 INFO [MessageGatewayService] [EngineProcessor] Starting Apr 12 18:56:01.878046 tar[1641]: ./static Apr 12 18:56:01.885083 amazon-ssm-agent[1623]: 2024-04-12 18:56:00 INFO [MessageGatewayService] SSM Agent is trying to setup control channel for Session Manager module. Apr 12 18:56:01.982476 amazon-ssm-agent[1623]: 2024-04-12 18:56:00 INFO [MessageGatewayService] Setting up websocket for controlchannel for instance: i-01769c9b13ddd8421, requestId: 6df1ea1e-283d-4523-a18d-28c8ff96098c Apr 12 18:56:02.079982 amazon-ssm-agent[1623]: 2024-04-12 18:56:00 INFO [OfflineService] Starting document processing engine... Apr 12 18:56:02.131117 tar[1642]: linux-amd64/LICENSE Apr 12 18:56:02.133888 tar[1642]: linux-amd64/README.md Apr 12 18:56:02.134820 tar[1641]: ./firewall Apr 12 18:56:02.168356 systemd[1]: Finished prepare-helm.service. Apr 12 18:56:02.177254 amazon-ssm-agent[1623]: 2024-04-12 18:56:00 INFO [OfflineService] [EngineProcessor] Starting Apr 12 18:56:02.195193 systemd[1]: Finished prepare-critools.service. 
Apr 12 18:56:02.275665 amazon-ssm-agent[1623]: 2024-04-12 18:56:00 INFO [OfflineService] [EngineProcessor] Initial processing Apr 12 18:56:02.347187 tar[1641]: ./macvlan Apr 12 18:56:02.375295 amazon-ssm-agent[1623]: 2024-04-12 18:56:00 INFO [OfflineService] Starting message polling Apr 12 18:56:02.465084 tar[1641]: ./dummy Apr 12 18:56:02.473290 amazon-ssm-agent[1623]: 2024-04-12 18:56:00 INFO [OfflineService] Starting send replies to MDS Apr 12 18:56:02.525620 tar[1641]: ./bridge Apr 12 18:56:02.571473 amazon-ssm-agent[1623]: 2024-04-12 18:56:00 INFO [LongRunningPluginsManager] starting long running plugin manager Apr 12 18:56:02.586916 tar[1641]: ./ipvlan Apr 12 18:56:02.669861 amazon-ssm-agent[1623]: 2024-04-12 18:56:00 INFO [LongRunningPluginsManager] there aren't any long running plugin to execute Apr 12 18:56:02.683566 tar[1641]: ./portmap Apr 12 18:56:02.770701 amazon-ssm-agent[1623]: 2024-04-12 18:56:00 INFO [MessageGatewayService] listening reply. Apr 12 18:56:02.773128 tar[1641]: ./host-local Apr 12 18:56:02.870893 amazon-ssm-agent[1623]: 2024-04-12 18:56:00 INFO [LongRunningPluginsManager] There are no long running plugins currently getting executed - skipping their healthcheck Apr 12 18:56:02.878149 systemd[1]: Finished prepare-cni-plugins.service. 
Apr 12 18:56:02.968563 amazon-ssm-agent[1623]: 2024-04-12 18:56:00 INFO [StartupProcessor] Executing startup processor tasks Apr 12 18:56:03.054672 locksmithd[1704]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Apr 12 18:56:03.067688 amazon-ssm-agent[1623]: 2024-04-12 18:56:00 INFO [StartupProcessor] Write to serial port: Amazon SSM Agent v2.3.1319.0 is running Apr 12 18:56:03.167161 amazon-ssm-agent[1623]: 2024-04-12 18:56:00 INFO [StartupProcessor] Write to serial port: OsProductName: Flatcar Container Linux by Kinvolk Apr 12 18:56:03.188597 sshd_keygen[1668]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Apr 12 18:56:03.266828 amazon-ssm-agent[1623]: 2024-04-12 18:56:00 INFO [StartupProcessor] Write to serial port: OsVersion: 3510.3.3 Apr 12 18:56:03.282843 systemd[1]: Finished sshd-keygen.service. Apr 12 18:56:03.288701 systemd[1]: Starting issuegen.service... Apr 12 18:56:03.306032 systemd[1]: issuegen.service: Deactivated successfully. Apr 12 18:56:03.306298 systemd[1]: Finished issuegen.service. Apr 12 18:56:03.311714 systemd[1]: Starting systemd-user-sessions.service... Apr 12 18:56:03.321456 systemd[1]: Finished systemd-user-sessions.service. Apr 12 18:56:03.328438 systemd[1]: Started getty@tty1.service. Apr 12 18:56:03.331248 systemd[1]: Started serial-getty@ttyS0.service. Apr 12 18:56:03.332797 systemd[1]: Reached target getty.target. Apr 12 18:56:03.334195 systemd[1]: Reached target multi-user.target. Apr 12 18:56:03.336944 systemd[1]: Starting systemd-update-utmp-runlevel.service... Apr 12 18:56:03.347279 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Apr 12 18:56:03.347453 systemd[1]: Finished systemd-update-utmp-runlevel.service. Apr 12 18:56:03.348881 systemd[1]: Startup finished in 761ms (kernel) + 11.785s (initrd) + 15.148s (userspace) = 27.696s. 
Apr 12 18:56:03.366444 amazon-ssm-agent[1623]: 2024-04-12 18:56:00 INFO [MessageGatewayService] Opening websocket connection to: wss://ssmmessages.us-west-2.amazonaws.com/v1/control-channel/i-01769c9b13ddd8421?role=subscribe&stream=input Apr 12 18:56:03.466348 amazon-ssm-agent[1623]: 2024-04-12 18:56:00 INFO [MessageGatewayService] Successfully opened websocket connection to: wss://ssmmessages.us-west-2.amazonaws.com/v1/control-channel/i-01769c9b13ddd8421?role=subscribe&stream=input Apr 12 18:56:03.566261 amazon-ssm-agent[1623]: 2024-04-12 18:56:00 INFO [MessageGatewayService] Starting receiving message from control channel Apr 12 18:56:03.666683 amazon-ssm-agent[1623]: 2024-04-12 18:56:00 INFO [MessageGatewayService] [EngineProcessor] Initial processing Apr 12 18:56:07.193916 systemd[1]: Created slice system-sshd.slice. Apr 12 18:56:07.196599 systemd[1]: Started sshd@0-172.31.18.181:22-147.75.109.163:55684.service. Apr 12 18:56:07.484244 sshd[1839]: Accepted publickey for core from 147.75.109.163 port 55684 ssh2: RSA SHA256:+N1xisw2c2FaZUjSYyTG/z1AiN+MoHtibeEcHRhPKVY Apr 12 18:56:07.487651 sshd[1839]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 18:56:07.535359 systemd[1]: Created slice user-500.slice. Apr 12 18:56:07.542804 systemd[1]: Starting user-runtime-dir@500.service... Apr 12 18:56:07.555671 systemd-logind[1636]: New session 1 of user core. Apr 12 18:56:07.562706 systemd[1]: Finished user-runtime-dir@500.service. Apr 12 18:56:07.564859 systemd[1]: Starting user@500.service... Apr 12 18:56:07.569417 (systemd)[1842]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Apr 12 18:56:07.721046 systemd[1842]: Queued start job for default target default.target. Apr 12 18:56:07.722313 systemd[1842]: Reached target paths.target. Apr 12 18:56:07.722348 systemd[1842]: Reached target sockets.target. Apr 12 18:56:07.722368 systemd[1842]: Reached target timers.target. 
Apr 12 18:56:07.722386 systemd[1842]: Reached target basic.target. Apr 12 18:56:07.722510 systemd[1]: Started user@500.service. Apr 12 18:56:07.726262 systemd[1]: Started session-1.scope. Apr 12 18:56:07.734969 systemd[1842]: Reached target default.target. Apr 12 18:56:07.736518 systemd[1842]: Startup finished in 155ms. Apr 12 18:56:07.890000 systemd[1]: Started sshd@1-172.31.18.181:22-147.75.109.163:55690.service. Apr 12 18:56:08.062405 sshd[1851]: Accepted publickey for core from 147.75.109.163 port 55690 ssh2: RSA SHA256:+N1xisw2c2FaZUjSYyTG/z1AiN+MoHtibeEcHRhPKVY Apr 12 18:56:08.063987 sshd[1851]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 18:56:08.073152 systemd[1]: Started session-2.scope. Apr 12 18:56:08.074296 systemd-logind[1636]: New session 2 of user core. Apr 12 18:56:08.221149 sshd[1851]: pam_unix(sshd:session): session closed for user core Apr 12 18:56:08.228140 systemd[1]: sshd@1-172.31.18.181:22-147.75.109.163:55690.service: Deactivated successfully. Apr 12 18:56:08.234397 systemd[1]: session-2.scope: Deactivated successfully. Apr 12 18:56:08.237886 systemd-logind[1636]: Session 2 logged out. Waiting for processes to exit. Apr 12 18:56:08.243967 systemd-logind[1636]: Removed session 2. Apr 12 18:56:08.256841 systemd[1]: Started sshd@2-172.31.18.181:22-147.75.109.163:55694.service. Apr 12 18:56:08.458621 sshd[1857]: Accepted publickey for core from 147.75.109.163 port 55694 ssh2: RSA SHA256:+N1xisw2c2FaZUjSYyTG/z1AiN+MoHtibeEcHRhPKVY Apr 12 18:56:08.459706 sshd[1857]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 18:56:08.470030 systemd[1]: Started session-3.scope. Apr 12 18:56:08.470598 systemd-logind[1636]: New session 3 of user core. Apr 12 18:56:08.599186 sshd[1857]: pam_unix(sshd:session): session closed for user core Apr 12 18:56:08.603087 systemd[1]: sshd@2-172.31.18.181:22-147.75.109.163:55694.service: Deactivated successfully. 
Apr 12 18:56:08.604047 systemd[1]: session-3.scope: Deactivated successfully. Apr 12 18:56:08.604829 systemd-logind[1636]: Session 3 logged out. Waiting for processes to exit. Apr 12 18:56:08.605945 systemd-logind[1636]: Removed session 3. Apr 12 18:56:08.626059 systemd[1]: Started sshd@3-172.31.18.181:22-147.75.109.163:55710.service. Apr 12 18:56:08.803388 sshd[1863]: Accepted publickey for core from 147.75.109.163 port 55710 ssh2: RSA SHA256:+N1xisw2c2FaZUjSYyTG/z1AiN+MoHtibeEcHRhPKVY Apr 12 18:56:08.805274 sshd[1863]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 18:56:08.825644 systemd-logind[1636]: New session 4 of user core. Apr 12 18:56:08.825683 systemd[1]: Started session-4.scope. Apr 12 18:56:08.998188 sshd[1863]: pam_unix(sshd:session): session closed for user core Apr 12 18:56:09.006050 systemd[1]: sshd@3-172.31.18.181:22-147.75.109.163:55710.service: Deactivated successfully. Apr 12 18:56:09.009730 systemd[1]: session-4.scope: Deactivated successfully. Apr 12 18:56:09.012529 systemd-logind[1636]: Session 4 logged out. Waiting for processes to exit. Apr 12 18:56:09.014174 systemd-logind[1636]: Removed session 4. Apr 12 18:56:09.024796 systemd[1]: Started sshd@4-172.31.18.181:22-147.75.109.163:55716.service. Apr 12 18:56:09.218411 sshd[1869]: Accepted publickey for core from 147.75.109.163 port 55716 ssh2: RSA SHA256:+N1xisw2c2FaZUjSYyTG/z1AiN+MoHtibeEcHRhPKVY Apr 12 18:56:09.220337 sshd[1869]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 18:56:09.232493 systemd[1]: Started session-5.scope. Apr 12 18:56:09.232977 systemd-logind[1636]: New session 5 of user core. Apr 12 18:56:09.396301 sudo[1872]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Apr 12 18:56:09.396660 sudo[1872]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Apr 12 18:56:10.006652 systemd[1]: Starting docker.service... 
Apr 12 18:56:10.063729 env[1887]: time="2024-04-12T18:56:10.063625770Z" level=info msg="Starting up" Apr 12 18:56:10.065609 env[1887]: time="2024-04-12T18:56:10.065501326Z" level=info msg="parsed scheme: \"unix\"" module=grpc Apr 12 18:56:10.065870 env[1887]: time="2024-04-12T18:56:10.065796049Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Apr 12 18:56:10.065998 env[1887]: time="2024-04-12T18:56:10.065977479Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Apr 12 18:56:10.066074 env[1887]: time="2024-04-12T18:56:10.066060623Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Apr 12 18:56:10.070036 env[1887]: time="2024-04-12T18:56:10.069973827Z" level=info msg="parsed scheme: \"unix\"" module=grpc Apr 12 18:56:10.070036 env[1887]: time="2024-04-12T18:56:10.070030036Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Apr 12 18:56:10.070399 env[1887]: time="2024-04-12T18:56:10.070053330Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Apr 12 18:56:10.070399 env[1887]: time="2024-04-12T18:56:10.070066186Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Apr 12 18:56:10.081317 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport4152270172-merged.mount: Deactivated successfully. Apr 12 18:56:10.220733 env[1887]: time="2024-04-12T18:56:10.220690775Z" level=info msg="Loading containers: start." Apr 12 18:56:10.481761 kernel: Initializing XFRM netlink socket Apr 12 18:56:10.575511 env[1887]: time="2024-04-12T18:56:10.575461853Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. 
Daemon option --bip can be used to set a preferred IP address" Apr 12 18:56:10.578986 (udev-worker)[1896]: Network interface NamePolicy= disabled on kernel command line. Apr 12 18:56:10.771825 systemd-networkd[1459]: docker0: Link UP Apr 12 18:56:10.789017 env[1887]: time="2024-04-12T18:56:10.788928863Z" level=info msg="Loading containers: done." Apr 12 18:56:10.807441 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2282965804-merged.mount: Deactivated successfully. Apr 12 18:56:10.823242 env[1887]: time="2024-04-12T18:56:10.823188743Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Apr 12 18:56:10.823495 env[1887]: time="2024-04-12T18:56:10.823436581Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Apr 12 18:56:10.823616 env[1887]: time="2024-04-12T18:56:10.823595438Z" level=info msg="Daemon has completed initialization" Apr 12 18:56:10.848969 systemd[1]: Started docker.service. Apr 12 18:56:10.861777 env[1887]: time="2024-04-12T18:56:10.861689796Z" level=info msg="API listen on /run/docker.sock" Apr 12 18:56:10.888217 systemd[1]: Reloading. Apr 12 18:56:11.059864 /usr/lib/systemd/system-generators/torcx-generator[2024]: time="2024-04-12T18:56:11Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.3 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.3 /var/lib/torcx/store]" Apr 12 18:56:11.061415 /usr/lib/systemd/system-generators/torcx-generator[2024]: time="2024-04-12T18:56:11Z" level=info msg="torcx already run" Apr 12 18:56:11.192183 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. 
Apr 12 18:56:11.192208 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Apr 12 18:56:11.222703 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 12 18:56:11.374393 systemd[1]: Started kubelet.service. Apr 12 18:56:11.505165 kubelet[2075]: E0412 18:56:11.505025 2075 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 12 18:56:11.507214 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 12 18:56:11.507337 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 12 18:56:12.219751 env[1645]: time="2024-04-12T18:56:12.218851829Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.3\"" Apr 12 18:56:12.963770 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount263949269.mount: Deactivated successfully. 
Apr 12 18:56:15.442435 env[1645]: time="2024-04-12T18:56:15.442197868Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.29.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:56:15.446657 env[1645]: time="2024-04-12T18:56:15.446524448Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:56:15.450827 env[1645]: time="2024-04-12T18:56:15.450780071Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.29.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:56:15.454142 env[1645]: time="2024-04-12T18:56:15.454094671Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:ebd35bc7ef24672c5c50ffccb21f71307a82d4fb20c0ecb6d3d27b28b69e0e3c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:56:15.455129 env[1645]: time="2024-04-12T18:56:15.455016743Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.3\" returns image reference \"sha256:39f995c9f199675725a38b0d9f19f99652f978861e631729f2ec4fd8efaab533\"" Apr 12 18:56:15.475207 env[1645]: time="2024-04-12T18:56:15.475163684Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.3\"" Apr 12 18:56:18.627433 env[1645]: time="2024-04-12T18:56:18.627265359Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.29.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:56:18.635474 env[1645]: time="2024-04-12T18:56:18.635427706Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Apr 12 18:56:18.638012 env[1645]: time="2024-04-12T18:56:18.637971948Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.29.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:56:18.640400 env[1645]: time="2024-04-12T18:56:18.640360484Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:5a7968649f8aee83d5a2d75d6d377ba2680df25b0b97b3be12fa10f15ad67104,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:56:18.641076 env[1645]: time="2024-04-12T18:56:18.641035845Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.3\" returns image reference \"sha256:6052a25da3f97387a8a5a9711fbff373801dcea4b0487add79dc3903c4bf14b3\"" Apr 12 18:56:18.671167 env[1645]: time="2024-04-12T18:56:18.671113747Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.3\"" Apr 12 18:56:20.530472 env[1645]: time="2024-04-12T18:56:20.530420887Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.29.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:56:20.534687 env[1645]: time="2024-04-12T18:56:20.534644945Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:56:20.541335 env[1645]: time="2024-04-12T18:56:20.541279199Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.29.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:56:20.542448 env[1645]: time="2024-04-12T18:56:20.542412138Z" level=info msg="ImageCreate event 
&ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:6fb91d791db6d62f6b1ac9dbed23fdb597335550d99ff8333d53c4136e889b3a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:56:20.543334 env[1645]: time="2024-04-12T18:56:20.543290753Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.3\" returns image reference \"sha256:8c390d98f50c0b8f564e172a80565384dc9eeb7e16b5a6794c616706206dee3b\"" Apr 12 18:56:20.587168 env[1645]: time="2024-04-12T18:56:20.587115954Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.3\"" Apr 12 18:56:21.758704 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Apr 12 18:56:21.758963 systemd[1]: Stopped kubelet.service. Apr 12 18:56:21.761409 systemd[1]: Started kubelet.service. Apr 12 18:56:21.884080 kubelet[2110]: E0412 18:56:21.884036 2110 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 12 18:56:21.889668 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 12 18:56:21.889960 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 12 18:56:22.229213 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2374754753.mount: Deactivated successfully. 
Apr 12 18:56:23.253000 env[1645]: time="2024-04-12T18:56:23.252944891Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.29.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:56:23.259878 env[1645]: time="2024-04-12T18:56:23.259829366Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:56:23.262429 env[1645]: time="2024-04-12T18:56:23.262377147Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.29.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:56:23.264983 env[1645]: time="2024-04-12T18:56:23.264943258Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:fa87cba052adcb992bd59bd1304115c6f3b3fb370407805ba52af3d9ff3f0863,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:56:23.265360 env[1645]: time="2024-04-12T18:56:23.265320837Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.3\" returns image reference \"sha256:a1d263b5dc5b0acea099d5e91a3a041b6704392ad95e5ea3b5bbe4f71784e392\"" Apr 12 18:56:23.277080 env[1645]: time="2024-04-12T18:56:23.277041550Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Apr 12 18:56:24.044049 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3592802702.mount: Deactivated successfully. Apr 12 18:56:24.070855 amazon-ssm-agent[1623]: 2024-04-12 18:56:24 INFO [MessagingDeliveryService] [Association] No associations on boot. Requerying for associations after 30 seconds. 
Apr 12 18:56:25.722470 env[1645]: time="2024-04-12T18:56:25.722411516Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:56:25.725300 env[1645]: time="2024-04-12T18:56:25.725254659Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:56:25.728202 env[1645]: time="2024-04-12T18:56:25.728159255Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:56:25.731382 env[1645]: time="2024-04-12T18:56:25.731335365Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:56:25.732662 env[1645]: time="2024-04-12T18:56:25.732619491Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Apr 12 18:56:25.752954 env[1645]: time="2024-04-12T18:56:25.752909686Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Apr 12 18:56:26.282297 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3195844758.mount: Deactivated successfully. 
Apr 12 18:56:26.293446 env[1645]: time="2024-04-12T18:56:26.293391690Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:56:26.296770 env[1645]: time="2024-04-12T18:56:26.296721877Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:56:26.299879 env[1645]: time="2024-04-12T18:56:26.299835333Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:56:26.302992 env[1645]: time="2024-04-12T18:56:26.302888115Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:56:26.303459 env[1645]: time="2024-04-12T18:56:26.303422748Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Apr 12 18:56:26.316998 env[1645]: time="2024-04-12T18:56:26.316930589Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Apr 12 18:56:26.948995 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1898205247.mount: Deactivated successfully. Apr 12 18:56:30.027267 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
Apr 12 18:56:30.906680 env[1645]: time="2024-04-12T18:56:30.906616190Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.10-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Apr 12 18:56:30.946197 env[1645]: time="2024-04-12T18:56:30.946148708Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Apr 12 18:56:30.964736 env[1645]: time="2024-04-12T18:56:30.963876267Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.10-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Apr 12 18:56:30.989137 env[1645]: time="2024-04-12T18:56:30.989081694Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Apr 12 18:56:30.989588 env[1645]: time="2024-04-12T18:56:30.989540080Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\""
Apr 12 18:56:32.141835 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Apr 12 18:56:32.142216 systemd[1]: Stopped kubelet.service.
Apr 12 18:56:32.154864 systemd[1]: Started kubelet.service.
Apr 12 18:56:32.271842 kubelet[2196]: E0412 18:56:32.271733 2196 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 12 18:56:32.275835 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 12 18:56:32.276006 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 12 18:56:34.551781 systemd[1]: Stopped kubelet.service.
Apr 12 18:56:34.571978 systemd[1]: Reloading.
Apr 12 18:56:34.681359 /usr/lib/systemd/system-generators/torcx-generator[2225]: time="2024-04-12T18:56:34Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.3 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.3 /var/lib/torcx/store]"
Apr 12 18:56:34.681848 /usr/lib/systemd/system-generators/torcx-generator[2225]: time="2024-04-12T18:56:34Z" level=info msg="torcx already run"
Apr 12 18:56:34.782020 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Apr 12 18:56:34.782045 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Apr 12 18:56:34.804739 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 12 18:56:34.955944 systemd[1]: Started kubelet.service.
Apr 12 18:56:35.012992 kubelet[2278]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 12 18:56:35.012992 kubelet[2278]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Apr 12 18:56:35.012992 kubelet[2278]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 12 18:56:35.013521 kubelet[2278]: I0412 18:56:35.013061 2278 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Apr 12 18:56:35.526355 kubelet[2278]: I0412 18:56:35.526311 2278 server.go:487] "Kubelet version" kubeletVersion="v1.29.2"
Apr 12 18:56:35.526355 kubelet[2278]: I0412 18:56:35.526349 2278 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Apr 12 18:56:35.526674 kubelet[2278]: I0412 18:56:35.526651 2278 server.go:919] "Client rotation is on, will bootstrap in background"
Apr 12 18:56:35.534506 kubelet[2278]: E0412 18:56:35.534475 2278 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.18.181:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.18.181:6443: connect: connection refused
Apr 12 18:56:35.534922 kubelet[2278]: I0412 18:56:35.534901 2278 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Apr 12 18:56:35.540957 kubelet[2278]: I0412 18:56:35.540926 2278 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Apr 12 18:56:35.541228 kubelet[2278]: I0412 18:56:35.541206 2278 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Apr 12 18:56:35.541428 kubelet[2278]: I0412 18:56:35.541406 2278 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Apr 12 18:56:35.541561 kubelet[2278]: I0412 18:56:35.541438 2278 topology_manager.go:138] "Creating topology manager with none policy"
Apr 12 18:56:35.541561 kubelet[2278]: I0412 18:56:35.541453 2278 container_manager_linux.go:301] "Creating device plugin manager"
Apr 12 18:56:35.541679 kubelet[2278]: I0412 18:56:35.541605 2278 state_mem.go:36] "Initialized new in-memory state store"
Apr 12 18:56:35.541732 kubelet[2278]: I0412 18:56:35.541719 2278 kubelet.go:396] "Attempting to sync node with API server"
Apr 12 18:56:35.541779 kubelet[2278]: I0412 18:56:35.541737 2278 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Apr 12 18:56:35.541779 kubelet[2278]: I0412 18:56:35.541770 2278 kubelet.go:312] "Adding apiserver pod source"
Apr 12 18:56:35.541874 kubelet[2278]: I0412 18:56:35.541790 2278 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Apr 12 18:56:35.542906 kubelet[2278]: W0412 18:56:35.542861 2278 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://172.31.18.181:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.18.181:6443: connect: connection refused
Apr 12 18:56:35.543057 kubelet[2278]: E0412 18:56:35.543045 2278 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.18.181:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.18.181:6443: connect: connection refused
Apr 12 18:56:35.543236 kubelet[2278]: W0412 18:56:35.543202 2278 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://172.31.18.181:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-18-181&limit=500&resourceVersion=0": dial tcp 172.31.18.181:6443: connect: connection refused
Apr 12 18:56:35.543332 kubelet[2278]: E0412 18:56:35.543321 2278 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.18.181:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-18-181&limit=500&resourceVersion=0": dial tcp 172.31.18.181:6443: connect: connection refused
Apr 12 18:56:35.543497 kubelet[2278]: I0412 18:56:35.543485 2278 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
Apr 12 18:56:35.543958 kubelet[2278]: I0412 18:56:35.543940 2278 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Apr 12 18:56:35.544096 kubelet[2278]: W0412 18:56:35.544086 2278 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Apr 12 18:56:35.545235 kubelet[2278]: I0412 18:56:35.545219 2278 server.go:1256] "Started kubelet"
Apr 12 18:56:35.548961 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped).
Apr 12 18:56:35.549211 kubelet[2278]: I0412 18:56:35.549191 2278 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Apr 12 18:56:35.551317 kubelet[2278]: E0412 18:56:35.550619 2278 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.18.181:6443/api/v1/namespaces/default/events\": dial tcp 172.31.18.181:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-18-181.17c59d4eea1e1879 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-18-181,UID:ip-172-31-18-181,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-18-181,},FirstTimestamp:2024-04-12 18:56:35.545192569 +0000 UTC m=+0.583986938,LastTimestamp:2024-04-12 18:56:35.545192569 +0000 UTC m=+0.583986938,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-18-181,}"
Apr 12 18:56:35.552377 kubelet[2278]: I0412 18:56:35.552353 2278 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Apr 12 18:56:35.553530 kubelet[2278]: I0412 18:56:35.553500 2278 server.go:461] "Adding debug handlers to kubelet server"
Apr 12 18:56:35.554653 kubelet[2278]: I0412 18:56:35.554629 2278 volume_manager.go:291] "Starting Kubelet Volume Manager"
Apr 12 18:56:35.555696 kubelet[2278]: I0412 18:56:35.555671 2278 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Apr 12 18:56:35.556002 kubelet[2278]: I0412 18:56:35.555978 2278 reconciler_new.go:29] "Reconciler: start to sync state"
Apr 12 18:56:35.558132 kubelet[2278]: E0412 18:56:35.558109 2278 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.18.181:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-18-181?timeout=10s\": dial tcp 172.31.18.181:6443: connect: connection refused" interval="200ms"
Apr 12 18:56:35.558664 kubelet[2278]: I0412 18:56:35.558532 2278 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Apr 12 18:56:35.559020 kubelet[2278]: I0412 18:56:35.559004 2278 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Apr 12 18:56:35.561389 kubelet[2278]: I0412 18:56:35.561361 2278 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Apr 12 18:56:35.577006 kubelet[2278]: W0412 18:56:35.576939 2278 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://172.31.18.181:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.18.181:6443: connect: connection refused
Apr 12 18:56:35.577006 kubelet[2278]: E0412 18:56:35.577006 2278 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.18.181:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.18.181:6443: connect: connection refused
Apr 12 18:56:35.582509 kubelet[2278]: I0412 18:56:35.582324 2278 factory.go:221] Registration of the containerd container factory successfully
Apr 12 18:56:35.582509 kubelet[2278]: I0412 18:56:35.582348 2278 factory.go:221] Registration of the systemd container factory successfully
Apr 12 18:56:35.584923 kubelet[2278]: E0412 18:56:35.584896 2278 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Apr 12 18:56:35.601800 kubelet[2278]: I0412 18:56:35.601774 2278 cpu_manager.go:214] "Starting CPU manager" policy="none"
Apr 12 18:56:35.601800 kubelet[2278]: I0412 18:56:35.601799 2278 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Apr 12 18:56:35.602011 kubelet[2278]: I0412 18:56:35.601834 2278 state_mem.go:36] "Initialized new in-memory state store"
Apr 12 18:56:35.602908 kubelet[2278]: I0412 18:56:35.602888 2278 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Apr 12 18:56:35.604993 kubelet[2278]: I0412 18:56:35.604974 2278 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Apr 12 18:56:35.605230 kubelet[2278]: I0412 18:56:35.605081 2278 status_manager.go:217] "Starting to sync pod status with apiserver"
Apr 12 18:56:35.605321 kubelet[2278]: I0412 18:56:35.605309 2278 kubelet.go:2329] "Starting kubelet main sync loop"
Apr 12 18:56:35.605483 kubelet[2278]: E0412 18:56:35.605473 2278 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Apr 12 18:56:35.606565 kubelet[2278]: W0412 18:56:35.606498 2278 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://172.31.18.181:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.18.181:6443: connect: connection refused
Apr 12 18:56:35.606724 kubelet[2278]: E0412 18:56:35.606710 2278 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.18.181:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.18.181:6443: connect: connection refused
Apr 12 18:56:35.608165 kubelet[2278]: I0412 18:56:35.608145 2278 policy_none.go:49] "None policy: Start"
Apr 12 18:56:35.610202 kubelet[2278]: I0412 18:56:35.610177 2278 memory_manager.go:170] "Starting memorymanager" policy="None"
Apr 12 18:56:35.610275 kubelet[2278]: I0412 18:56:35.610217 2278 state_mem.go:35] "Initializing new in-memory state store"
Apr 12 18:56:35.617740 systemd[1]: Created slice kubepods.slice.
Apr 12 18:56:35.624691 systemd[1]: Created slice kubepods-besteffort.slice.
Apr 12 18:56:35.633263 systemd[1]: Created slice kubepods-burstable.slice.
Apr 12 18:56:35.636591 kubelet[2278]: I0412 18:56:35.635563 2278 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Apr 12 18:56:35.636814 kubelet[2278]: I0412 18:56:35.636787 2278 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Apr 12 18:56:35.638295 kubelet[2278]: E0412 18:56:35.638266 2278 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-18-181\" not found"
Apr 12 18:56:35.657562 kubelet[2278]: I0412 18:56:35.657530 2278 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-18-181"
Apr 12 18:56:35.658052 kubelet[2278]: E0412 18:56:35.658028 2278 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.18.181:6443/api/v1/nodes\": dial tcp 172.31.18.181:6443: connect: connection refused" node="ip-172-31-18-181"
Apr 12 18:56:35.706501 kubelet[2278]: I0412 18:56:35.706455 2278 topology_manager.go:215] "Topology Admit Handler" podUID="8a5ee05f6a7f14d059dbe17480af69fe" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-18-181"
Apr 12 18:56:35.708842 kubelet[2278]: I0412 18:56:35.708671 2278 topology_manager.go:215] "Topology Admit Handler" podUID="74f3df310a1f0ad4515c3cb0e9d02cdb" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-18-181"
Apr 12 18:56:35.713411 kubelet[2278]: I0412 18:56:35.713389 2278 topology_manager.go:215] "Topology Admit Handler" podUID="8c5d675069714552766529bc48c8d4a7" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-18-181"
Apr 12 18:56:35.738882 systemd[1]: Created slice kubepods-burstable-pod8a5ee05f6a7f14d059dbe17480af69fe.slice.
Apr 12 18:56:35.759112 kubelet[2278]: I0412 18:56:35.758946 2278 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8a5ee05f6a7f14d059dbe17480af69fe-ca-certs\") pod \"kube-apiserver-ip-172-31-18-181\" (UID: \"8a5ee05f6a7f14d059dbe17480af69fe\") " pod="kube-system/kube-apiserver-ip-172-31-18-181"
Apr 12 18:56:35.759321 kubelet[2278]: I0412 18:56:35.759135 2278 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8c5d675069714552766529bc48c8d4a7-kubeconfig\") pod \"kube-scheduler-ip-172-31-18-181\" (UID: \"8c5d675069714552766529bc48c8d4a7\") " pod="kube-system/kube-scheduler-ip-172-31-18-181"
Apr 12 18:56:35.759321 kubelet[2278]: I0412 18:56:35.759212 2278 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/74f3df310a1f0ad4515c3cb0e9d02cdb-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-18-181\" (UID: \"74f3df310a1f0ad4515c3cb0e9d02cdb\") " pod="kube-system/kube-controller-manager-ip-172-31-18-181"
Apr 12 18:56:35.759321 kubelet[2278]: I0412 18:56:35.759247 2278 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/74f3df310a1f0ad4515c3cb0e9d02cdb-k8s-certs\") pod \"kube-controller-manager-ip-172-31-18-181\" (UID: \"74f3df310a1f0ad4515c3cb0e9d02cdb\") " pod="kube-system/kube-controller-manager-ip-172-31-18-181"
Apr 12 18:56:35.759321 kubelet[2278]: I0412 18:56:35.759279 2278 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/74f3df310a1f0ad4515c3cb0e9d02cdb-kubeconfig\") pod \"kube-controller-manager-ip-172-31-18-181\" (UID: \"74f3df310a1f0ad4515c3cb0e9d02cdb\") " pod="kube-system/kube-controller-manager-ip-172-31-18-181"
Apr 12 18:56:35.759321 kubelet[2278]: I0412 18:56:35.759315 2278 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/74f3df310a1f0ad4515c3cb0e9d02cdb-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-18-181\" (UID: \"74f3df310a1f0ad4515c3cb0e9d02cdb\") " pod="kube-system/kube-controller-manager-ip-172-31-18-181"
Apr 12 18:56:35.759552 kubelet[2278]: I0412 18:56:35.759345 2278 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8a5ee05f6a7f14d059dbe17480af69fe-k8s-certs\") pod \"kube-apiserver-ip-172-31-18-181\" (UID: \"8a5ee05f6a7f14d059dbe17480af69fe\") " pod="kube-system/kube-apiserver-ip-172-31-18-181"
Apr 12 18:56:35.759552 kubelet[2278]: I0412 18:56:35.759380 2278 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8a5ee05f6a7f14d059dbe17480af69fe-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-18-181\" (UID: \"8a5ee05f6a7f14d059dbe17480af69fe\") " pod="kube-system/kube-apiserver-ip-172-31-18-181"
Apr 12 18:56:35.759552 kubelet[2278]: I0412 18:56:35.759417 2278 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/74f3df310a1f0ad4515c3cb0e9d02cdb-ca-certs\") pod \"kube-controller-manager-ip-172-31-18-181\" (UID: \"74f3df310a1f0ad4515c3cb0e9d02cdb\") " pod="kube-system/kube-controller-manager-ip-172-31-18-181"
Apr 12 18:56:35.760735 kubelet[2278]: E0412 18:56:35.760706 2278 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.18.181:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-18-181?timeout=10s\": dial tcp 172.31.18.181:6443: connect: connection refused" interval="400ms"
Apr 12 18:56:35.770662 systemd[1]: Created slice kubepods-burstable-pod8c5d675069714552766529bc48c8d4a7.slice.
Apr 12 18:56:35.777382 systemd[1]: Created slice kubepods-burstable-pod74f3df310a1f0ad4515c3cb0e9d02cdb.slice.
Apr 12 18:56:35.860433 kubelet[2278]: I0412 18:56:35.860401 2278 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-18-181"
Apr 12 18:56:35.860825 kubelet[2278]: E0412 18:56:35.860800 2278 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.18.181:6443/api/v1/nodes\": dial tcp 172.31.18.181:6443: connect: connection refused" node="ip-172-31-18-181"
Apr 12 18:56:36.064056 env[1645]: time="2024-04-12T18:56:36.063929568Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-18-181,Uid:8a5ee05f6a7f14d059dbe17480af69fe,Namespace:kube-system,Attempt:0,}"
Apr 12 18:56:36.076777 env[1645]: time="2024-04-12T18:56:36.076736758Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-18-181,Uid:8c5d675069714552766529bc48c8d4a7,Namespace:kube-system,Attempt:0,}"
Apr 12 18:56:36.088637 env[1645]: time="2024-04-12T18:56:36.088515898Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-18-181,Uid:74f3df310a1f0ad4515c3cb0e9d02cdb,Namespace:kube-system,Attempt:0,}"
Apr 12 18:56:36.161654 kubelet[2278]: E0412 18:56:36.161619 2278 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.18.181:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-18-181?timeout=10s\": dial tcp 172.31.18.181:6443: connect: connection refused" interval="800ms"
Apr 12 18:56:36.262341 kubelet[2278]: I0412 18:56:36.262316 2278 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-18-181"
Apr 12 18:56:36.262807 kubelet[2278]: E0412 18:56:36.262781 2278 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.18.181:6443/api/v1/nodes\": dial tcp 172.31.18.181:6443: connect: connection refused" node="ip-172-31-18-181"
Apr 12 18:56:36.559014 kubelet[2278]: W0412 18:56:36.558950 2278 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://172.31.18.181:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.18.181:6443: connect: connection refused
Apr 12 18:56:36.559014 kubelet[2278]: E0412 18:56:36.559018 2278 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.18.181:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.18.181:6443: connect: connection refused
Apr 12 18:56:36.608212 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2154212627.mount: Deactivated successfully.
Apr 12 18:56:36.622711 env[1645]: time="2024-04-12T18:56:36.622660447Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Apr 12 18:56:36.625564 env[1645]: time="2024-04-12T18:56:36.625513265Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Apr 12 18:56:36.630597 env[1645]: time="2024-04-12T18:56:36.630529378Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Apr 12 18:56:36.632826 env[1645]: time="2024-04-12T18:56:36.632779840Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Apr 12 18:56:36.634508 env[1645]: time="2024-04-12T18:56:36.634399738Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Apr 12 18:56:36.635599 env[1645]: time="2024-04-12T18:56:36.635546113Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Apr 12 18:56:36.637470 env[1645]: time="2024-04-12T18:56:36.637441871Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Apr 12 18:56:36.640776 env[1645]: time="2024-04-12T18:56:36.640734896Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Apr 12 18:56:36.642734 env[1645]: time="2024-04-12T18:56:36.642689169Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Apr 12 18:56:36.645582 env[1645]: time="2024-04-12T18:56:36.645525945Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Apr 12 18:56:36.647194 env[1645]: time="2024-04-12T18:56:36.647145910Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Apr 12 18:56:36.651638 env[1645]: time="2024-04-12T18:56:36.651596138Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Apr 12 18:56:36.725044 env[1645]: time="2024-04-12T18:56:36.711601591Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 12 18:56:36.725044 env[1645]: time="2024-04-12T18:56:36.711665079Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 12 18:56:36.725044 env[1645]: time="2024-04-12T18:56:36.711681061Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 12 18:56:36.725044 env[1645]: time="2024-04-12T18:56:36.711949639Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/750958d55bfab78e9f63fff4455dd4e5066a51c576362a908696795853e8f5b3 pid=2317 runtime=io.containerd.runc.v2
Apr 12 18:56:36.734069 systemd[1]: Started cri-containerd-750958d55bfab78e9f63fff4455dd4e5066a51c576362a908696795853e8f5b3.scope.
Apr 12 18:56:36.769155 env[1645]: time="2024-04-12T18:56:36.769054573Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 12 18:56:36.769391 env[1645]: time="2024-04-12T18:56:36.769359267Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 12 18:56:36.769518 env[1645]: time="2024-04-12T18:56:36.769492020Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 12 18:56:36.769918 env[1645]: time="2024-04-12T18:56:36.769877996Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ae7b8121f6a67fb7dfe3eb27946d8c4767563226d582bb43cf2ae070c84bfce6 pid=2349 runtime=io.containerd.runc.v2
Apr 12 18:56:36.777863 env[1645]: time="2024-04-12T18:56:36.777740112Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 12 18:56:36.778066 env[1645]: time="2024-04-12T18:56:36.777841405Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 12 18:56:36.778066 env[1645]: time="2024-04-12T18:56:36.777857945Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 12 18:56:36.778214 env[1645]: time="2024-04-12T18:56:36.778123463Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/de0fbcced42a037b02e6d625f7dfb1114dbeadb226a08f5d102e924c78f8839d pid=2363 runtime=io.containerd.runc.v2
Apr 12 18:56:36.800232 systemd[1]: Started cri-containerd-ae7b8121f6a67fb7dfe3eb27946d8c4767563226d582bb43cf2ae070c84bfce6.scope.
Apr 12 18:56:36.835442 systemd[1]: Started cri-containerd-de0fbcced42a037b02e6d625f7dfb1114dbeadb226a08f5d102e924c78f8839d.scope.
Apr 12 18:56:36.855703 kubelet[2278]: W0412 18:56:36.855593 2278 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://172.31.18.181:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-18-181&limit=500&resourceVersion=0": dial tcp 172.31.18.181:6443: connect: connection refused
Apr 12 18:56:36.855703 kubelet[2278]: E0412 18:56:36.855670 2278 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.18.181:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-18-181&limit=500&resourceVersion=0": dial tcp 172.31.18.181:6443: connect: connection refused
Apr 12 18:56:36.866512 env[1645]: time="2024-04-12T18:56:36.866464298Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-18-181,Uid:8a5ee05f6a7f14d059dbe17480af69fe,Namespace:kube-system,Attempt:0,} returns sandbox id \"750958d55bfab78e9f63fff4455dd4e5066a51c576362a908696795853e8f5b3\""
Apr 12 18:56:36.874854 env[1645]: time="2024-04-12T18:56:36.874810101Z" level=info msg="CreateContainer within sandbox \"750958d55bfab78e9f63fff4455dd4e5066a51c576362a908696795853e8f5b3\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Apr 12 18:56:36.908360 env[1645]: time="2024-04-12T18:56:36.908316174Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-18-181,Uid:8c5d675069714552766529bc48c8d4a7,Namespace:kube-system,Attempt:0,} returns sandbox id \"ae7b8121f6a67fb7dfe3eb27946d8c4767563226d582bb43cf2ae070c84bfce6\""
Apr 12 18:56:36.910268 env[1645]: time="2024-04-12T18:56:36.910228560Z" level=info msg="CreateContainer within sandbox \"750958d55bfab78e9f63fff4455dd4e5066a51c576362a908696795853e8f5b3\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"42fe9fc32ce1670228faedc5765da862690e66693931497dced672fae20ad7d0\""
Apr 12 18:56:36.911829 env[1645]: time="2024-04-12T18:56:36.911773192Z" level=info msg="StartContainer for \"42fe9fc32ce1670228faedc5765da862690e66693931497dced672fae20ad7d0\""
Apr 12 18:56:36.915349 env[1645]: time="2024-04-12T18:56:36.915315955Z" level=info msg="CreateContainer within sandbox \"ae7b8121f6a67fb7dfe3eb27946d8c4767563226d582bb43cf2ae070c84bfce6\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Apr 12 18:56:36.945459 systemd[1]: Started cri-containerd-42fe9fc32ce1670228faedc5765da862690e66693931497dced672fae20ad7d0.scope.
Apr 12 18:56:36.960852 env[1645]: time="2024-04-12T18:56:36.960808612Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-18-181,Uid:74f3df310a1f0ad4515c3cb0e9d02cdb,Namespace:kube-system,Attempt:0,} returns sandbox id \"de0fbcced42a037b02e6d625f7dfb1114dbeadb226a08f5d102e924c78f8839d\""
Apr 12 18:56:36.962237 kubelet[2278]: E0412 18:56:36.962193 2278 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.18.181:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-18-181?timeout=10s\": dial tcp 172.31.18.181:6443: connect: connection refused" interval="1.6s"
Apr 12 18:56:36.962447 env[1645]: time="2024-04-12T18:56:36.961892794Z" level=info msg="CreateContainer within sandbox \"ae7b8121f6a67fb7dfe3eb27946d8c4767563226d582bb43cf2ae070c84bfce6\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"c98308af148ec7bd0f7b46129f187fe3c446b2a6919fc057276fb9569db7fff5\""
Apr 12 18:56:36.963184 env[1645]: time="2024-04-12T18:56:36.963155916Z" level=info msg="StartContainer for \"c98308af148ec7bd0f7b46129f187fe3c446b2a6919fc057276fb9569db7fff5\""
Apr 12 18:56:36.966868 env[1645]: time="2024-04-12T18:56:36.966830235Z" level=info msg="CreateContainer within sandbox \"de0fbcced42a037b02e6d625f7dfb1114dbeadb226a08f5d102e924c78f8839d\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Apr 12 18:56:36.990067 env[1645]: time="2024-04-12T18:56:36.990014889Z" level=info msg="CreateContainer within sandbox \"de0fbcced42a037b02e6d625f7dfb1114dbeadb226a08f5d102e924c78f8839d\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"4d220a0ef6909ded9966c32646eb4199f734e3a2db307b6833224db9cfb3baf6\""
Apr 12 18:56:36.990918 env[1645]: time="2024-04-12T18:56:36.990886757Z" level=info msg="StartContainer for \"4d220a0ef6909ded9966c32646eb4199f734e3a2db307b6833224db9cfb3baf6\""
Apr 12 18:56:37.006837 systemd[1]: Started cri-containerd-c98308af148ec7bd0f7b46129f187fe3c446b2a6919fc057276fb9569db7fff5.scope.
Apr 12 18:56:37.067233 kubelet[2278]: I0412 18:56:37.066388 2278 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-18-181"
Apr 12 18:56:37.067378 kubelet[2278]: E0412 18:56:37.067303 2278 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.18.181:6443/api/v1/nodes\": dial tcp 172.31.18.181:6443: connect: connection refused" node="ip-172-31-18-181"
Apr 12 18:56:37.067436 kubelet[2278]: W0412 18:56:37.067381 2278 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://172.31.18.181:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.18.181:6443: connect: connection refused
Apr 12 18:56:37.067436 kubelet[2278]: E0412 18:56:37.067413 2278 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.18.181:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.18.181:6443: connect: connection refused
Apr 12 18:56:37.067726 systemd[1]: Started cri-containerd-4d220a0ef6909ded9966c32646eb4199f734e3a2db307b6833224db9cfb3baf6.scope.
Apr 12 18:56:37.096780 env[1645]: time="2024-04-12T18:56:37.096652817Z" level=info msg="StartContainer for \"42fe9fc32ce1670228faedc5765da862690e66693931497dced672fae20ad7d0\" returns successfully" Apr 12 18:56:37.123548 kubelet[2278]: W0412 18:56:37.123466 2278 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://172.31.18.181:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.18.181:6443: connect: connection refused Apr 12 18:56:37.123548 kubelet[2278]: E0412 18:56:37.123515 2278 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.18.181:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.18.181:6443: connect: connection refused Apr 12 18:56:37.211036 env[1645]: time="2024-04-12T18:56:37.210973661Z" level=info msg="StartContainer for \"4d220a0ef6909ded9966c32646eb4199f734e3a2db307b6833224db9cfb3baf6\" returns successfully" Apr 12 18:56:37.212466 env[1645]: time="2024-04-12T18:56:37.212423162Z" level=info msg="StartContainer for \"c98308af148ec7bd0f7b46129f187fe3c446b2a6919fc057276fb9569db7fff5\" returns successfully" Apr 12 18:56:37.546839 kubelet[2278]: E0412 18:56:37.546809 2278 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.18.181:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.18.181:6443: connect: connection refused Apr 12 18:56:38.669564 kubelet[2278]: I0412 18:56:38.669540 2278 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-18-181" Apr 12 18:56:40.451548 kubelet[2278]: E0412 18:56:40.451510 2278 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-18-181\" not found" 
node="ip-172-31-18-181" Apr 12 18:56:40.546085 kubelet[2278]: I0412 18:56:40.546011 2278 apiserver.go:52] "Watching apiserver" Apr 12 18:56:40.556436 kubelet[2278]: I0412 18:56:40.556402 2278 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Apr 12 18:56:40.556791 kubelet[2278]: I0412 18:56:40.556773 2278 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-18-181" Apr 12 18:56:43.609694 systemd[1]: Reloading. Apr 12 18:56:43.773043 /usr/lib/systemd/system-generators/torcx-generator[2567]: time="2024-04-12T18:56:43Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.3 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.3 /var/lib/torcx/store]" Apr 12 18:56:43.773084 /usr/lib/systemd/system-generators/torcx-generator[2567]: time="2024-04-12T18:56:43Z" level=info msg="torcx already run" Apr 12 18:56:43.855603 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Apr 12 18:56:43.855628 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Apr 12 18:56:43.881363 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 12 18:56:44.110893 kubelet[2278]: I0412 18:56:44.110858 2278 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 12 18:56:44.111496 systemd[1]: Stopping kubelet.service... Apr 12 18:56:44.132263 systemd[1]: kubelet.service: Deactivated successfully. Apr 12 18:56:44.132700 systemd[1]: Stopped kubelet.service. 
Apr 12 18:56:44.136001 systemd[1]: Started kubelet.service. Apr 12 18:56:44.292269 kubelet[2616]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 12 18:56:44.293042 kubelet[2616]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Apr 12 18:56:44.293148 kubelet[2616]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 12 18:56:44.293360 kubelet[2616]: I0412 18:56:44.293322 2616 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 12 18:56:44.303043 kubelet[2616]: I0412 18:56:44.303014 2616 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Apr 12 18:56:44.303201 kubelet[2616]: I0412 18:56:44.303192 2616 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 12 18:56:44.303503 kubelet[2616]: I0412 18:56:44.303494 2616 server.go:919] "Client rotation is on, will bootstrap in background" Apr 12 18:56:44.305126 kubelet[2616]: I0412 18:56:44.305104 2616 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Apr 12 18:56:44.307741 kubelet[2616]: I0412 18:56:44.307719 2616 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 12 18:56:44.315943 kubelet[2616]: I0412 18:56:44.315912 2616 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Apr 12 18:56:44.316524 kubelet[2616]: I0412 18:56:44.316503 2616 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 12 18:56:44.317283 kubelet[2616]: I0412 18:56:44.317250 2616 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Apr 12 18:56:44.317475 kubelet[2616]: I0412 18:56:44.317455 2616 topology_manager.go:138] "Creating topology manager with none policy" Apr 12 18:56:44.317554 kubelet[2616]: I0412 18:56:44.317544 2616 container_manager_linux.go:301] "Creating device plugin manager" Apr 12 18:56:44.317722 kubelet[2616]: I0412 
18:56:44.317710 2616 state_mem.go:36] "Initialized new in-memory state store" Apr 12 18:56:44.318062 kubelet[2616]: I0412 18:56:44.318048 2616 kubelet.go:396] "Attempting to sync node with API server" Apr 12 18:56:44.318176 kubelet[2616]: I0412 18:56:44.318166 2616 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 12 18:56:44.318319 kubelet[2616]: I0412 18:56:44.318308 2616 kubelet.go:312] "Adding apiserver pod source" Apr 12 18:56:44.318414 kubelet[2616]: I0412 18:56:44.318405 2616 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 12 18:56:44.323502 kubelet[2616]: I0412 18:56:44.323479 2616 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Apr 12 18:56:44.324966 kubelet[2616]: I0412 18:56:44.324946 2616 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Apr 12 18:56:44.326927 sudo[2628]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Apr 12 18:56:44.327233 sudo[2628]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Apr 12 18:56:44.329686 kubelet[2616]: I0412 18:56:44.329667 2616 server.go:1256] "Started kubelet" Apr 12 18:56:44.349079 kubelet[2616]: I0412 18:56:44.349008 2616 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Apr 12 18:56:44.349244 kubelet[2616]: I0412 18:56:44.349150 2616 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 12 18:56:44.349596 kubelet[2616]: I0412 18:56:44.349532 2616 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 12 18:56:44.368053 kubelet[2616]: I0412 18:56:44.367170 2616 server.go:461] "Adding debug handlers to kubelet server" Apr 12 18:56:44.368427 kubelet[2616]: I0412 18:56:44.368409 2616 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 
12 18:56:44.397878 kubelet[2616]: I0412 18:56:44.397788 2616 volume_manager.go:291] "Starting Kubelet Volume Manager" Apr 12 18:56:44.408927 kubelet[2616]: I0412 18:56:44.397900 2616 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Apr 12 18:56:44.408927 kubelet[2616]: I0412 18:56:44.408686 2616 reconciler_new.go:29] "Reconciler: start to sync state" Apr 12 18:56:44.415672 kubelet[2616]: I0412 18:56:44.415225 2616 factory.go:221] Registration of the systemd container factory successfully Apr 12 18:56:44.415672 kubelet[2616]: I0412 18:56:44.415403 2616 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 12 18:56:44.417768 kubelet[2616]: E0412 18:56:44.417747 2616 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 12 18:56:44.418798 kubelet[2616]: I0412 18:56:44.418780 2616 factory.go:221] Registration of the containerd container factory successfully Apr 12 18:56:44.469610 kubelet[2616]: I0412 18:56:44.466772 2616 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Apr 12 18:56:44.477075 kubelet[2616]: I0412 18:56:44.475112 2616 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Apr 12 18:56:44.477290 kubelet[2616]: I0412 18:56:44.477271 2616 status_manager.go:217] "Starting to sync pod status with apiserver" Apr 12 18:56:44.478306 kubelet[2616]: I0412 18:56:44.478287 2616 kubelet.go:2329] "Starting kubelet main sync loop" Apr 12 18:56:44.478482 kubelet[2616]: E0412 18:56:44.478470 2616 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 12 18:56:44.520015 kubelet[2616]: I0412 18:56:44.519989 2616 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-18-181" Apr 12 18:56:44.540288 kubelet[2616]: I0412 18:56:44.540254 2616 kubelet_node_status.go:112] "Node was previously registered" node="ip-172-31-18-181" Apr 12 18:56:44.540446 kubelet[2616]: I0412 18:56:44.540343 2616 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-18-181" Apr 12 18:56:44.576385 kubelet[2616]: I0412 18:56:44.574659 2616 cpu_manager.go:214] "Starting CPU manager" policy="none" Apr 12 18:56:44.576385 kubelet[2616]: I0412 18:56:44.574685 2616 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Apr 12 18:56:44.576385 kubelet[2616]: I0412 18:56:44.574713 2616 state_mem.go:36] "Initialized new in-memory state store" Apr 12 18:56:44.576385 kubelet[2616]: I0412 18:56:44.574946 2616 state_mem.go:88] "Updated default CPUSet" cpuSet="" Apr 12 18:56:44.576385 kubelet[2616]: I0412 18:56:44.574973 2616 state_mem.go:96] "Updated CPUSet assignments" assignments={} Apr 12 18:56:44.576385 kubelet[2616]: I0412 18:56:44.574988 2616 policy_none.go:49] "None policy: Start" Apr 12 18:56:44.576385 kubelet[2616]: I0412 18:56:44.576118 2616 memory_manager.go:170] "Starting memorymanager" policy="None" Apr 12 18:56:44.576385 kubelet[2616]: I0412 18:56:44.576144 2616 state_mem.go:35] "Initializing new in-memory state store" Apr 12 18:56:44.576934 kubelet[2616]: I0412 18:56:44.576417 2616 state_mem.go:75] "Updated 
machine memory state" Apr 12 18:56:44.582416 kubelet[2616]: E0412 18:56:44.578793 2616 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Apr 12 18:56:44.584685 kubelet[2616]: I0412 18:56:44.584196 2616 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Apr 12 18:56:44.587380 kubelet[2616]: I0412 18:56:44.587354 2616 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 12 18:56:44.653653 update_engine[1637]: I0412 18:56:44.653524 1637 update_attempter.cc:509] Updating boot flags... Apr 12 18:56:44.784939 kubelet[2616]: I0412 18:56:44.784899 2616 topology_manager.go:215] "Topology Admit Handler" podUID="8c5d675069714552766529bc48c8d4a7" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-18-181" Apr 12 18:56:44.785091 kubelet[2616]: I0412 18:56:44.785047 2616 topology_manager.go:215] "Topology Admit Handler" podUID="8a5ee05f6a7f14d059dbe17480af69fe" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-18-181" Apr 12 18:56:44.785148 kubelet[2616]: I0412 18:56:44.785125 2616 topology_manager.go:215] "Topology Admit Handler" podUID="74f3df310a1f0ad4515c3cb0e9d02cdb" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-18-181" Apr 12 18:56:44.814727 kubelet[2616]: I0412 18:56:44.814320 2616 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8a5ee05f6a7f14d059dbe17480af69fe-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-18-181\" (UID: \"8a5ee05f6a7f14d059dbe17480af69fe\") " pod="kube-system/kube-apiserver-ip-172-31-18-181" Apr 12 18:56:44.814727 kubelet[2616]: I0412 18:56:44.814373 2616 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/74f3df310a1f0ad4515c3cb0e9d02cdb-ca-certs\") pod \"kube-controller-manager-ip-172-31-18-181\" (UID: \"74f3df310a1f0ad4515c3cb0e9d02cdb\") " pod="kube-system/kube-controller-manager-ip-172-31-18-181" Apr 12 18:56:44.814727 kubelet[2616]: I0412 18:56:44.814406 2616 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/74f3df310a1f0ad4515c3cb0e9d02cdb-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-18-181\" (UID: \"74f3df310a1f0ad4515c3cb0e9d02cdb\") " pod="kube-system/kube-controller-manager-ip-172-31-18-181" Apr 12 18:56:44.815913 kubelet[2616]: I0412 18:56:44.814435 2616 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/74f3df310a1f0ad4515c3cb0e9d02cdb-k8s-certs\") pod \"kube-controller-manager-ip-172-31-18-181\" (UID: \"74f3df310a1f0ad4515c3cb0e9d02cdb\") " pod="kube-system/kube-controller-manager-ip-172-31-18-181" Apr 12 18:56:44.815913 kubelet[2616]: I0412 18:56:44.815419 2616 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8c5d675069714552766529bc48c8d4a7-kubeconfig\") pod \"kube-scheduler-ip-172-31-18-181\" (UID: \"8c5d675069714552766529bc48c8d4a7\") " pod="kube-system/kube-scheduler-ip-172-31-18-181" Apr 12 18:56:44.815913 kubelet[2616]: I0412 18:56:44.815520 2616 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8a5ee05f6a7f14d059dbe17480af69fe-ca-certs\") pod \"kube-apiserver-ip-172-31-18-181\" (UID: \"8a5ee05f6a7f14d059dbe17480af69fe\") " pod="kube-system/kube-apiserver-ip-172-31-18-181" Apr 12 18:56:44.817682 kubelet[2616]: I0412 18:56:44.816681 2616 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8a5ee05f6a7f14d059dbe17480af69fe-k8s-certs\") pod \"kube-apiserver-ip-172-31-18-181\" (UID: \"8a5ee05f6a7f14d059dbe17480af69fe\") " pod="kube-system/kube-apiserver-ip-172-31-18-181" Apr 12 18:56:44.817682 kubelet[2616]: I0412 18:56:44.816744 2616 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/74f3df310a1f0ad4515c3cb0e9d02cdb-kubeconfig\") pod \"kube-controller-manager-ip-172-31-18-181\" (UID: \"74f3df310a1f0ad4515c3cb0e9d02cdb\") " pod="kube-system/kube-controller-manager-ip-172-31-18-181" Apr 12 18:56:44.817682 kubelet[2616]: I0412 18:56:44.816786 2616 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/74f3df310a1f0ad4515c3cb0e9d02cdb-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-18-181\" (UID: \"74f3df310a1f0ad4515c3cb0e9d02cdb\") " pod="kube-system/kube-controller-manager-ip-172-31-18-181" Apr 12 18:56:45.342042 kubelet[2616]: I0412 18:56:45.342004 2616 apiserver.go:52] "Watching apiserver" Apr 12 18:56:45.408383 kubelet[2616]: I0412 18:56:45.408344 2616 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Apr 12 18:56:45.479285 sudo[2628]: pam_unix(sudo:session): session closed for user root Apr 12 18:56:45.576314 kubelet[2616]: I0412 18:56:45.576282 2616 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-18-181" podStartSLOduration=1.5761014599999998 podStartE2EDuration="1.57610146s" podCreationTimestamp="2024-04-12 18:56:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-04-12 18:56:45.562172381 +0000 UTC m=+1.416802999" watchObservedRunningTime="2024-04-12 
18:56:45.57610146 +0000 UTC m=+1.430732067" Apr 12 18:56:45.589988 kubelet[2616]: I0412 18:56:45.589942 2616 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-18-181" podStartSLOduration=1.589890686 podStartE2EDuration="1.589890686s" podCreationTimestamp="2024-04-12 18:56:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-04-12 18:56:45.577125694 +0000 UTC m=+1.431756308" watchObservedRunningTime="2024-04-12 18:56:45.589890686 +0000 UTC m=+1.444521303" Apr 12 18:56:45.613235 kubelet[2616]: I0412 18:56:45.613122 2616 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-18-181" podStartSLOduration=1.612849755 podStartE2EDuration="1.612849755s" podCreationTimestamp="2024-04-12 18:56:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-04-12 18:56:45.590775011 +0000 UTC m=+1.445405628" watchObservedRunningTime="2024-04-12 18:56:45.612849755 +0000 UTC m=+1.467480375" Apr 12 18:56:47.297456 sudo[1872]: pam_unix(sudo:session): session closed for user root Apr 12 18:56:47.320779 sshd[1869]: pam_unix(sshd:session): session closed for user core Apr 12 18:56:47.325093 systemd-logind[1636]: Session 5 logged out. Waiting for processes to exit. Apr 12 18:56:47.325342 systemd[1]: sshd@4-172.31.18.181:22-147.75.109.163:55716.service: Deactivated successfully. Apr 12 18:56:47.326522 systemd[1]: session-5.scope: Deactivated successfully. Apr 12 18:56:47.326722 systemd[1]: session-5.scope: Consumed 4.549s CPU time. Apr 12 18:56:47.327667 systemd-logind[1636]: Removed session 5. 
Apr 12 18:56:54.097405 amazon-ssm-agent[1623]: 2024-04-12 18:56:54 INFO [MessagingDeliveryService] [Association] Schedule manager refreshed with 0 associations, 0 new associations associated Apr 12 18:56:56.258718 kubelet[2616]: I0412 18:56:56.258690 2616 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Apr 12 18:56:56.261563 env[1645]: time="2024-04-12T18:56:56.261509330Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Apr 12 18:56:56.265943 kubelet[2616]: I0412 18:56:56.265921 2616 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Apr 12 18:56:57.244052 kubelet[2616]: I0412 18:56:57.244008 2616 topology_manager.go:215] "Topology Admit Handler" podUID="3ea48d00-dae2-491c-8e54-adcc87ea9bef" podNamespace="kube-system" podName="cilium-srslp" Apr 12 18:56:57.244637 kubelet[2616]: I0412 18:56:57.244289 2616 topology_manager.go:215] "Topology Admit Handler" podUID="396ea03b-d880-4fc2-bd7e-210a1f997b2e" podNamespace="kube-system" podName="kube-proxy-d64r8" Apr 12 18:56:57.254311 systemd[1]: Created slice kubepods-besteffort-pod396ea03b_d880_4fc2_bd7e_210a1f997b2e.slice. Apr 12 18:56:57.265490 systemd[1]: Created slice kubepods-burstable-pod3ea48d00_dae2_491c_8e54_adcc87ea9bef.slice. 
Apr 12 18:56:57.321787 kubelet[2616]: I0412 18:56:57.321750 2616 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3ea48d00-dae2-491c-8e54-adcc87ea9bef-lib-modules\") pod \"cilium-srslp\" (UID: \"3ea48d00-dae2-491c-8e54-adcc87ea9bef\") " pod="kube-system/cilium-srslp" Apr 12 18:56:57.322613 kubelet[2616]: I0412 18:56:57.321811 2616 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3ea48d00-dae2-491c-8e54-adcc87ea9bef-bpf-maps\") pod \"cilium-srslp\" (UID: \"3ea48d00-dae2-491c-8e54-adcc87ea9bef\") " pod="kube-system/cilium-srslp" Apr 12 18:56:57.322613 kubelet[2616]: I0412 18:56:57.321844 2616 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3ea48d00-dae2-491c-8e54-adcc87ea9bef-cilium-run\") pod \"cilium-srslp\" (UID: \"3ea48d00-dae2-491c-8e54-adcc87ea9bef\") " pod="kube-system/cilium-srslp" Apr 12 18:56:57.322613 kubelet[2616]: I0412 18:56:57.321872 2616 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3ea48d00-dae2-491c-8e54-adcc87ea9bef-host-proc-sys-net\") pod \"cilium-srslp\" (UID: \"3ea48d00-dae2-491c-8e54-adcc87ea9bef\") " pod="kube-system/cilium-srslp" Apr 12 18:56:57.322613 kubelet[2616]: I0412 18:56:57.321915 2616 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3ea48d00-dae2-491c-8e54-adcc87ea9bef-host-proc-sys-kernel\") pod \"cilium-srslp\" (UID: \"3ea48d00-dae2-491c-8e54-adcc87ea9bef\") " pod="kube-system/cilium-srslp" Apr 12 18:56:57.322613 kubelet[2616]: I0412 18:56:57.321962 2616 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3ea48d00-dae2-491c-8e54-adcc87ea9bef-cilium-cgroup\") pod \"cilium-srslp\" (UID: \"3ea48d00-dae2-491c-8e54-adcc87ea9bef\") " pod="kube-system/cilium-srslp" Apr 12 18:56:57.322613 kubelet[2616]: I0412 18:56:57.321990 2616 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3ea48d00-dae2-491c-8e54-adcc87ea9bef-xtables-lock\") pod \"cilium-srslp\" (UID: \"3ea48d00-dae2-491c-8e54-adcc87ea9bef\") " pod="kube-system/cilium-srslp" Apr 12 18:56:57.322891 kubelet[2616]: I0412 18:56:57.322020 2616 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3ea48d00-dae2-491c-8e54-adcc87ea9bef-clustermesh-secrets\") pod \"cilium-srslp\" (UID: \"3ea48d00-dae2-491c-8e54-adcc87ea9bef\") " pod="kube-system/cilium-srslp" Apr 12 18:56:57.322891 kubelet[2616]: I0412 18:56:57.322160 2616 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d9k78\" (UniqueName: \"kubernetes.io/projected/396ea03b-d880-4fc2-bd7e-210a1f997b2e-kube-api-access-d9k78\") pod \"kube-proxy-d64r8\" (UID: \"396ea03b-d880-4fc2-bd7e-210a1f997b2e\") " pod="kube-system/kube-proxy-d64r8" Apr 12 18:56:57.322891 kubelet[2616]: I0412 18:56:57.322438 2616 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3ea48d00-dae2-491c-8e54-adcc87ea9bef-etc-cni-netd\") pod \"cilium-srslp\" (UID: \"3ea48d00-dae2-491c-8e54-adcc87ea9bef\") " pod="kube-system/cilium-srslp" Apr 12 18:56:57.322891 kubelet[2616]: I0412 18:56:57.322481 2616 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/3ea48d00-dae2-491c-8e54-adcc87ea9bef-cilium-config-path\") pod \"cilium-srslp\" (UID: \"3ea48d00-dae2-491c-8e54-adcc87ea9bef\") " pod="kube-system/cilium-srslp" Apr 12 18:56:57.322891 kubelet[2616]: I0412 18:56:57.322509 2616 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xjp6g\" (UniqueName: \"kubernetes.io/projected/3ea48d00-dae2-491c-8e54-adcc87ea9bef-kube-api-access-xjp6g\") pod \"cilium-srslp\" (UID: \"3ea48d00-dae2-491c-8e54-adcc87ea9bef\") " pod="kube-system/cilium-srslp" Apr 12 18:56:57.323099 kubelet[2616]: I0412 18:56:57.322535 2616 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/396ea03b-d880-4fc2-bd7e-210a1f997b2e-kube-proxy\") pod \"kube-proxy-d64r8\" (UID: \"396ea03b-d880-4fc2-bd7e-210a1f997b2e\") " pod="kube-system/kube-proxy-d64r8" Apr 12 18:56:57.323099 kubelet[2616]: I0412 18:56:57.322562 2616 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3ea48d00-dae2-491c-8e54-adcc87ea9bef-hostproc\") pod \"cilium-srslp\" (UID: \"3ea48d00-dae2-491c-8e54-adcc87ea9bef\") " pod="kube-system/cilium-srslp" Apr 12 18:56:57.323099 kubelet[2616]: I0412 18:56:57.322605 2616 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/396ea03b-d880-4fc2-bd7e-210a1f997b2e-xtables-lock\") pod \"kube-proxy-d64r8\" (UID: \"396ea03b-d880-4fc2-bd7e-210a1f997b2e\") " pod="kube-system/kube-proxy-d64r8" Apr 12 18:56:57.323099 kubelet[2616]: I0412 18:56:57.322633 2616 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3ea48d00-dae2-491c-8e54-adcc87ea9bef-cni-path\") pod \"cilium-srslp\" (UID: 
\"3ea48d00-dae2-491c-8e54-adcc87ea9bef\") " pod="kube-system/cilium-srslp" Apr 12 18:56:57.323099 kubelet[2616]: I0412 18:56:57.322664 2616 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3ea48d00-dae2-491c-8e54-adcc87ea9bef-hubble-tls\") pod \"cilium-srslp\" (UID: \"3ea48d00-dae2-491c-8e54-adcc87ea9bef\") " pod="kube-system/cilium-srslp" Apr 12 18:56:57.323099 kubelet[2616]: I0412 18:56:57.322697 2616 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/396ea03b-d880-4fc2-bd7e-210a1f997b2e-lib-modules\") pod \"kube-proxy-d64r8\" (UID: \"396ea03b-d880-4fc2-bd7e-210a1f997b2e\") " pod="kube-system/kube-proxy-d64r8" Apr 12 18:56:57.493662 kubelet[2616]: I0412 18:56:57.493556 2616 topology_manager.go:215] "Topology Admit Handler" podUID="20519993-eed6-4b35-a793-45e8c3bf50e1" podNamespace="kube-system" podName="cilium-operator-5cc964979-v6vxr" Apr 12 18:56:57.508977 systemd[1]: Created slice kubepods-besteffort-pod20519993_eed6_4b35_a793_45e8c3bf50e1.slice. Apr 12 18:56:57.564409 env[1645]: time="2024-04-12T18:56:57.564031572Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-d64r8,Uid:396ea03b-d880-4fc2-bd7e-210a1f997b2e,Namespace:kube-system,Attempt:0,}" Apr 12 18:56:57.569915 env[1645]: time="2024-04-12T18:56:57.569875940Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-srslp,Uid:3ea48d00-dae2-491c-8e54-adcc87ea9bef,Namespace:kube-system,Attempt:0,}" Apr 12 18:56:57.592729 env[1645]: time="2024-04-12T18:56:57.592643943Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 12 18:56:57.592729 env[1645]: time="2024-04-12T18:56:57.592687080Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 12 18:56:57.592729 env[1645]: time="2024-04-12T18:56:57.592702168Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 12 18:56:57.593392 env[1645]: time="2024-04-12T18:56:57.593202338Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f0ff9b620198f7dd10588febbcdb00d2a5a0c0ea10fdcf2b82fed5fc06d80212 pid=2791 runtime=io.containerd.runc.v2 Apr 12 18:56:57.610311 systemd[1]: Started cri-containerd-f0ff9b620198f7dd10588febbcdb00d2a5a0c0ea10fdcf2b82fed5fc06d80212.scope. Apr 12 18:56:57.641113 kubelet[2616]: I0412 18:56:57.641079 2616 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/20519993-eed6-4b35-a793-45e8c3bf50e1-cilium-config-path\") pod \"cilium-operator-5cc964979-v6vxr\" (UID: \"20519993-eed6-4b35-a793-45e8c3bf50e1\") " pod="kube-system/cilium-operator-5cc964979-v6vxr" Apr 12 18:56:57.641433 kubelet[2616]: I0412 18:56:57.641418 2616 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g7j7w\" (UniqueName: \"kubernetes.io/projected/20519993-eed6-4b35-a793-45e8c3bf50e1-kube-api-access-g7j7w\") pod \"cilium-operator-5cc964979-v6vxr\" (UID: \"20519993-eed6-4b35-a793-45e8c3bf50e1\") " pod="kube-system/cilium-operator-5cc964979-v6vxr" Apr 12 18:56:57.651983 env[1645]: time="2024-04-12T18:56:57.651892456Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 12 18:56:57.653359 env[1645]: time="2024-04-12T18:56:57.653311875Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 12 18:56:57.653649 env[1645]: time="2024-04-12T18:56:57.653534062Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 12 18:56:57.654652 env[1645]: time="2024-04-12T18:56:57.654536908Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0bfdd264b27337351370860065e288881ba6c161a127495f9db8ba9457e40127 pid=2819 runtime=io.containerd.runc.v2 Apr 12 18:56:57.682357 systemd[1]: Started cri-containerd-0bfdd264b27337351370860065e288881ba6c161a127495f9db8ba9457e40127.scope. Apr 12 18:56:57.706337 env[1645]: time="2024-04-12T18:56:57.706292506Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-d64r8,Uid:396ea03b-d880-4fc2-bd7e-210a1f997b2e,Namespace:kube-system,Attempt:0,} returns sandbox id \"f0ff9b620198f7dd10588febbcdb00d2a5a0c0ea10fdcf2b82fed5fc06d80212\"" Apr 12 18:56:57.710041 env[1645]: time="2024-04-12T18:56:57.710000318Z" level=info msg="CreateContainer within sandbox \"f0ff9b620198f7dd10588febbcdb00d2a5a0c0ea10fdcf2b82fed5fc06d80212\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Apr 12 18:56:57.728410 env[1645]: time="2024-04-12T18:56:57.728357442Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-srslp,Uid:3ea48d00-dae2-491c-8e54-adcc87ea9bef,Namespace:kube-system,Attempt:0,} returns sandbox id \"0bfdd264b27337351370860065e288881ba6c161a127495f9db8ba9457e40127\"" Apr 12 18:56:57.730788 env[1645]: time="2024-04-12T18:56:57.730485954Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Apr 12 18:56:57.760838 env[1645]: time="2024-04-12T18:56:57.760745393Z" level=info msg="CreateContainer within sandbox \"f0ff9b620198f7dd10588febbcdb00d2a5a0c0ea10fdcf2b82fed5fc06d80212\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns 
container id \"cc7fb3ea3e6444e92dc8e1df720cf01ac60d43452757d30ac804dd0ae2bf255f\"" Apr 12 18:56:57.762550 env[1645]: time="2024-04-12T18:56:57.762515342Z" level=info msg="StartContainer for \"cc7fb3ea3e6444e92dc8e1df720cf01ac60d43452757d30ac804dd0ae2bf255f\"" Apr 12 18:56:57.785287 systemd[1]: Started cri-containerd-cc7fb3ea3e6444e92dc8e1df720cf01ac60d43452757d30ac804dd0ae2bf255f.scope. Apr 12 18:56:57.815354 env[1645]: time="2024-04-12T18:56:57.815252333Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-v6vxr,Uid:20519993-eed6-4b35-a793-45e8c3bf50e1,Namespace:kube-system,Attempt:0,}" Apr 12 18:56:57.837827 env[1645]: time="2024-04-12T18:56:57.837771838Z" level=info msg="StartContainer for \"cc7fb3ea3e6444e92dc8e1df720cf01ac60d43452757d30ac804dd0ae2bf255f\" returns successfully" Apr 12 18:56:57.860720 env[1645]: time="2024-04-12T18:56:57.860600016Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 12 18:56:57.860720 env[1645]: time="2024-04-12T18:56:57.860655416Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 12 18:56:57.860720 env[1645]: time="2024-04-12T18:56:57.860669929Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 12 18:56:57.861151 env[1645]: time="2024-04-12T18:56:57.861094828Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6feabfa9e7652ce5c63f1fe22cd48687773fa616f8fd5e431434a28858f43f50 pid=2915 runtime=io.containerd.runc.v2 Apr 12 18:56:57.888462 systemd[1]: Started cri-containerd-6feabfa9e7652ce5c63f1fe22cd48687773fa616f8fd5e431434a28858f43f50.scope. 
Apr 12 18:56:57.955361 env[1645]: time="2024-04-12T18:56:57.955315223Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-v6vxr,Uid:20519993-eed6-4b35-a793-45e8c3bf50e1,Namespace:kube-system,Attempt:0,} returns sandbox id \"6feabfa9e7652ce5c63f1fe22cd48687773fa616f8fd5e431434a28858f43f50\"" Apr 12 18:56:58.596725 kubelet[2616]: I0412 18:56:58.596615 2616 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-d64r8" podStartSLOduration=1.596519526 podStartE2EDuration="1.596519526s" podCreationTimestamp="2024-04-12 18:56:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-04-12 18:56:58.596116689 +0000 UTC m=+14.450747308" watchObservedRunningTime="2024-04-12 18:56:58.596519526 +0000 UTC m=+14.451150144" Apr 12 18:57:08.324507 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1395373482.mount: Deactivated successfully. Apr 12 18:57:11.419191 amazon-ssm-agent[1623]: 2024-04-12 18:57:11 INFO [HealthCheck] HealthCheck reporting agent health. 
Apr 12 18:57:12.578301 env[1645]: time="2024-04-12T18:57:12.578243908Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:57:12.592286 env[1645]: time="2024-04-12T18:57:12.592200370Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:57:12.595156 env[1645]: time="2024-04-12T18:57:12.595108757Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:57:12.596146 env[1645]: time="2024-04-12T18:57:12.596101706Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Apr 12 18:57:12.599537 env[1645]: time="2024-04-12T18:57:12.597972724Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Apr 12 18:57:12.601555 env[1645]: time="2024-04-12T18:57:12.601396932Z" level=info msg="CreateContainer within sandbox \"0bfdd264b27337351370860065e288881ba6c161a127495f9db8ba9457e40127\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Apr 12 18:57:12.623523 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount943329892.mount: Deactivated successfully. 
Apr 12 18:57:12.635358 env[1645]: time="2024-04-12T18:57:12.635301140Z" level=info msg="CreateContainer within sandbox \"0bfdd264b27337351370860065e288881ba6c161a127495f9db8ba9457e40127\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"e70f39d71e05eb4c961f3f14f9a1bc6abbb889224093cefe364e16becc862da6\"" Apr 12 18:57:12.636719 env[1645]: time="2024-04-12T18:57:12.636685723Z" level=info msg="StartContainer for \"e70f39d71e05eb4c961f3f14f9a1bc6abbb889224093cefe364e16becc862da6\"" Apr 12 18:57:12.675095 systemd[1]: Started cri-containerd-e70f39d71e05eb4c961f3f14f9a1bc6abbb889224093cefe364e16becc862da6.scope. Apr 12 18:57:12.739123 env[1645]: time="2024-04-12T18:57:12.738960307Z" level=info msg="StartContainer for \"e70f39d71e05eb4c961f3f14f9a1bc6abbb889224093cefe364e16becc862da6\" returns successfully" Apr 12 18:57:12.763183 systemd[1]: cri-containerd-e70f39d71e05eb4c961f3f14f9a1bc6abbb889224093cefe364e16becc862da6.scope: Deactivated successfully. Apr 12 18:57:12.951630 env[1645]: time="2024-04-12T18:57:12.950318828Z" level=info msg="shim disconnected" id=e70f39d71e05eb4c961f3f14f9a1bc6abbb889224093cefe364e16becc862da6 Apr 12 18:57:12.951630 env[1645]: time="2024-04-12T18:57:12.950473392Z" level=warning msg="cleaning up after shim disconnected" id=e70f39d71e05eb4c961f3f14f9a1bc6abbb889224093cefe364e16becc862da6 namespace=k8s.io Apr 12 18:57:12.951630 env[1645]: time="2024-04-12T18:57:12.950491682Z" level=info msg="cleaning up dead shim" Apr 12 18:57:12.964034 env[1645]: time="2024-04-12T18:57:12.963984439Z" level=warning msg="cleanup warnings time=\"2024-04-12T18:57:12Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3123 runtime=io.containerd.runc.v2\n" Apr 12 18:57:13.622190 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e70f39d71e05eb4c961f3f14f9a1bc6abbb889224093cefe364e16becc862da6-rootfs.mount: Deactivated successfully. 
Apr 12 18:57:13.656733 env[1645]: time="2024-04-12T18:57:13.656678017Z" level=info msg="CreateContainer within sandbox \"0bfdd264b27337351370860065e288881ba6c161a127495f9db8ba9457e40127\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Apr 12 18:57:13.698207 env[1645]: time="2024-04-12T18:57:13.698124796Z" level=info msg="CreateContainer within sandbox \"0bfdd264b27337351370860065e288881ba6c161a127495f9db8ba9457e40127\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"76d1287249c592292e57653093babe3e4067e8885c125e2cf3e26b0d544e2270\"" Apr 12 18:57:13.701874 env[1645]: time="2024-04-12T18:57:13.701827837Z" level=info msg="StartContainer for \"76d1287249c592292e57653093babe3e4067e8885c125e2cf3e26b0d544e2270\"" Apr 12 18:57:13.736561 systemd[1]: Started cri-containerd-76d1287249c592292e57653093babe3e4067e8885c125e2cf3e26b0d544e2270.scope. Apr 12 18:57:13.798394 env[1645]: time="2024-04-12T18:57:13.798343548Z" level=info msg="StartContainer for \"76d1287249c592292e57653093babe3e4067e8885c125e2cf3e26b0d544e2270\" returns successfully" Apr 12 18:57:13.827207 systemd[1]: systemd-sysctl.service: Deactivated successfully. Apr 12 18:57:13.828174 systemd[1]: Stopped systemd-sysctl.service. Apr 12 18:57:13.829058 systemd[1]: Stopping systemd-sysctl.service... Apr 12 18:57:13.831498 systemd[1]: Starting systemd-sysctl.service... Apr 12 18:57:13.836553 systemd[1]: cri-containerd-76d1287249c592292e57653093babe3e4067e8885c125e2cf3e26b0d544e2270.scope: Deactivated successfully. Apr 12 18:57:13.868899 systemd[1]: Finished systemd-sysctl.service. 
Apr 12 18:57:13.900507 env[1645]: time="2024-04-12T18:57:13.899642271Z" level=info msg="shim disconnected" id=76d1287249c592292e57653093babe3e4067e8885c125e2cf3e26b0d544e2270 Apr 12 18:57:13.900507 env[1645]: time="2024-04-12T18:57:13.899693541Z" level=warning msg="cleaning up after shim disconnected" id=76d1287249c592292e57653093babe3e4067e8885c125e2cf3e26b0d544e2270 namespace=k8s.io Apr 12 18:57:13.900507 env[1645]: time="2024-04-12T18:57:13.899707511Z" level=info msg="cleaning up dead shim" Apr 12 18:57:13.913015 env[1645]: time="2024-04-12T18:57:13.912963440Z" level=warning msg="cleanup warnings time=\"2024-04-12T18:57:13Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3186 runtime=io.containerd.runc.v2\n" Apr 12 18:57:14.620242 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-76d1287249c592292e57653093babe3e4067e8885c125e2cf3e26b0d544e2270-rootfs.mount: Deactivated successfully. Apr 12 18:57:14.693265 env[1645]: time="2024-04-12T18:57:14.693217095Z" level=info msg="CreateContainer within sandbox \"0bfdd264b27337351370860065e288881ba6c161a127495f9db8ba9457e40127\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Apr 12 18:57:14.733460 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4115881513.mount: Deactivated successfully. Apr 12 18:57:14.756824 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount17030708.mount: Deactivated successfully. 
Apr 12 18:57:14.772254 env[1645]: time="2024-04-12T18:57:14.772175201Z" level=info msg="CreateContainer within sandbox \"0bfdd264b27337351370860065e288881ba6c161a127495f9db8ba9457e40127\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"69b359725e718c8e30c3c46e19860baef3bc6bc815e03371ddd21ba99473dcd3\"" Apr 12 18:57:14.773445 env[1645]: time="2024-04-12T18:57:14.773410520Z" level=info msg="StartContainer for \"69b359725e718c8e30c3c46e19860baef3bc6bc815e03371ddd21ba99473dcd3\"" Apr 12 18:57:14.820133 systemd[1]: Started cri-containerd-69b359725e718c8e30c3c46e19860baef3bc6bc815e03371ddd21ba99473dcd3.scope. Apr 12 18:57:14.927782 env[1645]: time="2024-04-12T18:57:14.927557105Z" level=info msg="StartContainer for \"69b359725e718c8e30c3c46e19860baef3bc6bc815e03371ddd21ba99473dcd3\" returns successfully" Apr 12 18:57:14.932777 systemd[1]: cri-containerd-69b359725e718c8e30c3c46e19860baef3bc6bc815e03371ddd21ba99473dcd3.scope: Deactivated successfully. Apr 12 18:57:15.047257 env[1645]: time="2024-04-12T18:57:15.047210687Z" level=info msg="shim disconnected" id=69b359725e718c8e30c3c46e19860baef3bc6bc815e03371ddd21ba99473dcd3 Apr 12 18:57:15.047567 env[1645]: time="2024-04-12T18:57:15.047542724Z" level=warning msg="cleaning up after shim disconnected" id=69b359725e718c8e30c3c46e19860baef3bc6bc815e03371ddd21ba99473dcd3 namespace=k8s.io Apr 12 18:57:15.047674 env[1645]: time="2024-04-12T18:57:15.047657003Z" level=info msg="cleaning up dead shim" Apr 12 18:57:15.085199 env[1645]: time="2024-04-12T18:57:15.085159616Z" level=warning msg="cleanup warnings time=\"2024-04-12T18:57:15Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3243 runtime=io.containerd.runc.v2\n" Apr 12 18:57:15.677323 env[1645]: time="2024-04-12T18:57:15.677140821Z" level=info msg="CreateContainer within sandbox \"0bfdd264b27337351370860065e288881ba6c161a127495f9db8ba9457e40127\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Apr 12 
18:57:15.678172 env[1645]: time="2024-04-12T18:57:15.677988975Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:57:15.682684 env[1645]: time="2024-04-12T18:57:15.682648340Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:57:15.687030 env[1645]: time="2024-04-12T18:57:15.686986082Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Apr 12 18:57:15.687943 env[1645]: time="2024-04-12T18:57:15.687911662Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Apr 12 18:57:15.715007 env[1645]: time="2024-04-12T18:57:15.714959170Z" level=info msg="CreateContainer within sandbox \"6feabfa9e7652ce5c63f1fe22cd48687773fa616f8fd5e431434a28858f43f50\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Apr 12 18:57:15.776030 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount389489704.mount: Deactivated successfully. Apr 12 18:57:15.784249 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount782633078.mount: Deactivated successfully. 
Apr 12 18:57:15.807931 env[1645]: time="2024-04-12T18:57:15.807857019Z" level=info msg="CreateContainer within sandbox \"0bfdd264b27337351370860065e288881ba6c161a127495f9db8ba9457e40127\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"f28a80d7f6fc7ff3af49a0ca615c482f2cb15bc690bdc26871464e3f40a4d529\"" Apr 12 18:57:15.809109 env[1645]: time="2024-04-12T18:57:15.809069352Z" level=info msg="CreateContainer within sandbox \"6feabfa9e7652ce5c63f1fe22cd48687773fa616f8fd5e431434a28858f43f50\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"2542e6e27a979b22949f01e3c1579dfccb0be1b19bfb543f9e28f8801a475d90\"" Apr 12 18:57:15.810327 env[1645]: time="2024-04-12T18:57:15.809816434Z" level=info msg="StartContainer for \"f28a80d7f6fc7ff3af49a0ca615c482f2cb15bc690bdc26871464e3f40a4d529\"" Apr 12 18:57:15.811661 env[1645]: time="2024-04-12T18:57:15.811522066Z" level=info msg="StartContainer for \"2542e6e27a979b22949f01e3c1579dfccb0be1b19bfb543f9e28f8801a475d90\"" Apr 12 18:57:15.847964 systemd[1]: Started cri-containerd-2542e6e27a979b22949f01e3c1579dfccb0be1b19bfb543f9e28f8801a475d90.scope. Apr 12 18:57:15.857167 systemd[1]: Started cri-containerd-f28a80d7f6fc7ff3af49a0ca615c482f2cb15bc690bdc26871464e3f40a4d529.scope. Apr 12 18:57:15.926897 systemd[1]: cri-containerd-f28a80d7f6fc7ff3af49a0ca615c482f2cb15bc690bdc26871464e3f40a4d529.scope: Deactivated successfully. 
Apr 12 18:57:15.932018 env[1645]: time="2024-04-12T18:57:15.931907372Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3ea48d00_dae2_491c_8e54_adcc87ea9bef.slice/cri-containerd-f28a80d7f6fc7ff3af49a0ca615c482f2cb15bc690bdc26871464e3f40a4d529.scope/memory.events\": no such file or directory" Apr 12 18:57:15.947184 env[1645]: time="2024-04-12T18:57:15.943161592Z" level=info msg="StartContainer for \"f28a80d7f6fc7ff3af49a0ca615c482f2cb15bc690bdc26871464e3f40a4d529\" returns successfully" Apr 12 18:57:15.970932 env[1645]: time="2024-04-12T18:57:15.970868964Z" level=info msg="StartContainer for \"2542e6e27a979b22949f01e3c1579dfccb0be1b19bfb543f9e28f8801a475d90\" returns successfully" Apr 12 18:57:16.001460 env[1645]: time="2024-04-12T18:57:16.001408530Z" level=info msg="shim disconnected" id=f28a80d7f6fc7ff3af49a0ca615c482f2cb15bc690bdc26871464e3f40a4d529 Apr 12 18:57:16.001460 env[1645]: time="2024-04-12T18:57:16.001460559Z" level=warning msg="cleaning up after shim disconnected" id=f28a80d7f6fc7ff3af49a0ca615c482f2cb15bc690bdc26871464e3f40a4d529 namespace=k8s.io Apr 12 18:57:16.001460 env[1645]: time="2024-04-12T18:57:16.001473349Z" level=info msg="cleaning up dead shim" Apr 12 18:57:16.020676 env[1645]: time="2024-04-12T18:57:16.020622119Z" level=warning msg="cleanup warnings time=\"2024-04-12T18:57:16Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3335 runtime=io.containerd.runc.v2\n" Apr 12 18:57:16.694088 env[1645]: time="2024-04-12T18:57:16.694041123Z" level=info msg="CreateContainer within sandbox \"0bfdd264b27337351370860065e288881ba6c161a127495f9db8ba9457e40127\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Apr 12 18:57:16.739021 env[1645]: time="2024-04-12T18:57:16.738964745Z" level=info msg="CreateContainer within sandbox \"0bfdd264b27337351370860065e288881ba6c161a127495f9db8ba9457e40127\" for 
&ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"f880fc7945711134e8c23f066053e2982eeacbabd1afebcb427a92804d5d8366\"" Apr 12 18:57:16.739772 env[1645]: time="2024-04-12T18:57:16.739737632Z" level=info msg="StartContainer for \"f880fc7945711134e8c23f066053e2982eeacbabd1afebcb427a92804d5d8366\"" Apr 12 18:57:16.826468 systemd[1]: Started cri-containerd-f880fc7945711134e8c23f066053e2982eeacbabd1afebcb427a92804d5d8366.scope. Apr 12 18:57:16.927219 env[1645]: time="2024-04-12T18:57:16.927167816Z" level=info msg="StartContainer for \"f880fc7945711134e8c23f066053e2982eeacbabd1afebcb427a92804d5d8366\" returns successfully" Apr 12 18:57:16.964401 kubelet[2616]: I0412 18:57:16.963370 2616 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-5cc964979-v6vxr" podStartSLOduration=2.228462046 podStartE2EDuration="19.963298928s" podCreationTimestamp="2024-04-12 18:56:57 +0000 UTC" firstStartedPulling="2024-04-12 18:56:57.956747439 +0000 UTC m=+13.811378034" lastFinishedPulling="2024-04-12 18:57:15.69158431 +0000 UTC m=+31.546214916" observedRunningTime="2024-04-12 18:57:16.837834426 +0000 UTC m=+32.692465042" watchObservedRunningTime="2024-04-12 18:57:16.963298928 +0000 UTC m=+32.817929545" Apr 12 18:57:17.374042 kubelet[2616]: I0412 18:57:17.372377 2616 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Apr 12 18:57:17.529629 kubelet[2616]: I0412 18:57:17.529562 2616 topology_manager.go:215] "Topology Admit Handler" podUID="c087baf4-492f-4b02-9df5-a6e609fa7bbb" podNamespace="kube-system" podName="coredns-76f75df574-dbnj5" Apr 12 18:57:17.538481 systemd[1]: Created slice kubepods-burstable-podc087baf4_492f_4b02_9df5_a6e609fa7bbb.slice. 
Apr 12 18:57:17.560695 kubelet[2616]: W0412 18:57:17.560650 2616 reflector.go:539] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:ip-172-31-18-181" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-18-181' and this object Apr 12 18:57:17.561052 kubelet[2616]: E0412 18:57:17.561032 2616 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:ip-172-31-18-181" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-18-181' and this object Apr 12 18:57:17.565586 kubelet[2616]: I0412 18:57:17.565540 2616 topology_manager.go:215] "Topology Admit Handler" podUID="8745d0c5-ba88-4e93-bc92-6552de23a7c2" podNamespace="kube-system" podName="coredns-76f75df574-kd49x" Apr 12 18:57:17.572515 systemd[1]: Created slice kubepods-burstable-pod8745d0c5_ba88_4e93_bc92_6552de23a7c2.slice. 
Apr 12 18:57:17.587543 kubelet[2616]: I0412 18:57:17.587499 2616 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ghjt8\" (UniqueName: \"kubernetes.io/projected/c087baf4-492f-4b02-9df5-a6e609fa7bbb-kube-api-access-ghjt8\") pod \"coredns-76f75df574-dbnj5\" (UID: \"c087baf4-492f-4b02-9df5-a6e609fa7bbb\") " pod="kube-system/coredns-76f75df574-dbnj5" Apr 12 18:57:17.587758 kubelet[2616]: I0412 18:57:17.587596 2616 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c087baf4-492f-4b02-9df5-a6e609fa7bbb-config-volume\") pod \"coredns-76f75df574-dbnj5\" (UID: \"c087baf4-492f-4b02-9df5-a6e609fa7bbb\") " pod="kube-system/coredns-76f75df574-dbnj5" Apr 12 18:57:17.627585 systemd[1]: run-containerd-runc-k8s.io-f880fc7945711134e8c23f066053e2982eeacbabd1afebcb427a92804d5d8366-runc.2IEMsH.mount: Deactivated successfully. Apr 12 18:57:17.687908 kubelet[2616]: I0412 18:57:17.687884 2616 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xnxfk\" (UniqueName: \"kubernetes.io/projected/8745d0c5-ba88-4e93-bc92-6552de23a7c2-kube-api-access-xnxfk\") pod \"coredns-76f75df574-kd49x\" (UID: \"8745d0c5-ba88-4e93-bc92-6552de23a7c2\") " pod="kube-system/coredns-76f75df574-kd49x" Apr 12 18:57:17.688726 kubelet[2616]: I0412 18:57:17.688708 2616 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8745d0c5-ba88-4e93-bc92-6552de23a7c2-config-volume\") pod \"coredns-76f75df574-kd49x\" (UID: \"8745d0c5-ba88-4e93-bc92-6552de23a7c2\") " pod="kube-system/coredns-76f75df574-kd49x" Apr 12 18:57:18.689507 kubelet[2616]: E0412 18:57:18.689457 2616 configmap.go:199] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition Apr 12 
18:57:18.690131 kubelet[2616]: E0412 18:57:18.689589 2616 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c087baf4-492f-4b02-9df5-a6e609fa7bbb-config-volume podName:c087baf4-492f-4b02-9df5-a6e609fa7bbb nodeName:}" failed. No retries permitted until 2024-04-12 18:57:19.189552981 +0000 UTC m=+35.044183589 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/c087baf4-492f-4b02-9df5-a6e609fa7bbb-config-volume") pod "coredns-76f75df574-dbnj5" (UID: "c087baf4-492f-4b02-9df5-a6e609fa7bbb") : failed to sync configmap cache: timed out waiting for the condition Apr 12 18:57:18.798436 kubelet[2616]: E0412 18:57:18.798390 2616 configmap.go:199] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition Apr 12 18:57:18.798644 kubelet[2616]: E0412 18:57:18.798489 2616 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8745d0c5-ba88-4e93-bc92-6552de23a7c2-config-volume podName:8745d0c5-ba88-4e93-bc92-6552de23a7c2 nodeName:}" failed. No retries permitted until 2024-04-12 18:57:19.298467242 +0000 UTC m=+35.153097854 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/8745d0c5-ba88-4e93-bc92-6552de23a7c2-config-volume") pod "coredns-76f75df574-kd49x" (UID: "8745d0c5-ba88-4e93-bc92-6552de23a7c2") : failed to sync configmap cache: timed out waiting for the condition Apr 12 18:57:19.345387 env[1645]: time="2024-04-12T18:57:19.345342010Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-dbnj5,Uid:c087baf4-492f-4b02-9df5-a6e609fa7bbb,Namespace:kube-system,Attempt:0,}" Apr 12 18:57:19.382111 env[1645]: time="2024-04-12T18:57:19.382056069Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-kd49x,Uid:8745d0c5-ba88-4e93-bc92-6552de23a7c2,Namespace:kube-system,Attempt:0,}" Apr 12 18:57:20.233423 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Apr 12 18:57:20.232947 (udev-worker)[3435]: Network interface NamePolicy= disabled on kernel command line. Apr 12 18:57:20.234110 (udev-worker)[3495]: Network interface NamePolicy= disabled on kernel command line. Apr 12 18:57:20.264410 systemd-networkd[1459]: cilium_host: Link UP Apr 12 18:57:20.267145 systemd-networkd[1459]: cilium_net: Link UP Apr 12 18:57:20.267207 systemd-networkd[1459]: cilium_net: Gained carrier Apr 12 18:57:20.267947 systemd-networkd[1459]: cilium_host: Gained carrier Apr 12 18:57:20.268449 systemd-networkd[1459]: cilium_host: Gained IPv6LL Apr 12 18:57:20.505093 systemd-networkd[1459]: cilium_vxlan: Link UP Apr 12 18:57:20.505103 systemd-networkd[1459]: cilium_vxlan: Gained carrier Apr 12 18:57:20.626357 systemd-networkd[1459]: cilium_net: Gained IPv6LL Apr 12 18:57:20.982403 kernel: NET: Registered PF_ALG protocol family Apr 12 18:57:22.169642 systemd-networkd[1459]: lxc_health: Link UP Apr 12 18:57:22.170351 (udev-worker)[3501]: Network interface NamePolicy= disabled on kernel command line. 
Apr 12 18:57:22.192902 systemd-networkd[1459]: lxc_health: Gained carrier Apr 12 18:57:22.193781 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Apr 12 18:57:22.511256 systemd-networkd[1459]: cilium_vxlan: Gained IPv6LL Apr 12 18:57:22.530903 systemd-networkd[1459]: lxc110519cfb57d: Link UP Apr 12 18:57:22.547621 kernel: eth0: renamed from tmp81993 Apr 12 18:57:22.550343 (udev-worker)[3500]: Network interface NamePolicy= disabled on kernel command line. Apr 12 18:57:22.558180 systemd-networkd[1459]: lxcec88fafdaf45: Link UP Apr 12 18:57:22.566051 systemd-networkd[1459]: lxc110519cfb57d: Gained carrier Apr 12 18:57:22.566594 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc110519cfb57d: link becomes ready Apr 12 18:57:22.577644 kernel: eth0: renamed from tmpd63cb Apr 12 18:57:22.577765 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcec88fafdaf45: link becomes ready Apr 12 18:57:22.576015 systemd-networkd[1459]: lxcec88fafdaf45: Gained carrier Apr 12 18:57:23.607165 kubelet[2616]: I0412 18:57:23.607043 2616 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-srslp" podStartSLOduration=11.740148689 podStartE2EDuration="26.606988294s" podCreationTimestamp="2024-04-12 18:56:57 +0000 UTC" firstStartedPulling="2024-04-12 18:56:57.729837845 +0000 UTC m=+13.584468443" lastFinishedPulling="2024-04-12 18:57:12.596677293 +0000 UTC m=+28.451308048" observedRunningTime="2024-04-12 18:57:17.902363305 +0000 UTC m=+33.756993922" watchObservedRunningTime="2024-04-12 18:57:23.606988294 +0000 UTC m=+39.461618908" Apr 12 18:57:23.697822 systemd-networkd[1459]: lxcec88fafdaf45: Gained IPv6LL Apr 12 18:57:23.889885 systemd-networkd[1459]: lxc_health: Gained IPv6LL Apr 12 18:57:24.273787 systemd-networkd[1459]: lxc110519cfb57d: Gained IPv6LL Apr 12 18:57:29.699153 env[1645]: time="2024-04-12T18:57:29.699001498Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 12 18:57:29.699153 env[1645]: time="2024-04-12T18:57:29.699107862Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 12 18:57:29.699751 env[1645]: time="2024-04-12T18:57:29.699311751Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 12 18:57:29.699918 env[1645]: time="2024-04-12T18:57:29.699873212Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d63cbfda66d4c98d51e640f14061fb4740e2fef463ff47fd8f01cf3d72a6486e pid=3878 runtime=io.containerd.runc.v2 Apr 12 18:57:29.719939 env[1645]: time="2024-04-12T18:57:29.719671365Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 12 18:57:29.720126 env[1645]: time="2024-04-12T18:57:29.719969761Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 12 18:57:29.720126 env[1645]: time="2024-04-12T18:57:29.720007265Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 12 18:57:29.720494 env[1645]: time="2024-04-12T18:57:29.720438089Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8199384ea2ee372af839f12cab746ea7c5c4b9d54bbf1a5c0dab1e638aa6d46a pid=3880 runtime=io.containerd.runc.v2 Apr 12 18:57:29.790893 systemd[1]: Started cri-containerd-8199384ea2ee372af839f12cab746ea7c5c4b9d54bbf1a5c0dab1e638aa6d46a.scope. Apr 12 18:57:29.802467 systemd[1]: Started cri-containerd-d63cbfda66d4c98d51e640f14061fb4740e2fef463ff47fd8f01cf3d72a6486e.scope. 
Apr 12 18:57:29.928413 env[1645]: time="2024-04-12T18:57:29.928363054Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-dbnj5,Uid:c087baf4-492f-4b02-9df5-a6e609fa7bbb,Namespace:kube-system,Attempt:0,} returns sandbox id \"d63cbfda66d4c98d51e640f14061fb4740e2fef463ff47fd8f01cf3d72a6486e\"" Apr 12 18:57:29.938967 env[1645]: time="2024-04-12T18:57:29.937078084Z" level=info msg="CreateContainer within sandbox \"d63cbfda66d4c98d51e640f14061fb4740e2fef463ff47fd8f01cf3d72a6486e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 12 18:57:29.968633 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2446714166.mount: Deactivated successfully. Apr 12 18:57:29.969120 env[1645]: time="2024-04-12T18:57:29.969070784Z" level=info msg="CreateContainer within sandbox \"d63cbfda66d4c98d51e640f14061fb4740e2fef463ff47fd8f01cf3d72a6486e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"68749159fecad29dbfe30f0b481597ade515e40df6130fc93f621eb450ce93c7\"" Apr 12 18:57:29.970714 env[1645]: time="2024-04-12T18:57:29.970602214Z" level=info msg="StartContainer for \"68749159fecad29dbfe30f0b481597ade515e40df6130fc93f621eb450ce93c7\"" Apr 12 18:57:30.022825 systemd[1]: Started cri-containerd-68749159fecad29dbfe30f0b481597ade515e40df6130fc93f621eb450ce93c7.scope. 
Apr 12 18:57:30.027463 env[1645]: time="2024-04-12T18:57:30.027405961Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-kd49x,Uid:8745d0c5-ba88-4e93-bc92-6552de23a7c2,Namespace:kube-system,Attempt:0,} returns sandbox id \"8199384ea2ee372af839f12cab746ea7c5c4b9d54bbf1a5c0dab1e638aa6d46a\"" Apr 12 18:57:30.033034 env[1645]: time="2024-04-12T18:57:30.031678979Z" level=info msg="CreateContainer within sandbox \"8199384ea2ee372af839f12cab746ea7c5c4b9d54bbf1a5c0dab1e638aa6d46a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 12 18:57:30.073016 env[1645]: time="2024-04-12T18:57:30.072958461Z" level=info msg="CreateContainer within sandbox \"8199384ea2ee372af839f12cab746ea7c5c4b9d54bbf1a5c0dab1e638aa6d46a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7c89a3d1c570f2162826360472ad22ab2e8b6e9f47e7200f7afe556e8eded7f4\"" Apr 12 18:57:30.075201 env[1645]: time="2024-04-12T18:57:30.075163630Z" level=info msg="StartContainer for \"7c89a3d1c570f2162826360472ad22ab2e8b6e9f47e7200f7afe556e8eded7f4\"" Apr 12 18:57:30.133092 systemd[1]: Started cri-containerd-7c89a3d1c570f2162826360472ad22ab2e8b6e9f47e7200f7afe556e8eded7f4.scope. 
Apr 12 18:57:30.165852 env[1645]: time="2024-04-12T18:57:30.165794895Z" level=info msg="StartContainer for \"68749159fecad29dbfe30f0b481597ade515e40df6130fc93f621eb450ce93c7\" returns successfully" Apr 12 18:57:30.227388 env[1645]: time="2024-04-12T18:57:30.227344953Z" level=info msg="StartContainer for \"7c89a3d1c570f2162826360472ad22ab2e8b6e9f47e7200f7afe556e8eded7f4\" returns successfully" Apr 12 18:57:30.847066 kubelet[2616]: I0412 18:57:30.847029 2616 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-kd49x" podStartSLOduration=33.846766171 podStartE2EDuration="33.846766171s" podCreationTimestamp="2024-04-12 18:56:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-04-12 18:57:30.809392861 +0000 UTC m=+46.664023480" watchObservedRunningTime="2024-04-12 18:57:30.846766171 +0000 UTC m=+46.701396787" Apr 12 18:57:31.815528 kubelet[2616]: I0412 18:57:31.815491 2616 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-dbnj5" podStartSLOduration=34.815444603 podStartE2EDuration="34.815444603s" podCreationTimestamp="2024-04-12 18:56:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-04-12 18:57:30.848280316 +0000 UTC m=+46.702910934" watchObservedRunningTime="2024-04-12 18:57:31.815444603 +0000 UTC m=+47.670075279" Apr 12 18:57:36.501098 systemd[1]: Started sshd@5-172.31.18.181:22-147.75.109.163:59142.service. Apr 12 18:57:36.743600 sshd[4033]: Accepted publickey for core from 147.75.109.163 port 59142 ssh2: RSA SHA256:+N1xisw2c2FaZUjSYyTG/z1AiN+MoHtibeEcHRhPKVY Apr 12 18:57:36.748278 sshd[4033]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 18:57:36.763377 systemd[1]: Started session-6.scope. 
Apr 12 18:57:36.764394 systemd-logind[1636]: New session 6 of user core. Apr 12 18:57:37.114616 sshd[4033]: pam_unix(sshd:session): session closed for user core Apr 12 18:57:37.118957 systemd[1]: sshd@5-172.31.18.181:22-147.75.109.163:59142.service: Deactivated successfully. Apr 12 18:57:37.119908 systemd[1]: session-6.scope: Deactivated successfully. Apr 12 18:57:37.120880 systemd-logind[1636]: Session 6 logged out. Waiting for processes to exit. Apr 12 18:57:37.121984 systemd-logind[1636]: Removed session 6. Apr 12 18:57:42.143099 systemd[1]: Started sshd@6-172.31.18.181:22-147.75.109.163:40784.service. Apr 12 18:57:42.317963 sshd[4047]: Accepted publickey for core from 147.75.109.163 port 40784 ssh2: RSA SHA256:+N1xisw2c2FaZUjSYyTG/z1AiN+MoHtibeEcHRhPKVY Apr 12 18:57:42.320066 sshd[4047]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 18:57:42.327150 systemd[1]: Started session-7.scope. Apr 12 18:57:42.328178 systemd-logind[1636]: New session 7 of user core. Apr 12 18:57:42.585234 sshd[4047]: pam_unix(sshd:session): session closed for user core Apr 12 18:57:42.589307 systemd-logind[1636]: Session 7 logged out. Waiting for processes to exit. Apr 12 18:57:42.589515 systemd[1]: sshd@6-172.31.18.181:22-147.75.109.163:40784.service: Deactivated successfully. Apr 12 18:57:42.590670 systemd[1]: session-7.scope: Deactivated successfully. Apr 12 18:57:42.591932 systemd-logind[1636]: Removed session 7. Apr 12 18:57:47.612869 systemd[1]: Started sshd@7-172.31.18.181:22-147.75.109.163:57004.service. Apr 12 18:57:47.786381 sshd[4062]: Accepted publickey for core from 147.75.109.163 port 57004 ssh2: RSA SHA256:+N1xisw2c2FaZUjSYyTG/z1AiN+MoHtibeEcHRhPKVY Apr 12 18:57:47.788256 sshd[4062]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 18:57:47.794389 systemd-logind[1636]: New session 8 of user core. Apr 12 18:57:47.795593 systemd[1]: Started session-8.scope. 
Apr 12 18:57:48.026866 sshd[4062]: pam_unix(sshd:session): session closed for user core Apr 12 18:57:48.030997 systemd[1]: sshd@7-172.31.18.181:22-147.75.109.163:57004.service: Deactivated successfully. Apr 12 18:57:48.032657 systemd[1]: session-8.scope: Deactivated successfully. Apr 12 18:57:48.033999 systemd-logind[1636]: Session 8 logged out. Waiting for processes to exit. Apr 12 18:57:48.035348 systemd-logind[1636]: Removed session 8. Apr 12 18:57:53.063775 systemd[1]: Started sshd@8-172.31.18.181:22-147.75.109.163:57014.service. Apr 12 18:57:53.259886 sshd[4075]: Accepted publickey for core from 147.75.109.163 port 57014 ssh2: RSA SHA256:+N1xisw2c2FaZUjSYyTG/z1AiN+MoHtibeEcHRhPKVY Apr 12 18:57:53.261299 sshd[4075]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 18:57:53.267852 systemd[1]: Started session-9.scope. Apr 12 18:57:53.268545 systemd-logind[1636]: New session 9 of user core. Apr 12 18:57:53.478793 sshd[4075]: pam_unix(sshd:session): session closed for user core Apr 12 18:57:53.483209 systemd[1]: sshd@8-172.31.18.181:22-147.75.109.163:57014.service: Deactivated successfully. Apr 12 18:57:53.484636 systemd[1]: session-9.scope: Deactivated successfully. Apr 12 18:57:53.486814 systemd-logind[1636]: Session 9 logged out. Waiting for processes to exit. Apr 12 18:57:53.489459 systemd-logind[1636]: Removed session 9. Apr 12 18:57:58.505047 systemd[1]: Started sshd@9-172.31.18.181:22-147.75.109.163:59198.service. Apr 12 18:57:58.690536 sshd[4091]: Accepted publickey for core from 147.75.109.163 port 59198 ssh2: RSA SHA256:+N1xisw2c2FaZUjSYyTG/z1AiN+MoHtibeEcHRhPKVY Apr 12 18:57:58.692376 sshd[4091]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 18:57:58.701205 systemd[1]: Started session-10.scope. Apr 12 18:57:58.702084 systemd-logind[1636]: New session 10 of user core. 
Apr 12 18:57:58.923011 sshd[4091]: pam_unix(sshd:session): session closed for user core Apr 12 18:57:58.930703 systemd[1]: sshd@9-172.31.18.181:22-147.75.109.163:59198.service: Deactivated successfully. Apr 12 18:57:58.931652 systemd[1]: session-10.scope: Deactivated successfully. Apr 12 18:57:58.932391 systemd-logind[1636]: Session 10 logged out. Waiting for processes to exit. Apr 12 18:57:58.933360 systemd-logind[1636]: Removed session 10. Apr 12 18:57:58.949905 systemd[1]: Started sshd@10-172.31.18.181:22-147.75.109.163:59206.service. Apr 12 18:57:59.121560 sshd[4104]: Accepted publickey for core from 147.75.109.163 port 59206 ssh2: RSA SHA256:+N1xisw2c2FaZUjSYyTG/z1AiN+MoHtibeEcHRhPKVY Apr 12 18:57:59.125225 sshd[4104]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 18:57:59.134658 systemd-logind[1636]: New session 11 of user core. Apr 12 18:57:59.135056 systemd[1]: Started session-11.scope. Apr 12 18:57:59.434118 sshd[4104]: pam_unix(sshd:session): session closed for user core Apr 12 18:57:59.447926 systemd[1]: sshd@10-172.31.18.181:22-147.75.109.163:59206.service: Deactivated successfully. Apr 12 18:57:59.451057 systemd[1]: session-11.scope: Deactivated successfully. Apr 12 18:57:59.451800 systemd-logind[1636]: Session 11 logged out. Waiting for processes to exit. Apr 12 18:57:59.471380 systemd[1]: Started sshd@11-172.31.18.181:22-147.75.109.163:59216.service. Apr 12 18:57:59.475493 systemd-logind[1636]: Removed session 11. Apr 12 18:57:59.694502 sshd[4114]: Accepted publickey for core from 147.75.109.163 port 59216 ssh2: RSA SHA256:+N1xisw2c2FaZUjSYyTG/z1AiN+MoHtibeEcHRhPKVY Apr 12 18:57:59.688172 sshd[4114]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 18:57:59.705150 systemd[1]: Started session-12.scope. Apr 12 18:57:59.706066 systemd-logind[1636]: New session 12 of user core. 
Apr 12 18:57:59.941164 sshd[4114]: pam_unix(sshd:session): session closed for user core Apr 12 18:57:59.952404 systemd[1]: sshd@11-172.31.18.181:22-147.75.109.163:59216.service: Deactivated successfully. Apr 12 18:57:59.954293 systemd[1]: session-12.scope: Deactivated successfully. Apr 12 18:57:59.959371 systemd-logind[1636]: Session 12 logged out. Waiting for processes to exit. Apr 12 18:57:59.963488 systemd-logind[1636]: Removed session 12. Apr 12 18:58:04.974927 systemd[1]: Started sshd@12-172.31.18.181:22-147.75.109.163:59228.service. Apr 12 18:58:05.157293 sshd[4128]: Accepted publickey for core from 147.75.109.163 port 59228 ssh2: RSA SHA256:+N1xisw2c2FaZUjSYyTG/z1AiN+MoHtibeEcHRhPKVY Apr 12 18:58:05.158786 sshd[4128]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 18:58:05.164445 systemd[1]: Started session-13.scope. Apr 12 18:58:05.164978 systemd-logind[1636]: New session 13 of user core. Apr 12 18:58:05.413860 sshd[4128]: pam_unix(sshd:session): session closed for user core Apr 12 18:58:05.419374 systemd-logind[1636]: Session 13 logged out. Waiting for processes to exit. Apr 12 18:58:05.419848 systemd[1]: sshd@12-172.31.18.181:22-147.75.109.163:59228.service: Deactivated successfully. Apr 12 18:58:05.420988 systemd[1]: session-13.scope: Deactivated successfully. Apr 12 18:58:05.422939 systemd-logind[1636]: Removed session 13. Apr 12 18:58:10.452272 systemd[1]: Started sshd@13-172.31.18.181:22-147.75.109.163:51712.service. Apr 12 18:58:10.657566 sshd[4140]: Accepted publickey for core from 147.75.109.163 port 51712 ssh2: RSA SHA256:+N1xisw2c2FaZUjSYyTG/z1AiN+MoHtibeEcHRhPKVY Apr 12 18:58:10.659346 sshd[4140]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 18:58:10.665426 systemd[1]: Started session-14.scope. Apr 12 18:58:10.666385 systemd-logind[1636]: New session 14 of user core. 
Apr 12 18:58:10.916661 sshd[4140]: pam_unix(sshd:session): session closed for user core Apr 12 18:58:10.920967 systemd-logind[1636]: Session 14 logged out. Waiting for processes to exit. Apr 12 18:58:10.921191 systemd[1]: sshd@13-172.31.18.181:22-147.75.109.163:51712.service: Deactivated successfully. Apr 12 18:58:10.922210 systemd[1]: session-14.scope: Deactivated successfully. Apr 12 18:58:10.923433 systemd-logind[1636]: Removed session 14. Apr 12 18:58:15.945722 systemd[1]: Started sshd@14-172.31.18.181:22-147.75.109.163:51724.service. Apr 12 18:58:16.136052 sshd[4152]: Accepted publickey for core from 147.75.109.163 port 51724 ssh2: RSA SHA256:+N1xisw2c2FaZUjSYyTG/z1AiN+MoHtibeEcHRhPKVY Apr 12 18:58:16.138271 sshd[4152]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 18:58:16.146510 systemd-logind[1636]: New session 15 of user core. Apr 12 18:58:16.147588 systemd[1]: Started session-15.scope. Apr 12 18:58:16.364310 sshd[4152]: pam_unix(sshd:session): session closed for user core Apr 12 18:58:16.368384 systemd-logind[1636]: Session 15 logged out. Waiting for processes to exit. Apr 12 18:58:16.368969 systemd[1]: sshd@14-172.31.18.181:22-147.75.109.163:51724.service: Deactivated successfully. Apr 12 18:58:16.370210 systemd[1]: session-15.scope: Deactivated successfully. Apr 12 18:58:16.372032 systemd-logind[1636]: Removed session 15. Apr 12 18:58:16.402646 systemd[1]: Started sshd@15-172.31.18.181:22-147.75.109.163:51732.service. Apr 12 18:58:16.615203 sshd[4164]: Accepted publickey for core from 147.75.109.163 port 51732 ssh2: RSA SHA256:+N1xisw2c2FaZUjSYyTG/z1AiN+MoHtibeEcHRhPKVY Apr 12 18:58:16.617111 sshd[4164]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 18:58:16.623091 systemd[1]: Started session-16.scope. Apr 12 18:58:16.624962 systemd-logind[1636]: New session 16 of user core. 
Apr 12 18:58:17.352555 sshd[4164]: pam_unix(sshd:session): session closed for user core Apr 12 18:58:17.357241 systemd[1]: sshd@15-172.31.18.181:22-147.75.109.163:51732.service: Deactivated successfully. Apr 12 18:58:17.358366 systemd[1]: session-16.scope: Deactivated successfully. Apr 12 18:58:17.359871 systemd-logind[1636]: Session 16 logged out. Waiting for processes to exit. Apr 12 18:58:17.361305 systemd-logind[1636]: Removed session 16. Apr 12 18:58:17.380523 systemd[1]: Started sshd@16-172.31.18.181:22-147.75.109.163:41426.service. Apr 12 18:58:17.581135 sshd[4174]: Accepted publickey for core from 147.75.109.163 port 41426 ssh2: RSA SHA256:+N1xisw2c2FaZUjSYyTG/z1AiN+MoHtibeEcHRhPKVY Apr 12 18:58:17.582929 sshd[4174]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 18:58:17.601321 systemd[1]: Started session-17.scope. Apr 12 18:58:17.602355 systemd-logind[1636]: New session 17 of user core. Apr 12 18:58:19.989488 sshd[4174]: pam_unix(sshd:session): session closed for user core Apr 12 18:58:19.998909 systemd[1]: sshd@16-172.31.18.181:22-147.75.109.163:41426.service: Deactivated successfully. Apr 12 18:58:19.999941 systemd[1]: session-17.scope: Deactivated successfully. Apr 12 18:58:20.000767 systemd-logind[1636]: Session 17 logged out. Waiting for processes to exit. Apr 12 18:58:20.001902 systemd-logind[1636]: Removed session 17. Apr 12 18:58:20.016111 systemd[1]: Started sshd@17-172.31.18.181:22-147.75.109.163:41436.service. Apr 12 18:58:20.186180 sshd[4193]: Accepted publickey for core from 147.75.109.163 port 41436 ssh2: RSA SHA256:+N1xisw2c2FaZUjSYyTG/z1AiN+MoHtibeEcHRhPKVY Apr 12 18:58:20.187956 sshd[4193]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 18:58:20.197629 systemd[1]: Started session-18.scope. Apr 12 18:58:20.198410 systemd-logind[1636]: New session 18 of user core. 
Apr 12 18:58:20.673918 sshd[4193]: pam_unix(sshd:session): session closed for user core Apr 12 18:58:20.677564 systemd[1]: sshd@17-172.31.18.181:22-147.75.109.163:41436.service: Deactivated successfully. Apr 12 18:58:20.679042 systemd[1]: session-18.scope: Deactivated successfully. Apr 12 18:58:20.680451 systemd-logind[1636]: Session 18 logged out. Waiting for processes to exit. Apr 12 18:58:20.681946 systemd-logind[1636]: Removed session 18. Apr 12 18:58:20.703077 systemd[1]: Started sshd@18-172.31.18.181:22-147.75.109.163:41438.service. Apr 12 18:58:20.874736 sshd[4203]: Accepted publickey for core from 147.75.109.163 port 41438 ssh2: RSA SHA256:+N1xisw2c2FaZUjSYyTG/z1AiN+MoHtibeEcHRhPKVY Apr 12 18:58:20.876450 sshd[4203]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 18:58:20.882301 systemd-logind[1636]: New session 19 of user core. Apr 12 18:58:20.883503 systemd[1]: Started session-19.scope. Apr 12 18:58:21.087268 sshd[4203]: pam_unix(sshd:session): session closed for user core Apr 12 18:58:21.092289 systemd-logind[1636]: Session 19 logged out. Waiting for processes to exit. Apr 12 18:58:21.094291 systemd[1]: sshd@18-172.31.18.181:22-147.75.109.163:41438.service: Deactivated successfully. Apr 12 18:58:21.095337 systemd[1]: session-19.scope: Deactivated successfully. Apr 12 18:58:21.096374 systemd-logind[1636]: Removed session 19. Apr 12 18:58:26.115716 systemd[1]: Started sshd@19-172.31.18.181:22-147.75.109.163:41452.service. Apr 12 18:58:26.302447 sshd[4215]: Accepted publickey for core from 147.75.109.163 port 41452 ssh2: RSA SHA256:+N1xisw2c2FaZUjSYyTG/z1AiN+MoHtibeEcHRhPKVY Apr 12 18:58:26.304457 sshd[4215]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 18:58:26.312136 systemd[1]: Started session-20.scope. Apr 12 18:58:26.313732 systemd-logind[1636]: New session 20 of user core. 
Apr 12 18:58:26.529666 sshd[4215]: pam_unix(sshd:session): session closed for user core Apr 12 18:58:26.533701 systemd-logind[1636]: Session 20 logged out. Waiting for processes to exit. Apr 12 18:58:26.533901 systemd[1]: sshd@19-172.31.18.181:22-147.75.109.163:41452.service: Deactivated successfully. Apr 12 18:58:26.534924 systemd[1]: session-20.scope: Deactivated successfully. Apr 12 18:58:26.535991 systemd-logind[1636]: Removed session 20. Apr 12 18:58:31.561589 systemd[1]: Started sshd@20-172.31.18.181:22-147.75.109.163:49020.service. Apr 12 18:58:31.744786 sshd[4232]: Accepted publickey for core from 147.75.109.163 port 49020 ssh2: RSA SHA256:+N1xisw2c2FaZUjSYyTG/z1AiN+MoHtibeEcHRhPKVY Apr 12 18:58:31.744201 sshd[4232]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 18:58:31.753397 systemd-logind[1636]: New session 21 of user core. Apr 12 18:58:31.754310 systemd[1]: Started session-21.scope. Apr 12 18:58:31.986564 sshd[4232]: pam_unix(sshd:session): session closed for user core Apr 12 18:58:31.991515 systemd[1]: sshd@20-172.31.18.181:22-147.75.109.163:49020.service: Deactivated successfully. Apr 12 18:58:31.992420 systemd[1]: session-21.scope: Deactivated successfully. Apr 12 18:58:31.993251 systemd-logind[1636]: Session 21 logged out. Waiting for processes to exit. Apr 12 18:58:31.994256 systemd-logind[1636]: Removed session 21. Apr 12 18:58:37.015408 systemd[1]: Started sshd@21-172.31.18.181:22-147.75.109.163:58000.service. Apr 12 18:58:37.189747 sshd[4244]: Accepted publickey for core from 147.75.109.163 port 58000 ssh2: RSA SHA256:+N1xisw2c2FaZUjSYyTG/z1AiN+MoHtibeEcHRhPKVY Apr 12 18:58:37.191308 sshd[4244]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 18:58:37.197906 systemd[1]: Started session-22.scope. Apr 12 18:58:37.198938 systemd-logind[1636]: New session 22 of user core. 
Apr 12 18:58:37.407797 sshd[4244]: pam_unix(sshd:session): session closed for user core Apr 12 18:58:37.416628 systemd[1]: sshd@21-172.31.18.181:22-147.75.109.163:58000.service: Deactivated successfully. Apr 12 18:58:37.422737 systemd[1]: session-22.scope: Deactivated successfully. Apr 12 18:58:37.427717 systemd-logind[1636]: Session 22 logged out. Waiting for processes to exit. Apr 12 18:58:37.429718 systemd-logind[1636]: Removed session 22. Apr 12 18:58:42.439233 systemd[1]: Started sshd@22-172.31.18.181:22-147.75.109.163:58006.service. Apr 12 18:58:42.617340 sshd[4256]: Accepted publickey for core from 147.75.109.163 port 58006 ssh2: RSA SHA256:+N1xisw2c2FaZUjSYyTG/z1AiN+MoHtibeEcHRhPKVY Apr 12 18:58:42.619282 sshd[4256]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 18:58:42.627753 systemd[1]: Started session-23.scope. Apr 12 18:58:42.628349 systemd-logind[1636]: New session 23 of user core. Apr 12 18:58:42.849510 sshd[4256]: pam_unix(sshd:session): session closed for user core Apr 12 18:58:42.863343 systemd-logind[1636]: Session 23 logged out. Waiting for processes to exit. Apr 12 18:58:42.863764 systemd[1]: sshd@22-172.31.18.181:22-147.75.109.163:58006.service: Deactivated successfully. Apr 12 18:58:42.866135 systemd[1]: session-23.scope: Deactivated successfully. Apr 12 18:58:42.868341 systemd-logind[1636]: Removed session 23. Apr 12 18:58:42.879960 systemd[1]: Started sshd@23-172.31.18.181:22-147.75.109.163:58010.service. Apr 12 18:58:43.070880 sshd[4268]: Accepted publickey for core from 147.75.109.163 port 58010 ssh2: RSA SHA256:+N1xisw2c2FaZUjSYyTG/z1AiN+MoHtibeEcHRhPKVY Apr 12 18:58:43.072741 sshd[4268]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 18:58:43.080999 systemd[1]: Started session-24.scope. Apr 12 18:58:43.082243 systemd-logind[1636]: New session 24 of user core. 
Apr 12 18:58:45.064900 systemd[1]: run-containerd-runc-k8s.io-f880fc7945711134e8c23f066053e2982eeacbabd1afebcb427a92804d5d8366-runc.JopvYI.mount: Deactivated successfully. Apr 12 18:58:45.072902 env[1645]: time="2024-04-12T18:58:45.072286092Z" level=info msg="StopContainer for \"2542e6e27a979b22949f01e3c1579dfccb0be1b19bfb543f9e28f8801a475d90\" with timeout 30 (s)" Apr 12 18:58:45.073929 env[1645]: time="2024-04-12T18:58:45.072994681Z" level=info msg="Stop container \"2542e6e27a979b22949f01e3c1579dfccb0be1b19bfb543f9e28f8801a475d90\" with signal terminated" Apr 12 18:58:45.097853 systemd[1]: cri-containerd-2542e6e27a979b22949f01e3c1579dfccb0be1b19bfb543f9e28f8801a475d90.scope: Deactivated successfully. Apr 12 18:58:45.143732 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2542e6e27a979b22949f01e3c1579dfccb0be1b19bfb543f9e28f8801a475d90-rootfs.mount: Deactivated successfully. Apr 12 18:58:45.147484 env[1645]: time="2024-04-12T18:58:45.147390201Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 12 18:58:45.156485 env[1645]: time="2024-04-12T18:58:45.156445864Z" level=info msg="StopContainer for \"f880fc7945711134e8c23f066053e2982eeacbabd1afebcb427a92804d5d8366\" with timeout 2 (s)" Apr 12 18:58:45.157247 env[1645]: time="2024-04-12T18:58:45.156740896Z" level=info msg="Stop container \"f880fc7945711134e8c23f066053e2982eeacbabd1afebcb427a92804d5d8366\" with signal terminated" Apr 12 18:58:45.163190 env[1645]: time="2024-04-12T18:58:45.163133477Z" level=info msg="shim disconnected" id=2542e6e27a979b22949f01e3c1579dfccb0be1b19bfb543f9e28f8801a475d90 Apr 12 18:58:45.163190 env[1645]: time="2024-04-12T18:58:45.163192919Z" level=warning msg="cleaning up after shim disconnected" id=2542e6e27a979b22949f01e3c1579dfccb0be1b19bfb543f9e28f8801a475d90 
namespace=k8s.io Apr 12 18:58:45.163483 env[1645]: time="2024-04-12T18:58:45.163204777Z" level=info msg="cleaning up dead shim" Apr 12 18:58:45.166780 systemd-networkd[1459]: lxc_health: Link DOWN Apr 12 18:58:45.166788 systemd-networkd[1459]: lxc_health: Lost carrier Apr 12 18:58:45.233372 env[1645]: time="2024-04-12T18:58:45.233323900Z" level=warning msg="cleanup warnings time=\"2024-04-12T18:58:45Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4323 runtime=io.containerd.runc.v2\n" Apr 12 18:58:45.359925 env[1645]: time="2024-04-12T18:58:45.345872213Z" level=info msg="StopContainer for \"2542e6e27a979b22949f01e3c1579dfccb0be1b19bfb543f9e28f8801a475d90\" returns successfully" Apr 12 18:58:45.359925 env[1645]: time="2024-04-12T18:58:45.347112942Z" level=info msg="StopPodSandbox for \"6feabfa9e7652ce5c63f1fe22cd48687773fa616f8fd5e431434a28858f43f50\"" Apr 12 18:58:45.359925 env[1645]: time="2024-04-12T18:58:45.347186948Z" level=info msg="Container to stop \"2542e6e27a979b22949f01e3c1579dfccb0be1b19bfb543f9e28f8801a475d90\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 12 18:58:45.351264 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6feabfa9e7652ce5c63f1fe22cd48687773fa616f8fd5e431434a28858f43f50-shm.mount: Deactivated successfully. Apr 12 18:58:45.361068 systemd[1]: cri-containerd-f880fc7945711134e8c23f066053e2982eeacbabd1afebcb427a92804d5d8366.scope: Deactivated successfully. Apr 12 18:58:45.361380 systemd[1]: cri-containerd-f880fc7945711134e8c23f066053e2982eeacbabd1afebcb427a92804d5d8366.scope: Consumed 9.195s CPU time. Apr 12 18:58:45.367402 systemd[1]: cri-containerd-6feabfa9e7652ce5c63f1fe22cd48687773fa616f8fd5e431434a28858f43f50.scope: Deactivated successfully. 
Apr 12 18:58:45.452474 env[1645]: time="2024-04-12T18:58:45.452424843Z" level=info msg="shim disconnected" id=6feabfa9e7652ce5c63f1fe22cd48687773fa616f8fd5e431434a28858f43f50 Apr 12 18:58:45.452767 env[1645]: time="2024-04-12T18:58:45.452669977Z" level=warning msg="cleaning up after shim disconnected" id=6feabfa9e7652ce5c63f1fe22cd48687773fa616f8fd5e431434a28858f43f50 namespace=k8s.io Apr 12 18:58:45.452767 env[1645]: time="2024-04-12T18:58:45.452701867Z" level=info msg="cleaning up dead shim" Apr 12 18:58:45.452987 env[1645]: time="2024-04-12T18:58:45.452422775Z" level=info msg="shim disconnected" id=f880fc7945711134e8c23f066053e2982eeacbabd1afebcb427a92804d5d8366 Apr 12 18:58:45.453640 env[1645]: time="2024-04-12T18:58:45.453511870Z" level=warning msg="cleaning up after shim disconnected" id=f880fc7945711134e8c23f066053e2982eeacbabd1afebcb427a92804d5d8366 namespace=k8s.io Apr 12 18:58:45.453780 env[1645]: time="2024-04-12T18:58:45.453759964Z" level=info msg="cleaning up dead shim" Apr 12 18:58:45.473492 env[1645]: time="2024-04-12T18:58:45.473445652Z" level=warning msg="cleanup warnings time=\"2024-04-12T18:58:45Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4370 runtime=io.containerd.runc.v2\n" Apr 12 18:58:45.474688 env[1645]: time="2024-04-12T18:58:45.473688470Z" level=warning msg="cleanup warnings time=\"2024-04-12T18:58:45Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4369 runtime=io.containerd.runc.v2\n" Apr 12 18:58:45.475129 env[1645]: time="2024-04-12T18:58:45.475097102Z" level=info msg="TearDown network for sandbox \"6feabfa9e7652ce5c63f1fe22cd48687773fa616f8fd5e431434a28858f43f50\" successfully" Apr 12 18:58:45.475219 env[1645]: time="2024-04-12T18:58:45.475124818Z" level=info msg="StopPodSandbox for \"6feabfa9e7652ce5c63f1fe22cd48687773fa616f8fd5e431434a28858f43f50\" returns successfully" Apr 12 18:58:45.478315 env[1645]: time="2024-04-12T18:58:45.478219983Z" level=info msg="StopContainer for 
\"f880fc7945711134e8c23f066053e2982eeacbabd1afebcb427a92804d5d8366\" returns successfully" Apr 12 18:58:45.480712 env[1645]: time="2024-04-12T18:58:45.480680638Z" level=info msg="StopPodSandbox for \"0bfdd264b27337351370860065e288881ba6c161a127495f9db8ba9457e40127\"" Apr 12 18:58:45.480909 env[1645]: time="2024-04-12T18:58:45.480747687Z" level=info msg="Container to stop \"f28a80d7f6fc7ff3af49a0ca615c482f2cb15bc690bdc26871464e3f40a4d529\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 12 18:58:45.480909 env[1645]: time="2024-04-12T18:58:45.480769254Z" level=info msg="Container to stop \"69b359725e718c8e30c3c46e19860baef3bc6bc815e03371ddd21ba99473dcd3\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 12 18:58:45.480909 env[1645]: time="2024-04-12T18:58:45.480786774Z" level=info msg="Container to stop \"f880fc7945711134e8c23f066053e2982eeacbabd1afebcb427a92804d5d8366\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 12 18:58:45.480909 env[1645]: time="2024-04-12T18:58:45.480803866Z" level=info msg="Container to stop \"e70f39d71e05eb4c961f3f14f9a1bc6abbb889224093cefe364e16becc862da6\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 12 18:58:45.481321 env[1645]: time="2024-04-12T18:58:45.480904518Z" level=info msg="Container to stop \"76d1287249c592292e57653093babe3e4067e8885c125e2cf3e26b0d544e2270\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 12 18:58:45.493234 systemd[1]: cri-containerd-0bfdd264b27337351370860065e288881ba6c161a127495f9db8ba9457e40127.scope: Deactivated successfully. 
Apr 12 18:58:45.538712 env[1645]: time="2024-04-12T18:58:45.538656463Z" level=info msg="shim disconnected" id=0bfdd264b27337351370860065e288881ba6c161a127495f9db8ba9457e40127 Apr 12 18:58:45.539111 env[1645]: time="2024-04-12T18:58:45.538717918Z" level=warning msg="cleaning up after shim disconnected" id=0bfdd264b27337351370860065e288881ba6c161a127495f9db8ba9457e40127 namespace=k8s.io Apr 12 18:58:45.539111 env[1645]: time="2024-04-12T18:58:45.538730363Z" level=info msg="cleaning up dead shim" Apr 12 18:58:45.553739 env[1645]: time="2024-04-12T18:58:45.553689186Z" level=warning msg="cleanup warnings time=\"2024-04-12T18:58:45Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4415 runtime=io.containerd.runc.v2\n" Apr 12 18:58:45.554078 env[1645]: time="2024-04-12T18:58:45.554040019Z" level=info msg="TearDown network for sandbox \"0bfdd264b27337351370860065e288881ba6c161a127495f9db8ba9457e40127\" successfully" Apr 12 18:58:45.554168 env[1645]: time="2024-04-12T18:58:45.554076221Z" level=info msg="StopPodSandbox for \"0bfdd264b27337351370860065e288881ba6c161a127495f9db8ba9457e40127\" returns successfully" Apr 12 18:58:45.604373 kubelet[2616]: I0412 18:58:45.604325 2616 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g7j7w\" (UniqueName: \"kubernetes.io/projected/20519993-eed6-4b35-a793-45e8c3bf50e1-kube-api-access-g7j7w\") pod \"20519993-eed6-4b35-a793-45e8c3bf50e1\" (UID: \"20519993-eed6-4b35-a793-45e8c3bf50e1\") " Apr 12 18:58:45.604920 kubelet[2616]: I0412 18:58:45.604390 2616 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/20519993-eed6-4b35-a793-45e8c3bf50e1-cilium-config-path\") pod \"20519993-eed6-4b35-a793-45e8c3bf50e1\" (UID: \"20519993-eed6-4b35-a793-45e8c3bf50e1\") " Apr 12 18:58:45.610144 kubelet[2616]: I0412 18:58:45.608718 2616 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/20519993-eed6-4b35-a793-45e8c3bf50e1-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "20519993-eed6-4b35-a793-45e8c3bf50e1" (UID: "20519993-eed6-4b35-a793-45e8c3bf50e1"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 12 18:58:45.621765 kubelet[2616]: I0412 18:58:45.621715 2616 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20519993-eed6-4b35-a793-45e8c3bf50e1-kube-api-access-g7j7w" (OuterVolumeSpecName: "kube-api-access-g7j7w") pod "20519993-eed6-4b35-a793-45e8c3bf50e1" (UID: "20519993-eed6-4b35-a793-45e8c3bf50e1"). InnerVolumeSpecName "kube-api-access-g7j7w". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 12 18:58:45.704892 kubelet[2616]: I0412 18:58:45.704847 2616 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xjp6g\" (UniqueName: \"kubernetes.io/projected/3ea48d00-dae2-491c-8e54-adcc87ea9bef-kube-api-access-xjp6g\") pod \"3ea48d00-dae2-491c-8e54-adcc87ea9bef\" (UID: \"3ea48d00-dae2-491c-8e54-adcc87ea9bef\") " Apr 12 18:58:45.706235 kubelet[2616]: I0412 18:58:45.704908 2616 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3ea48d00-dae2-491c-8e54-adcc87ea9bef-cni-path\") pod \"3ea48d00-dae2-491c-8e54-adcc87ea9bef\" (UID: \"3ea48d00-dae2-491c-8e54-adcc87ea9bef\") " Apr 12 18:58:45.706235 kubelet[2616]: I0412 18:58:45.704938 2616 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3ea48d00-dae2-491c-8e54-adcc87ea9bef-hubble-tls\") pod \"3ea48d00-dae2-491c-8e54-adcc87ea9bef\" (UID: \"3ea48d00-dae2-491c-8e54-adcc87ea9bef\") " Apr 12 18:58:45.706235 kubelet[2616]: I0412 18:58:45.704968 2616 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" 
(UniqueName: \"kubernetes.io/secret/3ea48d00-dae2-491c-8e54-adcc87ea9bef-clustermesh-secrets\") pod \"3ea48d00-dae2-491c-8e54-adcc87ea9bef\" (UID: \"3ea48d00-dae2-491c-8e54-adcc87ea9bef\") " Apr 12 18:58:45.706235 kubelet[2616]: I0412 18:58:45.705305 2616 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3ea48d00-dae2-491c-8e54-adcc87ea9bef-host-proc-sys-net\") pod \"3ea48d00-dae2-491c-8e54-adcc87ea9bef\" (UID: \"3ea48d00-dae2-491c-8e54-adcc87ea9bef\") " Apr 12 18:58:45.706235 kubelet[2616]: I0412 18:58:45.705334 2616 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3ea48d00-dae2-491c-8e54-adcc87ea9bef-cilium-cgroup\") pod \"3ea48d00-dae2-491c-8e54-adcc87ea9bef\" (UID: \"3ea48d00-dae2-491c-8e54-adcc87ea9bef\") " Apr 12 18:58:45.706235 kubelet[2616]: I0412 18:58:45.706171 2616 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3ea48d00-dae2-491c-8e54-adcc87ea9bef-hostproc\") pod \"3ea48d00-dae2-491c-8e54-adcc87ea9bef\" (UID: \"3ea48d00-dae2-491c-8e54-adcc87ea9bef\") " Apr 12 18:58:45.708832 kubelet[2616]: I0412 18:58:45.706219 2616 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3ea48d00-dae2-491c-8e54-adcc87ea9bef-cilium-run\") pod \"3ea48d00-dae2-491c-8e54-adcc87ea9bef\" (UID: \"3ea48d00-dae2-491c-8e54-adcc87ea9bef\") " Apr 12 18:58:45.708832 kubelet[2616]: I0412 18:58:45.706248 2616 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3ea48d00-dae2-491c-8e54-adcc87ea9bef-xtables-lock\") pod \"3ea48d00-dae2-491c-8e54-adcc87ea9bef\" (UID: \"3ea48d00-dae2-491c-8e54-adcc87ea9bef\") " Apr 12 18:58:45.708832 kubelet[2616]: I0412 18:58:45.706275 2616 
reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3ea48d00-dae2-491c-8e54-adcc87ea9bef-lib-modules\") pod \"3ea48d00-dae2-491c-8e54-adcc87ea9bef\" (UID: \"3ea48d00-dae2-491c-8e54-adcc87ea9bef\") " Apr 12 18:58:45.708832 kubelet[2616]: I0412 18:58:45.706297 2616 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3ea48d00-dae2-491c-8e54-adcc87ea9bef-bpf-maps\") pod \"3ea48d00-dae2-491c-8e54-adcc87ea9bef\" (UID: \"3ea48d00-dae2-491c-8e54-adcc87ea9bef\") " Apr 12 18:58:45.708832 kubelet[2616]: I0412 18:58:45.708563 2616 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3ea48d00-dae2-491c-8e54-adcc87ea9bef-cilium-config-path\") pod \"3ea48d00-dae2-491c-8e54-adcc87ea9bef\" (UID: \"3ea48d00-dae2-491c-8e54-adcc87ea9bef\") " Apr 12 18:58:45.708832 kubelet[2616]: I0412 18:58:45.708647 2616 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3ea48d00-dae2-491c-8e54-adcc87ea9bef-host-proc-sys-kernel\") pod \"3ea48d00-dae2-491c-8e54-adcc87ea9bef\" (UID: \"3ea48d00-dae2-491c-8e54-adcc87ea9bef\") " Apr 12 18:58:45.709171 kubelet[2616]: I0412 18:58:45.708673 2616 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3ea48d00-dae2-491c-8e54-adcc87ea9bef-etc-cni-netd\") pod \"3ea48d00-dae2-491c-8e54-adcc87ea9bef\" (UID: \"3ea48d00-dae2-491c-8e54-adcc87ea9bef\") " Apr 12 18:58:45.709171 kubelet[2616]: I0412 18:58:45.708848 2616 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3ea48d00-dae2-491c-8e54-adcc87ea9bef-hostproc" (OuterVolumeSpecName: "hostproc") pod "3ea48d00-dae2-491c-8e54-adcc87ea9bef" (UID: 
"3ea48d00-dae2-491c-8e54-adcc87ea9bef"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 12 18:58:45.709263 kubelet[2616]: I0412 18:58:45.709196 2616 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/20519993-eed6-4b35-a793-45e8c3bf50e1-cilium-config-path\") on node \"ip-172-31-18-181\" DevicePath \"\"" Apr 12 18:58:45.709263 kubelet[2616]: I0412 18:58:45.709225 2616 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-g7j7w\" (UniqueName: \"kubernetes.io/projected/20519993-eed6-4b35-a793-45e8c3bf50e1-kube-api-access-g7j7w\") on node \"ip-172-31-18-181\" DevicePath \"\"" Apr 12 18:58:45.709357 kubelet[2616]: I0412 18:58:45.709263 2616 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3ea48d00-dae2-491c-8e54-adcc87ea9bef-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "3ea48d00-dae2-491c-8e54-adcc87ea9bef" (UID: "3ea48d00-dae2-491c-8e54-adcc87ea9bef"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 12 18:58:45.709357 kubelet[2616]: I0412 18:58:45.709301 2616 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3ea48d00-dae2-491c-8e54-adcc87ea9bef-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "3ea48d00-dae2-491c-8e54-adcc87ea9bef" (UID: "3ea48d00-dae2-491c-8e54-adcc87ea9bef"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 12 18:58:45.709357 kubelet[2616]: I0412 18:58:45.709331 2616 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3ea48d00-dae2-491c-8e54-adcc87ea9bef-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "3ea48d00-dae2-491c-8e54-adcc87ea9bef" (UID: "3ea48d00-dae2-491c-8e54-adcc87ea9bef"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 12 18:58:45.709490 kubelet[2616]: I0412 18:58:45.709357 2616 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3ea48d00-dae2-491c-8e54-adcc87ea9bef-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "3ea48d00-dae2-491c-8e54-adcc87ea9bef" (UID: "3ea48d00-dae2-491c-8e54-adcc87ea9bef"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 12 18:58:45.709490 kubelet[2616]: I0412 18:58:45.709383 2616 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3ea48d00-dae2-491c-8e54-adcc87ea9bef-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "3ea48d00-dae2-491c-8e54-adcc87ea9bef" (UID: "3ea48d00-dae2-491c-8e54-adcc87ea9bef"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 12 18:58:45.710709 kubelet[2616]: I0412 18:58:45.710682 2616 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3ea48d00-dae2-491c-8e54-adcc87ea9bef-cni-path" (OuterVolumeSpecName: "cni-path") pod "3ea48d00-dae2-491c-8e54-adcc87ea9bef" (UID: "3ea48d00-dae2-491c-8e54-adcc87ea9bef"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 12 18:58:45.712149 kubelet[2616]: I0412 18:58:45.712124 2616 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3ea48d00-dae2-491c-8e54-adcc87ea9bef-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "3ea48d00-dae2-491c-8e54-adcc87ea9bef" (UID: "3ea48d00-dae2-491c-8e54-adcc87ea9bef"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 12 18:58:45.712499 kubelet[2616]: I0412 18:58:45.712257 2616 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3ea48d00-dae2-491c-8e54-adcc87ea9bef-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "3ea48d00-dae2-491c-8e54-adcc87ea9bef" (UID: "3ea48d00-dae2-491c-8e54-adcc87ea9bef"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 12 18:58:45.712585 kubelet[2616]: I0412 18:58:45.712461 2616 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3ea48d00-dae2-491c-8e54-adcc87ea9bef-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "3ea48d00-dae2-491c-8e54-adcc87ea9bef" (UID: "3ea48d00-dae2-491c-8e54-adcc87ea9bef"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 12 18:58:45.715976 kubelet[2616]: I0412 18:58:45.715939 2616 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3ea48d00-dae2-491c-8e54-adcc87ea9bef-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "3ea48d00-dae2-491c-8e54-adcc87ea9bef" (UID: "3ea48d00-dae2-491c-8e54-adcc87ea9bef"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 12 18:58:45.725427 kubelet[2616]: I0412 18:58:45.725319 2616 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ea48d00-dae2-491c-8e54-adcc87ea9bef-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "3ea48d00-dae2-491c-8e54-adcc87ea9bef" (UID: "3ea48d00-dae2-491c-8e54-adcc87ea9bef"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 12 18:58:45.726390 kubelet[2616]: I0412 18:58:45.726359 2616 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ea48d00-dae2-491c-8e54-adcc87ea9bef-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "3ea48d00-dae2-491c-8e54-adcc87ea9bef" (UID: "3ea48d00-dae2-491c-8e54-adcc87ea9bef"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 12 18:58:45.732795 kubelet[2616]: I0412 18:58:45.732739 2616 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ea48d00-dae2-491c-8e54-adcc87ea9bef-kube-api-access-xjp6g" (OuterVolumeSpecName: "kube-api-access-xjp6g") pod "3ea48d00-dae2-491c-8e54-adcc87ea9bef" (UID: "3ea48d00-dae2-491c-8e54-adcc87ea9bef"). InnerVolumeSpecName "kube-api-access-xjp6g". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 12 18:58:45.810409 kubelet[2616]: I0412 18:58:45.810366 2616 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3ea48d00-dae2-491c-8e54-adcc87ea9bef-cni-path\") on node \"ip-172-31-18-181\" DevicePath \"\"" Apr 12 18:58:45.810409 kubelet[2616]: I0412 18:58:45.810411 2616 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3ea48d00-dae2-491c-8e54-adcc87ea9bef-hubble-tls\") on node \"ip-172-31-18-181\" DevicePath \"\"" Apr 12 18:58:45.810672 kubelet[2616]: I0412 18:58:45.810432 2616 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-xjp6g\" (UniqueName: \"kubernetes.io/projected/3ea48d00-dae2-491c-8e54-adcc87ea9bef-kube-api-access-xjp6g\") on node \"ip-172-31-18-181\" DevicePath \"\"" Apr 12 18:58:45.810672 kubelet[2616]: I0412 18:58:45.810449 2616 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: 
\"kubernetes.io/secret/3ea48d00-dae2-491c-8e54-adcc87ea9bef-clustermesh-secrets\") on node \"ip-172-31-18-181\" DevicePath \"\"" Apr 12 18:58:45.810672 kubelet[2616]: I0412 18:58:45.810466 2616 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3ea48d00-dae2-491c-8e54-adcc87ea9bef-hostproc\") on node \"ip-172-31-18-181\" DevicePath \"\"" Apr 12 18:58:45.810672 kubelet[2616]: I0412 18:58:45.810478 2616 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3ea48d00-dae2-491c-8e54-adcc87ea9bef-host-proc-sys-net\") on node \"ip-172-31-18-181\" DevicePath \"\"" Apr 12 18:58:45.810672 kubelet[2616]: I0412 18:58:45.810490 2616 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3ea48d00-dae2-491c-8e54-adcc87ea9bef-cilium-cgroup\") on node \"ip-172-31-18-181\" DevicePath \"\"" Apr 12 18:58:45.810672 kubelet[2616]: I0412 18:58:45.810502 2616 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3ea48d00-dae2-491c-8e54-adcc87ea9bef-cilium-run\") on node \"ip-172-31-18-181\" DevicePath \"\"" Apr 12 18:58:45.810672 kubelet[2616]: I0412 18:58:45.810515 2616 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3ea48d00-dae2-491c-8e54-adcc87ea9bef-xtables-lock\") on node \"ip-172-31-18-181\" DevicePath \"\"" Apr 12 18:58:45.810672 kubelet[2616]: I0412 18:58:45.810528 2616 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3ea48d00-dae2-491c-8e54-adcc87ea9bef-lib-modules\") on node \"ip-172-31-18-181\" DevicePath \"\"" Apr 12 18:58:45.810904 kubelet[2616]: I0412 18:58:45.810540 2616 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3ea48d00-dae2-491c-8e54-adcc87ea9bef-bpf-maps\") on node 
\"ip-172-31-18-181\" DevicePath \"\"" Apr 12 18:58:45.810904 kubelet[2616]: I0412 18:58:45.810553 2616 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3ea48d00-dae2-491c-8e54-adcc87ea9bef-cilium-config-path\") on node \"ip-172-31-18-181\" DevicePath \"\"" Apr 12 18:58:45.810904 kubelet[2616]: I0412 18:58:45.810567 2616 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3ea48d00-dae2-491c-8e54-adcc87ea9bef-etc-cni-netd\") on node \"ip-172-31-18-181\" DevicePath \"\"" Apr 12 18:58:45.810904 kubelet[2616]: I0412 18:58:45.810597 2616 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3ea48d00-dae2-491c-8e54-adcc87ea9bef-host-proc-sys-kernel\") on node \"ip-172-31-18-181\" DevicePath \"\"" Apr 12 18:58:46.055322 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f880fc7945711134e8c23f066053e2982eeacbabd1afebcb427a92804d5d8366-rootfs.mount: Deactivated successfully. Apr 12 18:58:46.055530 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6feabfa9e7652ce5c63f1fe22cd48687773fa616f8fd5e431434a28858f43f50-rootfs.mount: Deactivated successfully. Apr 12 18:58:46.055631 systemd[1]: var-lib-kubelet-pods-20519993\x2deed6\x2d4b35\x2da793\x2d45e8c3bf50e1-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dg7j7w.mount: Deactivated successfully. Apr 12 18:58:46.055716 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0bfdd264b27337351370860065e288881ba6c161a127495f9db8ba9457e40127-rootfs.mount: Deactivated successfully. Apr 12 18:58:46.055801 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0bfdd264b27337351370860065e288881ba6c161a127495f9db8ba9457e40127-shm.mount: Deactivated successfully. 
Apr 12 18:58:46.055899 systemd[1]: var-lib-kubelet-pods-3ea48d00\x2ddae2\x2d491c\x2d8e54\x2dadcc87ea9bef-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dxjp6g.mount: Deactivated successfully. Apr 12 18:58:46.055983 systemd[1]: var-lib-kubelet-pods-3ea48d00\x2ddae2\x2d491c\x2d8e54\x2dadcc87ea9bef-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Apr 12 18:58:46.056064 systemd[1]: var-lib-kubelet-pods-3ea48d00\x2ddae2\x2d491c\x2d8e54\x2dadcc87ea9bef-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Apr 12 18:58:46.062562 kubelet[2616]: I0412 18:58:46.062530 2616 scope.go:117] "RemoveContainer" containerID="2542e6e27a979b22949f01e3c1579dfccb0be1b19bfb543f9e28f8801a475d90" Apr 12 18:58:46.069324 env[1645]: time="2024-04-12T18:58:46.068363578Z" level=info msg="RemoveContainer for \"2542e6e27a979b22949f01e3c1579dfccb0be1b19bfb543f9e28f8801a475d90\"" Apr 12 18:58:46.081830 systemd[1]: Removed slice kubepods-besteffort-pod20519993_eed6_4b35_a793_45e8c3bf50e1.slice. Apr 12 18:58:46.089432 env[1645]: time="2024-04-12T18:58:46.089388216Z" level=info msg="RemoveContainer for \"2542e6e27a979b22949f01e3c1579dfccb0be1b19bfb543f9e28f8801a475d90\" returns successfully" Apr 12 18:58:46.090721 kubelet[2616]: I0412 18:58:46.090676 2616 scope.go:117] "RemoveContainer" containerID="2542e6e27a979b22949f01e3c1579dfccb0be1b19bfb543f9e28f8801a475d90" Apr 12 18:58:46.092604 env[1645]: time="2024-04-12T18:58:46.092362069Z" level=error msg="ContainerStatus for \"2542e6e27a979b22949f01e3c1579dfccb0be1b19bfb543f9e28f8801a475d90\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2542e6e27a979b22949f01e3c1579dfccb0be1b19bfb543f9e28f8801a475d90\": not found" Apr 12 18:58:46.093311 systemd[1]: Removed slice kubepods-burstable-pod3ea48d00_dae2_491c_8e54_adcc87ea9bef.slice. 
Apr 12 18:58:46.093420 systemd[1]: kubepods-burstable-pod3ea48d00_dae2_491c_8e54_adcc87ea9bef.slice: Consumed 9.323s CPU time. Apr 12 18:58:46.099393 kubelet[2616]: E0412 18:58:46.099332 2616 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2542e6e27a979b22949f01e3c1579dfccb0be1b19bfb543f9e28f8801a475d90\": not found" containerID="2542e6e27a979b22949f01e3c1579dfccb0be1b19bfb543f9e28f8801a475d90" Apr 12 18:58:46.102270 kubelet[2616]: I0412 18:58:46.102126 2616 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2542e6e27a979b22949f01e3c1579dfccb0be1b19bfb543f9e28f8801a475d90"} err="failed to get container status \"2542e6e27a979b22949f01e3c1579dfccb0be1b19bfb543f9e28f8801a475d90\": rpc error: code = NotFound desc = an error occurred when try to find container \"2542e6e27a979b22949f01e3c1579dfccb0be1b19bfb543f9e28f8801a475d90\": not found" Apr 12 18:58:46.102719 kubelet[2616]: I0412 18:58:46.102686 2616 scope.go:117] "RemoveContainer" containerID="f880fc7945711134e8c23f066053e2982eeacbabd1afebcb427a92804d5d8366" Apr 12 18:58:46.109640 env[1645]: time="2024-04-12T18:58:46.109047567Z" level=info msg="RemoveContainer for \"f880fc7945711134e8c23f066053e2982eeacbabd1afebcb427a92804d5d8366\"" Apr 12 18:58:46.115175 env[1645]: time="2024-04-12T18:58:46.115119307Z" level=info msg="RemoveContainer for \"f880fc7945711134e8c23f066053e2982eeacbabd1afebcb427a92804d5d8366\" returns successfully" Apr 12 18:58:46.115582 kubelet[2616]: I0412 18:58:46.115538 2616 scope.go:117] "RemoveContainer" containerID="f28a80d7f6fc7ff3af49a0ca615c482f2cb15bc690bdc26871464e3f40a4d529" Apr 12 18:58:46.117268 env[1645]: time="2024-04-12T18:58:46.116949179Z" level=info msg="RemoveContainer for \"f28a80d7f6fc7ff3af49a0ca615c482f2cb15bc690bdc26871464e3f40a4d529\"" Apr 12 18:58:46.123069 env[1645]: time="2024-04-12T18:58:46.123007973Z" level=info 
msg="RemoveContainer for \"f28a80d7f6fc7ff3af49a0ca615c482f2cb15bc690bdc26871464e3f40a4d529\" returns successfully" Apr 12 18:58:46.123844 kubelet[2616]: I0412 18:58:46.123817 2616 scope.go:117] "RemoveContainer" containerID="69b359725e718c8e30c3c46e19860baef3bc6bc815e03371ddd21ba99473dcd3" Apr 12 18:58:46.128515 env[1645]: time="2024-04-12T18:58:46.128435997Z" level=info msg="RemoveContainer for \"69b359725e718c8e30c3c46e19860baef3bc6bc815e03371ddd21ba99473dcd3\"" Apr 12 18:58:46.133786 env[1645]: time="2024-04-12T18:58:46.133740219Z" level=info msg="RemoveContainer for \"69b359725e718c8e30c3c46e19860baef3bc6bc815e03371ddd21ba99473dcd3\" returns successfully" Apr 12 18:58:46.134119 kubelet[2616]: I0412 18:58:46.134099 2616 scope.go:117] "RemoveContainer" containerID="76d1287249c592292e57653093babe3e4067e8885c125e2cf3e26b0d544e2270" Apr 12 18:58:46.136936 env[1645]: time="2024-04-12T18:58:46.136795147Z" level=info msg="RemoveContainer for \"76d1287249c592292e57653093babe3e4067e8885c125e2cf3e26b0d544e2270\"" Apr 12 18:58:46.141421 env[1645]: time="2024-04-12T18:58:46.141376163Z" level=info msg="RemoveContainer for \"76d1287249c592292e57653093babe3e4067e8885c125e2cf3e26b0d544e2270\" returns successfully" Apr 12 18:58:46.141829 kubelet[2616]: I0412 18:58:46.141801 2616 scope.go:117] "RemoveContainer" containerID="e70f39d71e05eb4c961f3f14f9a1bc6abbb889224093cefe364e16becc862da6" Apr 12 18:58:46.145077 env[1645]: time="2024-04-12T18:58:46.145021988Z" level=info msg="RemoveContainer for \"e70f39d71e05eb4c961f3f14f9a1bc6abbb889224093cefe364e16becc862da6\"" Apr 12 18:58:46.150003 env[1645]: time="2024-04-12T18:58:46.149941283Z" level=info msg="RemoveContainer for \"e70f39d71e05eb4c961f3f14f9a1bc6abbb889224093cefe364e16becc862da6\" returns successfully" Apr 12 18:58:46.150438 kubelet[2616]: I0412 18:58:46.150409 2616 scope.go:117] "RemoveContainer" containerID="f880fc7945711134e8c23f066053e2982eeacbabd1afebcb427a92804d5d8366" Apr 12 18:58:46.150794 env[1645]: 
time="2024-04-12T18:58:46.150706099Z" level=error msg="ContainerStatus for \"f880fc7945711134e8c23f066053e2982eeacbabd1afebcb427a92804d5d8366\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f880fc7945711134e8c23f066053e2982eeacbabd1afebcb427a92804d5d8366\": not found" Apr 12 18:58:46.151264 kubelet[2616]: E0412 18:58:46.151241 2616 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f880fc7945711134e8c23f066053e2982eeacbabd1afebcb427a92804d5d8366\": not found" containerID="f880fc7945711134e8c23f066053e2982eeacbabd1afebcb427a92804d5d8366" Apr 12 18:58:46.151356 kubelet[2616]: I0412 18:58:46.151307 2616 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f880fc7945711134e8c23f066053e2982eeacbabd1afebcb427a92804d5d8366"} err="failed to get container status \"f880fc7945711134e8c23f066053e2982eeacbabd1afebcb427a92804d5d8366\": rpc error: code = NotFound desc = an error occurred when try to find container \"f880fc7945711134e8c23f066053e2982eeacbabd1afebcb427a92804d5d8366\": not found" Apr 12 18:58:46.151356 kubelet[2616]: I0412 18:58:46.151328 2616 scope.go:117] "RemoveContainer" containerID="f28a80d7f6fc7ff3af49a0ca615c482f2cb15bc690bdc26871464e3f40a4d529" Apr 12 18:58:46.151914 env[1645]: time="2024-04-12T18:58:46.151833920Z" level=error msg="ContainerStatus for \"f28a80d7f6fc7ff3af49a0ca615c482f2cb15bc690bdc26871464e3f40a4d529\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f28a80d7f6fc7ff3af49a0ca615c482f2cb15bc690bdc26871464e3f40a4d529\": not found" Apr 12 18:58:46.152147 kubelet[2616]: E0412 18:58:46.152043 2616 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container 
\"f28a80d7f6fc7ff3af49a0ca615c482f2cb15bc690bdc26871464e3f40a4d529\": not found" containerID="f28a80d7f6fc7ff3af49a0ca615c482f2cb15bc690bdc26871464e3f40a4d529" Apr 12 18:58:46.152252 kubelet[2616]: I0412 18:58:46.152159 2616 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f28a80d7f6fc7ff3af49a0ca615c482f2cb15bc690bdc26871464e3f40a4d529"} err="failed to get container status \"f28a80d7f6fc7ff3af49a0ca615c482f2cb15bc690bdc26871464e3f40a4d529\": rpc error: code = NotFound desc = an error occurred when try to find container \"f28a80d7f6fc7ff3af49a0ca615c482f2cb15bc690bdc26871464e3f40a4d529\": not found" Apr 12 18:58:46.152252 kubelet[2616]: I0412 18:58:46.152177 2616 scope.go:117] "RemoveContainer" containerID="69b359725e718c8e30c3c46e19860baef3bc6bc815e03371ddd21ba99473dcd3" Apr 12 18:58:46.152509 env[1645]: time="2024-04-12T18:58:46.152443518Z" level=error msg="ContainerStatus for \"69b359725e718c8e30c3c46e19860baef3bc6bc815e03371ddd21ba99473dcd3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"69b359725e718c8e30c3c46e19860baef3bc6bc815e03371ddd21ba99473dcd3\": not found" Apr 12 18:58:46.152721 kubelet[2616]: E0412 18:58:46.152704 2616 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"69b359725e718c8e30c3c46e19860baef3bc6bc815e03371ddd21ba99473dcd3\": not found" containerID="69b359725e718c8e30c3c46e19860baef3bc6bc815e03371ddd21ba99473dcd3" Apr 12 18:58:46.152851 kubelet[2616]: I0412 18:58:46.152823 2616 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"69b359725e718c8e30c3c46e19860baef3bc6bc815e03371ddd21ba99473dcd3"} err="failed to get container status \"69b359725e718c8e30c3c46e19860baef3bc6bc815e03371ddd21ba99473dcd3\": rpc error: code = NotFound desc = an error occurred when try to find container 
\"69b359725e718c8e30c3c46e19860baef3bc6bc815e03371ddd21ba99473dcd3\": not found" Apr 12 18:58:46.152851 kubelet[2616]: I0412 18:58:46.152853 2616 scope.go:117] "RemoveContainer" containerID="76d1287249c592292e57653093babe3e4067e8885c125e2cf3e26b0d544e2270" Apr 12 18:58:46.153121 env[1645]: time="2024-04-12T18:58:46.153050394Z" level=error msg="ContainerStatus for \"76d1287249c592292e57653093babe3e4067e8885c125e2cf3e26b0d544e2270\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"76d1287249c592292e57653093babe3e4067e8885c125e2cf3e26b0d544e2270\": not found" Apr 12 18:58:46.153463 kubelet[2616]: E0412 18:58:46.153449 2616 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"76d1287249c592292e57653093babe3e4067e8885c125e2cf3e26b0d544e2270\": not found" containerID="76d1287249c592292e57653093babe3e4067e8885c125e2cf3e26b0d544e2270" Apr 12 18:58:46.153593 kubelet[2616]: I0412 18:58:46.153554 2616 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"76d1287249c592292e57653093babe3e4067e8885c125e2cf3e26b0d544e2270"} err="failed to get container status \"76d1287249c592292e57653093babe3e4067e8885c125e2cf3e26b0d544e2270\": rpc error: code = NotFound desc = an error occurred when try to find container \"76d1287249c592292e57653093babe3e4067e8885c125e2cf3e26b0d544e2270\": not found" Apr 12 18:58:46.153689 kubelet[2616]: I0412 18:58:46.153595 2616 scope.go:117] "RemoveContainer" containerID="e70f39d71e05eb4c961f3f14f9a1bc6abbb889224093cefe364e16becc862da6" Apr 12 18:58:46.153887 env[1645]: time="2024-04-12T18:58:46.153831628Z" level=error msg="ContainerStatus for \"e70f39d71e05eb4c961f3f14f9a1bc6abbb889224093cefe364e16becc862da6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"e70f39d71e05eb4c961f3f14f9a1bc6abbb889224093cefe364e16becc862da6\": not found" Apr 12 18:58:46.154131 kubelet[2616]: E0412 18:58:46.154113 2616 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e70f39d71e05eb4c961f3f14f9a1bc6abbb889224093cefe364e16becc862da6\": not found" containerID="e70f39d71e05eb4c961f3f14f9a1bc6abbb889224093cefe364e16becc862da6" Apr 12 18:58:46.154271 kubelet[2616]: I0412 18:58:46.154228 2616 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e70f39d71e05eb4c961f3f14f9a1bc6abbb889224093cefe364e16becc862da6"} err="failed to get container status \"e70f39d71e05eb4c961f3f14f9a1bc6abbb889224093cefe364e16becc862da6\": rpc error: code = NotFound desc = an error occurred when try to find container \"e70f39d71e05eb4c961f3f14f9a1bc6abbb889224093cefe364e16becc862da6\": not found" Apr 12 18:58:46.481384 kubelet[2616]: I0412 18:58:46.481348 2616 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="20519993-eed6-4b35-a793-45e8c3bf50e1" path="/var/lib/kubelet/pods/20519993-eed6-4b35-a793-45e8c3bf50e1/volumes" Apr 12 18:58:46.481941 kubelet[2616]: I0412 18:58:46.481911 2616 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="3ea48d00-dae2-491c-8e54-adcc87ea9bef" path="/var/lib/kubelet/pods/3ea48d00-dae2-491c-8e54-adcc87ea9bef/volumes" Apr 12 18:58:46.951534 sshd[4268]: pam_unix(sshd:session): session closed for user core Apr 12 18:58:46.955564 systemd[1]: sshd@23-172.31.18.181:22-147.75.109.163:58010.service: Deactivated successfully. Apr 12 18:58:46.956446 systemd[1]: session-24.scope: Deactivated successfully. Apr 12 18:58:46.957372 systemd-logind[1636]: Session 24 logged out. Waiting for processes to exit. Apr 12 18:58:46.958427 systemd-logind[1636]: Removed session 24. Apr 12 18:58:46.977144 systemd[1]: Started sshd@24-172.31.18.181:22-147.75.109.163:58022.service. 
Apr 12 18:58:47.165874 sshd[4434]: Accepted publickey for core from 147.75.109.163 port 58022 ssh2: RSA SHA256:+N1xisw2c2FaZUjSYyTG/z1AiN+MoHtibeEcHRhPKVY Apr 12 18:58:47.168095 sshd[4434]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 18:58:47.174701 systemd[1]: Started session-25.scope. Apr 12 18:58:47.175331 systemd-logind[1636]: New session 25 of user core. Apr 12 18:58:47.864170 kubelet[2616]: I0412 18:58:47.864132 2616 topology_manager.go:215] "Topology Admit Handler" podUID="eedee628-1dca-4417-a3f3-d7c9ce6196d8" podNamespace="kube-system" podName="cilium-g2pr4" Apr 12 18:58:47.865772 kubelet[2616]: E0412 18:58:47.865728 2616 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3ea48d00-dae2-491c-8e54-adcc87ea9bef" containerName="mount-cgroup" Apr 12 18:58:47.865953 kubelet[2616]: E0412 18:58:47.865939 2616 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3ea48d00-dae2-491c-8e54-adcc87ea9bef" containerName="apply-sysctl-overwrites" Apr 12 18:58:47.866065 kubelet[2616]: E0412 18:58:47.866055 2616 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="20519993-eed6-4b35-a793-45e8c3bf50e1" containerName="cilium-operator" Apr 12 18:58:47.866199 kubelet[2616]: E0412 18:58:47.866176 2616 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3ea48d00-dae2-491c-8e54-adcc87ea9bef" containerName="cilium-agent" Apr 12 18:58:47.866374 kubelet[2616]: E0412 18:58:47.866360 2616 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3ea48d00-dae2-491c-8e54-adcc87ea9bef" containerName="mount-bpf-fs" Apr 12 18:58:47.866494 kubelet[2616]: E0412 18:58:47.866484 2616 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3ea48d00-dae2-491c-8e54-adcc87ea9bef" containerName="clean-cilium-state" Apr 12 18:58:47.866664 kubelet[2616]: I0412 18:58:47.866643 2616 memory_manager.go:354] "RemoveStaleState removing state" podUID="20519993-eed6-4b35-a793-45e8c3bf50e1" 
containerName="cilium-operator" Apr 12 18:58:47.866761 kubelet[2616]: I0412 18:58:47.866753 2616 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ea48d00-dae2-491c-8e54-adcc87ea9bef" containerName="cilium-agent" Apr 12 18:58:47.867814 sshd[4434]: pam_unix(sshd:session): session closed for user core Apr 12 18:58:47.876315 systemd[1]: sshd@24-172.31.18.181:22-147.75.109.163:58022.service: Deactivated successfully. Apr 12 18:58:47.877502 systemd[1]: session-25.scope: Deactivated successfully. Apr 12 18:58:47.882203 systemd-logind[1636]: Session 25 logged out. Waiting for processes to exit. Apr 12 18:58:47.887113 systemd-logind[1636]: Removed session 25. Apr 12 18:58:47.889890 systemd[1]: Created slice kubepods-burstable-podeedee628_1dca_4417_a3f3_d7c9ce6196d8.slice. Apr 12 18:58:47.916100 systemd[1]: Started sshd@25-172.31.18.181:22-147.75.109.163:34480.service. Apr 12 18:58:47.942832 kubelet[2616]: I0412 18:58:47.942791 2616 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/eedee628-1dca-4417-a3f3-d7c9ce6196d8-etc-cni-netd\") pod \"cilium-g2pr4\" (UID: \"eedee628-1dca-4417-a3f3-d7c9ce6196d8\") " pod="kube-system/cilium-g2pr4" Apr 12 18:58:47.943285 kubelet[2616]: I0412 18:58:47.943226 2616 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/eedee628-1dca-4417-a3f3-d7c9ce6196d8-cilium-config-path\") pod \"cilium-g2pr4\" (UID: \"eedee628-1dca-4417-a3f3-d7c9ce6196d8\") " pod="kube-system/cilium-g2pr4" Apr 12 18:58:47.943455 kubelet[2616]: I0412 18:58:47.943442 2616 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/eedee628-1dca-4417-a3f3-d7c9ce6196d8-lib-modules\") pod \"cilium-g2pr4\" (UID: \"eedee628-1dca-4417-a3f3-d7c9ce6196d8\") " 
pod="kube-system/cilium-g2pr4" Apr 12 18:58:47.943693 kubelet[2616]: I0412 18:58:47.943680 2616 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/eedee628-1dca-4417-a3f3-d7c9ce6196d8-bpf-maps\") pod \"cilium-g2pr4\" (UID: \"eedee628-1dca-4417-a3f3-d7c9ce6196d8\") " pod="kube-system/cilium-g2pr4" Apr 12 18:58:47.943885 kubelet[2616]: I0412 18:58:47.943872 2616 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ph8w\" (UniqueName: \"kubernetes.io/projected/eedee628-1dca-4417-a3f3-d7c9ce6196d8-kube-api-access-2ph8w\") pod \"cilium-g2pr4\" (UID: \"eedee628-1dca-4417-a3f3-d7c9ce6196d8\") " pod="kube-system/cilium-g2pr4" Apr 12 18:58:47.944175 kubelet[2616]: I0412 18:58:47.944159 2616 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/eedee628-1dca-4417-a3f3-d7c9ce6196d8-hubble-tls\") pod \"cilium-g2pr4\" (UID: \"eedee628-1dca-4417-a3f3-d7c9ce6196d8\") " pod="kube-system/cilium-g2pr4" Apr 12 18:58:47.944368 kubelet[2616]: I0412 18:58:47.944355 2616 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/eedee628-1dca-4417-a3f3-d7c9ce6196d8-cilium-run\") pod \"cilium-g2pr4\" (UID: \"eedee628-1dca-4417-a3f3-d7c9ce6196d8\") " pod="kube-system/cilium-g2pr4" Apr 12 18:58:47.944543 kubelet[2616]: I0412 18:58:47.944516 2616 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/eedee628-1dca-4417-a3f3-d7c9ce6196d8-cni-path\") pod \"cilium-g2pr4\" (UID: \"eedee628-1dca-4417-a3f3-d7c9ce6196d8\") " pod="kube-system/cilium-g2pr4" Apr 12 18:58:47.944742 kubelet[2616]: I0412 18:58:47.944698 2616 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/eedee628-1dca-4417-a3f3-d7c9ce6196d8-clustermesh-secrets\") pod \"cilium-g2pr4\" (UID: \"eedee628-1dca-4417-a3f3-d7c9ce6196d8\") " pod="kube-system/cilium-g2pr4" Apr 12 18:58:47.944872 kubelet[2616]: I0412 18:58:47.944862 2616 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/eedee628-1dca-4417-a3f3-d7c9ce6196d8-host-proc-sys-kernel\") pod \"cilium-g2pr4\" (UID: \"eedee628-1dca-4417-a3f3-d7c9ce6196d8\") " pod="kube-system/cilium-g2pr4" Apr 12 18:58:47.945045 kubelet[2616]: I0412 18:58:47.945014 2616 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/eedee628-1dca-4417-a3f3-d7c9ce6196d8-cilium-cgroup\") pod \"cilium-g2pr4\" (UID: \"eedee628-1dca-4417-a3f3-d7c9ce6196d8\") " pod="kube-system/cilium-g2pr4" Apr 12 18:58:47.945196 kubelet[2616]: I0412 18:58:47.945178 2616 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/eedee628-1dca-4417-a3f3-d7c9ce6196d8-xtables-lock\") pod \"cilium-g2pr4\" (UID: \"eedee628-1dca-4417-a3f3-d7c9ce6196d8\") " pod="kube-system/cilium-g2pr4" Apr 12 18:58:47.947115 kubelet[2616]: I0412 18:58:47.945332 2616 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/eedee628-1dca-4417-a3f3-d7c9ce6196d8-cilium-ipsec-secrets\") pod \"cilium-g2pr4\" (UID: \"eedee628-1dca-4417-a3f3-d7c9ce6196d8\") " pod="kube-system/cilium-g2pr4" Apr 12 18:58:47.947115 kubelet[2616]: I0412 18:58:47.945398 2616 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: 
\"kubernetes.io/host-path/eedee628-1dca-4417-a3f3-d7c9ce6196d8-hostproc\") pod \"cilium-g2pr4\" (UID: \"eedee628-1dca-4417-a3f3-d7c9ce6196d8\") " pod="kube-system/cilium-g2pr4" Apr 12 18:58:47.947115 kubelet[2616]: I0412 18:58:47.945455 2616 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/eedee628-1dca-4417-a3f3-d7c9ce6196d8-host-proc-sys-net\") pod \"cilium-g2pr4\" (UID: \"eedee628-1dca-4417-a3f3-d7c9ce6196d8\") " pod="kube-system/cilium-g2pr4" Apr 12 18:58:48.120868 sshd[4445]: Accepted publickey for core from 147.75.109.163 port 34480 ssh2: RSA SHA256:+N1xisw2c2FaZUjSYyTG/z1AiN+MoHtibeEcHRhPKVY Apr 12 18:58:48.122948 sshd[4445]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 18:58:48.128642 systemd-logind[1636]: New session 26 of user core. Apr 12 18:58:48.128858 systemd[1]: Started session-26.scope. Apr 12 18:58:48.212613 env[1645]: time="2024-04-12T18:58:48.212333987Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-g2pr4,Uid:eedee628-1dca-4417-a3f3-d7c9ce6196d8,Namespace:kube-system,Attempt:0,}" Apr 12 18:58:48.237464 env[1645]: time="2024-04-12T18:58:48.237295296Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 12 18:58:48.237846 env[1645]: time="2024-04-12T18:58:48.237808528Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 12 18:58:48.237987 env[1645]: time="2024-04-12T18:58:48.237961607Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 12 18:58:48.238460 env[1645]: time="2024-04-12T18:58:48.238306508Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7f54dc2b1e74cbfc8dd13d0891c884c999605cb8954be2dfe9441898dba73530 pid=4460 runtime=io.containerd.runc.v2 Apr 12 18:58:48.270948 systemd[1]: Started cri-containerd-7f54dc2b1e74cbfc8dd13d0891c884c999605cb8954be2dfe9441898dba73530.scope. Apr 12 18:58:48.329479 env[1645]: time="2024-04-12T18:58:48.329431167Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-g2pr4,Uid:eedee628-1dca-4417-a3f3-d7c9ce6196d8,Namespace:kube-system,Attempt:0,} returns sandbox id \"7f54dc2b1e74cbfc8dd13d0891c884c999605cb8954be2dfe9441898dba73530\"" Apr 12 18:58:48.337396 env[1645]: time="2024-04-12T18:58:48.337353281Z" level=info msg="CreateContainer within sandbox \"7f54dc2b1e74cbfc8dd13d0891c884c999605cb8954be2dfe9441898dba73530\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Apr 12 18:58:48.359551 env[1645]: time="2024-04-12T18:58:48.359492174Z" level=info msg="CreateContainer within sandbox \"7f54dc2b1e74cbfc8dd13d0891c884c999605cb8954be2dfe9441898dba73530\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"30fde20254fe9ccad90c507e62d6413bab1beb7852aff432457e52435f6d2f14\"" Apr 12 18:58:48.363775 env[1645]: time="2024-04-12T18:58:48.363734902Z" level=info msg="StartContainer for \"30fde20254fe9ccad90c507e62d6413bab1beb7852aff432457e52435f6d2f14\"" Apr 12 18:58:48.397520 systemd[1]: Started cri-containerd-30fde20254fe9ccad90c507e62d6413bab1beb7852aff432457e52435f6d2f14.scope. Apr 12 18:58:48.439876 sshd[4445]: pam_unix(sshd:session): session closed for user core Apr 12 18:58:48.443718 systemd-logind[1636]: Session 26 logged out. Waiting for processes to exit. Apr 12 18:58:48.443949 systemd[1]: sshd@25-172.31.18.181:22-147.75.109.163:34480.service: Deactivated successfully. 
Apr 12 18:58:48.444908 systemd[1]: session-26.scope: Deactivated successfully. Apr 12 18:58:48.445990 systemd-logind[1636]: Removed session 26. Apr 12 18:58:48.449669 systemd[1]: cri-containerd-30fde20254fe9ccad90c507e62d6413bab1beb7852aff432457e52435f6d2f14.scope: Deactivated successfully. Apr 12 18:58:48.467285 systemd[1]: Started sshd@26-172.31.18.181:22-147.75.109.163:34494.service. Apr 12 18:58:48.500219 env[1645]: time="2024-04-12T18:58:48.500161397Z" level=info msg="shim disconnected" id=30fde20254fe9ccad90c507e62d6413bab1beb7852aff432457e52435f6d2f14 Apr 12 18:58:48.500646 env[1645]: time="2024-04-12T18:58:48.500612460Z" level=warning msg="cleaning up after shim disconnected" id=30fde20254fe9ccad90c507e62d6413bab1beb7852aff432457e52435f6d2f14 namespace=k8s.io Apr 12 18:58:48.500780 env[1645]: time="2024-04-12T18:58:48.500762313Z" level=info msg="cleaning up dead shim" Apr 12 18:58:48.519092 env[1645]: time="2024-04-12T18:58:48.519042235Z" level=warning msg="cleanup warnings time=\"2024-04-12T18:58:48Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4532 runtime=io.containerd.runc.v2\ntime=\"2024-04-12T18:58:48Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/30fde20254fe9ccad90c507e62d6413bab1beb7852aff432457e52435f6d2f14/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Apr 12 18:58:48.519756 env[1645]: time="2024-04-12T18:58:48.519491758Z" level=error msg="copy shim log" error="read /proc/self/fd/42: file already closed" Apr 12 18:58:48.520711 env[1645]: time="2024-04-12T18:58:48.520658582Z" level=error msg="Failed to pipe stderr of container \"30fde20254fe9ccad90c507e62d6413bab1beb7852aff432457e52435f6d2f14\"" error="reading from a closed fifo" Apr 12 18:58:48.520926 env[1645]: time="2024-04-12T18:58:48.520858906Z" level=error msg="Failed to pipe stdout of container \"30fde20254fe9ccad90c507e62d6413bab1beb7852aff432457e52435f6d2f14\"" error="reading 
from a closed fifo" Apr 12 18:58:48.531227 env[1645]: time="2024-04-12T18:58:48.526551449Z" level=error msg="StartContainer for \"30fde20254fe9ccad90c507e62d6413bab1beb7852aff432457e52435f6d2f14\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Apr 12 18:58:48.531981 kubelet[2616]: E0412 18:58:48.531936 2616 remote_runtime.go:343] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="30fde20254fe9ccad90c507e62d6413bab1beb7852aff432457e52435f6d2f14" Apr 12 18:58:48.532176 kubelet[2616]: E0412 18:58:48.532152 2616 kuberuntime_manager.go:1262] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Apr 12 18:58:48.532176 kubelet[2616]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Apr 12 18:58:48.532176 kubelet[2616]: rm /hostbin/cilium-mount Apr 12 18:58:48.532410 kubelet[2616]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-2ph8w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-g2pr4_kube-system(eedee628-1dca-4417-a3f3-d7c9ce6196d8): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Apr 12 18:58:48.532410 kubelet[2616]: E0412 18:58:48.532229 2616 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc 
create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-g2pr4" podUID="eedee628-1dca-4417-a3f3-d7c9ce6196d8" Apr 12 18:58:48.656671 sshd[4530]: Accepted publickey for core from 147.75.109.163 port 34494 ssh2: RSA SHA256:+N1xisw2c2FaZUjSYyTG/z1AiN+MoHtibeEcHRhPKVY Apr 12 18:58:48.663790 sshd[4530]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Apr 12 18:58:48.683103 systemd[1]: Started session-27.scope. Apr 12 18:58:48.683647 systemd-logind[1636]: New session 27 of user core. Apr 12 18:58:49.094054 env[1645]: time="2024-04-12T18:58:49.094004239Z" level=info msg="StopPodSandbox for \"7f54dc2b1e74cbfc8dd13d0891c884c999605cb8954be2dfe9441898dba73530\"" Apr 12 18:58:49.094333 env[1645]: time="2024-04-12T18:58:49.094295250Z" level=info msg="Container to stop \"30fde20254fe9ccad90c507e62d6413bab1beb7852aff432457e52435f6d2f14\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 12 18:58:49.098494 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7f54dc2b1e74cbfc8dd13d0891c884c999605cb8954be2dfe9441898dba73530-shm.mount: Deactivated successfully. Apr 12 18:58:49.113387 systemd[1]: cri-containerd-7f54dc2b1e74cbfc8dd13d0891c884c999605cb8954be2dfe9441898dba73530.scope: Deactivated successfully. Apr 12 18:58:49.159699 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7f54dc2b1e74cbfc8dd13d0891c884c999605cb8954be2dfe9441898dba73530-rootfs.mount: Deactivated successfully. 
Apr 12 18:58:49.184117 env[1645]: time="2024-04-12T18:58:49.183968154Z" level=info msg="shim disconnected" id=7f54dc2b1e74cbfc8dd13d0891c884c999605cb8954be2dfe9441898dba73530 Apr 12 18:58:49.184117 env[1645]: time="2024-04-12T18:58:49.184035605Z" level=warning msg="cleaning up after shim disconnected" id=7f54dc2b1e74cbfc8dd13d0891c884c999605cb8954be2dfe9441898dba73530 namespace=k8s.io Apr 12 18:58:49.184117 env[1645]: time="2024-04-12T18:58:49.184101340Z" level=info msg="cleaning up dead shim" Apr 12 18:58:49.196832 env[1645]: time="2024-04-12T18:58:49.196777408Z" level=warning msg="cleanup warnings time=\"2024-04-12T18:58:49Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4569 runtime=io.containerd.runc.v2\n" Apr 12 18:58:49.197170 env[1645]: time="2024-04-12T18:58:49.197136541Z" level=info msg="TearDown network for sandbox \"7f54dc2b1e74cbfc8dd13d0891c884c999605cb8954be2dfe9441898dba73530\" successfully" Apr 12 18:58:49.197275 env[1645]: time="2024-04-12T18:58:49.197168204Z" level=info msg="StopPodSandbox for \"7f54dc2b1e74cbfc8dd13d0891c884c999605cb8954be2dfe9441898dba73530\" returns successfully" Apr 12 18:58:49.357551 kubelet[2616]: I0412 18:58:49.357331 2616 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/eedee628-1dca-4417-a3f3-d7c9ce6196d8-cilium-config-path\") pod \"eedee628-1dca-4417-a3f3-d7c9ce6196d8\" (UID: \"eedee628-1dca-4417-a3f3-d7c9ce6196d8\") " Apr 12 18:58:49.358441 kubelet[2616]: I0412 18:58:49.358417 2616 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/eedee628-1dca-4417-a3f3-d7c9ce6196d8-hubble-tls\") pod \"eedee628-1dca-4417-a3f3-d7c9ce6196d8\" (UID: \"eedee628-1dca-4417-a3f3-d7c9ce6196d8\") " Apr 12 18:58:49.358819 kubelet[2616]: I0412 18:58:49.358802 2616 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume 
\"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/eedee628-1dca-4417-a3f3-d7c9ce6196d8-clustermesh-secrets\") pod \"eedee628-1dca-4417-a3f3-d7c9ce6196d8\" (UID: \"eedee628-1dca-4417-a3f3-d7c9ce6196d8\") " Apr 12 18:58:49.359738 kubelet[2616]: I0412 18:58:49.358943 2616 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/eedee628-1dca-4417-a3f3-d7c9ce6196d8-host-proc-sys-net\") pod \"eedee628-1dca-4417-a3f3-d7c9ce6196d8\" (UID: \"eedee628-1dca-4417-a3f3-d7c9ce6196d8\") " Apr 12 18:58:49.359738 kubelet[2616]: I0412 18:58:49.358984 2616 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/eedee628-1dca-4417-a3f3-d7c9ce6196d8-bpf-maps\") pod \"eedee628-1dca-4417-a3f3-d7c9ce6196d8\" (UID: \"eedee628-1dca-4417-a3f3-d7c9ce6196d8\") " Apr 12 18:58:49.359738 kubelet[2616]: I0412 18:58:49.359011 2616 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/eedee628-1dca-4417-a3f3-d7c9ce6196d8-host-proc-sys-kernel\") pod \"eedee628-1dca-4417-a3f3-d7c9ce6196d8\" (UID: \"eedee628-1dca-4417-a3f3-d7c9ce6196d8\") " Apr 12 18:58:49.359738 kubelet[2616]: I0412 18:58:49.359041 2616 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/eedee628-1dca-4417-a3f3-d7c9ce6196d8-cni-path\") pod \"eedee628-1dca-4417-a3f3-d7c9ce6196d8\" (UID: \"eedee628-1dca-4417-a3f3-d7c9ce6196d8\") " Apr 12 18:58:49.359738 kubelet[2616]: I0412 18:58:49.359067 2616 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/eedee628-1dca-4417-a3f3-d7c9ce6196d8-xtables-lock\") pod \"eedee628-1dca-4417-a3f3-d7c9ce6196d8\" (UID: \"eedee628-1dca-4417-a3f3-d7c9ce6196d8\") " Apr 12 18:58:49.359738 kubelet[2616]: 
I0412 18:58:49.359094 2616 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/eedee628-1dca-4417-a3f3-d7c9ce6196d8-cilium-cgroup\") pod \"eedee628-1dca-4417-a3f3-d7c9ce6196d8\" (UID: \"eedee628-1dca-4417-a3f3-d7c9ce6196d8\") " Apr 12 18:58:49.359738 kubelet[2616]: I0412 18:58:49.359123 2616 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/eedee628-1dca-4417-a3f3-d7c9ce6196d8-lib-modules\") pod \"eedee628-1dca-4417-a3f3-d7c9ce6196d8\" (UID: \"eedee628-1dca-4417-a3f3-d7c9ce6196d8\") " Apr 12 18:58:49.359738 kubelet[2616]: I0412 18:58:49.359155 2616 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2ph8w\" (UniqueName: \"kubernetes.io/projected/eedee628-1dca-4417-a3f3-d7c9ce6196d8-kube-api-access-2ph8w\") pod \"eedee628-1dca-4417-a3f3-d7c9ce6196d8\" (UID: \"eedee628-1dca-4417-a3f3-d7c9ce6196d8\") " Apr 12 18:58:49.359738 kubelet[2616]: I0412 18:58:49.359183 2616 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/eedee628-1dca-4417-a3f3-d7c9ce6196d8-cilium-run\") pod \"eedee628-1dca-4417-a3f3-d7c9ce6196d8\" (UID: \"eedee628-1dca-4417-a3f3-d7c9ce6196d8\") " Apr 12 18:58:49.359738 kubelet[2616]: I0412 18:58:49.359215 2616 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/eedee628-1dca-4417-a3f3-d7c9ce6196d8-cilium-ipsec-secrets\") pod \"eedee628-1dca-4417-a3f3-d7c9ce6196d8\" (UID: \"eedee628-1dca-4417-a3f3-d7c9ce6196d8\") " Apr 12 18:58:49.359738 kubelet[2616]: I0412 18:58:49.359242 2616 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/eedee628-1dca-4417-a3f3-d7c9ce6196d8-etc-cni-netd\") pod 
\"eedee628-1dca-4417-a3f3-d7c9ce6196d8\" (UID: \"eedee628-1dca-4417-a3f3-d7c9ce6196d8\") " Apr 12 18:58:49.359738 kubelet[2616]: I0412 18:58:49.359269 2616 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/eedee628-1dca-4417-a3f3-d7c9ce6196d8-hostproc\") pod \"eedee628-1dca-4417-a3f3-d7c9ce6196d8\" (UID: \"eedee628-1dca-4417-a3f3-d7c9ce6196d8\") " Apr 12 18:58:49.359738 kubelet[2616]: I0412 18:58:49.359339 2616 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eedee628-1dca-4417-a3f3-d7c9ce6196d8-hostproc" (OuterVolumeSpecName: "hostproc") pod "eedee628-1dca-4417-a3f3-d7c9ce6196d8" (UID: "eedee628-1dca-4417-a3f3-d7c9ce6196d8"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 12 18:58:49.362542 kubelet[2616]: I0412 18:58:49.362512 2616 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eedee628-1dca-4417-a3f3-d7c9ce6196d8-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "eedee628-1dca-4417-a3f3-d7c9ce6196d8" (UID: "eedee628-1dca-4417-a3f3-d7c9ce6196d8"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 12 18:58:49.362644 kubelet[2616]: I0412 18:58:49.362512 2616 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eedee628-1dca-4417-a3f3-d7c9ce6196d8-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "eedee628-1dca-4417-a3f3-d7c9ce6196d8" (UID: "eedee628-1dca-4417-a3f3-d7c9ce6196d8"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 12 18:58:49.362644 kubelet[2616]: I0412 18:58:49.362596 2616 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eedee628-1dca-4417-a3f3-d7c9ce6196d8-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "eedee628-1dca-4417-a3f3-d7c9ce6196d8" (UID: "eedee628-1dca-4417-a3f3-d7c9ce6196d8"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 12 18:58:49.363729 kubelet[2616]: I0412 18:58:49.363696 2616 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eedee628-1dca-4417-a3f3-d7c9ce6196d8-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "eedee628-1dca-4417-a3f3-d7c9ce6196d8" (UID: "eedee628-1dca-4417-a3f3-d7c9ce6196d8"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 12 18:58:49.369000 kubelet[2616]: I0412 18:58:49.368954 2616 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eedee628-1dca-4417-a3f3-d7c9ce6196d8-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "eedee628-1dca-4417-a3f3-d7c9ce6196d8" (UID: "eedee628-1dca-4417-a3f3-d7c9ce6196d8"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 12 18:58:49.369196 kubelet[2616]: I0412 18:58:49.369093 2616 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eedee628-1dca-4417-a3f3-d7c9ce6196d8-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "eedee628-1dca-4417-a3f3-d7c9ce6196d8" (UID: "eedee628-1dca-4417-a3f3-d7c9ce6196d8"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 12 18:58:49.369650 kubelet[2616]: I0412 18:58:49.369117 2616 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eedee628-1dca-4417-a3f3-d7c9ce6196d8-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "eedee628-1dca-4417-a3f3-d7c9ce6196d8" (UID: "eedee628-1dca-4417-a3f3-d7c9ce6196d8"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 12 18:58:49.369754 kubelet[2616]: I0412 18:58:49.369148 2616 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eedee628-1dca-4417-a3f3-d7c9ce6196d8-cni-path" (OuterVolumeSpecName: "cni-path") pod "eedee628-1dca-4417-a3f3-d7c9ce6196d8" (UID: "eedee628-1dca-4417-a3f3-d7c9ce6196d8"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 12 18:58:49.380782 kubelet[2616]: I0412 18:58:49.377548 2616 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eedee628-1dca-4417-a3f3-d7c9ce6196d8-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "eedee628-1dca-4417-a3f3-d7c9ce6196d8" (UID: "eedee628-1dca-4417-a3f3-d7c9ce6196d8"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 12 18:58:49.380782 kubelet[2616]: I0412 18:58:49.377869 2616 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eedee628-1dca-4417-a3f3-d7c9ce6196d8-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "eedee628-1dca-4417-a3f3-d7c9ce6196d8" (UID: "eedee628-1dca-4417-a3f3-d7c9ce6196d8"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 12 18:58:49.377994 systemd[1]: var-lib-kubelet-pods-eedee628\x2d1dca\x2d4417\x2da3f3\x2dd7c9ce6196d8-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Apr 12 18:58:49.394406 kubelet[2616]: I0412 18:58:49.394354 2616 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eedee628-1dca-4417-a3f3-d7c9ce6196d8-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "eedee628-1dca-4417-a3f3-d7c9ce6196d8" (UID: "eedee628-1dca-4417-a3f3-d7c9ce6196d8"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 12 18:58:49.403219 systemd[1]: var-lib-kubelet-pods-eedee628\x2d1dca\x2d4417\x2da3f3\x2dd7c9ce6196d8-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Apr 12 18:58:49.407371 kubelet[2616]: I0412 18:58:49.407333 2616 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eedee628-1dca-4417-a3f3-d7c9ce6196d8-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "eedee628-1dca-4417-a3f3-d7c9ce6196d8" (UID: "eedee628-1dca-4417-a3f3-d7c9ce6196d8"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 12 18:58:49.414272 kubelet[2616]: I0412 18:58:49.414161 2616 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eedee628-1dca-4417-a3f3-d7c9ce6196d8-kube-api-access-2ph8w" (OuterVolumeSpecName: "kube-api-access-2ph8w") pod "eedee628-1dca-4417-a3f3-d7c9ce6196d8" (UID: "eedee628-1dca-4417-a3f3-d7c9ce6196d8"). InnerVolumeSpecName "kube-api-access-2ph8w". 
PluginName "kubernetes.io/projected", VolumeGidValue ""
Apr 12 18:58:49.416762 kubelet[2616]: I0412 18:58:49.416723 2616 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eedee628-1dca-4417-a3f3-d7c9ce6196d8-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "eedee628-1dca-4417-a3f3-d7c9ce6196d8" (UID: "eedee628-1dca-4417-a3f3-d7c9ce6196d8"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Apr 12 18:58:49.460383 kubelet[2616]: I0412 18:58:49.460342 2616 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/eedee628-1dca-4417-a3f3-d7c9ce6196d8-cilium-cgroup\") on node \"ip-172-31-18-181\" DevicePath \"\""
Apr 12 18:58:49.460383 kubelet[2616]: I0412 18:58:49.460428 2616 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/eedee628-1dca-4417-a3f3-d7c9ce6196d8-lib-modules\") on node \"ip-172-31-18-181\" DevicePath \"\""
Apr 12 18:58:49.460779 kubelet[2616]: I0412 18:58:49.460449 2616 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-2ph8w\" (UniqueName: \"kubernetes.io/projected/eedee628-1dca-4417-a3f3-d7c9ce6196d8-kube-api-access-2ph8w\") on node \"ip-172-31-18-181\" DevicePath \"\""
Apr 12 18:58:49.460779 kubelet[2616]: I0412 18:58:49.460465 2616 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/eedee628-1dca-4417-a3f3-d7c9ce6196d8-cilium-run\") on node \"ip-172-31-18-181\" DevicePath \"\""
Apr 12 18:58:49.460779 kubelet[2616]: I0412 18:58:49.460506 2616 reconciler_common.go:300] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/eedee628-1dca-4417-a3f3-d7c9ce6196d8-cilium-ipsec-secrets\") on node \"ip-172-31-18-181\" DevicePath \"\""
Apr 12 18:58:49.460779 kubelet[2616]: I0412 18:58:49.460518 2616 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/eedee628-1dca-4417-a3f3-d7c9ce6196d8-hostproc\") on node \"ip-172-31-18-181\" DevicePath \"\""
Apr 12 18:58:49.460779 kubelet[2616]: I0412 18:58:49.460532 2616 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/eedee628-1dca-4417-a3f3-d7c9ce6196d8-etc-cni-netd\") on node \"ip-172-31-18-181\" DevicePath \"\""
Apr 12 18:58:49.460779 kubelet[2616]: I0412 18:58:49.460544 2616 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/eedee628-1dca-4417-a3f3-d7c9ce6196d8-cilium-config-path\") on node \"ip-172-31-18-181\" DevicePath \"\""
Apr 12 18:58:49.460779 kubelet[2616]: I0412 18:58:49.460557 2616 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/eedee628-1dca-4417-a3f3-d7c9ce6196d8-hubble-tls\") on node \"ip-172-31-18-181\" DevicePath \"\""
Apr 12 18:58:49.460779 kubelet[2616]: I0412 18:58:49.460634 2616 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/eedee628-1dca-4417-a3f3-d7c9ce6196d8-clustermesh-secrets\") on node \"ip-172-31-18-181\" DevicePath \"\""
Apr 12 18:58:49.460779 kubelet[2616]: I0412 18:58:49.460650 2616 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/eedee628-1dca-4417-a3f3-d7c9ce6196d8-host-proc-sys-net\") on node \"ip-172-31-18-181\" DevicePath \"\""
Apr 12 18:58:49.460779 kubelet[2616]: I0412 18:58:49.460664 2616 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/eedee628-1dca-4417-a3f3-d7c9ce6196d8-bpf-maps\") on node \"ip-172-31-18-181\" DevicePath \"\""
Apr 12 18:58:49.460779 kubelet[2616]: I0412 18:58:49.460678 2616 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/eedee628-1dca-4417-a3f3-d7c9ce6196d8-host-proc-sys-kernel\") on node \"ip-172-31-18-181\" DevicePath \"\""
Apr 12 18:58:49.460779 kubelet[2616]: I0412 18:58:49.460692 2616 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/eedee628-1dca-4417-a3f3-d7c9ce6196d8-cni-path\") on node \"ip-172-31-18-181\" DevicePath \"\""
Apr 12 18:58:49.460779 kubelet[2616]: I0412 18:58:49.460751 2616 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/eedee628-1dca-4417-a3f3-d7c9ce6196d8-xtables-lock\") on node \"ip-172-31-18-181\" DevicePath \"\""
Apr 12 18:58:49.618979 kubelet[2616]: E0412 18:58:49.618878 2616 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 12 18:58:50.058212 systemd[1]: var-lib-kubelet-pods-eedee628\x2d1dca\x2d4417\x2da3f3\x2dd7c9ce6196d8-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d2ph8w.mount: Deactivated successfully.
Apr 12 18:58:50.058344 systemd[1]: var-lib-kubelet-pods-eedee628\x2d1dca\x2d4417\x2da3f3\x2dd7c9ce6196d8-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Apr 12 18:58:50.098622 kubelet[2616]: I0412 18:58:50.098597 2616 scope.go:117] "RemoveContainer" containerID="30fde20254fe9ccad90c507e62d6413bab1beb7852aff432457e52435f6d2f14"
Apr 12 18:58:50.103112 env[1645]: time="2024-04-12T18:58:50.102683992Z" level=info msg="RemoveContainer for \"30fde20254fe9ccad90c507e62d6413bab1beb7852aff432457e52435f6d2f14\""
Apr 12 18:58:50.106395 systemd[1]: Removed slice kubepods-burstable-podeedee628_1dca_4417_a3f3_d7c9ce6196d8.slice.
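The kubelet "Volume detached" entries above all share one klog message shape: a volume name, a `UniqueName` of the form `<plugin>/<pod-uid>-<volume>`, and a node name. A minimal sketch of pulling the volume name and plugin out of such a line (the regex and helper name are assumptions based only on the lines above, not an official parser; the journal escapes inner quotes as `\"`):

```python
import re

# Hypothetical helper: extract (volume, plugin) from a kubelet
# "Volume detached" journal line as rendered above, where quotes
# inside the klog message appear as literal \" sequences.
DETACHED_RE = re.compile(
    r'Volume detached for volume \\"(?P<volume>[^"\\]+)\\" '
    r'\(UniqueName: \\"(?P<plugin>kubernetes\.io/[^/]+)/'
)

def parse_detached(line: str):
    m = DETACHED_RE.search(line)
    if m is None:
        return None
    return m.group("volume"), m.group("plugin")

sample = ('kubelet[2616]: I0412 18:58:49.460342 2616 reconciler_common.go:300] '
          '"Volume detached for volume \\"cilium-cgroup\\" (UniqueName: '
          '\\"kubernetes.io/host-path/eedee628-1dca-4417-a3f3-d7c9ce6196d8-cilium-cgroup\\") '
          'on node \\"ip-172-31-18-181\\" DevicePath \\"\\""')
parsed = parse_detached(sample)
```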
Apr 12 18:58:50.111673 env[1645]: time="2024-04-12T18:58:50.111621150Z" level=info msg="RemoveContainer for \"30fde20254fe9ccad90c507e62d6413bab1beb7852aff432457e52435f6d2f14\" returns successfully"
Apr 12 18:58:50.165352 kubelet[2616]: I0412 18:58:50.165317 2616 topology_manager.go:215] "Topology Admit Handler" podUID="196efae0-2395-44f6-a96a-a8b838b9b6b5" podNamespace="kube-system" podName="cilium-gcbvd"
Apr 12 18:58:50.165733 kubelet[2616]: E0412 18:58:50.165714 2616 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="eedee628-1dca-4417-a3f3-d7c9ce6196d8" containerName="mount-cgroup"
Apr 12 18:58:50.165994 kubelet[2616]: I0412 18:58:50.165980 2616 memory_manager.go:354] "RemoveStaleState removing state" podUID="eedee628-1dca-4417-a3f3-d7c9ce6196d8" containerName="mount-cgroup"
Apr 12 18:58:50.176383 systemd[1]: Created slice kubepods-burstable-pod196efae0_2395_44f6_a96a_a8b838b9b6b5.slice.
Apr 12 18:58:50.269490 kubelet[2616]: I0412 18:58:50.269409 2616 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/196efae0-2395-44f6-a96a-a8b838b9b6b5-clustermesh-secrets\") pod \"cilium-gcbvd\" (UID: \"196efae0-2395-44f6-a96a-a8b838b9b6b5\") " pod="kube-system/cilium-gcbvd"
Apr 12 18:58:50.272941 kubelet[2616]: I0412 18:58:50.272857 2616 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/196efae0-2395-44f6-a96a-a8b838b9b6b5-xtables-lock\") pod \"cilium-gcbvd\" (UID: \"196efae0-2395-44f6-a96a-a8b838b9b6b5\") " pod="kube-system/cilium-gcbvd"
Apr 12 18:58:50.273377 kubelet[2616]: I0412 18:58:50.273362 2616 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/196efae0-2395-44f6-a96a-a8b838b9b6b5-bpf-maps\") pod \"cilium-gcbvd\" (UID: \"196efae0-2395-44f6-a96a-a8b838b9b6b5\") " pod="kube-system/cilium-gcbvd"
Apr 12 18:58:50.273768 kubelet[2616]: I0412 18:58:50.273740 2616 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nddwd\" (UniqueName: \"kubernetes.io/projected/196efae0-2395-44f6-a96a-a8b838b9b6b5-kube-api-access-nddwd\") pod \"cilium-gcbvd\" (UID: \"196efae0-2395-44f6-a96a-a8b838b9b6b5\") " pod="kube-system/cilium-gcbvd"
Apr 12 18:58:50.274214 kubelet[2616]: I0412 18:58:50.274195 2616 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/196efae0-2395-44f6-a96a-a8b838b9b6b5-lib-modules\") pod \"cilium-gcbvd\" (UID: \"196efae0-2395-44f6-a96a-a8b838b9b6b5\") " pod="kube-system/cilium-gcbvd"
Apr 12 18:58:50.274593 kubelet[2616]: I0412 18:58:50.274546 2616 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/196efae0-2395-44f6-a96a-a8b838b9b6b5-cilium-config-path\") pod \"cilium-gcbvd\" (UID: \"196efae0-2395-44f6-a96a-a8b838b9b6b5\") " pod="kube-system/cilium-gcbvd"
Apr 12 18:58:50.275067 kubelet[2616]: I0412 18:58:50.274988 2616 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/196efae0-2395-44f6-a96a-a8b838b9b6b5-hostproc\") pod \"cilium-gcbvd\" (UID: \"196efae0-2395-44f6-a96a-a8b838b9b6b5\") " pod="kube-system/cilium-gcbvd"
Apr 12 18:58:50.278585 kubelet[2616]: I0412 18:58:50.275386 2616 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/196efae0-2395-44f6-a96a-a8b838b9b6b5-cni-path\") pod \"cilium-gcbvd\" (UID: \"196efae0-2395-44f6-a96a-a8b838b9b6b5\") " pod="kube-system/cilium-gcbvd"
Apr 12 18:58:50.278876 kubelet[2616]: I0412 18:58:50.278841 2616 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/196efae0-2395-44f6-a96a-a8b838b9b6b5-etc-cni-netd\") pod \"cilium-gcbvd\" (UID: \"196efae0-2395-44f6-a96a-a8b838b9b6b5\") " pod="kube-system/cilium-gcbvd"
Apr 12 18:58:50.278986 kubelet[2616]: I0412 18:58:50.278902 2616 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/196efae0-2395-44f6-a96a-a8b838b9b6b5-host-proc-sys-net\") pod \"cilium-gcbvd\" (UID: \"196efae0-2395-44f6-a96a-a8b838b9b6b5\") " pod="kube-system/cilium-gcbvd"
Apr 12 18:58:50.278986 kubelet[2616]: I0412 18:58:50.278939 2616 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/196efae0-2395-44f6-a96a-a8b838b9b6b5-cilium-run\") pod \"cilium-gcbvd\" (UID: \"196efae0-2395-44f6-a96a-a8b838b9b6b5\") " pod="kube-system/cilium-gcbvd"
Apr 12 18:58:50.278986 kubelet[2616]: I0412 18:58:50.278967 2616 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/196efae0-2395-44f6-a96a-a8b838b9b6b5-hubble-tls\") pod \"cilium-gcbvd\" (UID: \"196efae0-2395-44f6-a96a-a8b838b9b6b5\") " pod="kube-system/cilium-gcbvd"
Apr 12 18:58:50.279141 kubelet[2616]: I0412 18:58:50.278996 2616 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/196efae0-2395-44f6-a96a-a8b838b9b6b5-cilium-cgroup\") pod \"cilium-gcbvd\" (UID: \"196efae0-2395-44f6-a96a-a8b838b9b6b5\") " pod="kube-system/cilium-gcbvd"
Apr 12 18:58:50.279141 kubelet[2616]: I0412 18:58:50.279028 2616 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/196efae0-2395-44f6-a96a-a8b838b9b6b5-cilium-ipsec-secrets\") pod \"cilium-gcbvd\" (UID: \"196efae0-2395-44f6-a96a-a8b838b9b6b5\") " pod="kube-system/cilium-gcbvd"
Apr 12 18:58:50.279141 kubelet[2616]: I0412 18:58:50.279059 2616 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/196efae0-2395-44f6-a96a-a8b838b9b6b5-host-proc-sys-kernel\") pod \"cilium-gcbvd\" (UID: \"196efae0-2395-44f6-a96a-a8b838b9b6b5\") " pod="kube-system/cilium-gcbvd"
Apr 12 18:58:50.483198 kubelet[2616]: I0412 18:58:50.483158 2616 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="eedee628-1dca-4417-a3f3-d7c9ce6196d8" path="/var/lib/kubelet/pods/eedee628-1dca-4417-a3f3-d7c9ce6196d8/volumes"
Apr 12 18:58:50.497727 env[1645]: time="2024-04-12T18:58:50.497673058Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-gcbvd,Uid:196efae0-2395-44f6-a96a-a8b838b9b6b5,Namespace:kube-system,Attempt:0,}"
Apr 12 18:58:50.519210 env[1645]: time="2024-04-12T18:58:50.518152804Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 12 18:58:50.519210 env[1645]: time="2024-04-12T18:58:50.518212941Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 12 18:58:50.519210 env[1645]: time="2024-04-12T18:58:50.518228712Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 12 18:58:50.519210 env[1645]: time="2024-04-12T18:58:50.518601682Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/29bfef1041a8a99043bbccb90e875749611a4cc6db36fce9926bcf1629b3c373 pid=4598 runtime=io.containerd.runc.v2
Apr 12 18:58:50.534628 systemd[1]: Started cri-containerd-29bfef1041a8a99043bbccb90e875749611a4cc6db36fce9926bcf1629b3c373.scope.
Apr 12 18:58:50.574548 env[1645]: time="2024-04-12T18:58:50.574503363Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-gcbvd,Uid:196efae0-2395-44f6-a96a-a8b838b9b6b5,Namespace:kube-system,Attempt:0,} returns sandbox id \"29bfef1041a8a99043bbccb90e875749611a4cc6db36fce9926bcf1629b3c373\""
Apr 12 18:58:50.581033 env[1645]: time="2024-04-12T18:58:50.581001347Z" level=info msg="CreateContainer within sandbox \"29bfef1041a8a99043bbccb90e875749611a4cc6db36fce9926bcf1629b3c373\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Apr 12 18:58:50.599678 env[1645]: time="2024-04-12T18:58:50.599625184Z" level=info msg="CreateContainer within sandbox \"29bfef1041a8a99043bbccb90e875749611a4cc6db36fce9926bcf1629b3c373\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"cd0c8cd192030582eb86b2c09bfcb77436447ad5c6f22d6c519dc867fbd5c532\""
Apr 12 18:58:50.602027 env[1645]: time="2024-04-12T18:58:50.601894330Z" level=info msg="StartContainer for \"cd0c8cd192030582eb86b2c09bfcb77436447ad5c6f22d6c519dc867fbd5c532\""
Apr 12 18:58:50.622830 systemd[1]: Started cri-containerd-cd0c8cd192030582eb86b2c09bfcb77436447ad5c6f22d6c519dc867fbd5c532.scope.
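Each containerd "CreateContainer … returns container id" reply above carries a 64-character hex container ID that later scope, shim, and mount entries reuse. A small sketch for collecting those IDs from journal lines (the regex is an assumption tailored to the exact rendering above, with quotes escaped as `\"`):

```python
import re

# Hypothetical helper: collect container IDs announced by
# containerd CreateContainer replies, in order of appearance.
CONTAINER_ID_RE = re.compile(r'returns container id \\"([0-9a-f]{64})\\"')

def container_ids(lines):
    out = []
    for line in lines:
        m = CONTAINER_ID_RE.search(line)
        if m:
            out.append(m.group(1))
    return out

sample = ('msg="CreateContainer within sandbox ... returns container id '
          '\\"cd0c8cd192030582eb86b2c09bfcb77436447ad5c6f22d6c519dc867fbd5c532\\""')
ids = container_ids([sample, "unrelated entry"])
```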
Apr 12 18:58:50.663895 env[1645]: time="2024-04-12T18:58:50.663852373Z" level=info msg="StartContainer for \"cd0c8cd192030582eb86b2c09bfcb77436447ad5c6f22d6c519dc867fbd5c532\" returns successfully"
Apr 12 18:58:50.679879 systemd[1]: cri-containerd-cd0c8cd192030582eb86b2c09bfcb77436447ad5c6f22d6c519dc867fbd5c532.scope: Deactivated successfully.
Apr 12 18:58:50.726597 env[1645]: time="2024-04-12T18:58:50.726520789Z" level=info msg="shim disconnected" id=cd0c8cd192030582eb86b2c09bfcb77436447ad5c6f22d6c519dc867fbd5c532
Apr 12 18:58:50.726905 env[1645]: time="2024-04-12T18:58:50.726639861Z" level=warning msg="cleaning up after shim disconnected" id=cd0c8cd192030582eb86b2c09bfcb77436447ad5c6f22d6c519dc867fbd5c532 namespace=k8s.io
Apr 12 18:58:50.726905 env[1645]: time="2024-04-12T18:58:50.726655828Z" level=info msg="cleaning up dead shim"
Apr 12 18:58:50.737527 env[1645]: time="2024-04-12T18:58:50.737395992Z" level=warning msg="cleanup warnings time=\"2024-04-12T18:58:50Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4685 runtime=io.containerd.runc.v2\n"
Apr 12 18:58:51.107534 env[1645]: time="2024-04-12T18:58:51.107419407Z" level=info msg="CreateContainer within sandbox \"29bfef1041a8a99043bbccb90e875749611a4cc6db36fce9926bcf1629b3c373\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Apr 12 18:58:51.135879 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2566699737.mount: Deactivated successfully.
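The systemd mount-unit names above (e.g. `var-lib-containerd-tmpmounts-containerd\x2dmount2566699737.mount`) encode mount-point paths: `-` separates path components and `\x2d` stands for a literal dash. A simplified decoder handling only the escapes visible in this log (real systemd escaping has more rules, e.g. `\x7e` for `~` as seen in the kubelet volume units; this is a sketch, not a replacement for `systemd-escape --unescape --path`):

```python
def unescape_unit_path(unit: str) -> str:
    # Sketch of the inverse of systemd path escaping for .mount units:
    # strip the suffix, split components on '-', then decode '\x2d' -> '-'.
    # Only the sequences appearing in the log above are handled.
    if unit.endswith(".mount"):
        unit = unit[: -len(".mount")]
    parts = unit.split("-")
    return "/" + "/".join(p.replace("\\x2d", "-") for p in parts)

decoded = unescape_unit_path(
    r"var-lib-containerd-tmpmounts-containerd\x2dmount2566699737.mount"
)
```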
Apr 12 18:58:51.150775 env[1645]: time="2024-04-12T18:58:51.150714385Z" level=info msg="CreateContainer within sandbox \"29bfef1041a8a99043bbccb90e875749611a4cc6db36fce9926bcf1629b3c373\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"e6cb3eb8e2fad15bfc2ec3dbcc0291475dab05c505484713020e5a0bd9caa9c3\""
Apr 12 18:58:51.151708 env[1645]: time="2024-04-12T18:58:51.151669456Z" level=info msg="StartContainer for \"e6cb3eb8e2fad15bfc2ec3dbcc0291475dab05c505484713020e5a0bd9caa9c3\""
Apr 12 18:58:51.253776 systemd[1]: Started cri-containerd-e6cb3eb8e2fad15bfc2ec3dbcc0291475dab05c505484713020e5a0bd9caa9c3.scope.
Apr 12 18:58:51.392225 env[1645]: time="2024-04-12T18:58:51.392108166Z" level=info msg="StartContainer for \"e6cb3eb8e2fad15bfc2ec3dbcc0291475dab05c505484713020e5a0bd9caa9c3\" returns successfully"
Apr 12 18:58:51.448762 systemd[1]: cri-containerd-e6cb3eb8e2fad15bfc2ec3dbcc0291475dab05c505484713020e5a0bd9caa9c3.scope: Deactivated successfully.
Apr 12 18:58:51.508272 env[1645]: time="2024-04-12T18:58:51.508216362Z" level=info msg="shim disconnected" id=e6cb3eb8e2fad15bfc2ec3dbcc0291475dab05c505484713020e5a0bd9caa9c3
Apr 12 18:58:51.508748 env[1645]: time="2024-04-12T18:58:51.508714641Z" level=warning msg="cleaning up after shim disconnected" id=e6cb3eb8e2fad15bfc2ec3dbcc0291475dab05c505484713020e5a0bd9caa9c3 namespace=k8s.io
Apr 12 18:58:51.508748 env[1645]: time="2024-04-12T18:58:51.508744159Z" level=info msg="cleaning up dead shim"
Apr 12 18:58:51.526119 env[1645]: time="2024-04-12T18:58:51.526025469Z" level=warning msg="cleanup warnings time=\"2024-04-12T18:58:51Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4749 runtime=io.containerd.runc.v2\n"
Apr 12 18:58:51.620542 kubelet[2616]: W0412 18:58:51.618607 2616 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeedee628_1dca_4417_a3f3_d7c9ce6196d8.slice/cri-containerd-30fde20254fe9ccad90c507e62d6413bab1beb7852aff432457e52435f6d2f14.scope WatchSource:0}: container "30fde20254fe9ccad90c507e62d6413bab1beb7852aff432457e52435f6d2f14" in namespace "k8s.io": not found
Apr 12 18:58:52.060183 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e6cb3eb8e2fad15bfc2ec3dbcc0291475dab05c505484713020e5a0bd9caa9c3-rootfs.mount: Deactivated successfully.
Apr 12 18:58:52.113320 env[1645]: time="2024-04-12T18:58:52.113272745Z" level=info msg="CreateContainer within sandbox \"29bfef1041a8a99043bbccb90e875749611a4cc6db36fce9926bcf1629b3c373\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Apr 12 18:58:52.149371 env[1645]: time="2024-04-12T18:58:52.149166277Z" level=info msg="CreateContainer within sandbox \"29bfef1041a8a99043bbccb90e875749611a4cc6db36fce9926bcf1629b3c373\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"5547e3c6f7e1a09f4178a9726d346013e66f257c177c48d62678d09fb94e684a\""
Apr 12 18:58:52.151609 env[1645]: time="2024-04-12T18:58:52.150128142Z" level=info msg="StartContainer for \"5547e3c6f7e1a09f4178a9726d346013e66f257c177c48d62678d09fb94e684a\""
Apr 12 18:58:52.194145 systemd[1]: Started cri-containerd-5547e3c6f7e1a09f4178a9726d346013e66f257c177c48d62678d09fb94e684a.scope.
Apr 12 18:58:52.272604 env[1645]: time="2024-04-12T18:58:52.271449545Z" level=info msg="StartContainer for \"5547e3c6f7e1a09f4178a9726d346013e66f257c177c48d62678d09fb94e684a\" returns successfully"
Apr 12 18:58:52.280342 systemd[1]: cri-containerd-5547e3c6f7e1a09f4178a9726d346013e66f257c177c48d62678d09fb94e684a.scope: Deactivated successfully.
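The "Failed to process watch event" warnings reference cgroup paths such as `/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod<uid>.slice/...`, where the kubelet's systemd cgroup driver writes the pod UID with its dashes replaced by underscores. A sketch recovering the UID from such a path (regex is an assumption based on the slice names in this log; QoS-class slice prefixes other than the ones shown may exist):

```python
import re

# Hypothetical helper: map a kubepods cgroup path back to a pod UID.
# The systemd cgroup driver encodes the UID with '-' replaced by '_'.
SLICE_RE = re.compile(r'kubepods-(?:burstable-|besteffort-)?pod([0-9a-f_]+)\.slice')

def pod_uid_from_cgroup(path: str):
    m = SLICE_RE.search(path)
    return m.group(1).replace("_", "-") if m else None

uid = pod_uid_from_cgroup(
    "/kubepods.slice/kubepods-burstable.slice/"
    "kubepods-burstable-podeedee628_1dca_4417_a3f3_d7c9ce6196d8.slice/"
    "cri-containerd-30fde20254fe9ccad90c507e62d6413bab1beb7852aff432457e52435f6d2f14.scope"
)
```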
Apr 12 18:58:52.341585 env[1645]: time="2024-04-12T18:58:52.341006031Z" level=info msg="shim disconnected" id=5547e3c6f7e1a09f4178a9726d346013e66f257c177c48d62678d09fb94e684a
Apr 12 18:58:52.342022 env[1645]: time="2024-04-12T18:58:52.341990945Z" level=warning msg="cleaning up after shim disconnected" id=5547e3c6f7e1a09f4178a9726d346013e66f257c177c48d62678d09fb94e684a namespace=k8s.io
Apr 12 18:58:52.342176 env[1645]: time="2024-04-12T18:58:52.342158226Z" level=info msg="cleaning up dead shim"
Apr 12 18:58:52.355613 env[1645]: time="2024-04-12T18:58:52.355552308Z" level=warning msg="cleanup warnings time=\"2024-04-12T18:58:52Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4807 runtime=io.containerd.runc.v2\n"
Apr 12 18:58:53.068331 systemd[1]: run-containerd-runc-k8s.io-5547e3c6f7e1a09f4178a9726d346013e66f257c177c48d62678d09fb94e684a-runc.Zrfhui.mount: Deactivated successfully.
Apr 12 18:58:53.069058 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5547e3c6f7e1a09f4178a9726d346013e66f257c177c48d62678d09fb94e684a-rootfs.mount: Deactivated successfully.
Apr 12 18:58:53.127755 env[1645]: time="2024-04-12T18:58:53.127710403Z" level=info msg="CreateContainer within sandbox \"29bfef1041a8a99043bbccb90e875749611a4cc6db36fce9926bcf1629b3c373\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Apr 12 18:58:53.156229 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3363779400.mount: Deactivated successfully.
Apr 12 18:58:53.165412 env[1645]: time="2024-04-12T18:58:53.165353964Z" level=info msg="CreateContainer within sandbox \"29bfef1041a8a99043bbccb90e875749611a4cc6db36fce9926bcf1629b3c373\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"5bae605fc71b0765782e46af6125d0a8f923678d450061807b01567edc21c46a\""
Apr 12 18:58:53.166523 env[1645]: time="2024-04-12T18:58:53.166255953Z" level=info msg="StartContainer for \"5bae605fc71b0765782e46af6125d0a8f923678d450061807b01567edc21c46a\""
Apr 12 18:58:53.206053 systemd[1]: Started cri-containerd-5bae605fc71b0765782e46af6125d0a8f923678d450061807b01567edc21c46a.scope.
Apr 12 18:58:53.266492 systemd[1]: cri-containerd-5bae605fc71b0765782e46af6125d0a8f923678d450061807b01567edc21c46a.scope: Deactivated successfully.
Apr 12 18:58:53.268900 env[1645]: time="2024-04-12T18:58:53.268729268Z" level=info msg="StartContainer for \"5bae605fc71b0765782e46af6125d0a8f923678d450061807b01567edc21c46a\" returns successfully"
Apr 12 18:58:53.313261 env[1645]: time="2024-04-12T18:58:53.313206803Z" level=info msg="shim disconnected" id=5bae605fc71b0765782e46af6125d0a8f923678d450061807b01567edc21c46a
Apr 12 18:58:53.313800 env[1645]: time="2024-04-12T18:58:53.313379857Z" level=warning msg="cleaning up after shim disconnected" id=5bae605fc71b0765782e46af6125d0a8f923678d450061807b01567edc21c46a namespace=k8s.io
Apr 12 18:58:53.313800 env[1645]: time="2024-04-12T18:58:53.313400325Z" level=info msg="cleaning up dead shim"
Apr 12 18:58:53.329285 env[1645]: time="2024-04-12T18:58:53.327548802Z" level=warning msg="cleanup warnings time=\"2024-04-12T18:58:53Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4866 runtime=io.containerd.runc.v2\n"
Apr 12 18:58:54.063685 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5bae605fc71b0765782e46af6125d0a8f923678d450061807b01567edc21c46a-rootfs.mount: Deactivated successfully.
Apr 12 18:58:54.139301 env[1645]: time="2024-04-12T18:58:54.139252806Z" level=info msg="CreateContainer within sandbox \"29bfef1041a8a99043bbccb90e875749611a4cc6db36fce9926bcf1629b3c373\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Apr 12 18:58:54.167874 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1780345153.mount: Deactivated successfully.
Apr 12 18:58:54.183427 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount143273545.mount: Deactivated successfully.
Apr 12 18:58:54.193786 env[1645]: time="2024-04-12T18:58:54.193321330Z" level=info msg="CreateContainer within sandbox \"29bfef1041a8a99043bbccb90e875749611a4cc6db36fce9926bcf1629b3c373\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"a0b5e88c4fe0a02ebe8c2ee9dcc098591c62edf47da5a7ea031a9f7567ef38a2\""
Apr 12 18:58:54.196489 env[1645]: time="2024-04-12T18:58:54.195051103Z" level=info msg="StartContainer for \"a0b5e88c4fe0a02ebe8c2ee9dcc098591c62edf47da5a7ea031a9f7567ef38a2\""
Apr 12 18:58:54.250336 systemd[1]: Started cri-containerd-a0b5e88c4fe0a02ebe8c2ee9dcc098591c62edf47da5a7ea031a9f7567ef38a2.scope.
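By this point the sandbox has run its containers in order: mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs, clean-cilium-state, and finally cilium-agent. That ordering can be recovered from the `&ContainerMetadata{Name:...}` fragments in the CreateContainer entries; a sketch (regex is an assumption fitted to the rendering above):

```python
import re

# Hypothetical helper: first-seen order of container names mentioned
# in containerd ContainerMetadata fragments.
META_NAME_RE = re.compile(r'&ContainerMetadata\{Name:([A-Za-z0-9-]+),Attempt:\d+,\}')

def container_sequence(lines):
    seen = []
    for line in lines:
        for name in META_NAME_RE.findall(line):
            if name not in seen:
                seen.append(name)
    return seen

order = container_sequence([
    'msg="CreateContainer ... for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"',
    'msg="CreateContainer ... for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id ..."',
    'msg="CreateContainer ... for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"',
])
```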
Apr 12 18:58:54.289999 env[1645]: time="2024-04-12T18:58:54.289942018Z" level=info msg="StartContainer for \"a0b5e88c4fe0a02ebe8c2ee9dcc098591c62edf47da5a7ea031a9f7567ef38a2\" returns successfully"
Apr 12 18:58:54.482492 kubelet[2616]: E0412 18:58:54.482455 2616 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-76f75df574-dbnj5" podUID="c087baf4-492f-4b02-9df5-a6e609fa7bbb"
Apr 12 18:58:54.745792 kubelet[2616]: W0412 18:58:54.745666 2616 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod196efae0_2395_44f6_a96a_a8b838b9b6b5.slice/cri-containerd-cd0c8cd192030582eb86b2c09bfcb77436447ad5c6f22d6c519dc867fbd5c532.scope WatchSource:0}: task cd0c8cd192030582eb86b2c09bfcb77436447ad5c6f22d6c519dc867fbd5c532 not found: not found
Apr 12 18:58:55.238630 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Apr 12 18:58:57.859626 kubelet[2616]: W0412 18:58:57.856448 2616 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod196efae0_2395_44f6_a96a_a8b838b9b6b5.slice/cri-containerd-e6cb3eb8e2fad15bfc2ec3dbcc0291475dab05c505484713020e5a0bd9caa9c3.scope WatchSource:0}: task e6cb3eb8e2fad15bfc2ec3dbcc0291475dab05c505484713020e5a0bd9caa9c3 not found: not found
Apr 12 18:58:58.895180 systemd-networkd[1459]: lxc_health: Link UP
Apr 12 18:58:58.906103 (udev-worker)[5438]: Network interface NamePolicy= disabled on kernel command line.
Apr 12 18:58:58.921983 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Apr 12 18:58:58.921770 systemd-networkd[1459]: lxc_health: Gained carrier
Apr 12 18:58:59.631763 systemd[1]: run-containerd-runc-k8s.io-a0b5e88c4fe0a02ebe8c2ee9dcc098591c62edf47da5a7ea031a9f7567ef38a2-runc.HfPVSk.mount: Deactivated successfully.
Apr 12 18:59:00.529309 kubelet[2616]: I0412 18:59:00.529259 2616 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-gcbvd" podStartSLOduration=10.529188931 podStartE2EDuration="10.529188931s" podCreationTimestamp="2024-04-12 18:58:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-04-12 18:58:55.163081998 +0000 UTC m=+131.017712615" watchObservedRunningTime="2024-04-12 18:59:00.529188931 +0000 UTC m=+136.383819547"
Apr 12 18:59:00.721807 systemd-networkd[1459]: lxc_health: Gained IPv6LL
Apr 12 18:59:00.982706 kubelet[2616]: W0412 18:59:00.982589 2616 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod196efae0_2395_44f6_a96a_a8b838b9b6b5.slice/cri-containerd-5547e3c6f7e1a09f4178a9726d346013e66f257c177c48d62678d09fb94e684a.scope WatchSource:0}: task 5547e3c6f7e1a09f4178a9726d346013e66f257c177c48d62678d09fb94e684a not found: not found
Apr 12 18:59:02.223706 systemd[1]: run-containerd-runc-k8s.io-a0b5e88c4fe0a02ebe8c2ee9dcc098591c62edf47da5a7ea031a9f7567ef38a2-runc.ewMwuS.mount: Deactivated successfully.
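The `podStartSLOduration=10.529188931` figure above is simply the watch-observed running time minus the pod creation timestamp (18:59:00.529… minus 18:58:50). A sketch reproducing that arithmetic from the timestamp strings as printed in the log (truncating nanoseconds to microseconds, since `datetime`'s `%f` cannot parse nine fractional digits):

```python
import re
from datetime import datetime

def startup_seconds(created: str, running: str) -> float:
    # Parse "2024-04-12 18:58:50 +0000 UTC"-style strings from the log;
    # fractional seconds are truncated to 6 digits for strptime's %f.
    def parse(ts: str) -> datetime:
        ts = ts.replace(" UTC", "")
        ts = re.sub(r"(\.\d{6})\d+", r"\1", ts)
        fmt = "%Y-%m-%d %H:%M:%S.%f %z" if "." in ts else "%Y-%m-%d %H:%M:%S %z"
        return datetime.strptime(ts, fmt)
    return (parse(running) - parse(created)).total_seconds()

delta = startup_seconds("2024-04-12 18:58:50 +0000 UTC",
                        "2024-04-12 18:59:00.529188931 +0000 UTC")
```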
Apr 12 18:59:04.096164 kubelet[2616]: W0412 18:59:04.096120 2616 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod196efae0_2395_44f6_a96a_a8b838b9b6b5.slice/cri-containerd-5bae605fc71b0765782e46af6125d0a8f923678d450061807b01567edc21c46a.scope WatchSource:0}: task 5bae605fc71b0765782e46af6125d0a8f923678d450061807b01567edc21c46a not found: not found
Apr 12 18:59:04.606795 systemd[1]: run-containerd-runc-k8s.io-a0b5e88c4fe0a02ebe8c2ee9dcc098591c62edf47da5a7ea031a9f7567ef38a2-runc.n8cjfM.mount: Deactivated successfully.
Apr 12 18:59:04.761923 sshd[4530]: pam_unix(sshd:session): session closed for user core
Apr 12 18:59:04.766202 systemd[1]: sshd@26-172.31.18.181:22-147.75.109.163:34494.service: Deactivated successfully.
Apr 12 18:59:04.767909 systemd[1]: session-27.scope: Deactivated successfully.
Apr 12 18:59:04.769428 systemd-logind[1636]: Session 27 logged out. Waiting for processes to exit.
Apr 12 18:59:04.771490 systemd-logind[1636]: Removed session 27.