Feb 12 21:53:48.068444 kernel: Linux version 5.15.148-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Mon Feb 12 18:05:31 -00 2024
Feb 12 21:53:48.068466 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=f2beb0668e3dab90bbcf0ace3803b7ee02142bfb86913ef12ef6d2ee81a411a4
Feb 12 21:53:48.068476 kernel: BIOS-provided physical RAM map:
Feb 12 21:53:48.068482 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Feb 12 21:53:48.068488 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Feb 12 21:53:48.068494 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Feb 12 21:53:48.068504 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007d9e9fff] usable
Feb 12 21:53:48.068510 kernel: BIOS-e820: [mem 0x000000007d9ea000-0x000000007fffffff] reserved
Feb 12 21:53:48.068517 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000e03fffff] reserved
Feb 12 21:53:48.068523 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Feb 12 21:53:48.068529 kernel: NX (Execute Disable) protection: active
Feb 12 21:53:48.068536 kernel: SMBIOS 2.7 present.
Feb 12 21:53:48.068542 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017
Feb 12 21:53:48.068548 kernel: Hypervisor detected: KVM
Feb 12 21:53:48.068558 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Feb 12 21:53:48.068565 kernel: kvm-clock: cpu 0, msr ffaa001, primary cpu clock
Feb 12 21:53:48.068572 kernel: kvm-clock: using sched offset of 6799620358 cycles
Feb 12 21:53:48.068580 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Feb 12 21:53:48.068587 kernel: tsc: Detected 2499.994 MHz processor
Feb 12 21:53:48.068594 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Feb 12 21:53:48.068604 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Feb 12 21:53:48.068610 kernel: last_pfn = 0x7d9ea max_arch_pfn = 0x400000000
Feb 12 21:53:48.068617 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Feb 12 21:53:48.068624 kernel: Using GB pages for direct mapping
Feb 12 21:53:48.068631 kernel: ACPI: Early table checksum verification disabled
Feb 12 21:53:48.068638 kernel: ACPI: RSDP 0x00000000000F8F40 000014 (v00 AMAZON)
Feb 12 21:53:48.068645 kernel: ACPI: RSDT 0x000000007D9EE350 000044 (v01 AMAZON AMZNRSDT 00000001 AMZN 00000001)
Feb 12 21:53:48.068652 kernel: ACPI: FACP 0x000000007D9EFF80 000074 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001)
Feb 12 21:53:48.068731 kernel: ACPI: DSDT 0x000000007D9EE3A0 0010E9 (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001)
Feb 12 21:53:48.068748 kernel: ACPI: FACS 0x000000007D9EFF40 000040
Feb 12 21:53:48.068756 kernel: ACPI: SSDT 0x000000007D9EF6C0 00087A (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Feb 12 21:53:48.068763 kernel: ACPI: APIC 0x000000007D9EF5D0 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Feb 12 21:53:48.068770 kernel: ACPI: SRAT 0x000000007D9EF530 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001)
Feb 12 21:53:48.068777 kernel: ACPI: SLIT 0x000000007D9EF4C0 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Feb 12 21:53:48.068783 kernel: ACPI: WAET 0x000000007D9EF490 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001)
Feb 12 21:53:48.068791 kernel: ACPI: HPET 0x00000000000C9000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001)
Feb 12 21:53:48.068797 kernel: ACPI: SSDT 0x00000000000C9040 00007B (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Feb 12 21:53:48.068807 kernel: ACPI: Reserving FACP table memory at [mem 0x7d9eff80-0x7d9efff3]
Feb 12 21:53:48.068814 kernel: ACPI: Reserving DSDT table memory at [mem 0x7d9ee3a0-0x7d9ef488]
Feb 12 21:53:48.068821 kernel: ACPI: Reserving FACS table memory at [mem 0x7d9eff40-0x7d9eff7f]
Feb 12 21:53:48.068832 kernel: ACPI: Reserving SSDT table memory at [mem 0x7d9ef6c0-0x7d9eff39]
Feb 12 21:53:48.068839 kernel: ACPI: Reserving APIC table memory at [mem 0x7d9ef5d0-0x7d9ef645]
Feb 12 21:53:48.068846 kernel: ACPI: Reserving SRAT table memory at [mem 0x7d9ef530-0x7d9ef5cf]
Feb 12 21:53:48.068854 kernel: ACPI: Reserving SLIT table memory at [mem 0x7d9ef4c0-0x7d9ef52b]
Feb 12 21:53:48.068864 kernel: ACPI: Reserving WAET table memory at [mem 0x7d9ef490-0x7d9ef4b7]
Feb 12 21:53:48.068871 kernel: ACPI: Reserving HPET table memory at [mem 0xc9000-0xc9037]
Feb 12 21:53:48.068878 kernel: ACPI: Reserving SSDT table memory at [mem 0xc9040-0xc90ba]
Feb 12 21:53:48.068886 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Feb 12 21:53:48.068893 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Feb 12 21:53:48.068901 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff]
Feb 12 21:53:48.068908 kernel: NUMA: Initialized distance table, cnt=1
Feb 12 21:53:48.068915 kernel: NODE_DATA(0) allocated [mem 0x7d9e3000-0x7d9e8fff]
Feb 12 21:53:48.068925 kernel: Zone ranges:
Feb 12 21:53:48.068933 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Feb 12 21:53:48.068940 kernel: DMA32 [mem 0x0000000001000000-0x000000007d9e9fff]
Feb 12 21:53:48.068948 kernel: Normal empty
Feb 12 21:53:48.068955 kernel: Movable zone start for each node
Feb 12 21:53:48.068963 kernel: Early memory node ranges
Feb 12 21:53:48.068970 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Feb 12 21:53:48.068977 kernel: node 0: [mem 0x0000000000100000-0x000000007d9e9fff]
Feb 12 21:53:48.068985 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007d9e9fff]
Feb 12 21:53:48.068994 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Feb 12 21:53:48.069001 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Feb 12 21:53:48.069009 kernel: On node 0, zone DMA32: 9750 pages in unavailable ranges
Feb 12 21:53:48.069016 kernel: ACPI: PM-Timer IO Port: 0xb008
Feb 12 21:53:48.069024 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Feb 12 21:53:48.069031 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23
Feb 12 21:53:48.069039 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Feb 12 21:53:48.069046 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Feb 12 21:53:48.069054 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Feb 12 21:53:48.069063 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Feb 12 21:53:48.069071 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Feb 12 21:53:48.069078 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Feb 12 21:53:48.069086 kernel: TSC deadline timer available
Feb 12 21:53:48.069093 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Feb 12 21:53:48.069101 kernel: [mem 0x80000000-0xdfffffff] available for PCI devices
Feb 12 21:53:48.069108 kernel: Booting paravirtualized kernel on KVM
Feb 12 21:53:48.069115 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Feb 12 21:53:48.069123 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:2 nr_node_ids:1
Feb 12 21:53:48.069133 kernel: percpu: Embedded 55 pages/cpu s185624 r8192 d31464 u1048576
Feb 12 21:53:48.069141 kernel: pcpu-alloc: s185624 r8192 d31464 u1048576 alloc=1*2097152
Feb 12 21:53:48.069148 kernel: pcpu-alloc: [0] 0 1
Feb 12 21:53:48.069156 kernel: kvm-guest: stealtime: cpu 0, msr 7b61c0c0
Feb 12 21:53:48.069163 kernel: kvm-guest: PV spinlocks enabled
Feb 12 21:53:48.069171 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Feb 12 21:53:48.069178 kernel: Built 1 zonelists, mobility grouping on. Total pages: 506242
Feb 12 21:53:48.069186 kernel: Policy zone: DMA32
Feb 12 21:53:48.069194 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=f2beb0668e3dab90bbcf0ace3803b7ee02142bfb86913ef12ef6d2ee81a411a4
Feb 12 21:53:48.069204 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 12 21:53:48.069212 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 12 21:53:48.069219 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Feb 12 21:53:48.069227 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 12 21:53:48.069235 kernel: Memory: 1936476K/2057760K available (12294K kernel code, 2275K rwdata, 13700K rodata, 45496K init, 4048K bss, 121024K reserved, 0K cma-reserved)
Feb 12 21:53:48.069242 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Feb 12 21:53:48.069249 kernel: Kernel/User page tables isolation: enabled
Feb 12 21:53:48.069257 kernel: ftrace: allocating 34475 entries in 135 pages
Feb 12 21:53:48.069276 kernel: ftrace: allocated 135 pages with 4 groups
Feb 12 21:53:48.069284 kernel: rcu: Hierarchical RCU implementation.
Feb 12 21:53:48.069292 kernel: rcu: RCU event tracing is enabled.
Feb 12 21:53:48.069299 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Feb 12 21:53:48.069307 kernel: Rude variant of Tasks RCU enabled.
Feb 12 21:53:48.069315 kernel: Tracing variant of Tasks RCU enabled.
Feb 12 21:53:48.069322 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 12 21:53:48.069330 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Feb 12 21:53:48.069338 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Feb 12 21:53:48.069348 kernel: random: crng init done
Feb 12 21:53:48.069355 kernel: Console: colour VGA+ 80x25
Feb 12 21:53:48.069363 kernel: printk: console [ttyS0] enabled
Feb 12 21:53:48.069370 kernel: ACPI: Core revision 20210730
Feb 12 21:53:48.069378 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns
Feb 12 21:53:48.069386 kernel: APIC: Switch to symmetric I/O mode setup
Feb 12 21:53:48.069398 kernel: x2apic enabled
Feb 12 21:53:48.069409 kernel: Switched APIC routing to physical x2apic.
Feb 12 21:53:48.070368 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x240933eba6e, max_idle_ns: 440795246008 ns
Feb 12 21:53:48.070391 kernel: Calibrating delay loop (skipped) preset value.. 4999.98 BogoMIPS (lpj=2499994)
Feb 12 21:53:48.070399 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Feb 12 21:53:48.070406 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4
Feb 12 21:53:48.070414 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Feb 12 21:53:48.070430 kernel: Spectre V2 : Mitigation: Retpolines
Feb 12 21:53:48.070440 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Feb 12 21:53:48.070447 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Feb 12 21:53:48.070455 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Feb 12 21:53:48.070463 kernel: RETBleed: Vulnerable
Feb 12 21:53:48.070471 kernel: Speculative Store Bypass: Vulnerable
Feb 12 21:53:48.070480 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode
Feb 12 21:53:48.070487 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Feb 12 21:53:48.070495 kernel: GDS: Unknown: Dependent on hypervisor status
Feb 12 21:53:48.070503 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Feb 12 21:53:48.070513 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Feb 12 21:53:48.070521 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Feb 12 21:53:48.070529 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Feb 12 21:53:48.070537 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Feb 12 21:53:48.070545 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Feb 12 21:53:48.070553 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Feb 12 21:53:48.070563 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Feb 12 21:53:48.070572 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Feb 12 21:53:48.070579 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Feb 12 21:53:48.070587 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
Feb 12 21:53:48.070595 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
Feb 12 21:53:48.070624 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64
Feb 12 21:53:48.070632 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512
Feb 12 21:53:48.070640 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024
Feb 12 21:53:48.070648 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8
Feb 12 21:53:48.070656 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format.
Feb 12 21:53:48.070664 kernel: Freeing SMP alternatives memory: 32K
Feb 12 21:53:48.070674 kernel: pid_max: default: 32768 minimum: 301
Feb 12 21:53:48.070682 kernel: LSM: Security Framework initializing
Feb 12 21:53:48.070690 kernel: SELinux: Initializing.
Feb 12 21:53:48.070698 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Feb 12 21:53:48.070706 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Feb 12 21:53:48.070720 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7)
Feb 12 21:53:48.070728 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Feb 12 21:53:48.070736 kernel: signal: max sigframe size: 3632
Feb 12 21:53:48.070745 kernel: rcu: Hierarchical SRCU implementation.
Feb 12 21:53:48.070753 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Feb 12 21:53:48.070764 kernel: smp: Bringing up secondary CPUs ...
Feb 12 21:53:48.070772 kernel: x86: Booting SMP configuration:
Feb 12 21:53:48.070780 kernel: .... node #0, CPUs: #1
Feb 12 21:53:48.070788 kernel: kvm-clock: cpu 1, msr ffaa041, secondary cpu clock
Feb 12 21:53:48.070796 kernel: kvm-guest: stealtime: cpu 1, msr 7b71c0c0
Feb 12 21:53:48.070940 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Feb 12 21:53:48.070954 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Feb 12 21:53:48.070962 kernel: smp: Brought up 1 node, 2 CPUs
Feb 12 21:53:48.070971 kernel: smpboot: Max logical packages: 1
Feb 12 21:53:48.070982 kernel: smpboot: Total of 2 processors activated (9999.97 BogoMIPS)
Feb 12 21:53:48.070991 kernel: devtmpfs: initialized
Feb 12 21:53:48.070999 kernel: x86/mm: Memory block size: 128MB
Feb 12 21:53:48.071007 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 12 21:53:48.071015 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Feb 12 21:53:48.071023 kernel: pinctrl core: initialized pinctrl subsystem
Feb 12 21:53:48.071031 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 12 21:53:48.071039 kernel: audit: initializing netlink subsys (disabled)
Feb 12 21:53:48.071047 kernel: audit: type=2000 audit(1707774826.641:1): state=initialized audit_enabled=0 res=1
Feb 12 21:53:48.071058 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 12 21:53:48.071066 kernel: thermal_sys: Registered thermal governor 'user_space'
Feb 12 21:53:48.071074 kernel: cpuidle: using governor menu
Feb 12 21:53:48.071083 kernel: ACPI: bus type PCI registered
Feb 12 21:53:48.071091 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 12 21:53:48.071099 kernel: dca service started, version 1.12.1
Feb 12 21:53:48.071107 kernel: PCI: Using configuration type 1 for base access
Feb 12 21:53:48.071115 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Feb 12 21:53:48.071123 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Feb 12 21:53:48.071140 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Feb 12 21:53:48.071148 kernel: ACPI: Added _OSI(Module Device)
Feb 12 21:53:48.071156 kernel: ACPI: Added _OSI(Processor Device)
Feb 12 21:53:48.071164 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 12 21:53:48.071172 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 12 21:53:48.071180 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Feb 12 21:53:48.071188 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Feb 12 21:53:48.071196 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Feb 12 21:53:48.071204 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Feb 12 21:53:48.071215 kernel: ACPI: Interpreter enabled
Feb 12 21:53:48.071224 kernel: ACPI: PM: (supports S0 S5)
Feb 12 21:53:48.071232 kernel: ACPI: Using IOAPIC for interrupt routing
Feb 12 21:53:48.071240 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Feb 12 21:53:48.071248 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F
Feb 12 21:53:48.071256 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Feb 12 21:53:48.071429 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Feb 12 21:53:48.071520 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
Feb 12 21:53:48.071534 kernel: acpiphp: Slot [3] registered
Feb 12 21:53:48.071542 kernel: acpiphp: Slot [4] registered
Feb 12 21:53:48.071550 kernel: acpiphp: Slot [5] registered
Feb 12 21:53:48.071558 kernel: acpiphp: Slot [6] registered
Feb 12 21:53:48.071566 kernel: acpiphp: Slot [7] registered
Feb 12 21:53:48.071574 kernel: acpiphp: Slot [8] registered
Feb 12 21:53:48.071582 kernel: acpiphp: Slot [9] registered
Feb 12 21:53:48.071591 kernel: acpiphp: Slot [10] registered
Feb 12 21:53:48.071599 kernel: acpiphp: Slot [11] registered
Feb 12 21:53:48.071609 kernel: acpiphp: Slot [12] registered
Feb 12 21:53:48.071617 kernel: acpiphp: Slot [13] registered
Feb 12 21:53:48.071626 kernel: acpiphp: Slot [14] registered
Feb 12 21:53:48.071634 kernel: acpiphp: Slot [15] registered
Feb 12 21:53:48.071642 kernel: acpiphp: Slot [16] registered
Feb 12 21:53:48.071649 kernel: acpiphp: Slot [17] registered
Feb 12 21:53:48.071657 kernel: acpiphp: Slot [18] registered
Feb 12 21:53:48.071665 kernel: acpiphp: Slot [19] registered
Feb 12 21:53:48.071673 kernel: acpiphp: Slot [20] registered
Feb 12 21:53:48.071684 kernel: acpiphp: Slot [21] registered
Feb 12 21:53:48.071692 kernel: acpiphp: Slot [22] registered
Feb 12 21:53:48.071700 kernel: acpiphp: Slot [23] registered
Feb 12 21:53:48.071708 kernel: acpiphp: Slot [24] registered
Feb 12 21:53:48.071716 kernel: acpiphp: Slot [25] registered
Feb 12 21:53:48.071725 kernel: acpiphp: Slot [26] registered
Feb 12 21:53:48.071733 kernel: acpiphp: Slot [27] registered
Feb 12 21:53:48.071741 kernel: acpiphp: Slot [28] registered
Feb 12 21:53:48.071749 kernel: acpiphp: Slot [29] registered
Feb 12 21:53:48.071757 kernel: acpiphp: Slot [30] registered
Feb 12 21:53:48.071768 kernel: acpiphp: Slot [31] registered
Feb 12 21:53:48.071776 kernel: PCI host bridge to bus 0000:00
Feb 12 21:53:48.071865 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Feb 12 21:53:48.071946 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Feb 12 21:53:48.072114 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Feb 12 21:53:48.072194 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Feb 12 21:53:48.072280 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Feb 12 21:53:48.072388 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Feb 12 21:53:48.072483 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Feb 12 21:53:48.072576 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000
Feb 12 21:53:48.072661 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Feb 12 21:53:48.072746 kernel: pci 0000:00:01.3: quirk: [io 0xb100-0xb10f] claimed by PIIX4 SMB
Feb 12 21:53:48.072829 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff
Feb 12 21:53:48.072913 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff
Feb 12 21:53:48.073190 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff
Feb 12 21:53:48.073291 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff
Feb 12 21:53:48.073374 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff
Feb 12 21:53:48.073457 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff
Feb 12 21:53:48.073549 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000
Feb 12 21:53:48.073745 kernel: pci 0000:00:03.0: reg 0x10: [mem 0xfe400000-0xfe7fffff pref]
Feb 12 21:53:48.073901 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref]
Feb 12 21:53:48.076435 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Feb 12 21:53:48.076692 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Feb 12 21:53:48.076796 kernel: pci 0000:00:04.0: reg 0x10: [mem 0xfebf0000-0xfebf3fff]
Feb 12 21:53:48.076890 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Feb 12 21:53:48.076976 kernel: pci 0000:00:05.0: reg 0x10: [mem 0xfebf4000-0xfebf7fff]
Feb 12 21:53:48.076987 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Feb 12 21:53:48.077003 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Feb 12 21:53:48.077011 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Feb 12 21:53:48.077020 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Feb 12 21:53:48.077028 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Feb 12 21:53:48.077037 kernel: iommu: Default domain type: Translated
Feb 12 21:53:48.077045 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Feb 12 21:53:48.077132 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device
Feb 12 21:53:48.077426 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Feb 12 21:53:48.077514 kernel: pci 0000:00:03.0: vgaarb: bridge control possible
Feb 12 21:53:48.077529 kernel: vgaarb: loaded
Feb 12 21:53:48.077538 kernel: pps_core: LinuxPPS API ver. 1 registered
Feb 12 21:53:48.077547 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Feb 12 21:53:48.077555 kernel: PTP clock support registered
Feb 12 21:53:48.077563 kernel: PCI: Using ACPI for IRQ routing
Feb 12 21:53:48.077572 kernel: PCI: pci_cache_line_size set to 64 bytes
Feb 12 21:53:48.077580 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Feb 12 21:53:48.077589 kernel: e820: reserve RAM buffer [mem 0x7d9ea000-0x7fffffff]
Feb 12 21:53:48.077597 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0
Feb 12 21:53:48.077607 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter
Feb 12 21:53:48.077692 kernel: clocksource: Switched to clocksource kvm-clock
Feb 12 21:53:48.077703 kernel: VFS: Disk quotas dquot_6.6.0
Feb 12 21:53:48.077712 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 12 21:53:48.077720 kernel: pnp: PnP ACPI init
Feb 12 21:53:48.077728 kernel: pnp: PnP ACPI: found 5 devices
Feb 12 21:53:48.077736 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Feb 12 21:53:48.077744 kernel: NET: Registered PF_INET protocol family
Feb 12 21:53:48.077756 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 12 21:53:48.077764 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Feb 12 21:53:48.077772 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 12 21:53:48.077781 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Feb 12 21:53:48.077789 kernel: TCP bind hash table entries: 16384 (order: 6, 262144 bytes, linear)
Feb 12 21:53:48.077797 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Feb 12 21:53:48.077806 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Feb 12 21:53:48.077814 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Feb 12 21:53:48.077822 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 12 21:53:48.077832 kernel: NET: Registered PF_XDP protocol family
Feb 12 21:53:48.077928 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Feb 12 21:53:48.078005 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Feb 12 21:53:48.078079 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Feb 12 21:53:48.078155 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Feb 12 21:53:48.078243 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Feb 12 21:53:48.078398 kernel: pci 0000:00:01.0: Activating ISA DMA hang workarounds
Feb 12 21:53:48.078411 kernel: PCI: CLS 0 bytes, default 64
Feb 12 21:53:48.078424 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Feb 12 21:53:48.078432 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x240933eba6e, max_idle_ns: 440795246008 ns
Feb 12 21:53:48.078440 kernel: clocksource: Switched to clocksource tsc
Feb 12 21:53:48.078449 kernel: Initialise system trusted keyrings
Feb 12 21:53:48.078457 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Feb 12 21:53:48.078465 kernel: Key type asymmetric registered
Feb 12 21:53:48.078473 kernel: Asymmetric key parser 'x509' registered
Feb 12 21:53:48.078481 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Feb 12 21:53:48.078492 kernel: io scheduler mq-deadline registered
Feb 12 21:53:48.078501 kernel: io scheduler kyber registered
Feb 12 21:53:48.078509 kernel: io scheduler bfq registered
Feb 12 21:53:48.078517 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Feb 12 21:53:48.078525 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 12 21:53:48.078534 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Feb 12 21:53:48.078542 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Feb 12 21:53:48.078550 kernel: i8042: Warning: Keylock active
Feb 12 21:53:48.078558 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Feb 12 21:53:48.078566 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Feb 12 21:53:48.078665 kernel: rtc_cmos 00:00: RTC can wake from S4
Feb 12 21:53:48.078876 kernel: rtc_cmos 00:00: registered as rtc0
Feb 12 21:53:48.079020 kernel: rtc_cmos 00:00: setting system clock to 2024-02-12T21:53:47 UTC (1707774827)
Feb 12 21:53:48.079101 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Feb 12 21:53:48.079112 kernel: intel_pstate: CPU model not supported
Feb 12 21:53:48.079120 kernel: NET: Registered PF_INET6 protocol family
Feb 12 21:53:48.079128 kernel: Segment Routing with IPv6
Feb 12 21:53:48.079141 kernel: In-situ OAM (IOAM) with IPv6
Feb 12 21:53:48.079149 kernel: NET: Registered PF_PACKET protocol family
Feb 12 21:53:48.079158 kernel: Key type dns_resolver registered
Feb 12 21:53:48.079166 kernel: IPI shorthand broadcast: enabled
Feb 12 21:53:48.079174 kernel: sched_clock: Marking stable (415400946, 275640738)->(841981975, -150940291)
Feb 12 21:53:48.079183 kernel: registered taskstats version 1
Feb 12 21:53:48.079191 kernel: Loading compiled-in X.509 certificates
Feb 12 21:53:48.079200 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.148-flatcar: 253e5c5c936b12e2ff2626e7f3214deb753330c8'
Feb 12 21:53:48.079207 kernel: Key type .fscrypt registered
Feb 12 21:53:48.079218 kernel: Key type fscrypt-provisioning registered
Feb 12 21:53:48.079226 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 12 21:53:48.079234 kernel: ima: Allocated hash algorithm: sha1
Feb 12 21:53:48.079242 kernel: ima: No architecture policies found
Feb 12 21:53:48.079251 kernel: Freeing unused kernel image (initmem) memory: 45496K
Feb 12 21:53:48.079259 kernel: Write protecting the kernel read-only data: 28672k
Feb 12 21:53:48.079277 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
Feb 12 21:53:48.079286 kernel: Freeing unused kernel image (rodata/data gap) memory: 636K
Feb 12 21:53:48.079294 kernel: Run /init as init process
Feb 12 21:53:48.079305 kernel: with arguments:
Feb 12 21:53:48.079313 kernel: /init
Feb 12 21:53:48.079321 kernel: with environment:
Feb 12 21:53:48.079329 kernel: HOME=/
Feb 12 21:53:48.079337 kernel: TERM=linux
Feb 12 21:53:48.079345 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 12 21:53:48.079356 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Feb 12 21:53:48.079367 systemd[1]: Detected virtualization amazon.
Feb 12 21:53:48.079378 systemd[1]: Detected architecture x86-64.
Feb 12 21:53:48.079387 systemd[1]: Running in initrd.
Feb 12 21:53:48.079395 systemd[1]: No hostname configured, using default hostname.
Feb 12 21:53:48.079406 systemd[1]: Hostname set to .
Feb 12 21:53:48.079516 systemd[1]: Initializing machine ID from VM UUID.
Feb 12 21:53:48.079529 systemd[1]: Queued start job for default target initrd.target.
Feb 12 21:53:48.079538 systemd[1]: Started systemd-ask-password-console.path.
Feb 12 21:53:48.079547 systemd[1]: Reached target cryptsetup.target.
Feb 12 21:53:48.079556 systemd[1]: Reached target paths.target.
Feb 12 21:53:48.079565 systemd[1]: Reached target slices.target.
Feb 12 21:53:48.079574 systemd[1]: Reached target swap.target.
Feb 12 21:53:48.079583 systemd[1]: Reached target timers.target.
Feb 12 21:53:48.079592 systemd[1]: Listening on iscsid.socket.
Feb 12 21:53:48.079604 systemd[1]: Listening on iscsiuio.socket.
Feb 12 21:53:48.079612 systemd[1]: Listening on systemd-journald-audit.socket.
Feb 12 21:53:48.079623 systemd[1]: Listening on systemd-journald-dev-log.socket.
Feb 12 21:53:48.079922 systemd[1]: Listening on systemd-journald.socket.
Feb 12 21:53:48.079933 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Feb 12 21:53:48.079942 systemd[1]: Listening on systemd-networkd.socket.
Feb 12 21:53:48.079952 systemd[1]: Listening on systemd-udevd-control.socket.
Feb 12 21:53:48.079969 systemd[1]: Listening on systemd-udevd-kernel.socket.
Feb 12 21:53:48.079983 systemd[1]: Reached target sockets.target.
Feb 12 21:53:48.079996 systemd[1]: Starting kmod-static-nodes.service...
Feb 12 21:53:48.080004 systemd[1]: Finished network-cleanup.service.
Feb 12 21:53:48.080013 systemd[1]: Starting systemd-fsck-usr.service...
Feb 12 21:53:48.080022 systemd[1]: Starting systemd-journald.service...
Feb 12 21:53:48.080031 systemd[1]: Starting systemd-modules-load.service...
Feb 12 21:53:48.080040 systemd[1]: Starting systemd-resolved.service...
Feb 12 21:53:48.080054 systemd-journald[185]: Journal started
Feb 12 21:53:48.080115 systemd-journald[185]: Runtime Journal (/run/log/journal/ec28fe2624bd710d9ca98f37fa2abde7) is 4.8M, max 38.7M, 33.9M free.
Feb 12 21:53:48.115313 systemd[1]: Starting systemd-vconsole-setup.service...
Feb 12 21:53:48.115380 systemd[1]: Started systemd-journald.service.
Feb 12 21:53:48.092114 systemd-modules-load[186]: Inserted module 'overlay'
Feb 12 21:53:48.243040 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 12 21:53:48.243078 kernel: Bridge firewalling registered
Feb 12 21:53:48.243098 kernel: SCSI subsystem initialized
Feb 12 21:53:48.243113 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 12 21:53:48.243130 kernel: device-mapper: uevent: version 1.0.3
Feb 12 21:53:48.243150 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Feb 12 21:53:48.137410 systemd-resolved[187]: Positive Trust Anchors:
Feb 12 21:53:48.137425 systemd-resolved[187]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 12 21:53:48.251350 kernel: audit: type=1130 audit(1707774828.242:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:53:48.251386 kernel: audit: type=1130 audit(1707774828.244:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:53:48.242000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:53:48.244000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:53:48.137473 systemd-resolved[187]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Feb 12 21:53:48.141797 systemd-resolved[187]: Defaulting to hostname 'linux'.
Feb 12 21:53:48.156236 systemd-modules-load[186]: Inserted module 'br_netfilter'
Feb 12 21:53:48.270156 kernel: audit: type=1130 audit(1707774828.264:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:53:48.264000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:53:48.201510 systemd-modules-load[186]: Inserted module 'dm_multipath'
Feb 12 21:53:48.243542 systemd[1]: Started systemd-resolved.service.
Feb 12 21:53:48.245427 systemd[1]: Finished kmod-static-nodes.service.
Feb 12 21:53:48.270301 systemd[1]: Finished systemd-fsck-usr.service.
Feb 12 21:53:48.279000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:53:48.279847 systemd[1]: Finished systemd-modules-load.service.
Feb 12 21:53:48.290775 kernel: audit: type=1130 audit(1707774828.279:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:53:48.290824 kernel: audit: type=1130 audit(1707774828.285:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:53:48.285000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:53:48.291000 systemd[1]: Finished systemd-vconsole-setup.service. Feb 12 21:53:48.292000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:53:48.293391 systemd[1]: Reached target nss-lookup.target. Feb 12 21:53:48.300483 kernel: audit: type=1130 audit(1707774828.292:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:53:48.301419 systemd[1]: Starting dracut-cmdline-ask.service... Feb 12 21:53:48.304435 systemd[1]: Starting systemd-sysctl.service... Feb 12 21:53:48.305857 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Feb 12 21:53:48.325106 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Feb 12 21:53:48.325000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 21:53:48.331285 kernel: audit: type=1130 audit(1707774828.325:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:53:48.334689 systemd[1]: Finished systemd-sysctl.service. Feb 12 21:53:48.334000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:53:48.341297 kernel: audit: type=1130 audit(1707774828.334:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:53:48.351146 systemd[1]: Finished dracut-cmdline-ask.service. Feb 12 21:53:48.352612 systemd[1]: Starting dracut-cmdline.service... Feb 12 21:53:48.350000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:53:48.361316 kernel: audit: type=1130 audit(1707774828.350:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 21:53:48.366178 dracut-cmdline[207]: dracut-dracut-053 Feb 12 21:53:48.369153 dracut-cmdline[207]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=f2beb0668e3dab90bbcf0ace3803b7ee02142bfb86913ef12ef6d2ee81a411a4 Feb 12 21:53:48.441295 kernel: Loading iSCSI transport class v2.0-870. Feb 12 21:53:48.455289 kernel: iscsi: registered transport (tcp) Feb 12 21:53:48.482823 kernel: iscsi: registered transport (qla4xxx) Feb 12 21:53:48.482900 kernel: QLogic iSCSI HBA Driver Feb 12 21:53:48.522297 systemd[1]: Finished dracut-cmdline.service. Feb 12 21:53:48.522000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:53:48.523905 systemd[1]: Starting dracut-pre-udev.service... 
Feb 12 21:53:48.585685 kernel: raid6: avx512x4 gen() 15290 MB/s
Feb 12 21:53:48.601329 kernel: raid6: avx512x4 xor() 6093 MB/s
Feb 12 21:53:48.619309 kernel: raid6: avx512x2 gen() 15580 MB/s
Feb 12 21:53:48.636318 kernel: raid6: avx512x2 xor() 20466 MB/s
Feb 12 21:53:48.653317 kernel: raid6: avx512x1 gen() 15092 MB/s
Feb 12 21:53:48.671372 kernel: raid6: avx512x1 xor() 18762 MB/s
Feb 12 21:53:48.693322 kernel: raid6: avx2x4 gen() 13957 MB/s
Feb 12 21:53:48.714438 kernel: raid6: avx2x4 xor() 2443 MB/s
Feb 12 21:53:48.732322 kernel: raid6: avx2x2 gen() 7365 MB/s
Feb 12 21:53:48.749391 kernel: raid6: avx2x2 xor() 15307 MB/s
Feb 12 21:53:48.767313 kernel: raid6: avx2x1 gen() 11603 MB/s
Feb 12 21:53:48.785317 kernel: raid6: avx2x1 xor() 13634 MB/s
Feb 12 21:53:48.802316 kernel: raid6: sse2x4 gen() 8380 MB/s
Feb 12 21:53:48.819318 kernel: raid6: sse2x4 xor() 5277 MB/s
Feb 12 21:53:48.837311 kernel: raid6: sse2x2 gen() 9578 MB/s
Feb 12 21:53:48.855316 kernel: raid6: sse2x2 xor() 5458 MB/s
Feb 12 21:53:48.872321 kernel: raid6: sse2x1 gen() 8151 MB/s
Feb 12 21:53:48.890444 kernel: raid6: sse2x1 xor() 4175 MB/s
Feb 12 21:53:48.890516 kernel: raid6: using algorithm avx512x2 gen() 15580 MB/s
Feb 12 21:53:48.890535 kernel: raid6: .... xor() 20466 MB/s, rmw enabled
Feb 12 21:53:48.891503 kernel: raid6: using avx512x2 recovery algorithm
Feb 12 21:53:48.908293 kernel: xor: automatically using best checksumming function avx
Feb 12 21:53:49.028294 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no
Feb 12 21:53:49.038514 systemd[1]: Finished dracut-pre-udev.service.
Feb 12 21:53:49.040000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:53:49.041000 audit: BPF prog-id=7 op=LOAD
Feb 12 21:53:49.041000 audit: BPF prog-id=8 op=LOAD
Feb 12 21:53:49.041852 systemd[1]: Starting systemd-udevd.service...
Feb 12 21:53:49.057360 systemd-udevd[384]: Using default interface naming scheme 'v252'.
Feb 12 21:53:49.064196 systemd[1]: Started systemd-udevd.service.
Feb 12 21:53:49.064000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:53:49.066451 systemd[1]: Starting dracut-pre-trigger.service...
Feb 12 21:53:49.085256 dracut-pre-trigger[389]: rd.md=0: removing MD RAID activation
Feb 12 21:53:49.135782 systemd[1]: Finished dracut-pre-trigger.service.
Feb 12 21:53:49.135000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:53:49.136896 systemd[1]: Starting systemd-udev-trigger.service...
Feb 12 21:53:49.184185 systemd[1]: Finished systemd-udev-trigger.service.
Feb 12 21:53:49.185000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:53:49.254284 kernel: cryptd: max_cpu_qlen set to 1000
Feb 12 21:53:49.274657 kernel: ena 0000:00:05.0: ENA device version: 0.10
Feb 12 21:53:49.274911 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Feb 12 21:53:49.275171 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
Feb 12 21:53:49.288286 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem febf4000, mac addr 06:a9:00:43:ba:3b
Feb 12 21:53:49.289380 kernel: AVX2 version of gcm_enc/dec engaged.
Feb 12 21:53:49.289424 kernel: AES CTR mode by8 optimization enabled
Feb 12 21:53:49.295813 (udev-worker)[430]: Network interface NamePolicy= disabled on kernel command line.
Feb 12 21:53:49.490290 kernel: nvme nvme0: pci function 0000:00:04.0
Feb 12 21:53:49.490553 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Feb 12 21:53:49.490575 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Feb 12 21:53:49.490725 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Feb 12 21:53:49.490746 kernel: GPT:9289727 != 16777215
Feb 12 21:53:49.490765 kernel: GPT:Alternate GPT header not at the end of the disk.
Feb 12 21:53:49.490788 kernel: GPT:9289727 != 16777215
Feb 12 21:53:49.490804 kernel: GPT: Use GNU Parted to correct GPT errors.
Feb 12 21:53:49.490820 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 12 21:53:49.490842 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by (udev-worker) (435)
Feb 12 21:53:49.438557 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
Feb 12 21:53:49.502422 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Feb 12 21:53:49.519889 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device.
Feb 12 21:53:49.522806 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
Feb 12 21:53:49.553555 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
Feb 12 21:53:49.557962 systemd[1]: Starting disk-uuid.service...
Feb 12 21:53:49.570770 disk-uuid[594]: Primary Header is updated.
Feb 12 21:53:49.570770 disk-uuid[594]: Secondary Entries is updated.
Feb 12 21:53:49.570770 disk-uuid[594]: Secondary Header is updated.
Feb 12 21:53:49.578484 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 12 21:53:49.583281 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 12 21:53:50.598052 disk-uuid[595]: The operation has completed successfully.
Feb 12 21:53:50.599611 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 12 21:53:50.731531 systemd[1]: disk-uuid.service: Deactivated successfully.
Feb 12 21:53:50.731642 systemd[1]: Finished disk-uuid.service.
Feb 12 21:53:50.733000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:53:50.733000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:53:50.744174 systemd[1]: Starting verity-setup.service...
Feb 12 21:53:50.771340 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Feb 12 21:53:50.857922 systemd[1]: Found device dev-mapper-usr.device.
Feb 12 21:53:50.860346 systemd[1]: Finished verity-setup.service.
Feb 12 21:53:50.862000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:53:50.863374 systemd[1]: Mounting sysusr-usr.mount...
Feb 12 21:53:51.010132 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none.
Feb 12 21:53:51.010882 systemd[1]: Mounted sysusr-usr.mount.
Feb 12 21:53:51.013907 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met.
Feb 12 21:53:51.017168 systemd[1]: Starting ignition-setup.service...
Feb 12 21:53:51.020482 systemd[1]: Starting parse-ip-for-networkd.service...
Feb 12 21:53:51.050793 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Feb 12 21:53:51.050857 kernel: BTRFS info (device nvme0n1p6): using free space tree
Feb 12 21:53:51.050876 kernel: BTRFS info (device nvme0n1p6): has skinny extents
Feb 12 21:53:51.061293 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Feb 12 21:53:51.077077 systemd[1]: mnt-oem.mount: Deactivated successfully.
Feb 12 21:53:51.121936 systemd[1]: Finished parse-ip-for-networkd.service.
Feb 12 21:53:51.122000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:53:51.124000 audit: BPF prog-id=9 op=LOAD
Feb 12 21:53:51.126782 systemd[1]: Starting systemd-networkd.service...
Feb 12 21:53:51.152505 systemd[1]: Finished ignition-setup.service.
Feb 12 21:53:51.155000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:53:51.157529 systemd[1]: Starting ignition-fetch-offline.service...
Feb 12 21:53:51.162202 systemd-networkd[1106]: lo: Link UP
Feb 12 21:53:51.162208 systemd-networkd[1106]: lo: Gained carrier
Feb 12 21:53:51.163304 systemd-networkd[1106]: Enumeration completed
Feb 12 21:53:51.163573 systemd-networkd[1106]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 12 21:53:51.168181 systemd[1]: Started systemd-networkd.service.
Feb 12 21:53:51.169000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:53:51.170329 systemd[1]: Reached target network.target.
Feb 12 21:53:51.172406 systemd[1]: Starting iscsiuio.service...
Feb 12 21:53:51.177450 systemd-networkd[1106]: eth0: Link UP
Feb 12 21:53:51.177542 systemd-networkd[1106]: eth0: Gained carrier
Feb 12 21:53:51.180031 systemd[1]: Started iscsiuio.service.
Feb 12 21:53:51.180000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:53:51.183005 systemd[1]: Starting iscsid.service...
Feb 12 21:53:51.188443 iscsid[1113]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
Feb 12 21:53:51.188443 iscsid[1113]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log
Feb 12 21:53:51.188443 iscsid[1113]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier].
Feb 12 21:53:51.188443 iscsid[1113]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
Feb 12 21:53:51.188443 iscsid[1113]: If using hardware iscsi like qla4xxx this message can be ignored.
Feb 12 21:53:51.188443 iscsid[1113]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
Feb 12 21:53:51.188443 iscsid[1113]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf
Feb 12 21:53:51.200000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:53:51.190773 systemd[1]: Started iscsid.service.
Feb 12 21:53:51.200063 systemd-networkd[1106]: eth0: DHCPv4 address 172.31.30.174/20, gateway 172.31.16.1 acquired from 172.31.16.1
Feb 12 21:53:51.210565 systemd[1]: Starting dracut-initqueue.service...
Feb 12 21:53:51.225963 systemd[1]: Finished dracut-initqueue.service.
Feb 12 21:53:51.226000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:53:51.227328 systemd[1]: Reached target remote-fs-pre.target.
Feb 12 21:53:51.229039 systemd[1]: Reached target remote-cryptsetup.target.
Feb 12 21:53:51.230258 systemd[1]: Reached target remote-fs.target.
Feb 12 21:53:51.232285 systemd[1]: Starting dracut-pre-mount.service...
Feb 12 21:53:51.246436 systemd[1]: Finished dracut-pre-mount.service.
Feb 12 21:53:51.247000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:53:51.843859 ignition[1109]: Ignition 2.14.0
Feb 12 21:53:51.843873 ignition[1109]: Stage: fetch-offline
Feb 12 21:53:51.844056 ignition[1109]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 12 21:53:51.844104 ignition[1109]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Feb 12 21:53:51.863211 ignition[1109]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 12 21:53:51.865021 ignition[1109]: Ignition finished successfully
Feb 12 21:53:51.866780 systemd[1]: Finished ignition-fetch-offline.service.
Feb 12 21:53:51.867000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:53:51.869364 systemd[1]: Starting ignition-fetch.service...
Feb 12 21:53:51.879289 ignition[1132]: Ignition 2.14.0
Feb 12 21:53:51.879300 ignition[1132]: Stage: fetch
Feb 12 21:53:51.879441 ignition[1132]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 12 21:53:51.879472 ignition[1132]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Feb 12 21:53:51.887429 ignition[1132]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 12 21:53:51.888870 ignition[1132]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 12 21:53:51.899899 ignition[1132]: INFO : PUT result: OK
Feb 12 21:53:51.902153 ignition[1132]: DEBUG : parsed url from cmdline: ""
Feb 12 21:53:51.902153 ignition[1132]: INFO : no config URL provided
Feb 12 21:53:51.902153 ignition[1132]: INFO : reading system config file "/usr/lib/ignition/user.ign"
Feb 12 21:53:51.906053 ignition[1132]: INFO : no config at "/usr/lib/ignition/user.ign"
Feb 12 21:53:51.906053 ignition[1132]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 12 21:53:51.906053 ignition[1132]: INFO : PUT result: OK
Feb 12 21:53:51.906053 ignition[1132]: INFO : GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Feb 12 21:53:51.911082 ignition[1132]: INFO : GET result: OK
Feb 12 21:53:51.912212 ignition[1132]: DEBUG : parsing config with SHA512: d71472145e94ce6a3f2415803f681bc3764fb6fdfcb497b98bd71a541c8d50b6d3165032dfa60a05a7c4b0afd74ed54d01ee117172a09eb72660e6860f156bf0
Feb 12 21:53:51.963198 unknown[1132]: fetched base config from "system"
Feb 12 21:53:51.963410 unknown[1132]: fetched base config from "system"
Feb 12 21:53:51.963419 unknown[1132]: fetched user config from "aws"
Feb 12 21:53:51.967048 ignition[1132]: fetch: fetch complete
Feb 12 21:53:51.967054 ignition[1132]: fetch: fetch passed
Feb 12 21:53:51.967121 ignition[1132]: Ignition finished successfully
Feb 12 21:53:51.969249 systemd[1]: Finished ignition-fetch.service.
Feb 12 21:53:51.971000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:53:51.973745 systemd[1]: Starting ignition-kargs.service...
Feb 12 21:53:51.983887 ignition[1138]: Ignition 2.14.0
Feb 12 21:53:51.983898 ignition[1138]: Stage: kargs
Feb 12 21:53:51.984125 ignition[1138]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 12 21:53:51.984149 ignition[1138]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Feb 12 21:53:51.995323 ignition[1138]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 12 21:53:51.997779 ignition[1138]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 12 21:53:52.005545 ignition[1138]: INFO : PUT result: OK
Feb 12 21:53:52.012336 ignition[1138]: kargs: kargs passed
Feb 12 21:53:52.012415 ignition[1138]: Ignition finished successfully
Feb 12 21:53:52.014908 systemd[1]: Finished ignition-kargs.service.
Feb 12 21:53:52.017000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:53:52.019294 systemd[1]: Starting ignition-disks.service...
Feb 12 21:53:52.029817 ignition[1144]: Ignition 2.14.0
Feb 12 21:53:52.029830 ignition[1144]: Stage: disks
Feb 12 21:53:52.030036 ignition[1144]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 12 21:53:52.030068 ignition[1144]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Feb 12 21:53:52.040153 ignition[1144]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 12 21:53:52.041586 ignition[1144]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 12 21:53:52.044059 ignition[1144]: INFO : PUT result: OK
Feb 12 21:53:52.048136 ignition[1144]: disks: disks passed
Feb 12 21:53:52.048210 ignition[1144]: Ignition finished successfully
Feb 12 21:53:52.050657 systemd[1]: Finished ignition-disks.service.
Feb 12 21:53:52.052000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:53:52.053254 systemd[1]: Reached target initrd-root-device.target.
Feb 12 21:53:52.055293 systemd[1]: Reached target local-fs-pre.target.
Feb 12 21:53:52.057140 systemd[1]: Reached target local-fs.target.
Feb 12 21:53:52.059157 systemd[1]: Reached target sysinit.target.
Feb 12 21:53:52.061371 systemd[1]: Reached target basic.target.
Feb 12 21:53:52.065196 systemd[1]: Starting systemd-fsck-root.service...
Feb 12 21:53:52.109783 systemd-fsck[1152]: ROOT: clean, 602/553520 files, 56013/553472 blocks
Feb 12 21:53:52.115126 systemd[1]: Finished systemd-fsck-root.service.
Feb 12 21:53:52.117000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:53:52.118947 systemd[1]: Mounting sysroot.mount...
Feb 12 21:53:52.161528 kernel: EXT4-fs (nvme0n1p9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
Feb 12 21:53:52.161434 systemd[1]: Mounted sysroot.mount.
Feb 12 21:53:52.164509 systemd[1]: Reached target initrd-root-fs.target.
Feb 12 21:53:52.176619 systemd[1]: Mounting sysroot-usr.mount...
Feb 12 21:53:52.179391 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met.
Feb 12 21:53:52.179458 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb 12 21:53:52.179496 systemd[1]: Reached target ignition-diskful.target.
Feb 12 21:53:52.185431 systemd[1]: Mounted sysroot-usr.mount.
Feb 12 21:53:52.197642 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Feb 12 21:53:52.209670 systemd[1]: Starting initrd-setup-root.service...
Feb 12 21:53:52.226454 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1169)
Feb 12 21:53:52.231967 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Feb 12 21:53:52.232028 kernel: BTRFS info (device nvme0n1p6): using free space tree
Feb 12 21:53:52.232047 kernel: BTRFS info (device nvme0n1p6): has skinny extents
Feb 12 21:53:52.239290 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Feb 12 21:53:52.241722 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Feb 12 21:53:52.262833 initrd-setup-root[1174]: cut: /sysroot/etc/passwd: No such file or directory
Feb 12 21:53:52.285614 initrd-setup-root[1200]: cut: /sysroot/etc/group: No such file or directory
Feb 12 21:53:52.291117 initrd-setup-root[1208]: cut: /sysroot/etc/shadow: No such file or directory
Feb 12 21:53:52.297119 initrd-setup-root[1216]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 12 21:53:52.486404 systemd-networkd[1106]: eth0: Gained IPv6LL
Feb 12 21:53:52.498804 systemd[1]: Finished initrd-setup-root.service.
Feb 12 21:53:52.515012 kernel: kauditd_printk_skb: 23 callbacks suppressed
Feb 12 21:53:52.515046 kernel: audit: type=1130 audit(1707774832.498:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:53:52.498000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:53:52.500384 systemd[1]: Starting ignition-mount.service...
Feb 12 21:53:52.520282 systemd[1]: Starting sysroot-boot.service...
Feb 12 21:53:52.523080 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully.
Feb 12 21:53:52.523206 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully.
Feb 12 21:53:52.551813 ignition[1234]: INFO : Ignition 2.14.0
Feb 12 21:53:52.553515 ignition[1234]: INFO : Stage: mount
Feb 12 21:53:52.555291 ignition[1234]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 12 21:53:52.557829 ignition[1234]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Feb 12 21:53:52.573782 ignition[1234]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 12 21:53:52.575575 ignition[1234]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 12 21:53:52.577813 systemd[1]: Finished sysroot-boot.service.
Feb 12 21:53:52.579000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:53:52.586285 kernel: audit: type=1130 audit(1707774832.579:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:53:52.586366 ignition[1234]: INFO : PUT result: OK
Feb 12 21:53:52.590837 ignition[1234]: INFO : mount: mount passed
Feb 12 21:53:52.592668 ignition[1234]: INFO : Ignition finished successfully
Feb 12 21:53:52.595506 systemd[1]: Finished ignition-mount.service.
Feb 12 21:53:52.596846 systemd[1]: Starting ignition-files.service...
Feb 12 21:53:52.595000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:53:52.607320 kernel: audit: type=1130 audit(1707774832.595:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:53:52.611058 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Feb 12 21:53:52.628296 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/nvme0n1p6 scanned by mount (1245)
Feb 12 21:53:52.632138 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Feb 12 21:53:52.632201 kernel: BTRFS info (device nvme0n1p6): using free space tree
Feb 12 21:53:52.632219 kernel: BTRFS info (device nvme0n1p6): has skinny extents
Feb 12 21:53:52.640285 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Feb 12 21:53:52.643888 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Feb 12 21:53:52.658897 ignition[1264]: INFO : Ignition 2.14.0
Feb 12 21:53:52.658897 ignition[1264]: INFO : Stage: files
Feb 12 21:53:52.660868 ignition[1264]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 12 21:53:52.660868 ignition[1264]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Feb 12 21:53:52.671725 ignition[1264]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 12 21:53:52.673579 ignition[1264]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 12 21:53:52.675434 ignition[1264]: INFO : PUT result: OK
Feb 12 21:53:52.681095 ignition[1264]: DEBUG : files: compiled without relabeling support, skipping
Feb 12 21:53:52.686838 ignition[1264]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Feb 12 21:53:52.686838 ignition[1264]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 12 21:53:52.694311 ignition[1264]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 12 21:53:52.696212 ignition[1264]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Feb 12 21:53:52.698845 unknown[1264]: wrote ssh authorized keys file for user: core
Feb 12 21:53:52.700258 ignition[1264]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 12 21:53:52.702782 ignition[1264]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Feb 12 21:53:52.705094 ignition[1264]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Feb 12 21:53:52.705094 ignition[1264]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Feb 12 21:53:52.705094 ignition[1264]: INFO : GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Feb 12 21:53:52.785969 ignition[1264]: INFO : GET result: OK
Feb 12 21:53:52.883446 ignition[1264]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Feb 12 21:53:52.887710 ignition[1264]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/crictl-v1.26.0-linux-amd64.tar.gz"
Feb 12 21:53:52.887710 ignition[1264]: INFO : GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.26.0/crictl-v1.26.0-linux-amd64.tar.gz: attempt #1
Feb 12 21:53:53.315906 ignition[1264]: INFO : GET result: OK
Feb 12 21:53:53.457402 ignition[1264]: DEBUG : file matches expected sum of: a3a2c02a90b008686c20babaf272e703924db2a3e2a0d4e2a7c81d994cbc68c47458a4a354ecc243af095b390815c7f203348b9749351ae817bd52a522300449
Feb 12 21:53:53.461257 ignition[1264]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/crictl-v1.26.0-linux-amd64.tar.gz"
Feb 12 21:53:53.461257 ignition[1264]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.1.1.tgz"
Feb 12 21:53:53.461257 ignition[1264]: INFO : GET https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-amd64-v1.1.1.tgz: attempt #1
Feb 12 21:53:53.961505 ignition[1264]: INFO : GET result: OK
Feb 12 21:53:54.134007 ignition[1264]: DEBUG : file matches expected sum of: 4d0ed0abb5951b9cf83cba938ef84bdc5b681f4ac869da8143974f6a53a3ff30c666389fa462b9d14d30af09bf03f6cdf77598c572f8fb3ea00cecdda467a48d
Feb 12 21:53:54.136921 ignition[1264]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.1.1.tgz"
Feb 12 21:53:54.136921 ignition[1264]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/opt/bin/kubelet"
Feb 12 21:53:54.136921 ignition[1264]: INFO : GET https://dl.k8s.io/release/v1.26.5/bin/linux/amd64/kubelet: attempt #1
Feb 12 21:53:54.253928 ignition[1264]: INFO : GET result: OK
Feb 12 21:53:55.052355 ignition[1264]: DEBUG : file matches expected sum of: 40daf2a9b9e666c14b10e627da931bd79978628b1f23ef6429c1cb4fcba261f86ccff440c0dbb0070ee760fe55772b4fd279c4582dfbb17fa30bc94b7f00126b
Feb 12 21:53:55.052355 ignition[1264]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/bin/kubelet"
Feb 12 21:53:55.052355 ignition[1264]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nginx.yaml"
Feb 12 21:53:55.052355 ignition[1264]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nginx.yaml"
Feb 12 21:53:55.052355 ignition[1264]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/eks/bootstrap.sh"
Feb 12 21:53:55.052355 ignition[1264]: INFO : oem config not found in "/usr/share/oem", looking on oem partition
Feb 12 21:53:55.074834 ignition[1264]: INFO : op(1): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1774635620"
Feb 12 21:53:55.089941 kernel: BTRFS info: devid 1 device path /dev/nvme0n1p6 changed to /dev/disk/by-label/OEM scanned by ignition (1268)
Feb 12 21:53:55.089978 ignition[1264]: CRITICAL : op(1): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1774635620": device or resource busy
Feb 12 21:53:55.089978 ignition[1264]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem1774635620", trying btrfs: device or resource busy
Feb 12 21:53:55.089978 ignition[1264]: INFO : op(2): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1774635620"
Feb 12 21:53:55.089978 ignition[1264]: INFO : op(2): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1774635620"
Feb 12 21:53:55.109326 ignition[1264]: INFO : op(3): [started] unmounting "/mnt/oem1774635620"
Feb 12 21:53:55.111923 systemd[1]: mnt-oem1774635620.mount: Deactivated successfully.
Feb 12 21:53:55.113879 ignition[1264]: INFO : op(3): [finished] unmounting "/mnt/oem1774635620"
Feb 12 21:53:55.115192 ignition[1264]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/eks/bootstrap.sh"
Feb 12 21:53:55.115192 ignition[1264]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/bin/kubeadm"
Feb 12 21:53:55.120071 ignition[1264]: INFO : GET https://dl.k8s.io/release/v1.26.5/bin/linux/amd64/kubeadm: attempt #1
Feb 12 21:53:55.185204 ignition[1264]: INFO : GET result: OK
Feb 12 21:53:55.442708 ignition[1264]: DEBUG : file matches expected sum of: 1c324cd645a7bf93d19d24c87498d9a17878eb1cc927e2680200ffeab2f85051ddec47d85b79b8e774042dc6726299ad3d7caf52c060701f00deba30dc33f660
Feb 12 21:53:55.446483 ignition[1264]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/bin/kubeadm"
Feb 12 21:53:55.473624 ignition[1264]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/etc/flatcar/update.conf"
Feb 12 21:53:55.473624 ignition[1264]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 12 21:53:55.473624 ignition[1264]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/opt/bin/kubectl"
Feb 12 21:53:55.473624 ignition[1264]: INFO : GET https://dl.k8s.io/release/v1.26.5/bin/linux/amd64/kubectl: attempt #1
Feb 12 21:53:55.538000 ignition[1264]: INFO : GET result: OK
Feb 12 21:53:55.796384 ignition[1264]: DEBUG : file matches expected sum of: 97840854134909d75a1a2563628cc4ba632067369ce7fc8a8a1e90a387d32dd7bfd73f4f5b5a82ef842088e7470692951eb7fc869c5f297dd740f855672ee628
Feb 12 21:53:55.800500 ignition[1264]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/opt/bin/kubectl"
Feb 12 21:53:55.800500 ignition[1264]: INFO : files: createFilesystemsFiles: createFiles: op(d): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 12 21:53:55.800500 ignition[1264]: INFO : files: createFilesystemsFiles: createFiles: op(d): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 12 21:53:55.800500 ignition[1264]: INFO : files: createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/etc/docker/daemon.json"
Feb 12 21:53:55.800500 ignition[1264]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/etc/docker/daemon.json"
Feb 12 21:53:55.800500 ignition[1264]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Feb 12 21:53:55.800500 ignition[1264]: INFO : GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Feb 12 21:53:56.251207 ignition[1264]: INFO : GET result: OK
Feb 12 21:53:56.361257 ignition[1264]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Feb 12 21:53:56.363780 ignition[1264]: INFO : files: createFilesystemsFiles: createFiles: op(10): [started] writing file "/sysroot/home/core/install.sh"
Feb 12 21:53:56.366081 ignition[1264]: INFO : files: createFilesystemsFiles: createFiles: op(10): [finished] writing file "/sysroot/home/core/install.sh"
Feb 12 21:53:56.369741 ignition[1264]: INFO : files: createFilesystemsFiles: createFiles: op(11): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 12 21:53:56.373534 ignition[1264]: INFO : files: createFilesystemsFiles: createFiles: op(11): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 12 21:53:56.377054 ignition[1264]: INFO : files: createFilesystemsFiles: createFiles: op(12): [started] writing file "/sysroot/etc/systemd/system/nvidia.service"
Feb 12 21:53:56.380496 ignition[1264]: INFO : oem config not found in "/usr/share/oem", looking on oem partition
Feb 12 21:53:56.390124 ignition[1264]: INFO : op(4): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem4023897534"
Feb 12 21:53:56.392741 ignition[1264]: CRITICAL : op(4): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem4023897534": device or resource busy
Feb 12 21:53:56.392741 ignition[1264]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem4023897534", trying btrfs: device or resource busy
Feb 12 21:53:56.392741 ignition[1264]: INFO : op(5): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem4023897534"
Feb 12 21:53:56.399815 ignition[1264]: INFO : op(5): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem4023897534"
Feb 12 21:53:56.399815 ignition[1264]: INFO : op(6): [started] unmounting "/mnt/oem4023897534"
Feb 12 21:53:56.399815 ignition[1264]: INFO : op(6): [finished] unmounting "/mnt/oem4023897534"
Feb 12 21:53:56.399815 ignition[1264]: INFO : files: createFilesystemsFiles: createFiles: op(12): [finished] writing file "/sysroot/etc/systemd/system/nvidia.service"
Feb 12 21:53:56.399815 ignition[1264]: INFO : files: createFilesystemsFiles: createFiles: op(13): [started] writing file "/sysroot/etc/amazon/ssm/seelog.xml"
Feb 12 21:53:56.399815 ignition[1264]: INFO : oem config not found in "/usr/share/oem", looking on oem partition
Feb 12 21:53:56.396705 systemd[1]: mnt-oem4023897534.mount: Deactivated successfully.
Feb 12 21:53:56.416237 ignition[1264]: INFO : op(7): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2506215307"
Feb 12 21:53:56.418123 ignition[1264]: CRITICAL : op(7): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2506215307": device or resource busy
Feb 12 21:53:56.418123 ignition[1264]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem2506215307", trying btrfs: device or resource busy
Feb 12 21:53:56.427546 ignition[1264]: INFO : op(8): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2506215307"
Feb 12 21:53:56.427546 ignition[1264]: INFO : op(8): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2506215307"
Feb 12 21:53:56.436008 ignition[1264]: INFO : op(9): [started] unmounting "/mnt/oem2506215307"
Feb 12 21:53:56.436008 ignition[1264]: INFO : op(9): [finished] unmounting "/mnt/oem2506215307"
Feb 12 21:53:56.436008 ignition[1264]: INFO : files: createFilesystemsFiles: createFiles: op(13): [finished] writing file "/sysroot/etc/amazon/ssm/seelog.xml"
Feb 12 21:53:56.436008 ignition[1264]: INFO : files: createFilesystemsFiles: createFiles: op(14): [started] writing file "/sysroot/etc/amazon/ssm/amazon-ssm-agent.json"
Feb 12 21:53:56.436008 ignition[1264]: INFO : oem config not found in "/usr/share/oem", looking on oem partition
Feb 12 21:53:56.455537 systemd[1]: mnt-oem2506215307.mount: Deactivated successfully.
Feb 12 21:53:56.468945 ignition[1264]: INFO : op(a): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem4176723627"
Feb 12 21:53:56.471189 ignition[1264]: CRITICAL : op(a): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem4176723627": device or resource busy
Feb 12 21:53:56.471189 ignition[1264]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem4176723627", trying btrfs: device or resource busy
Feb 12 21:53:56.471189 ignition[1264]: INFO : op(b): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem4176723627"
Feb 12 21:53:56.482656 ignition[1264]: INFO : op(b): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem4176723627"
Feb 12 21:53:56.482656 ignition[1264]: INFO : op(c): [started] unmounting "/mnt/oem4176723627"
Feb 12 21:53:56.482656 ignition[1264]: INFO : op(c): [finished] unmounting "/mnt/oem4176723627"
Feb 12 21:53:56.482656 ignition[1264]: INFO : files: createFilesystemsFiles: createFiles: op(14): [finished] writing file "/sysroot/etc/amazon/ssm/amazon-ssm-agent.json"
Feb 12 21:53:56.482656 ignition[1264]: INFO : files: op(15): [started] processing unit "coreos-metadata-sshkeys@.service"
Feb 12 21:53:56.482656 ignition[1264]: INFO : files: op(15): [finished] processing unit "coreos-metadata-sshkeys@.service"
Feb 12 21:53:56.482656 ignition[1264]: INFO : files: op(16): [started] processing unit "amazon-ssm-agent.service"
Feb 12 21:53:56.482656 ignition[1264]: INFO : files: op(16): op(17): [started] writing unit "amazon-ssm-agent.service" at "/sysroot/etc/systemd/system/amazon-ssm-agent.service"
Feb 12 21:53:56.482656 ignition[1264]: INFO : files: op(16): op(17): [finished] writing unit "amazon-ssm-agent.service" at "/sysroot/etc/systemd/system/amazon-ssm-agent.service"
Feb 12 21:53:56.482656 ignition[1264]: INFO : files: op(16): [finished] processing unit "amazon-ssm-agent.service"
Feb 12 21:53:56.482656 ignition[1264]: INFO : files: op(18): [started] processing unit "nvidia.service"
Feb 12 21:53:56.482656 ignition[1264]: INFO : files: op(18): [finished] processing unit "nvidia.service"
Feb 12 21:53:56.482656 ignition[1264]: INFO : files: op(19): [started] processing unit "containerd.service"
Feb 12 21:53:56.482656 ignition[1264]: INFO : files: op(19): op(1a): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Feb 12 21:53:56.482656 ignition[1264]: INFO : files: op(19): op(1a): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Feb 12 21:53:56.482656 ignition[1264]: INFO : files: op(19): [finished] processing unit "containerd.service"
Feb 12 21:53:56.482656 ignition[1264]: INFO : files: op(1b): [started] processing unit "prepare-cni-plugins.service"
Feb 12 21:53:56.482656 ignition[1264]: INFO : files: op(1b): op(1c): [started] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service"
Feb 12 21:53:56.482656 ignition[1264]: INFO : files: op(1b): op(1c): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service"
Feb 12 21:53:56.482656 ignition[1264]: INFO : files: op(1b): [finished] processing unit "prepare-cni-plugins.service"
Feb 12 21:53:56.525926 ignition[1264]: INFO : files: op(1d): [started] processing unit "prepare-critools.service"
Feb 12 21:53:56.525926 ignition[1264]: INFO : files: op(1d): op(1e): [started] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service"
Feb 12 21:53:56.525926 ignition[1264]: INFO : files: op(1d): op(1e): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service"
Feb 12 21:53:56.525926 ignition[1264]: INFO : files: op(1d): [finished] processing unit "prepare-critools.service"
Feb 12 21:53:56.525926 ignition[1264]: INFO : files: op(1f): [started] processing unit "prepare-helm.service"
Feb 12 21:53:56.525926 ignition[1264]: INFO : files: op(1f): op(20): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 12 21:53:56.525926 ignition[1264]: INFO : files: op(1f): op(20): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 12 21:53:56.525926 ignition[1264]: INFO : files: op(1f): [finished] processing unit "prepare-helm.service"
Feb 12 21:53:56.525926 ignition[1264]: INFO : files: op(21): [started] setting preset to enabled for "prepare-critools.service"
Feb 12 21:53:56.525926 ignition[1264]: INFO : files: op(21): [finished] setting preset to enabled for "prepare-critools.service"
Feb 12 21:53:56.525926 ignition[1264]: INFO : files: op(22): [started] setting preset to enabled for "prepare-helm.service"
Feb 12 21:53:56.525926 ignition[1264]: INFO : files: op(22): [finished] setting preset to enabled for "prepare-helm.service"
Feb 12 21:53:56.525926 ignition[1264]: INFO : files: op(23): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service "
Feb 12 21:53:56.525926 ignition[1264]: INFO : files: op(23): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service "
Feb 12 21:53:56.525926 ignition[1264]: INFO : files: op(24): [started] setting preset to enabled for "amazon-ssm-agent.service"
Feb 12 21:53:56.525926 ignition[1264]: INFO : files: op(24): [finished] setting preset to enabled for "amazon-ssm-agent.service"
Feb 12 21:53:56.525926 ignition[1264]: INFO : files: op(25): [started] setting preset to enabled for "nvidia.service"
Feb 12 21:53:56.525926 ignition[1264]: INFO : files: op(25): [finished] setting preset to enabled for "nvidia.service"
Feb 12 21:53:56.525926 ignition[1264]: INFO : files: op(26): [started] setting preset to enabled for "prepare-cni-plugins.service"
Feb 12 21:53:56.525926 ignition[1264]: INFO : files: op(26): [finished] setting preset to enabled for "prepare-cni-plugins.service"
Feb 12 21:53:56.589400 kernel: audit: type=1130 audit(1707774836.546:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:53:56.546000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:53:56.540372 systemd[1]: Finished ignition-files.service.
Feb 12 21:53:56.590684 ignition[1264]: INFO : files: createResultFile: createFiles: op(27): [started] writing file "/sysroot/etc/.ignition-result.json"
Feb 12 21:53:56.590684 ignition[1264]: INFO : files: createResultFile: createFiles: op(27): [finished] writing file "/sysroot/etc/.ignition-result.json"
Feb 12 21:53:56.590684 ignition[1264]: INFO : files: files passed
Feb 12 21:53:56.590684 ignition[1264]: INFO : Ignition finished successfully
Feb 12 21:53:56.621518 kernel: audit: type=1130 audit(1707774836.596:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:53:56.621582 kernel: audit: type=1131 audit(1707774836.596:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:53:56.621612 kernel: audit: type=1130 audit(1707774836.606:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:53:56.596000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:53:56.596000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:53:56.606000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:53:56.552210 systemd[1]: Starting initrd-setup-root-after-ignition.service...
Feb 12 21:53:56.560910 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
Feb 12 21:53:56.627788 initrd-setup-root-after-ignition[1289]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 12 21:53:56.566710 systemd[1]: Starting ignition-quench.service...
Feb 12 21:53:56.587072 systemd[1]: ignition-quench.service: Deactivated successfully.
Feb 12 21:53:56.587175 systemd[1]: Finished ignition-quench.service.
Feb 12 21:53:56.596827 systemd[1]: Finished initrd-setup-root-after-ignition.service.
Feb 12 21:53:56.607127 systemd[1]: Reached target ignition-complete.target.
Feb 12 21:53:56.623210 systemd[1]: Starting initrd-parse-etc.service...
Feb 12 21:53:56.668507 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb 12 21:53:56.668627 systemd[1]: Finished initrd-parse-etc.service.
Feb 12 21:53:56.671000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:53:56.672174 systemd[1]: Reached target initrd-fs.target.
Feb 12 21:53:56.692993 kernel: audit: type=1130 audit(1707774836.671:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:53:56.693034 kernel: audit: type=1131 audit(1707774836.671:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:53:56.671000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:53:56.693873 systemd[1]: Reached target initrd.target.
Feb 12 21:53:56.696348 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
Feb 12 21:53:56.700252 systemd[1]: Starting dracut-pre-pivot.service...
Feb 12 21:53:56.721880 systemd[1]: Finished dracut-pre-pivot.service.
Feb 12 21:53:56.732434 kernel: audit: type=1130 audit(1707774836.721:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:53:56.721000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:53:56.723647 systemd[1]: Starting initrd-cleanup.service...
Feb 12 21:53:56.746338 systemd[1]: Stopped target nss-lookup.target.
Feb 12 21:53:56.748482 systemd[1]: Stopped target remote-cryptsetup.target.
Feb 12 21:53:56.751069 systemd[1]: Stopped target timers.target.
Feb 12 21:53:56.753357 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb 12 21:53:56.754439 systemd[1]: Stopped dracut-pre-pivot.service.
Feb 12 21:53:56.757000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:53:56.757853 systemd[1]: Stopped target initrd.target.
Feb 12 21:53:56.759545 systemd[1]: Stopped target basic.target.
Feb 12 21:53:56.762241 systemd[1]: Stopped target ignition-complete.target.
Feb 12 21:53:56.764222 systemd[1]: Stopped target ignition-diskful.target.
Feb 12 21:53:56.766881 systemd[1]: Stopped target initrd-root-device.target.
Feb 12 21:53:56.769358 systemd[1]: Stopped target remote-fs.target.
Feb 12 21:53:56.771253 systemd[1]: Stopped target remote-fs-pre.target.
Feb 12 21:53:56.773324 systemd[1]: Stopped target sysinit.target.
Feb 12 21:53:56.775184 systemd[1]: Stopped target local-fs.target.
Feb 12 21:53:56.776984 systemd[1]: Stopped target local-fs-pre.target.
Feb 12 21:53:56.779110 systemd[1]: Stopped target swap.target.
Feb 12 21:53:56.781376 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Feb 12 21:53:56.783309 systemd[1]: Stopped dracut-pre-mount.service.
Feb 12 21:53:56.785000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:53:56.785906 systemd[1]: Stopped target cryptsetup.target.
Feb 12 21:53:56.788040 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Feb 12 21:53:56.789549 systemd[1]: Stopped dracut-initqueue.service.
Feb 12 21:53:56.791000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:53:56.791855 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Feb 12 21:53:56.793381 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
Feb 12 21:53:56.795000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:53:56.795968 systemd[1]: ignition-files.service: Deactivated successfully.
Feb 12 21:53:56.797253 systemd[1]: Stopped ignition-files.service.
Feb 12 21:53:56.799000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:53:56.800743 systemd[1]: Stopping ignition-mount.service...
Feb 12 21:53:56.805000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:53:56.809000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:53:56.811455 iscsid[1113]: iscsid shutting down.
Feb 12 21:53:56.803317 systemd[1]: Stopping iscsid.service...
Feb 12 21:53:56.814986 ignition[1302]: INFO : Ignition 2.14.0
Feb 12 21:53:56.814986 ignition[1302]: INFO : Stage: umount
Feb 12 21:53:56.814986 ignition[1302]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 12 21:53:56.814986 ignition[1302]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Feb 12 21:53:56.814000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:53:56.804410 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Feb 12 21:53:56.804629 systemd[1]: Stopped kmod-static-nodes.service.
Feb 12 21:53:56.807340 systemd[1]: Stopping sysroot-boot.service...
Feb 12 21:53:56.808384 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Feb 12 21:53:56.808705 systemd[1]: Stopped systemd-udev-trigger.service.
Feb 12 21:53:56.810188 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Feb 12 21:53:56.830431 ignition[1302]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 12 21:53:56.830431 ignition[1302]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 12 21:53:56.811423 systemd[1]: Stopped dracut-pre-trigger.service.
Feb 12 21:53:56.835390 ignition[1302]: INFO : PUT result: OK
Feb 12 21:53:56.829381 systemd[1]: iscsid.service: Deactivated successfully.
Feb 12 21:53:56.829485 systemd[1]: Stopped iscsid.service.
Feb 12 21:53:56.839000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:53:56.840848 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Feb 12 21:53:56.843872 ignition[1302]: INFO : umount: umount passed
Feb 12 21:53:56.843872 ignition[1302]: INFO : Ignition finished successfully
Feb 12 21:53:56.843910 systemd[1]: Finished initrd-cleanup.service.
Feb 12 21:53:56.847000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:53:56.847000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:53:56.848958 systemd[1]: ignition-mount.service: Deactivated successfully.
Feb 12 21:53:56.850020 systemd[1]: Stopped ignition-mount.service.
Feb 12 21:53:56.851000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:53:56.852513 systemd[1]: ignition-disks.service: Deactivated successfully.
Feb 12 21:53:56.852564 systemd[1]: Stopped ignition-disks.service.
Feb 12 21:53:56.854000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:53:56.855422 systemd[1]: ignition-kargs.service: Deactivated successfully.
Feb 12 21:53:56.855466 systemd[1]: Stopped ignition-kargs.service.
Feb 12 21:53:56.856000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:53:56.857998 systemd[1]: ignition-fetch.service: Deactivated successfully.
Feb 12 21:53:56.861000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:53:56.858040 systemd[1]: Stopped ignition-fetch.service.
Feb 12 21:53:56.863000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:53:56.861682 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Feb 12 21:53:56.861726 systemd[1]: Stopped ignition-fetch-offline.service.
Feb 12 21:53:56.863557 systemd[1]: Stopped target paths.target.
Feb 12 21:53:56.865621 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Feb 12 21:53:56.869471 systemd[1]: Stopped systemd-ask-password-console.path.
Feb 12 21:53:56.871324 systemd[1]: Stopped target slices.target.
Feb 12 21:53:56.872962 systemd[1]: Stopped target sockets.target.
Feb 12 21:53:56.874629 systemd[1]: iscsid.socket: Deactivated successfully.
Feb 12 21:53:56.874679 systemd[1]: Closed iscsid.socket.
Feb 12 21:53:56.876798 systemd[1]: ignition-setup.service: Deactivated successfully.
Feb 12 21:53:56.876841 systemd[1]: Stopped ignition-setup.service.
Feb 12 21:53:56.878000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:53:56.880314 systemd[1]: Stopping iscsiuio.service...
Feb 12 21:53:56.882000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:53:56.883000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:53:56.881396 systemd[1]: sysroot-boot.service: Deactivated successfully.
Feb 12 21:53:56.881480 systemd[1]: Stopped sysroot-boot.service.
Feb 12 21:53:56.882642 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Feb 12 21:53:56.882700 systemd[1]: Stopped initrd-setup-root.service.
Feb 12 21:53:56.891209 systemd[1]: iscsiuio.service: Deactivated successfully.
Feb 12 21:53:56.891316 systemd[1]: Stopped iscsiuio.service.
Feb 12 21:53:56.893000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:53:56.894958 systemd[1]: Stopped target network.target.
Feb 12 21:53:56.896823 systemd[1]: iscsiuio.socket: Deactivated successfully.
Feb 12 21:53:56.896863 systemd[1]: Closed iscsiuio.socket.
Feb 12 21:53:56.899812 systemd[1]: Stopping systemd-networkd.service...
Feb 12 21:53:56.901658 systemd[1]: Stopping systemd-resolved.service...
Feb 12 21:53:56.904360 systemd-networkd[1106]: eth0: DHCPv6 lease lost Feb 12 21:53:56.906119 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 12 21:53:56.906218 systemd[1]: Stopped systemd-networkd.service. Feb 12 21:53:56.909000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:53:56.910465 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 12 21:53:56.911644 systemd[1]: Stopped systemd-resolved.service. Feb 12 21:53:56.913000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:53:56.914000 audit: BPF prog-id=9 op=UNLOAD Feb 12 21:53:56.914792 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 12 21:53:56.914835 systemd[1]: Closed systemd-networkd.socket. Feb 12 21:53:56.917000 audit: BPF prog-id=6 op=UNLOAD Feb 12 21:53:56.918701 systemd[1]: Stopping network-cleanup.service... Feb 12 21:53:56.920000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:53:56.919599 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 12 21:53:56.923000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:53:56.919695 systemd[1]: Stopped parse-ip-for-networkd.service. Feb 12 21:53:56.921690 systemd[1]: systemd-sysctl.service: Deactivated successfully. 
Feb 12 21:53:56.926000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:53:56.921729 systemd[1]: Stopped systemd-sysctl.service. Feb 12 21:53:56.924019 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 12 21:53:56.924059 systemd[1]: Stopped systemd-modules-load.service. Feb 12 21:53:56.935000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:53:56.928480 systemd[1]: Stopping systemd-udevd.service... Feb 12 21:53:56.934244 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 12 21:53:56.934440 systemd[1]: Stopped systemd-udevd.service. Feb 12 21:53:56.937720 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 12 21:53:56.937773 systemd[1]: Closed systemd-udevd-control.socket. Feb 12 21:53:56.940618 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 12 21:53:56.946000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:53:56.948000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:53:56.940661 systemd[1]: Closed systemd-udevd-kernel.socket. Feb 12 21:53:56.950000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:53:56.943844 systemd[1]: dracut-pre-udev.service: Deactivated successfully. 
Feb 12 21:53:56.943892 systemd[1]: Stopped dracut-pre-udev.service. Feb 12 21:53:56.946693 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 12 21:53:56.946735 systemd[1]: Stopped dracut-cmdline.service. Feb 12 21:53:56.948869 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 12 21:53:56.948906 systemd[1]: Stopped dracut-cmdline-ask.service. Feb 12 21:53:56.951822 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Feb 12 21:53:56.960473 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 12 21:53:56.960542 systemd[1]: Stopped systemd-vconsole-setup.service. Feb 12 21:53:56.963145 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 12 21:53:56.962000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:53:56.964000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:53:56.963251 systemd[1]: Stopped network-cleanup.service. Feb 12 21:53:56.966000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:53:56.966000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:53:56.964662 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 12 21:53:56.964764 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Feb 12 21:53:56.966499 systemd[1]: Reached target initrd-switch-root.target. Feb 12 21:53:56.968570 systemd[1]: Starting initrd-switch-root.service... 
Feb 12 21:53:56.977870 systemd[1]: Switching root. Feb 12 21:53:56.981000 audit: BPF prog-id=5 op=UNLOAD Feb 12 21:53:56.981000 audit: BPF prog-id=4 op=UNLOAD Feb 12 21:53:56.981000 audit: BPF prog-id=3 op=UNLOAD Feb 12 21:53:56.982000 audit: BPF prog-id=8 op=UNLOAD Feb 12 21:53:56.982000 audit: BPF prog-id=7 op=UNLOAD Feb 12 21:53:56.996502 systemd-journald[185]: Journal stopped Feb 12 21:54:02.661769 systemd-journald[185]: Received SIGTERM from PID 1 (systemd). Feb 12 21:54:02.661862 kernel: SELinux: Class mctp_socket not defined in policy. Feb 12 21:54:02.661895 kernel: SELinux: Class anon_inode not defined in policy. Feb 12 21:54:02.661920 kernel: SELinux: the above unknown classes and permissions will be allowed Feb 12 21:54:02.661943 kernel: SELinux: policy capability network_peer_controls=1 Feb 12 21:54:02.661960 kernel: SELinux: policy capability open_perms=1 Feb 12 21:54:02.661978 kernel: SELinux: policy capability extended_socket_class=1 Feb 12 21:54:02.662002 kernel: SELinux: policy capability always_check_network=0 Feb 12 21:54:02.662022 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 12 21:54:02.662055 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 12 21:54:02.662078 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 12 21:54:02.662211 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 12 21:54:02.662232 kernel: kauditd_printk_skb: 41 callbacks suppressed Feb 12 21:54:02.662252 kernel: audit: type=1403 audit(1707774837.850:85): auid=4294967295 ses=4294967295 lsm=selinux res=1 Feb 12 21:54:02.662778 systemd[1]: Successfully loaded SELinux policy in 96.395ms. Feb 12 21:54:02.662829 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 18.384ms. 
Feb 12 21:54:02.662855 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Feb 12 21:54:02.662877 systemd[1]: Detected virtualization amazon. Feb 12 21:54:02.662897 systemd[1]: Detected architecture x86-64. Feb 12 21:54:02.662918 systemd[1]: Detected first boot. Feb 12 21:54:02.662937 systemd[1]: Initializing machine ID from VM UUID. Feb 12 21:54:02.662959 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Feb 12 21:54:02.662980 kernel: audit: type=1400 audit(1707774838.253:86): avc: denied { associate } for pid=1352 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Feb 12 21:54:02.663001 kernel: audit: type=1300 audit(1707774838.253:86): arch=c000003e syscall=188 success=yes exit=0 a0=c00015767c a1=c0000daae0 a2=c0000e3400 a3=32 items=0 ppid=1335 pid=1352 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 21:54:02.663024 kernel: audit: type=1327 audit(1707774838.253:86): proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Feb 12 21:54:02.663050 kernel: audit: type=1400 audit(1707774838.256:87): avc: denied { associate } for pid=1352 comm="torcx-generator" 
name="lib" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Feb 12 21:54:02.663069 kernel: audit: type=1300 audit(1707774838.256:87): arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c000157755 a2=1ed a3=0 items=2 ppid=1335 pid=1352 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 21:54:02.663085 kernel: audit: type=1307 audit(1707774838.256:87): cwd="/" Feb 12 21:54:02.663102 kernel: audit: type=1302 audit(1707774838.256:87): item=0 name=(null) inode=2 dev=00:29 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:54:02.663122 kernel: audit: type=1302 audit(1707774838.256:87): item=1 name=(null) inode=3 dev=00:29 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:54:02.663140 kernel: audit: type=1327 audit(1707774838.256:87): proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Feb 12 21:54:02.663160 systemd[1]: Populated /etc with preset unit settings. Feb 12 21:54:02.663179 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 12 21:54:02.663200 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Feb 12 21:54:02.663323 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 12 21:54:02.663349 systemd[1]: Queued start job for default target multi-user.target. Feb 12 21:54:02.663366 systemd[1]: Created slice system-addon\x2dconfig.slice. Feb 12 21:54:02.663383 systemd[1]: Created slice system-addon\x2drun.slice. Feb 12 21:54:02.663402 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice. Feb 12 21:54:02.663493 systemd[1]: Created slice system-getty.slice. Feb 12 21:54:02.663526 systemd[1]: Created slice system-modprobe.slice. Feb 12 21:54:02.663545 systemd[1]: Created slice system-serial\x2dgetty.slice. Feb 12 21:54:02.663563 systemd[1]: Created slice system-system\x2dcloudinit.slice. Feb 12 21:54:02.663581 systemd[1]: Created slice system-systemd\x2dfsck.slice. Feb 12 21:54:02.663599 systemd[1]: Created slice user.slice. Feb 12 21:54:02.663617 systemd[1]: Started systemd-ask-password-console.path. Feb 12 21:54:02.663635 systemd[1]: Started systemd-ask-password-wall.path. Feb 12 21:54:02.663656 systemd[1]: Set up automount boot.automount. Feb 12 21:54:02.663674 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Feb 12 21:54:02.663692 systemd[1]: Reached target integritysetup.target. Feb 12 21:54:02.663713 systemd[1]: Reached target remote-cryptsetup.target. Feb 12 21:54:02.663734 systemd[1]: Reached target remote-fs.target. Feb 12 21:54:02.663752 systemd[1]: Reached target slices.target. Feb 12 21:54:02.663773 systemd[1]: Reached target swap.target. Feb 12 21:54:02.663793 systemd[1]: Reached target torcx.target. Feb 12 21:54:02.663812 systemd[1]: Reached target veritysetup.target. Feb 12 21:54:02.663835 systemd[1]: Listening on systemd-coredump.socket. Feb 12 21:54:02.663857 systemd[1]: Listening on systemd-initctl.socket. 
Feb 12 21:54:02.663874 systemd[1]: Listening on systemd-journald-audit.socket. Feb 12 21:54:02.663892 systemd[1]: Listening on systemd-journald-dev-log.socket. Feb 12 21:54:02.663909 systemd[1]: Listening on systemd-journald.socket. Feb 12 21:54:02.663985 systemd[1]: Listening on systemd-networkd.socket. Feb 12 21:54:02.664004 systemd[1]: Listening on systemd-udevd-control.socket. Feb 12 21:54:02.664022 systemd[1]: Listening on systemd-udevd-kernel.socket. Feb 12 21:54:02.664041 systemd[1]: Listening on systemd-userdbd.socket. Feb 12 21:54:02.664060 systemd[1]: Mounting dev-hugepages.mount... Feb 12 21:54:02.664082 systemd[1]: Mounting dev-mqueue.mount... Feb 12 21:54:02.664101 systemd[1]: Mounting media.mount... Feb 12 21:54:02.664120 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 12 21:54:02.664140 systemd[1]: Mounting sys-kernel-debug.mount... Feb 12 21:54:02.664157 systemd[1]: Mounting sys-kernel-tracing.mount... Feb 12 21:54:02.664176 systemd[1]: Mounting tmp.mount... Feb 12 21:54:02.664197 systemd[1]: Starting flatcar-tmpfiles.service... Feb 12 21:54:02.664219 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Feb 12 21:54:02.664237 systemd[1]: Starting kmod-static-nodes.service... Feb 12 21:54:02.664258 systemd[1]: Starting modprobe@configfs.service... Feb 12 21:54:02.683274 systemd[1]: Starting modprobe@dm_mod.service... Feb 12 21:54:02.683303 systemd[1]: Starting modprobe@drm.service... Feb 12 21:54:02.683323 systemd[1]: Starting modprobe@efi_pstore.service... Feb 12 21:54:02.683342 systemd[1]: Starting modprobe@fuse.service... Feb 12 21:54:02.683363 systemd[1]: Starting modprobe@loop.service... Feb 12 21:54:02.683385 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). 
Feb 12 21:54:02.683411 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Feb 12 21:54:02.683434 systemd[1]: (This warning is only shown for the first unit using IP firewalling.) Feb 12 21:54:02.683453 systemd[1]: Starting systemd-journald.service... Feb 12 21:54:02.683472 systemd[1]: Starting systemd-modules-load.service... Feb 12 21:54:02.683491 systemd[1]: Starting systemd-network-generator.service... Feb 12 21:54:02.683510 systemd[1]: Starting systemd-remount-fs.service... Feb 12 21:54:02.683530 systemd[1]: Starting systemd-udev-trigger.service... Feb 12 21:54:02.683551 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 12 21:54:02.683570 systemd[1]: Mounted dev-hugepages.mount. Feb 12 21:54:02.683589 systemd[1]: Mounted dev-mqueue.mount. Feb 12 21:54:02.683607 systemd[1]: Mounted media.mount. Feb 12 21:54:02.683628 systemd[1]: Mounted sys-kernel-debug.mount. Feb 12 21:54:02.683646 systemd[1]: Mounted sys-kernel-tracing.mount. Feb 12 21:54:02.683666 systemd[1]: Mounted tmp.mount. Feb 12 21:54:02.683683 systemd[1]: Finished kmod-static-nodes.service. Feb 12 21:54:02.683703 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 12 21:54:02.683721 systemd[1]: Finished modprobe@configfs.service. Feb 12 21:54:02.683741 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 12 21:54:02.683761 systemd[1]: Finished modprobe@dm_mod.service. Feb 12 21:54:02.683780 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 12 21:54:02.683803 systemd[1]: Finished modprobe@drm.service. Feb 12 21:54:02.683822 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 12 21:54:02.683840 systemd[1]: Finished modprobe@efi_pstore.service. Feb 12 21:54:02.683858 systemd[1]: Finished systemd-modules-load.service. 
Feb 12 21:54:02.683878 systemd[1]: Finished systemd-network-generator.service. Feb 12 21:54:02.683897 systemd[1]: Finished systemd-remount-fs.service. Feb 12 21:54:02.683916 systemd[1]: Reached target network-pre.target. Feb 12 21:54:02.683943 systemd[1]: Mounting sys-kernel-config.mount... Feb 12 21:54:02.683962 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 12 21:54:02.683984 systemd[1]: Starting systemd-hwdb-update.service... Feb 12 21:54:02.684004 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 12 21:54:02.684029 systemd-journald[1448]: Journal started Feb 12 21:54:02.684115 systemd-journald[1448]: Runtime Journal (/run/log/journal/ec28fe2624bd710d9ca98f37fa2abde7) is 4.8M, max 38.7M, 33.9M free. Feb 12 21:54:02.346000 audit[1]: AVC avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 12 21:54:02.346000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Feb 12 21:54:02.608000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:54:02.613000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:54:02.613000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 21:54:02.618000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:54:02.618000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:54:02.624000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:54:02.624000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:54:02.630000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:54:02.630000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:54:02.634000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:54:02.637000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 21:54:02.641000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:54:02.658000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Feb 12 21:54:02.658000 audit[1448]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7ffc40ff0d80 a2=4000 a3=7ffc40ff0e1c items=0 ppid=1 pid=1448 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 21:54:02.658000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Feb 12 21:54:02.724871 systemd[1]: Starting systemd-random-seed.service... Feb 12 21:54:02.724959 systemd[1]: Starting systemd-sysctl.service... Feb 12 21:54:02.724991 systemd[1]: Started systemd-journald.service. Feb 12 21:54:02.725020 kernel: loop: module loaded Feb 12 21:54:02.704000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:54:02.709000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:54:02.709000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 21:54:02.721000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:54:02.706088 systemd[1]: Mounted sys-kernel-config.mount. Feb 12 21:54:02.708488 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 12 21:54:02.708729 systemd[1]: Finished modprobe@loop.service. Feb 12 21:54:02.741421 systemd-journald[1448]: Time spent on flushing to /var/log/journal/ec28fe2624bd710d9ca98f37fa2abde7 is 110.249ms for 1157 entries. Feb 12 21:54:02.741421 systemd-journald[1448]: System Journal (/var/log/journal/ec28fe2624bd710d9ca98f37fa2abde7) is 8.0M, max 195.6M, 187.6M free. Feb 12 21:54:02.883724 systemd-journald[1448]: Received client request to flush runtime journal. Feb 12 21:54:02.883789 kernel: fuse: init (API version 7.34) Feb 12 21:54:02.764000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:54:02.779000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:54:02.779000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:54:02.712518 systemd[1]: Starting systemd-journal-flush.service... Feb 12 21:54:02.714228 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Feb 12 21:54:02.720888 systemd[1]: Finished systemd-random-seed.service. Feb 12 21:54:02.722487 systemd[1]: Reached target first-boot-complete.target. 
Feb 12 21:54:02.762594 systemd[1]: Finished systemd-sysctl.service. Feb 12 21:54:02.886000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:54:02.777758 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 12 21:54:02.778154 systemd[1]: Finished modprobe@fuse.service. Feb 12 21:54:02.793810 systemd[1]: Mounting sys-fs-fuse-connections.mount... Feb 12 21:54:02.820389 systemd[1]: Mounted sys-fs-fuse-connections.mount. Feb 12 21:54:02.885659 systemd[1]: Finished systemd-journal-flush.service. Feb 12 21:54:02.897462 kernel: kauditd_printk_skb: 24 callbacks suppressed Feb 12 21:54:02.897675 kernel: audit: type=1130 audit(1707774842.886:110): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:54:02.909872 systemd[1]: Finished systemd-udev-trigger.service. Feb 12 21:54:02.910000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:54:02.913154 systemd[1]: Starting systemd-udev-settle.service... Feb 12 21:54:02.918178 kernel: audit: type=1130 audit(1707774842.910:111): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:54:02.942636 kernel: audit: type=1130 audit(1707774842.934:112): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 21:54:02.934000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:54:02.933550 systemd[1]: Finished flatcar-tmpfiles.service. Feb 12 21:54:02.943165 udevadm[1497]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Feb 12 21:54:02.936827 systemd[1]: Starting systemd-sysusers.service... Feb 12 21:54:03.062151 systemd[1]: Finished systemd-sysusers.service. Feb 12 21:54:03.063000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:54:03.066250 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Feb 12 21:54:03.072707 kernel: audit: type=1130 audit(1707774843.063:113): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:54:03.166733 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Feb 12 21:54:03.168000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:54:03.175295 kernel: audit: type=1130 audit(1707774843.168:114): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:54:03.653936 systemd[1]: Finished systemd-hwdb-update.service. 
Feb 12 21:54:03.654000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:54:03.656603 systemd[1]: Starting systemd-udevd.service... Feb 12 21:54:03.660792 kernel: audit: type=1130 audit(1707774843.654:115): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:54:03.685910 systemd-udevd[1505]: Using default interface naming scheme 'v252'. Feb 12 21:54:03.757159 systemd[1]: Started systemd-udevd.service. Feb 12 21:54:03.760000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:54:03.764872 systemd[1]: Starting systemd-networkd.service... Feb 12 21:54:03.770259 kernel: audit: type=1130 audit(1707774843.760:116): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:54:03.798473 systemd[1]: Starting systemd-userdbd.service... Feb 12 21:54:03.869159 (udev-worker)[1510]: Network interface NamePolicy= disabled on kernel command line. Feb 12 21:54:03.870364 systemd[1]: Started systemd-userdbd.service. Feb 12 21:54:03.871000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 21:54:03.878351 kernel: audit: type=1130 audit(1707774843.871:117): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:54:03.893781 systemd[1]: Found device dev-ttyS0.device. Feb 12 21:54:04.016340 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Feb 12 21:54:04.033120 systemd-networkd[1507]: lo: Link UP Feb 12 21:54:04.033130 systemd-networkd[1507]: lo: Gained carrier Feb 12 21:54:04.048859 kernel: audit: type=1130 audit(1707774844.034:118): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:54:04.048944 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 12 21:54:04.034000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 21:54:04.033707 systemd-networkd[1507]: Enumeration completed Feb 12 21:54:04.033854 systemd-networkd[1507]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 12 21:54:04.033857 systemd[1]: Started systemd-networkd.service. Feb 12 21:54:04.043457 systemd[1]: Starting systemd-networkd-wait-online.service... 
Feb 12 21:54:04.047875 systemd-networkd[1507]: eth0: Link UP
Feb 12 21:54:04.048060 systemd-networkd[1507]: eth0: Gained carrier
Feb 12 21:54:04.059317 kernel: ACPI: button: Power Button [PWRF]
Feb 12 21:54:04.062581 systemd-networkd[1507]: eth0: DHCPv4 address 172.31.30.174/20, gateway 172.31.16.1 acquired from 172.31.16.1
Feb 12 21:54:04.063329 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input4
Feb 12 21:54:04.067280 kernel: ACPI: button: Sleep Button [SLPF]
Feb 12 21:54:04.085000 audit[1529]: AVC avc: denied { confidentiality } for pid=1529 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1
Feb 12 21:54:04.106291 kernel: audit: type=1400 audit(1707774844.085:119): avc: denied { confidentiality } for pid=1529 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1
Feb 12 21:54:04.085000 audit[1529]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=55c410124210 a1=32194 a2=7fbb03ebabc5 a3=5 items=108 ppid=1505 pid=1529 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 12 21:54:04.085000 audit: CWD cwd="/"
Feb 12 21:54:04.085000 audit: PATH item=0 name=(null) inode=1042 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:54:04.085000 audit: PATH item=1 name=(null) inode=14960 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:54:04.085000 audit: PATH item=2 name=(null) inode=14960 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0
nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:54:04.085000 audit: PATH item=3 name=(null) inode=14961 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:54:04.085000 audit: PATH item=4 name=(null) inode=14960 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:54:04.085000 audit: PATH item=5 name=(null) inode=14962 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:54:04.085000 audit: PATH item=6 name=(null) inode=14960 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:54:04.085000 audit: PATH item=7 name=(null) inode=14963 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:54:04.085000 audit: PATH item=8 name=(null) inode=14963 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:54:04.085000 audit: PATH item=9 name=(null) inode=14964 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:54:04.085000 audit: PATH item=10 name=(null) inode=14963 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:54:04.085000 audit: PATH item=11 name=(null) inode=14965 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 
cap_fver=0 cap_frootid=0 Feb 12 21:54:04.085000 audit: PATH item=12 name=(null) inode=14963 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:54:04.085000 audit: PATH item=13 name=(null) inode=14966 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:54:04.085000 audit: PATH item=14 name=(null) inode=14963 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:54:04.085000 audit: PATH item=15 name=(null) inode=14967 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:54:04.085000 audit: PATH item=16 name=(null) inode=14963 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:54:04.085000 audit: PATH item=17 name=(null) inode=14968 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:54:04.085000 audit: PATH item=18 name=(null) inode=14960 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:54:04.085000 audit: PATH item=19 name=(null) inode=14969 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:54:04.085000 audit: PATH item=20 name=(null) inode=14969 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 
21:54:04.085000 audit: PATH item=21 name=(null) inode=14970 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:54:04.085000 audit: PATH item=22 name=(null) inode=14969 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:54:04.085000 audit: PATH item=23 name=(null) inode=14971 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:54:04.085000 audit: PATH item=24 name=(null) inode=14969 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:54:04.085000 audit: PATH item=25 name=(null) inode=14972 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:54:04.085000 audit: PATH item=26 name=(null) inode=14969 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:54:04.085000 audit: PATH item=27 name=(null) inode=14973 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:54:04.085000 audit: PATH item=28 name=(null) inode=14969 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:54:04.085000 audit: PATH item=29 name=(null) inode=14974 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:54:04.085000 audit: PATH item=30 
name=(null) inode=14960 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:54:04.085000 audit: PATH item=31 name=(null) inode=14975 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:54:04.085000 audit: PATH item=32 name=(null) inode=14975 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:54:04.085000 audit: PATH item=33 name=(null) inode=14976 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:54:04.085000 audit: PATH item=34 name=(null) inode=14975 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:54:04.085000 audit: PATH item=35 name=(null) inode=14977 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:54:04.085000 audit: PATH item=36 name=(null) inode=14975 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:54:04.085000 audit: PATH item=37 name=(null) inode=14978 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:54:04.085000 audit: PATH item=38 name=(null) inode=14975 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:54:04.085000 audit: PATH item=39 name=(null) inode=14979 dev=00:0b 
mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:54:04.085000 audit: PATH item=40 name=(null) inode=14975 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:54:04.085000 audit: PATH item=41 name=(null) inode=14980 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:54:04.085000 audit: PATH item=42 name=(null) inode=14960 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:54:04.085000 audit: PATH item=43 name=(null) inode=14981 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:54:04.085000 audit: PATH item=44 name=(null) inode=14981 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:54:04.085000 audit: PATH item=45 name=(null) inode=14982 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:54:04.085000 audit: PATH item=46 name=(null) inode=14981 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:54:04.085000 audit: PATH item=47 name=(null) inode=14983 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:54:04.085000 audit: PATH item=48 name=(null) inode=14981 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:54:04.085000 audit: PATH item=49 name=(null) inode=14984 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:54:04.085000 audit: PATH item=50 name=(null) inode=14981 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:54:04.085000 audit: PATH item=51 name=(null) inode=14985 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:54:04.085000 audit: PATH item=52 name=(null) inode=14981 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:54:04.085000 audit: PATH item=53 name=(null) inode=14986 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:54:04.085000 audit: PATH item=54 name=(null) inode=1042 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:54:04.085000 audit: PATH item=55 name=(null) inode=14987 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:54:04.085000 audit: PATH item=56 name=(null) inode=14987 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:54:04.085000 audit: PATH item=57 name=(null) inode=14988 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:54:04.085000 audit: PATH item=58 name=(null) inode=14987 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:54:04.085000 audit: PATH item=59 name=(null) inode=14989 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:54:04.085000 audit: PATH item=60 name=(null) inode=14987 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:54:04.085000 audit: PATH item=61 name=(null) inode=14990 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:54:04.085000 audit: PATH item=62 name=(null) inode=14990 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:54:04.085000 audit: PATH item=63 name=(null) inode=14991 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:54:04.085000 audit: PATH item=64 name=(null) inode=14990 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:54:04.085000 audit: PATH item=65 name=(null) inode=14992 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:54:04.085000 audit: PATH item=66 name=(null) inode=14990 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 
cap_fver=0 cap_frootid=0 Feb 12 21:54:04.085000 audit: PATH item=67 name=(null) inode=14993 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:54:04.085000 audit: PATH item=68 name=(null) inode=14990 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:54:04.085000 audit: PATH item=69 name=(null) inode=14994 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:54:04.085000 audit: PATH item=70 name=(null) inode=14990 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:54:04.085000 audit: PATH item=71 name=(null) inode=14995 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:54:04.085000 audit: PATH item=72 name=(null) inode=14987 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:54:04.085000 audit: PATH item=73 name=(null) inode=14996 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:54:04.085000 audit: PATH item=74 name=(null) inode=14996 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:54:04.085000 audit: PATH item=75 name=(null) inode=14997 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 
21:54:04.085000 audit: PATH item=76 name=(null) inode=14996 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:54:04.085000 audit: PATH item=77 name=(null) inode=14998 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:54:04.085000 audit: PATH item=78 name=(null) inode=14996 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:54:04.085000 audit: PATH item=79 name=(null) inode=14999 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:54:04.085000 audit: PATH item=80 name=(null) inode=14996 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:54:04.085000 audit: PATH item=81 name=(null) inode=15000 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:54:04.085000 audit: PATH item=82 name=(null) inode=14996 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:54:04.085000 audit: PATH item=83 name=(null) inode=15001 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:54:04.085000 audit: PATH item=84 name=(null) inode=14987 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:54:04.085000 audit: PATH item=85 
name=(null) inode=15002 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:54:04.085000 audit: PATH item=86 name=(null) inode=15002 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:54:04.085000 audit: PATH item=87 name=(null) inode=15003 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:54:04.085000 audit: PATH item=88 name=(null) inode=15002 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:54:04.085000 audit: PATH item=89 name=(null) inode=15004 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:54:04.085000 audit: PATH item=90 name=(null) inode=15002 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:54:04.085000 audit: PATH item=91 name=(null) inode=15005 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:54:04.085000 audit: PATH item=92 name=(null) inode=15002 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:54:04.085000 audit: PATH item=93 name=(null) inode=15006 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:54:04.085000 audit: PATH item=94 name=(null) inode=15002 dev=00:0b 
mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:54:04.085000 audit: PATH item=95 name=(null) inode=15007 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:54:04.085000 audit: PATH item=96 name=(null) inode=14987 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:54:04.085000 audit: PATH item=97 name=(null) inode=15008 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:54:04.085000 audit: PATH item=98 name=(null) inode=15008 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:54:04.085000 audit: PATH item=99 name=(null) inode=15009 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:54:04.085000 audit: PATH item=100 name=(null) inode=15008 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:54:04.085000 audit: PATH item=101 name=(null) inode=15010 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:54:04.085000 audit: PATH item=102 name=(null) inode=15008 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 21:54:04.085000 audit: PATH item=103 name=(null) inode=15011 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:54:04.085000 audit: PATH item=104 name=(null) inode=15008 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:54:04.085000 audit: PATH item=105 name=(null) inode=15012 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:54:04.085000 audit: PATH item=106 name=(null) inode=15008 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:54:04.085000 audit: PATH item=107 name=(null) inode=15013 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 21:54:04.085000 audit: PROCTITLE proctitle="(udev-worker)"
Feb 12 21:54:04.127308 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input5
Feb 12 21:54:04.139289 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0xb100, revision 255
Feb 12 21:54:04.149285 kernel: mousedev: PS/2 mouse device common for all mice
Feb 12 21:54:04.205844 kernel: BTRFS info: devid 1 device path /dev/disk/by-label/OEM changed to /dev/nvme0n1p6 scanned by (udev-worker) (1513)
Feb 12 21:54:04.365580 systemd[1]: dev-disk-by\x2dlabel-OEM.device was skipped because of an unmet condition check (ConditionPathExists=!/usr/.noupdate).
Feb 12 21:54:04.463901 systemd[1]: Finished systemd-udev-settle.service.
Feb 12 21:54:04.464000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=?
res=success'
Feb 12 21:54:04.467655 systemd[1]: Starting lvm2-activation-early.service...
Feb 12 21:54:04.502945 lvm[1621]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 12 21:54:04.540057 systemd[1]: Finished lvm2-activation-early.service.
Feb 12 21:54:04.541000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:54:04.541626 systemd[1]: Reached target cryptsetup.target.
Feb 12 21:54:04.545304 systemd[1]: Starting lvm2-activation.service...
Feb 12 21:54:04.552436 lvm[1623]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 12 21:54:04.580781 systemd[1]: Finished lvm2-activation.service.
Feb 12 21:54:04.581000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:54:04.581975 systemd[1]: Reached target local-fs-pre.target.
Feb 12 21:54:04.583179 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Feb 12 21:54:04.583225 systemd[1]: Reached target local-fs.target.
Feb 12 21:54:04.584373 systemd[1]: Reached target machines.target.
Feb 12 21:54:04.587506 systemd[1]: Starting ldconfig.service...
Feb 12 21:54:04.591616 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Feb 12 21:54:04.591823 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Feb 12 21:54:04.594020 systemd[1]: Starting systemd-boot-update.service...
Feb 12 21:54:04.597044 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service...
Feb 12 21:54:04.601065 systemd[1]: Starting systemd-machine-id-commit.service...
Feb 12 21:54:04.602951 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met.
Feb 12 21:54:04.603027 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met.
Feb 12 21:54:04.604696 systemd[1]: Starting systemd-tmpfiles-setup.service...
Feb 12 21:54:04.637135 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1626 (bootctl)
Feb 12 21:54:04.639134 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service...
Feb 12 21:54:04.655803 systemd-tmpfiles[1629]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring.
Feb 12 21:54:04.657040 systemd-tmpfiles[1629]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Feb 12 21:54:04.659752 systemd-tmpfiles[1629]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Feb 12 21:54:04.665351 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service.
Feb 12 21:54:04.666000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:54:04.830126 systemd-fsck[1635]: fsck.fat 4.2 (2021-01-31)
Feb 12 21:54:04.830126 systemd-fsck[1635]: /dev/nvme0n1p1: 789 files, 115339/258078 clusters
Feb 12 21:54:04.833604 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service.
Feb 12 21:54:04.834000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:54:04.838572 systemd[1]: Mounting boot.mount...
Feb 12 21:54:04.875747 systemd[1]: Mounted boot.mount.
Feb 12 21:54:04.931585 systemd[1]: Finished systemd-boot-update.service.
Feb 12 21:54:04.933000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:54:05.067380 systemd[1]: Finished systemd-tmpfiles-setup.service.
Feb 12 21:54:05.068000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:54:05.070889 systemd[1]: Starting audit-rules.service...
Feb 12 21:54:05.074982 systemd[1]: Starting clean-ca-certificates.service...
Feb 12 21:54:05.079208 systemd[1]: Starting systemd-journal-catalog-update.service...
Feb 12 21:54:05.084967 systemd[1]: Starting systemd-resolved.service...
Feb 12 21:54:05.094179 systemd[1]: Starting systemd-timesyncd.service...
Feb 12 21:54:05.105650 systemd[1]: Starting systemd-update-utmp.service...
Feb 12 21:54:05.108378 systemd[1]: Finished clean-ca-certificates.service.
Feb 12 21:54:05.110000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:54:05.117656 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Feb 12 21:54:05.127000 audit[1659]: SYSTEM_BOOT pid=1659 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
Feb 12 21:54:05.130316 systemd[1]: Finished systemd-update-utmp.service.
Feb 12 21:54:05.132000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:54:05.158897 systemd-networkd[1507]: eth0: Gained IPv6LL
Feb 12 21:54:05.171484 systemd[1]: Finished systemd-networkd-wait-online.service.
Feb 12 21:54:05.174000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:54:05.243795 systemd[1]: Finished systemd-journal-catalog-update.service.
Feb 12 21:54:05.245000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 21:54:05.278000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Feb 12 21:54:05.278000 audit[1678]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffec52f1e20 a2=420 a3=0 items=0 ppid=1653 pid=1678 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 12 21:54:05.278000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Feb 12 21:54:05.279437 augenrules[1678]: No rules
Feb 12 21:54:05.280171 systemd[1]: Finished audit-rules.service.
Feb 12 21:54:05.349649 systemd[1]: Started systemd-timesyncd.service.
Feb 12 21:54:05.351040 systemd[1]: Reached target time-set.target.
Feb 12 21:54:05.352903 systemd-resolved[1657]: Positive Trust Anchors:
Feb 12 21:54:05.353203 systemd-resolved[1657]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 12 21:54:05.353307 systemd-resolved[1657]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Feb 12 21:54:05.392031 systemd-resolved[1657]: Defaulting to hostname 'linux'.
Feb 12 21:54:05.394609 systemd[1]: Started systemd-resolved.service.
Feb 12 21:54:05.395956 systemd[1]: Reached target network.target.
Feb 12 21:54:05.397306 systemd[1]: Reached target network-online.target.
Feb 12 21:54:05.398544 systemd[1]: Reached target nss-lookup.target.
Feb 12 21:54:05.453637 systemd-timesyncd[1658]: Contacted time server 108.181.220.94:123 (0.flatcar.pool.ntp.org).
Feb 12 21:54:05.453730 systemd-timesyncd[1658]: Initial clock synchronization to Mon 2024-02-12 21:54:05.363221 UTC.
Feb 12 21:54:05.657638 ldconfig[1625]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Feb 12 21:54:05.714349 systemd[1]: Finished ldconfig.service.
Feb 12 21:54:05.719302 systemd[1]: Starting systemd-update-done.service...
Feb 12 21:54:05.736023 systemd[1]: Finished systemd-update-done.service.
Feb 12 21:54:05.737319 systemd[1]: Reached target sysinit.target.
Feb 12 21:54:05.738431 systemd[1]: Started motdgen.path.
Feb 12 21:54:05.739377 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path.
Feb 12 21:54:05.740928 systemd[1]: Started logrotate.timer.
Feb 12 21:54:05.742206 systemd[1]: Started mdadm.timer.
Feb 12 21:54:05.743721 systemd[1]: Started systemd-tmpfiles-clean.timer.
Feb 12 21:54:05.745149 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Feb 12 21:54:05.745179 systemd[1]: Reached target paths.target.
Feb 12 21:54:05.746000 systemd[1]: Reached target timers.target.
Feb 12 21:54:05.747289 systemd[1]: Listening on dbus.socket.
Feb 12 21:54:05.750181 systemd[1]: Starting docker.socket...
Feb 12 21:54:05.755856 systemd[1]: Listening on sshd.socket.
Feb 12 21:54:05.759559 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Feb 12 21:54:05.760361 systemd[1]: Listening on docker.socket.
Feb 12 21:54:05.761985 systemd[1]: Reached target sockets.target.
Feb 12 21:54:05.763201 systemd[1]: Reached target basic.target.
Feb 12 21:54:05.775644 systemd[1]: System is tainted: cgroupsv1
Feb 12 21:54:05.775732 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met.
Feb 12 21:54:05.775769 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met.
Feb 12 21:54:05.786157 systemd[1]: Started amazon-ssm-agent.service.
Feb 12 21:54:05.793371 systemd[1]: Starting containerd.service...
Feb 12 21:54:05.802173 systemd[1]: Starting coreos-metadata-sshkeys@core.service...
Feb 12 21:54:05.807764 systemd[1]: Starting dbus.service...
Feb 12 21:54:05.812012 systemd[1]: Starting enable-oem-cloudinit.service...
Feb 12 21:54:05.822705 systemd[1]: Starting extend-filesystems.service...
Feb 12 21:54:05.826091 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment).
Feb 12 21:54:05.828137 systemd[1]: Starting motdgen.service...
Feb 12 21:54:05.835677 systemd[1]: Started nvidia.service.
Feb 12 21:54:05.840538 systemd[1]: Starting prepare-cni-plugins.service...
Feb 12 21:54:05.847435 systemd[1]: Starting prepare-critools.service...
Feb 12 21:54:05.851122 systemd[1]: Starting prepare-helm.service...
Feb 12 21:54:05.855560 systemd[1]: Starting ssh-key-proc-cmdline.service...
Feb 12 21:54:05.858419 systemd[1]: Starting sshd-keygen.service...
Feb 12 21:54:05.888551 systemd[1]: Starting systemd-logind.service...
Feb 12 21:54:05.890198 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Feb 12 21:54:05.890296 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Feb 12 21:54:05.892767 systemd[1]: Starting update-engine.service...
Feb 12 21:54:05.896115 systemd[1]: Starting update-ssh-keys-after-ignition.service...
Feb 12 21:54:06.001940 jq[1695]: false
Feb 12 21:54:06.002092 jq[1712]: true
Feb 12 21:54:06.002332 tar[1715]: ./
Feb 12 21:54:06.002332 tar[1715]: ./macvlan
Feb 12 21:54:06.002624 tar[1717]: linux-amd64/helm
Feb 12 21:54:05.917017 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Feb 12 21:54:05.917441 systemd[1]: Finished ssh-key-proc-cmdline.service.
Feb 12 21:54:06.075679 tar[1716]: crictl
Feb 12 21:54:05.936821 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Feb 12 21:54:05.938411 systemd[1]: Finished systemd-machine-id-commit.service.
Feb 12 21:54:05.951193 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Feb 12 21:54:05.951531 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped.
Feb 12 21:54:06.110445 systemd[1]: motdgen.service: Deactivated successfully.
Feb 12 21:54:06.110879 systemd[1]: Finished motdgen.service.
Feb 12 21:54:06.120389 jq[1737]: true
Feb 12 21:54:06.123337 extend-filesystems[1697]: Found nvme0n1
Feb 12 21:54:06.129349 extend-filesystems[1697]: Found nvme0n1p1
Feb 12 21:54:06.130361 extend-filesystems[1697]: Found nvme0n1p2
Feb 12 21:54:06.130361 extend-filesystems[1697]: Found nvme0n1p3
Feb 12 21:54:06.133134 extend-filesystems[1697]: Found usr
Feb 12 21:54:06.134036 extend-filesystems[1697]: Found nvme0n1p4
Feb 12 21:54:06.134036 extend-filesystems[1697]: Found nvme0n1p6
Feb 12 21:54:06.134036 extend-filesystems[1697]: Found nvme0n1p7
Feb 12 21:54:06.134036 extend-filesystems[1697]: Found nvme0n1p9
Feb 12 21:54:06.192547 systemd[1]: Started dbus.service.
Feb 12 21:54:06.213282 extend-filesystems[1697]: Checking size of /dev/nvme0n1p9
Feb 12 21:54:06.213282 extend-filesystems[1697]: Resized partition /dev/nvme0n1p9
Feb 12 21:54:06.191935 dbus-daemon[1694]: [system] SELinux support is enabled
Feb 12 21:54:06.216492 extend-filesystems[1754]: resize2fs 1.46.5 (30-Dec-2021)
Feb 12 21:54:06.200887 dbus-daemon[1694]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1507 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Feb 12 21:54:06.220045 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Feb 12 21:54:06.220091 systemd[1]: Reached target system-config.target.
Feb 12 21:54:06.221414 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Feb 12 21:54:06.221448 systemd[1]: Reached target user-config.target.
Feb 12 21:54:06.225042 dbus-daemon[1694]: [system] Successfully activated service 'org.freedesktop.systemd1'
Feb 12 21:54:06.230099 systemd[1]: Starting systemd-hostnamed.service...
Feb 12 21:54:06.241285 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks
Feb 12 21:54:06.298112 amazon-ssm-agent[1690]: 2024/02/12 21:54:06 Failed to load instance info from vault. RegistrationKey does not exist.
Feb 12 21:54:06.301847 amazon-ssm-agent[1690]: Initializing new seelog logger
Feb 12 21:54:06.317741 amazon-ssm-agent[1690]: New Seelog Logger Creation Complete
Feb 12 21:54:06.320378 amazon-ssm-agent[1690]: 2024/02/12 21:54:06 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Feb 12 21:54:06.320516 amazon-ssm-agent[1690]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Feb 12 21:54:06.320815 amazon-ssm-agent[1690]: 2024/02/12 21:54:06 processing appconfig overrides
Feb 12 21:54:06.343896 update_engine[1710]: I0212 21:54:06.342010  1710 main.cc:92] Flatcar Update Engine starting
Feb 12 21:54:06.359873 systemd[1]: Started update-engine.service.
Feb 12 21:54:06.360356 update_engine[1710]: I0212 21:54:06.359943  1710 update_check_scheduler.cc:74] Next update check in 3m16s
Feb 12 21:54:06.364058 systemd[1]: Started locksmithd.service.
Feb 12 21:54:06.416305 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915
Feb 12 21:54:06.471880 extend-filesystems[1754]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required
Feb 12 21:54:06.471880 extend-filesystems[1754]: old_desc_blocks = 1, new_desc_blocks = 1
Feb 12 21:54:06.471880 extend-filesystems[1754]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long.
Feb 12 21:54:06.471393 systemd[1]: extend-filesystems.service: Deactivated successfully.
Feb 12 21:54:06.482758 bash[1776]: Updated "/home/core/.ssh/authorized_keys"
Feb 12 21:54:06.482880 extend-filesystems[1697]: Resized filesystem in /dev/nvme0n1p9
Feb 12 21:54:06.471715 systemd[1]: Finished extend-filesystems.service.
Feb 12 21:54:06.480989 systemd[1]: Finished update-ssh-keys-after-ignition.service.
Feb 12 21:54:06.514728 env[1721]: time="2024-02-12T21:54:06.514663147Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16
Feb 12 21:54:06.611198 systemd-logind[1709]: Watching system buttons on /dev/input/event1 (Power Button)
Feb 12 21:54:06.611231 systemd-logind[1709]: Watching system buttons on /dev/input/event2 (Sleep Button)
Feb 12 21:54:06.611281 systemd-logind[1709]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Feb 12 21:54:06.616541 systemd-logind[1709]: New seat seat0.
Feb 12 21:54:06.626936 systemd[1]: Started systemd-logind.service.
Feb 12 21:54:06.710961 tar[1715]: ./static
Feb 12 21:54:06.730973 systemd[1]: nvidia.service: Deactivated successfully.
Feb 12 21:54:06.795002 dbus-daemon[1694]: [system] Successfully activated service 'org.freedesktop.hostname1'
Feb 12 21:54:06.795187 systemd[1]: Started systemd-hostnamed.service.
Feb 12 21:54:06.797885 dbus-daemon[1694]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1760 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Feb 12 21:54:06.801785 systemd[1]: Starting polkit.service...
Feb 12 21:54:06.830343 polkitd[1825]: Started polkitd version 121
Feb 12 21:54:06.830986 env[1721]: time="2024-02-12T21:54:06.830942649Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Feb 12 21:54:06.831145 env[1721]: time="2024-02-12T21:54:06.831122633Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Feb 12 21:54:06.856277 env[1721]: time="2024-02-12T21:54:06.856196292Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.148-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Feb 12 21:54:06.856277 env[1721]: time="2024-02-12T21:54:06.856259317Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Feb 12 21:54:06.856763 env[1721]: time="2024-02-12T21:54:06.856730457Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Feb 12 21:54:06.856833 env[1721]: time="2024-02-12T21:54:06.856780970Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Feb 12 21:54:06.856833 env[1721]: time="2024-02-12T21:54:06.856801990Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Feb 12 21:54:06.856833 env[1721]: time="2024-02-12T21:54:06.856817416Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Feb 12 21:54:06.856972 env[1721]: time="2024-02-12T21:54:06.856954549Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Feb 12 21:54:06.857351 env[1721]: time="2024-02-12T21:54:06.857323928Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Feb 12 21:54:06.857714 env[1721]: time="2024-02-12T21:54:06.857684093Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Feb 12 21:54:06.857783 env[1721]: time="2024-02-12T21:54:06.857728606Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Feb 12 21:54:06.857838 env[1721]: time="2024-02-12T21:54:06.857815118Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Feb 12 21:54:06.857892 env[1721]: time="2024-02-12T21:54:06.857840706Z" level=info msg="metadata content store policy set" policy=shared
Feb 12 21:54:06.861831 polkitd[1825]: Loading rules from directory /etc/polkit-1/rules.d
Feb 12 21:54:06.875389 env[1721]: time="2024-02-12T21:54:06.875326817Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Feb 12 21:54:06.875389 env[1721]: time="2024-02-12T21:54:06.875395955Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Feb 12 21:54:06.875589 env[1721]: time="2024-02-12T21:54:06.875415899Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Feb 12 21:54:06.875589 env[1721]: time="2024-02-12T21:54:06.875500961Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Feb 12 21:54:06.875589 env[1721]: time="2024-02-12T21:54:06.875571929Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Feb 12 21:54:06.875709 env[1721]: time="2024-02-12T21:54:06.875608055Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Feb 12 21:54:06.875709 env[1721]: time="2024-02-12T21:54:06.875629579Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Feb 12 21:54:06.875709 env[1721]: time="2024-02-12T21:54:06.875650343Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Feb 12 21:54:06.875709 env[1721]: time="2024-02-12T21:54:06.875685562Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
Feb 12 21:54:06.875847 env[1721]: time="2024-02-12T21:54:06.875706128Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Feb 12 21:54:06.875847 env[1721]: time="2024-02-12T21:54:06.875725654Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Feb 12 21:54:06.875847 env[1721]: time="2024-02-12T21:54:06.875761034Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Feb 12 21:54:06.875966 env[1721]: time="2024-02-12T21:54:06.875940553Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Feb 12 21:54:06.876101 env[1721]: time="2024-02-12T21:54:06.876083595Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Feb 12 21:54:06.876813 env[1721]: time="2024-02-12T21:54:06.876786470Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Feb 12 21:54:06.876896 env[1721]: time="2024-02-12T21:54:06.876830466Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Feb 12 21:54:06.876896 env[1721]: time="2024-02-12T21:54:06.876866571Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Feb 12 21:54:06.876982 env[1721]: time="2024-02-12T21:54:06.876950719Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Feb 12 21:54:06.876982 env[1721]: time="2024-02-12T21:54:06.876972078Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Feb 12 21:54:06.877074 env[1721]: time="2024-02-12T21:54:06.877056867Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Feb 12 21:54:06.877117 env[1721]: time="2024-02-12T21:54:06.877083070Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Feb 12 21:54:06.877117 env[1721]: time="2024-02-12T21:54:06.877107238Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Feb 12 21:54:06.877193 env[1721]: time="2024-02-12T21:54:06.877143998Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Feb 12 21:54:06.877193 env[1721]: time="2024-02-12T21:54:06.877163516Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Feb 12 21:54:06.877193 env[1721]: time="2024-02-12T21:54:06.877182168Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Feb 12 21:54:06.877345 env[1721]: time="2024-02-12T21:54:06.877216342Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Feb 12 21:54:06.877452 env[1721]: time="2024-02-12T21:54:06.877419408Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Feb 12 21:54:06.877508 env[1721]: time="2024-02-12T21:54:06.877461108Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Feb 12 21:54:06.877508 env[1721]: time="2024-02-12T21:54:06.877481140Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Feb 12 21:54:06.877587 env[1721]: time="2024-02-12T21:54:06.877499422Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Feb 12 21:54:06.877587 env[1721]: time="2024-02-12T21:54:06.877539236Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
Feb 12 21:54:06.877587 env[1721]: time="2024-02-12T21:54:06.877557457Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Feb 12 21:54:06.877699 env[1721]: time="2024-02-12T21:54:06.877596595Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin"
Feb 12 21:54:06.877699 env[1721]: time="2024-02-12T21:54:06.877641606Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Feb 12 21:54:06.878086 env[1721]: time="2024-02-12T21:54:06.877999695Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Feb 12 21:54:06.880949 env[1721]: time="2024-02-12T21:54:06.878102021Z" level=info msg="Connect containerd service"
Feb 12 21:54:06.879547 polkitd[1825]: Loading rules from directory /usr/share/polkit-1/rules.d
Feb 12 21:54:06.882216 polkitd[1825]: Finished loading, compiling and executing 2 rules
Feb 12 21:54:06.882835 dbus-daemon[1694]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Feb 12 21:54:06.883043 systemd[1]: Started polkit.service.
Feb 12 21:54:06.883373 polkitd[1825]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Feb 12 21:54:06.897647 env[1721]: time="2024-02-12T21:54:06.897603542Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Feb 12 21:54:06.898797 env[1721]: time="2024-02-12T21:54:06.898760538Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Feb 12 21:54:06.898935 env[1721]: time="2024-02-12T21:54:06.898897841Z" level=info msg="Start subscribing containerd event"
Feb 12 21:54:06.898984 env[1721]: time="2024-02-12T21:54:06.898973990Z" level=info msg="Start recovering state"
Feb 12 21:54:06.899080 env[1721]: time="2024-02-12T21:54:06.899065954Z" level=info msg="Start event monitor"
Feb 12 21:54:06.899127 env[1721]: time="2024-02-12T21:54:06.899090786Z" level=info msg="Start snapshots syncer"
Feb 12 21:54:06.899127 env[1721]: time="2024-02-12T21:54:06.899120173Z" level=info msg="Start cni network conf syncer for default"
Feb 12 21:54:06.899199 env[1721]: time="2024-02-12T21:54:06.899132102Z" level=info msg="Start streaming server"
Feb 12 21:54:06.903573 systemd-hostnamed[1760]: Hostname set to (transient)
Feb 12 21:54:06.903698 systemd-resolved[1657]: System hostname changed to 'ip-172-31-30-174'.
Feb 12 21:54:06.918347 tar[1715]: ./vlan
Feb 12 21:54:06.937054 env[1721]: time="2024-02-12T21:54:06.936994063Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Feb 12 21:54:06.939542 env[1721]: time="2024-02-12T21:54:06.937085510Z" level=info msg=serving... address=/run/containerd/containerd.sock
Feb 12 21:54:06.939542 env[1721]: time="2024-02-12T21:54:06.937179612Z" level=info msg="containerd successfully booted in 0.476987s"
Feb 12 21:54:06.937338 systemd[1]: Started containerd.service.
Feb 12 21:54:07.039241 coreos-metadata[1692]: Feb 12 21:54:07.036 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Feb 12 21:54:07.040536 coreos-metadata[1692]: Feb 12 21:54:07.040 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-keys: Attempt #1
Feb 12 21:54:07.041143 coreos-metadata[1692]: Feb 12 21:54:07.041 INFO Fetch successful
Feb 12 21:54:07.041304 coreos-metadata[1692]: Feb 12 21:54:07.041 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-keys/0/openssh-key: Attempt #1
Feb 12 21:54:07.043116 coreos-metadata[1692]: Feb 12 21:54:07.043 INFO Fetch successful
Feb 12 21:54:07.046146 unknown[1692]: wrote ssh authorized keys file for user: core
Feb 12 21:54:07.094512 amazon-ssm-agent[1690]: 2024-02-12 21:54:07 INFO Create new startup processor
Feb 12 21:54:07.104066 amazon-ssm-agent[1690]: 2024-02-12 21:54:07 INFO [LongRunningPluginsManager] registered plugins: {}
Feb 12 21:54:07.107229 amazon-ssm-agent[1690]: 2024-02-12 21:54:07 INFO Initializing bookkeeping folders
Feb 12 21:54:07.107415 amazon-ssm-agent[1690]: 2024-02-12 21:54:07 INFO removing the completed state files
Feb 12 21:54:07.107524 amazon-ssm-agent[1690]: 2024-02-12 21:54:07 INFO Initializing bookkeeping folders for long running plugins
Feb 12 21:54:07.107607 amazon-ssm-agent[1690]: 2024-02-12 21:54:07 INFO Initializing replies folder for MDS reply requests that couldn't reach the service
Feb 12 21:54:07.107695 amazon-ssm-agent[1690]: 2024-02-12 21:54:07 INFO Initializing healthcheck folders for long running plugins
Feb 12 21:54:07.107774 amazon-ssm-agent[1690]: 2024-02-12 21:54:07 INFO Initializing locations for inventory plugin
Feb 12 21:54:07.108112 update-ssh-keys[1865]: Updated "/home/core/.ssh/authorized_keys"
Feb 12 21:54:07.108824 systemd[1]: Finished coreos-metadata-sshkeys@core.service.
Feb 12 21:54:07.109221 amazon-ssm-agent[1690]: 2024-02-12 21:54:07 INFO Initializing default location for custom inventory
Feb 12 21:54:07.109344 amazon-ssm-agent[1690]: 2024-02-12 21:54:07 INFO Initializing default location for file inventory
Feb 12 21:54:07.109423 amazon-ssm-agent[1690]: 2024-02-12 21:54:07 INFO Initializing default location for role inventory
Feb 12 21:54:07.109506 amazon-ssm-agent[1690]: 2024-02-12 21:54:07 INFO Init the cloudwatchlogs publisher
Feb 12 21:54:07.109583 amazon-ssm-agent[1690]: 2024-02-12 21:54:07 INFO [instanceID=i-0f0f369682a2d674b] Successfully loaded platform independent plugin aws:refreshAssociation
Feb 12 21:54:07.109661 amazon-ssm-agent[1690]: 2024-02-12 21:54:07 INFO [instanceID=i-0f0f369682a2d674b] Successfully loaded platform independent plugin aws:configurePackage
Feb 12 21:54:07.109800 amazon-ssm-agent[1690]: 2024-02-12 21:54:07 INFO [instanceID=i-0f0f369682a2d674b] Successfully loaded platform independent plugin aws:softwareInventory
Feb 12 21:54:07.109876 amazon-ssm-agent[1690]: 2024-02-12 21:54:07 INFO [instanceID=i-0f0f369682a2d674b] Successfully loaded platform independent plugin aws:updateSsmAgent
Feb 12 21:54:07.109955 amazon-ssm-agent[1690]: 2024-02-12 21:54:07 INFO [instanceID=i-0f0f369682a2d674b] Successfully loaded platform independent plugin aws:runDockerAction
Feb 12 21:54:07.110030 amazon-ssm-agent[1690]: 2024-02-12 21:54:07 INFO [instanceID=i-0f0f369682a2d674b] Successfully loaded platform independent plugin aws:downloadContent
Feb 12 21:54:07.111462 amazon-ssm-agent[1690]: 2024-02-12 21:54:07 INFO [instanceID=i-0f0f369682a2d674b] Successfully loaded platform independent plugin aws:runDocument
Feb 12 21:54:07.111594 amazon-ssm-agent[1690]: 2024-02-12 21:54:07 INFO [instanceID=i-0f0f369682a2d674b] Successfully loaded platform independent plugin aws:runPowerShellScript
Feb 12 21:54:07.111692 amazon-ssm-agent[1690]: 2024-02-12 21:54:07 INFO [instanceID=i-0f0f369682a2d674b] Successfully loaded platform independent plugin aws:configureDocker
Feb 12 21:54:07.111775 amazon-ssm-agent[1690]: 2024-02-12 21:54:07 INFO [instanceID=i-0f0f369682a2d674b] Successfully loaded platform dependent plugin aws:runShellScript
Feb 12 21:54:07.111854 amazon-ssm-agent[1690]: 2024-02-12 21:54:07 INFO Starting Agent: amazon-ssm-agent - v2.3.1319.0
Feb 12 21:54:07.111931 amazon-ssm-agent[1690]: 2024-02-12 21:54:07 INFO OS: linux, Arch: amd64
Feb 12 21:54:07.114087 amazon-ssm-agent[1690]: datastore file /var/lib/amazon/ssm/i-0f0f369682a2d674b/longrunningplugins/datastore/store doesn't exist - no long running plugins to execute
Feb 12 21:54:07.135055 tar[1715]: ./portmap
Feb 12 21:54:07.194203 amazon-ssm-agent[1690]: 2024-02-12 21:54:07 INFO [MessagingDeliveryService] Starting document processing engine...
Feb 12 21:54:07.289321 amazon-ssm-agent[1690]: 2024-02-12 21:54:07 INFO [MessagingDeliveryService] [EngineProcessor] Starting
Feb 12 21:54:07.297780 tar[1715]: ./host-local
Feb 12 21:54:07.383559 amazon-ssm-agent[1690]: 2024-02-12 21:54:07 INFO [MessagingDeliveryService] [EngineProcessor] Initial processing
Feb 12 21:54:07.419598 tar[1715]: ./vrf
Feb 12 21:54:07.478062 amazon-ssm-agent[1690]: 2024-02-12 21:54:07 INFO [MessagingDeliveryService] Starting message polling
Feb 12 21:54:07.535860 tar[1715]: ./bridge
Feb 12 21:54:07.572807 amazon-ssm-agent[1690]: 2024-02-12 21:54:07 INFO [MessagingDeliveryService] Starting send replies to MDS
Feb 12 21:54:07.668487 amazon-ssm-agent[1690]: 2024-02-12 21:54:07 INFO [instanceID=i-0f0f369682a2d674b] Starting association polling
Feb 12 21:54:07.670319 tar[1715]: ./tuning
Feb 12 21:54:07.763554 amazon-ssm-agent[1690]: 2024-02-12 21:54:07 INFO [MessagingDeliveryService] [Association] [EngineProcessor] Starting
Feb 12 21:54:07.764745 tar[1715]: ./firewall
Feb 12 21:54:07.858853 amazon-ssm-agent[1690]: 2024-02-12 21:54:07 INFO [MessagingDeliveryService] [Association] Launching response handler
Feb 12 21:54:07.869776 tar[1715]: ./host-device
Feb 12 21:54:07.954271 amazon-ssm-agent[1690]: 2024-02-12 21:54:07 INFO [MessagingDeliveryService] [Association] [EngineProcessor] Initial processing
Feb 12 21:54:07.965759 tar[1715]: ./sbr
Feb 12 21:54:08.050058 amazon-ssm-agent[1690]: 2024-02-12 21:54:07 INFO [MessagingDeliveryService] [Association] Initializing association scheduling service
Feb 12 21:54:08.053127 tar[1715]: ./loopback
Feb 12 21:54:08.112206 tar[1717]: linux-amd64/LICENSE
Feb 12 21:54:08.112648 tar[1717]: linux-amd64/README.md
Feb 12 21:54:08.122414 systemd[1]: Finished prepare-helm.service.
Feb 12 21:54:08.145372 tar[1715]: ./dhcp
Feb 12 21:54:08.145943 amazon-ssm-agent[1690]: 2024-02-12 21:54:07 INFO [MessagingDeliveryService] [Association] Association scheduling service initialized
Feb 12 21:54:08.167932 systemd[1]: Finished prepare-critools.service.
Feb 12 21:54:08.242154 amazon-ssm-agent[1690]: 2024-02-12 21:54:07 INFO [HealthCheck] HealthCheck reporting agent health.
Feb 12 21:54:08.321879 tar[1715]: ./ptp
Feb 12 21:54:08.339322 amazon-ssm-agent[1690]: 2024-02-12 21:54:07 INFO [MessageGatewayService] Starting session document processing engine...
Feb 12 21:54:08.414818 tar[1715]: ./ipvlan
Feb 12 21:54:08.434998 amazon-ssm-agent[1690]: 2024-02-12 21:54:07 INFO [MessageGatewayService] [EngineProcessor] Starting
Feb 12 21:54:08.481763 tar[1715]: ./bandwidth
Feb 12 21:54:08.531650 amazon-ssm-agent[1690]: 2024-02-12 21:54:07 INFO [MessageGatewayService] SSM Agent is trying to setup control channel for Session Manager module.
Feb 12 21:54:08.558244 systemd[1]: Finished prepare-cni-plugins.service.
Feb 12 21:54:08.628503 amazon-ssm-agent[1690]: 2024-02-12 21:54:07 INFO [MessageGatewayService] Setting up websocket for controlchannel for instance: i-0f0f369682a2d674b, requestId: 73cd1470-7ed8-40bd-bf00-cef72e6bf907
Feb 12 21:54:08.654674 locksmithd[1780]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Feb 12 21:54:08.725587 amazon-ssm-agent[1690]: 2024-02-12 21:54:07 INFO [OfflineService] Starting document processing engine...
Feb 12 21:54:08.822848 amazon-ssm-agent[1690]: 2024-02-12 21:54:07 INFO [OfflineService] [EngineProcessor] Starting
Feb 12 21:54:08.920282 amazon-ssm-agent[1690]: 2024-02-12 21:54:07 INFO [OfflineService] [EngineProcessor] Initial processing
Feb 12 21:54:09.018353 amazon-ssm-agent[1690]: 2024-02-12 21:54:07 INFO [OfflineService] Starting message polling
Feb 12 21:54:09.116135 amazon-ssm-agent[1690]: 2024-02-12 21:54:07 INFO [OfflineService] Starting send replies to MDS
Feb 12 21:54:09.214191 amazon-ssm-agent[1690]: 2024-02-12 21:54:07 INFO [LongRunningPluginsManager] starting long running plugin manager
Feb 12 21:54:09.312501 amazon-ssm-agent[1690]: 2024-02-12 21:54:07 INFO [LongRunningPluginsManager] there aren't any long running plugin to execute
Feb 12 21:54:09.410916 amazon-ssm-agent[1690]: 2024-02-12 21:54:07 INFO [MessageGatewayService] listening reply.
Feb 12 21:54:09.509584 amazon-ssm-agent[1690]: 2024-02-12 21:54:07 INFO [LongRunningPluginsManager] There are no long running plugins currently getting executed - skipping their healthcheck Feb 12 21:54:09.608481 amazon-ssm-agent[1690]: 2024-02-12 21:54:07 INFO [StartupProcessor] Executing startup processor tasks Feb 12 21:54:09.707456 amazon-ssm-agent[1690]: 2024-02-12 21:54:07 INFO [StartupProcessor] Write to serial port: Amazon SSM Agent v2.3.1319.0 is running Feb 12 21:54:09.806680 amazon-ssm-agent[1690]: 2024-02-12 21:54:07 INFO [StartupProcessor] Write to serial port: OsProductName: Flatcar Container Linux by Kinvolk Feb 12 21:54:09.810415 sshd_keygen[1738]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 12 21:54:09.837401 systemd[1]: Finished sshd-keygen.service. Feb 12 21:54:09.841190 systemd[1]: Starting issuegen.service... Feb 12 21:54:09.851780 systemd[1]: issuegen.service: Deactivated successfully. Feb 12 21:54:09.852133 systemd[1]: Finished issuegen.service. Feb 12 21:54:09.856825 systemd[1]: Starting systemd-user-sessions.service... Feb 12 21:54:09.867997 systemd[1]: Finished systemd-user-sessions.service. Feb 12 21:54:09.871972 systemd[1]: Started getty@tty1.service. Feb 12 21:54:09.875435 systemd[1]: Started serial-getty@ttyS0.service. Feb 12 21:54:09.877065 systemd[1]: Reached target getty.target. Feb 12 21:54:09.878197 systemd[1]: Reached target multi-user.target. Feb 12 21:54:09.880884 systemd[1]: Starting systemd-update-utmp-runlevel.service... Feb 12 21:54:09.891561 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Feb 12 21:54:09.892579 systemd[1]: Finished systemd-update-utmp-runlevel.service. Feb 12 21:54:09.903076 systemd[1]: Startup finished in 10.632s (kernel) + 12.172s (userspace) = 22.804s. 
Feb 12 21:54:09.906074 amazon-ssm-agent[1690]: 2024-02-12 21:54:07 INFO [StartupProcessor] Write to serial port: OsVersion: 3510.3.2 Feb 12 21:54:10.005685 amazon-ssm-agent[1690]: 2024-02-12 21:54:07 INFO [MessageGatewayService] Opening websocket connection to: wss://ssmmessages.us-west-2.amazonaws.com/v1/control-channel/i-0f0f369682a2d674b?role=subscribe&stream=input Feb 12 21:54:10.105673 amazon-ssm-agent[1690]: 2024-02-12 21:54:07 INFO [MessageGatewayService] Successfully opened websocket connection to: wss://ssmmessages.us-west-2.amazonaws.com/v1/control-channel/i-0f0f369682a2d674b?role=subscribe&stream=input Feb 12 21:54:10.205535 amazon-ssm-agent[1690]: 2024-02-12 21:54:07 INFO [MessageGatewayService] Starting receiving message from control channel Feb 12 21:54:10.305668 amazon-ssm-agent[1690]: 2024-02-12 21:54:07 INFO [MessageGatewayService] [EngineProcessor] Initial processing Feb 12 21:54:15.419126 systemd[1]: Created slice system-sshd.slice. Feb 12 21:54:15.422185 systemd[1]: Started sshd@0-172.31.30.174:22-139.178.89.65:50904.service. Feb 12 21:54:15.595319 sshd[1935]: Accepted publickey for core from 139.178.89.65 port 50904 ssh2: RSA SHA256:BLc8w5wGiofCozMWb4UlfDNGWSz58WJcVew2e99GstU Feb 12 21:54:15.597775 sshd[1935]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 21:54:15.609865 systemd[1]: Created slice user-500.slice. Feb 12 21:54:15.611674 systemd[1]: Starting user-runtime-dir@500.service... Feb 12 21:54:15.616065 systemd-logind[1709]: New session 1 of user core. Feb 12 21:54:15.624324 systemd[1]: Finished user-runtime-dir@500.service. Feb 12 21:54:15.626543 systemd[1]: Starting user@500.service... Feb 12 21:54:15.633804 (systemd)[1940]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 12 21:54:15.720814 systemd[1940]: Queued start job for default target default.target. Feb 12 21:54:15.721123 systemd[1940]: Reached target paths.target. 
Feb 12 21:54:15.721148 systemd[1940]: Reached target sockets.target. Feb 12 21:54:15.721167 systemd[1940]: Reached target timers.target. Feb 12 21:54:15.721182 systemd[1940]: Reached target basic.target. Feb 12 21:54:15.721239 systemd[1940]: Reached target default.target. Feb 12 21:54:15.721313 systemd[1940]: Startup finished in 79ms. Feb 12 21:54:15.721827 systemd[1]: Started user@500.service. Feb 12 21:54:15.723193 systemd[1]: Started session-1.scope. Feb 12 21:54:15.863174 systemd[1]: Started sshd@1-172.31.30.174:22-139.178.89.65:50918.service. Feb 12 21:54:16.028167 sshd[1949]: Accepted publickey for core from 139.178.89.65 port 50918 ssh2: RSA SHA256:BLc8w5wGiofCozMWb4UlfDNGWSz58WJcVew2e99GstU Feb 12 21:54:16.029700 sshd[1949]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 21:54:16.038387 systemd[1]: Started session-2.scope. Feb 12 21:54:16.039227 systemd-logind[1709]: New session 2 of user core. Feb 12 21:54:16.172190 sshd[1949]: pam_unix(sshd:session): session closed for user core Feb 12 21:54:16.177207 systemd[1]: sshd@1-172.31.30.174:22-139.178.89.65:50918.service: Deactivated successfully. Feb 12 21:54:16.180223 systemd[1]: session-2.scope: Deactivated successfully. Feb 12 21:54:16.181025 systemd-logind[1709]: Session 2 logged out. Waiting for processes to exit. Feb 12 21:54:16.185536 systemd-logind[1709]: Removed session 2. Feb 12 21:54:16.200986 systemd[1]: Started sshd@2-172.31.30.174:22-139.178.89.65:50922.service. Feb 12 21:54:16.373180 sshd[1956]: Accepted publickey for core from 139.178.89.65 port 50922 ssh2: RSA SHA256:BLc8w5wGiofCozMWb4UlfDNGWSz58WJcVew2e99GstU Feb 12 21:54:16.373936 sshd[1956]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 21:54:16.379455 systemd[1]: Started session-3.scope. Feb 12 21:54:16.380325 systemd-logind[1709]: New session 3 of user core. 
Feb 12 21:54:16.500678 sshd[1956]: pam_unix(sshd:session): session closed for user core Feb 12 21:54:16.503871 systemd[1]: sshd@2-172.31.30.174:22-139.178.89.65:50922.service: Deactivated successfully. Feb 12 21:54:16.504939 systemd[1]: session-3.scope: Deactivated successfully. Feb 12 21:54:16.506537 systemd-logind[1709]: Session 3 logged out. Waiting for processes to exit. Feb 12 21:54:16.507889 systemd-logind[1709]: Removed session 3. Feb 12 21:54:16.524952 systemd[1]: Started sshd@3-172.31.30.174:22-139.178.89.65:50924.service. Feb 12 21:54:16.684940 sshd[1963]: Accepted publickey for core from 139.178.89.65 port 50924 ssh2: RSA SHA256:BLc8w5wGiofCozMWb4UlfDNGWSz58WJcVew2e99GstU Feb 12 21:54:16.685986 sshd[1963]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 21:54:16.691738 systemd[1]: Started session-4.scope. Feb 12 21:54:16.692007 systemd-logind[1709]: New session 4 of user core. Feb 12 21:54:16.821860 sshd[1963]: pam_unix(sshd:session): session closed for user core Feb 12 21:54:16.825879 systemd[1]: sshd@3-172.31.30.174:22-139.178.89.65:50924.service: Deactivated successfully. Feb 12 21:54:16.827190 systemd-logind[1709]: Session 4 logged out. Waiting for processes to exit. Feb 12 21:54:16.827310 systemd[1]: session-4.scope: Deactivated successfully. Feb 12 21:54:16.828794 systemd-logind[1709]: Removed session 4. Feb 12 21:54:16.846339 systemd[1]: Started sshd@4-172.31.30.174:22-139.178.89.65:50940.service. Feb 12 21:54:17.008534 sshd[1970]: Accepted publickey for core from 139.178.89.65 port 50940 ssh2: RSA SHA256:BLc8w5wGiofCozMWb4UlfDNGWSz58WJcVew2e99GstU Feb 12 21:54:17.009589 sshd[1970]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 21:54:17.015118 systemd[1]: Started session-5.scope. Feb 12 21:54:17.015478 systemd-logind[1709]: New session 5 of user core. 
Feb 12 21:54:17.137249 sudo[1974]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 12 21:54:17.137707 sudo[1974]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Feb 12 21:54:17.964826 systemd[1]: Starting docker.service... Feb 12 21:54:18.017412 env[1989]: time="2024-02-12T21:54:18.017339940Z" level=info msg="Starting up" Feb 12 21:54:18.019142 env[1989]: time="2024-02-12T21:54:18.019104458Z" level=info msg="parsed scheme: \"unix\"" module=grpc Feb 12 21:54:18.019142 env[1989]: time="2024-02-12T21:54:18.019127455Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Feb 12 21:54:18.019582 env[1989]: time="2024-02-12T21:54:18.019151421Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Feb 12 21:54:18.019582 env[1989]: time="2024-02-12T21:54:18.019165263Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Feb 12 21:54:18.021510 env[1989]: time="2024-02-12T21:54:18.021481797Z" level=info msg="parsed scheme: \"unix\"" module=grpc Feb 12 21:54:18.021510 env[1989]: time="2024-02-12T21:54:18.021502307Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Feb 12 21:54:18.021630 env[1989]: time="2024-02-12T21:54:18.021523981Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Feb 12 21:54:18.021630 env[1989]: time="2024-02-12T21:54:18.021536878Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Feb 12 21:54:18.030020 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport3742259487-merged.mount: Deactivated successfully. 
Feb 12 21:54:18.146828 env[1989]: time="2024-02-12T21:54:18.146787545Z" level=warning msg="Your kernel does not support cgroup blkio weight" Feb 12 21:54:18.146828 env[1989]: time="2024-02-12T21:54:18.146814734Z" level=warning msg="Your kernel does not support cgroup blkio weight_device" Feb 12 21:54:18.147355 env[1989]: time="2024-02-12T21:54:18.147062330Z" level=info msg="Loading containers: start." Feb 12 21:54:18.289290 kernel: Initializing XFRM netlink socket Feb 12 21:54:18.335589 env[1989]: time="2024-02-12T21:54:18.335547088Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Feb 12 21:54:18.337088 (udev-worker)[1999]: Network interface NamePolicy= disabled on kernel command line. Feb 12 21:54:18.477838 systemd-networkd[1507]: docker0: Link UP Feb 12 21:54:18.495094 env[1989]: time="2024-02-12T21:54:18.495057890Z" level=info msg="Loading containers: done." Feb 12 21:54:18.509493 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck880340924-merged.mount: Deactivated successfully. Feb 12 21:54:18.522411 env[1989]: time="2024-02-12T21:54:18.522362462Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 12 21:54:18.522638 env[1989]: time="2024-02-12T21:54:18.522595373Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Feb 12 21:54:18.522746 env[1989]: time="2024-02-12T21:54:18.522722272Z" level=info msg="Daemon has completed initialization" Feb 12 21:54:18.554034 systemd[1]: Started docker.service. Feb 12 21:54:18.573436 env[1989]: time="2024-02-12T21:54:18.573235410Z" level=info msg="API listen on /run/docker.sock" Feb 12 21:54:18.605024 systemd[1]: Reloading. 
Feb 12 21:54:18.708736 /usr/lib/systemd/system-generators/torcx-generator[2124]: time="2024-02-12T21:54:18Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 12 21:54:18.708948 /usr/lib/systemd/system-generators/torcx-generator[2124]: time="2024-02-12T21:54:18Z" level=info msg="torcx already run" Feb 12 21:54:18.843170 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 12 21:54:18.843192 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 12 21:54:18.867776 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 12 21:54:19.011067 systemd[1]: Started kubelet.service. Feb 12 21:54:19.120711 kubelet[2182]: E0212 21:54:19.120564 2182 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set" Feb 12 21:54:19.128125 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 12 21:54:19.128399 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 12 21:54:19.730389 env[1721]: time="2024-02-12T21:54:19.730327305Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.26.13\"" Feb 12 21:54:20.420364 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2994855681.mount: Deactivated successfully. 
Feb 12 21:54:24.085682 env[1721]: time="2024-02-12T21:54:24.085587858Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 21:54:24.122319 env[1721]: time="2024-02-12T21:54:24.122259041Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:84900298406b2df97ade16b73c49c2b73265ded8735ac19a4e20c2a4ad65853f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 21:54:24.144209 env[1721]: time="2024-02-12T21:54:24.144161387Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 21:54:24.149615 env[1721]: time="2024-02-12T21:54:24.149573571Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:2f28bed4096abd572a56595ac0304238bdc271dcfe22c650707c09bf97ec16fd,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 21:54:24.150746 env[1721]: time="2024-02-12T21:54:24.150671407Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.26.13\" returns image reference \"sha256:84900298406b2df97ade16b73c49c2b73265ded8735ac19a4e20c2a4ad65853f\"" Feb 12 21:54:24.166102 env[1721]: time="2024-02-12T21:54:24.166062508Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.26.13\"" Feb 12 21:54:27.090153 env[1721]: time="2024-02-12T21:54:27.090091759Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 21:54:27.093723 env[1721]: time="2024-02-12T21:54:27.093678287Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:921f237b560bdb02300f82d3606635d395b20635512fab10f0191cff42079486,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Feb 12 21:54:27.096653 env[1721]: time="2024-02-12T21:54:27.096610122Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 21:54:27.099176 env[1721]: time="2024-02-12T21:54:27.099139895Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:fda420c6c15cdd01c4eba3404f0662fe486a9c7f38fa13c741a21334673841a2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 21:54:27.100948 env[1721]: time="2024-02-12T21:54:27.100903132Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.26.13\" returns image reference \"sha256:921f237b560bdb02300f82d3606635d395b20635512fab10f0191cff42079486\"" Feb 12 21:54:27.114145 env[1721]: time="2024-02-12T21:54:27.114049802Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.26.13\"" Feb 12 21:54:28.984390 env[1721]: time="2024-02-12T21:54:28.984336074Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 21:54:28.987755 env[1721]: time="2024-02-12T21:54:28.987709615Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:4fe82b56f06250b6b7eb3d5a879cd2cfabf41cb3e45b24af6059eadbc3b8026e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 21:54:28.990586 env[1721]: time="2024-02-12T21:54:28.990548117Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 21:54:28.993095 env[1721]: time="2024-02-12T21:54:28.993057483Z" level=info msg="ImageCreate event 
&ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:c3c7303ee6d01c8e5a769db28661cf854b55175aa72c67e9b6a7b9d47ac42af3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 21:54:28.993981 env[1721]: time="2024-02-12T21:54:28.993943593Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.26.13\" returns image reference \"sha256:4fe82b56f06250b6b7eb3d5a879cd2cfabf41cb3e45b24af6059eadbc3b8026e\"" Feb 12 21:54:29.006206 env[1721]: time="2024-02-12T21:54:29.006167618Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.13\"" Feb 12 21:54:29.379620 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 12 21:54:29.379883 systemd[1]: Stopped kubelet.service. Feb 12 21:54:29.382108 systemd[1]: Started kubelet.service. Feb 12 21:54:29.446037 kubelet[2216]: E0212 21:54:29.445971 2216 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set" Feb 12 21:54:29.449667 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 12 21:54:29.449879 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 12 21:54:29.756550 amazon-ssm-agent[1690]: 2024-02-12 21:54:29 INFO [MessagingDeliveryService] [Association] No associations on boot. Requerying for associations after 30 seconds. Feb 12 21:54:30.474917 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1535107007.mount: Deactivated successfully. 
Feb 12 21:54:31.074194 env[1721]: time="2024-02-12T21:54:31.074141314Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 21:54:31.078316 env[1721]: time="2024-02-12T21:54:31.078243750Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5a7325fa2b6e8d712e4a770abb4a5a5852e87b6de8df34552d67853e9bfb9f9f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 21:54:31.082640 env[1721]: time="2024-02-12T21:54:31.082594661Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 21:54:31.085121 env[1721]: time="2024-02-12T21:54:31.085083359Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:f6e0de32a002b910b9b2e0e8d769e2d7b05208240559c745ce4781082ab15f22,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 21:54:31.085677 env[1721]: time="2024-02-12T21:54:31.085640678Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.13\" returns image reference \"sha256:5a7325fa2b6e8d712e4a770abb4a5a5852e87b6de8df34552d67853e9bfb9f9f\"" Feb 12 21:54:31.098004 env[1721]: time="2024-02-12T21:54:31.097962708Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Feb 12 21:54:31.598163 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount232085530.mount: Deactivated successfully. 
Feb 12 21:54:31.611373 env[1721]: time="2024-02-12T21:54:31.611317929Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 21:54:31.615466 env[1721]: time="2024-02-12T21:54:31.615419010Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 21:54:31.618310 env[1721]: time="2024-02-12T21:54:31.618256305Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 21:54:31.620559 env[1721]: time="2024-02-12T21:54:31.620519473Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 21:54:31.621077 env[1721]: time="2024-02-12T21:54:31.621041706Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Feb 12 21:54:31.634086 env[1721]: time="2024-02-12T21:54:31.634050574Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.6-0\"" Feb 12 21:54:32.562664 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3112202920.mount: Deactivated successfully. Feb 12 21:54:36.938485 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
Feb 12 21:54:38.040031 env[1721]: time="2024-02-12T21:54:38.039982974Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.6-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 21:54:38.044156 env[1721]: time="2024-02-12T21:54:38.044107883Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fce326961ae2d51a5f726883fd59d2a8c2ccc3e45d3bb859882db58e422e59e7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 21:54:38.047313 env[1721]: time="2024-02-12T21:54:38.047249768Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.6-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 21:54:38.050032 env[1721]: time="2024-02-12T21:54:38.049974904Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 21:54:38.050884 env[1721]: time="2024-02-12T21:54:38.050842521Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.6-0\" returns image reference \"sha256:fce326961ae2d51a5f726883fd59d2a8c2ccc3e45d3bb859882db58e422e59e7\"" Feb 12 21:54:38.063843 env[1721]: time="2024-02-12T21:54:38.063801705Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.9.3\"" Feb 12 21:54:38.715905 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1576791649.mount: Deactivated successfully. 
Feb 12 21:54:39.529690 env[1721]: time="2024-02-12T21:54:39.529638529Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.9.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 21:54:39.533352 env[1721]: time="2024-02-12T21:54:39.533313035Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 21:54:39.537227 env[1721]: time="2024-02-12T21:54:39.537185223Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.9.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 21:54:39.548167 env[1721]: time="2024-02-12T21:54:39.548097561Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.9.3\" returns image reference \"sha256:5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a\"" Feb 12 21:54:39.556997 env[1721]: time="2024-02-12T21:54:39.556658590Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 21:54:39.605642 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Feb 12 21:54:39.606297 systemd[1]: Stopped kubelet.service. Feb 12 21:54:39.608153 systemd[1]: Started kubelet.service. Feb 12 21:54:39.719170 kubelet[2247]: E0212 21:54:39.719104 2247 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set" Feb 12 21:54:39.721512 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 12 21:54:39.721769 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Feb 12 21:54:42.655429 systemd[1]: Stopped kubelet.service. Feb 12 21:54:42.673772 systemd[1]: Reloading. Feb 12 21:54:42.769857 /usr/lib/systemd/system-generators/torcx-generator[2327]: time="2024-02-12T21:54:42Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 12 21:54:42.770389 /usr/lib/systemd/system-generators/torcx-generator[2327]: time="2024-02-12T21:54:42Z" level=info msg="torcx already run" Feb 12 21:54:42.876141 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 12 21:54:42.876166 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 12 21:54:42.897919 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 12 21:54:43.026969 systemd[1]: Started kubelet.service. Feb 12 21:54:43.100258 kubelet[2386]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 12 21:54:43.100258 kubelet[2386]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 12 21:54:43.100757 kubelet[2386]: I0212 21:54:43.100329 2386 server.go:198] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 12 21:54:43.105897 kubelet[2386]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 12 21:54:43.105897 kubelet[2386]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 12 21:54:43.600625 kubelet[2386]: I0212 21:54:43.600588 2386 server.go:412] "Kubelet version" kubeletVersion="v1.26.5" Feb 12 21:54:43.600625 kubelet[2386]: I0212 21:54:43.600617 2386 server.go:414] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 12 21:54:43.600965 kubelet[2386]: I0212 21:54:43.600891 2386 server.go:836] "Client rotation is on, will bootstrap in background" Feb 12 21:54:43.627771 kubelet[2386]: E0212 21:54:43.627736 2386 certificate_manager.go:471] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.30.174:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.30.174:6443: connect: connection refused Feb 12 21:54:43.627942 kubelet[2386]: I0212 21:54:43.627808 2386 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 12 21:54:43.633143 kubelet[2386]: I0212 21:54:43.633103 2386 server.go:659] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 12 21:54:43.636198 kubelet[2386]: I0212 21:54:43.636160 2386 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 12 21:54:43.636437 kubelet[2386]: I0212 21:54:43.636369 2386 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:} {Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]} Feb 12 21:54:43.638480 kubelet[2386]: I0212 21:54:43.638456 2386 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Feb 12 21:54:43.638561 kubelet[2386]: I0212 21:54:43.638485 2386 container_manager_linux.go:308] "Creating device plugin manager" Feb 12 21:54:43.640923 kubelet[2386]: I0212 21:54:43.640898 2386 state_mem.go:36] "Initialized new 
in-memory state store" Feb 12 21:54:43.664887 kubelet[2386]: I0212 21:54:43.664843 2386 kubelet.go:398] "Attempting to sync node with API server" Feb 12 21:54:43.664887 kubelet[2386]: I0212 21:54:43.664888 2386 kubelet.go:286] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 12 21:54:43.665555 kubelet[2386]: I0212 21:54:43.665532 2386 kubelet.go:297] "Adding apiserver pod source" Feb 12 21:54:43.665647 kubelet[2386]: I0212 21:54:43.665566 2386 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 12 21:54:43.667787 kubelet[2386]: W0212 21:54:43.667741 2386 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://172.31.30.174:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-30-174&limit=500&resourceVersion=0": dial tcp 172.31.30.174:6443: connect: connection refused Feb 12 21:54:43.667787 kubelet[2386]: E0212 21:54:43.667791 2386 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.30.174:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-30-174&limit=500&resourceVersion=0": dial tcp 172.31.30.174:6443: connect: connection refused Feb 12 21:54:43.672899 kubelet[2386]: I0212 21:54:43.672864 2386 kuberuntime_manager.go:244] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 12 21:54:43.676108 kubelet[2386]: W0212 21:54:43.676081 2386 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Feb 12 21:54:43.679168 kubelet[2386]: I0212 21:54:43.679141 2386 server.go:1186] "Started kubelet" Feb 12 21:54:43.679329 kubelet[2386]: W0212 21:54:43.679289 2386 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://172.31.30.174:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.30.174:6443: connect: connection refused Feb 12 21:54:43.679398 kubelet[2386]: E0212 21:54:43.679338 2386 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.30.174:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.30.174:6443: connect: connection refused Feb 12 21:54:43.680752 kubelet[2386]: I0212 21:54:43.680728 2386 server.go:161] "Starting to listen" address="0.0.0.0" port=10250 Feb 12 21:54:43.682670 kubelet[2386]: I0212 21:54:43.682643 2386 server.go:451] "Adding debug handlers to kubelet server" Feb 12 21:54:43.684117 kubelet[2386]: E0212 21:54:43.683990 2386 event.go:276] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ip-172-31-30-174.17b33c3562070d1d", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ip-172-31-30-174", UID:"ip-172-31-30-174", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"ip-172-31-30-174"}, FirstTimestamp:time.Date(2024, time.February, 12, 21, 54, 43, 679112477, time.Local), LastTimestamp:time.Date(2024, 
time.February, 12, 21, 54, 43, 679112477, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Post "https://172.31.30.174:6443/api/v1/namespaces/default/events": dial tcp 172.31.30.174:6443: connect: connection refused'(may retry after sleeping) Feb 12 21:54:43.685840 kubelet[2386]: E0212 21:54:43.685821 2386 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 12 21:54:43.685929 kubelet[2386]: E0212 21:54:43.685851 2386 kubelet.go:1386] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 12 21:54:43.691248 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). 
Feb 12 21:54:43.692683 kubelet[2386]: I0212 21:54:43.692647 2386 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 12 21:54:43.695730 kubelet[2386]: I0212 21:54:43.695703 2386 volume_manager.go:293] "Starting Kubelet Volume Manager" Feb 12 21:54:43.697638 kubelet[2386]: I0212 21:54:43.697620 2386 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 12 21:54:43.699415 kubelet[2386]: W0212 21:54:43.699373 2386 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://172.31.30.174:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.30.174:6443: connect: connection refused Feb 12 21:54:43.699701 kubelet[2386]: E0212 21:54:43.699420 2386 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.30.174:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.30.174:6443: connect: connection refused Feb 12 21:54:43.701403 kubelet[2386]: E0212 21:54:43.701367 2386 controller.go:146] failed to ensure lease exists, will retry in 200ms, error: Get "https://172.31.30.174:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-30-174?timeout=10s": dial tcp 172.31.30.174:6443: connect: connection refused Feb 12 21:54:43.810321 kubelet[2386]: I0212 21:54:43.810279 2386 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-30-174" Feb 12 21:54:43.813777 kubelet[2386]: E0212 21:54:43.813755 2386 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://172.31.30.174:6443/api/v1/nodes\": dial tcp 172.31.30.174:6443: connect: connection refused" node="ip-172-31-30-174" Feb 12 21:54:43.819555 kubelet[2386]: I0212 21:54:43.819525 2386 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 12 21:54:43.819746 kubelet[2386]: I0212 21:54:43.819735 2386 
cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 12 21:54:43.819852 kubelet[2386]: I0212 21:54:43.819842 2386 state_mem.go:36] "Initialized new in-memory state store" Feb 12 21:54:43.822860 kubelet[2386]: I0212 21:54:43.822838 2386 policy_none.go:49] "None policy: Start" Feb 12 21:54:43.823889 kubelet[2386]: I0212 21:54:43.823869 2386 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 12 21:54:43.823987 kubelet[2386]: I0212 21:54:43.823907 2386 state_mem.go:35] "Initializing new in-memory state store" Feb 12 21:54:43.833704 kubelet[2386]: I0212 21:54:43.833667 2386 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 12 21:54:43.833938 kubelet[2386]: I0212 21:54:43.833915 2386 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 12 21:54:43.839038 kubelet[2386]: E0212 21:54:43.839008 2386 eviction_manager.go:261] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-30-174\" not found" Feb 12 21:54:43.841937 kubelet[2386]: I0212 21:54:43.841915 2386 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4 Feb 12 21:54:43.868710 kubelet[2386]: I0212 21:54:43.867406 2386 kubelet_network_linux.go:63] "Initialized iptables rules." 
protocol=IPv6 Feb 12 21:54:43.868710 kubelet[2386]: I0212 21:54:43.867430 2386 status_manager.go:176] "Starting to sync pod status with apiserver" Feb 12 21:54:43.868710 kubelet[2386]: I0212 21:54:43.867455 2386 kubelet.go:2113] "Starting kubelet main sync loop" Feb 12 21:54:43.868710 kubelet[2386]: E0212 21:54:43.867509 2386 kubelet.go:2137] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Feb 12 21:54:43.870009 kubelet[2386]: W0212 21:54:43.869974 2386 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://172.31.30.174:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.30.174:6443: connect: connection refused Feb 12 21:54:43.870166 kubelet[2386]: E0212 21:54:43.870155 2386 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.30.174:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.30.174:6443: connect: connection refused Feb 12 21:54:43.902182 kubelet[2386]: E0212 21:54:43.902142 2386 controller.go:146] failed to ensure lease exists, will retry in 400ms, error: Get "https://172.31.30.174:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-30-174?timeout=10s": dial tcp 172.31.30.174:6443: connect: connection refused Feb 12 21:54:43.968477 kubelet[2386]: I0212 21:54:43.968432 2386 topology_manager.go:210] "Topology Admit Handler" Feb 12 21:54:43.969950 kubelet[2386]: I0212 21:54:43.969922 2386 topology_manager.go:210] "Topology Admit Handler" Feb 12 21:54:43.971893 kubelet[2386]: I0212 21:54:43.971869 2386 topology_manager.go:210] "Topology Admit Handler" Feb 12 21:54:43.972733 kubelet[2386]: I0212 21:54:43.972713 2386 status_manager.go:698] "Failed to get status for pod" podUID=aa820bc9dff2bd5f94087573ced01974 pod="kube-system/kube-controller-manager-ip-172-31-30-174" 
err="Get \"https://172.31.30.174:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ip-172-31-30-174\": dial tcp 172.31.30.174:6443: connect: connection refused" Feb 12 21:54:43.977606 kubelet[2386]: I0212 21:54:43.977582 2386 status_manager.go:698] "Failed to get status for pod" podUID=a71106d31a4561224b792766a9abdbb3 pod="kube-system/kube-scheduler-ip-172-31-30-174" err="Get \"https://172.31.30.174:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-ip-172-31-30-174\": dial tcp 172.31.30.174:6443: connect: connection refused" Feb 12 21:54:43.980634 kubelet[2386]: I0212 21:54:43.980614 2386 status_manager.go:698] "Failed to get status for pod" podUID=a64388074b337b386d22c7f055163ffd pod="kube-system/kube-apiserver-ip-172-31-30-174" err="Get \"https://172.31.30.174:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-ip-172-31-30-174\": dial tcp 172.31.30.174:6443: connect: connection refused" Feb 12 21:54:44.015740 kubelet[2386]: I0212 21:54:44.015703 2386 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-30-174" Feb 12 21:54:44.016056 kubelet[2386]: E0212 21:54:44.016033 2386 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://172.31.30.174:6443/api/v1/nodes\": dial tcp 172.31.30.174:6443: connect: connection refused" node="ip-172-31-30-174" Feb 12 21:54:44.101375 kubelet[2386]: I0212 21:54:44.101336 2386 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/aa820bc9dff2bd5f94087573ced01974-ca-certs\") pod \"kube-controller-manager-ip-172-31-30-174\" (UID: \"aa820bc9dff2bd5f94087573ced01974\") " pod="kube-system/kube-controller-manager-ip-172-31-30-174" Feb 12 21:54:44.101890 kubelet[2386]: I0212 21:54:44.101866 2386 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/a71106d31a4561224b792766a9abdbb3-kubeconfig\") pod \"kube-scheduler-ip-172-31-30-174\" (UID: \"a71106d31a4561224b792766a9abdbb3\") " pod="kube-system/kube-scheduler-ip-172-31-30-174" Feb 12 21:54:44.101986 kubelet[2386]: I0212 21:54:44.101938 2386 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a64388074b337b386d22c7f055163ffd-ca-certs\") pod \"kube-apiserver-ip-172-31-30-174\" (UID: \"a64388074b337b386d22c7f055163ffd\") " pod="kube-system/kube-apiserver-ip-172-31-30-174" Feb 12 21:54:44.102063 kubelet[2386]: I0212 21:54:44.101986 2386 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a64388074b337b386d22c7f055163ffd-k8s-certs\") pod \"kube-apiserver-ip-172-31-30-174\" (UID: \"a64388074b337b386d22c7f055163ffd\") " pod="kube-system/kube-apiserver-ip-172-31-30-174" Feb 12 21:54:44.102063 kubelet[2386]: I0212 21:54:44.102024 2386 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a64388074b337b386d22c7f055163ffd-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-30-174\" (UID: \"a64388074b337b386d22c7f055163ffd\") " pod="kube-system/kube-apiserver-ip-172-31-30-174" Feb 12 21:54:44.102154 kubelet[2386]: I0212 21:54:44.102057 2386 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/aa820bc9dff2bd5f94087573ced01974-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-30-174\" (UID: \"aa820bc9dff2bd5f94087573ced01974\") " pod="kube-system/kube-controller-manager-ip-172-31-30-174" Feb 12 21:54:44.102154 kubelet[2386]: I0212 21:54:44.102117 2386 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/aa820bc9dff2bd5f94087573ced01974-k8s-certs\") pod \"kube-controller-manager-ip-172-31-30-174\" (UID: \"aa820bc9dff2bd5f94087573ced01974\") " pod="kube-system/kube-controller-manager-ip-172-31-30-174" Feb 12 21:54:44.102237 kubelet[2386]: I0212 21:54:44.102204 2386 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/aa820bc9dff2bd5f94087573ced01974-kubeconfig\") pod \"kube-controller-manager-ip-172-31-30-174\" (UID: \"aa820bc9dff2bd5f94087573ced01974\") " pod="kube-system/kube-controller-manager-ip-172-31-30-174" Feb 12 21:54:44.102332 kubelet[2386]: I0212 21:54:44.102251 2386 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/aa820bc9dff2bd5f94087573ced01974-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-30-174\" (UID: \"aa820bc9dff2bd5f94087573ced01974\") " pod="kube-system/kube-controller-manager-ip-172-31-30-174" Feb 12 21:54:44.278216 env[1721]: time="2024-02-12T21:54:44.278166928Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-30-174,Uid:aa820bc9dff2bd5f94087573ced01974,Namespace:kube-system,Attempt:0,}" Feb 12 21:54:44.278712 env[1721]: time="2024-02-12T21:54:44.278184217Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-30-174,Uid:a71106d31a4561224b792766a9abdbb3,Namespace:kube-system,Attempt:0,}" Feb 12 21:54:44.282595 env[1721]: time="2024-02-12T21:54:44.281966143Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-30-174,Uid:a64388074b337b386d22c7f055163ffd,Namespace:kube-system,Attempt:0,}" Feb 12 21:54:44.303494 kubelet[2386]: E0212 21:54:44.303450 2386 controller.go:146] failed to ensure lease exists, will retry in 800ms, error: Get 
"https://172.31.30.174:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-30-174?timeout=10s": dial tcp 172.31.30.174:6443: connect: connection refused Feb 12 21:54:44.417579 kubelet[2386]: I0212 21:54:44.417543 2386 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-30-174" Feb 12 21:54:44.417987 kubelet[2386]: E0212 21:54:44.417948 2386 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://172.31.30.174:6443/api/v1/nodes\": dial tcp 172.31.30.174:6443: connect: connection refused" node="ip-172-31-30-174" Feb 12 21:54:44.558055 kubelet[2386]: W0212 21:54:44.557922 2386 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://172.31.30.174:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-30-174&limit=500&resourceVersion=0": dial tcp 172.31.30.174:6443: connect: connection refused Feb 12 21:54:44.558055 kubelet[2386]: E0212 21:54:44.557986 2386 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.30.174:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-30-174&limit=500&resourceVersion=0": dial tcp 172.31.30.174:6443: connect: connection refused Feb 12 21:54:44.563391 kubelet[2386]: W0212 21:54:44.563347 2386 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://172.31.30.174:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.30.174:6443: connect: connection refused Feb 12 21:54:44.563511 kubelet[2386]: E0212 21:54:44.563397 2386 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.30.174:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.30.174:6443: connect: connection refused Feb 12 21:54:44.726018 kubelet[2386]: W0212 21:54:44.725936 2386 reflector.go:424] 
vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://172.31.30.174:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.30.174:6443: connect: connection refused Feb 12 21:54:44.726018 kubelet[2386]: E0212 21:54:44.726023 2386 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.30.174:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.30.174:6443: connect: connection refused Feb 12 21:54:44.808061 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1754343156.mount: Deactivated successfully. Feb 12 21:54:44.823088 env[1721]: time="2024-02-12T21:54:44.823035658Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 21:54:44.825017 env[1721]: time="2024-02-12T21:54:44.824974410Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 21:54:44.832014 env[1721]: time="2024-02-12T21:54:44.831958920Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 21:54:44.834602 env[1721]: time="2024-02-12T21:54:44.834556471Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 21:54:44.835782 env[1721]: time="2024-02-12T21:54:44.835746560Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 21:54:44.839352 env[1721]: 
time="2024-02-12T21:54:44.839320881Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 21:54:44.841845 env[1721]: time="2024-02-12T21:54:44.841794265Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 21:54:44.843065 env[1721]: time="2024-02-12T21:54:44.842884927Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 21:54:44.845106 env[1721]: time="2024-02-12T21:54:44.845070169Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 21:54:44.853198 env[1721]: time="2024-02-12T21:54:44.853147798Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 21:54:44.857015 env[1721]: time="2024-02-12T21:54:44.856969348Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 21:54:44.863847 env[1721]: time="2024-02-12T21:54:44.863790869Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 21:54:44.937764 env[1721]: time="2024-02-12T21:54:44.937669570Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 21:54:44.937943 env[1721]: time="2024-02-12T21:54:44.937739397Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 21:54:44.937943 env[1721]: time="2024-02-12T21:54:44.937754963Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 21:54:44.938095 env[1721]: time="2024-02-12T21:54:44.937951465Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/cb267c3c3340aaf280056dbacbfe67560c878ad199d969131316bbd9bb7ca799 pid=2460 runtime=io.containerd.runc.v2 Feb 12 21:54:44.946120 env[1721]: time="2024-02-12T21:54:44.946027129Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 21:54:44.946385 env[1721]: time="2024-02-12T21:54:44.946344191Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 21:54:44.947572 env[1721]: time="2024-02-12T21:54:44.946516903Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 21:54:44.947912 env[1721]: time="2024-02-12T21:54:44.947876165Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/863997d2b39daa99c3c901191b0afc89bac77edcfd0bb52d1c4189d4ce7a8650 pid=2473 runtime=io.containerd.runc.v2 Feb 12 21:54:44.966586 env[1721]: time="2024-02-12T21:54:44.966491734Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 21:54:44.966586 env[1721]: time="2024-02-12T21:54:44.966541861Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 21:54:44.966920 env[1721]: time="2024-02-12T21:54:44.966557388Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 21:54:44.967486 env[1721]: time="2024-02-12T21:54:44.967398103Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7aa0303bf8e62833cdfc85cbdf596c5a1c620883b4d429a9dda87ddb61fcc285 pid=2500 runtime=io.containerd.runc.v2 Feb 12 21:54:45.107259 kubelet[2386]: E0212 21:54:45.104894 2386 controller.go:146] failed to ensure lease exists, will retry in 1.6s, error: Get "https://172.31.30.174:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-30-174?timeout=10s": dial tcp 172.31.30.174:6443: connect: connection refused Feb 12 21:54:45.164620 env[1721]: time="2024-02-12T21:54:45.160379088Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-30-174,Uid:aa820bc9dff2bd5f94087573ced01974,Namespace:kube-system,Attempt:0,} returns sandbox id \"cb267c3c3340aaf280056dbacbfe67560c878ad199d969131316bbd9bb7ca799\"" Feb 12 21:54:45.177557 env[1721]: time="2024-02-12T21:54:45.177509272Z" level=info msg="CreateContainer within sandbox \"cb267c3c3340aaf280056dbacbfe67560c878ad199d969131316bbd9bb7ca799\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 12 21:54:45.180285 env[1721]: time="2024-02-12T21:54:45.180222667Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-30-174,Uid:a64388074b337b386d22c7f055163ffd,Namespace:kube-system,Attempt:0,} returns sandbox id \"863997d2b39daa99c3c901191b0afc89bac77edcfd0bb52d1c4189d4ce7a8650\"" Feb 12 21:54:45.187902 env[1721]: time="2024-02-12T21:54:45.187857919Z" level=info msg="CreateContainer within sandbox \"863997d2b39daa99c3c901191b0afc89bac77edcfd0bb52d1c4189d4ce7a8650\" for container 
&ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 12 21:54:45.198567 env[1721]: time="2024-02-12T21:54:45.198517510Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-30-174,Uid:a71106d31a4561224b792766a9abdbb3,Namespace:kube-system,Attempt:0,} returns sandbox id \"7aa0303bf8e62833cdfc85cbdf596c5a1c620883b4d429a9dda87ddb61fcc285\"" Feb 12 21:54:45.201312 env[1721]: time="2024-02-12T21:54:45.201249858Z" level=info msg="CreateContainer within sandbox \"7aa0303bf8e62833cdfc85cbdf596c5a1c620883b4d429a9dda87ddb61fcc285\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 12 21:54:45.220456 kubelet[2386]: I0212 21:54:45.220430 2386 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-30-174" Feb 12 21:54:45.251123 kubelet[2386]: E0212 21:54:45.220753 2386 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://172.31.30.174:6443/api/v1/nodes\": dial tcp 172.31.30.174:6443: connect: connection refused" node="ip-172-31-30-174" Feb 12 21:54:45.389293 env[1721]: time="2024-02-12T21:54:45.389156937Z" level=info msg="CreateContainer within sandbox \"cb267c3c3340aaf280056dbacbfe67560c878ad199d969131316bbd9bb7ca799\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"12103babb5c780d35e89f0f3058c084da5c499e12227bbf79663186964279dc2\"" Feb 12 21:54:45.390403 env[1721]: time="2024-02-12T21:54:45.390370225Z" level=info msg="StartContainer for \"12103babb5c780d35e89f0f3058c084da5c499e12227bbf79663186964279dc2\"" Feb 12 21:54:45.420357 kubelet[2386]: W0212 21:54:45.420221 2386 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://172.31.30.174:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.30.174:6443: connect: connection refused Feb 12 21:54:45.420357 kubelet[2386]: E0212 21:54:45.420317 2386 reflector.go:140] 
vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.30.174:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.30.174:6443: connect: connection refused Feb 12 21:54:45.429734 env[1721]: time="2024-02-12T21:54:45.429585213Z" level=info msg="CreateContainer within sandbox \"7aa0303bf8e62833cdfc85cbdf596c5a1c620883b4d429a9dda87ddb61fcc285\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"313ea9fb70a56ca296f5612e4953af094306f526fc3452759f55b55d4cdc9339\"" Feb 12 21:54:45.432982 env[1721]: time="2024-02-12T21:54:45.432504672Z" level=info msg="StartContainer for \"313ea9fb70a56ca296f5612e4953af094306f526fc3452759f55b55d4cdc9339\"" Feb 12 21:54:45.454654 env[1721]: time="2024-02-12T21:54:45.454594788Z" level=info msg="CreateContainer within sandbox \"863997d2b39daa99c3c901191b0afc89bac77edcfd0bb52d1c4189d4ce7a8650\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"abbebf5334645e03b10fca62856d188fd0473f596e0bf8322c5903213cf856d5\"" Feb 12 21:54:45.455770 env[1721]: time="2024-02-12T21:54:45.455715570Z" level=info msg="StartContainer for \"abbebf5334645e03b10fca62856d188fd0473f596e0bf8322c5903213cf856d5\"" Feb 12 21:54:45.532604 env[1721]: time="2024-02-12T21:54:45.532530368Z" level=info msg="StartContainer for \"12103babb5c780d35e89f0f3058c084da5c499e12227bbf79663186964279dc2\" returns successfully" Feb 12 21:54:45.577973 env[1721]: time="2024-02-12T21:54:45.577818014Z" level=info msg="StartContainer for \"313ea9fb70a56ca296f5612e4953af094306f526fc3452759f55b55d4cdc9339\" returns successfully" Feb 12 21:54:45.639656 env[1721]: time="2024-02-12T21:54:45.639543547Z" level=info msg="StartContainer for \"abbebf5334645e03b10fca62856d188fd0473f596e0bf8322c5903213cf856d5\" returns successfully" Feb 12 21:54:45.805726 kubelet[2386]: E0212 21:54:45.805699 2386 certificate_manager.go:471] 
kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.30.174:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.30.174:6443: connect: connection refused Feb 12 21:54:45.882158 kubelet[2386]: I0212 21:54:45.882133 2386 status_manager.go:698] "Failed to get status for pod" podUID=a71106d31a4561224b792766a9abdbb3 pod="kube-system/kube-scheduler-ip-172-31-30-174" err="Get \"https://172.31.30.174:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-ip-172-31-30-174\": dial tcp 172.31.30.174:6443: connect: connection refused" Feb 12 21:54:45.886204 kubelet[2386]: I0212 21:54:45.886177 2386 status_manager.go:698] "Failed to get status for pod" podUID=a64388074b337b386d22c7f055163ffd pod="kube-system/kube-apiserver-ip-172-31-30-174" err="Get \"https://172.31.30.174:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-ip-172-31-30-174\": dial tcp 172.31.30.174:6443: connect: connection refused" Feb 12 21:54:45.891349 kubelet[2386]: I0212 21:54:45.890820 2386 status_manager.go:698] "Failed to get status for pod" podUID=aa820bc9dff2bd5f94087573ced01974 pod="kube-system/kube-controller-manager-ip-172-31-30-174" err="Get \"https://172.31.30.174:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ip-172-31-30-174\": dial tcp 172.31.30.174:6443: connect: connection refused" Feb 12 21:54:46.659035 kubelet[2386]: W0212 21:54:46.658961 2386 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://172.31.30.174:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.30.174:6443: connect: connection refused Feb 12 21:54:46.659554 kubelet[2386]: E0212 21:54:46.659049 2386 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get 
"https://172.31.30.174:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.30.174:6443: connect: connection refused
Feb 12 21:54:46.705649 kubelet[2386]: E0212 21:54:46.705601 2386 controller.go:146] failed to ensure lease exists, will retry in 3.2s, error: Get "https://172.31.30.174:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-30-174?timeout=10s": dial tcp 172.31.30.174:6443: connect: connection refused
Feb 12 21:54:46.818562 kubelet[2386]: W0212 21:54:46.818490 2386 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://172.31.30.174:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-30-174&limit=500&resourceVersion=0": dial tcp 172.31.30.174:6443: connect: connection refused
Feb 12 21:54:46.818773 kubelet[2386]: E0212 21:54:46.818761 2386 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.30.174:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-30-174&limit=500&resourceVersion=0": dial tcp 172.31.30.174:6443: connect: connection refused
Feb 12 21:54:46.823078 kubelet[2386]: I0212 21:54:46.823056 2386 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-30-174"
Feb 12 21:54:46.823898 kubelet[2386]: E0212 21:54:46.823881 2386 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://172.31.30.174:6443/api/v1/nodes\": dial tcp 172.31.30.174:6443: connect: connection refused" node="ip-172-31-30-174"
Feb 12 21:54:49.912147 kubelet[2386]: E0212 21:54:49.912114 2386 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-30-174\" not found" node="ip-172-31-30-174"
Feb 12 21:54:50.027534 kubelet[2386]: I0212 21:54:50.027496 2386 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-30-174"
Feb 12 21:54:50.049049 kubelet[2386]: I0212 21:54:50.049001 2386 kubelet_node_status.go:73] "Successfully registered node" node="ip-172-31-30-174"
Feb 12 21:54:50.058991 kubelet[2386]: E0212 21:54:50.058960 2386 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ip-172-31-30-174\" not found"
Feb 12 21:54:50.159954 kubelet[2386]: E0212 21:54:50.159905 2386 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ip-172-31-30-174\" not found"
Feb 12 21:54:50.672796 kubelet[2386]: I0212 21:54:50.672747 2386 apiserver.go:52] "Watching apiserver"
Feb 12 21:54:50.699057 kubelet[2386]: I0212 21:54:50.698978 2386 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Feb 12 21:54:50.759467 kubelet[2386]: I0212 21:54:50.759426 2386 reconciler.go:41] "Reconciler: start to sync state"
Feb 12 21:54:52.113380 update_engine[1710]: I0212 21:54:52.113329 1710 update_attempter.cc:509] Updating boot flags...
Feb 12 21:54:52.687240 systemd[1]: Reloading.
Feb 12 21:54:52.806442 /usr/lib/systemd/system-generators/torcx-generator[2900]: time="2024-02-12T21:54:52Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]"
Feb 12 21:54:52.806477 /usr/lib/systemd/system-generators/torcx-generator[2900]: time="2024-02-12T21:54:52Z" level=info msg="torcx already run"
Feb 12 21:54:52.922016 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Feb 12 21:54:52.922260 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 12 21:54:52.946429 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 12 21:54:53.081148 systemd[1]: Stopping kubelet.service...
Feb 12 21:54:53.096996 systemd[1]: kubelet.service: Deactivated successfully.
Feb 12 21:54:53.097840 systemd[1]: Stopped kubelet.service.
Feb 12 21:54:53.102202 systemd[1]: Started kubelet.service.
Feb 12 21:54:53.248412 kubelet[2960]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI.
Feb 12 21:54:53.248897 kubelet[2960]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 12 21:54:53.249098 kubelet[2960]: I0212 21:54:53.249063 2960 server.go:198] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Feb 12 21:54:53.250320 sudo[2971]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Feb 12 21:54:53.250635 sudo[2971]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Feb 12 21:54:53.254776 kubelet[2960]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI.
Feb 12 21:54:53.254878 kubelet[2960]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 12 21:54:53.260331 kubelet[2960]: I0212 21:54:53.260303 2960 server.go:412] "Kubelet version" kubeletVersion="v1.26.5"
Feb 12 21:54:53.260738 kubelet[2960]: I0212 21:54:53.260722 2960 server.go:414] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 12 21:54:53.261169 kubelet[2960]: I0212 21:54:53.261155 2960 server.go:836] "Client rotation is on, will bootstrap in background"
Feb 12 21:54:53.265574 kubelet[2960]: I0212 21:54:53.265549 2960 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Feb 12 21:54:53.282810 kubelet[2960]: I0212 21:54:53.282778 2960 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Feb 12 21:54:53.289874 kubelet[2960]: I0212 21:54:53.289847 2960 server.go:659] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Feb 12 21:54:53.292067 kubelet[2960]: I0212 21:54:53.292042 2960 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb 12 21:54:53.292493 kubelet[2960]: I0212 21:54:53.292477 2960 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]}
Feb 12 21:54:53.293044 kubelet[2960]: I0212 21:54:53.293023 2960 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container"
Feb 12 21:54:53.293155 kubelet[2960]: I0212 21:54:53.293146 2960 container_manager_linux.go:308] "Creating device plugin manager"
Feb 12 21:54:53.293308 kubelet[2960]: I0212 21:54:53.293298 2960 state_mem.go:36] "Initialized new in-memory state store"
Feb 12 21:54:53.303452 kubelet[2960]: I0212 21:54:53.303430 2960 kubelet.go:398] "Attempting to sync node with API server"
Feb 12 21:54:53.303645 kubelet[2960]: I0212 21:54:53.303635 2960 kubelet.go:286] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb 12 21:54:53.303922 kubelet[2960]: I0212 21:54:53.303909 2960 kubelet.go:297] "Adding apiserver pod source"
Feb 12 21:54:53.304010 kubelet[2960]: I0212 21:54:53.304002 2960 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb 12 21:54:53.310746 kubelet[2960]: I0212 21:54:53.310721 2960 kuberuntime_manager.go:244] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
Feb 12 21:54:53.311559 kubelet[2960]: I0212 21:54:53.311542 2960 server.go:1186] "Started kubelet"
Feb 12 21:54:53.334813 kubelet[2960]: I0212 21:54:53.334787 2960 server.go:161] "Starting to listen" address="0.0.0.0" port=10250
Feb 12 21:54:53.336045 kubelet[2960]: I0212 21:54:53.336023 2960 server.go:451] "Adding debug handlers to kubelet server"
Feb 12 21:54:53.337861 kubelet[2960]: I0212 21:54:53.337837 2960 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Feb 12 21:54:53.350907 kubelet[2960]: I0212 21:54:53.350874 2960 volume_manager.go:293] "Starting Kubelet Volume Manager"
Feb 12 21:54:53.351914 kubelet[2960]: I0212 21:54:53.351893 2960 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Feb 12 21:54:53.399304 kubelet[2960]: E0212 21:54:53.399280 2960 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs"
Feb 12 21:54:53.399489 kubelet[2960]: E0212 21:54:53.399479 2960 kubelet.go:1386] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Feb 12 21:54:53.494291 kubelet[2960]: I0212 21:54:53.493289 2960 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-30-174"
Feb 12 21:54:53.518210 kubelet[2960]: I0212 21:54:53.510228 2960 kubelet_node_status.go:108] "Node was previously registered" node="ip-172-31-30-174"
Feb 12 21:54:53.518210 kubelet[2960]: I0212 21:54:53.510843 2960 kubelet_node_status.go:73] "Successfully registered node" node="ip-172-31-30-174"
Feb 12 21:54:53.553744 kubelet[2960]: I0212 21:54:53.550323 2960 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4
Feb 12 21:54:53.590655 kubelet[2960]: I0212 21:54:53.586797 2960 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv6
Feb 12 21:54:53.590655 kubelet[2960]: I0212 21:54:53.586821 2960 status_manager.go:176] "Starting to sync pod status with apiserver"
Feb 12 21:54:53.590655 kubelet[2960]: I0212 21:54:53.586841 2960 kubelet.go:2113] "Starting kubelet main sync loop"
Feb 12 21:54:53.590655 kubelet[2960]: E0212 21:54:53.586908 2960 kubelet.go:2137] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Feb 12 21:54:53.687576 kubelet[2960]: E0212 21:54:53.687540 2960 kubelet.go:2137] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Feb 12 21:54:53.693685 kubelet[2960]: I0212 21:54:53.693663 2960 cpu_manager.go:214] "Starting CPU manager" policy="none"
Feb 12 21:54:53.693831 kubelet[2960]: I0212 21:54:53.693824 2960 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Feb 12 21:54:53.693892 kubelet[2960]: I0212 21:54:53.693886 2960 state_mem.go:36] "Initialized new in-memory state store"
Feb 12 21:54:53.694121 kubelet[2960]: I0212 21:54:53.694109 2960 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Feb 12 21:54:53.694206 kubelet[2960]: I0212 21:54:53.694200 2960 state_mem.go:96] "Updated CPUSet assignments" assignments=map[]
Feb 12 21:54:53.694288 kubelet[2960]: I0212 21:54:53.694277 2960 policy_none.go:49] "None policy: Start"
Feb 12 21:54:53.695283 kubelet[2960]: I0212 21:54:53.695259 2960 memory_manager.go:169] "Starting memorymanager" policy="None"
Feb 12 21:54:53.695408 kubelet[2960]: I0212 21:54:53.695399 2960 state_mem.go:35] "Initializing new in-memory state store"
Feb 12 21:54:53.695657 kubelet[2960]: I0212 21:54:53.695645 2960 state_mem.go:75] "Updated machine memory state"
Feb 12 21:54:53.697674 kubelet[2960]: I0212 21:54:53.697621 2960 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Feb 12 21:54:53.703987 kubelet[2960]: I0212 21:54:53.703965 2960 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Feb 12 21:54:53.888592 kubelet[2960]: I0212 21:54:53.888492 2960 topology_manager.go:210] "Topology Admit Handler"
Feb 12 21:54:53.888790 kubelet[2960]: I0212 21:54:53.888600 2960 topology_manager.go:210] "Topology Admit Handler"
Feb 12 21:54:53.888790 kubelet[2960]: I0212 21:54:53.888700 2960 topology_manager.go:210] "Topology Admit Handler"
Feb 12 21:54:53.899933 kubelet[2960]: E0212 21:54:53.899901 2960 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ip-172-31-30-174\" already exists" pod="kube-system/kube-controller-manager-ip-172-31-30-174"
Feb 12 21:54:53.984636 kubelet[2960]: I0212 21:54:53.984553 2960 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a64388074b337b386d22c7f055163ffd-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-30-174\" (UID: \"a64388074b337b386d22c7f055163ffd\") " pod="kube-system/kube-apiserver-ip-172-31-30-174"
Feb 12 21:54:53.984913 kubelet[2960]: I0212 21:54:53.984892 2960 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/aa820bc9dff2bd5f94087573ced01974-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-30-174\" (UID: \"aa820bc9dff2bd5f94087573ced01974\") " pod="kube-system/kube-controller-manager-ip-172-31-30-174"
Feb 12 21:54:53.985005 kubelet[2960]: I0212 21:54:53.984952 2960 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/aa820bc9dff2bd5f94087573ced01974-k8s-certs\") pod \"kube-controller-manager-ip-172-31-30-174\" (UID: \"aa820bc9dff2bd5f94087573ced01974\") " pod="kube-system/kube-controller-manager-ip-172-31-30-174"
Feb 12 21:54:53.985005 kubelet[2960]: I0212 21:54:53.984995 2960 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/aa820bc9dff2bd5f94087573ced01974-kubeconfig\") pod \"kube-controller-manager-ip-172-31-30-174\" (UID: \"aa820bc9dff2bd5f94087573ced01974\") " pod="kube-system/kube-controller-manager-ip-172-31-30-174"
Feb 12 21:54:53.985107 kubelet[2960]: I0212 21:54:53.985038 2960 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/aa820bc9dff2bd5f94087573ced01974-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-30-174\" (UID: \"aa820bc9dff2bd5f94087573ced01974\") " pod="kube-system/kube-controller-manager-ip-172-31-30-174"
Feb 12 21:54:53.985107 kubelet[2960]: I0212 21:54:53.985073 2960 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a64388074b337b386d22c7f055163ffd-ca-certs\") pod \"kube-apiserver-ip-172-31-30-174\" (UID: \"a64388074b337b386d22c7f055163ffd\") " pod="kube-system/kube-apiserver-ip-172-31-30-174"
Feb 12 21:54:53.985107 kubelet[2960]: I0212 21:54:53.985105 2960 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a64388074b337b386d22c7f055163ffd-k8s-certs\") pod \"kube-apiserver-ip-172-31-30-174\" (UID: \"a64388074b337b386d22c7f055163ffd\") " pod="kube-system/kube-apiserver-ip-172-31-30-174"
Feb 12 21:54:53.985227 kubelet[2960]: I0212 21:54:53.985138 2960 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/aa820bc9dff2bd5f94087573ced01974-ca-certs\") pod \"kube-controller-manager-ip-172-31-30-174\" (UID: \"aa820bc9dff2bd5f94087573ced01974\") " pod="kube-system/kube-controller-manager-ip-172-31-30-174"
Feb 12 21:54:53.985227 kubelet[2960]: I0212 21:54:53.985170 2960 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a71106d31a4561224b792766a9abdbb3-kubeconfig\") pod \"kube-scheduler-ip-172-31-30-174\" (UID: \"a71106d31a4561224b792766a9abdbb3\") " pod="kube-system/kube-scheduler-ip-172-31-30-174"
Feb 12 21:54:54.160798 sudo[2971]: pam_unix(sudo:session): session closed for user root
Feb 12 21:54:54.324986 kubelet[2960]: I0212 21:54:54.324933 2960 apiserver.go:52] "Watching apiserver"
Feb 12 21:54:54.352674 kubelet[2960]: I0212 21:54:54.352634 2960 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Feb 12 21:54:54.386909 kubelet[2960]: I0212 21:54:54.386872 2960 reconciler.go:41] "Reconciler: start to sync state"
Feb 12 21:54:54.652667 kubelet[2960]: E0212 21:54:54.652494 2960 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ip-172-31-30-174\" already exists" pod="kube-system/kube-apiserver-ip-172-31-30-174"
Feb 12 21:54:54.913496 kubelet[2960]: E0212 21:54:54.913336 2960 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ip-172-31-30-174\" already exists" pod="kube-system/kube-scheduler-ip-172-31-30-174"
Feb 12 21:54:55.111190 kubelet[2960]: E0212 21:54:55.111153 2960 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ip-172-31-30-174\" already exists" pod="kube-system/kube-controller-manager-ip-172-31-30-174"
Feb 12 21:54:55.716141 kubelet[2960]: I0212 21:54:55.716094 2960 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-30-174" podStartSLOduration=2.7160191129999998 pod.CreationTimestamp="2024-02-12 21:54:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 21:54:55.715910652 +0000 UTC m=+2.599266052" watchObservedRunningTime="2024-02-12 21:54:55.716019113 +0000 UTC m=+2.599374514"
Feb 12 21:54:55.716654 kubelet[2960]: I0212 21:54:55.716223 2960 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-30-174" podStartSLOduration=2.716199597 pod.CreationTimestamp="2024-02-12 21:54:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 21:54:55.321664876 +0000 UTC m=+2.205020274" watchObservedRunningTime="2024-02-12 21:54:55.716199597 +0000 UTC m=+2.599554991"
Feb 12 21:54:55.972385 sudo[1974]: pam_unix(sudo:session): session closed for user root
Feb 12 21:54:55.997947 sshd[1970]: pam_unix(sshd:session): session closed for user core
Feb 12 21:54:56.004983 systemd[1]: sshd@4-172.31.30.174:22-139.178.89.65:50940.service: Deactivated successfully.
Feb 12 21:54:56.006361 systemd[1]: session-5.scope: Deactivated successfully.
Feb 12 21:54:56.011137 systemd-logind[1709]: Session 5 logged out. Waiting for processes to exit.
Feb 12 21:54:56.017187 systemd-logind[1709]: Removed session 5.
Feb 12 21:54:59.140227 kubelet[2960]: I0212 21:54:59.140198 2960 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-30-174" podStartSLOduration=7.140153373 pod.CreationTimestamp="2024-02-12 21:54:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 21:54:56.112309242 +0000 UTC m=+2.995664667" watchObservedRunningTime="2024-02-12 21:54:59.140153373 +0000 UTC m=+6.023508761"
Feb 12 21:54:59.782765 amazon-ssm-agent[1690]: 2024-02-12 21:54:59 INFO [MessagingDeliveryService] [Association] Schedule manager refreshed with 0 associations, 0 new associations associated
Feb 12 21:55:05.187614 kubelet[2960]: I0212 21:55:05.187582 2960 kuberuntime_manager.go:1114] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Feb 12 21:55:05.188219 env[1721]: time="2024-02-12T21:55:05.188153504Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Feb 12 21:55:05.188792 kubelet[2960]: I0212 21:55:05.188768 2960 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Feb 12 21:55:05.833979 kubelet[2960]: I0212 21:55:05.833946 2960 topology_manager.go:210] "Topology Admit Handler"
Feb 12 21:55:05.843999 kubelet[2960]: W0212 21:55:05.843960 2960 reflector.go:424] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:ip-172-31-30-174" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-30-174' and this object
Feb 12 21:55:05.844179 kubelet[2960]: E0212 21:55:05.844019 2960 reflector.go:140] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:ip-172-31-30-174" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-30-174' and this object
Feb 12 21:55:05.844179 kubelet[2960]: W0212 21:55:05.844122 2960 reflector.go:424] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ip-172-31-30-174" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-30-174' and this object
Feb 12 21:55:05.844179 kubelet[2960]: E0212 21:55:05.844136 2960 reflector.go:140] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ip-172-31-30-174" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-30-174' and this object
Feb 12 21:55:05.869613 kubelet[2960]: I0212 21:55:05.869575 2960 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/bd29c4dd-c02b-4daf-83b9-4221797f85cd-kube-proxy\") pod \"kube-proxy-hsm7s\" (UID: \"bd29c4dd-c02b-4daf-83b9-4221797f85cd\") " pod="kube-system/kube-proxy-hsm7s"
Feb 12 21:55:05.870370 kubelet[2960]: I0212 21:55:05.869649 2960 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bd29c4dd-c02b-4daf-83b9-4221797f85cd-lib-modules\") pod \"kube-proxy-hsm7s\" (UID: \"bd29c4dd-c02b-4daf-83b9-4221797f85cd\") " pod="kube-system/kube-proxy-hsm7s"
Feb 12 21:55:05.870370 kubelet[2960]: I0212 21:55:05.869694 2960 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7d9gq\" (UniqueName: \"kubernetes.io/projected/bd29c4dd-c02b-4daf-83b9-4221797f85cd-kube-api-access-7d9gq\") pod \"kube-proxy-hsm7s\" (UID: \"bd29c4dd-c02b-4daf-83b9-4221797f85cd\") " pod="kube-system/kube-proxy-hsm7s"
Feb 12 21:55:05.870370 kubelet[2960]: I0212 21:55:05.870334 2960 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bd29c4dd-c02b-4daf-83b9-4221797f85cd-xtables-lock\") pod \"kube-proxy-hsm7s\" (UID: \"bd29c4dd-c02b-4daf-83b9-4221797f85cd\") " pod="kube-system/kube-proxy-hsm7s"
Feb 12 21:55:05.871460 kubelet[2960]: I0212 21:55:05.871435 2960 topology_manager.go:210] "Topology Admit Handler"
Feb 12 21:55:05.971466 kubelet[2960]: I0212 21:55:05.971420 2960 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2f628668-03a9-4cc6-8e97-285241c99d8e-lib-modules\") pod \"cilium-j9v7d\" (UID: \"2f628668-03a9-4cc6-8e97-285241c99d8e\") " pod="kube-system/cilium-j9v7d"
Feb 12 21:55:05.971466 kubelet[2960]: I0212 21:55:05.971472 2960 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2f628668-03a9-4cc6-8e97-285241c99d8e-clustermesh-secrets\") pod \"cilium-j9v7d\" (UID: \"2f628668-03a9-4cc6-8e97-285241c99d8e\") " pod="kube-system/cilium-j9v7d"
Feb 12 21:55:05.971777 kubelet[2960]: I0212 21:55:05.971504 2960 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2f628668-03a9-4cc6-8e97-285241c99d8e-hubble-tls\") pod \"cilium-j9v7d\" (UID: \"2f628668-03a9-4cc6-8e97-285241c99d8e\") " pod="kube-system/cilium-j9v7d"
Feb 12 21:55:05.971777 kubelet[2960]: I0212 21:55:05.971546 2960 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2f628668-03a9-4cc6-8e97-285241c99d8e-xtables-lock\") pod \"cilium-j9v7d\" (UID: \"2f628668-03a9-4cc6-8e97-285241c99d8e\") " pod="kube-system/cilium-j9v7d"
Feb 12 21:55:05.971777 kubelet[2960]: I0212 21:55:05.971574 2960 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2f628668-03a9-4cc6-8e97-285241c99d8e-host-proc-sys-net\") pod \"cilium-j9v7d\" (UID: \"2f628668-03a9-4cc6-8e97-285241c99d8e\") " pod="kube-system/cilium-j9v7d"
Feb 12 21:55:05.971777 kubelet[2960]: I0212 21:55:05.971634 2960 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2f628668-03a9-4cc6-8e97-285241c99d8e-cilium-run\") pod \"cilium-j9v7d\" (UID: \"2f628668-03a9-4cc6-8e97-285241c99d8e\") " pod="kube-system/cilium-j9v7d"
Feb 12 21:55:05.971777 kubelet[2960]: I0212 21:55:05.971665 2960 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2f628668-03a9-4cc6-8e97-285241c99d8e-cni-path\") pod \"cilium-j9v7d\" (UID: \"2f628668-03a9-4cc6-8e97-285241c99d8e\") " pod="kube-system/cilium-j9v7d"
Feb 12 21:55:05.971777 kubelet[2960]: I0212 21:55:05.971723 2960 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2f628668-03a9-4cc6-8e97-285241c99d8e-cilium-cgroup\") pod \"cilium-j9v7d\" (UID: \"2f628668-03a9-4cc6-8e97-285241c99d8e\") " pod="kube-system/cilium-j9v7d"
Feb 12 21:55:05.972058 kubelet[2960]: I0212 21:55:05.971757 2960 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2f628668-03a9-4cc6-8e97-285241c99d8e-etc-cni-netd\") pod \"cilium-j9v7d\" (UID: \"2f628668-03a9-4cc6-8e97-285241c99d8e\") " pod="kube-system/cilium-j9v7d"
Feb 12 21:55:05.972058 kubelet[2960]: I0212 21:55:05.971786 2960 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2f628668-03a9-4cc6-8e97-285241c99d8e-host-proc-sys-kernel\") pod \"cilium-j9v7d\" (UID: \"2f628668-03a9-4cc6-8e97-285241c99d8e\") " pod="kube-system/cilium-j9v7d"
Feb 12 21:55:05.972058 kubelet[2960]: I0212 21:55:05.971816 2960 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2f628668-03a9-4cc6-8e97-285241c99d8e-bpf-maps\") pod \"cilium-j9v7d\" (UID: \"2f628668-03a9-4cc6-8e97-285241c99d8e\") " pod="kube-system/cilium-j9v7d"
Feb 12 21:55:05.972058 kubelet[2960]: I0212 21:55:05.971846 2960 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2f628668-03a9-4cc6-8e97-285241c99d8e-hostproc\") pod \"cilium-j9v7d\" (UID: \"2f628668-03a9-4cc6-8e97-285241c99d8e\") " pod="kube-system/cilium-j9v7d"
Feb 12 21:55:05.972058 kubelet[2960]: I0212 21:55:05.971875 2960 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2f628668-03a9-4cc6-8e97-285241c99d8e-cilium-config-path\") pod \"cilium-j9v7d\" (UID: \"2f628668-03a9-4cc6-8e97-285241c99d8e\") " pod="kube-system/cilium-j9v7d"
Feb 12 21:55:05.972058 kubelet[2960]: I0212 21:55:05.971922 2960 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xlsx5\" (UniqueName: \"kubernetes.io/projected/2f628668-03a9-4cc6-8e97-285241c99d8e-kube-api-access-xlsx5\") pod \"cilium-j9v7d\" (UID: \"2f628668-03a9-4cc6-8e97-285241c99d8e\") " pod="kube-system/cilium-j9v7d"
Feb 12 21:55:06.131160 kubelet[2960]: I0212 21:55:06.131042 2960 topology_manager.go:210] "Topology Admit Handler"
Feb 12 21:55:06.179875 kubelet[2960]: I0212 21:55:06.179819 2960 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/103104ca-5420-4915-8ff6-15f792c97e6c-cilium-config-path\") pod \"cilium-operator-f59cbd8c6-vrtdn\" (UID: \"103104ca-5420-4915-8ff6-15f792c97e6c\") " pod="kube-system/cilium-operator-f59cbd8c6-vrtdn"
Feb 12 21:55:06.180159 kubelet[2960]: I0212 21:55:06.180146 2960 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jtzn7\" (UniqueName: \"kubernetes.io/projected/103104ca-5420-4915-8ff6-15f792c97e6c-kube-api-access-jtzn7\") pod \"cilium-operator-f59cbd8c6-vrtdn\" (UID: \"103104ca-5420-4915-8ff6-15f792c97e6c\") " pod="kube-system/cilium-operator-f59cbd8c6-vrtdn"
Feb 12 21:55:06.972790 kubelet[2960]: E0212 21:55:06.972749 2960 configmap.go:199] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition
Feb 12 21:55:06.973338 kubelet[2960]: E0212 21:55:06.972864 2960 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/bd29c4dd-c02b-4daf-83b9-4221797f85cd-kube-proxy podName:bd29c4dd-c02b-4daf-83b9-4221797f85cd nodeName:}" failed. No retries permitted until 2024-02-12 21:55:07.472839758 +0000 UTC m=+14.356195155 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/bd29c4dd-c02b-4daf-83b9-4221797f85cd-kube-proxy") pod "kube-proxy-hsm7s" (UID: "bd29c4dd-c02b-4daf-83b9-4221797f85cd") : failed to sync configmap cache: timed out waiting for the condition
Feb 12 21:55:06.988991 kubelet[2960]: E0212 21:55:06.988945 2960 projected.go:292] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Feb 12 21:55:06.988991 kubelet[2960]: E0212 21:55:06.988991 2960 projected.go:198] Error preparing data for projected volume kube-api-access-7d9gq for pod kube-system/kube-proxy-hsm7s: failed to sync configmap cache: timed out waiting for the condition
Feb 12 21:55:06.989215 kubelet[2960]: E0212 21:55:06.989085 2960 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/bd29c4dd-c02b-4daf-83b9-4221797f85cd-kube-api-access-7d9gq podName:bd29c4dd-c02b-4daf-83b9-4221797f85cd nodeName:}" failed. No retries permitted until 2024-02-12 21:55:07.489061086 +0000 UTC m=+14.372416481 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-7d9gq" (UniqueName: "kubernetes.io/projected/bd29c4dd-c02b-4daf-83b9-4221797f85cd-kube-api-access-7d9gq") pod "kube-proxy-hsm7s" (UID: "bd29c4dd-c02b-4daf-83b9-4221797f85cd") : failed to sync configmap cache: timed out waiting for the condition
Feb 12 21:55:07.109847 kubelet[2960]: E0212 21:55:07.109790 2960 projected.go:292] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Feb 12 21:55:07.109847 kubelet[2960]: E0212 21:55:07.109851 2960 projected.go:198] Error preparing data for projected volume kube-api-access-xlsx5 for pod kube-system/cilium-j9v7d: failed to sync configmap cache: timed out waiting for the condition
Feb 12 21:55:07.110068 kubelet[2960]: E0212 21:55:07.109925 2960 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/2f628668-03a9-4cc6-8e97-285241c99d8e-kube-api-access-xlsx5 podName:2f628668-03a9-4cc6-8e97-285241c99d8e nodeName:}" failed. No retries permitted until 2024-02-12 21:55:07.609902244 +0000 UTC m=+14.493257645 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-xlsx5" (UniqueName: "kubernetes.io/projected/2f628668-03a9-4cc6-8e97-285241c99d8e-kube-api-access-xlsx5") pod "cilium-j9v7d" (UID: "2f628668-03a9-4cc6-8e97-285241c99d8e") : failed to sync configmap cache: timed out waiting for the condition
Feb 12 21:55:07.636653 env[1721]: time="2024-02-12T21:55:07.636598685Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-f59cbd8c6-vrtdn,Uid:103104ca-5420-4915-8ff6-15f792c97e6c,Namespace:kube-system,Attempt:0,}"
Feb 12 21:55:07.639874 env[1721]: time="2024-02-12T21:55:07.639835811Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-hsm7s,Uid:bd29c4dd-c02b-4daf-83b9-4221797f85cd,Namespace:kube-system,Attempt:0,}"
Feb 12 21:55:07.687769 env[1721]: time="2024-02-12T21:55:07.687681014Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 12 21:55:07.687971 env[1721]: time="2024-02-12T21:55:07.687747265Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 12 21:55:07.687971 env[1721]: time="2024-02-12T21:55:07.687762631Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 12 21:55:07.687971 env[1721]: time="2024-02-12T21:55:07.687926096Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/82ecc84a6480240c7c6afd0fbac403cd8795cc790d9480af84e7a204b69fdc5f pid=3068 runtime=io.containerd.runc.v2
Feb 12 21:55:07.690007 env[1721]: time="2024-02-12T21:55:07.689841165Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 12 21:55:07.690007 env[1721]: time="2024-02-12T21:55:07.689983584Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 12 21:55:07.690449 env[1721]: time="2024-02-12T21:55:07.690404043Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 12 21:55:07.690817 env[1721]: time="2024-02-12T21:55:07.690768907Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8581cca39bd9181900b2b6345ed93ddd764e90a8aa192765fb7099467ec1f2bd pid=3076 runtime=io.containerd.runc.v2
Feb 12 21:55:07.789250 env[1721]: time="2024-02-12T21:55:07.789200153Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-hsm7s,Uid:bd29c4dd-c02b-4daf-83b9-4221797f85cd,Namespace:kube-system,Attempt:0,} returns sandbox id \"8581cca39bd9181900b2b6345ed93ddd764e90a8aa192765fb7099467ec1f2bd\""
Feb 12 21:55:07.793452 env[1721]: time="2024-02-12T21:55:07.793377786Z" level=info msg="CreateContainer within sandbox \"8581cca39bd9181900b2b6345ed93ddd764e90a8aa192765fb7099467ec1f2bd\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Feb 12 21:55:07.821662 env[1721]: time="2024-02-12T21:55:07.821617993Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-f59cbd8c6-vrtdn,Uid:103104ca-5420-4915-8ff6-15f792c97e6c,Namespace:kube-system,Attempt:0,} returns sandbox id \"82ecc84a6480240c7c6afd0fbac403cd8795cc790d9480af84e7a204b69fdc5f\""
Feb 12 21:55:07.825453 env[1721]: time="2024-02-12T21:55:07.825392449Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Feb 12 21:55:07.841579 env[1721]: time="2024-02-12T21:55:07.841536886Z" level=info msg="CreateContainer within sandbox
\"8581cca39bd9181900b2b6345ed93ddd764e90a8aa192765fb7099467ec1f2bd\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"70edbd093ce5500e327307981feea7c25ae073ab5eedf2969ff8ef83b445b6aa\"" Feb 12 21:55:07.844600 env[1721]: time="2024-02-12T21:55:07.844550306Z" level=info msg="StartContainer for \"70edbd093ce5500e327307981feea7c25ae073ab5eedf2969ff8ef83b445b6aa\"" Feb 12 21:55:07.923113 env[1721]: time="2024-02-12T21:55:07.923059198Z" level=info msg="StartContainer for \"70edbd093ce5500e327307981feea7c25ae073ab5eedf2969ff8ef83b445b6aa\" returns successfully" Feb 12 21:55:07.983201 env[1721]: time="2024-02-12T21:55:07.983159098Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-j9v7d,Uid:2f628668-03a9-4cc6-8e97-285241c99d8e,Namespace:kube-system,Attempt:0,}" Feb 12 21:55:08.049409 env[1721]: time="2024-02-12T21:55:08.048936549Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 21:55:08.049609 env[1721]: time="2024-02-12T21:55:08.049443728Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 21:55:08.049609 env[1721]: time="2024-02-12T21:55:08.049486418Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 21:55:08.049911 env[1721]: time="2024-02-12T21:55:08.049703928Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2ecb878362ac26f51d5e793dc210f725cd88f0008f28e84a057c37fc71c2065b pid=3183 runtime=io.containerd.runc.v2 Feb 12 21:55:08.132381 env[1721]: time="2024-02-12T21:55:08.132333170Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-j9v7d,Uid:2f628668-03a9-4cc6-8e97-285241c99d8e,Namespace:kube-system,Attempt:0,} returns sandbox id \"2ecb878362ac26f51d5e793dc210f725cd88f0008f28e84a057c37fc71c2065b\"" Feb 12 21:55:09.000332 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4046126626.mount: Deactivated successfully. Feb 12 21:55:10.414894 env[1721]: time="2024-02-12T21:55:10.414841183Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 21:55:10.418707 env[1721]: time="2024-02-12T21:55:10.418644541Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 21:55:10.421598 env[1721]: time="2024-02-12T21:55:10.421558342Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 21:55:10.423036 env[1721]: time="2024-02-12T21:55:10.422990037Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" 
Feb 12 21:55:10.427808 env[1721]: time="2024-02-12T21:55:10.427765135Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Feb 12 21:55:10.430559 env[1721]: time="2024-02-12T21:55:10.430516280Z" level=info msg="CreateContainer within sandbox \"82ecc84a6480240c7c6afd0fbac403cd8795cc790d9480af84e7a204b69fdc5f\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Feb 12 21:55:10.455142 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1965039491.mount: Deactivated successfully. Feb 12 21:55:10.484401 env[1721]: time="2024-02-12T21:55:10.484340206Z" level=info msg="CreateContainer within sandbox \"82ecc84a6480240c7c6afd0fbac403cd8795cc790d9480af84e7a204b69fdc5f\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"013acb562d4735d52371ffc12d5fc97f0728abec7af449911584eeb71548eaa5\"" Feb 12 21:55:10.489617 env[1721]: time="2024-02-12T21:55:10.487425910Z" level=info msg="StartContainer for \"013acb562d4735d52371ffc12d5fc97f0728abec7af449911584eeb71548eaa5\"" Feb 12 21:55:10.603289 env[1721]: time="2024-02-12T21:55:10.599350605Z" level=info msg="StartContainer for \"013acb562d4735d52371ffc12d5fc97f0728abec7af449911584eeb71548eaa5\" returns successfully" Feb 12 21:55:10.723939 kubelet[2960]: I0212 21:55:10.723824 2960 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-hsm7s" podStartSLOduration=5.722759595 pod.CreationTimestamp="2024-02-12 21:55:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 21:55:08.710649943 +0000 UTC m=+15.594005344" watchObservedRunningTime="2024-02-12 21:55:10.722759595 +0000 UTC m=+17.606114992" Feb 12 21:55:17.593901 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3692849241.mount: Deactivated successfully. 
Feb 12 21:55:21.466576 env[1721]: time="2024-02-12T21:55:21.466505627Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 21:55:21.475518 env[1721]: time="2024-02-12T21:55:21.475476339Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 21:55:21.485301 env[1721]: time="2024-02-12T21:55:21.485241576Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 21:55:21.485886 env[1721]: time="2024-02-12T21:55:21.485849277Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Feb 12 21:55:21.498334 env[1721]: time="2024-02-12T21:55:21.498291833Z" level=info msg="CreateContainer within sandbox \"2ecb878362ac26f51d5e793dc210f725cd88f0008f28e84a057c37fc71c2065b\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 12 21:55:21.518014 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount208019195.mount: Deactivated successfully. Feb 12 21:55:21.528701 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2707969236.mount: Deactivated successfully. 
Feb 12 21:55:21.532155 env[1721]: time="2024-02-12T21:55:21.532016754Z" level=info msg="CreateContainer within sandbox \"2ecb878362ac26f51d5e793dc210f725cd88f0008f28e84a057c37fc71c2065b\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"49d45a534bd525cad242ad6b532585c50cb0433a88de30b710b6acec3b50da5c\"" Feb 12 21:55:21.533423 env[1721]: time="2024-02-12T21:55:21.533386827Z" level=info msg="StartContainer for \"49d45a534bd525cad242ad6b532585c50cb0433a88de30b710b6acec3b50da5c\"" Feb 12 21:55:21.645597 env[1721]: time="2024-02-12T21:55:21.641682361Z" level=info msg="StartContainer for \"49d45a534bd525cad242ad6b532585c50cb0433a88de30b710b6acec3b50da5c\" returns successfully" Feb 12 21:55:21.851186 kubelet[2960]: I0212 21:55:21.848516 2960 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-f59cbd8c6-vrtdn" podStartSLOduration=-9.22337202100721e+09 pod.CreationTimestamp="2024-02-12 21:55:06 +0000 UTC" firstStartedPulling="2024-02-12 21:55:07.823225963 +0000 UTC m=+14.706581351" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 21:55:10.728852904 +0000 UTC m=+17.612208304" watchObservedRunningTime="2024-02-12 21:55:21.847566469 +0000 UTC m=+28.730921869" Feb 12 21:55:21.887486 env[1721]: time="2024-02-12T21:55:21.887433190Z" level=info msg="shim disconnected" id=49d45a534bd525cad242ad6b532585c50cb0433a88de30b710b6acec3b50da5c Feb 12 21:55:21.887486 env[1721]: time="2024-02-12T21:55:21.887480719Z" level=warning msg="cleaning up after shim disconnected" id=49d45a534bd525cad242ad6b532585c50cb0433a88de30b710b6acec3b50da5c namespace=k8s.io Feb 12 21:55:21.887486 env[1721]: time="2024-02-12T21:55:21.887493343Z" level=info msg="cleaning up dead shim" Feb 12 21:55:21.907055 env[1721]: time="2024-02-12T21:55:21.907009088Z" level=warning msg="cleanup warnings time=\"2024-02-12T21:55:21Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3413 
runtime=io.containerd.runc.v2\n" Feb 12 21:55:22.511148 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-49d45a534bd525cad242ad6b532585c50cb0433a88de30b710b6acec3b50da5c-rootfs.mount: Deactivated successfully. Feb 12 21:55:22.779206 env[1721]: time="2024-02-12T21:55:22.776071134Z" level=info msg="CreateContainer within sandbox \"2ecb878362ac26f51d5e793dc210f725cd88f0008f28e84a057c37fc71c2065b\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 12 21:55:22.818624 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount276916094.mount: Deactivated successfully. Feb 12 21:55:22.837173 env[1721]: time="2024-02-12T21:55:22.836655295Z" level=info msg="CreateContainer within sandbox \"2ecb878362ac26f51d5e793dc210f725cd88f0008f28e84a057c37fc71c2065b\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"d182657ec9a7a12799d0b8a6c2ef9493ac22b90cb98d15edb0484151bbca2a69\"" Feb 12 21:55:22.841696 env[1721]: time="2024-02-12T21:55:22.841655488Z" level=info msg="StartContainer for \"d182657ec9a7a12799d0b8a6c2ef9493ac22b90cb98d15edb0484151bbca2a69\"" Feb 12 21:55:22.949527 env[1721]: time="2024-02-12T21:55:22.948505333Z" level=info msg="StartContainer for \"d182657ec9a7a12799d0b8a6c2ef9493ac22b90cb98d15edb0484151bbca2a69\" returns successfully" Feb 12 21:55:22.965293 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 12 21:55:22.965975 systemd[1]: Stopped systemd-sysctl.service. Feb 12 21:55:22.966836 systemd[1]: Stopping systemd-sysctl.service... Feb 12 21:55:22.972842 systemd[1]: Starting systemd-sysctl.service... Feb 12 21:55:23.002503 systemd[1]: Finished systemd-sysctl.service. 
Feb 12 21:55:23.026411 env[1721]: time="2024-02-12T21:55:23.026358274Z" level=info msg="shim disconnected" id=d182657ec9a7a12799d0b8a6c2ef9493ac22b90cb98d15edb0484151bbca2a69 Feb 12 21:55:23.026411 env[1721]: time="2024-02-12T21:55:23.026410051Z" level=warning msg="cleaning up after shim disconnected" id=d182657ec9a7a12799d0b8a6c2ef9493ac22b90cb98d15edb0484151bbca2a69 namespace=k8s.io Feb 12 21:55:23.026841 env[1721]: time="2024-02-12T21:55:23.026423055Z" level=info msg="cleaning up dead shim" Feb 12 21:55:23.038878 env[1721]: time="2024-02-12T21:55:23.038199647Z" level=warning msg="cleanup warnings time=\"2024-02-12T21:55:23Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3480 runtime=io.containerd.runc.v2\n" Feb 12 21:55:23.512158 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d182657ec9a7a12799d0b8a6c2ef9493ac22b90cb98d15edb0484151bbca2a69-rootfs.mount: Deactivated successfully. Feb 12 21:55:23.809384 env[1721]: time="2024-02-12T21:55:23.795508955Z" level=info msg="CreateContainer within sandbox \"2ecb878362ac26f51d5e793dc210f725cd88f0008f28e84a057c37fc71c2065b\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 12 21:55:23.846135 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2328056550.mount: Deactivated successfully. Feb 12 21:55:23.861240 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2715054149.mount: Deactivated successfully. 
Feb 12 21:55:23.869871 env[1721]: time="2024-02-12T21:55:23.869743489Z" level=info msg="CreateContainer within sandbox \"2ecb878362ac26f51d5e793dc210f725cd88f0008f28e84a057c37fc71c2065b\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"dcb586ec6fd837cc5205cda66584ef7256e09f9708957fce3f5b7e7a4bd9c833\"" Feb 12 21:55:23.879493 env[1721]: time="2024-02-12T21:55:23.879451483Z" level=info msg="StartContainer for \"dcb586ec6fd837cc5205cda66584ef7256e09f9708957fce3f5b7e7a4bd9c833\"" Feb 12 21:55:23.965733 env[1721]: time="2024-02-12T21:55:23.965618978Z" level=info msg="StartContainer for \"dcb586ec6fd837cc5205cda66584ef7256e09f9708957fce3f5b7e7a4bd9c833\" returns successfully" Feb 12 21:55:24.055983 env[1721]: time="2024-02-12T21:55:24.055927324Z" level=info msg="shim disconnected" id=dcb586ec6fd837cc5205cda66584ef7256e09f9708957fce3f5b7e7a4bd9c833 Feb 12 21:55:24.055983 env[1721]: time="2024-02-12T21:55:24.055981308Z" level=warning msg="cleaning up after shim disconnected" id=dcb586ec6fd837cc5205cda66584ef7256e09f9708957fce3f5b7e7a4bd9c833 namespace=k8s.io Feb 12 21:55:24.056498 env[1721]: time="2024-02-12T21:55:24.055993672Z" level=info msg="cleaning up dead shim" Feb 12 21:55:24.065717 env[1721]: time="2024-02-12T21:55:24.065589031Z" level=warning msg="cleanup warnings time=\"2024-02-12T21:55:24Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3538 runtime=io.containerd.runc.v2\n" Feb 12 21:55:24.803839 env[1721]: time="2024-02-12T21:55:24.803793134Z" level=info msg="CreateContainer within sandbox \"2ecb878362ac26f51d5e793dc210f725cd88f0008f28e84a057c37fc71c2065b\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 12 21:55:24.830456 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2520565097.mount: Deactivated successfully. Feb 12 21:55:24.852282 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1751209813.mount: Deactivated successfully. 
Feb 12 21:55:24.860828 env[1721]: time="2024-02-12T21:55:24.860777059Z" level=info msg="CreateContainer within sandbox \"2ecb878362ac26f51d5e793dc210f725cd88f0008f28e84a057c37fc71c2065b\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"8d6dd3cc035d221af8aa5135e5f76c7d2638cf5becba714687a30181024edf63\"" Feb 12 21:55:24.863790 env[1721]: time="2024-02-12T21:55:24.862289414Z" level=info msg="StartContainer for \"8d6dd3cc035d221af8aa5135e5f76c7d2638cf5becba714687a30181024edf63\"" Feb 12 21:55:24.960716 env[1721]: time="2024-02-12T21:55:24.960679341Z" level=info msg="StartContainer for \"8d6dd3cc035d221af8aa5135e5f76c7d2638cf5becba714687a30181024edf63\" returns successfully" Feb 12 21:55:24.998624 env[1721]: time="2024-02-12T21:55:24.998569516Z" level=info msg="shim disconnected" id=8d6dd3cc035d221af8aa5135e5f76c7d2638cf5becba714687a30181024edf63 Feb 12 21:55:24.998624 env[1721]: time="2024-02-12T21:55:24.998627076Z" level=warning msg="cleaning up after shim disconnected" id=8d6dd3cc035d221af8aa5135e5f76c7d2638cf5becba714687a30181024edf63 namespace=k8s.io Feb 12 21:55:24.998993 env[1721]: time="2024-02-12T21:55:24.998638898Z" level=info msg="cleaning up dead shim" Feb 12 21:55:25.013169 env[1721]: time="2024-02-12T21:55:25.013118967Z" level=warning msg="cleanup warnings time=\"2024-02-12T21:55:25Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3594 runtime=io.containerd.runc.v2\n" Feb 12 21:55:25.811308 env[1721]: time="2024-02-12T21:55:25.806750134Z" level=info msg="CreateContainer within sandbox \"2ecb878362ac26f51d5e793dc210f725cd88f0008f28e84a057c37fc71c2065b\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 12 21:55:25.829774 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount976212071.mount: Deactivated successfully. 
Feb 12 21:55:25.846125 env[1721]: time="2024-02-12T21:55:25.846075598Z" level=info msg="CreateContainer within sandbox \"2ecb878362ac26f51d5e793dc210f725cd88f0008f28e84a057c37fc71c2065b\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"56a3723dc17f8d1c5b465cca550206cc8bc618f5a10f8299dfd5cfcf199a73d3\"" Feb 12 21:55:25.846970 env[1721]: time="2024-02-12T21:55:25.846937982Z" level=info msg="StartContainer for \"56a3723dc17f8d1c5b465cca550206cc8bc618f5a10f8299dfd5cfcf199a73d3\"" Feb 12 21:55:25.921518 env[1721]: time="2024-02-12T21:55:25.921466499Z" level=info msg="StartContainer for \"56a3723dc17f8d1c5b465cca550206cc8bc618f5a10f8299dfd5cfcf199a73d3\" returns successfully" Feb 12 21:55:26.096364 kubelet[2960]: I0212 21:55:26.096126 2960 kubelet_node_status.go:493] "Fast updating node status as it just became ready" Feb 12 21:55:26.139464 kubelet[2960]: I0212 21:55:26.139426 2960 topology_manager.go:210] "Topology Admit Handler" Feb 12 21:55:26.144415 kubelet[2960]: I0212 21:55:26.142553 2960 topology_manager.go:210] "Topology Admit Handler" Feb 12 21:55:26.244208 kubelet[2960]: I0212 21:55:26.244170 2960 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wt4dk\" (UniqueName: \"kubernetes.io/projected/e33627f4-231f-4b4b-a8d8-31cc5d9d2634-kube-api-access-wt4dk\") pod \"coredns-787d4945fb-r2nbb\" (UID: \"e33627f4-231f-4b4b-a8d8-31cc5d9d2634\") " pod="kube-system/coredns-787d4945fb-r2nbb" Feb 12 21:55:26.244478 kubelet[2960]: I0212 21:55:26.244458 2960 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e33627f4-231f-4b4b-a8d8-31cc5d9d2634-config-volume\") pod \"coredns-787d4945fb-r2nbb\" (UID: \"e33627f4-231f-4b4b-a8d8-31cc5d9d2634\") " pod="kube-system/coredns-787d4945fb-r2nbb" Feb 12 21:55:26.244572 kubelet[2960]: I0212 21:55:26.244499 2960 reconciler_common.go:253] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0906e58e-9342-4660-9e0c-5bf0413a156a-config-volume\") pod \"coredns-787d4945fb-ssbhg\" (UID: \"0906e58e-9342-4660-9e0c-5bf0413a156a\") " pod="kube-system/coredns-787d4945fb-ssbhg" Feb 12 21:55:26.244572 kubelet[2960]: I0212 21:55:26.244542 2960 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rxf75\" (UniqueName: \"kubernetes.io/projected/0906e58e-9342-4660-9e0c-5bf0413a156a-kube-api-access-rxf75\") pod \"coredns-787d4945fb-ssbhg\" (UID: \"0906e58e-9342-4660-9e0c-5bf0413a156a\") " pod="kube-system/coredns-787d4945fb-ssbhg" Feb 12 21:55:26.455575 env[1721]: time="2024-02-12T21:55:26.455522903Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-ssbhg,Uid:0906e58e-9342-4660-9e0c-5bf0413a156a,Namespace:kube-system,Attempt:0,}" Feb 12 21:55:26.456405 env[1721]: time="2024-02-12T21:55:26.456371018Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-r2nbb,Uid:e33627f4-231f-4b4b-a8d8-31cc5d9d2634,Namespace:kube-system,Attempt:0,}" Feb 12 21:55:26.839347 kubelet[2960]: I0212 21:55:26.839068 2960 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-j9v7d" podStartSLOduration=-9.223372015015757e+09 pod.CreationTimestamp="2024-02-12 21:55:05 +0000 UTC" firstStartedPulling="2024-02-12 21:55:08.134421247 +0000 UTC m=+15.017776626" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 21:55:26.836708453 +0000 UTC m=+33.720063853" watchObservedRunningTime="2024-02-12 21:55:26.839019307 +0000 UTC m=+33.722374707" Feb 12 21:55:28.417479 systemd-networkd[1507]: cilium_host: Link UP Feb 12 21:55:28.422466 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready Feb 12 21:55:28.423946 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Feb 12 
21:55:28.422652 systemd-networkd[1507]: cilium_net: Link UP Feb 12 21:55:28.423396 systemd-networkd[1507]: cilium_net: Gained carrier Feb 12 21:55:28.423608 systemd-networkd[1507]: cilium_host: Gained carrier Feb 12 21:55:28.425198 (udev-worker)[3754]: Network interface NamePolicy= disabled on kernel command line. Feb 12 21:55:28.426828 (udev-worker)[3717]: Network interface NamePolicy= disabled on kernel command line. Feb 12 21:55:28.598443 systemd-networkd[1507]: cilium_net: Gained IPv6LL Feb 12 21:55:28.632961 (udev-worker)[3776]: Network interface NamePolicy= disabled on kernel command line. Feb 12 21:55:28.641005 systemd-networkd[1507]: cilium_vxlan: Link UP Feb 12 21:55:28.641013 systemd-networkd[1507]: cilium_vxlan: Gained carrier Feb 12 21:55:29.227290 kernel: NET: Registered PF_ALG protocol family Feb 12 21:55:29.382418 systemd-networkd[1507]: cilium_host: Gained IPv6LL Feb 12 21:55:29.958429 systemd-networkd[1507]: cilium_vxlan: Gained IPv6LL Feb 12 21:55:30.170204 systemd-networkd[1507]: lxc_health: Link UP Feb 12 21:55:30.192635 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Feb 12 21:55:30.191632 systemd-networkd[1507]: lxc_health: Gained carrier Feb 12 21:55:30.601006 systemd-networkd[1507]: lxcc10c80226aca: Link UP Feb 12 21:55:30.617536 systemd-networkd[1507]: lxc4f260f67882b: Link UP Feb 12 21:55:30.624328 kernel: eth0: renamed from tmp2494c Feb 12 21:55:30.630287 kernel: eth0: renamed from tmpca207 Feb 12 21:55:30.637168 systemd-networkd[1507]: lxcc10c80226aca: Gained carrier Feb 12 21:55:30.643289 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcc10c80226aca: link becomes ready Feb 12 21:55:30.646284 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc4f260f67882b: link becomes ready Feb 12 21:55:30.646692 systemd-networkd[1507]: lxc4f260f67882b: Gained carrier Feb 12 21:55:30.663288 (udev-worker)[3780]: Network interface NamePolicy= disabled on kernel command line. 
Feb 12 21:55:31.720579 systemd-networkd[1507]: lxc4f260f67882b: Gained IPv6LL Feb 12 21:55:31.942511 systemd-networkd[1507]: lxcc10c80226aca: Gained IPv6LL Feb 12 21:55:32.199387 systemd-networkd[1507]: lxc_health: Gained IPv6LL Feb 12 21:55:36.635003 env[1721]: time="2024-02-12T21:55:36.623604848Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 21:55:36.635003 env[1721]: time="2024-02-12T21:55:36.623669989Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 21:55:36.635003 env[1721]: time="2024-02-12T21:55:36.623686630Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 21:55:36.635003 env[1721]: time="2024-02-12T21:55:36.623922438Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2494c61d6ad9ede590f4a4e418db78d4c3bbfe769e29b7c328fb32de90ab2c3f pid=4141 runtime=io.containerd.runc.v2 Feb 12 21:55:36.635003 env[1721]: time="2024-02-12T21:55:36.627585599Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 21:55:36.635003 env[1721]: time="2024-02-12T21:55:36.627681248Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 21:55:36.635003 env[1721]: time="2024-02-12T21:55:36.627713671Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 21:55:36.635003 env[1721]: time="2024-02-12T21:55:36.627980082Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ca20790decd9c200db14fb99ccde258aed9eb60e9c3b8c964c6bc647a8f9d1ad pid=4143 runtime=io.containerd.runc.v2 Feb 12 21:55:36.712866 systemd[1]: run-containerd-runc-k8s.io-ca20790decd9c200db14fb99ccde258aed9eb60e9c3b8c964c6bc647a8f9d1ad-runc.9AaY74.mount: Deactivated successfully. Feb 12 21:55:36.784031 env[1721]: time="2024-02-12T21:55:36.783983323Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-r2nbb,Uid:e33627f4-231f-4b4b-a8d8-31cc5d9d2634,Namespace:kube-system,Attempt:0,} returns sandbox id \"2494c61d6ad9ede590f4a4e418db78d4c3bbfe769e29b7c328fb32de90ab2c3f\"" Feb 12 21:55:36.796296 env[1721]: time="2024-02-12T21:55:36.796220093Z" level=info msg="CreateContainer within sandbox \"2494c61d6ad9ede590f4a4e418db78d4c3bbfe769e29b7c328fb32de90ab2c3f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 12 21:55:36.846633 env[1721]: time="2024-02-12T21:55:36.846564286Z" level=info msg="CreateContainer within sandbox \"2494c61d6ad9ede590f4a4e418db78d4c3bbfe769e29b7c328fb32de90ab2c3f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9e503d8ac66b120f3d49377a658013d53d342c17dab06c07aa78fddf80cdc4a9\"" Feb 12 21:55:36.847364 env[1721]: time="2024-02-12T21:55:36.847328555Z" level=info msg="StartContainer for \"9e503d8ac66b120f3d49377a658013d53d342c17dab06c07aa78fddf80cdc4a9\"" Feb 12 21:55:36.948191 env[1721]: time="2024-02-12T21:55:36.948144212Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-ssbhg,Uid:0906e58e-9342-4660-9e0c-5bf0413a156a,Namespace:kube-system,Attempt:0,} returns sandbox id \"ca20790decd9c200db14fb99ccde258aed9eb60e9c3b8c964c6bc647a8f9d1ad\"" Feb 12 21:55:36.960916 env[1721]: time="2024-02-12T21:55:36.960873664Z" level=info 
msg="CreateContainer within sandbox \"ca20790decd9c200db14fb99ccde258aed9eb60e9c3b8c964c6bc647a8f9d1ad\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Feb 12 21:55:36.990077 env[1721]: time="2024-02-12T21:55:36.990014196Z" level=info msg="CreateContainer within sandbox \"ca20790decd9c200db14fb99ccde258aed9eb60e9c3b8c964c6bc647a8f9d1ad\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ac7198ad8bb16d1827d4b4d7d512d02d5ac1c81345049436a904a4528691a93d\""
Feb 12 21:55:37.001772 env[1721]: time="2024-02-12T21:55:37.001728893Z" level=info msg="StartContainer for \"ac7198ad8bb16d1827d4b4d7d512d02d5ac1c81345049436a904a4528691a93d\""
Feb 12 21:55:37.023314 env[1721]: time="2024-02-12T21:55:37.023247680Z" level=info msg="StartContainer for \"9e503d8ac66b120f3d49377a658013d53d342c17dab06c07aa78fddf80cdc4a9\" returns successfully"
Feb 12 21:55:37.089653 env[1721]: time="2024-02-12T21:55:37.089601817Z" level=info msg="StartContainer for \"ac7198ad8bb16d1827d4b4d7d512d02d5ac1c81345049436a904a4528691a93d\" returns successfully"
Feb 12 21:55:37.860858 kubelet[2960]: I0212 21:55:37.860829 2960 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-787d4945fb-r2nbb" podStartSLOduration=31.860783683 pod.CreationTimestamp="2024-02-12 21:55:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 21:55:37.860438688 +0000 UTC m=+44.743794085" watchObservedRunningTime="2024-02-12 21:55:37.860783683 +0000 UTC m=+44.744139083"
Feb 12 21:55:37.888275 kubelet[2960]: I0212 21:55:37.888227 2960 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-787d4945fb-ssbhg" podStartSLOduration=31.888182433 pod.CreationTimestamp="2024-02-12 21:55:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 21:55:37.874760125 +0000 UTC m=+44.758115525" watchObservedRunningTime="2024-02-12 21:55:37.888182433 +0000 UTC m=+44.771537822"
Feb 12 21:55:45.211759 amazon-ssm-agent[1690]: 2024-02-12 21:55:45 INFO [HealthCheck] HealthCheck reporting agent health.
Feb 12 21:55:45.310051 systemd[1]: Started sshd@5-172.31.30.174:22-139.178.89.65:35300.service.
Feb 12 21:55:45.501253 sshd[4341]: Accepted publickey for core from 139.178.89.65 port 35300 ssh2: RSA SHA256:BLc8w5wGiofCozMWb4UlfDNGWSz58WJcVew2e99GstU
Feb 12 21:55:45.504168 sshd[4341]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 21:55:45.518459 systemd[1]: Started session-6.scope.
Feb 12 21:55:45.520685 systemd-logind[1709]: New session 6 of user core.
Feb 12 21:55:45.985527 sshd[4341]: pam_unix(sshd:session): session closed for user core
Feb 12 21:55:45.989543 systemd[1]: sshd@5-172.31.30.174:22-139.178.89.65:35300.service: Deactivated successfully.
Feb 12 21:55:45.990963 systemd-logind[1709]: Session 6 logged out. Waiting for processes to exit.
Feb 12 21:55:45.991923 systemd[1]: session-6.scope: Deactivated successfully.
Feb 12 21:55:45.994553 systemd-logind[1709]: Removed session 6.
Feb 12 21:55:51.013494 systemd[1]: Started sshd@6-172.31.30.174:22-139.178.89.65:57096.service.
Feb 12 21:55:51.178847 sshd[4356]: Accepted publickey for core from 139.178.89.65 port 57096 ssh2: RSA SHA256:BLc8w5wGiofCozMWb4UlfDNGWSz58WJcVew2e99GstU
Feb 12 21:55:51.180344 sshd[4356]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 21:55:51.188705 systemd[1]: Started session-7.scope.
Feb 12 21:55:51.189048 systemd-logind[1709]: New session 7 of user core.
Feb 12 21:55:51.414795 sshd[4356]: pam_unix(sshd:session): session closed for user core
Feb 12 21:55:51.418580 systemd[1]: sshd@6-172.31.30.174:22-139.178.89.65:57096.service: Deactivated successfully.
Feb 12 21:55:51.420039 systemd[1]: session-7.scope: Deactivated successfully.
Feb 12 21:55:51.421093 systemd-logind[1709]: Session 7 logged out. Waiting for processes to exit.
Feb 12 21:55:51.422310 systemd-logind[1709]: Removed session 7.
Feb 12 21:55:56.441521 systemd[1]: Started sshd@7-172.31.30.174:22-139.178.89.65:57106.service.
Feb 12 21:55:56.608415 sshd[4372]: Accepted publickey for core from 139.178.89.65 port 57106 ssh2: RSA SHA256:BLc8w5wGiofCozMWb4UlfDNGWSz58WJcVew2e99GstU
Feb 12 21:55:56.612550 sshd[4372]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 21:55:56.630817 systemd-logind[1709]: New session 8 of user core.
Feb 12 21:55:56.630892 systemd[1]: Started session-8.scope.
Feb 12 21:55:56.845126 sshd[4372]: pam_unix(sshd:session): session closed for user core
Feb 12 21:55:56.850820 systemd[1]: sshd@7-172.31.30.174:22-139.178.89.65:57106.service: Deactivated successfully.
Feb 12 21:55:56.853154 systemd-logind[1709]: Session 8 logged out. Waiting for processes to exit.
Feb 12 21:55:56.853364 systemd[1]: session-8.scope: Deactivated successfully.
Feb 12 21:55:56.855907 systemd-logind[1709]: Removed session 8.
Feb 12 21:56:01.869469 systemd[1]: Started sshd@8-172.31.30.174:22-139.178.89.65:51768.service.
Feb 12 21:56:02.055369 sshd[4386]: Accepted publickey for core from 139.178.89.65 port 51768 ssh2: RSA SHA256:BLc8w5wGiofCozMWb4UlfDNGWSz58WJcVew2e99GstU
Feb 12 21:56:02.058427 sshd[4386]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 21:56:02.066572 systemd[1]: Started session-9.scope.
Feb 12 21:56:02.066572 systemd-logind[1709]: New session 9 of user core.
Feb 12 21:56:02.399109 sshd[4386]: pam_unix(sshd:session): session closed for user core
Feb 12 21:56:02.402523 systemd[1]: sshd@8-172.31.30.174:22-139.178.89.65:51768.service: Deactivated successfully.
Feb 12 21:56:02.404384 systemd[1]: session-9.scope: Deactivated successfully.
Feb 12 21:56:02.404998 systemd-logind[1709]: Session 9 logged out. Waiting for processes to exit.
Feb 12 21:56:02.406696 systemd-logind[1709]: Removed session 9.
Feb 12 21:56:07.425602 systemd[1]: Started sshd@9-172.31.30.174:22-139.178.89.65:51778.service.
Feb 12 21:56:07.598670 sshd[4401]: Accepted publickey for core from 139.178.89.65 port 51778 ssh2: RSA SHA256:BLc8w5wGiofCozMWb4UlfDNGWSz58WJcVew2e99GstU
Feb 12 21:56:07.600152 sshd[4401]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 21:56:07.607000 systemd[1]: Started session-10.scope.
Feb 12 21:56:07.607308 systemd-logind[1709]: New session 10 of user core.
Feb 12 21:56:07.807215 sshd[4401]: pam_unix(sshd:session): session closed for user core
Feb 12 21:56:07.811160 systemd[1]: sshd@9-172.31.30.174:22-139.178.89.65:51778.service: Deactivated successfully.
Feb 12 21:56:07.812412 systemd-logind[1709]: Session 10 logged out. Waiting for processes to exit.
Feb 12 21:56:07.812476 systemd[1]: session-10.scope: Deactivated successfully.
Feb 12 21:56:07.814930 systemd-logind[1709]: Removed session 10.
Feb 12 21:56:12.831806 systemd[1]: Started sshd@10-172.31.30.174:22-139.178.89.65:40430.service.
Feb 12 21:56:12.988306 sshd[4418]: Accepted publickey for core from 139.178.89.65 port 40430 ssh2: RSA SHA256:BLc8w5wGiofCozMWb4UlfDNGWSz58WJcVew2e99GstU
Feb 12 21:56:12.990157 sshd[4418]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 21:56:12.996029 systemd[1]: Started session-11.scope.
Feb 12 21:56:12.996514 systemd-logind[1709]: New session 11 of user core.
Feb 12 21:56:13.197644 sshd[4418]: pam_unix(sshd:session): session closed for user core
Feb 12 21:56:13.201151 systemd[1]: sshd@10-172.31.30.174:22-139.178.89.65:40430.service: Deactivated successfully.
Feb 12 21:56:13.202587 systemd[1]: session-11.scope: Deactivated successfully.
Feb 12 21:56:13.202767 systemd-logind[1709]: Session 11 logged out. Waiting for processes to exit.
Feb 12 21:56:13.204541 systemd-logind[1709]: Removed session 11.
Feb 12 21:56:18.224598 systemd[1]: Started sshd@11-172.31.30.174:22-139.178.89.65:34408.service.
Feb 12 21:56:18.407068 sshd[4434]: Accepted publickey for core from 139.178.89.65 port 34408 ssh2: RSA SHA256:BLc8w5wGiofCozMWb4UlfDNGWSz58WJcVew2e99GstU
Feb 12 21:56:18.408788 sshd[4434]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 21:56:18.414533 systemd[1]: Started session-12.scope.
Feb 12 21:56:18.415538 systemd-logind[1709]: New session 12 of user core.
Feb 12 21:56:18.613798 sshd[4434]: pam_unix(sshd:session): session closed for user core
Feb 12 21:56:18.618397 systemd[1]: sshd@11-172.31.30.174:22-139.178.89.65:34408.service: Deactivated successfully.
Feb 12 21:56:18.619583 systemd[1]: session-12.scope: Deactivated successfully.
Feb 12 21:56:18.620082 systemd-logind[1709]: Session 12 logged out. Waiting for processes to exit.
Feb 12 21:56:18.622528 systemd-logind[1709]: Removed session 12.
Feb 12 21:56:23.638743 systemd[1]: Started sshd@12-172.31.30.174:22-139.178.89.65:34416.service.
Feb 12 21:56:23.798924 sshd[4448]: Accepted publickey for core from 139.178.89.65 port 34416 ssh2: RSA SHA256:BLc8w5wGiofCozMWb4UlfDNGWSz58WJcVew2e99GstU
Feb 12 21:56:23.800528 sshd[4448]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 21:56:23.808047 systemd[1]: Started session-13.scope.
Feb 12 21:56:23.808857 systemd-logind[1709]: New session 13 of user core.
Feb 12 21:56:24.010762 sshd[4448]: pam_unix(sshd:session): session closed for user core
Feb 12 21:56:24.014375 systemd[1]: sshd@12-172.31.30.174:22-139.178.89.65:34416.service: Deactivated successfully.
Feb 12 21:56:24.016128 systemd[1]: session-13.scope: Deactivated successfully.
Feb 12 21:56:24.020067 systemd-logind[1709]: Session 13 logged out. Waiting for processes to exit.
Feb 12 21:56:24.021322 systemd-logind[1709]: Removed session 13.
Feb 12 21:56:24.038314 systemd[1]: Started sshd@13-172.31.30.174:22-139.178.89.65:34428.service.
Feb 12 21:56:24.218661 sshd[4462]: Accepted publickey for core from 139.178.89.65 port 34428 ssh2: RSA SHA256:BLc8w5wGiofCozMWb4UlfDNGWSz58WJcVew2e99GstU
Feb 12 21:56:24.221068 sshd[4462]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 21:56:24.228343 systemd-logind[1709]: New session 14 of user core.
Feb 12 21:56:24.228857 systemd[1]: Started session-14.scope.
Feb 12 21:56:25.845316 sshd[4462]: pam_unix(sshd:session): session closed for user core
Feb 12 21:56:25.860904 systemd[1]: sshd@13-172.31.30.174:22-139.178.89.65:34428.service: Deactivated successfully.
Feb 12 21:56:25.862709 systemd[1]: session-14.scope: Deactivated successfully.
Feb 12 21:56:25.862741 systemd-logind[1709]: Session 14 logged out. Waiting for processes to exit.
Feb 12 21:56:25.864696 systemd-logind[1709]: Removed session 14.
Feb 12 21:56:25.872119 systemd[1]: Started sshd@14-172.31.30.174:22-139.178.89.65:34430.service.
Feb 12 21:56:26.086783 sshd[4473]: Accepted publickey for core from 139.178.89.65 port 34430 ssh2: RSA SHA256:BLc8w5wGiofCozMWb4UlfDNGWSz58WJcVew2e99GstU
Feb 12 21:56:26.088294 sshd[4473]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 21:56:26.094472 systemd[1]: Started session-15.scope.
Feb 12 21:56:26.094947 systemd-logind[1709]: New session 15 of user core.
Feb 12 21:56:26.428443 sshd[4473]: pam_unix(sshd:session): session closed for user core
Feb 12 21:56:26.432704 systemd[1]: sshd@14-172.31.30.174:22-139.178.89.65:34430.service: Deactivated successfully.
Feb 12 21:56:26.433890 systemd[1]: session-15.scope: Deactivated successfully.
Feb 12 21:56:26.434316 systemd-logind[1709]: Session 15 logged out. Waiting for processes to exit.
Feb 12 21:56:26.435716 systemd-logind[1709]: Removed session 15.
Feb 12 21:56:31.455301 systemd[1]: Started sshd@15-172.31.30.174:22-139.178.89.65:45554.service.
Feb 12 21:56:31.628403 sshd[4489]: Accepted publickey for core from 139.178.89.65 port 45554 ssh2: RSA SHA256:BLc8w5wGiofCozMWb4UlfDNGWSz58WJcVew2e99GstU
Feb 12 21:56:31.630376 sshd[4489]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 21:56:31.636872 systemd[1]: Started session-16.scope.
Feb 12 21:56:31.637466 systemd-logind[1709]: New session 16 of user core.
Feb 12 21:56:31.846139 sshd[4489]: pam_unix(sshd:session): session closed for user core
Feb 12 21:56:31.852003 systemd[1]: sshd@15-172.31.30.174:22-139.178.89.65:45554.service: Deactivated successfully.
Feb 12 21:56:31.854807 systemd[1]: session-16.scope: Deactivated successfully.
Feb 12 21:56:31.855815 systemd-logind[1709]: Session 16 logged out. Waiting for processes to exit.
Feb 12 21:56:31.857629 systemd-logind[1709]: Removed session 16.
Feb 12 21:56:36.874163 systemd[1]: Started sshd@16-172.31.30.174:22-139.178.89.65:45560.service.
Feb 12 21:56:37.061222 sshd[4502]: Accepted publickey for core from 139.178.89.65 port 45560 ssh2: RSA SHA256:BLc8w5wGiofCozMWb4UlfDNGWSz58WJcVew2e99GstU
Feb 12 21:56:37.063232 sshd[4502]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 21:56:37.068972 systemd[1]: Started session-17.scope.
Feb 12 21:56:37.071380 systemd-logind[1709]: New session 17 of user core.
Feb 12 21:56:37.271938 sshd[4502]: pam_unix(sshd:session): session closed for user core
Feb 12 21:56:37.275844 systemd[1]: sshd@16-172.31.30.174:22-139.178.89.65:45560.service: Deactivated successfully.
Feb 12 21:56:37.277602 systemd-logind[1709]: Session 17 logged out. Waiting for processes to exit.
Feb 12 21:56:37.277673 systemd[1]: session-17.scope: Deactivated successfully.
Feb 12 21:56:37.280869 systemd-logind[1709]: Removed session 17.
Feb 12 21:56:37.296975 systemd[1]: Started sshd@17-172.31.30.174:22-139.178.89.65:45566.service.
Feb 12 21:56:37.455868 sshd[4515]: Accepted publickey for core from 139.178.89.65 port 45566 ssh2: RSA SHA256:BLc8w5wGiofCozMWb4UlfDNGWSz58WJcVew2e99GstU
Feb 12 21:56:37.457454 sshd[4515]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 21:56:37.463508 systemd[1]: Started session-18.scope.
Feb 12 21:56:37.463815 systemd-logind[1709]: New session 18 of user core.
Feb 12 21:56:38.383840 sshd[4515]: pam_unix(sshd:session): session closed for user core
Feb 12 21:56:38.389736 systemd[1]: sshd@17-172.31.30.174:22-139.178.89.65:45566.service: Deactivated successfully.
Feb 12 21:56:38.391015 systemd[1]: session-18.scope: Deactivated successfully.
Feb 12 21:56:38.391719 systemd-logind[1709]: Session 18 logged out. Waiting for processes to exit.
Feb 12 21:56:38.392906 systemd-logind[1709]: Removed session 18.
Feb 12 21:56:38.406975 systemd[1]: Started sshd@18-172.31.30.174:22-139.178.89.65:38612.service.
Feb 12 21:56:38.575422 sshd[4525]: Accepted publickey for core from 139.178.89.65 port 38612 ssh2: RSA SHA256:BLc8w5wGiofCozMWb4UlfDNGWSz58WJcVew2e99GstU
Feb 12 21:56:38.577048 sshd[4525]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 21:56:38.584097 systemd[1]: Started session-19.scope.
Feb 12 21:56:38.585076 systemd-logind[1709]: New session 19 of user core.
Feb 12 21:56:40.015975 sshd[4525]: pam_unix(sshd:session): session closed for user core
Feb 12 21:56:40.019800 systemd[1]: sshd@18-172.31.30.174:22-139.178.89.65:38612.service: Deactivated successfully.
Feb 12 21:56:40.021081 systemd[1]: session-19.scope: Deactivated successfully.
Feb 12 21:56:40.023910 systemd-logind[1709]: Session 19 logged out. Waiting for processes to exit.
Feb 12 21:56:40.025373 systemd-logind[1709]: Removed session 19.
Feb 12 21:56:40.042031 systemd[1]: Started sshd@19-172.31.30.174:22-139.178.89.65:38620.service.
Feb 12 21:56:40.220077 sshd[4565]: Accepted publickey for core from 139.178.89.65 port 38620 ssh2: RSA SHA256:BLc8w5wGiofCozMWb4UlfDNGWSz58WJcVew2e99GstU
Feb 12 21:56:40.221635 sshd[4565]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 21:56:40.228546 systemd[1]: Started session-20.scope.
Feb 12 21:56:40.228821 systemd-logind[1709]: New session 20 of user core.
Feb 12 21:56:40.761610 sshd[4565]: pam_unix(sshd:session): session closed for user core
Feb 12 21:56:40.765859 systemd[1]: sshd@19-172.31.30.174:22-139.178.89.65:38620.service: Deactivated successfully.
Feb 12 21:56:40.767049 systemd[1]: session-20.scope: Deactivated successfully.
Feb 12 21:56:40.767312 systemd-logind[1709]: Session 20 logged out. Waiting for processes to exit.
Feb 12 21:56:40.769060 systemd-logind[1709]: Removed session 20.
Feb 12 21:56:40.785223 systemd[1]: Started sshd@20-172.31.30.174:22-139.178.89.65:38632.service.
Feb 12 21:56:40.943013 sshd[4604]: Accepted publickey for core from 139.178.89.65 port 38632 ssh2: RSA SHA256:BLc8w5wGiofCozMWb4UlfDNGWSz58WJcVew2e99GstU
Feb 12 21:56:40.948751 sshd[4604]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 21:56:40.955123 systemd[1]: Started session-21.scope.
Feb 12 21:56:40.956377 systemd-logind[1709]: New session 21 of user core.
Feb 12 21:56:41.156946 sshd[4604]: pam_unix(sshd:session): session closed for user core
Feb 12 21:56:41.161058 systemd-logind[1709]: Session 21 logged out. Waiting for processes to exit.
Feb 12 21:56:41.161243 systemd[1]: sshd@20-172.31.30.174:22-139.178.89.65:38632.service: Deactivated successfully.
Feb 12 21:56:41.162782 systemd[1]: session-21.scope: Deactivated successfully.
Feb 12 21:56:41.164326 systemd-logind[1709]: Removed session 21.
Feb 12 21:56:46.182551 systemd[1]: Started sshd@21-172.31.30.174:22-139.178.89.65:38646.service.
Feb 12 21:56:46.338844 sshd[4617]: Accepted publickey for core from 139.178.89.65 port 38646 ssh2: RSA SHA256:BLc8w5wGiofCozMWb4UlfDNGWSz58WJcVew2e99GstU
Feb 12 21:56:46.340751 sshd[4617]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 21:56:46.347328 systemd[1]: Started session-22.scope.
Feb 12 21:56:46.348920 systemd-logind[1709]: New session 22 of user core.
Feb 12 21:56:46.555576 sshd[4617]: pam_unix(sshd:session): session closed for user core
Feb 12 21:56:46.560712 systemd[1]: sshd@21-172.31.30.174:22-139.178.89.65:38646.service: Deactivated successfully.
Feb 12 21:56:46.561353 systemd-logind[1709]: Session 22 logged out. Waiting for processes to exit.
Feb 12 21:56:46.562137 systemd[1]: session-22.scope: Deactivated successfully.
Feb 12 21:56:46.564065 systemd-logind[1709]: Removed session 22.
Feb 12 21:56:51.580973 systemd[1]: Started sshd@22-172.31.30.174:22-139.178.89.65:50126.service.
Feb 12 21:56:51.744634 sshd[4658]: Accepted publickey for core from 139.178.89.65 port 50126 ssh2: RSA SHA256:BLc8w5wGiofCozMWb4UlfDNGWSz58WJcVew2e99GstU
Feb 12 21:56:51.746071 sshd[4658]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 21:56:51.751930 systemd-logind[1709]: New session 23 of user core.
Feb 12 21:56:51.752343 systemd[1]: Started session-23.scope.
Feb 12 21:56:51.949347 sshd[4658]: pam_unix(sshd:session): session closed for user core
Feb 12 21:56:51.953325 systemd-logind[1709]: Session 23 logged out. Waiting for processes to exit.
Feb 12 21:56:51.953747 systemd[1]: sshd@22-172.31.30.174:22-139.178.89.65:50126.service: Deactivated successfully.
Feb 12 21:56:51.957027 systemd[1]: session-23.scope: Deactivated successfully.
Feb 12 21:56:51.958278 systemd-logind[1709]: Removed session 23.
Feb 12 21:56:56.976186 systemd[1]: Started sshd@23-172.31.30.174:22-139.178.89.65:50134.service.
Feb 12 21:56:57.139531 sshd[4673]: Accepted publickey for core from 139.178.89.65 port 50134 ssh2: RSA SHA256:BLc8w5wGiofCozMWb4UlfDNGWSz58WJcVew2e99GstU
Feb 12 21:56:57.141142 sshd[4673]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 21:56:57.147226 systemd[1]: Started session-24.scope.
Feb 12 21:56:57.147524 systemd-logind[1709]: New session 24 of user core.
Feb 12 21:56:57.338971 sshd[4673]: pam_unix(sshd:session): session closed for user core
Feb 12 21:56:57.347864 systemd[1]: sshd@23-172.31.30.174:22-139.178.89.65:50134.service: Deactivated successfully.
Feb 12 21:56:57.352689 systemd[1]: session-24.scope: Deactivated successfully.
Feb 12 21:56:57.354105 systemd-logind[1709]: Session 24 logged out. Waiting for processes to exit.
Feb 12 21:56:57.355867 systemd-logind[1709]: Removed session 24.
Feb 12 21:57:02.364675 systemd[1]: Started sshd@24-172.31.30.174:22-139.178.89.65:54114.service.
Feb 12 21:57:02.534313 sshd[4686]: Accepted publickey for core from 139.178.89.65 port 54114 ssh2: RSA SHA256:BLc8w5wGiofCozMWb4UlfDNGWSz58WJcVew2e99GstU
Feb 12 21:57:02.536232 sshd[4686]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 21:57:02.544194 systemd[1]: Started session-25.scope.
Feb 12 21:57:02.546593 systemd-logind[1709]: New session 25 of user core.
Feb 12 21:57:02.794942 sshd[4686]: pam_unix(sshd:session): session closed for user core
Feb 12 21:57:02.805537 systemd[1]: sshd@24-172.31.30.174:22-139.178.89.65:54114.service: Deactivated successfully.
Feb 12 21:57:02.806858 systemd[1]: session-25.scope: Deactivated successfully.
Feb 12 21:57:02.807645 systemd-logind[1709]: Session 25 logged out. Waiting for processes to exit.
Feb 12 21:57:02.826774 systemd-logind[1709]: Removed session 25.
Feb 12 21:57:02.833745 systemd[1]: Started sshd@25-172.31.30.174:22-139.178.89.65:54118.service.
Feb 12 21:57:03.019824 sshd[4699]: Accepted publickey for core from 139.178.89.65 port 54118 ssh2: RSA SHA256:BLc8w5wGiofCozMWb4UlfDNGWSz58WJcVew2e99GstU
Feb 12 21:57:03.021333 sshd[4699]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 21:57:03.030950 systemd[1]: Started session-26.scope.
Feb 12 21:57:03.031690 systemd-logind[1709]: New session 26 of user core.
Feb 12 21:57:05.013192 env[1721]: time="2024-02-12T21:57:05.005046205Z" level=info msg="StopContainer for \"013acb562d4735d52371ffc12d5fc97f0728abec7af449911584eeb71548eaa5\" with timeout 30 (s)"
Feb 12 21:57:05.013192 env[1721]: time="2024-02-12T21:57:05.005906078Z" level=info msg="Stop container \"013acb562d4735d52371ffc12d5fc97f0728abec7af449911584eeb71548eaa5\" with signal terminated"
Feb 12 21:57:05.010923 systemd[1]: run-containerd-runc-k8s.io-56a3723dc17f8d1c5b465cca550206cc8bc618f5a10f8299dfd5cfcf199a73d3-runc.0vu4lA.mount: Deactivated successfully.
Feb 12 21:57:05.058757 env[1721]: time="2024-02-12T21:57:05.058362133Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Feb 12 21:57:05.070711 env[1721]: time="2024-02-12T21:57:05.070630136Z" level=info msg="StopContainer for \"56a3723dc17f8d1c5b465cca550206cc8bc618f5a10f8299dfd5cfcf199a73d3\" with timeout 1 (s)"
Feb 12 21:57:05.071123 env[1721]: time="2024-02-12T21:57:05.071094024Z" level=info msg="Stop container \"56a3723dc17f8d1c5b465cca550206cc8bc618f5a10f8299dfd5cfcf199a73d3\" with signal terminated"
Feb 12 21:57:05.090854 systemd-networkd[1507]: lxc_health: Link DOWN
Feb 12 21:57:05.090863 systemd-networkd[1507]: lxc_health: Lost carrier
Feb 12 21:57:05.138161 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-013acb562d4735d52371ffc12d5fc97f0728abec7af449911584eeb71548eaa5-rootfs.mount: Deactivated successfully.
Feb 12 21:57:05.263078 env[1721]: time="2024-02-12T21:57:05.262998888Z" level=info msg="shim disconnected" id=013acb562d4735d52371ffc12d5fc97f0728abec7af449911584eeb71548eaa5
Feb 12 21:57:05.263408 env[1721]: time="2024-02-12T21:57:05.263083363Z" level=warning msg="cleaning up after shim disconnected" id=013acb562d4735d52371ffc12d5fc97f0728abec7af449911584eeb71548eaa5 namespace=k8s.io
Feb 12 21:57:05.263408 env[1721]: time="2024-02-12T21:57:05.263138645Z" level=info msg="cleaning up dead shim"
Feb 12 21:57:05.284110 env[1721]: time="2024-02-12T21:57:05.284054315Z" level=warning msg="cleanup warnings time=\"2024-02-12T21:57:05Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4760 runtime=io.containerd.runc.v2\n"
Feb 12 21:57:05.291050 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-56a3723dc17f8d1c5b465cca550206cc8bc618f5a10f8299dfd5cfcf199a73d3-rootfs.mount: Deactivated successfully.
Feb 12 21:57:05.292375 env[1721]: time="2024-02-12T21:57:05.292232013Z" level=info msg="StopContainer for \"013acb562d4735d52371ffc12d5fc97f0728abec7af449911584eeb71548eaa5\" returns successfully"
Feb 12 21:57:05.294424 env[1721]: time="2024-02-12T21:57:05.294392502Z" level=info msg="StopPodSandbox for \"82ecc84a6480240c7c6afd0fbac403cd8795cc790d9480af84e7a204b69fdc5f\""
Feb 12 21:57:05.294631 env[1721]: time="2024-02-12T21:57:05.294602114Z" level=info msg="Container to stop \"013acb562d4735d52371ffc12d5fc97f0728abec7af449911584eeb71548eaa5\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 12 21:57:05.300754 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-82ecc84a6480240c7c6afd0fbac403cd8795cc790d9480af84e7a204b69fdc5f-shm.mount: Deactivated successfully.
Feb 12 21:57:05.313017 env[1721]: time="2024-02-12T21:57:05.312904362Z" level=info msg="shim disconnected" id=56a3723dc17f8d1c5b465cca550206cc8bc618f5a10f8299dfd5cfcf199a73d3
Feb 12 21:57:05.313259 env[1721]: time="2024-02-12T21:57:05.313027537Z" level=warning msg="cleaning up after shim disconnected" id=56a3723dc17f8d1c5b465cca550206cc8bc618f5a10f8299dfd5cfcf199a73d3 namespace=k8s.io
Feb 12 21:57:05.313259 env[1721]: time="2024-02-12T21:57:05.313044084Z" level=info msg="cleaning up dead shim"
Feb 12 21:57:05.332941 env[1721]: time="2024-02-12T21:57:05.332890503Z" level=warning msg="cleanup warnings time=\"2024-02-12T21:57:05Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4793 runtime=io.containerd.runc.v2\n"
Feb 12 21:57:05.338249 env[1721]: time="2024-02-12T21:57:05.338201818Z" level=info msg="StopContainer for \"56a3723dc17f8d1c5b465cca550206cc8bc618f5a10f8299dfd5cfcf199a73d3\" returns successfully"
Feb 12 21:57:05.339110 env[1721]: time="2024-02-12T21:57:05.339072921Z" level=info msg="StopPodSandbox for \"2ecb878362ac26f51d5e793dc210f725cd88f0008f28e84a057c37fc71c2065b\""
Feb 12 21:57:05.339564 env[1721]: time="2024-02-12T21:57:05.339533658Z" level=info msg="Container to stop \"dcb586ec6fd837cc5205cda66584ef7256e09f9708957fce3f5b7e7a4bd9c833\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 12 21:57:05.339719 env[1721]: time="2024-02-12T21:57:05.339694500Z" level=info msg="Container to stop \"8d6dd3cc035d221af8aa5135e5f76c7d2638cf5becba714687a30181024edf63\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 12 21:57:05.339839 env[1721]: time="2024-02-12T21:57:05.339816232Z" level=info msg="Container to stop \"56a3723dc17f8d1c5b465cca550206cc8bc618f5a10f8299dfd5cfcf199a73d3\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 12 21:57:05.339950 env[1721]: time="2024-02-12T21:57:05.339928453Z" level=info msg="Container to stop \"49d45a534bd525cad242ad6b532585c50cb0433a88de30b710b6acec3b50da5c\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 12 21:57:05.340059 env[1721]: time="2024-02-12T21:57:05.340038763Z" level=info msg="Container to stop \"d182657ec9a7a12799d0b8a6c2ef9493ac22b90cb98d15edb0484151bbca2a69\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 12 21:57:05.356565 env[1721]: time="2024-02-12T21:57:05.356508928Z" level=info msg="shim disconnected" id=82ecc84a6480240c7c6afd0fbac403cd8795cc790d9480af84e7a204b69fdc5f
Feb 12 21:57:05.356921 env[1721]: time="2024-02-12T21:57:05.356895406Z" level=warning msg="cleaning up after shim disconnected" id=82ecc84a6480240c7c6afd0fbac403cd8795cc790d9480af84e7a204b69fdc5f namespace=k8s.io
Feb 12 21:57:05.357064 env[1721]: time="2024-02-12T21:57:05.357021824Z" level=info msg="cleaning up dead shim"
Feb 12 21:57:05.390182 env[1721]: time="2024-02-12T21:57:05.390127194Z" level=warning msg="cleanup warnings time=\"2024-02-12T21:57:05Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4830 runtime=io.containerd.runc.v2\ntime=\"2024-02-12T21:57:05Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n"
Feb 12 21:57:05.394662 env[1721]: time="2024-02-12T21:57:05.394613104Z" level=info msg="TearDown network for sandbox \"82ecc84a6480240c7c6afd0fbac403cd8795cc790d9480af84e7a204b69fdc5f\" successfully"
Feb 12 21:57:05.394662 env[1721]: time="2024-02-12T21:57:05.394657685Z" level=info msg="StopPodSandbox for \"82ecc84a6480240c7c6afd0fbac403cd8795cc790d9480af84e7a204b69fdc5f\" returns successfully"
Feb 12 21:57:05.414116 env[1721]: time="2024-02-12T21:57:05.395343578Z" level=info msg="shim disconnected" id=2ecb878362ac26f51d5e793dc210f725cd88f0008f28e84a057c37fc71c2065b
Feb 12 21:57:05.414116 env[1721]: time="2024-02-12T21:57:05.395394151Z" level=warning msg="cleaning up after shim disconnected" id=2ecb878362ac26f51d5e793dc210f725cd88f0008f28e84a057c37fc71c2065b namespace=k8s.io
Feb 12 21:57:05.414116 env[1721]: time="2024-02-12T21:57:05.395408692Z" level=info msg="cleaning up dead shim"
Feb 12 21:57:05.421609 env[1721]: time="2024-02-12T21:57:05.419878230Z" level=warning msg="cleanup warnings time=\"2024-02-12T21:57:05Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4854 runtime=io.containerd.runc.v2\n"
Feb 12 21:57:05.421609 env[1721]: time="2024-02-12T21:57:05.421102862Z" level=info msg="TearDown network for sandbox \"2ecb878362ac26f51d5e793dc210f725cd88f0008f28e84a057c37fc71c2065b\" successfully"
Feb 12 21:57:05.421609 env[1721]: time="2024-02-12T21:57:05.421133136Z" level=info msg="StopPodSandbox for \"2ecb878362ac26f51d5e793dc210f725cd88f0008f28e84a057c37fc71c2065b\" returns successfully"
Feb 12 21:57:05.522722 kubelet[2960]: I0212 21:57:05.522597 2960 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2f628668-03a9-4cc6-8e97-285241c99d8e-clustermesh-secrets\") pod \"2f628668-03a9-4cc6-8e97-285241c99d8e\" (UID: \"2f628668-03a9-4cc6-8e97-285241c99d8e\") "
Feb 12 21:57:05.522722 kubelet[2960]: I0212 21:57:05.522653 2960 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2f628668-03a9-4cc6-8e97-285241c99d8e-xtables-lock\") pod \"2f628668-03a9-4cc6-8e97-285241c99d8e\" (UID: \"2f628668-03a9-4cc6-8e97-285241c99d8e\") "
Feb 12 21:57:05.522722 kubelet[2960]: I0212 21:57:05.522684 2960 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2f628668-03a9-4cc6-8e97-285241c99d8e-cilium-cgroup\") pod \"2f628668-03a9-4cc6-8e97-285241c99d8e\" (UID: \"2f628668-03a9-4cc6-8e97-285241c99d8e\") "
Feb 12 21:57:05.534112 kubelet[2960]: I0212 21:57:05.534060 2960 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2f628668-03a9-4cc6-8e97-285241c99d8e-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "2f628668-03a9-4cc6-8e97-285241c99d8e" (UID: "2f628668-03a9-4cc6-8e97-285241c99d8e"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 21:57:05.534409 kubelet[2960]: I0212 21:57:05.534394 2960 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2f628668-03a9-4cc6-8e97-285241c99d8e-lib-modules\") pod \"2f628668-03a9-4cc6-8e97-285241c99d8e\" (UID: \"2f628668-03a9-4cc6-8e97-285241c99d8e\") "
Feb 12 21:57:05.534551 kubelet[2960]: I0212 21:57:05.534541 2960 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jtzn7\" (UniqueName: \"kubernetes.io/projected/103104ca-5420-4915-8ff6-15f792c97e6c-kube-api-access-jtzn7\") pod \"103104ca-5420-4915-8ff6-15f792c97e6c\" (UID: \"103104ca-5420-4915-8ff6-15f792c97e6c\") "
Feb 12 21:57:05.534656 kubelet[2960]: I0212 21:57:05.534647 2960 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2f628668-03a9-4cc6-8e97-285241c99d8e-hubble-tls\") pod \"2f628668-03a9-4cc6-8e97-285241c99d8e\" (UID: \"2f628668-03a9-4cc6-8e97-285241c99d8e\") "
Feb 12 21:57:05.534755 kubelet[2960]: I0212 21:57:05.534746 2960 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/103104ca-5420-4915-8ff6-15f792c97e6c-cilium-config-path\") pod \"103104ca-5420-4915-8ff6-15f792c97e6c\" (UID: \"103104ca-5420-4915-8ff6-15f792c97e6c\") "
Feb 12 21:57:05.534862 kubelet[2960]: I0212 21:57:05.534853 2960 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2f628668-03a9-4cc6-8e97-285241c99d8e-host-proc-sys-net\") pod \"2f628668-03a9-4cc6-8e97-285241c99d8e\" (UID: \"2f628668-03a9-4cc6-8e97-285241c99d8e\") "
Feb 12 21:57:05.534949 kubelet[2960]: I0212 21:57:05.534941 2960 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2f628668-03a9-4cc6-8e97-285241c99d8e-cni-path\") pod \"2f628668-03a9-4cc6-8e97-285241c99d8e\" (UID: \"2f628668-03a9-4cc6-8e97-285241c99d8e\") "
Feb 12 21:57:05.535043 kubelet[2960]: I0212 21:57:05.535034 2960 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2f628668-03a9-4cc6-8e97-285241c99d8e-cilium-config-path\") pod \"2f628668-03a9-4cc6-8e97-285241c99d8e\" (UID: \"2f628668-03a9-4cc6-8e97-285241c99d8e\") "
Feb 12 21:57:05.535130 kubelet[2960]: I0212 21:57:05.535122 2960 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2f628668-03a9-4cc6-8e97-285241c99d8e-bpf-maps\") pod \"2f628668-03a9-4cc6-8e97-285241c99d8e\" (UID: \"2f628668-03a9-4cc6-8e97-285241c99d8e\") "
Feb 12 21:57:05.535214 kubelet[2960]: I0212 21:57:05.535207 2960 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2f628668-03a9-4cc6-8e97-285241c99d8e-etc-cni-netd\") pod \"2f628668-03a9-4cc6-8e97-285241c99d8e\" (UID: \"2f628668-03a9-4cc6-8e97-285241c99d8e\") "
Feb 12 21:57:05.535334 kubelet[2960]: I0212 21:57:05.535325 2960 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2f628668-03a9-4cc6-8e97-285241c99d8e-host-proc-sys-kernel\") pod \"2f628668-03a9-4cc6-8e97-285241c99d8e\" (UID: \"2f628668-03a9-4cc6-8e97-285241c99d8e\") "
Feb 12 21:57:05.535428 kubelet[2960]: I0212 21:57:05.535421 2960 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2f628668-03a9-4cc6-8e97-285241c99d8e-hostproc\") pod \"2f628668-03a9-4cc6-8e97-285241c99d8e\" (UID: \"2f628668-03a9-4cc6-8e97-285241c99d8e\") "
Feb 12 21:57:05.535506 kubelet[2960]: I0212 21:57:05.535499 2960 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2f628668-03a9-4cc6-8e97-285241c99d8e-cilium-run\") pod \"2f628668-03a9-4cc6-8e97-285241c99d8e\" (UID: \"2f628668-03a9-4cc6-8e97-285241c99d8e\") "
Feb 12 21:57:05.535594 kubelet[2960]: I0212 21:57:05.535587 2960 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xlsx5\" (UniqueName: \"kubernetes.io/projected/2f628668-03a9-4cc6-8e97-285241c99d8e-kube-api-access-xlsx5\") pod \"2f628668-03a9-4cc6-8e97-285241c99d8e\" (UID: \"2f628668-03a9-4cc6-8e97-285241c99d8e\") "
Feb 12 21:57:05.535728 kubelet[2960]: I0212 21:57:05.535697 2960 reconciler_common.go:295] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2f628668-03a9-4cc6-8e97-285241c99d8e-xtables-lock\") on node \"ip-172-31-30-174\" DevicePath \"\""
Feb 12 21:57:05.536546 kubelet[2960]: I0212 21:57:05.536515 2960 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f628668-03a9-4cc6-8e97-285241c99d8e-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "2f628668-03a9-4cc6-8e97-285241c99d8e" (UID: "2f628668-03a9-4cc6-8e97-285241c99d8e"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 12 21:57:05.536702 kubelet[2960]: I0212 21:57:05.536674 2960 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2f628668-03a9-4cc6-8e97-285241c99d8e-cni-path" (OuterVolumeSpecName: "cni-path") pod "2f628668-03a9-4cc6-8e97-285241c99d8e" (UID: "2f628668-03a9-4cc6-8e97-285241c99d8e"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 21:57:05.536773 kubelet[2960]: I0212 21:57:05.532529 2960 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2f628668-03a9-4cc6-8e97-285241c99d8e-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "2f628668-03a9-4cc6-8e97-285241c99d8e" (UID: "2f628668-03a9-4cc6-8e97-285241c99d8e"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 21:57:05.536773 kubelet[2960]: I0212 21:57:05.536724 2960 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2f628668-03a9-4cc6-8e97-285241c99d8e-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "2f628668-03a9-4cc6-8e97-285241c99d8e" (UID: "2f628668-03a9-4cc6-8e97-285241c99d8e"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 21:57:05.543123 kubelet[2960]: I0212 21:57:05.543080 2960 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f628668-03a9-4cc6-8e97-285241c99d8e-kube-api-access-xlsx5" (OuterVolumeSpecName: "kube-api-access-xlsx5") pod "2f628668-03a9-4cc6-8e97-285241c99d8e" (UID: "2f628668-03a9-4cc6-8e97-285241c99d8e"). InnerVolumeSpecName "kube-api-access-xlsx5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 12 21:57:05.543620 kubelet[2960]: I0212 21:57:05.543589 2960 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/103104ca-5420-4915-8ff6-15f792c97e6c-kube-api-access-jtzn7" (OuterVolumeSpecName: "kube-api-access-jtzn7") pod "103104ca-5420-4915-8ff6-15f792c97e6c" (UID: "103104ca-5420-4915-8ff6-15f792c97e6c"). InnerVolumeSpecName "kube-api-access-jtzn7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 12 21:57:05.547829 kubelet[2960]: W0212 21:57:05.547771 2960 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/2f628668-03a9-4cc6-8e97-285241c99d8e/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled
Feb 12 21:57:05.549310 kubelet[2960]: I0212 21:57:05.549244 2960 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f628668-03a9-4cc6-8e97-285241c99d8e-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "2f628668-03a9-4cc6-8e97-285241c99d8e" (UID: "2f628668-03a9-4cc6-8e97-285241c99d8e"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 12 21:57:05.550130 kubelet[2960]: W0212 21:57:05.550093 2960 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/103104ca-5420-4915-8ff6-15f792c97e6c/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled
Feb 12 21:57:05.550922 kubelet[2960]: I0212 21:57:05.550818 2960 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2f628668-03a9-4cc6-8e97-285241c99d8e-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "2f628668-03a9-4cc6-8e97-285241c99d8e" (UID: "2f628668-03a9-4cc6-8e97-285241c99d8e"). InnerVolumeSpecName "cilium-config-path".
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 12 21:57:05.551222 kubelet[2960]: I0212 21:57:05.550959 2960 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2f628668-03a9-4cc6-8e97-285241c99d8e-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "2f628668-03a9-4cc6-8e97-285241c99d8e" (UID: "2f628668-03a9-4cc6-8e97-285241c99d8e"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 21:57:05.551222 kubelet[2960]: I0212 21:57:05.550988 2960 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2f628668-03a9-4cc6-8e97-285241c99d8e-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "2f628668-03a9-4cc6-8e97-285241c99d8e" (UID: "2f628668-03a9-4cc6-8e97-285241c99d8e"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 21:57:05.551222 kubelet[2960]: I0212 21:57:05.551013 2960 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2f628668-03a9-4cc6-8e97-285241c99d8e-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "2f628668-03a9-4cc6-8e97-285241c99d8e" (UID: "2f628668-03a9-4cc6-8e97-285241c99d8e"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 21:57:05.551222 kubelet[2960]: I0212 21:57:05.551064 2960 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2f628668-03a9-4cc6-8e97-285241c99d8e-hostproc" (OuterVolumeSpecName: "hostproc") pod "2f628668-03a9-4cc6-8e97-285241c99d8e" (UID: "2f628668-03a9-4cc6-8e97-285241c99d8e"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 21:57:05.551222 kubelet[2960]: I0212 21:57:05.551091 2960 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2f628668-03a9-4cc6-8e97-285241c99d8e-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "2f628668-03a9-4cc6-8e97-285241c99d8e" (UID: "2f628668-03a9-4cc6-8e97-285241c99d8e"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 21:57:05.551549 kubelet[2960]: I0212 21:57:05.551135 2960 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2f628668-03a9-4cc6-8e97-285241c99d8e-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "2f628668-03a9-4cc6-8e97-285241c99d8e" (UID: "2f628668-03a9-4cc6-8e97-285241c99d8e"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 21:57:05.553872 kubelet[2960]: I0212 21:57:05.553842 2960 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/103104ca-5420-4915-8ff6-15f792c97e6c-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "103104ca-5420-4915-8ff6-15f792c97e6c" (UID: "103104ca-5420-4915-8ff6-15f792c97e6c"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 12 21:57:05.636640 kubelet[2960]: I0212 21:57:05.636206 2960 reconciler_common.go:295] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2f628668-03a9-4cc6-8e97-285241c99d8e-lib-modules\") on node \"ip-172-31-30-174\" DevicePath \"\"" Feb 12 21:57:05.636822 kubelet[2960]: I0212 21:57:05.636646 2960 reconciler_common.go:295] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2f628668-03a9-4cc6-8e97-285241c99d8e-clustermesh-secrets\") on node \"ip-172-31-30-174\" DevicePath \"\"" Feb 12 21:57:05.636822 kubelet[2960]: I0212 21:57:05.636675 2960 reconciler_common.go:295] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2f628668-03a9-4cc6-8e97-285241c99d8e-cilium-cgroup\") on node \"ip-172-31-30-174\" DevicePath \"\"" Feb 12 21:57:05.636822 kubelet[2960]: I0212 21:57:05.636691 2960 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-jtzn7\" (UniqueName: \"kubernetes.io/projected/103104ca-5420-4915-8ff6-15f792c97e6c-kube-api-access-jtzn7\") on node \"ip-172-31-30-174\" DevicePath \"\"" Feb 12 21:57:05.636822 kubelet[2960]: I0212 21:57:05.636704 2960 reconciler_common.go:295] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2f628668-03a9-4cc6-8e97-285241c99d8e-hubble-tls\") on node \"ip-172-31-30-174\" DevicePath \"\"" Feb 12 21:57:05.636822 kubelet[2960]: I0212 21:57:05.636718 2960 reconciler_common.go:295] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/103104ca-5420-4915-8ff6-15f792c97e6c-cilium-config-path\") on node \"ip-172-31-30-174\" DevicePath \"\"" Feb 12 21:57:05.636822 kubelet[2960]: I0212 21:57:05.636732 2960 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2f628668-03a9-4cc6-8e97-285241c99d8e-host-proc-sys-net\") on node 
\"ip-172-31-30-174\" DevicePath \"\"" Feb 12 21:57:05.636822 kubelet[2960]: I0212 21:57:05.636744 2960 reconciler_common.go:295] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2f628668-03a9-4cc6-8e97-285241c99d8e-cni-path\") on node \"ip-172-31-30-174\" DevicePath \"\"" Feb 12 21:57:05.636822 kubelet[2960]: I0212 21:57:05.636757 2960 reconciler_common.go:295] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2f628668-03a9-4cc6-8e97-285241c99d8e-cilium-config-path\") on node \"ip-172-31-30-174\" DevicePath \"\"" Feb 12 21:57:05.637740 kubelet[2960]: I0212 21:57:05.636770 2960 reconciler_common.go:295] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2f628668-03a9-4cc6-8e97-285241c99d8e-hostproc\") on node \"ip-172-31-30-174\" DevicePath \"\"" Feb 12 21:57:05.637740 kubelet[2960]: I0212 21:57:05.636784 2960 reconciler_common.go:295] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2f628668-03a9-4cc6-8e97-285241c99d8e-cilium-run\") on node \"ip-172-31-30-174\" DevicePath \"\"" Feb 12 21:57:05.637740 kubelet[2960]: I0212 21:57:05.636796 2960 reconciler_common.go:295] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2f628668-03a9-4cc6-8e97-285241c99d8e-bpf-maps\") on node \"ip-172-31-30-174\" DevicePath \"\"" Feb 12 21:57:05.637740 kubelet[2960]: I0212 21:57:05.636809 2960 reconciler_common.go:295] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2f628668-03a9-4cc6-8e97-285241c99d8e-etc-cni-netd\") on node \"ip-172-31-30-174\" DevicePath \"\"" Feb 12 21:57:05.637740 kubelet[2960]: I0212 21:57:05.636822 2960 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2f628668-03a9-4cc6-8e97-285241c99d8e-host-proc-sys-kernel\") on node \"ip-172-31-30-174\" DevicePath \"\"" Feb 12 21:57:05.637740 kubelet[2960]: I0212 
21:57:05.636852 2960 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-xlsx5\" (UniqueName: \"kubernetes.io/projected/2f628668-03a9-4cc6-8e97-285241c99d8e-kube-api-access-xlsx5\") on node \"ip-172-31-30-174\" DevicePath \"\"" Feb 12 21:57:05.984889 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2ecb878362ac26f51d5e793dc210f725cd88f0008f28e84a057c37fc71c2065b-rootfs.mount: Deactivated successfully. Feb 12 21:57:05.987101 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2ecb878362ac26f51d5e793dc210f725cd88f0008f28e84a057c37fc71c2065b-shm.mount: Deactivated successfully. Feb 12 21:57:05.987296 systemd[1]: var-lib-kubelet-pods-2f628668\x2d03a9\x2d4cc6\x2d8e97\x2d285241c99d8e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dxlsx5.mount: Deactivated successfully. Feb 12 21:57:05.989679 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-82ecc84a6480240c7c6afd0fbac403cd8795cc790d9480af84e7a204b69fdc5f-rootfs.mount: Deactivated successfully. Feb 12 21:57:05.990040 systemd[1]: var-lib-kubelet-pods-103104ca\x2d5420\x2d4915\x2d8ff6\x2d15f792c97e6c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2djtzn7.mount: Deactivated successfully. Feb 12 21:57:05.990620 systemd[1]: var-lib-kubelet-pods-2f628668\x2d03a9\x2d4cc6\x2d8e97\x2d285241c99d8e-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 12 21:57:05.991191 systemd[1]: var-lib-kubelet-pods-2f628668\x2d03a9\x2d4cc6\x2d8e97\x2d285241c99d8e-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Feb 12 21:57:06.074373 kubelet[2960]: I0212 21:57:06.074345 2960 scope.go:115] "RemoveContainer" containerID="56a3723dc17f8d1c5b465cca550206cc8bc618f5a10f8299dfd5cfcf199a73d3" Feb 12 21:57:06.082605 env[1721]: time="2024-02-12T21:57:06.082148665Z" level=info msg="RemoveContainer for \"56a3723dc17f8d1c5b465cca550206cc8bc618f5a10f8299dfd5cfcf199a73d3\"" Feb 12 21:57:06.101329 env[1721]: time="2024-02-12T21:57:06.098097115Z" level=info msg="RemoveContainer for \"56a3723dc17f8d1c5b465cca550206cc8bc618f5a10f8299dfd5cfcf199a73d3\" returns successfully" Feb 12 21:57:06.102057 kubelet[2960]: I0212 21:57:06.102030 2960 scope.go:115] "RemoveContainer" containerID="8d6dd3cc035d221af8aa5135e5f76c7d2638cf5becba714687a30181024edf63" Feb 12 21:57:06.105102 env[1721]: time="2024-02-12T21:57:06.103898755Z" level=info msg="RemoveContainer for \"8d6dd3cc035d221af8aa5135e5f76c7d2638cf5becba714687a30181024edf63\"" Feb 12 21:57:06.111303 env[1721]: time="2024-02-12T21:57:06.111128454Z" level=info msg="RemoveContainer for \"8d6dd3cc035d221af8aa5135e5f76c7d2638cf5becba714687a30181024edf63\" returns successfully" Feb 12 21:57:06.111684 kubelet[2960]: I0212 21:57:06.111663 2960 scope.go:115] "RemoveContainer" containerID="dcb586ec6fd837cc5205cda66584ef7256e09f9708957fce3f5b7e7a4bd9c833" Feb 12 21:57:06.113405 env[1721]: time="2024-02-12T21:57:06.113365310Z" level=info msg="RemoveContainer for \"dcb586ec6fd837cc5205cda66584ef7256e09f9708957fce3f5b7e7a4bd9c833\"" Feb 12 21:57:06.126037 env[1721]: time="2024-02-12T21:57:06.125987957Z" level=info msg="RemoveContainer for \"dcb586ec6fd837cc5205cda66584ef7256e09f9708957fce3f5b7e7a4bd9c833\" returns successfully" Feb 12 21:57:06.126927 kubelet[2960]: I0212 21:57:06.126883 2960 scope.go:115] "RemoveContainer" containerID="d182657ec9a7a12799d0b8a6c2ef9493ac22b90cb98d15edb0484151bbca2a69" Feb 12 21:57:06.134744 env[1721]: time="2024-02-12T21:57:06.134560964Z" level=info msg="RemoveContainer for 
\"d182657ec9a7a12799d0b8a6c2ef9493ac22b90cb98d15edb0484151bbca2a69\"" Feb 12 21:57:06.140931 env[1721]: time="2024-02-12T21:57:06.140886043Z" level=info msg="RemoveContainer for \"d182657ec9a7a12799d0b8a6c2ef9493ac22b90cb98d15edb0484151bbca2a69\" returns successfully" Feb 12 21:57:06.141384 kubelet[2960]: I0212 21:57:06.141364 2960 scope.go:115] "RemoveContainer" containerID="49d45a534bd525cad242ad6b532585c50cb0433a88de30b710b6acec3b50da5c" Feb 12 21:57:06.143385 env[1721]: time="2024-02-12T21:57:06.143322988Z" level=info msg="RemoveContainer for \"49d45a534bd525cad242ad6b532585c50cb0433a88de30b710b6acec3b50da5c\"" Feb 12 21:57:06.158995 env[1721]: time="2024-02-12T21:57:06.158860980Z" level=info msg="RemoveContainer for \"49d45a534bd525cad242ad6b532585c50cb0433a88de30b710b6acec3b50da5c\" returns successfully" Feb 12 21:57:06.160802 kubelet[2960]: I0212 21:57:06.159574 2960 scope.go:115] "RemoveContainer" containerID="56a3723dc17f8d1c5b465cca550206cc8bc618f5a10f8299dfd5cfcf199a73d3" Feb 12 21:57:06.160934 env[1721]: time="2024-02-12T21:57:06.160181676Z" level=error msg="ContainerStatus for \"56a3723dc17f8d1c5b465cca550206cc8bc618f5a10f8299dfd5cfcf199a73d3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"56a3723dc17f8d1c5b465cca550206cc8bc618f5a10f8299dfd5cfcf199a73d3\": not found" Feb 12 21:57:06.168510 kubelet[2960]: E0212 21:57:06.168272 2960 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"56a3723dc17f8d1c5b465cca550206cc8bc618f5a10f8299dfd5cfcf199a73d3\": not found" containerID="56a3723dc17f8d1c5b465cca550206cc8bc618f5a10f8299dfd5cfcf199a73d3" Feb 12 21:57:06.169836 kubelet[2960]: I0212 21:57:06.169521 2960 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:56a3723dc17f8d1c5b465cca550206cc8bc618f5a10f8299dfd5cfcf199a73d3} err="failed to get container status 
\"56a3723dc17f8d1c5b465cca550206cc8bc618f5a10f8299dfd5cfcf199a73d3\": rpc error: code = NotFound desc = an error occurred when try to find container \"56a3723dc17f8d1c5b465cca550206cc8bc618f5a10f8299dfd5cfcf199a73d3\": not found" Feb 12 21:57:06.170031 kubelet[2960]: I0212 21:57:06.169756 2960 scope.go:115] "RemoveContainer" containerID="8d6dd3cc035d221af8aa5135e5f76c7d2638cf5becba714687a30181024edf63" Feb 12 21:57:06.173099 env[1721]: time="2024-02-12T21:57:06.173006777Z" level=error msg="ContainerStatus for \"8d6dd3cc035d221af8aa5135e5f76c7d2638cf5becba714687a30181024edf63\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8d6dd3cc035d221af8aa5135e5f76c7d2638cf5becba714687a30181024edf63\": not found" Feb 12 21:57:06.173299 kubelet[2960]: E0212 21:57:06.173282 2960 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8d6dd3cc035d221af8aa5135e5f76c7d2638cf5becba714687a30181024edf63\": not found" containerID="8d6dd3cc035d221af8aa5135e5f76c7d2638cf5becba714687a30181024edf63" Feb 12 21:57:06.173393 kubelet[2960]: I0212 21:57:06.173324 2960 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:8d6dd3cc035d221af8aa5135e5f76c7d2638cf5becba714687a30181024edf63} err="failed to get container status \"8d6dd3cc035d221af8aa5135e5f76c7d2638cf5becba714687a30181024edf63\": rpc error: code = NotFound desc = an error occurred when try to find container \"8d6dd3cc035d221af8aa5135e5f76c7d2638cf5becba714687a30181024edf63\": not found" Feb 12 21:57:06.173393 kubelet[2960]: I0212 21:57:06.173345 2960 scope.go:115] "RemoveContainer" containerID="dcb586ec6fd837cc5205cda66584ef7256e09f9708957fce3f5b7e7a4bd9c833" Feb 12 21:57:06.174620 env[1721]: time="2024-02-12T21:57:06.174508594Z" level=error msg="ContainerStatus for \"dcb586ec6fd837cc5205cda66584ef7256e09f9708957fce3f5b7e7a4bd9c833\" failed" 
error="rpc error: code = NotFound desc = an error occurred when try to find container \"dcb586ec6fd837cc5205cda66584ef7256e09f9708957fce3f5b7e7a4bd9c833\": not found" Feb 12 21:57:06.174883 kubelet[2960]: E0212 21:57:06.174865 2960 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"dcb586ec6fd837cc5205cda66584ef7256e09f9708957fce3f5b7e7a4bd9c833\": not found" containerID="dcb586ec6fd837cc5205cda66584ef7256e09f9708957fce3f5b7e7a4bd9c833" Feb 12 21:57:06.175086 kubelet[2960]: I0212 21:57:06.175070 2960 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:dcb586ec6fd837cc5205cda66584ef7256e09f9708957fce3f5b7e7a4bd9c833} err="failed to get container status \"dcb586ec6fd837cc5205cda66584ef7256e09f9708957fce3f5b7e7a4bd9c833\": rpc error: code = NotFound desc = an error occurred when try to find container \"dcb586ec6fd837cc5205cda66584ef7256e09f9708957fce3f5b7e7a4bd9c833\": not found" Feb 12 21:57:06.175301 kubelet[2960]: I0212 21:57:06.175091 2960 scope.go:115] "RemoveContainer" containerID="d182657ec9a7a12799d0b8a6c2ef9493ac22b90cb98d15edb0484151bbca2a69" Feb 12 21:57:06.175710 env[1721]: time="2024-02-12T21:57:06.175630503Z" level=error msg="ContainerStatus for \"d182657ec9a7a12799d0b8a6c2ef9493ac22b90cb98d15edb0484151bbca2a69\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d182657ec9a7a12799d0b8a6c2ef9493ac22b90cb98d15edb0484151bbca2a69\": not found" Feb 12 21:57:06.175863 kubelet[2960]: E0212 21:57:06.175843 2960 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d182657ec9a7a12799d0b8a6c2ef9493ac22b90cb98d15edb0484151bbca2a69\": not found" containerID="d182657ec9a7a12799d0b8a6c2ef9493ac22b90cb98d15edb0484151bbca2a69" Feb 12 21:57:06.176001 kubelet[2960]: I0212 
21:57:06.175880 2960 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:d182657ec9a7a12799d0b8a6c2ef9493ac22b90cb98d15edb0484151bbca2a69} err="failed to get container status \"d182657ec9a7a12799d0b8a6c2ef9493ac22b90cb98d15edb0484151bbca2a69\": rpc error: code = NotFound desc = an error occurred when try to find container \"d182657ec9a7a12799d0b8a6c2ef9493ac22b90cb98d15edb0484151bbca2a69\": not found" Feb 12 21:57:06.176001 kubelet[2960]: I0212 21:57:06.175948 2960 scope.go:115] "RemoveContainer" containerID="49d45a534bd525cad242ad6b532585c50cb0433a88de30b710b6acec3b50da5c" Feb 12 21:57:06.176183 env[1721]: time="2024-02-12T21:57:06.176126931Z" level=error msg="ContainerStatus for \"49d45a534bd525cad242ad6b532585c50cb0433a88de30b710b6acec3b50da5c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"49d45a534bd525cad242ad6b532585c50cb0433a88de30b710b6acec3b50da5c\": not found" Feb 12 21:57:06.176322 kubelet[2960]: E0212 21:57:06.176302 2960 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"49d45a534bd525cad242ad6b532585c50cb0433a88de30b710b6acec3b50da5c\": not found" containerID="49d45a534bd525cad242ad6b532585c50cb0433a88de30b710b6acec3b50da5c" Feb 12 21:57:06.176516 kubelet[2960]: I0212 21:57:06.176337 2960 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:49d45a534bd525cad242ad6b532585c50cb0433a88de30b710b6acec3b50da5c} err="failed to get container status \"49d45a534bd525cad242ad6b532585c50cb0433a88de30b710b6acec3b50da5c\": rpc error: code = NotFound desc = an error occurred when try to find container \"49d45a534bd525cad242ad6b532585c50cb0433a88de30b710b6acec3b50da5c\": not found" Feb 12 21:57:06.176516 kubelet[2960]: I0212 21:57:06.176355 2960 scope.go:115] "RemoveContainer" 
containerID="013acb562d4735d52371ffc12d5fc97f0728abec7af449911584eeb71548eaa5" Feb 12 21:57:06.178077 env[1721]: time="2024-02-12T21:57:06.177523551Z" level=info msg="RemoveContainer for \"013acb562d4735d52371ffc12d5fc97f0728abec7af449911584eeb71548eaa5\"" Feb 12 21:57:06.183430 env[1721]: time="2024-02-12T21:57:06.183389176Z" level=info msg="RemoveContainer for \"013acb562d4735d52371ffc12d5fc97f0728abec7af449911584eeb71548eaa5\" returns successfully" Feb 12 21:57:06.183741 kubelet[2960]: I0212 21:57:06.183716 2960 scope.go:115] "RemoveContainer" containerID="013acb562d4735d52371ffc12d5fc97f0728abec7af449911584eeb71548eaa5" Feb 12 21:57:06.184068 env[1721]: time="2024-02-12T21:57:06.184007242Z" level=error msg="ContainerStatus for \"013acb562d4735d52371ffc12d5fc97f0728abec7af449911584eeb71548eaa5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"013acb562d4735d52371ffc12d5fc97f0728abec7af449911584eeb71548eaa5\": not found" Feb 12 21:57:06.184235 kubelet[2960]: E0212 21:57:06.184204 2960 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"013acb562d4735d52371ffc12d5fc97f0728abec7af449911584eeb71548eaa5\": not found" containerID="013acb562d4735d52371ffc12d5fc97f0728abec7af449911584eeb71548eaa5" Feb 12 21:57:06.184317 kubelet[2960]: I0212 21:57:06.184245 2960 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:013acb562d4735d52371ffc12d5fc97f0728abec7af449911584eeb71548eaa5} err="failed to get container status \"013acb562d4735d52371ffc12d5fc97f0728abec7af449911584eeb71548eaa5\": rpc error: code = NotFound desc = an error occurred when try to find container \"013acb562d4735d52371ffc12d5fc97f0728abec7af449911584eeb71548eaa5\": not found" Feb 12 21:57:06.891395 sshd[4699]: pam_unix(sshd:session): session closed for user core Feb 12 21:57:06.894976 systemd[1]: 
sshd@25-172.31.30.174:22-139.178.89.65:54118.service: Deactivated successfully. Feb 12 21:57:06.897694 systemd[1]: session-26.scope: Deactivated successfully. Feb 12 21:57:06.898794 systemd-logind[1709]: Session 26 logged out. Waiting for processes to exit. Feb 12 21:57:06.902683 systemd-logind[1709]: Removed session 26. Feb 12 21:57:06.916309 systemd[1]: Started sshd@26-172.31.30.174:22-139.178.89.65:54128.service. Feb 12 21:57:07.093888 sshd[4876]: Accepted publickey for core from 139.178.89.65 port 54128 ssh2: RSA SHA256:BLc8w5wGiofCozMWb4UlfDNGWSz58WJcVew2e99GstU Feb 12 21:57:07.095604 sshd[4876]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 21:57:07.101945 systemd-logind[1709]: New session 27 of user core. Feb 12 21:57:07.102979 systemd[1]: Started session-27.scope. Feb 12 21:57:07.590876 kubelet[2960]: I0212 21:57:07.590849 2960 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=103104ca-5420-4915-8ff6-15f792c97e6c path="/var/lib/kubelet/pods/103104ca-5420-4915-8ff6-15f792c97e6c/volumes" Feb 12 21:57:07.594498 kubelet[2960]: I0212 21:57:07.594472 2960 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=2f628668-03a9-4cc6-8e97-285241c99d8e path="/var/lib/kubelet/pods/2f628668-03a9-4cc6-8e97-285241c99d8e/volumes" Feb 12 21:57:08.053109 kubelet[2960]: I0212 21:57:08.053060 2960 topology_manager.go:210] "Topology Admit Handler" Feb 12 21:57:08.056541 kubelet[2960]: E0212 21:57:08.056511 2960 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="103104ca-5420-4915-8ff6-15f792c97e6c" containerName="cilium-operator" Feb 12 21:57:08.059031 kubelet[2960]: E0212 21:57:08.058976 2960 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="2f628668-03a9-4cc6-8e97-285241c99d8e" containerName="clean-cilium-state" Feb 12 21:57:08.059198 kubelet[2960]: E0212 21:57:08.059185 2960 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="2f628668-03a9-4cc6-8e97-285241c99d8e" 
containerName="cilium-agent" Feb 12 21:57:08.059307 kubelet[2960]: E0212 21:57:08.059297 2960 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="2f628668-03a9-4cc6-8e97-285241c99d8e" containerName="mount-cgroup" Feb 12 21:57:08.064897 kubelet[2960]: E0212 21:57:08.059387 2960 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="2f628668-03a9-4cc6-8e97-285241c99d8e" containerName="apply-sysctl-overwrites" Feb 12 21:57:08.064897 kubelet[2960]: E0212 21:57:08.059400 2960 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="2f628668-03a9-4cc6-8e97-285241c99d8e" containerName="mount-bpf-fs" Feb 12 21:57:08.064036 sshd[4876]: pam_unix(sshd:session): session closed for user core Feb 12 21:57:08.067841 systemd[1]: sshd@26-172.31.30.174:22-139.178.89.65:54128.service: Deactivated successfully. Feb 12 21:57:08.069162 systemd[1]: session-27.scope: Deactivated successfully. Feb 12 21:57:08.072979 systemd-logind[1709]: Session 27 logged out. Waiting for processes to exit. Feb 12 21:57:08.077059 systemd-logind[1709]: Removed session 27. Feb 12 21:57:08.087602 systemd[1]: Started sshd@27-172.31.30.174:22-139.178.89.65:40870.service. 
Feb 12 21:57:08.118079 kubelet[2960]: I0212 21:57:08.118046 2960 memory_manager.go:346] "RemoveStaleState removing state" podUID="103104ca-5420-4915-8ff6-15f792c97e6c" containerName="cilium-operator" Feb 12 21:57:08.118312 kubelet[2960]: I0212 21:57:08.118294 2960 memory_manager.go:346] "RemoveStaleState removing state" podUID="2f628668-03a9-4cc6-8e97-285241c99d8e" containerName="cilium-agent" Feb 12 21:57:08.254164 kubelet[2960]: I0212 21:57:08.254123 2960 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7406c106-7700-40d1-a990-ebd98af6e3ad-etc-cni-netd\") pod \"cilium-rg698\" (UID: \"7406c106-7700-40d1-a990-ebd98af6e3ad\") " pod="kube-system/cilium-rg698" Feb 12 21:57:08.254387 kubelet[2960]: I0212 21:57:08.254233 2960 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/7406c106-7700-40d1-a990-ebd98af6e3ad-cilium-ipsec-secrets\") pod \"cilium-rg698\" (UID: \"7406c106-7700-40d1-a990-ebd98af6e3ad\") " pod="kube-system/cilium-rg698" Feb 12 21:57:08.254387 kubelet[2960]: I0212 21:57:08.254310 2960 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7406c106-7700-40d1-a990-ebd98af6e3ad-cilium-cgroup\") pod \"cilium-rg698\" (UID: \"7406c106-7700-40d1-a990-ebd98af6e3ad\") " pod="kube-system/cilium-rg698" Feb 12 21:57:08.254387 kubelet[2960]: I0212 21:57:08.254338 2960 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7406c106-7700-40d1-a990-ebd98af6e3ad-lib-modules\") pod \"cilium-rg698\" (UID: \"7406c106-7700-40d1-a990-ebd98af6e3ad\") " pod="kube-system/cilium-rg698" Feb 12 21:57:08.254387 kubelet[2960]: I0212 21:57:08.254366 2960 reconciler_common.go:253] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7406c106-7700-40d1-a990-ebd98af6e3ad-xtables-lock\") pod \"cilium-rg698\" (UID: \"7406c106-7700-40d1-a990-ebd98af6e3ad\") " pod="kube-system/cilium-rg698" Feb 12 21:57:08.254583 kubelet[2960]: I0212 21:57:08.254399 2960 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7406c106-7700-40d1-a990-ebd98af6e3ad-cilium-config-path\") pod \"cilium-rg698\" (UID: \"7406c106-7700-40d1-a990-ebd98af6e3ad\") " pod="kube-system/cilium-rg698" Feb 12 21:57:08.254583 kubelet[2960]: I0212 21:57:08.254436 2960 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7406c106-7700-40d1-a990-ebd98af6e3ad-bpf-maps\") pod \"cilium-rg698\" (UID: \"7406c106-7700-40d1-a990-ebd98af6e3ad\") " pod="kube-system/cilium-rg698" Feb 12 21:57:08.254583 kubelet[2960]: I0212 21:57:08.254467 2960 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7406c106-7700-40d1-a990-ebd98af6e3ad-host-proc-sys-net\") pod \"cilium-rg698\" (UID: \"7406c106-7700-40d1-a990-ebd98af6e3ad\") " pod="kube-system/cilium-rg698" Feb 12 21:57:08.254583 kubelet[2960]: I0212 21:57:08.254499 2960 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7406c106-7700-40d1-a990-ebd98af6e3ad-cilium-run\") pod \"cilium-rg698\" (UID: \"7406c106-7700-40d1-a990-ebd98af6e3ad\") " pod="kube-system/cilium-rg698" Feb 12 21:57:08.254583 kubelet[2960]: I0212 21:57:08.254532 2960 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9f99t\" (UniqueName: 
\"kubernetes.io/projected/7406c106-7700-40d1-a990-ebd98af6e3ad-kube-api-access-9f99t\") pod \"cilium-rg698\" (UID: \"7406c106-7700-40d1-a990-ebd98af6e3ad\") " pod="kube-system/cilium-rg698" Feb 12 21:57:08.254583 kubelet[2960]: I0212 21:57:08.254568 2960 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7406c106-7700-40d1-a990-ebd98af6e3ad-cni-path\") pod \"cilium-rg698\" (UID: \"7406c106-7700-40d1-a990-ebd98af6e3ad\") " pod="kube-system/cilium-rg698" Feb 12 21:57:08.254836 kubelet[2960]: I0212 21:57:08.254599 2960 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7406c106-7700-40d1-a990-ebd98af6e3ad-clustermesh-secrets\") pod \"cilium-rg698\" (UID: \"7406c106-7700-40d1-a990-ebd98af6e3ad\") " pod="kube-system/cilium-rg698" Feb 12 21:57:08.254836 kubelet[2960]: I0212 21:57:08.254632 2960 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7406c106-7700-40d1-a990-ebd98af6e3ad-host-proc-sys-kernel\") pod \"cilium-rg698\" (UID: \"7406c106-7700-40d1-a990-ebd98af6e3ad\") " pod="kube-system/cilium-rg698" Feb 12 21:57:08.254836 kubelet[2960]: I0212 21:57:08.254665 2960 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7406c106-7700-40d1-a990-ebd98af6e3ad-hostproc\") pod \"cilium-rg698\" (UID: \"7406c106-7700-40d1-a990-ebd98af6e3ad\") " pod="kube-system/cilium-rg698" Feb 12 21:57:08.254836 kubelet[2960]: I0212 21:57:08.254697 2960 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7406c106-7700-40d1-a990-ebd98af6e3ad-hubble-tls\") pod \"cilium-rg698\" (UID: 
\"7406c106-7700-40d1-a990-ebd98af6e3ad\") " pod="kube-system/cilium-rg698" Feb 12 21:57:08.291641 sshd[4889]: Accepted publickey for core from 139.178.89.65 port 40870 ssh2: RSA SHA256:BLc8w5wGiofCozMWb4UlfDNGWSz58WJcVew2e99GstU Feb 12 21:57:08.293574 sshd[4889]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 21:57:08.301462 systemd[1]: Started session-28.scope. Feb 12 21:57:08.302334 systemd-logind[1709]: New session 28 of user core. Feb 12 21:57:08.439673 env[1721]: time="2024-02-12T21:57:08.439175056Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rg698,Uid:7406c106-7700-40d1-a990-ebd98af6e3ad,Namespace:kube-system,Attempt:0,}" Feb 12 21:57:08.476651 env[1721]: time="2024-02-12T21:57:08.476577998Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 21:57:08.476903 env[1721]: time="2024-02-12T21:57:08.476879592Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 21:57:08.477399 env[1721]: time="2024-02-12T21:57:08.477332112Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 21:57:08.478097 env[1721]: time="2024-02-12T21:57:08.478019443Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2153c066e4ad53c940e3c83321147d846ac4fe860bc3c1f3483f1693dc25df85 pid=4911 runtime=io.containerd.runc.v2 Feb 12 21:57:08.584862 env[1721]: time="2024-02-12T21:57:08.584809923Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rg698,Uid:7406c106-7700-40d1-a990-ebd98af6e3ad,Namespace:kube-system,Attempt:0,} returns sandbox id \"2153c066e4ad53c940e3c83321147d846ac4fe860bc3c1f3483f1693dc25df85\"" Feb 12 21:57:08.592008 env[1721]: time="2024-02-12T21:57:08.591316880Z" level=info msg="CreateContainer within sandbox \"2153c066e4ad53c940e3c83321147d846ac4fe860bc3c1f3483f1693dc25df85\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 12 21:57:08.616480 env[1721]: time="2024-02-12T21:57:08.616426183Z" level=info msg="CreateContainer within sandbox \"2153c066e4ad53c940e3c83321147d846ac4fe860bc3c1f3483f1693dc25df85\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"5f8456998160ab314f90729753d3b2a8712dc73c781015615c9fed0ede89df8d\"" Feb 12 21:57:08.617921 env[1721]: time="2024-02-12T21:57:08.617867320Z" level=info msg="StartContainer for \"5f8456998160ab314f90729753d3b2a8712dc73c781015615c9fed0ede89df8d\"" Feb 12 21:57:08.720488 env[1721]: time="2024-02-12T21:57:08.716999756Z" level=info msg="StartContainer for \"5f8456998160ab314f90729753d3b2a8712dc73c781015615c9fed0ede89df8d\" returns successfully" Feb 12 21:57:08.714377 systemd[1]: sshd@27-172.31.30.174:22-139.178.89.65:40870.service: Deactivated successfully. Feb 12 21:57:08.710129 sshd[4889]: pam_unix(sshd:session): session closed for user core Feb 12 21:57:08.716066 systemd[1]: session-28.scope: Deactivated successfully. Feb 12 21:57:08.723682 systemd-logind[1709]: Session 28 logged out. Waiting for processes to exit. 
Feb 12 21:57:08.728739 systemd-logind[1709]: Removed session 28. Feb 12 21:57:08.734854 systemd[1]: Started sshd@28-172.31.30.174:22-139.178.89.65:40884.service. Feb 12 21:57:08.757059 kubelet[2960]: E0212 21:57:08.756837 2960 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 12 21:57:08.819788 env[1721]: time="2024-02-12T21:57:08.819738396Z" level=info msg="shim disconnected" id=5f8456998160ab314f90729753d3b2a8712dc73c781015615c9fed0ede89df8d Feb 12 21:57:08.819788 env[1721]: time="2024-02-12T21:57:08.819786907Z" level=warning msg="cleaning up after shim disconnected" id=5f8456998160ab314f90729753d3b2a8712dc73c781015615c9fed0ede89df8d namespace=k8s.io Feb 12 21:57:08.820254 env[1721]: time="2024-02-12T21:57:08.819799207Z" level=info msg="cleaning up dead shim" Feb 12 21:57:08.829087 env[1721]: time="2024-02-12T21:57:08.829036448Z" level=warning msg="cleanup warnings time=\"2024-02-12T21:57:08Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5001 runtime=io.containerd.runc.v2\n" Feb 12 21:57:08.921149 sshd[4980]: Accepted publickey for core from 139.178.89.65 port 40884 ssh2: RSA SHA256:BLc8w5wGiofCozMWb4UlfDNGWSz58WJcVew2e99GstU Feb 12 21:57:08.922726 sshd[4980]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 21:57:08.929154 systemd[1]: Started session-29.scope. Feb 12 21:57:08.929460 systemd-logind[1709]: New session 29 of user core. 
Feb 12 21:57:09.135958 env[1721]: time="2024-02-12T21:57:09.135846744Z" level=info msg="StopPodSandbox for \"2153c066e4ad53c940e3c83321147d846ac4fe860bc3c1f3483f1693dc25df85\"" Feb 12 21:57:09.136200 env[1721]: time="2024-02-12T21:57:09.136172251Z" level=info msg="Container to stop \"5f8456998160ab314f90729753d3b2a8712dc73c781015615c9fed0ede89df8d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 12 21:57:09.193655 env[1721]: time="2024-02-12T21:57:09.193599408Z" level=info msg="shim disconnected" id=2153c066e4ad53c940e3c83321147d846ac4fe860bc3c1f3483f1693dc25df85 Feb 12 21:57:09.193655 env[1721]: time="2024-02-12T21:57:09.193646207Z" level=warning msg="cleaning up after shim disconnected" id=2153c066e4ad53c940e3c83321147d846ac4fe860bc3c1f3483f1693dc25df85 namespace=k8s.io Feb 12 21:57:09.193655 env[1721]: time="2024-02-12T21:57:09.193659316Z" level=info msg="cleaning up dead shim" Feb 12 21:57:09.203823 env[1721]: time="2024-02-12T21:57:09.203767712Z" level=warning msg="cleanup warnings time=\"2024-02-12T21:57:09Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5042 runtime=io.containerd.runc.v2\n" Feb 12 21:57:09.204158 env[1721]: time="2024-02-12T21:57:09.204122699Z" level=info msg="TearDown network for sandbox \"2153c066e4ad53c940e3c83321147d846ac4fe860bc3c1f3483f1693dc25df85\" successfully" Feb 12 21:57:09.204246 env[1721]: time="2024-02-12T21:57:09.204156505Z" level=info msg="StopPodSandbox for \"2153c066e4ad53c940e3c83321147d846ac4fe860bc3c1f3483f1693dc25df85\" returns successfully" Feb 12 21:57:09.270298 kubelet[2960]: I0212 21:57:09.268495 2960 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7406c106-7700-40d1-a990-ebd98af6e3ad-xtables-lock\") pod \"7406c106-7700-40d1-a990-ebd98af6e3ad\" (UID: \"7406c106-7700-40d1-a990-ebd98af6e3ad\") " Feb 12 21:57:09.270298 kubelet[2960]: I0212 21:57:09.268547 2960 reconciler_common.go:169] 
"operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7406c106-7700-40d1-a990-ebd98af6e3ad-cni-path\") pod \"7406c106-7700-40d1-a990-ebd98af6e3ad\" (UID: \"7406c106-7700-40d1-a990-ebd98af6e3ad\") " Feb 12 21:57:09.270298 kubelet[2960]: I0212 21:57:09.268575 2960 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7406c106-7700-40d1-a990-ebd98af6e3ad-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "7406c106-7700-40d1-a990-ebd98af6e3ad" (UID: "7406c106-7700-40d1-a990-ebd98af6e3ad"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 21:57:09.270298 kubelet[2960]: I0212 21:57:09.268587 2960 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7406c106-7700-40d1-a990-ebd98af6e3ad-cilium-config-path\") pod \"7406c106-7700-40d1-a990-ebd98af6e3ad\" (UID: \"7406c106-7700-40d1-a990-ebd98af6e3ad\") " Feb 12 21:57:09.270298 kubelet[2960]: I0212 21:57:09.268650 2960 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7406c106-7700-40d1-a990-ebd98af6e3ad-cilium-run\") pod \"7406c106-7700-40d1-a990-ebd98af6e3ad\" (UID: \"7406c106-7700-40d1-a990-ebd98af6e3ad\") " Feb 12 21:57:09.270298 kubelet[2960]: I0212 21:57:09.268700 2960 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7406c106-7700-40d1-a990-ebd98af6e3ad-etc-cni-netd\") pod \"7406c106-7700-40d1-a990-ebd98af6e3ad\" (UID: \"7406c106-7700-40d1-a990-ebd98af6e3ad\") " Feb 12 21:57:09.270717 kubelet[2960]: I0212 21:57:09.268732 2960 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/7406c106-7700-40d1-a990-ebd98af6e3ad-cilium-ipsec-secrets\") pod 
\"7406c106-7700-40d1-a990-ebd98af6e3ad\" (UID: \"7406c106-7700-40d1-a990-ebd98af6e3ad\") " Feb 12 21:57:09.270717 kubelet[2960]: I0212 21:57:09.268776 2960 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7406c106-7700-40d1-a990-ebd98af6e3ad-hostproc\") pod \"7406c106-7700-40d1-a990-ebd98af6e3ad\" (UID: \"7406c106-7700-40d1-a990-ebd98af6e3ad\") " Feb 12 21:57:09.270717 kubelet[2960]: W0212 21:57:09.268781 2960 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/7406c106-7700-40d1-a990-ebd98af6e3ad/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Feb 12 21:57:09.270717 kubelet[2960]: I0212 21:57:09.269397 2960 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7406c106-7700-40d1-a990-ebd98af6e3ad-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "7406c106-7700-40d1-a990-ebd98af6e3ad" (UID: "7406c106-7700-40d1-a990-ebd98af6e3ad"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 21:57:09.270717 kubelet[2960]: I0212 21:57:09.269447 2960 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7406c106-7700-40d1-a990-ebd98af6e3ad-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "7406c106-7700-40d1-a990-ebd98af6e3ad" (UID: "7406c106-7700-40d1-a990-ebd98af6e3ad"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 21:57:09.270717 kubelet[2960]: I0212 21:57:09.268806 2960 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7406c106-7700-40d1-a990-ebd98af6e3ad-hubble-tls\") pod \"7406c106-7700-40d1-a990-ebd98af6e3ad\" (UID: \"7406c106-7700-40d1-a990-ebd98af6e3ad\") " Feb 12 21:57:09.271111 kubelet[2960]: I0212 21:57:09.270420 2960 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7406c106-7700-40d1-a990-ebd98af6e3ad-bpf-maps\") pod \"7406c106-7700-40d1-a990-ebd98af6e3ad\" (UID: \"7406c106-7700-40d1-a990-ebd98af6e3ad\") " Feb 12 21:57:09.271111 kubelet[2960]: I0212 21:57:09.270467 2960 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7406c106-7700-40d1-a990-ebd98af6e3ad-host-proc-sys-net\") pod \"7406c106-7700-40d1-a990-ebd98af6e3ad\" (UID: \"7406c106-7700-40d1-a990-ebd98af6e3ad\") " Feb 12 21:57:09.271111 kubelet[2960]: I0212 21:57:09.270505 2960 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9f99t\" (UniqueName: \"kubernetes.io/projected/7406c106-7700-40d1-a990-ebd98af6e3ad-kube-api-access-9f99t\") pod \"7406c106-7700-40d1-a990-ebd98af6e3ad\" (UID: \"7406c106-7700-40d1-a990-ebd98af6e3ad\") " Feb 12 21:57:09.271111 kubelet[2960]: I0212 21:57:09.270539 2960 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7406c106-7700-40d1-a990-ebd98af6e3ad-clustermesh-secrets\") pod \"7406c106-7700-40d1-a990-ebd98af6e3ad\" (UID: \"7406c106-7700-40d1-a990-ebd98af6e3ad\") " Feb 12 21:57:09.271111 kubelet[2960]: I0212 21:57:09.270570 2960 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: 
\"kubernetes.io/host-path/7406c106-7700-40d1-a990-ebd98af6e3ad-host-proc-sys-kernel\") pod \"7406c106-7700-40d1-a990-ebd98af6e3ad\" (UID: \"7406c106-7700-40d1-a990-ebd98af6e3ad\") " Feb 12 21:57:09.271111 kubelet[2960]: I0212 21:57:09.270600 2960 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7406c106-7700-40d1-a990-ebd98af6e3ad-cilium-cgroup\") pod \"7406c106-7700-40d1-a990-ebd98af6e3ad\" (UID: \"7406c106-7700-40d1-a990-ebd98af6e3ad\") " Feb 12 21:57:09.271444 kubelet[2960]: I0212 21:57:09.270631 2960 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7406c106-7700-40d1-a990-ebd98af6e3ad-lib-modules\") pod \"7406c106-7700-40d1-a990-ebd98af6e3ad\" (UID: \"7406c106-7700-40d1-a990-ebd98af6e3ad\") " Feb 12 21:57:09.271444 kubelet[2960]: I0212 21:57:09.270690 2960 reconciler_common.go:295] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7406c106-7700-40d1-a990-ebd98af6e3ad-xtables-lock\") on node \"ip-172-31-30-174\" DevicePath \"\"" Feb 12 21:57:09.271444 kubelet[2960]: I0212 21:57:09.270709 2960 reconciler_common.go:295] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7406c106-7700-40d1-a990-ebd98af6e3ad-cilium-run\") on node \"ip-172-31-30-174\" DevicePath \"\"" Feb 12 21:57:09.271444 kubelet[2960]: I0212 21:57:09.270724 2960 reconciler_common.go:295] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7406c106-7700-40d1-a990-ebd98af6e3ad-etc-cni-netd\") on node \"ip-172-31-30-174\" DevicePath \"\"" Feb 12 21:57:09.271444 kubelet[2960]: I0212 21:57:09.270752 2960 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7406c106-7700-40d1-a990-ebd98af6e3ad-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "7406c106-7700-40d1-a990-ebd98af6e3ad" (UID: 
"7406c106-7700-40d1-a990-ebd98af6e3ad"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 21:57:09.271444 kubelet[2960]: I0212 21:57:09.270782 2960 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7406c106-7700-40d1-a990-ebd98af6e3ad-cni-path" (OuterVolumeSpecName: "cni-path") pod "7406c106-7700-40d1-a990-ebd98af6e3ad" (UID: "7406c106-7700-40d1-a990-ebd98af6e3ad"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 21:57:09.271740 kubelet[2960]: I0212 21:57:09.270806 2960 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7406c106-7700-40d1-a990-ebd98af6e3ad-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "7406c106-7700-40d1-a990-ebd98af6e3ad" (UID: "7406c106-7700-40d1-a990-ebd98af6e3ad"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 21:57:09.271740 kubelet[2960]: I0212 21:57:09.270828 2960 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7406c106-7700-40d1-a990-ebd98af6e3ad-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "7406c106-7700-40d1-a990-ebd98af6e3ad" (UID: "7406c106-7700-40d1-a990-ebd98af6e3ad"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 21:57:09.271740 kubelet[2960]: I0212 21:57:09.271381 2960 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7406c106-7700-40d1-a990-ebd98af6e3ad-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "7406c106-7700-40d1-a990-ebd98af6e3ad" (UID: "7406c106-7700-40d1-a990-ebd98af6e3ad"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 21:57:09.271740 kubelet[2960]: I0212 21:57:09.271417 2960 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7406c106-7700-40d1-a990-ebd98af6e3ad-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "7406c106-7700-40d1-a990-ebd98af6e3ad" (UID: "7406c106-7700-40d1-a990-ebd98af6e3ad"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 21:57:09.271740 kubelet[2960]: I0212 21:57:09.271569 2960 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7406c106-7700-40d1-a990-ebd98af6e3ad-hostproc" (OuterVolumeSpecName: "hostproc") pod "7406c106-7700-40d1-a990-ebd98af6e3ad" (UID: "7406c106-7700-40d1-a990-ebd98af6e3ad"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 21:57:09.273477 kubelet[2960]: I0212 21:57:09.273451 2960 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7406c106-7700-40d1-a990-ebd98af6e3ad-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "7406c106-7700-40d1-a990-ebd98af6e3ad" (UID: "7406c106-7700-40d1-a990-ebd98af6e3ad"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 12 21:57:09.275444 kubelet[2960]: I0212 21:57:09.275300 2960 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7406c106-7700-40d1-a990-ebd98af6e3ad-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "7406c106-7700-40d1-a990-ebd98af6e3ad" (UID: "7406c106-7700-40d1-a990-ebd98af6e3ad"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 12 21:57:09.277509 kubelet[2960]: I0212 21:57:09.277478 2960 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7406c106-7700-40d1-a990-ebd98af6e3ad-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "7406c106-7700-40d1-a990-ebd98af6e3ad" (UID: "7406c106-7700-40d1-a990-ebd98af6e3ad"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 12 21:57:09.280197 kubelet[2960]: I0212 21:57:09.280169 2960 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7406c106-7700-40d1-a990-ebd98af6e3ad-kube-api-access-9f99t" (OuterVolumeSpecName: "kube-api-access-9f99t") pod "7406c106-7700-40d1-a990-ebd98af6e3ad" (UID: "7406c106-7700-40d1-a990-ebd98af6e3ad"). InnerVolumeSpecName "kube-api-access-9f99t". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 12 21:57:09.280907 kubelet[2960]: I0212 21:57:09.280872 2960 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7406c106-7700-40d1-a990-ebd98af6e3ad-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "7406c106-7700-40d1-a990-ebd98af6e3ad" (UID: "7406c106-7700-40d1-a990-ebd98af6e3ad"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 12 21:57:09.367056 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2153c066e4ad53c940e3c83321147d846ac4fe860bc3c1f3483f1693dc25df85-shm.mount: Deactivated successfully. Feb 12 21:57:09.367289 systemd[1]: var-lib-kubelet-pods-7406c106\x2d7700\x2d40d1\x2da990\x2debd98af6e3ad-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d9f99t.mount: Deactivated successfully. 
Feb 12 21:57:09.367491 systemd[1]: var-lib-kubelet-pods-7406c106\x2d7700\x2d40d1\x2da990\x2debd98af6e3ad-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Feb 12 21:57:09.367633 systemd[1]: var-lib-kubelet-pods-7406c106\x2d7700\x2d40d1\x2da990\x2debd98af6e3ad-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 12 21:57:09.367772 systemd[1]: var-lib-kubelet-pods-7406c106\x2d7700\x2d40d1\x2da990\x2debd98af6e3ad-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 12 21:57:09.371548 kubelet[2960]: I0212 21:57:09.371043 2960 reconciler_common.go:295] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/7406c106-7700-40d1-a990-ebd98af6e3ad-cilium-ipsec-secrets\") on node \"ip-172-31-30-174\" DevicePath \"\"" Feb 12 21:57:09.371548 kubelet[2960]: I0212 21:57:09.371080 2960 reconciler_common.go:295] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7406c106-7700-40d1-a990-ebd98af6e3ad-hostproc\") on node \"ip-172-31-30-174\" DevicePath \"\"" Feb 12 21:57:09.371548 kubelet[2960]: I0212 21:57:09.371095 2960 reconciler_common.go:295] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7406c106-7700-40d1-a990-ebd98af6e3ad-bpf-maps\") on node \"ip-172-31-30-174\" DevicePath \"\"" Feb 12 21:57:09.371548 kubelet[2960]: I0212 21:57:09.371111 2960 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7406c106-7700-40d1-a990-ebd98af6e3ad-host-proc-sys-net\") on node \"ip-172-31-30-174\" DevicePath \"\"" Feb 12 21:57:09.371548 kubelet[2960]: I0212 21:57:09.371129 2960 reconciler_common.go:295] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7406c106-7700-40d1-a990-ebd98af6e3ad-hubble-tls\") on node \"ip-172-31-30-174\" DevicePath \"\"" Feb 12 21:57:09.371548 kubelet[2960]: I0212 
21:57:09.371146 2960 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-9f99t\" (UniqueName: \"kubernetes.io/projected/7406c106-7700-40d1-a990-ebd98af6e3ad-kube-api-access-9f99t\") on node \"ip-172-31-30-174\" DevicePath \"\"" Feb 12 21:57:09.371548 kubelet[2960]: I0212 21:57:09.371166 2960 reconciler_common.go:295] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7406c106-7700-40d1-a990-ebd98af6e3ad-clustermesh-secrets\") on node \"ip-172-31-30-174\" DevicePath \"\"" Feb 12 21:57:09.371548 kubelet[2960]: I0212 21:57:09.371186 2960 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7406c106-7700-40d1-a990-ebd98af6e3ad-host-proc-sys-kernel\") on node \"ip-172-31-30-174\" DevicePath \"\"" Feb 12 21:57:09.372105 kubelet[2960]: I0212 21:57:09.371204 2960 reconciler_common.go:295] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7406c106-7700-40d1-a990-ebd98af6e3ad-cilium-cgroup\") on node \"ip-172-31-30-174\" DevicePath \"\"" Feb 12 21:57:09.372105 kubelet[2960]: I0212 21:57:09.371225 2960 reconciler_common.go:295] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7406c106-7700-40d1-a990-ebd98af6e3ad-lib-modules\") on node \"ip-172-31-30-174\" DevicePath \"\"" Feb 12 21:57:09.372105 kubelet[2960]: I0212 21:57:09.371241 2960 reconciler_common.go:295] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7406c106-7700-40d1-a990-ebd98af6e3ad-cni-path\") on node \"ip-172-31-30-174\" DevicePath \"\"" Feb 12 21:57:09.372430 kubelet[2960]: I0212 21:57:09.372402 2960 reconciler_common.go:295] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7406c106-7700-40d1-a990-ebd98af6e3ad-cilium-config-path\") on node \"ip-172-31-30-174\" DevicePath \"\"" Feb 12 21:57:10.139435 kubelet[2960]: I0212 21:57:10.139409 2960 
scope.go:115] "RemoveContainer" containerID="5f8456998160ab314f90729753d3b2a8712dc73c781015615c9fed0ede89df8d" Feb 12 21:57:10.146729 env[1721]: time="2024-02-12T21:57:10.146688103Z" level=info msg="RemoveContainer for \"5f8456998160ab314f90729753d3b2a8712dc73c781015615c9fed0ede89df8d\"" Feb 12 21:57:10.173563 env[1721]: time="2024-02-12T21:57:10.173502807Z" level=info msg="RemoveContainer for \"5f8456998160ab314f90729753d3b2a8712dc73c781015615c9fed0ede89df8d\" returns successfully" Feb 12 21:57:10.219534 kubelet[2960]: I0212 21:57:10.219495 2960 topology_manager.go:210] "Topology Admit Handler" Feb 12 21:57:10.219808 kubelet[2960]: E0212 21:57:10.219566 2960 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7406c106-7700-40d1-a990-ebd98af6e3ad" containerName="mount-cgroup" Feb 12 21:57:10.219808 kubelet[2960]: I0212 21:57:10.219600 2960 memory_manager.go:346] "RemoveStaleState removing state" podUID="7406c106-7700-40d1-a990-ebd98af6e3ad" containerName="mount-cgroup" Feb 12 21:57:10.277452 kubelet[2960]: I0212 21:57:10.277398 2960 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1dcc1944-d311-4675-847c-b5f3f2ece986-bpf-maps\") pod \"cilium-j8x5m\" (UID: \"1dcc1944-d311-4675-847c-b5f3f2ece986\") " pod="kube-system/cilium-j8x5m" Feb 12 21:57:10.277452 kubelet[2960]: I0212 21:57:10.277458 2960 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1dcc1944-d311-4675-847c-b5f3f2ece986-xtables-lock\") pod \"cilium-j8x5m\" (UID: \"1dcc1944-d311-4675-847c-b5f3f2ece986\") " pod="kube-system/cilium-j8x5m" Feb 12 21:57:10.277694 kubelet[2960]: I0212 21:57:10.277488 2960 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: 
\"kubernetes.io/host-path/1dcc1944-d311-4675-847c-b5f3f2ece986-host-proc-sys-kernel\") pod \"cilium-j8x5m\" (UID: \"1dcc1944-d311-4675-847c-b5f3f2ece986\") " pod="kube-system/cilium-j8x5m" Feb 12 21:57:10.277694 kubelet[2960]: I0212 21:57:10.277517 2960 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/1dcc1944-d311-4675-847c-b5f3f2ece986-cilium-ipsec-secrets\") pod \"cilium-j8x5m\" (UID: \"1dcc1944-d311-4675-847c-b5f3f2ece986\") " pod="kube-system/cilium-j8x5m" Feb 12 21:57:10.277694 kubelet[2960]: I0212 21:57:10.277547 2960 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1dcc1944-d311-4675-847c-b5f3f2ece986-hubble-tls\") pod \"cilium-j8x5m\" (UID: \"1dcc1944-d311-4675-847c-b5f3f2ece986\") " pod="kube-system/cilium-j8x5m" Feb 12 21:57:10.277694 kubelet[2960]: I0212 21:57:10.277573 2960 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1dcc1944-d311-4675-847c-b5f3f2ece986-hostproc\") pod \"cilium-j8x5m\" (UID: \"1dcc1944-d311-4675-847c-b5f3f2ece986\") " pod="kube-system/cilium-j8x5m" Feb 12 21:57:10.277694 kubelet[2960]: I0212 21:57:10.277605 2960 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1dcc1944-d311-4675-847c-b5f3f2ece986-cilium-config-path\") pod \"cilium-j8x5m\" (UID: \"1dcc1944-d311-4675-847c-b5f3f2ece986\") " pod="kube-system/cilium-j8x5m" Feb 12 21:57:10.277694 kubelet[2960]: I0212 21:57:10.277647 2960 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1dcc1944-d311-4675-847c-b5f3f2ece986-host-proc-sys-net\") pod \"cilium-j8x5m\" (UID: 
\"1dcc1944-d311-4675-847c-b5f3f2ece986\") " pod="kube-system/cilium-j8x5m" Feb 12 21:57:10.278119 kubelet[2960]: I0212 21:57:10.277679 2960 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1dcc1944-d311-4675-847c-b5f3f2ece986-cilium-run\") pod \"cilium-j8x5m\" (UID: \"1dcc1944-d311-4675-847c-b5f3f2ece986\") " pod="kube-system/cilium-j8x5m" Feb 12 21:57:10.278119 kubelet[2960]: I0212 21:57:10.277710 2960 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1dcc1944-d311-4675-847c-b5f3f2ece986-cilium-cgroup\") pod \"cilium-j8x5m\" (UID: \"1dcc1944-d311-4675-847c-b5f3f2ece986\") " pod="kube-system/cilium-j8x5m" Feb 12 21:57:10.278119 kubelet[2960]: I0212 21:57:10.277740 2960 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1dcc1944-d311-4675-847c-b5f3f2ece986-lib-modules\") pod \"cilium-j8x5m\" (UID: \"1dcc1944-d311-4675-847c-b5f3f2ece986\") " pod="kube-system/cilium-j8x5m" Feb 12 21:57:10.278119 kubelet[2960]: I0212 21:57:10.277773 2960 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1dcc1944-d311-4675-847c-b5f3f2ece986-clustermesh-secrets\") pod \"cilium-j8x5m\" (UID: \"1dcc1944-d311-4675-847c-b5f3f2ece986\") " pod="kube-system/cilium-j8x5m" Feb 12 21:57:10.278119 kubelet[2960]: I0212 21:57:10.277804 2960 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1dcc1944-d311-4675-847c-b5f3f2ece986-cni-path\") pod \"cilium-j8x5m\" (UID: \"1dcc1944-d311-4675-847c-b5f3f2ece986\") " pod="kube-system/cilium-j8x5m" Feb 12 21:57:10.278119 kubelet[2960]: I0212 21:57:10.277836 2960 
reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1dcc1944-d311-4675-847c-b5f3f2ece986-etc-cni-netd\") pod \"cilium-j8x5m\" (UID: \"1dcc1944-d311-4675-847c-b5f3f2ece986\") " pod="kube-system/cilium-j8x5m"
Feb 12 21:57:10.278349 kubelet[2960]: I0212 21:57:10.277866 2960 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6wnfj\" (UniqueName: \"kubernetes.io/projected/1dcc1944-d311-4675-847c-b5f3f2ece986-kube-api-access-6wnfj\") pod \"cilium-j8x5m\" (UID: \"1dcc1944-d311-4675-847c-b5f3f2ece986\") " pod="kube-system/cilium-j8x5m"
Feb 12 21:57:10.525017 env[1721]: time="2024-02-12T21:57:10.524967516Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-j8x5m,Uid:1dcc1944-d311-4675-847c-b5f3f2ece986,Namespace:kube-system,Attempt:0,}"
Feb 12 21:57:10.568801 env[1721]: time="2024-02-12T21:57:10.568631187Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 12 21:57:10.568801 env[1721]: time="2024-02-12T21:57:10.568725127Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 12 21:57:10.568801 env[1721]: time="2024-02-12T21:57:10.568742593Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 12 21:57:10.569514 env[1721]: time="2024-02-12T21:57:10.569458153Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/1c51a47b64bb20a1b2e708ea6954d05f0fe9bc7812641948ac890a99ba0bd981 pid=5070 runtime=io.containerd.runc.v2
Feb 12 21:57:10.638547 env[1721]: time="2024-02-12T21:57:10.637380241Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-j8x5m,Uid:1dcc1944-d311-4675-847c-b5f3f2ece986,Namespace:kube-system,Attempt:0,} returns sandbox id \"1c51a47b64bb20a1b2e708ea6954d05f0fe9bc7812641948ac890a99ba0bd981\""
Feb 12 21:57:10.643964 env[1721]: time="2024-02-12T21:57:10.643916001Z" level=info msg="CreateContainer within sandbox \"1c51a47b64bb20a1b2e708ea6954d05f0fe9bc7812641948ac890a99ba0bd981\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Feb 12 21:57:10.667826 env[1721]: time="2024-02-12T21:57:10.667783401Z" level=info msg="CreateContainer within sandbox \"1c51a47b64bb20a1b2e708ea6954d05f0fe9bc7812641948ac890a99ba0bd981\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"e5a5ccbf697b6553e96b924495a6414284edd0e948b5deeda06e65a7771c47bb\""
Feb 12 21:57:10.669186 env[1721]: time="2024-02-12T21:57:10.669113085Z" level=info msg="StartContainer for \"e5a5ccbf697b6553e96b924495a6414284edd0e948b5deeda06e65a7771c47bb\""
Feb 12 21:57:10.763875 env[1721]: time="2024-02-12T21:57:10.760214152Z" level=info msg="StartContainer for \"e5a5ccbf697b6553e96b924495a6414284edd0e948b5deeda06e65a7771c47bb\" returns successfully"
Feb 12 21:57:10.849083 env[1721]: time="2024-02-12T21:57:10.848947303Z" level=info msg="shim disconnected" id=e5a5ccbf697b6553e96b924495a6414284edd0e948b5deeda06e65a7771c47bb
Feb 12 21:57:10.849083 env[1721]: time="2024-02-12T21:57:10.849006565Z" level=warning msg="cleaning up after shim disconnected" id=e5a5ccbf697b6553e96b924495a6414284edd0e948b5deeda06e65a7771c47bb namespace=k8s.io
Feb 12 21:57:10.849083 env[1721]: time="2024-02-12T21:57:10.849019101Z" level=info msg="cleaning up dead shim"
Feb 12 21:57:10.869615 env[1721]: time="2024-02-12T21:57:10.869569276Z" level=warning msg="cleanup warnings time=\"2024-02-12T21:57:10Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5158 runtime=io.containerd.runc.v2\n"
Feb 12 21:57:11.161396 env[1721]: time="2024-02-12T21:57:11.161260141Z" level=info msg="CreateContainer within sandbox \"1c51a47b64bb20a1b2e708ea6954d05f0fe9bc7812641948ac890a99ba0bd981\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Feb 12 21:57:11.216068 env[1721]: time="2024-02-12T21:57:11.216013938Z" level=info msg="CreateContainer within sandbox \"1c51a47b64bb20a1b2e708ea6954d05f0fe9bc7812641948ac890a99ba0bd981\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"602bc3bdd6cc464a904b63591a7c0ade7e002b5bc52154362869a96fd9650e4c\""
Feb 12 21:57:11.219385 env[1721]: time="2024-02-12T21:57:11.219347404Z" level=info msg="StartContainer for \"602bc3bdd6cc464a904b63591a7c0ade7e002b5bc52154362869a96fd9650e4c\""
Feb 12 21:57:11.326512 env[1721]: time="2024-02-12T21:57:11.326454514Z" level=info msg="StartContainer for \"602bc3bdd6cc464a904b63591a7c0ade7e002b5bc52154362869a96fd9650e4c\" returns successfully"
Feb 12 21:57:11.394875 env[1721]: time="2024-02-12T21:57:11.394828801Z" level=info msg="shim disconnected" id=602bc3bdd6cc464a904b63591a7c0ade7e002b5bc52154362869a96fd9650e4c
Feb 12 21:57:11.395172 env[1721]: time="2024-02-12T21:57:11.395147844Z" level=warning msg="cleaning up after shim disconnected" id=602bc3bdd6cc464a904b63591a7c0ade7e002b5bc52154362869a96fd9650e4c namespace=k8s.io
Feb 12 21:57:11.395253 env[1721]: time="2024-02-12T21:57:11.395237778Z" level=info msg="cleaning up dead shim"
Feb 12 21:57:11.406886 env[1721]: time="2024-02-12T21:57:11.406838463Z" level=warning msg="cleanup warnings time=\"2024-02-12T21:57:11Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5218 runtime=io.containerd.runc.v2\n"
Feb 12 21:57:11.592602 kubelet[2960]: I0212 21:57:11.592502 2960 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=7406c106-7700-40d1-a990-ebd98af6e3ad path="/var/lib/kubelet/pods/7406c106-7700-40d1-a990-ebd98af6e3ad/volumes"
Feb 12 21:57:12.169421 env[1721]: time="2024-02-12T21:57:12.169364742Z" level=info msg="CreateContainer within sandbox \"1c51a47b64bb20a1b2e708ea6954d05f0fe9bc7812641948ac890a99ba0bd981\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Feb 12 21:57:12.207175 env[1721]: time="2024-02-12T21:57:12.207123130Z" level=info msg="CreateContainer within sandbox \"1c51a47b64bb20a1b2e708ea6954d05f0fe9bc7812641948ac890a99ba0bd981\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"8cd2ac25ae222bd9bbd7f55b14db52ea02aa56b9af3e0a5431b00423e22f0fb0\""
Feb 12 21:57:12.208857 env[1721]: time="2024-02-12T21:57:12.207765991Z" level=info msg="StartContainer for \"8cd2ac25ae222bd9bbd7f55b14db52ea02aa56b9af3e0a5431b00423e22f0fb0\""
Feb 12 21:57:12.293852 env[1721]: time="2024-02-12T21:57:12.293807227Z" level=info msg="StartContainer for \"8cd2ac25ae222bd9bbd7f55b14db52ea02aa56b9af3e0a5431b00423e22f0fb0\" returns successfully"
Feb 12 21:57:12.350316 env[1721]: time="2024-02-12T21:57:12.350234472Z" level=info msg="shim disconnected" id=8cd2ac25ae222bd9bbd7f55b14db52ea02aa56b9af3e0a5431b00423e22f0fb0
Feb 12 21:57:12.350316 env[1721]: time="2024-02-12T21:57:12.350316848Z" level=warning msg="cleaning up after shim disconnected" id=8cd2ac25ae222bd9bbd7f55b14db52ea02aa56b9af3e0a5431b00423e22f0fb0 namespace=k8s.io
Feb 12 21:57:12.350629 env[1721]: time="2024-02-12T21:57:12.350329338Z" level=info msg="cleaning up dead shim"
Feb 12 21:57:12.359699 env[1721]: time="2024-02-12T21:57:12.359641026Z" level=warning msg="cleanup warnings time=\"2024-02-12T21:57:12Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5277 runtime=io.containerd.runc.v2\n"
Feb 12 21:57:12.386735 systemd[1]: run-containerd-runc-k8s.io-8cd2ac25ae222bd9bbd7f55b14db52ea02aa56b9af3e0a5431b00423e22f0fb0-runc.sxXmC7.mount: Deactivated successfully.
Feb 12 21:57:12.386922 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8cd2ac25ae222bd9bbd7f55b14db52ea02aa56b9af3e0a5431b00423e22f0fb0-rootfs.mount: Deactivated successfully.
Feb 12 21:57:13.168778 env[1721]: time="2024-02-12T21:57:13.168736019Z" level=info msg="CreateContainer within sandbox \"1c51a47b64bb20a1b2e708ea6954d05f0fe9bc7812641948ac890a99ba0bd981\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Feb 12 21:57:13.215876 env[1721]: time="2024-02-12T21:57:13.215804490Z" level=info msg="CreateContainer within sandbox \"1c51a47b64bb20a1b2e708ea6954d05f0fe9bc7812641948ac890a99ba0bd981\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"29103b95f261a9ed5e9aeccf218418194f5674691cf2ce24b1edf732bf66d16b\""
Feb 12 21:57:13.217453 env[1721]: time="2024-02-12T21:57:13.217415710Z" level=info msg="StartContainer for \"29103b95f261a9ed5e9aeccf218418194f5674691cf2ce24b1edf732bf66d16b\""
Feb 12 21:57:13.306494 env[1721]: time="2024-02-12T21:57:13.306416133Z" level=info msg="StartContainer for \"29103b95f261a9ed5e9aeccf218418194f5674691cf2ce24b1edf732bf66d16b\" returns successfully"
Feb 12 21:57:13.364485 env[1721]: time="2024-02-12T21:57:13.364385348Z" level=info msg="shim disconnected" id=29103b95f261a9ed5e9aeccf218418194f5674691cf2ce24b1edf732bf66d16b
Feb 12 21:57:13.364485 env[1721]: time="2024-02-12T21:57:13.364480354Z" level=warning msg="cleaning up after shim disconnected" id=29103b95f261a9ed5e9aeccf218418194f5674691cf2ce24b1edf732bf66d16b namespace=k8s.io
Feb 12 21:57:13.364830 env[1721]: time="2024-02-12T21:57:13.364495178Z" level=info msg="cleaning up dead shim"
Feb 12 21:57:13.386552 systemd[1]: run-containerd-runc-k8s.io-29103b95f261a9ed5e9aeccf218418194f5674691cf2ce24b1edf732bf66d16b-runc.jJwvcb.mount: Deactivated successfully.
Feb 12 21:57:13.386749 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-29103b95f261a9ed5e9aeccf218418194f5674691cf2ce24b1edf732bf66d16b-rootfs.mount: Deactivated successfully.
Feb 12 21:57:13.388804 env[1721]: time="2024-02-12T21:57:13.388593587Z" level=warning msg="cleanup warnings time=\"2024-02-12T21:57:13Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5335 runtime=io.containerd.runc.v2\n"
Feb 12 21:57:13.760800 kubelet[2960]: E0212 21:57:13.760557 2960 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 12 21:57:14.196406 env[1721]: time="2024-02-12T21:57:14.196339846Z" level=info msg="CreateContainer within sandbox \"1c51a47b64bb20a1b2e708ea6954d05f0fe9bc7812641948ac890a99ba0bd981\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Feb 12 21:57:14.228026 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3123318819.mount: Deactivated successfully.
Feb 12 21:57:14.230788 env[1721]: time="2024-02-12T21:57:14.230742589Z" level=info msg="CreateContainer within sandbox \"1c51a47b64bb20a1b2e708ea6954d05f0fe9bc7812641948ac890a99ba0bd981\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"d3c878eb5aa4a3cd367253c33952bc5fb62adcf110bb3f7e7bd207d4f35a01e7\""
Feb 12 21:57:14.234954 env[1721]: time="2024-02-12T21:57:14.231705701Z" level=info msg="StartContainer for \"d3c878eb5aa4a3cd367253c33952bc5fb62adcf110bb3f7e7bd207d4f35a01e7\""
Feb 12 21:57:14.334229 env[1721]: time="2024-02-12T21:57:14.334156614Z" level=info msg="StartContainer for \"d3c878eb5aa4a3cd367253c33952bc5fb62adcf110bb3f7e7bd207d4f35a01e7\" returns successfully"
Feb 12 21:57:14.388844 systemd[1]: run-containerd-runc-k8s.io-d3c878eb5aa4a3cd367253c33952bc5fb62adcf110bb3f7e7bd207d4f35a01e7-runc.uqEfD6.mount: Deactivated successfully.
Feb 12 21:57:15.357292 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Feb 12 21:57:15.972001 kubelet[2960]: I0212 21:57:15.971959 2960 setters.go:548] "Node became not ready" node="ip-172-31-30-174" condition={Type:Ready Status:False LastHeartbeatTime:2024-02-12 21:57:15.971911139 +0000 UTC m=+142.855266526 LastTransitionTime:2024-02-12 21:57:15.971911139 +0000 UTC m=+142.855266526 Reason:KubeletNotReady Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized}
Feb 12 21:57:17.780826 kubelet[2960]: E0212 21:57:17.780790 2960 upgradeaware.go:440] Error proxying data from backend to client: read tcp 127.0.0.1:55210->127.0.0.1:42243: read: connection reset by peer
Feb 12 21:57:18.643152 (udev-worker)[5922]: Network interface NamePolicy= disabled on kernel command line.
Feb 12 21:57:18.647334 (udev-worker)[5923]: Network interface NamePolicy= disabled on kernel command line.
Feb 12 21:57:18.666821 systemd-networkd[1507]: lxc_health: Link UP
Feb 12 21:57:18.676859 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Feb 12 21:57:18.676712 systemd-networkd[1507]: lxc_health: Gained carrier
Feb 12 21:57:19.913782 systemd[1]: run-containerd-runc-k8s.io-d3c878eb5aa4a3cd367253c33952bc5fb62adcf110bb3f7e7bd207d4f35a01e7-runc.P1CanK.mount: Deactivated successfully.
Feb 12 21:57:20.550398 systemd-networkd[1507]: lxc_health: Gained IPv6LL
Feb 12 21:57:20.572726 kubelet[2960]: I0212 21:57:20.572689 2960 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-j8x5m" podStartSLOduration=10.57131142 pod.CreationTimestamp="2024-02-12 21:57:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 21:57:15.212659259 +0000 UTC m=+142.096014660" watchObservedRunningTime="2024-02-12 21:57:20.57131142 +0000 UTC m=+147.454666821"
Feb 12 21:57:22.261199 systemd[1]: run-containerd-runc-k8s.io-d3c878eb5aa4a3cd367253c33952bc5fb62adcf110bb3f7e7bd207d4f35a01e7-runc.n44tz1.mount: Deactivated successfully.
Feb 12 21:57:23.120798 update_engine[1710]: I0212 21:57:23.120735 1710 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs
Feb 12 21:57:23.121378 update_engine[1710]: I0212 21:57:23.120817 1710 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs
Feb 12 21:57:23.126182 update_engine[1710]: I0212 21:57:23.126142 1710 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs
Feb 12 21:57:23.127507 update_engine[1710]: I0212 21:57:23.127478 1710 omaha_request_params.cc:62] Current group set to lts
Feb 12 21:57:23.131870 update_engine[1710]: I0212 21:57:23.131706 1710 update_attempter.cc:499] Already updated boot flags. Skipping.
Feb 12 21:57:23.131870 update_engine[1710]: I0212 21:57:23.131726 1710 update_attempter.cc:643] Scheduling an action processor start.
Feb 12 21:57:23.131870 update_engine[1710]: I0212 21:57:23.131748 1710 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Feb 12 21:57:23.135569 update_engine[1710]: I0212 21:57:23.135533 1710 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs
Feb 12 21:57:23.135693 update_engine[1710]: I0212 21:57:23.135651 1710 omaha_request_action.cc:270] Posting an Omaha request to disabled
Feb 12 21:57:23.135693 update_engine[1710]: I0212 21:57:23.135660 1710 omaha_request_action.cc:271] Request:
Feb 12 21:57:23.135693 update_engine[1710]:
Feb 12 21:57:23.135693 update_engine[1710]:
Feb 12 21:57:23.135693 update_engine[1710]:
Feb 12 21:57:23.135693 update_engine[1710]:
Feb 12 21:57:23.135693 update_engine[1710]:
Feb 12 21:57:23.135693 update_engine[1710]:
Feb 12 21:57:23.135693 update_engine[1710]:
Feb 12 21:57:23.135693 update_engine[1710]:
Feb 12 21:57:23.135693 update_engine[1710]: I0212 21:57:23.135666 1710 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Feb 12 21:57:23.152118 update_engine[1710]: I0212 21:57:23.152078 1710 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Feb 12 21:57:23.154236 update_engine[1710]: I0212 21:57:23.154198 1710 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Feb 12 21:57:23.183127 locksmithd[1780]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0
Feb 12 21:57:23.241790 update_engine[1710]: E0212 21:57:23.241613 1710 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Feb 12 21:57:23.241790 update_engine[1710]: I0212 21:57:23.241748 1710 libcurl_http_fetcher.cc:283] No HTTP response, retry 1
Feb 12 21:57:24.505091 systemd[1]: run-containerd-runc-k8s.io-d3c878eb5aa4a3cd367253c33952bc5fb62adcf110bb3f7e7bd207d4f35a01e7-runc.ZQjxVP.mount: Deactivated successfully.
Feb 12 21:57:24.721033 sshd[4980]: pam_unix(sshd:session): session closed for user core
Feb 12 21:57:24.727114 systemd-logind[1709]: Session 29 logged out. Waiting for processes to exit.
Feb 12 21:57:24.729117 systemd[1]: sshd@28-172.31.30.174:22-139.178.89.65:40884.service: Deactivated successfully.
Feb 12 21:57:24.730380 systemd[1]: session-29.scope: Deactivated successfully.
Feb 12 21:57:24.732453 systemd-logind[1709]: Removed session 29.
Feb 12 21:57:33.106868 update_engine[1710]: I0212 21:57:33.106802 1710 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Feb 12 21:57:33.107377 update_engine[1710]: I0212 21:57:33.107074 1710 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Feb 12 21:57:33.107377 update_engine[1710]: I0212 21:57:33.107322 1710 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Feb 12 21:57:33.107716 update_engine[1710]: E0212 21:57:33.107693 1710 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Feb 12 21:57:33.107811 update_engine[1710]: I0212 21:57:33.107793 1710 libcurl_http_fetcher.cc:283] No HTTP response, retry 2
Feb 12 21:57:39.170459 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-12103babb5c780d35e89f0f3058c084da5c499e12227bbf79663186964279dc2-rootfs.mount: Deactivated successfully.
Feb 12 21:57:39.201245 env[1721]: time="2024-02-12T21:57:39.201184890Z" level=info msg="shim disconnected" id=12103babb5c780d35e89f0f3058c084da5c499e12227bbf79663186964279dc2
Feb 12 21:57:39.201245 env[1721]: time="2024-02-12T21:57:39.201240998Z" level=warning msg="cleaning up after shim disconnected" id=12103babb5c780d35e89f0f3058c084da5c499e12227bbf79663186964279dc2 namespace=k8s.io
Feb 12 21:57:39.201245 env[1721]: time="2024-02-12T21:57:39.201255126Z" level=info msg="cleaning up dead shim"
Feb 12 21:57:39.211729 env[1721]: time="2024-02-12T21:57:39.211680513Z" level=warning msg="cleanup warnings time=\"2024-02-12T21:57:39Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=6037 runtime=io.containerd.runc.v2\n"
Feb 12 21:57:39.262566 kubelet[2960]: I0212 21:57:39.262433 2960 scope.go:115] "RemoveContainer" containerID="12103babb5c780d35e89f0f3058c084da5c499e12227bbf79663186964279dc2"
Feb 12 21:57:39.265713 env[1721]: time="2024-02-12T21:57:39.265664532Z" level=info msg="CreateContainer within sandbox \"cb267c3c3340aaf280056dbacbfe67560c878ad199d969131316bbd9bb7ca799\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Feb 12 21:57:39.298314 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2107220496.mount: Deactivated successfully.
Feb 12 21:57:39.300209 env[1721]: time="2024-02-12T21:57:39.300162859Z" level=info msg="CreateContainer within sandbox \"cb267c3c3340aaf280056dbacbfe67560c878ad199d969131316bbd9bb7ca799\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"e7b6777803657b4064538dfa434296b28cceb4aa296f86c4eda5c35cb067d94c\""
Feb 12 21:57:39.300878 env[1721]: time="2024-02-12T21:57:39.300847858Z" level=info msg="StartContainer for \"e7b6777803657b4064538dfa434296b28cceb4aa296f86c4eda5c35cb067d94c\""
Feb 12 21:57:39.409247 env[1721]: time="2024-02-12T21:57:39.409187337Z" level=info msg="StartContainer for \"e7b6777803657b4064538dfa434296b28cceb4aa296f86c4eda5c35cb067d94c\" returns successfully"
Feb 12 21:57:43.108922 update_engine[1710]: I0212 21:57:43.108860 1710 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Feb 12 21:57:43.109425 update_engine[1710]: I0212 21:57:43.109133 1710 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Feb 12 21:57:43.109425 update_engine[1710]: I0212 21:57:43.109380 1710 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Feb 12 21:57:43.109862 update_engine[1710]: E0212 21:57:43.109838 1710 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Feb 12 21:57:43.109973 update_engine[1710]: I0212 21:57:43.109964 1710 libcurl_http_fetcher.cc:283] No HTTP response, retry 3
Feb 12 21:57:43.706873 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-313ea9fb70a56ca296f5612e4953af094306f526fc3452759f55b55d4cdc9339-rootfs.mount: Deactivated successfully.
Feb 12 21:57:43.735184 env[1721]: time="2024-02-12T21:57:43.735123588Z" level=info msg="shim disconnected" id=313ea9fb70a56ca296f5612e4953af094306f526fc3452759f55b55d4cdc9339
Feb 12 21:57:43.735184 env[1721]: time="2024-02-12T21:57:43.735182677Z" level=warning msg="cleaning up after shim disconnected" id=313ea9fb70a56ca296f5612e4953af094306f526fc3452759f55b55d4cdc9339 namespace=k8s.io
Feb 12 21:57:43.735824 env[1721]: time="2024-02-12T21:57:43.735194553Z" level=info msg="cleaning up dead shim"
Feb 12 21:57:43.744972 env[1721]: time="2024-02-12T21:57:43.744925745Z" level=warning msg="cleanup warnings time=\"2024-02-12T21:57:43Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=6097 runtime=io.containerd.runc.v2\n"
Feb 12 21:57:44.276448 kubelet[2960]: I0212 21:57:44.276409 2960 scope.go:115] "RemoveContainer" containerID="313ea9fb70a56ca296f5612e4953af094306f526fc3452759f55b55d4cdc9339"
Feb 12 21:57:44.280391 env[1721]: time="2024-02-12T21:57:44.280347396Z" level=info msg="CreateContainer within sandbox \"7aa0303bf8e62833cdfc85cbdf596c5a1c620883b4d429a9dda87ddb61fcc285\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Feb 12 21:57:44.300681 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1382551985.mount: Deactivated successfully.
Feb 12 21:57:44.311290 env[1721]: time="2024-02-12T21:57:44.311224117Z" level=info msg="CreateContainer within sandbox \"7aa0303bf8e62833cdfc85cbdf596c5a1c620883b4d429a9dda87ddb61fcc285\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"f14e5ae5346d3b24cd1a38a4c2c4602b836db4435464ccb7959d45722a7d2eef\""
Feb 12 21:57:44.311959 env[1721]: time="2024-02-12T21:57:44.311924177Z" level=info msg="StartContainer for \"f14e5ae5346d3b24cd1a38a4c2c4602b836db4435464ccb7959d45722a7d2eef\""
Feb 12 21:57:44.470919 env[1721]: time="2024-02-12T21:57:44.470606728Z" level=info msg="StartContainer for \"f14e5ae5346d3b24cd1a38a4c2c4602b836db4435464ccb7959d45722a7d2eef\" returns successfully"
Feb 12 21:57:46.938307 kubelet[2960]: E0212 21:57:46.937415 2960 controller.go:189] failed to update lease, error: Put "https://172.31.30.174:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-30-174?timeout=10s": net/http: request canceled (Client.Timeout exceeded while awaiting headers)