Dec 13 14:27:12.110495 kernel: Linux version 5.15.173-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Fri Dec 13 12:55:10 -00 2024
Dec 13 14:27:12.110518 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=8c474c3ec361ec863adbecaa85281a726e1b53f7863ecc4742be8c5f6d02a66e
Dec 13 14:27:12.110528 kernel: BIOS-provided physical RAM map:
Dec 13 14:27:12.110534 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Dec 13 14:27:12.110540 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Dec 13 14:27:12.110546 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Dec 13 14:27:12.110555 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007d9e9fff] usable
Dec 13 14:27:12.110562 kernel: BIOS-e820: [mem 0x000000007d9ea000-0x000000007fffffff] reserved
Dec 13 14:27:12.110568 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000e03fffff] reserved
Dec 13 14:27:12.110574 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Dec 13 14:27:12.110580 kernel: NX (Execute Disable) protection: active
Dec 13 14:27:12.110586 kernel: SMBIOS 2.7 present.
Dec 13 14:27:12.110592 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017
Dec 13 14:27:12.110599 kernel: Hypervisor detected: KVM
Dec 13 14:27:12.110609 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Dec 13 14:27:12.110616 kernel: kvm-clock: cpu 0, msr 4419a001, primary cpu clock
Dec 13 14:27:12.110623 kernel: kvm-clock: using sched offset of 8940339839 cycles
Dec 13 14:27:12.110630 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Dec 13 14:27:12.110637 kernel: tsc: Detected 2499.996 MHz processor
Dec 13 14:27:12.110644 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Dec 13 14:27:12.110654 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Dec 13 14:27:12.110661 kernel: last_pfn = 0x7d9ea max_arch_pfn = 0x400000000
Dec 13 14:27:12.110668 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Dec 13 14:27:12.110675 kernel: Using GB pages for direct mapping
Dec 13 14:27:12.110682 kernel: ACPI: Early table checksum verification disabled
Dec 13 14:27:12.110689 kernel: ACPI: RSDP 0x00000000000F8F40 000014 (v00 AMAZON)
Dec 13 14:27:12.110696 kernel: ACPI: RSDT 0x000000007D9EE350 000044 (v01 AMAZON AMZNRSDT 00000001 AMZN 00000001)
Dec 13 14:27:12.110703 kernel: ACPI: FACP 0x000000007D9EFF80 000074 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001)
Dec 13 14:27:12.110709 kernel: ACPI: DSDT 0x000000007D9EE3A0 0010E9 (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001)
Dec 13 14:27:12.110718 kernel: ACPI: FACS 0x000000007D9EFF40 000040
Dec 13 14:27:12.110725 kernel: ACPI: SSDT 0x000000007D9EF6C0 00087A (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Dec 13 14:27:12.110732 kernel: ACPI: APIC 0x000000007D9EF5D0 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Dec 13 14:27:12.110739 kernel: ACPI: SRAT 0x000000007D9EF530 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001)
Dec 13 14:27:12.110746 kernel: ACPI: SLIT 0x000000007D9EF4C0 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Dec 13 14:27:12.110752 kernel: ACPI: WAET 0x000000007D9EF490 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001)
Dec 13 14:27:12.110759 kernel: ACPI: HPET 0x00000000000C9000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001)
Dec 13 14:27:12.110766 kernel: ACPI: SSDT 0x00000000000C9040 00007B (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Dec 13 14:27:12.110775 kernel: ACPI: Reserving FACP table memory at [mem 0x7d9eff80-0x7d9efff3]
Dec 13 14:27:12.110782 kernel: ACPI: Reserving DSDT table memory at [mem 0x7d9ee3a0-0x7d9ef488]
Dec 13 14:27:12.110789 kernel: ACPI: Reserving FACS table memory at [mem 0x7d9eff40-0x7d9eff7f]
Dec 13 14:27:12.110799 kernel: ACPI: Reserving SSDT table memory at [mem 0x7d9ef6c0-0x7d9eff39]
Dec 13 14:27:12.110806 kernel: ACPI: Reserving APIC table memory at [mem 0x7d9ef5d0-0x7d9ef645]
Dec 13 14:27:12.110813 kernel: ACPI: Reserving SRAT table memory at [mem 0x7d9ef530-0x7d9ef5cf]
Dec 13 14:27:12.110821 kernel: ACPI: Reserving SLIT table memory at [mem 0x7d9ef4c0-0x7d9ef52b]
Dec 13 14:27:12.110831 kernel: ACPI: Reserving WAET table memory at [mem 0x7d9ef490-0x7d9ef4b7]
Dec 13 14:27:12.110838 kernel: ACPI: Reserving HPET table memory at [mem 0xc9000-0xc9037]
Dec 13 14:27:12.110845 kernel: ACPI: Reserving SSDT table memory at [mem 0xc9040-0xc90ba]
Dec 13 14:27:12.110923 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Dec 13 14:27:12.110931 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Dec 13 14:27:12.110939 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff]
Dec 13 14:27:12.110946 kernel: NUMA: Initialized distance table, cnt=1
Dec 13 14:27:12.110981 kernel: NODE_DATA(0) allocated [mem 0x7d9e3000-0x7d9e8fff]
Dec 13 14:27:12.110997 kernel: Zone ranges:
Dec 13 14:27:12.111005 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Dec 13 14:27:12.111013 kernel: DMA32 [mem 0x0000000001000000-0x000000007d9e9fff]
Dec 13 14:27:12.111021 kernel: Normal empty
Dec 13 14:27:12.111029 kernel: Movable zone start for each node
Dec 13 14:27:12.111036 kernel: Early memory node ranges
Dec 13 14:27:12.111043 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Dec 13 14:27:12.111051 kernel: node 0: [mem 0x0000000000100000-0x000000007d9e9fff]
Dec 13 14:27:12.111058 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007d9e9fff]
Dec 13 14:27:12.111068 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec 13 14:27:12.111076 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Dec 13 14:27:12.111083 kernel: On node 0, zone DMA32: 9750 pages in unavailable ranges
Dec 13 14:27:12.111090 kernel: ACPI: PM-Timer IO Port: 0xb008
Dec 13 14:27:12.111098 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Dec 13 14:27:12.111105 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23
Dec 13 14:27:12.111113 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Dec 13 14:27:12.111120 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Dec 13 14:27:12.111128 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Dec 13 14:27:12.111138 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Dec 13 14:27:12.111145 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Dec 13 14:27:12.111153 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Dec 13 14:27:12.111217 kernel: TSC deadline timer available
Dec 13 14:27:12.111227 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Dec 13 14:27:12.111247 kernel: [mem 0x80000000-0xdfffffff] available for PCI devices
Dec 13 14:27:12.111255 kernel: Booting paravirtualized kernel on KVM
Dec 13 14:27:12.111263 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Dec 13 14:27:12.111270 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:2 nr_node_ids:1
Dec 13 14:27:12.111278 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u1048576
Dec 13 14:27:12.111289 kernel: pcpu-alloc: s188696 r8192 d32488 u1048576 alloc=1*2097152
Dec 13 14:27:12.111296 kernel: pcpu-alloc: [0] 0 1
Dec 13 14:27:12.111303 kernel: kvm-guest: stealtime: cpu 0, msr 7b61c0c0
Dec 13 14:27:12.111311 kernel: kvm-guest: PV spinlocks enabled
Dec 13 14:27:12.111318 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Dec 13 14:27:12.111326 kernel: Built 1 zonelists, mobility grouping on. Total pages: 506242
Dec 13 14:27:12.111333 kernel: Policy zone: DMA32
Dec 13 14:27:12.111342 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=8c474c3ec361ec863adbecaa85281a726e1b53f7863ecc4742be8c5f6d02a66e
Dec 13 14:27:12.111352 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Dec 13 14:27:12.111359 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 13 14:27:12.111367 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Dec 13 14:27:12.111374 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 13 14:27:12.111382 kernel: Memory: 1934420K/2057760K available (12294K kernel code, 2275K rwdata, 13716K rodata, 47472K init, 4112K bss, 123080K reserved, 0K cma-reserved)
Dec 13 14:27:12.111389 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Dec 13 14:27:12.111397 kernel: Kernel/User page tables isolation: enabled
Dec 13 14:27:12.111404 kernel: ftrace: allocating 34549 entries in 135 pages
Dec 13 14:27:12.111414 kernel: ftrace: allocated 135 pages with 4 groups
Dec 13 14:27:12.111421 kernel: rcu: Hierarchical RCU implementation.
Dec 13 14:27:12.111429 kernel: rcu: RCU event tracing is enabled.
Dec 13 14:27:12.111437 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Dec 13 14:27:12.111445 kernel: Rude variant of Tasks RCU enabled.
Dec 13 14:27:12.111453 kernel: Tracing variant of Tasks RCU enabled.
Dec 13 14:27:12.111460 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 13 14:27:12.111468 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Dec 13 14:27:12.111475 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Dec 13 14:27:12.111485 kernel: random: crng init done
Dec 13 14:27:12.111493 kernel: Console: colour VGA+ 80x25
Dec 13 14:27:12.111500 kernel: printk: console [ttyS0] enabled
Dec 13 14:27:12.111508 kernel: ACPI: Core revision 20210730
Dec 13 14:27:12.111515 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns
Dec 13 14:27:12.111523 kernel: APIC: Switch to symmetric I/O mode setup
Dec 13 14:27:12.111530 kernel: x2apic enabled
Dec 13 14:27:12.111537 kernel: Switched APIC routing to physical x2apic.
Dec 13 14:27:12.111545 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns
Dec 13 14:27:12.111552 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499996)
Dec 13 14:27:12.111562 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Dec 13 14:27:12.111569 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4
Dec 13 14:27:12.111577 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Dec 13 14:27:12.111592 kernel: Spectre V2 : Mitigation: Retpolines
Dec 13 14:27:12.111602 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Dec 13 14:27:12.111610 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Dec 13 14:27:12.111618 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Dec 13 14:27:12.111626 kernel: RETBleed: Vulnerable
Dec 13 14:27:12.111634 kernel: Speculative Store Bypass: Vulnerable
Dec 13 14:27:12.111641 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode
Dec 13 14:27:12.111649 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Dec 13 14:27:12.111656 kernel: GDS: Unknown: Dependent on hypervisor status
Dec 13 14:27:12.111664 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Dec 13 14:27:12.111674 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Dec 13 14:27:12.111682 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Dec 13 14:27:12.111690 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Dec 13 14:27:12.111698 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Dec 13 14:27:12.111705 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Dec 13 14:27:12.111713 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Dec 13 14:27:12.111723 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Dec 13 14:27:12.111731 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Dec 13 14:27:12.111739 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Dec 13 14:27:12.111746 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
Dec 13 14:27:12.111754 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
Dec 13 14:27:12.111762 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64
Dec 13 14:27:12.111769 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512
Dec 13 14:27:12.111777 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024
Dec 13 14:27:12.111785 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8
Dec 13 14:27:12.111793 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format.
Dec 13 14:27:12.111801 kernel: Freeing SMP alternatives memory: 32K
Dec 13 14:27:12.111810 kernel: pid_max: default: 32768 minimum: 301
Dec 13 14:27:12.111818 kernel: LSM: Security Framework initializing
Dec 13 14:27:12.111826 kernel: SELinux: Initializing.
Dec 13 14:27:12.111834 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Dec 13 14:27:12.111841 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Dec 13 14:27:12.111849 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7)
Dec 13 14:27:12.111857 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Dec 13 14:27:12.111865 kernel: signal: max sigframe size: 3632
Dec 13 14:27:12.111873 kernel: rcu: Hierarchical SRCU implementation.
Dec 13 14:27:12.111881 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Dec 13 14:27:12.111888 kernel: smp: Bringing up secondary CPUs ...
Dec 13 14:27:12.111899 kernel: x86: Booting SMP configuration:
Dec 13 14:27:12.111916 kernel: .... node #0, CPUs: #1
Dec 13 14:27:12.111924 kernel: kvm-clock: cpu 1, msr 4419a041, secondary cpu clock
Dec 13 14:27:12.111932 kernel: kvm-guest: stealtime: cpu 1, msr 7b71c0c0
Dec 13 14:27:12.111941 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Dec 13 14:27:12.111949 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Dec 13 14:27:12.111957 kernel: smp: Brought up 1 node, 2 CPUs
Dec 13 14:27:12.111965 kernel: smpboot: Max logical packages: 1
Dec 13 14:27:12.111975 kernel: smpboot: Total of 2 processors activated (9999.98 BogoMIPS)
Dec 13 14:27:12.111983 kernel: devtmpfs: initialized
Dec 13 14:27:12.111991 kernel: x86/mm: Memory block size: 128MB
Dec 13 14:27:12.111999 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 13 14:27:12.112007 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Dec 13 14:27:12.112015 kernel: pinctrl core: initialized pinctrl subsystem
Dec 13 14:27:12.112023 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 13 14:27:12.112031 kernel: audit: initializing netlink subsys (disabled)
Dec 13 14:27:12.112039 kernel: audit: type=2000 audit(1734100030.997:1): state=initialized audit_enabled=0 res=1
Dec 13 14:27:12.112049 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 13 14:27:12.112057 kernel: thermal_sys: Registered thermal governor 'user_space'
Dec 13 14:27:12.112065 kernel: cpuidle: using governor menu
Dec 13 14:27:12.112073 kernel: ACPI: bus type PCI registered
Dec 13 14:27:12.112081 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 13 14:27:12.112089 kernel: dca service started, version 1.12.1
Dec 13 14:27:12.112097 kernel: PCI: Using configuration type 1 for base access
Dec 13 14:27:12.112104 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Dec 13 14:27:12.112112 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Dec 13 14:27:12.112123 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Dec 13 14:27:12.112130 kernel: ACPI: Added _OSI(Module Device)
Dec 13 14:27:12.112138 kernel: ACPI: Added _OSI(Processor Device)
Dec 13 14:27:12.112146 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Dec 13 14:27:12.112154 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 13 14:27:12.112162 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Dec 13 14:27:12.112170 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Dec 13 14:27:12.112178 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Dec 13 14:27:12.112186 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Dec 13 14:27:12.112196 kernel: ACPI: Interpreter enabled
Dec 13 14:27:12.112204 kernel: ACPI: PM: (supports S0 S5)
Dec 13 14:27:12.112212 kernel: ACPI: Using IOAPIC for interrupt routing
Dec 13 14:27:12.112220 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Dec 13 14:27:12.112228 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F
Dec 13 14:27:12.112235 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec 13 14:27:12.112391 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Dec 13 14:27:12.112480 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
Dec 13 14:27:12.112493 kernel: acpiphp: Slot [3] registered
Dec 13 14:27:12.112501 kernel: acpiphp: Slot [4] registered
Dec 13 14:27:12.112509 kernel: acpiphp: Slot [5] registered
Dec 13 14:27:12.112517 kernel: acpiphp: Slot [6] registered
Dec 13 14:27:12.112525 kernel: acpiphp: Slot [7] registered
Dec 13 14:27:12.112533 kernel: acpiphp: Slot [8] registered
Dec 13 14:27:12.112541 kernel: acpiphp: Slot [9] registered
Dec 13 14:27:12.112548 kernel: acpiphp: Slot [10] registered
Dec 13 14:27:12.112556 kernel: acpiphp: Slot [11] registered
Dec 13 14:27:12.112567 kernel: acpiphp: Slot [12] registered
Dec 13 14:27:12.112575 kernel: acpiphp: Slot [13] registered
Dec 13 14:27:12.112583 kernel: acpiphp: Slot [14] registered
Dec 13 14:27:12.112590 kernel: acpiphp: Slot [15] registered
Dec 13 14:27:12.112598 kernel: acpiphp: Slot [16] registered
Dec 13 14:27:12.112606 kernel: acpiphp: Slot [17] registered
Dec 13 14:27:12.112614 kernel: acpiphp: Slot [18] registered
Dec 13 14:27:12.112622 kernel: acpiphp: Slot [19] registered
Dec 13 14:27:12.112630 kernel: acpiphp: Slot [20] registered
Dec 13 14:27:12.112640 kernel: acpiphp: Slot [21] registered
Dec 13 14:27:12.112648 kernel: acpiphp: Slot [22] registered
Dec 13 14:27:12.112656 kernel: acpiphp: Slot [23] registered
Dec 13 14:27:12.112663 kernel: acpiphp: Slot [24] registered
Dec 13 14:27:12.112671 kernel: acpiphp: Slot [25] registered
Dec 13 14:27:12.112679 kernel: acpiphp: Slot [26] registered
Dec 13 14:27:12.112686 kernel: acpiphp: Slot [27] registered
Dec 13 14:27:12.112694 kernel: acpiphp: Slot [28] registered
Dec 13 14:27:12.112702 kernel: acpiphp: Slot [29] registered
Dec 13 14:27:12.112710 kernel: acpiphp: Slot [30] registered
Dec 13 14:27:12.112720 kernel: acpiphp: Slot [31] registered
Dec 13 14:27:12.112728 kernel: PCI host bridge to bus 0000:00
Dec 13 14:27:12.112814 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Dec 13 14:27:12.112888 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Dec 13 14:27:12.112974 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Dec 13 14:27:12.113046 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Dec 13 14:27:12.113119 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec 13 14:27:12.113215 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Dec 13 14:27:12.113307 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Dec 13 14:27:12.113396 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000
Dec 13 14:27:12.113478 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Dec 13 14:27:12.113560 kernel: pci 0000:00:01.3: quirk: [io 0xb100-0xb10f] claimed by PIIX4 SMB
Dec 13 14:27:12.113640 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff
Dec 13 14:27:12.113720 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff
Dec 13 14:27:12.113803 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff
Dec 13 14:27:12.113884 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff
Dec 13 14:27:12.114198 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff
Dec 13 14:27:12.114285 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff
Dec 13 14:27:12.114370 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000
Dec 13 14:27:12.114452 kernel: pci 0000:00:03.0: reg 0x10: [mem 0xfe400000-0xfe7fffff pref]
Dec 13 14:27:12.114533 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref]
Dec 13 14:27:12.114619 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Dec 13 14:27:12.114709 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Dec 13 14:27:12.114791 kernel: pci 0000:00:04.0: reg 0x10: [mem 0xfebf0000-0xfebf3fff]
Dec 13 14:27:12.114875 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Dec 13 14:27:12.114968 kernel: pci 0000:00:05.0: reg 0x10: [mem 0xfebf4000-0xfebf7fff]
Dec 13 14:27:12.114979 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Dec 13 14:27:12.114990 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Dec 13 14:27:12.114999 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Dec 13 14:27:12.115007 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Dec 13 14:27:12.115015 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Dec 13 14:27:12.115023 kernel: iommu: Default domain type: Translated
Dec 13 14:27:12.115032 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Dec 13 14:27:12.115113 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device
Dec 13 14:27:12.115293 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Dec 13 14:27:12.115381 kernel: pci 0000:00:03.0: vgaarb: bridge control possible
Dec 13 14:27:12.115396 kernel: vgaarb: loaded
Dec 13 14:27:12.115404 kernel: pps_core: LinuxPPS API ver. 1 registered
Dec 13 14:27:12.115413 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Dec 13 14:27:12.115421 kernel: PTP clock support registered
Dec 13 14:27:12.115429 kernel: PCI: Using ACPI for IRQ routing
Dec 13 14:27:12.115437 kernel: PCI: pci_cache_line_size set to 64 bytes
Dec 13 14:27:12.115446 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Dec 13 14:27:12.115454 kernel: e820: reserve RAM buffer [mem 0x7d9ea000-0x7fffffff]
Dec 13 14:27:12.115462 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0
Dec 13 14:27:12.115472 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter
Dec 13 14:27:12.115480 kernel: clocksource: Switched to clocksource kvm-clock
Dec 13 14:27:12.115489 kernel: VFS: Disk quotas dquot_6.6.0
Dec 13 14:27:12.115497 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 13 14:27:12.115505 kernel: pnp: PnP ACPI init
Dec 13 14:27:12.115513 kernel: pnp: PnP ACPI: found 5 devices
Dec 13 14:27:12.115522 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Dec 13 14:27:12.115530 kernel: NET: Registered PF_INET protocol family
Dec 13 14:27:12.115540 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 13 14:27:12.115549 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Dec 13 14:27:12.115557 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 13 14:27:12.115565 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Dec 13 14:27:12.115573 kernel: TCP bind hash table entries: 16384 (order: 6, 262144 bytes, linear)
Dec 13 14:27:12.115582 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Dec 13 14:27:12.115590 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Dec 13 14:27:12.115598 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Dec 13 14:27:12.115606 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 13 14:27:12.115616 kernel: NET: Registered PF_XDP protocol family
Dec 13 14:27:12.115695 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Dec 13 14:27:12.115768 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Dec 13 14:27:12.115841 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Dec 13 14:27:12.115925 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Dec 13 14:27:12.116012 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Dec 13 14:27:12.116098 kernel: pci 0000:00:01.0: Activating ISA DMA hang workarounds
Dec 13 14:27:12.116111 kernel: PCI: CLS 0 bytes, default 64
Dec 13 14:27:12.116120 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Dec 13 14:27:12.116129 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns
Dec 13 14:27:12.116137 kernel: clocksource: Switched to clocksource tsc
Dec 13 14:27:12.116145 kernel: Initialise system trusted keyrings
Dec 13 14:27:12.116153 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Dec 13 14:27:12.116161 kernel: Key type asymmetric registered
Dec 13 14:27:12.116169 kernel: Asymmetric key parser 'x509' registered
Dec 13 14:27:12.116177 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Dec 13 14:27:12.116187 kernel: io scheduler mq-deadline registered
Dec 13 14:27:12.116195 kernel: io scheduler kyber registered
Dec 13 14:27:12.116203 kernel: io scheduler bfq registered
Dec 13 14:27:12.116211 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Dec 13 14:27:12.116219 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 13 14:27:12.116228 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Dec 13 14:27:12.116236 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Dec 13 14:27:12.116244 kernel: i8042: Warning: Keylock active
Dec 13 14:27:12.116252 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Dec 13 14:27:12.116260 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Dec 13 14:27:12.116348 kernel: rtc_cmos 00:00: RTC can wake from S4
Dec 13 14:27:12.116426 kernel: rtc_cmos 00:00: registered as rtc0
Dec 13 14:27:12.116502 kernel: rtc_cmos 00:00: setting system clock to 2024-12-13T14:27:11 UTC (1734100031)
Dec 13 14:27:12.116577 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Dec 13 14:27:12.116587 kernel: intel_pstate: CPU model not supported
Dec 13 14:27:12.116596 kernel: NET: Registered PF_INET6 protocol family
Dec 13 14:27:12.116666 kernel: Segment Routing with IPv6
Dec 13 14:27:12.116681 kernel: In-situ OAM (IOAM) with IPv6
Dec 13 14:27:12.116690 kernel: NET: Registered PF_PACKET protocol family
Dec 13 14:27:12.116698 kernel: Key type dns_resolver registered
Dec 13 14:27:12.116706 kernel: IPI shorthand broadcast: enabled
Dec 13 14:27:12.116715 kernel: sched_clock: Marking stable (624846171, 250525604)->(1115870022, -240498247)
Dec 13 14:27:12.116723 kernel: registered taskstats version 1
Dec 13 14:27:12.116732 kernel: Loading compiled-in X.509 certificates
Dec 13 14:27:12.116740 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.173-flatcar: e1d88c9e01f5bb2adeb5b99325e46e5ca8dff115'
Dec 13 14:27:12.116748 kernel: Key type .fscrypt registered
Dec 13 14:27:12.116759 kernel: Key type fscrypt-provisioning registered
Dec 13 14:27:12.116767 kernel: ima: No TPM chip found, activating TPM-bypass!
Dec 13 14:27:12.116775 kernel: ima: Allocated hash algorithm: sha1
Dec 13 14:27:12.116783 kernel: ima: No architecture policies found
Dec 13 14:27:12.116792 kernel: clk: Disabling unused clocks
Dec 13 14:27:12.116800 kernel: Freeing unused kernel image (initmem) memory: 47472K
Dec 13 14:27:12.116808 kernel: Write protecting the kernel read-only data: 28672k
Dec 13 14:27:12.116816 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
Dec 13 14:27:12.116824 kernel: Freeing unused kernel image (rodata/data gap) memory: 620K
Dec 13 14:27:12.116834 kernel: Run /init as init process
Dec 13 14:27:12.116842 kernel: with arguments:
Dec 13 14:27:12.116850 kernel: /init
Dec 13 14:27:12.116858 kernel: with environment:
Dec 13 14:27:12.116865 kernel: HOME=/
Dec 13 14:27:12.116873 kernel: TERM=linux
Dec 13 14:27:12.116881 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Dec 13 14:27:12.116892 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Dec 13 14:27:12.116916 systemd[1]: Detected virtualization amazon.
Dec 13 14:27:12.116925 systemd[1]: Detected architecture x86-64.
Dec 13 14:27:12.116933 systemd[1]: Running in initrd.
Dec 13 14:27:12.116941 systemd[1]: No hostname configured, using default hostname.
Dec 13 14:27:12.116962 systemd[1]: Hostname set to .
Dec 13 14:27:12.116974 systemd[1]: Initializing machine ID from VM UUID.
Dec 13 14:27:12.116984 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Dec 13 14:27:12.116993 systemd[1]: Queued start job for default target initrd.target.
Dec 13 14:27:12.117002 systemd[1]: Started systemd-ask-password-console.path.
Dec 13 14:27:12.117011 systemd[1]: Reached target cryptsetup.target.
Dec 13 14:27:12.117020 systemd[1]: Reached target paths.target.
Dec 13 14:27:12.117028 systemd[1]: Reached target slices.target.
Dec 13 14:27:12.117037 systemd[1]: Reached target swap.target.
Dec 13 14:27:12.117045 systemd[1]: Reached target timers.target.
Dec 13 14:27:12.117057 systemd[1]: Listening on iscsid.socket.
Dec 13 14:27:12.117065 systemd[1]: Listening on iscsiuio.socket.
Dec 13 14:27:12.117074 systemd[1]: Listening on systemd-journald-audit.socket.
Dec 13 14:27:12.117083 systemd[1]: Listening on systemd-journald-dev-log.socket.
Dec 13 14:27:12.117092 systemd[1]: Listening on systemd-journald.socket.
Dec 13 14:27:12.117100 systemd[1]: Listening on systemd-networkd.socket.
Dec 13 14:27:12.117109 systemd[1]: Listening on systemd-udevd-control.socket.
Dec 13 14:27:12.117118 systemd[1]: Listening on systemd-udevd-kernel.socket.
Dec 13 14:27:12.117128 systemd[1]: Reached target sockets.target.
Dec 13 14:27:12.117137 systemd[1]: Starting kmod-static-nodes.service...
Dec 13 14:27:12.117145 systemd[1]: Finished network-cleanup.service.
Dec 13 14:27:12.117154 systemd[1]: Starting systemd-fsck-usr.service...
Dec 13 14:27:12.117163 systemd[1]: Starting systemd-journald.service...
Dec 13 14:27:12.117171 systemd[1]: Starting systemd-modules-load.service...
Dec 13 14:27:12.117180 systemd[1]: Starting systemd-resolved.service...
Dec 13 14:27:12.117197 systemd-journald[185]: Journal started
Dec 13 14:27:12.117256 systemd-journald[185]: Runtime Journal (/run/log/journal/ec25436fcf574512b8523448b1bca7a2) is 4.8M, max 38.7M, 33.9M free.
Dec 13 14:27:12.138926 systemd[1]: Starting systemd-vconsole-setup.service...
Dec 13 14:27:12.144929 systemd[1]: Started systemd-journald.service.
Dec 13 14:27:12.145937 systemd-modules-load[186]: Inserted module 'overlay'
Dec 13 14:27:12.157553 systemd[1]: Finished kmod-static-nodes.service.
Dec 13 14:27:12.154000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:12.181057 kernel: audit: type=1130 audit(1734100032.154:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:12.181000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:12.183581 systemd[1]: Finished systemd-fsck-usr.service.
Dec 13 14:27:12.369618 kernel: audit: type=1130 audit(1734100032.181:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:12.369657 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 13 14:27:12.369675 kernel: Bridge firewalling registered
Dec 13 14:27:12.369701 kernel: SCSI subsystem initialized
Dec 13 14:27:12.369719 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec 13 14:27:12.369736 kernel: device-mapper: uevent: version 1.0.3
Dec 13 14:27:12.369751 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Dec 13 14:27:12.229810 systemd-modules-load[186]: Inserted module 'br_netfilter'
Dec 13 14:27:12.385863 kernel: audit: type=1130 audit(1734100032.370:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:12.385944 kernel: audit: type=1130 audit(1734100032.374:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:12.385963 kernel: audit: type=1130 audit(1734100032.379:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:12.370000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:12.374000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:12.379000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:12.267552 systemd-resolved[187]: Positive Trust Anchors:
Dec 13 14:27:12.392550 kernel: audit: type=1130 audit(1734100032.386:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:12.386000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:12.267572 systemd-resolved[187]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 13 14:27:12.270018 systemd-resolved[187]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Dec 13 14:27:12.278492 systemd-resolved[187]: Defaulting to hostname 'linux'.
Dec 13 14:27:12.287270 systemd-modules-load[186]: Inserted module 'dm_multipath'
Dec 13 14:27:12.371624 systemd[1]: Started systemd-resolved.service.
Dec 13 14:27:12.376252 systemd[1]: Finished systemd-modules-load.service.
Dec 13 14:27:12.380657 systemd[1]: Finished systemd-vconsole-setup.service.
Dec 13 14:27:12.392743 systemd[1]: Reached target nss-lookup.target.
Dec 13 14:27:12.412523 systemd[1]: Starting dracut-cmdline-ask.service...
Dec 13 14:27:12.416844 systemd[1]: Starting systemd-sysctl.service...
Dec 13 14:27:12.419057 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Dec 13 14:27:12.445845 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Dec 13 14:27:12.444000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:12.458925 kernel: audit: type=1130 audit(1734100032.444:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:12.461859 systemd[1]: Finished systemd-sysctl.service.
Dec 13 14:27:12.460000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:12.472576 kernel: audit: type=1130 audit(1734100032.460:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:12.490216 systemd[1]: Finished dracut-cmdline-ask.service.
Dec 13 14:27:12.497058 kernel: audit: type=1130 audit(1734100032.490:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:12.490000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:12.492073 systemd[1]: Starting dracut-cmdline.service...
Dec 13 14:27:12.507938 dracut-cmdline[206]: dracut-dracut-053
Dec 13 14:27:12.510597 dracut-cmdline[206]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=8c474c3ec361ec863adbecaa85281a726e1b53f7863ecc4742be8c5f6d02a66e
Dec 13 14:27:12.610013 kernel: Loading iSCSI transport class v2.0-870.
Dec 13 14:27:12.658017 kernel: iscsi: registered transport (tcp)
Dec 13 14:27:12.706491 kernel: iscsi: registered transport (qla4xxx)
Dec 13 14:27:12.706562 kernel: QLogic iSCSI HBA Driver
Dec 13 14:27:12.739439 systemd[1]: Finished dracut-cmdline.service.
Dec 13 14:27:12.739000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:12.742674 systemd[1]: Starting dracut-pre-udev.service...
Dec 13 14:27:12.794933 kernel: raid6: avx512x4 gen() 15978 MB/s
Dec 13 14:27:12.811936 kernel: raid6: avx512x4 xor() 6739 MB/s
Dec 13 14:27:12.829042 kernel: raid6: avx512x2 gen() 17367 MB/s
Dec 13 14:27:12.845933 kernel: raid6: avx512x2 xor() 23311 MB/s
Dec 13 14:27:12.862933 kernel: raid6: avx512x1 gen() 15498 MB/s
Dec 13 14:27:12.879933 kernel: raid6: avx512x1 xor() 20891 MB/s
Dec 13 14:27:12.896934 kernel: raid6: avx2x4 gen() 15410 MB/s
Dec 13 14:27:12.914933 kernel: raid6: avx2x4 xor() 7004 MB/s
Dec 13 14:27:12.935938 kernel: raid6: avx2x2 gen() 15199 MB/s
Dec 13 14:27:12.953955 kernel: raid6: avx2x2 xor() 6794 MB/s
Dec 13 14:27:12.971367 kernel: raid6: avx2x1 gen() 7117 MB/s
Dec 13 14:27:12.987934 kernel: raid6: avx2x1 xor() 8959 MB/s
Dec 13 14:27:13.004931 kernel: raid6: sse2x4 gen() 9275 MB/s
Dec 13 14:27:13.021930 kernel: raid6: sse2x4 xor() 5874 MB/s
Dec 13 14:27:13.038928 kernel: raid6: sse2x2 gen() 10319 MB/s
Dec 13 14:27:13.055938 kernel: raid6: sse2x2 xor() 5540 MB/s
Dec 13 14:27:13.072930 kernel: raid6: sse2x1 gen() 9057 MB/s
Dec 13 14:27:13.090607 kernel: raid6: sse2x1 xor() 4648 MB/s
Dec 13 14:27:13.090679 kernel: raid6: using algorithm avx512x2 gen() 17367 MB/s
Dec 13 14:27:13.090696 kernel: raid6: .... xor() 23311 MB/s, rmw enabled
Dec 13 14:27:13.091392 kernel: raid6: using avx512x2 recovery algorithm
Dec 13 14:27:13.106929 kernel: xor: automatically using best checksumming function avx
Dec 13 14:27:13.216934 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no
Dec 13 14:27:13.226444 systemd[1]: Finished dracut-pre-udev.service.
Dec 13 14:27:13.226000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:13.227000 audit: BPF prog-id=7 op=LOAD
Dec 13 14:27:13.227000 audit: BPF prog-id=8 op=LOAD
Dec 13 14:27:13.229514 systemd[1]: Starting systemd-udevd.service...
Dec 13 14:27:13.244882 systemd-udevd[383]: Using default interface naming scheme 'v252'.
Dec 13 14:27:13.250988 systemd[1]: Started systemd-udevd.service.
Dec 13 14:27:13.250000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:13.253399 systemd[1]: Starting dracut-pre-trigger.service...
Dec 13 14:27:13.271335 dracut-pre-trigger[390]: rd.md=0: removing MD RAID activation
Dec 13 14:27:13.303809 systemd[1]: Finished dracut-pre-trigger.service.
Dec 13 14:27:13.305000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:13.307640 systemd[1]: Starting systemd-udev-trigger.service...
Dec 13 14:27:13.370521 systemd[1]: Finished systemd-udev-trigger.service.
Dec 13 14:27:13.371000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:13.510940 kernel: cryptd: max_cpu_qlen set to 1000
Dec 13 14:27:13.533789 kernel: AVX2 version of gcm_enc/dec engaged.
Dec 13 14:27:13.534105 kernel: ena 0000:00:05.0: ENA device version: 0.10
Dec 13 14:27:13.570515 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Dec 13 14:27:13.570664 kernel: AES CTR mode by8 optimization enabled
Dec 13 14:27:13.570682 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
Dec 13 14:27:13.570802 kernel: nvme nvme0: pci function 0000:00:04.0
Dec 13 14:27:13.570957 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Dec 13 14:27:13.570976 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem febf4000, mac addr 06:d1:ff:de:1a:0d
Dec 13 14:27:13.604092 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Dec 13 14:27:13.604345 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Dec 13 14:27:13.604376 kernel: GPT:9289727 != 16777215
Dec 13 14:27:13.604392 kernel: GPT:Alternate GPT header not at the end of the disk.
Dec 13 14:27:13.604409 kernel: GPT:9289727 != 16777215
Dec 13 14:27:13.604425 kernel: GPT: Use GNU Parted to correct GPT errors.
Dec 13 14:27:13.604441 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Dec 13 14:27:13.586207 (udev-worker)[429]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 14:27:13.733330 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by (udev-worker) (432)
Dec 13 14:27:13.695380 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Dec 13 14:27:13.759560 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
Dec 13 14:27:13.784009 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
Dec 13 14:27:13.790240 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device.
Dec 13 14:27:13.793465 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
Dec 13 14:27:13.801512 systemd[1]: Starting disk-uuid.service...
Dec 13 14:27:13.808424 disk-uuid[585]: Primary Header is updated.
Dec 13 14:27:13.808424 disk-uuid[585]: Secondary Entries is updated.
Dec 13 14:27:13.808424 disk-uuid[585]: Secondary Header is updated.
Dec 13 14:27:13.814928 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Dec 13 14:27:13.822926 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Dec 13 14:27:13.830934 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Dec 13 14:27:14.835930 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Dec 13 14:27:14.836064 disk-uuid[586]: The operation has completed successfully.
Dec 13 14:27:15.005839 systemd[1]: disk-uuid.service: Deactivated successfully.
Dec 13 14:27:15.004000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:15.005965 systemd[1]: Finished disk-uuid.service.
Dec 13 14:27:15.006000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:15.029037 systemd[1]: Starting verity-setup.service...
Dec 13 14:27:15.048947 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Dec 13 14:27:15.143882 systemd[1]: Found device dev-mapper-usr.device.
Dec 13 14:27:15.145999 systemd[1]: Mounting sysusr-usr.mount...
Dec 13 14:27:15.151293 systemd[1]: Finished verity-setup.service.
Dec 13 14:27:15.149000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:15.289110 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none.
Dec 13 14:27:15.289031 systemd[1]: Mounted sysusr-usr.mount.
Dec 13 14:27:15.290116 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met.
Dec 13 14:27:15.292842 systemd[1]: Starting ignition-setup.service...
Dec 13 14:27:15.295799 systemd[1]: Starting parse-ip-for-networkd.service...
Dec 13 14:27:15.321284 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 14:27:15.321355 kernel: BTRFS info (device nvme0n1p6): using free space tree
Dec 13 14:27:15.321377 kernel: BTRFS info (device nvme0n1p6): has skinny extents
Dec 13 14:27:15.329933 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Dec 13 14:27:15.346449 systemd[1]: mnt-oem.mount: Deactivated successfully.
Dec 13 14:27:15.402236 systemd[1]: Finished ignition-setup.service.
Dec 13 14:27:15.401000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:15.404614 systemd[1]: Starting ignition-fetch-offline.service...
Dec 13 14:27:15.440211 systemd[1]: Finished parse-ip-for-networkd.service.
Dec 13 14:27:15.440000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:15.441000 audit: BPF prog-id=9 op=LOAD
Dec 13 14:27:15.443781 systemd[1]: Starting systemd-networkd.service...
Dec 13 14:27:15.473574 systemd-networkd[1099]: lo: Link UP
Dec 13 14:27:15.473586 systemd-networkd[1099]: lo: Gained carrier
Dec 13 14:27:15.476051 systemd-networkd[1099]: Enumeration completed
Dec 13 14:27:15.476180 systemd[1]: Started systemd-networkd.service.
Dec 13 14:27:15.477529 systemd-networkd[1099]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 13 14:27:15.480000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:15.481778 systemd[1]: Reached target network.target.
Dec 13 14:27:15.486742 systemd[1]: Starting iscsiuio.service...
Dec 13 14:27:15.491170 systemd-networkd[1099]: eth0: Link UP
Dec 13 14:27:15.491324 systemd-networkd[1099]: eth0: Gained carrier
Dec 13 14:27:15.513282 systemd[1]: Started iscsiuio.service.
Dec 13 14:27:15.518000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:15.521745 systemd[1]: Starting iscsid.service...
Dec 13 14:27:15.524047 systemd-networkd[1099]: eth0: DHCPv4 address 172.31.23.203/20, gateway 172.31.16.1 acquired from 172.31.16.1
Dec 13 14:27:15.531095 iscsid[1104]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
Dec 13 14:27:15.531095 iscsid[1104]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier].
Dec 13 14:27:15.531095 iscsid[1104]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
Dec 13 14:27:15.531095 iscsid[1104]: If using hardware iscsi like qla4xxx this message can be ignored.
Dec 13 14:27:15.531095 iscsid[1104]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
Dec 13 14:27:15.531095 iscsid[1104]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf
Dec 13 14:27:15.547772 systemd[1]: Started iscsid.service.
Dec 13 14:27:15.547000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:15.549850 systemd[1]: Starting dracut-initqueue.service...
Dec 13 14:27:15.565899 systemd[1]: Finished dracut-initqueue.service.
Dec 13 14:27:15.565000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:15.567162 systemd[1]: Reached target remote-fs-pre.target.
Dec 13 14:27:15.569428 systemd[1]: Reached target remote-cryptsetup.target.
Dec 13 14:27:15.570699 systemd[1]: Reached target remote-fs.target.
Dec 13 14:27:15.572947 systemd[1]: Starting dracut-pre-mount.service...
Dec 13 14:27:15.584769 systemd[1]: Finished dracut-pre-mount.service.
Dec 13 14:27:15.585000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:16.247899 ignition[1073]: Ignition 2.14.0
Dec 13 14:27:16.248277 ignition[1073]: Stage: fetch-offline
Dec 13 14:27:16.248490 ignition[1073]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Dec 13 14:27:16.248537 ignition[1073]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Dec 13 14:27:16.268207 ignition[1073]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Dec 13 14:27:16.268850 ignition[1073]: Ignition finished successfully
Dec 13 14:27:16.272437 systemd[1]: Finished ignition-fetch-offline.service.
Dec 13 14:27:16.279943 kernel: kauditd_printk_skb: 18 callbacks suppressed
Dec 13 14:27:16.280682 kernel: audit: type=1130 audit(1734100036.273:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:16.273000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:16.276480 systemd[1]: Starting ignition-fetch.service...
Dec 13 14:27:16.291988 ignition[1123]: Ignition 2.14.0
Dec 13 14:27:16.291997 ignition[1123]: Stage: fetch
Dec 13 14:27:16.292147 ignition[1123]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Dec 13 14:27:16.292223 ignition[1123]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Dec 13 14:27:16.301706 ignition[1123]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Dec 13 14:27:16.303747 ignition[1123]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Dec 13 14:27:16.361086 ignition[1123]: INFO : PUT result: OK
Dec 13 14:27:16.370728 ignition[1123]: DEBUG : parsed url from cmdline: ""
Dec 13 14:27:16.370728 ignition[1123]: INFO : no config URL provided
Dec 13 14:27:16.370728 ignition[1123]: INFO : reading system config file "/usr/lib/ignition/user.ign"
Dec 13 14:27:16.370728 ignition[1123]: INFO : no config at "/usr/lib/ignition/user.ign"
Dec 13 14:27:16.376347 ignition[1123]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Dec 13 14:27:16.377713 ignition[1123]: INFO : PUT result: OK
Dec 13 14:27:16.377713 ignition[1123]: INFO : GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Dec 13 14:27:16.382134 ignition[1123]: INFO : GET result: OK
Dec 13 14:27:16.383096 ignition[1123]: DEBUG : parsing config with SHA512: a94633b0c1f8ce4d9e043d20208360dd958e5fa07dcfad9beccd79e03bfba6108f3c2e397efec1c7a014b25909b128f6561d435b3d235e1314c2739bc026e2c3
Dec 13 14:27:16.393346 unknown[1123]: fetched base config from "system"
Dec 13 14:27:16.393363 unknown[1123]: fetched base config from "system"
Dec 13 14:27:16.393378 unknown[1123]: fetched user config from "aws"
Dec 13 14:27:16.398494 ignition[1123]: fetch: fetch complete
Dec 13 14:27:16.398507 ignition[1123]: fetch: fetch passed
Dec 13 14:27:16.398577 ignition[1123]: Ignition finished successfully
Dec 13 14:27:16.402453 systemd[1]: Finished ignition-fetch.service.
Dec 13 14:27:16.400000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:16.404835 systemd[1]: Starting ignition-kargs.service...
Dec 13 14:27:16.410560 kernel: audit: type=1130 audit(1734100036.400:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:16.421450 ignition[1129]: Ignition 2.14.0
Dec 13 14:27:16.421460 ignition[1129]: Stage: kargs
Dec 13 14:27:16.421610 ignition[1129]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Dec 13 14:27:16.421633 ignition[1129]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Dec 13 14:27:16.454481 ignition[1129]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Dec 13 14:27:16.455893 ignition[1129]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Dec 13 14:27:16.458854 ignition[1129]: INFO : PUT result: OK
Dec 13 14:27:16.466259 ignition[1129]: kargs: kargs passed
Dec 13 14:27:16.466329 ignition[1129]: Ignition finished successfully
Dec 13 14:27:16.475640 systemd[1]: Finished ignition-kargs.service.
Dec 13 14:27:16.476835 systemd[1]: Starting ignition-disks.service...
Dec 13 14:27:16.473000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:16.483562 kernel: audit: type=1130 audit(1734100036.473:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:16.489035 ignition[1135]: Ignition 2.14.0
Dec 13 14:27:16.489049 ignition[1135]: Stage: disks
Dec 13 14:27:16.489356 ignition[1135]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Dec 13 14:27:16.489390 ignition[1135]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Dec 13 14:27:16.502670 ignition[1135]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Dec 13 14:27:16.504212 ignition[1135]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Dec 13 14:27:16.506729 ignition[1135]: INFO : PUT result: OK
Dec 13 14:27:16.510505 ignition[1135]: disks: disks passed
Dec 13 14:27:16.510563 ignition[1135]: Ignition finished successfully
Dec 13 14:27:16.512578 systemd[1]: Finished ignition-disks.service.
Dec 13 14:27:16.522625 kernel: audit: type=1130 audit(1734100036.512:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:16.512000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:16.514134 systemd[1]: Reached target initrd-root-device.target.
Dec 13 14:27:16.521090 systemd[1]: Reached target local-fs-pre.target.
Dec 13 14:27:16.522597 systemd[1]: Reached target local-fs.target.
Dec 13 14:27:16.523806 systemd[1]: Reached target sysinit.target.
Dec 13 14:27:16.525492 systemd[1]: Reached target basic.target.
Dec 13 14:27:16.534448 systemd[1]: Starting systemd-fsck-root.service...
Dec 13 14:27:16.581110 systemd-fsck[1143]: ROOT: clean, 621/553520 files, 56021/553472 blocks
Dec 13 14:27:16.584646 systemd[1]: Finished systemd-fsck-root.service.
Dec 13 14:27:16.592370 kernel: audit: type=1130 audit(1734100036.584:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:16.584000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:16.586754 systemd[1]: Mounting sysroot.mount...
Dec 13 14:27:16.612928 kernel: EXT4-fs (nvme0n1p9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
Dec 13 14:27:16.613832 systemd[1]: Mounted sysroot.mount.
Dec 13 14:27:16.614136 systemd[1]: Reached target initrd-root-fs.target.
Dec 13 14:27:16.616710 systemd-networkd[1099]: eth0: Gained IPv6LL
Dec 13 14:27:16.618860 systemd[1]: Mounting sysroot-usr.mount...
Dec 13 14:27:16.622453 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met.
Dec 13 14:27:16.622529 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Dec 13 14:27:16.622567 systemd[1]: Reached target ignition-diskful.target.
Dec 13 14:27:16.625492 systemd[1]: Mounted sysroot-usr.mount.
Dec 13 14:27:16.655207 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Dec 13 14:27:16.658958 systemd[1]: Starting initrd-setup-root.service...
Dec 13 14:27:16.678389 initrd-setup-root[1165]: cut: /sysroot/etc/passwd: No such file or directory
Dec 13 14:27:16.682129 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1160)
Dec 13 14:27:16.682156 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 14:27:16.682203 kernel: BTRFS info (device nvme0n1p6): using free space tree
Dec 13 14:27:16.683988 kernel: BTRFS info (device nvme0n1p6): has skinny extents
Dec 13 14:27:16.703256 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Dec 13 14:27:16.706504 initrd-setup-root[1191]: cut: /sysroot/etc/group: No such file or directory
Dec 13 14:27:16.722540 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Dec 13 14:27:16.726583 initrd-setup-root[1199]: cut: /sysroot/etc/shadow: No such file or directory
Dec 13 14:27:16.734562 initrd-setup-root[1207]: cut: /sysroot/etc/gshadow: No such file or directory
Dec 13 14:27:17.027726 systemd[1]: Finished initrd-setup-root.service.
Dec 13 14:27:17.036956 kernel: audit: type=1130 audit(1734100037.029:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:17.029000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:17.031582 systemd[1]: Starting ignition-mount.service...
Dec 13 14:27:17.040724 systemd[1]: Starting sysroot-boot.service...
Dec 13 14:27:17.051446 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully.
Dec 13 14:27:17.051580 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully.
Dec 13 14:27:17.093670 ignition[1226]: INFO : Ignition 2.14.0
Dec 13 14:27:17.094976 ignition[1226]: INFO : Stage: mount
Dec 13 14:27:17.097002 ignition[1226]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Dec 13 14:27:17.102452 ignition[1226]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Dec 13 14:27:17.102318 systemd[1]: Finished sysroot-boot.service.
Dec 13 14:27:17.112927 kernel: audit: type=1130 audit(1734100037.106:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:17.106000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:17.117656 ignition[1226]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Dec 13 14:27:17.119166 ignition[1226]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Dec 13 14:27:17.121725 ignition[1226]: INFO : PUT result: OK
Dec 13 14:27:17.125405 ignition[1226]: INFO : mount: mount passed
Dec 13 14:27:17.125405 ignition[1226]: INFO : Ignition finished successfully
Dec 13 14:27:17.128190 systemd[1]: Finished ignition-mount.service.
Dec 13 14:27:17.129966 systemd[1]: Starting ignition-files.service...
Dec 13 14:27:17.126000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:17.135929 kernel: audit: type=1130 audit(1734100037.126:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:17.141661 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Dec 13 14:27:17.159935 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/nvme0n1p6 scanned by mount (1236)
Dec 13 14:27:17.163807 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 14:27:17.163868 kernel: BTRFS info (device nvme0n1p6): using free space tree
Dec 13 14:27:17.163886 kernel: BTRFS info (device nvme0n1p6): has skinny extents
Dec 13 14:27:17.172924 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Dec 13 14:27:17.176086 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Dec 13 14:27:17.191376 ignition[1255]: INFO : Ignition 2.14.0
Dec 13 14:27:17.191376 ignition[1255]: INFO : Stage: files
Dec 13 14:27:17.193560 ignition[1255]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Dec 13 14:27:17.193560 ignition[1255]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Dec 13 14:27:17.208888 ignition[1255]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Dec 13 14:27:17.210999 ignition[1255]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Dec 13 14:27:17.214528 ignition[1255]: INFO : PUT result: OK
Dec 13 14:27:17.227576 ignition[1255]: DEBUG : files: compiled without relabeling support, skipping
Dec 13 14:27:17.232730 ignition[1255]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Dec 13 14:27:17.232730 ignition[1255]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Dec 13 14:27:17.270548 ignition[1255]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Dec 13 14:27:17.275661 ignition[1255]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Dec 13 14:27:17.278069 unknown[1255]: wrote ssh authorized keys file for user: core
Dec 13 14:27:17.279466 ignition[1255]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Dec 13 14:27:17.281833 ignition[1255]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Dec 13 14:27:17.284287 ignition[1255]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Dec 13 14:27:17.284287 ignition[1255]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Dec 13 14:27:17.288758 ignition[1255]: INFO : GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Dec 13 14:27:17.392029 ignition[1255]: INFO : GET result: OK
Dec 13 14:27:17.573621 ignition[1255]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Dec 13 14:27:17.573621 ignition[1255]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 13 14:27:17.580541 ignition[1255]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 13 14:27:17.584698 ignition[1255]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Dec 13 14:27:17.589252 ignition[1255]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Dec 13 14:27:17.589252 ignition[1255]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/etc/eks/bootstrap.sh"
Dec 13 14:27:17.595700 ignition[1255]: INFO : oem config not found in "/usr/share/oem", looking on oem partition
Dec 13 14:27:17.605651 ignition[1255]: INFO : op(1): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3591308842"
Dec 13 14:27:17.607656 ignition[1255]: CRITICAL : op(1): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3591308842": device or resource busy
Dec 13 14:27:17.607656 ignition[1255]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem3591308842", trying btrfs: device or resource busy
Dec 13 14:27:17.607656 ignition[1255]: INFO : op(2): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3591308842"
Dec 13 14:27:17.616948 kernel: BTRFS info: devid 1 device path /dev/nvme0n1p6 changed to /dev/disk/by-label/OEM scanned by ignition (1258)
Dec 13 14:27:17.616982 ignition[1255]: INFO : op(2): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3591308842"
Dec 13 14:27:17.621615 ignition[1255]: INFO : op(3): [started] unmounting "/mnt/oem3591308842"
Dec 13 14:27:17.621615 ignition[1255]: INFO : op(3): [finished] unmounting "/mnt/oem3591308842"
Dec 13 14:27:17.621615 ignition[1255]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/etc/eks/bootstrap.sh"
Dec 13 14:27:17.630396 ignition[1255]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Dec 13 14:27:17.630396 ignition[1255]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Dec 13 14:27:17.630396 ignition[1255]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 13 14:27:17.630396 ignition[1255]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 13 14:27:17.630396 ignition[1255]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Dec 13 14:27:17.630396 ignition[1255]: INFO : GET
https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Dec 13 14:27:17.626173 systemd[1]: mnt-oem3591308842.mount: Deactivated successfully. Dec 13 14:27:18.132711 ignition[1255]: INFO : GET result: OK Dec 13 14:27:18.328993 ignition[1255]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Dec 13 14:27:18.331276 ignition[1255]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/home/core/install.sh" Dec 13 14:27:18.331276 ignition[1255]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/home/core/install.sh" Dec 13 14:27:18.331276 ignition[1255]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/home/core/nginx.yaml" Dec 13 14:27:18.331276 ignition[1255]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/home/core/nginx.yaml" Dec 13 14:27:18.346658 ignition[1255]: INFO : files: createFilesystemsFiles: createFiles: op(d): [started] writing file "/sysroot/etc/systemd/system/nvidia.service" Dec 13 14:27:18.346658 ignition[1255]: INFO : oem config not found in "/usr/share/oem", looking on oem partition Dec 13 14:27:18.359887 ignition[1255]: INFO : op(4): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1283670785" Dec 13 14:27:18.362666 ignition[1255]: CRITICAL : op(4): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1283670785": device or resource busy Dec 13 14:27:18.362666 ignition[1255]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem1283670785", trying btrfs: device or resource busy Dec 13 14:27:18.362666 ignition[1255]: INFO : op(5): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1283670785" Dec 13 14:27:18.362666 ignition[1255]: INFO : op(5): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1283670785" Dec 13 14:27:18.362666 
ignition[1255]: INFO : op(6): [started] unmounting "/mnt/oem1283670785" Dec 13 14:27:18.362666 ignition[1255]: INFO : op(6): [finished] unmounting "/mnt/oem1283670785" Dec 13 14:27:18.362666 ignition[1255]: INFO : files: createFilesystemsFiles: createFiles: op(d): [finished] writing file "/sysroot/etc/systemd/system/nvidia.service" Dec 13 14:27:18.362666 ignition[1255]: INFO : files: createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 14:27:18.384739 ignition[1255]: INFO : GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1 Dec 13 14:27:18.372068 systemd[1]: mnt-oem1283670785.mount: Deactivated successfully. Dec 13 14:27:18.718671 ignition[1255]: INFO : GET result: OK Dec 13 14:27:19.236322 ignition[1255]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 14:27:19.239895 ignition[1255]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/etc/amazon/ssm/amazon-ssm-agent.json" Dec 13 14:27:19.239895 ignition[1255]: INFO : oem config not found in "/usr/share/oem", looking on oem partition Dec 13 14:27:19.245801 ignition[1255]: INFO : op(7): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1227061619" Dec 13 14:27:19.247422 ignition[1255]: CRITICAL : op(7): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1227061619": device or resource busy Dec 13 14:27:19.247422 ignition[1255]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem1227061619", trying btrfs: device or resource busy Dec 13 14:27:19.247422 ignition[1255]: INFO : op(8): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1227061619" Dec 13 14:27:19.253954 ignition[1255]: INFO : op(8): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1227061619" Dec 13 14:27:19.255603 
ignition[1255]: INFO : op(9): [started] unmounting "/mnt/oem1227061619" Dec 13 14:27:19.256993 ignition[1255]: INFO : op(9): [finished] unmounting "/mnt/oem1227061619" Dec 13 14:27:19.256993 ignition[1255]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/etc/amazon/ssm/amazon-ssm-agent.json" Dec 13 14:27:19.256993 ignition[1255]: INFO : files: createFilesystemsFiles: createFiles: op(10): [started] writing file "/sysroot/etc/amazon/ssm/seelog.xml" Dec 13 14:27:19.256993 ignition[1255]: INFO : oem config not found in "/usr/share/oem", looking on oem partition Dec 13 14:27:19.269188 systemd[1]: mnt-oem1227061619.mount: Deactivated successfully. Dec 13 14:27:19.291051 ignition[1255]: INFO : op(a): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem954734396" Dec 13 14:27:19.292645 ignition[1255]: CRITICAL : op(a): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem954734396": device or resource busy Dec 13 14:27:19.292645 ignition[1255]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem954734396", trying btrfs: device or resource busy Dec 13 14:27:19.292645 ignition[1255]: INFO : op(b): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem954734396" Dec 13 14:27:19.315036 ignition[1255]: INFO : op(b): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem954734396" Dec 13 14:27:19.315036 ignition[1255]: INFO : op(c): [started] unmounting "/mnt/oem954734396" Dec 13 14:27:19.315036 ignition[1255]: INFO : op(c): [finished] unmounting "/mnt/oem954734396" Dec 13 14:27:19.315036 ignition[1255]: INFO : files: createFilesystemsFiles: createFiles: op(10): [finished] writing file "/sysroot/etc/amazon/ssm/seelog.xml" Dec 13 14:27:19.315036 ignition[1255]: INFO : files: op(11): [started] processing unit "nvidia.service" Dec 13 14:27:19.315036 ignition[1255]: INFO : files: op(11): [finished] processing unit "nvidia.service" Dec 13 14:27:19.315036 ignition[1255]: INFO : files: op(12): [started] 
processing unit "coreos-metadata-sshkeys@.service" Dec 13 14:27:19.315036 ignition[1255]: INFO : files: op(12): [finished] processing unit "coreos-metadata-sshkeys@.service" Dec 13 14:27:19.315036 ignition[1255]: INFO : files: op(13): [started] processing unit "amazon-ssm-agent.service" Dec 13 14:27:19.315036 ignition[1255]: INFO : files: op(13): op(14): [started] writing unit "amazon-ssm-agent.service" at "/sysroot/etc/systemd/system/amazon-ssm-agent.service" Dec 13 14:27:19.315036 ignition[1255]: INFO : files: op(13): op(14): [finished] writing unit "amazon-ssm-agent.service" at "/sysroot/etc/systemd/system/amazon-ssm-agent.service" Dec 13 14:27:19.315036 ignition[1255]: INFO : files: op(13): [finished] processing unit "amazon-ssm-agent.service" Dec 13 14:27:19.315036 ignition[1255]: INFO : files: op(15): [started] processing unit "containerd.service" Dec 13 14:27:19.315036 ignition[1255]: INFO : files: op(15): op(16): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Dec 13 14:27:19.315036 ignition[1255]: INFO : files: op(15): op(16): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Dec 13 14:27:19.315036 ignition[1255]: INFO : files: op(15): [finished] processing unit "containerd.service" Dec 13 14:27:19.315036 ignition[1255]: INFO : files: op(17): [started] processing unit "prepare-helm.service" Dec 13 14:27:19.315036 ignition[1255]: INFO : files: op(17): op(18): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 14:27:19.315036 ignition[1255]: INFO : files: op(17): op(18): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 14:27:19.315036 ignition[1255]: INFO : files: op(17): [finished] processing unit "prepare-helm.service" Dec 13 14:27:19.353661 ignition[1255]: INFO : files: op(19): 
[started] setting preset to enabled for "nvidia.service" Dec 13 14:27:19.353661 ignition[1255]: INFO : files: op(19): [finished] setting preset to enabled for "nvidia.service" Dec 13 14:27:19.353661 ignition[1255]: INFO : files: op(1a): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service " Dec 13 14:27:19.353661 ignition[1255]: INFO : files: op(1a): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service " Dec 13 14:27:19.353661 ignition[1255]: INFO : files: op(1b): [started] setting preset to enabled for "amazon-ssm-agent.service" Dec 13 14:27:19.353661 ignition[1255]: INFO : files: op(1b): [finished] setting preset to enabled for "amazon-ssm-agent.service" Dec 13 14:27:19.353661 ignition[1255]: INFO : files: op(1c): [started] setting preset to enabled for "prepare-helm.service" Dec 13 14:27:19.353661 ignition[1255]: INFO : files: op(1c): [finished] setting preset to enabled for "prepare-helm.service" Dec 13 14:27:19.368143 systemd[1]: mnt-oem954734396.mount: Deactivated successfully. Dec 13 14:27:19.375104 ignition[1255]: INFO : files: createResultFile: createFiles: op(1d): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 13 14:27:19.377741 ignition[1255]: INFO : files: createResultFile: createFiles: op(1d): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 13 14:27:19.377741 ignition[1255]: INFO : files: files passed Dec 13 14:27:19.381018 ignition[1255]: INFO : Ignition finished successfully Dec 13 14:27:19.383776 systemd[1]: Finished ignition-files.service. Dec 13 14:27:19.382000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:19.389265 systemd[1]: Starting initrd-setup-root-after-ignition.service... 
Dec 13 14:27:19.394322 kernel: audit: type=1130 audit(1734100039.382:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:19.393121 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
Dec 13 14:27:19.393941 systemd[1]: Starting ignition-quench.service...
Dec 13 14:27:19.398418 systemd[1]: ignition-quench.service: Deactivated successfully.
Dec 13 14:27:19.398509 systemd[1]: Finished ignition-quench.service.
Dec 13 14:27:19.399000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:19.399000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:19.405945 kernel: audit: type=1130 audit(1734100039.399:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:19.410874 initrd-setup-root-after-ignition[1280]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 14:27:19.414023 systemd[1]: Finished initrd-setup-root-after-ignition.service.
Dec 13 14:27:19.414000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:19.416429 systemd[1]: Reached target ignition-complete.target.
Dec 13 14:27:19.419355 systemd[1]: Starting initrd-parse-etc.service...
Dec 13 14:27:19.440612 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Dec 13 14:27:19.440721 systemd[1]: Finished initrd-parse-etc.service.
Dec 13 14:27:19.441000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:19.441000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:19.443190 systemd[1]: Reached target initrd-fs.target.
Dec 13 14:27:19.445006 systemd[1]: Reached target initrd.target.
Dec 13 14:27:19.445110 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
Dec 13 14:27:19.445967 systemd[1]: Starting dracut-pre-pivot.service...
Dec 13 14:27:19.458322 systemd[1]: Finished dracut-pre-pivot.service.
Dec 13 14:27:19.458000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:19.461015 systemd[1]: Starting initrd-cleanup.service...
Dec 13 14:27:19.489915 systemd[1]: Stopped target nss-lookup.target.
Dec 13 14:27:19.492148 systemd[1]: Stopped target remote-cryptsetup.target.
Dec 13 14:27:19.508274 systemd[1]: Stopped target timers.target.
Dec 13 14:27:19.508812 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Dec 13 14:27:19.516000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:19.508986 systemd[1]: Stopped dracut-pre-pivot.service.
Dec 13 14:27:19.518390 systemd[1]: Stopped target initrd.target.
Dec 13 14:27:19.520191 systemd[1]: Stopped target basic.target.
Dec 13 14:27:19.522256 systemd[1]: Stopped target ignition-complete.target.
Dec 13 14:27:19.524091 systemd[1]: Stopped target ignition-diskful.target.
Dec 13 14:27:19.527493 systemd[1]: Stopped target initrd-root-device.target.
Dec 13 14:27:19.529282 systemd[1]: Stopped target remote-fs.target.
Dec 13 14:27:19.530973 systemd[1]: Stopped target remote-fs-pre.target.
Dec 13 14:27:19.532807 systemd[1]: Stopped target sysinit.target.
Dec 13 14:27:19.534622 systemd[1]: Stopped target local-fs.target.
Dec 13 14:27:19.536739 systemd[1]: Stopped target local-fs-pre.target.
Dec 13 14:27:19.538429 systemd[1]: Stopped target swap.target.
Dec 13 14:27:19.540416 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Dec 13 14:27:19.541859 systemd[1]: Stopped dracut-pre-mount.service.
Dec 13 14:27:19.540000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:19.543667 systemd[1]: Stopped target cryptsetup.target.
Dec 13 14:27:19.545000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:19.545103 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Dec 13 14:27:19.545227 systemd[1]: Stopped dracut-initqueue.service.
Dec 13 14:27:19.547934 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Dec 13 14:27:19.552246 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
Dec 13 14:27:19.552000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:19.554366 systemd[1]: ignition-files.service: Deactivated successfully.
Dec 13 14:27:19.562814 systemd[1]: Stopped ignition-files.service.
Dec 13 14:27:19.561000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:19.565741 systemd[1]: Stopping ignition-mount.service...
Dec 13 14:27:19.606782 ignition[1293]: INFO : Ignition 2.14.0
Dec 13 14:27:19.606782 ignition[1293]: INFO : Stage: umount
Dec 13 14:27:19.606782 ignition[1293]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Dec 13 14:27:19.606782 ignition[1293]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Dec 13 14:27:19.607000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:19.600419 systemd[1]: Stopping iscsid.service...
Dec 13 14:27:19.622353 iscsid[1104]: iscsid shutting down.
Dec 13 14:27:19.607916 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Dec 13 14:27:19.608136 systemd[1]: Stopped kmod-static-nodes.service.
Dec 13 14:27:19.611079 systemd[1]: Stopping sysroot-boot.service...
Dec 13 14:27:19.628728 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Dec 13 14:27:19.628000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:19.631000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:19.638000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:19.642190 ignition[1293]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Dec 13 14:27:19.642190 ignition[1293]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Dec 13 14:27:19.629004 systemd[1]: Stopped systemd-udev-trigger.service.
Dec 13 14:27:19.647164 ignition[1293]: INFO : PUT result: OK
Dec 13 14:27:19.630206 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Dec 13 14:27:19.648000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:19.648000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:19.652179 ignition[1293]: INFO : umount: umount passed
Dec 13 14:27:19.652179 ignition[1293]: INFO : Ignition finished successfully
Dec 13 14:27:19.630471 systemd[1]: Stopped dracut-pre-trigger.service.
Dec 13 14:27:19.634827 systemd[1]: iscsid.service: Deactivated successfully.
Dec 13 14:27:19.635277 systemd[1]: Stopped iscsid.service.
Dec 13 14:27:19.642683 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Dec 13 14:27:19.644313 systemd[1]: Finished initrd-cleanup.service.
Dec 13 14:27:19.651564 systemd[1]: Stopping iscsiuio.service...
Dec 13 14:27:19.658747 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Dec 13 14:27:19.659608 systemd[1]: ignition-mount.service: Deactivated successfully.
Dec 13 14:27:19.659730 systemd[1]: Stopped ignition-mount.service.
Dec 13 14:27:19.667000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:19.670009 systemd[1]: iscsiuio.service: Deactivated successfully.
Dec 13 14:27:19.670160 systemd[1]: Stopped iscsiuio.service.
Dec 13 14:27:19.670000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:19.673079 systemd[1]: sysroot-boot.service: Deactivated successfully.
Dec 13 14:27:19.674278 systemd[1]: Stopped sysroot-boot.service.
Dec 13 14:27:19.674000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:19.677087 systemd[1]: ignition-disks.service: Deactivated successfully.
Dec 13 14:27:19.677000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:19.677145 systemd[1]: Stopped ignition-disks.service.
Dec 13 14:27:19.679487 systemd[1]: ignition-kargs.service: Deactivated successfully.
Dec 13 14:27:19.679000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:19.682000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:19.679543 systemd[1]: Stopped ignition-kargs.service.
Dec 13 14:27:19.681720 systemd[1]: ignition-fetch.service: Deactivated successfully.
Dec 13 14:27:19.681768 systemd[1]: Stopped ignition-fetch.service.
Dec 13 14:27:19.687000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:19.683916 systemd[1]: Stopped target network.target.
Dec 13 14:27:19.685831 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Dec 13 14:27:19.686728 systemd[1]: Stopped ignition-fetch-offline.service.
Dec 13 14:27:19.690345 systemd[1]: Stopped target paths.target.
Dec 13 14:27:19.691169 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Dec 13 14:27:19.695993 systemd[1]: Stopped systemd-ask-password-console.path.
Dec 13 14:27:19.698746 systemd[1]: Stopped target slices.target.
Dec 13 14:27:19.701141 systemd[1]: Stopped target sockets.target.
Dec 13 14:27:19.704527 systemd[1]: iscsid.socket: Deactivated successfully.
Dec 13 14:27:19.704583 systemd[1]: Closed iscsid.socket.
Dec 13 14:27:19.706545 systemd[1]: iscsiuio.socket: Deactivated successfully.
Dec 13 14:27:19.707183 systemd[1]: Closed iscsiuio.socket.
Dec 13 14:27:19.708000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:19.708875 systemd[1]: ignition-setup.service: Deactivated successfully.
Dec 13 14:27:19.708949 systemd[1]: Stopped ignition-setup.service.
Dec 13 14:27:19.710000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:19.710603 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Dec 13 14:27:19.710648 systemd[1]: Stopped initrd-setup-root.service.
Dec 13 14:27:19.714785 systemd[1]: Stopping systemd-networkd.service...
Dec 13 14:27:19.716628 systemd[1]: Stopping systemd-resolved.service...
Dec 13 14:27:19.718951 systemd-networkd[1099]: eth0: DHCPv6 lease lost
Dec 13 14:27:19.719000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:19.720056 systemd[1]: systemd-networkd.service: Deactivated successfully.
Dec 13 14:27:19.720181 systemd[1]: Stopped systemd-networkd.service.
Dec 13 14:27:19.723682 systemd[1]: systemd-resolved.service: Deactivated successfully.
Dec 13 14:27:19.724000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:19.725000 audit: BPF prog-id=9 op=UNLOAD
Dec 13 14:27:19.724589 systemd[1]: Stopped systemd-resolved.service.
Dec 13 14:27:19.728120 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Dec 13 14:27:19.728177 systemd[1]: Closed systemd-networkd.socket.
Dec 13 14:27:19.728000 audit: BPF prog-id=6 op=UNLOAD
Dec 13 14:27:19.732215 systemd[1]: Stopping network-cleanup.service...
Dec 13 14:27:19.733000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:19.733236 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Dec 13 14:27:19.735000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:19.733313 systemd[1]: Stopped parse-ip-for-networkd.service.
Dec 13 14:27:19.735394 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 13 14:27:19.737000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:19.735457 systemd[1]: Stopped systemd-sysctl.service.
Dec 13 14:27:19.737528 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 13 14:27:19.737588 systemd[1]: Stopped systemd-modules-load.service.
Dec 13 14:27:19.740979 systemd[1]: Stopping systemd-udevd.service...
Dec 13 14:27:19.753179 systemd[1]: systemd-udevd.service: Deactivated successfully.
Dec 13 14:27:19.754446 systemd[1]: Stopped systemd-udevd.service.
Dec 13 14:27:19.754000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:19.757038 systemd[1]: network-cleanup.service: Deactivated successfully.
Dec 13 14:27:19.758057 systemd[1]: Stopped network-cleanup.service.
Dec 13 14:27:19.757000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:19.759803 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Dec 13 14:27:19.759861 systemd[1]: Closed systemd-udevd-control.socket.
Dec 13 14:27:19.762000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:19.763000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:19.761639 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Dec 13 14:27:19.764000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:19.761676 systemd[1]: Closed systemd-udevd-kernel.socket.
Dec 13 14:27:19.762661 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Dec 13 14:27:19.778000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:19.762701 systemd[1]: Stopped dracut-pre-udev.service.
Dec 13 14:27:19.764118 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Dec 13 14:27:19.764157 systemd[1]: Stopped dracut-cmdline.service.
Dec 13 14:27:19.765796 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 13 14:27:19.765828 systemd[1]: Stopped dracut-cmdline-ask.service.
Dec 13 14:27:19.782000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:19.782000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:19.767645 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
Dec 13 14:27:19.776699 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 13 14:27:19.776776 systemd[1]: Stopped systemd-vconsole-setup.service.
Dec 13 14:27:19.779680 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Dec 13 14:27:19.779832 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
Dec 13 14:27:19.785128 systemd[1]: Reached target initrd-switch-root.target.
Dec 13 14:27:19.792137 systemd[1]: Starting initrd-switch-root.service...
Dec 13 14:27:19.810256 systemd[1]: Switching root.
Dec 13 14:27:19.812000 audit: BPF prog-id=5 op=UNLOAD
Dec 13 14:27:19.812000 audit: BPF prog-id=4 op=UNLOAD
Dec 13 14:27:19.812000 audit: BPF prog-id=3 op=UNLOAD
Dec 13 14:27:19.814000 audit: BPF prog-id=8 op=UNLOAD
Dec 13 14:27:19.814000 audit: BPF prog-id=7 op=UNLOAD
Dec 13 14:27:19.835461 systemd-journald[185]: Journal stopped
Dec 13 14:27:27.839873 systemd-journald[185]: Received SIGTERM from PID 1 (systemd).
Dec 13 14:27:27.852739 kernel: SELinux: Class mctp_socket not defined in policy.
Dec 13 14:27:27.852773 kernel: SELinux: Class anon_inode not defined in policy.
Dec 13 14:27:27.852798 kernel: SELinux: the above unknown classes and permissions will be allowed
Dec 13 14:27:27.852815 kernel: SELinux: policy capability network_peer_controls=1
Dec 13 14:27:27.852832 kernel: SELinux: policy capability open_perms=1
Dec 13 14:27:27.852850 kernel: SELinux: policy capability extended_socket_class=1
Dec 13 14:27:27.852866 kernel: SELinux: policy capability always_check_network=0
Dec 13 14:27:27.852890 kernel: SELinux: policy capability cgroup_seclabel=1
Dec 13 14:27:27.853110 kernel: SELinux: policy capability nnp_nosuid_transition=1
Dec 13 14:27:27.853133 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Dec 13 14:27:27.853149 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Dec 13 14:27:27.853168 systemd[1]: Successfully loaded SELinux policy in 128.009ms.
Dec 13 14:27:27.853212 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 18.679ms.
Dec 13 14:27:27.853232 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Dec 13 14:27:27.853250 systemd[1]: Detected virtualization amazon.
Dec 13 14:27:27.853267 systemd[1]: Detected architecture x86-64.
Dec 13 14:27:27.853287 systemd[1]: Detected first boot.
Dec 13 14:27:27.853305 systemd[1]: Initializing machine ID from VM UUID.
Dec 13 14:27:27.853323 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
Dec 13 14:27:27.853339 kernel: kauditd_printk_skb: 47 callbacks suppressed
Dec 13 14:27:27.853361 kernel: audit: type=1400 audit(1734100041.640:86): avc: denied { associate } for pid=1343 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023"
Dec 13 14:27:27.853381 kernel: audit: type=1300 audit(1734100041.640:86): arch=c000003e syscall=188 success=yes exit=0 a0=c0001076c2 a1=c00002cb40 a2=c00002aa40 a3=32 items=0 ppid=1326 pid=1343 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 14:27:27.853399 kernel: audit: type=1327 audit(1734100041.640:86): proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Dec 13 14:27:27.853425 kernel: audit: type=1400 audit(1734100041.649:87): avc: denied { associate } for pid=1343 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1
Dec 13 14:27:27.853444 kernel: audit: type=1300 audit(1734100041.649:87): arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c000107799 a2=1ed a3=0 items=2 ppid=1326 pid=1343 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 14:27:27.853461 kernel: audit: type=1307 audit(1734100041.649:87): cwd="/"
Dec 13 14:27:27.853477 kernel: audit: type=1302 audit(1734100041.649:87): item=0 name=(null) inode=2 dev=00:29 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:27:27.853493 kernel: audit: type=1302 audit(1734100041.649:87): item=1 name=(null) inode=3 dev=00:29 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:27:27.853510 kernel: audit: type=1327 audit(1734100041.649:87): proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Dec 13 14:27:27.853529 systemd[1]: Populated /etc with preset unit settings.
Dec 13 14:27:27.853546 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Dec 13 14:27:27.853564 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 13 14:27:27.853689 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 14:27:27.853716 systemd[1]: Queued start job for default target multi-user.target.
Dec 13 14:27:27.853736 systemd[1]: Created slice system-addon\x2dconfig.slice.
Dec 13 14:27:27.853982 systemd[1]: Created slice system-addon\x2drun.slice.
Dec 13 14:27:27.854013 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice.
Dec 13 14:27:27.854031 systemd[1]: Created slice system-getty.slice.
Dec 13 14:27:27.854049 systemd[1]: Created slice system-modprobe.slice.
Dec 13 14:27:27.854068 systemd[1]: Created slice system-serial\x2dgetty.slice.
Dec 13 14:27:27.854085 systemd[1]: Created slice system-system\x2dcloudinit.slice.
Dec 13 14:27:27.854103 systemd[1]: Created slice system-systemd\x2dfsck.slice.
Dec 13 14:27:27.854127 systemd[1]: Created slice user.slice.
Dec 13 14:27:27.854144 systemd[1]: Started systemd-ask-password-console.path.
Dec 13 14:27:27.854162 systemd[1]: Started systemd-ask-password-wall.path.
Dec 13 14:27:27.854178 systemd[1]: Set up automount boot.automount.
Dec 13 14:27:27.854196 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
Dec 13 14:27:27.854212 systemd[1]: Reached target integritysetup.target.
Dec 13 14:27:27.854230 systemd[1]: Reached target remote-cryptsetup.target.
Dec 13 14:27:27.854247 systemd[1]: Reached target remote-fs.target.
Dec 13 14:27:27.854265 systemd[1]: Reached target slices.target.
Dec 13 14:27:27.854284 systemd[1]: Reached target swap.target.
Dec 13 14:27:27.854301 systemd[1]: Reached target torcx.target.
Dec 13 14:27:27.854318 systemd[1]: Reached target veritysetup.target.
Dec 13 14:27:27.854337 systemd[1]: Listening on systemd-coredump.socket.
Dec 13 14:27:27.854354 systemd[1]: Listening on systemd-initctl.socket.
Dec 13 14:27:27.854372 kernel: audit: type=1400 audit(1734100047.496:88): avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Dec 13 14:27:27.854392 systemd[1]: Listening on systemd-journald-audit.socket.
Dec 13 14:27:27.854409 kernel: audit: type=1335 audit(1734100047.496:89): pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1
Dec 13 14:27:27.854427 systemd[1]: Listening on systemd-journald-dev-log.socket.
Dec 13 14:27:27.854450 systemd[1]: Listening on systemd-journald.socket.
Dec 13 14:27:27.856492 systemd[1]: Listening on systemd-networkd.socket.
Dec 13 14:27:27.856519 systemd[1]: Listening on systemd-udevd-control.socket.
Dec 13 14:27:27.856537 systemd[1]: Listening on systemd-udevd-kernel.socket.
Dec 13 14:27:27.856554 systemd[1]: Listening on systemd-userdbd.socket.
Dec 13 14:27:27.856571 systemd[1]: Mounting dev-hugepages.mount...
Dec 13 14:27:27.856590 systemd[1]: Mounting dev-mqueue.mount...
Dec 13 14:27:27.856608 systemd[1]: Mounting media.mount...
Dec 13 14:27:27.856626 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 14:27:27.856655 systemd[1]: Mounting sys-kernel-debug.mount...
Dec 13 14:27:27.856674 systemd[1]: Mounting sys-kernel-tracing.mount...
Dec 13 14:27:27.856692 systemd[1]: Mounting tmp.mount...
Dec 13 14:27:27.856709 systemd[1]: Starting flatcar-tmpfiles.service...
Dec 13 14:27:27.856726 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Dec 13 14:27:27.856746 systemd[1]: Starting kmod-static-nodes.service...
Dec 13 14:27:27.856763 systemd[1]: Starting modprobe@configfs.service...
Dec 13 14:27:27.856780 systemd[1]: Starting modprobe@dm_mod.service...
Dec 13 14:27:27.856798 systemd[1]: Starting modprobe@drm.service...
Dec 13 14:27:27.856816 systemd[1]: Starting modprobe@efi_pstore.service...
Dec 13 14:27:27.856833 systemd[1]: Starting modprobe@fuse.service...
Dec 13 14:27:27.856850 systemd[1]: Starting modprobe@loop.service...
Dec 13 14:27:27.856867 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Dec 13 14:27:27.856885 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Dec 13 14:27:27.856919 systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
Dec 13 14:27:27.856936 systemd[1]: Starting systemd-journald.service...
Dec 13 14:27:27.856953 systemd[1]: Starting systemd-modules-load.service...
Dec 13 14:27:27.856971 systemd[1]: Starting systemd-network-generator.service...
Dec 13 14:27:27.856989 systemd[1]: Starting systemd-remount-fs.service...
Dec 13 14:27:27.857005 systemd[1]: Starting systemd-udev-trigger.service...
Dec 13 14:27:27.857024 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 14:27:27.857041 systemd[1]: Mounted dev-hugepages.mount.
Dec 13 14:27:27.857059 systemd[1]: Mounted dev-mqueue.mount.
Dec 13 14:27:27.857079 systemd[1]: Mounted media.mount.
Dec 13 14:27:27.857096 systemd[1]: Mounted sys-kernel-debug.mount.
Dec 13 14:27:27.857114 systemd[1]: Mounted sys-kernel-tracing.mount.
Dec 13 14:27:27.857131 systemd[1]: Mounted tmp.mount.
Dec 13 14:27:27.857148 systemd[1]: Finished kmod-static-nodes.service.
Dec 13 14:27:27.857166 kernel: audit: type=1130 audit(1734100047.794:90): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:27.857184 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 13 14:27:27.857202 systemd[1]: Finished modprobe@configfs.service.
Dec 13 14:27:27.857219 kernel: audit: type=1130 audit(1734100047.804:91): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:27.857238 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 14:27:27.857257 kernel: audit: type=1131 audit(1734100047.804:92): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:27.857273 systemd[1]: Finished modprobe@dm_mod.service.
Dec 13 14:27:27.857291 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 13 14:27:27.857309 systemd[1]: Finished modprobe@drm.service.
Dec 13 14:27:27.857327 kernel: audit: type=1130 audit(1734100047.816:93): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:27.857344 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 14:27:27.857366 systemd[1]: Finished modprobe@efi_pstore.service.
Dec 13 14:27:27.857384 kernel: audit: type=1131 audit(1734100047.816:94): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:27.857401 systemd[1]: Finished systemd-network-generator.service.
Dec 13 14:27:27.857419 kernel: audit: type=1130 audit(1734100047.825:95): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:27.857436 systemd[1]: Finished systemd-remount-fs.service.
Dec 13 14:27:27.857454 kernel: audit: type=1131 audit(1734100047.825:96): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:27.857471 kernel: loop: module loaded
Dec 13 14:27:27.857492 systemd[1]: Reached target network-pre.target.
Dec 13 14:27:27.857512 kernel: audit: type=1130 audit(1734100047.829:97): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:27.857534 systemd-journald[1435]: Journal started
Dec 13 14:27:27.857607 systemd-journald[1435]: Runtime Journal (/run/log/journal/ec25436fcf574512b8523448b1bca7a2) is 4.8M, max 38.7M, 33.9M free.
Dec 13 14:27:27.496000 audit[1]: AVC avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Dec 13 14:27:27.496000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1
Dec 13 14:27:27.794000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:27.872453 systemd[1]: Mounting sys-kernel-config.mount...
Dec 13 14:27:27.804000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:27.804000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:27.816000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:27.816000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:27.825000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:27.825000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:27.829000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:27.829000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:27.836000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Dec 13 14:27:27.836000 audit[1435]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=3 a1=7ffea3cf7510 a2=4000 a3=7ffea3cf75ac items=0 ppid=1 pid=1435 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 14:27:27.836000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
Dec 13 14:27:27.839000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:27.852000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:27.877521 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Dec 13 14:27:27.898066 systemd[1]: Starting systemd-hwdb-update.service...
Dec 13 14:27:27.898238 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 14:27:27.898275 systemd[1]: Starting systemd-random-seed.service...
Dec 13 14:27:27.898300 systemd[1]: Started systemd-journald.service.
Dec 13 14:27:27.898320 kernel: fuse: init (API version 7.34)
Dec 13 14:27:27.901000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:27.907314 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 14:27:27.906000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:27.906000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:27.908000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:27.912000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:27.912000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:27.907582 systemd[1]: Finished modprobe@loop.service.
Dec 13 14:27:27.909297 systemd[1]: Finished systemd-modules-load.service.
Dec 13 14:27:27.910824 systemd[1]: Mounted sys-kernel-config.mount.
Dec 13 14:27:27.912706 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Dec 13 14:27:27.913200 systemd[1]: Finished modprobe@fuse.service.
Dec 13 14:27:27.916571 systemd[1]: Mounting sys-fs-fuse-connections.mount...
Dec 13 14:27:27.923025 systemd[1]: Starting systemd-journal-flush.service...
Dec 13 14:27:27.924178 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Dec 13 14:27:27.930140 systemd[1]: Starting systemd-sysctl.service...
Dec 13 14:27:27.933663 systemd[1]: Mounted sys-fs-fuse-connections.mount.
Dec 13 14:27:27.951293 systemd[1]: Finished systemd-random-seed.service.
Dec 13 14:27:27.950000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:27.954811 systemd[1]: Reached target first-boot-complete.target.
Dec 13 14:27:27.962209 systemd-journald[1435]: Time spent on flushing to /var/log/journal/ec25436fcf574512b8523448b1bca7a2 is 77.932ms for 1147 entries.
Dec 13 14:27:27.962209 systemd-journald[1435]: System Journal (/var/log/journal/ec25436fcf574512b8523448b1bca7a2) is 8.0M, max 195.6M, 187.6M free.
Dec 13 14:27:28.060541 systemd-journald[1435]: Received client request to flush runtime journal.
Dec 13 14:27:28.000000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:28.001495 systemd[1]: Finished systemd-sysctl.service.
Dec 13 14:27:28.061895 systemd[1]: Finished systemd-journal-flush.service.
Dec 13 14:27:28.061000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:28.071000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:28.071801 systemd[1]: Finished flatcar-tmpfiles.service.
Dec 13 14:27:28.075393 systemd[1]: Starting systemd-sysusers.service...
Dec 13 14:27:28.077000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:28.077456 systemd[1]: Finished systemd-udev-trigger.service.
Dec 13 14:27:28.080674 systemd[1]: Starting systemd-udev-settle.service...
Dec 13 14:27:28.109064 udevadm[1491]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Dec 13 14:27:28.231496 systemd[1]: Finished systemd-sysusers.service.
Dec 13 14:27:28.230000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:28.238180 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Dec 13 14:27:28.348770 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Dec 13 14:27:28.348000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:28.874226 systemd[1]: Finished systemd-hwdb-update.service.
Dec 13 14:27:28.876000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:28.879994 systemd[1]: Starting systemd-udevd.service...
Dec 13 14:27:28.931894 systemd-udevd[1498]: Using default interface naming scheme 'v252'.
Dec 13 14:27:29.011788 systemd[1]: Started systemd-udevd.service.
Dec 13 14:27:29.010000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:29.016442 systemd[1]: Starting systemd-networkd.service...
Dec 13 14:27:29.031562 systemd[1]: Starting systemd-userdbd.service...
Dec 13 14:27:29.086853 systemd[1]: Found device dev-ttyS0.device.
Dec 13 14:27:29.098307 (udev-worker)[1506]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 14:27:29.157044 systemd[1]: Started systemd-userdbd.service.
Dec 13 14:27:29.156000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:29.210941 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Dec 13 14:27:29.214929 kernel: ACPI: button: Power Button [PWRF]
Dec 13 14:27:29.222976 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input4
Dec 13 14:27:29.223078 kernel: ACPI: button: Sleep Button [SLPF]
Dec 13 14:27:29.318000 audit[1499]: AVC avc: denied { confidentiality } for pid=1499 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1
Dec 13 14:27:29.318000 audit[1499]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=564106ae0c00 a1=337fc a2=7fb4b098fbc5 a3=5 items=110 ppid=1498 pid=1499 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 14:27:29.318000 audit: CWD cwd="/"
Dec 13 14:27:29.318000 audit: PATH item=0 name=(null) inode=1042 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:27:29.318000 audit: PATH item=1 name=(null) inode=15040 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:27:29.318000 audit: PATH item=2 name=(null) inode=15040 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:27:29.318000 audit: PATH item=3 name=(null) inode=15041 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:27:29.318000 audit: PATH item=4 name=(null) inode=15040 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:27:29.318000 audit: PATH item=5 name=(null) inode=15042 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:27:29.318000 audit: PATH item=6 name=(null) inode=15040 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:27:29.318000 audit: PATH item=7 name=(null) inode=15043 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:27:29.318000 audit: PATH item=8 name=(null) inode=15043 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:27:29.318000 audit: PATH item=9 name=(null) inode=15044 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:27:29.318000 audit: PATH item=10 name=(null) inode=15043 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:27:29.318000 audit: PATH item=11 name=(null) inode=15045 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:27:29.318000 audit: PATH item=12 name=(null) inode=15043 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:27:29.318000 audit: PATH item=13 name=(null) inode=15046 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:27:29.318000 audit: PATH item=14 name=(null) inode=15043 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:27:29.318000 audit: PATH item=15 name=(null) inode=15047 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:27:29.318000 audit: PATH item=16 name=(null) inode=15043 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:27:29.318000 audit: PATH item=17 name=(null) inode=15048 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:27:29.318000 audit: PATH item=18 name=(null) inode=15040 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:27:29.318000 audit: PATH item=19 name=(null) inode=15049 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:27:29.318000 audit: PATH item=20 name=(null) inode=15049 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:27:29.318000 audit: PATH item=21 name=(null) inode=15050 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:27:29.318000 audit: PATH item=22 name=(null) inode=15049 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:27:29.318000 audit: PATH item=23 name=(null) inode=15051 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:27:29.318000 audit: PATH item=24 name=(null) inode=15049 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:27:29.318000 audit: PATH item=25 name=(null) inode=15052 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:27:29.318000 audit: PATH item=26 name=(null) inode=15049 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:27:29.318000 audit: PATH item=27 name=(null) inode=15053 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:27:29.318000 audit: PATH item=28 name=(null) inode=15049 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:27:29.318000 audit: PATH item=29 name=(null) inode=15054 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:27:29.318000 audit: PATH item=30 name=(null) inode=15040 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:27:29.318000 audit: PATH item=31 name=(null) inode=15055 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:27:29.318000 audit: PATH item=32 name=(null) inode=15055 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:27:29.318000 audit: PATH item=33 name=(null) inode=15056 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:27:29.318000 audit: PATH item=34 name=(null) inode=15055 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:27:29.318000 audit: PATH item=35 name=(null) inode=15057 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:27:29.318000 audit: PATH item=36 name=(null) inode=15055 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:27:29.318000 audit: PATH item=37 name=(null) inode=15058 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:27:29.318000 audit: PATH item=38 name=(null) inode=15055 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:27:29.318000 audit: PATH item=39 name=(null) inode=15059 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:27:29.318000 audit: PATH item=40 name=(null) inode=15055 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:27:29.318000 audit: PATH item=41 name=(null) inode=15060 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:27:29.318000 audit: PATH item=42 name=(null) inode=15040 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:27:29.318000 audit: PATH item=43 name=(null) inode=15061 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:27:29.318000 audit: PATH item=44 name=(null) inode=15061 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:27:29.318000 audit: PATH item=45 name=(null) inode=15062 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:27:29.318000 audit: PATH item=46 name=(null) inode=15061 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0
nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:29.318000 audit: PATH item=47 name=(null) inode=15063 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:29.318000 audit: PATH item=48 name=(null) inode=15061 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:29.318000 audit: PATH item=49 name=(null) inode=15064 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:29.318000 audit: PATH item=50 name=(null) inode=15061 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:29.318000 audit: PATH item=51 name=(null) inode=15065 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:29.318000 audit: PATH item=52 name=(null) inode=15061 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:29.318000 audit: PATH item=53 name=(null) inode=15066 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:29.318000 audit: PATH item=54 name=(null) inode=1042 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:29.318000 audit: PATH item=55 name=(null) inode=15067 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 
cap_fver=0 cap_frootid=0 Dec 13 14:27:29.318000 audit: PATH item=56 name=(null) inode=15067 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:29.318000 audit: PATH item=57 name=(null) inode=15068 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:29.318000 audit: PATH item=58 name=(null) inode=15067 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:29.318000 audit: PATH item=59 name=(null) inode=15069 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:29.318000 audit: PATH item=60 name=(null) inode=15067 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:29.318000 audit: PATH item=61 name=(null) inode=15070 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:29.318000 audit: PATH item=62 name=(null) inode=15070 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:29.318000 audit: PATH item=63 name=(null) inode=15071 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:29.318000 audit: PATH item=64 name=(null) inode=15070 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 
14:27:29.318000 audit: PATH item=65 name=(null) inode=15072 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:29.318000 audit: PATH item=66 name=(null) inode=15070 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:29.318000 audit: PATH item=67 name=(null) inode=15073 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:29.318000 audit: PATH item=68 name=(null) inode=15070 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:29.318000 audit: PATH item=69 name=(null) inode=15074 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:29.318000 audit: PATH item=70 name=(null) inode=15070 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:29.318000 audit: PATH item=71 name=(null) inode=15075 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:29.318000 audit: PATH item=72 name=(null) inode=15067 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:29.318000 audit: PATH item=73 name=(null) inode=15076 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:29.318000 audit: PATH item=74 
name=(null) inode=15076 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:29.318000 audit: PATH item=75 name=(null) inode=15077 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:29.318000 audit: PATH item=76 name=(null) inode=15076 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:29.318000 audit: PATH item=77 name=(null) inode=15078 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:29.318000 audit: PATH item=78 name=(null) inode=15076 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:29.318000 audit: PATH item=79 name=(null) inode=15079 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:29.318000 audit: PATH item=80 name=(null) inode=15076 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:29.318000 audit: PATH item=81 name=(null) inode=15080 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:29.318000 audit: PATH item=82 name=(null) inode=15076 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:29.318000 audit: PATH item=83 name=(null) inode=15081 dev=00:0b 
mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:29.318000 audit: PATH item=84 name=(null) inode=15067 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:29.318000 audit: PATH item=85 name=(null) inode=15082 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:29.318000 audit: PATH item=86 name=(null) inode=15082 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:29.318000 audit: PATH item=87 name=(null) inode=15083 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:29.318000 audit: PATH item=88 name=(null) inode=15082 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:29.318000 audit: PATH item=89 name=(null) inode=15084 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:29.318000 audit: PATH item=90 name=(null) inode=15082 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:29.318000 audit: PATH item=91 name=(null) inode=15085 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:29.318000 audit: PATH item=92 name=(null) inode=15082 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:29.318000 audit: PATH item=93 name=(null) inode=15086 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:29.318000 audit: PATH item=94 name=(null) inode=15082 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:29.318000 audit: PATH item=95 name=(null) inode=15087 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:29.318000 audit: PATH item=96 name=(null) inode=15067 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:29.318000 audit: PATH item=97 name=(null) inode=15088 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:29.318000 audit: PATH item=98 name=(null) inode=15088 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:29.318000 audit: PATH item=99 name=(null) inode=15089 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:29.318000 audit: PATH item=100 name=(null) inode=15088 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:29.318000 audit: PATH item=101 name=(null) inode=15090 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:29.318000 audit: PATH item=102 name=(null) inode=15088 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:29.318000 audit: PATH item=103 name=(null) inode=15091 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:29.318000 audit: PATH item=104 name=(null) inode=15088 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:29.318000 audit: PATH item=105 name=(null) inode=15092 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:29.318000 audit: PATH item=106 name=(null) inode=15088 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:29.318000 audit: PATH item=107 name=(null) inode=15093 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:29.318000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:29.318000 audit: PATH item=109 name=(null) inode=15094 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:27:29.318000 audit: PROCTITLE proctitle="(udev-worker)" Dec 13 14:27:29.350964 systemd-networkd[1507]: lo: Link UP Dec 13 14:27:29.350976 systemd-networkd[1507]: lo: Gained 
carrier Dec 13 14:27:29.351579 systemd-networkd[1507]: Enumeration completed Dec 13 14:27:29.351749 systemd[1]: Started systemd-networkd.service. Dec 13 14:27:29.353128 systemd-networkd[1507]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 14:27:29.351000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:29.357559 systemd[1]: Starting systemd-networkd-wait-online.service... Dec 13 14:27:29.363118 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Dec 13 14:27:29.363459 systemd-networkd[1507]: eth0: Link UP Dec 13 14:27:29.363692 systemd-networkd[1507]: eth0: Gained carrier Dec 13 14:27:29.386098 systemd-networkd[1507]: eth0: DHCPv4 address 172.31.23.203/20, gateway 172.31.16.1 acquired from 172.31.16.1 Dec 13 14:27:29.390954 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0xb100, revision 255 Dec 13 14:27:29.425957 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input5 Dec 13 14:27:29.440934 kernel: mousedev: PS/2 mouse device common for all mice Dec 13 14:27:29.519184 kernel: BTRFS info: devid 1 device path /dev/disk/by-label/OEM changed to /dev/nvme0n1p6 scanned by (udev-worker) (1501) Dec 13 14:27:29.670057 systemd[1]: dev-disk-by\x2dlabel-OEM.device was skipped because of an unmet condition check (ConditionPathExists=!/usr/.noupdate). Dec 13 14:27:29.717666 systemd[1]: Finished systemd-udev-settle.service. Dec 13 14:27:29.717000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:29.720869 systemd[1]: Starting lvm2-activation-early.service... Dec 13 14:27:29.806948 lvm[1613]: WARNING: Failed to connect to lvmetad. 
Falling back to device scanning. Dec 13 14:27:29.838785 systemd[1]: Finished lvm2-activation-early.service. Dec 13 14:27:29.838000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:29.840928 systemd[1]: Reached target cryptsetup.target. Dec 13 14:27:29.844219 systemd[1]: Starting lvm2-activation.service... Dec 13 14:27:29.852029 lvm[1615]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 14:27:29.875333 systemd[1]: Finished lvm2-activation.service. Dec 13 14:27:29.875000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:29.876652 systemd[1]: Reached target local-fs-pre.target. Dec 13 14:27:29.878163 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 13 14:27:29.878191 systemd[1]: Reached target local-fs.target. Dec 13 14:27:29.879096 systemd[1]: Reached target machines.target. Dec 13 14:27:29.885313 systemd[1]: Starting ldconfig.service... Dec 13 14:27:29.887424 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 14:27:29.887509 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:27:29.888861 systemd[1]: Starting systemd-boot-update.service... Dec 13 14:27:29.891450 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Dec 13 14:27:29.894551 systemd[1]: Starting systemd-machine-id-commit.service... Dec 13 14:27:29.897920 systemd[1]: Starting systemd-sysext.service... 
Dec 13 14:27:29.937256 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1618 (bootctl) Dec 13 14:27:29.940639 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Dec 13 14:27:29.961688 systemd[1]: Unmounting usr-share-oem.mount... Dec 13 14:27:29.971068 systemd[1]: usr-share-oem.mount: Deactivated successfully. Dec 13 14:27:29.971412 systemd[1]: Unmounted usr-share-oem.mount. Dec 13 14:27:29.974181 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Dec 13 14:27:29.972000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:30.006933 kernel: loop0: detected capacity change from 0 to 211296 Dec 13 14:27:30.209458 systemd-fsck[1631]: fsck.fat 4.2 (2021-01-31) Dec 13 14:27:30.209458 systemd-fsck[1631]: /dev/nvme0n1p1: 789 files, 119291/258078 clusters Dec 13 14:27:30.212258 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Dec 13 14:27:30.213000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:30.216918 systemd[1]: Mounting boot.mount... Dec 13 14:27:30.247985 systemd[1]: Mounted boot.mount. Dec 13 14:27:30.278944 systemd[1]: Finished systemd-boot-update.service. Dec 13 14:27:30.279000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:27:30.442928 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 13 14:27:30.470933 kernel: loop1: detected capacity change from 0 to 211296 Dec 13 14:27:30.497588 (sd-sysext)[1650]: Using extensions 'kubernetes'. Dec 13 14:27:30.498197 (sd-sysext)[1650]: Merged extensions into '/usr'. Dec 13 14:27:30.549471 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 14:27:30.553998 systemd[1]: Mounting usr-share-oem.mount... Dec 13 14:27:30.560034 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 14:27:30.568754 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 14:27:30.589614 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 14:27:30.605338 systemd[1]: Starting modprobe@loop.service... Dec 13 14:27:30.608547 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 14:27:30.609191 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:27:30.609667 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 14:27:30.616961 systemd[1]: Mounted usr-share-oem.mount. Dec 13 14:27:30.619538 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 14:27:30.620118 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 14:27:30.620000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:30.621000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Dec 13 14:27:30.624771 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 14:27:30.625638 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 14:27:30.627000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:30.627000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:30.631492 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 14:27:30.633248 systemd[1]: Finished modprobe@loop.service. Dec 13 14:27:30.635000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:30.635000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:30.638069 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 14:27:30.638190 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 14:27:30.640742 systemd[1]: Finished systemd-sysext.service. Dec 13 14:27:30.641000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:27:30.644387 systemd[1]: Starting ensure-sysext.service... 
Dec 13 14:27:30.648797 systemd[1]: Starting systemd-tmpfiles-setup.service... Dec 13 14:27:30.676566 systemd[1]: Reloading. Dec 13 14:27:30.692822 systemd-tmpfiles[1664]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Dec 13 14:27:30.697966 systemd-tmpfiles[1664]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 13 14:27:30.706489 systemd-tmpfiles[1664]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Dec 13 14:27:30.795749 /usr/lib/systemd/system-generators/torcx-generator[1686]: time="2024-12-13T14:27:30Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 14:27:30.795790 /usr/lib/systemd/system-generators/torcx-generator[1686]: time="2024-12-13T14:27:30Z" level=info msg="torcx already run" Dec 13 14:27:31.077893 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 14:27:31.077943 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 14:27:31.112318 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 14:27:31.208018 systemd-networkd[1507]: eth0: Gained IPv6LL Dec 13 14:27:31.227241 systemd[1]: Finished systemd-networkd-wait-online.service. Dec 13 14:27:31.227000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success'
Dec 13 14:27:31.232625 systemd[1]: Finished systemd-tmpfiles-setup.service.
Dec 13 14:27:31.235000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:31.249496 systemd[1]: Starting audit-rules.service...
Dec 13 14:27:31.266217 systemd[1]: Starting clean-ca-certificates.service...
Dec 13 14:27:31.270148 systemd[1]: Starting systemd-journal-catalog-update.service...
Dec 13 14:27:31.280639 systemd[1]: Starting systemd-resolved.service...
Dec 13 14:27:31.288048 systemd[1]: Starting systemd-timesyncd.service...
Dec 13 14:27:31.291800 systemd[1]: Starting systemd-update-utmp.service...
Dec 13 14:27:31.305320 systemd[1]: Finished clean-ca-certificates.service.
Dec 13 14:27:31.305000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:31.307532 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Dec 13 14:27:31.314630 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 14:27:31.315096 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Dec 13 14:27:31.318472 systemd[1]: Starting modprobe@dm_mod.service...
Dec 13 14:27:31.322249 systemd[1]: Starting modprobe@efi_pstore.service...
Dec 13 14:27:31.326287 systemd[1]: Starting modprobe@loop.service...
Dec 13 14:27:31.330000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:31.330000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:31.328128 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Dec 13 14:27:31.328326 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 14:27:31.328489 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Dec 13 14:27:31.328600 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 14:27:31.330200 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 14:27:31.330561 systemd[1]: Finished modprobe@dm_mod.service.
Dec 13 14:27:31.332482 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 14:27:31.332727 systemd[1]: Finished modprobe@efi_pstore.service.
Dec 13 14:27:31.333000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:31.333000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:31.335654 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 14:27:31.341996 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 14:27:31.345000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:31.345000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:31.345168 systemd[1]: Finished modprobe@loop.service.
Dec 13 14:27:31.349071 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 14:27:31.349548 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Dec 13 14:27:31.354765 systemd[1]: Starting modprobe@dm_mod.service...
Dec 13 14:27:31.360655 systemd[1]: Starting modprobe@efi_pstore.service...
Dec 13 14:27:31.361753 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Dec 13 14:27:31.362001 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 14:27:31.362170 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Dec 13 14:27:31.362280 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 14:27:31.371040 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 14:27:31.373394 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Dec 13 14:27:31.378101 systemd[1]: Starting modprobe@drm.service...
Dec 13 14:27:31.380000 audit[1756]: SYSTEM_BOOT pid=1756 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:31.385417 systemd[1]: Starting modprobe@loop.service...
Dec 13 14:27:31.386847 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Dec 13 14:27:31.387073 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 14:27:31.387298 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Dec 13 14:27:31.388040 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 14:27:31.395321 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 14:27:31.395722 systemd[1]: Finished modprobe@dm_mod.service.
Dec 13 14:27:31.395000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:31.395000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:31.397718 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 14:27:31.397970 systemd[1]: Finished modprobe@efi_pstore.service.
Dec 13 14:27:31.397000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:31.397000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:31.402602 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 14:27:31.407291 systemd[1]: Finished ensure-sysext.service.
Dec 13 14:27:31.407000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:31.409032 systemd[1]: Finished systemd-update-utmp.service.
Dec 13 14:27:31.409000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:31.417445 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 13 14:27:31.417000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:31.417000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:31.417702 systemd[1]: Finished modprobe@drm.service.
Dec 13 14:27:31.437959 systemd[1]: Finished systemd-journal-catalog-update.service.
Dec 13 14:27:31.438000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:31.439662 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 14:27:31.439884 systemd[1]: Finished modprobe@loop.service.
Dec 13 14:27:31.439000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:31.439000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:27:31.441174 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Dec 13 14:27:31.495466 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Dec 13 14:27:31.494000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Dec 13 14:27:31.494000 audit[1785]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffc6fc20050 a2=420 a3=0 items=0 ppid=1745 pid=1785 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 14:27:31.494000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Dec 13 14:27:31.498309 augenrules[1785]: No rules
Dec 13 14:27:31.496563 systemd[1]: Finished systemd-machine-id-commit.service.
Dec 13 14:27:31.498813 systemd[1]: Finished audit-rules.service.
Dec 13 14:27:31.546162 systemd-resolved[1749]: Positive Trust Anchors:
Dec 13 14:27:31.546686 systemd-resolved[1749]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 13 14:27:31.546806 systemd-resolved[1749]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Dec 13 14:27:31.559974 systemd[1]: Started systemd-timesyncd.service.
Dec 13 14:27:31.561289 systemd[1]: Reached target time-set.target.
Dec 13 14:27:31.578946 systemd-resolved[1749]: Defaulting to hostname 'linux'.
Dec 13 14:27:31.580829 systemd[1]: Started systemd-resolved.service.
Dec 13 14:27:31.582820 systemd[1]: Reached target network.target.
Dec 13 14:27:31.584423 systemd[1]: Reached target network-online.target.
Dec 13 14:27:31.586009 systemd[1]: Reached target nss-lookup.target.
Dec 13 14:27:32.164515 systemd-timesyncd[1752]: Contacted time server 173.255.255.133:123 (0.flatcar.pool.ntp.org).
Dec 13 14:27:32.164520 systemd-resolved[1749]: Clock change detected. Flushing caches.
Dec 13 14:27:32.165129 systemd-timesyncd[1752]: Initial clock synchronization to Fri 2024-12-13 14:27:32.164357 UTC.
Dec 13 14:27:32.187259 ldconfig[1617]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Dec 13 14:27:32.199372 systemd[1]: Finished ldconfig.service.
Dec 13 14:27:32.203410 systemd[1]: Starting systemd-update-done.service...
Dec 13 14:27:32.215015 systemd[1]: Finished systemd-update-done.service.
Dec 13 14:27:32.217021 systemd[1]: Reached target sysinit.target.
Dec 13 14:27:32.218360 systemd[1]: Started motdgen.path.
Dec 13 14:27:32.219635 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path.
Dec 13 14:27:32.221323 systemd[1]: Started logrotate.timer.
Dec 13 14:27:32.222490 systemd[1]: Started mdadm.timer.
Dec 13 14:27:32.223420 systemd[1]: Started systemd-tmpfiles-clean.timer.
Dec 13 14:27:32.224967 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Dec 13 14:27:32.225007 systemd[1]: Reached target paths.target.
Dec 13 14:27:32.226251 systemd[1]: Reached target timers.target.
Dec 13 14:27:32.227662 systemd[1]: Listening on dbus.socket.
Dec 13 14:27:32.230519 systemd[1]: Starting docker.socket...
Dec 13 14:27:32.233957 systemd[1]: Listening on sshd.socket.
Dec 13 14:27:32.235065 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 14:27:32.235912 systemd[1]: Listening on docker.socket.
Dec 13 14:27:32.237131 systemd[1]: Reached target sockets.target.
Dec 13 14:27:32.238050 systemd[1]: Reached target basic.target.
Dec 13 14:27:32.239077 systemd[1]: System is tainted: cgroupsv1
Dec 13 14:27:32.239137 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met.
Dec 13 14:27:32.239171 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met.
Dec 13 14:27:32.241672 systemd[1]: Started amazon-ssm-agent.service.
Dec 13 14:27:32.245937 systemd[1]: Starting containerd.service...
Dec 13 14:27:32.249097 systemd[1]: Starting coreos-metadata-sshkeys@core.service...
Dec 13 14:27:32.252550 systemd[1]: Starting dbus.service...
Dec 13 14:27:32.255013 systemd[1]: Starting enable-oem-cloudinit.service...
Dec 13 14:27:32.258485 systemd[1]: Starting extend-filesystems.service...
Dec 13 14:27:32.259537 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment).
Dec 13 14:27:32.262973 systemd[1]: Starting kubelet.service...
Dec 13 14:27:32.275966 systemd[1]: Starting motdgen.service...
Dec 13 14:27:32.279586 systemd[1]: Started nvidia.service.
Dec 13 14:27:32.287585 systemd[1]: Starting prepare-helm.service...
Dec 13 14:27:32.349422 jq[1804]: false
Dec 13 14:27:32.309187 systemd[1]: Starting ssh-key-proc-cmdline.service...
Dec 13 14:27:32.318122 systemd[1]: Starting sshd-keygen.service...
Dec 13 14:27:32.323455 systemd[1]: Starting systemd-logind.service...
Dec 13 14:27:32.324733 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 14:27:32.324883 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Dec 13 14:27:32.327720 systemd[1]: Starting update-engine.service...
Dec 13 14:27:32.333896 systemd[1]: Starting update-ssh-keys-after-ignition.service...
Dec 13 14:27:32.359867 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Dec 13 14:27:32.482136 jq[1819]: true
Dec 13 14:27:32.360681 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped.
Dec 13 14:27:32.500688 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Dec 13 14:27:32.501134 systemd[1]: Finished ssh-key-proc-cmdline.service.
Dec 13 14:27:32.515311 tar[1824]: linux-amd64/helm
Dec 13 14:27:32.597798 jq[1836]: true
Dec 13 14:27:32.603596 dbus-daemon[1803]: [system] SELinux support is enabled
Dec 13 14:27:32.605218 systemd[1]: Started dbus.service.
Dec 13 14:27:32.611516 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Dec 13 14:27:32.611548 systemd[1]: Reached target system-config.target.
Dec 13 14:27:32.613078 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Dec 13 14:27:32.613107 systemd[1]: Reached target user-config.target.
Dec 13 14:27:32.625105 systemd[1]: motdgen.service: Deactivated successfully.
Dec 13 14:27:32.625422 systemd[1]: Finished motdgen.service.
Dec 13 14:27:32.649279 extend-filesystems[1805]: Found loop1
Dec 13 14:27:32.649279 extend-filesystems[1805]: Found nvme0n1
Dec 13 14:27:32.649279 extend-filesystems[1805]: Found nvme0n1p1
Dec 13 14:27:32.649279 extend-filesystems[1805]: Found nvme0n1p2
Dec 13 14:27:32.649279 extend-filesystems[1805]: Found nvme0n1p3
Dec 13 14:27:32.649279 extend-filesystems[1805]: Found usr
Dec 13 14:27:32.649279 extend-filesystems[1805]: Found nvme0n1p4
Dec 13 14:27:32.649279 extend-filesystems[1805]: Found nvme0n1p6
Dec 13 14:27:32.649279 extend-filesystems[1805]: Found nvme0n1p7
Dec 13 14:27:32.649279 extend-filesystems[1805]: Found nvme0n1p9
Dec 13 14:27:32.649279 extend-filesystems[1805]: Checking size of /dev/nvme0n1p9
Dec 13 14:27:32.689436 dbus-daemon[1803]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1507 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Dec 13 14:27:32.696127 systemd[1]: Starting systemd-hostnamed.service...
Dec 13 14:27:32.741512 amazon-ssm-agent[1799]: 2024/12/13 14:27:32 Failed to load instance info from vault. RegistrationKey does not exist.
Dec 13 14:27:32.742367 extend-filesystems[1805]: Resized partition /dev/nvme0n1p9
Dec 13 14:27:32.746636 amazon-ssm-agent[1799]: Initializing new seelog logger
Dec 13 14:27:32.748385 amazon-ssm-agent[1799]: New Seelog Logger Creation Complete
Dec 13 14:27:32.749363 amazon-ssm-agent[1799]: 2024/12/13 14:27:32 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Dec 13 14:27:32.749784 amazon-ssm-agent[1799]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Dec 13 14:27:32.750177 amazon-ssm-agent[1799]: 2024/12/13 14:27:32 processing appconfig overrides
Dec 13 14:27:32.767476 extend-filesystems[1879]: resize2fs 1.46.5 (30-Dec-2021)
Dec 13 14:27:32.782009 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks
Dec 13 14:27:32.786882 bash[1875]: Updated "/home/core/.ssh/authorized_keys"
Dec 13 14:27:32.788077 systemd[1]: Finished update-ssh-keys-after-ignition.service.
Dec 13 14:27:32.858629 env[1828]: time="2024-12-13T14:27:32.858536996Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16
Dec 13 14:27:32.871308 update_engine[1815]: I1213 14:27:32.870468 1815 main.cc:92] Flatcar Update Engine starting
Dec 13 14:27:32.876702 systemd[1]: Started update-engine.service.
Dec 13 14:27:32.880942 systemd[1]: Started locksmithd.service.
Dec 13 14:27:32.883036 update_engine[1815]: I1213 14:27:32.882875 1815 update_check_scheduler.cc:74] Next update check in 9m0s
Dec 13 14:27:32.974024 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915
Dec 13 14:27:33.041339 systemd[1]: nvidia.service: Deactivated successfully.
Dec 13 14:27:33.054495 extend-filesystems[1879]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required
Dec 13 14:27:33.054495 extend-filesystems[1879]: old_desc_blocks = 1, new_desc_blocks = 1
Dec 13 14:27:33.054495 extend-filesystems[1879]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long.
Dec 13 14:27:33.073202 extend-filesystems[1805]: Resized filesystem in /dev/nvme0n1p9
Dec 13 14:27:33.056855 systemd[1]: extend-filesystems.service: Deactivated successfully.
Dec 13 14:27:33.057275 systemd[1]: Finished extend-filesystems.service.
Dec 13 14:27:33.179036 systemd-logind[1813]: Watching system buttons on /dev/input/event1 (Power Button)
Dec 13 14:27:33.179069 systemd-logind[1813]: Watching system buttons on /dev/input/event2 (Sleep Button)
Dec 13 14:27:33.179094 systemd-logind[1813]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Dec 13 14:27:33.183322 systemd-logind[1813]: New seat seat0.
Dec 13 14:27:33.187740 env[1828]: time="2024-12-13T14:27:33.187650831Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Dec 13 14:27:33.189311 env[1828]: time="2024-12-13T14:27:33.188072026Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Dec 13 14:27:33.192396 systemd[1]: Started systemd-logind.service.
Dec 13 14:27:33.206382 env[1828]: time="2024-12-13T14:27:33.206318341Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.173-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Dec 13 14:27:33.206382 env[1828]: time="2024-12-13T14:27:33.206379658Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Dec 13 14:27:33.206814 env[1828]: time="2024-12-13T14:27:33.206773777Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 14:27:33.206814 env[1828]: time="2024-12-13T14:27:33.206810483Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Dec 13 14:27:33.207095 env[1828]: time="2024-12-13T14:27:33.206903335Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Dec 13 14:27:33.207095 env[1828]: time="2024-12-13T14:27:33.206923111Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Dec 13 14:27:33.207095 env[1828]: time="2024-12-13T14:27:33.207056219Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Dec 13 14:27:33.207458 env[1828]: time="2024-12-13T14:27:33.207430647Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Dec 13 14:27:33.207929 env[1828]: time="2024-12-13T14:27:33.207887558Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 14:27:33.208200 env[1828]: time="2024-12-13T14:27:33.207928766Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Dec 13 14:27:33.208200 env[1828]: time="2024-12-13T14:27:33.208037524Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Dec 13 14:27:33.208320 env[1828]: time="2024-12-13T14:27:33.208193821Z" level=info msg="metadata content store policy set" policy=shared
Dec 13 14:27:33.228499 env[1828]: time="2024-12-13T14:27:33.228026089Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Dec 13 14:27:33.228499 env[1828]: time="2024-12-13T14:27:33.228372419Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Dec 13 14:27:33.228499 env[1828]: time="2024-12-13T14:27:33.228401212Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Dec 13 14:27:33.228499 env[1828]: time="2024-12-13T14:27:33.228481867Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Dec 13 14:27:33.228761 env[1828]: time="2024-12-13T14:27:33.228521076Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Dec 13 14:27:33.228761 env[1828]: time="2024-12-13T14:27:33.228549211Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Dec 13 14:27:33.228761 env[1828]: time="2024-12-13T14:27:33.228567402Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Dec 13 14:27:33.228761 env[1828]: time="2024-12-13T14:27:33.228585932Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Dec 13 14:27:33.228761 env[1828]: time="2024-12-13T14:27:33.228603350Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
Dec 13 14:27:33.228761 env[1828]: time="2024-12-13T14:27:33.228625199Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Dec 13 14:27:33.228761 env[1828]: time="2024-12-13T14:27:33.228644288Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Dec 13 14:27:33.228761 env[1828]: time="2024-12-13T14:27:33.228664240Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Dec 13 14:27:33.229481 env[1828]: time="2024-12-13T14:27:33.228940057Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Dec 13 14:27:33.229481 env[1828]: time="2024-12-13T14:27:33.229290738Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Dec 13 14:27:33.229838 env[1828]: time="2024-12-13T14:27:33.229805980Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Dec 13 14:27:33.229909 env[1828]: time="2024-12-13T14:27:33.229854604Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Dec 13 14:27:33.229909 env[1828]: time="2024-12-13T14:27:33.229876981Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Dec 13 14:27:33.230007 env[1828]: time="2024-12-13T14:27:33.229961518Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Dec 13 14:27:33.230007 env[1828]: time="2024-12-13T14:27:33.229983313Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Dec 13 14:27:33.230096 env[1828]: time="2024-12-13T14:27:33.230017351Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Dec 13 14:27:33.230096 env[1828]: time="2024-12-13T14:27:33.230036029Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Dec 13 14:27:33.230096 env[1828]: time="2024-12-13T14:27:33.230056693Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Dec 13 14:27:33.230096 env[1828]: time="2024-12-13T14:27:33.230075852Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Dec 13 14:27:33.230308 env[1828]: time="2024-12-13T14:27:33.230096181Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Dec 13 14:27:33.230308 env[1828]: time="2024-12-13T14:27:33.230115205Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Dec 13 14:27:33.230308 env[1828]: time="2024-12-13T14:27:33.230137859Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Dec 13 14:27:33.231586 env[1828]: time="2024-12-13T14:27:33.230306429Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Dec 13 14:27:33.231586 env[1828]: time="2024-12-13T14:27:33.230328947Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Dec 13 14:27:33.231586 env[1828]: time="2024-12-13T14:27:33.230348956Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Dec 13 14:27:33.231586 env[1828]: time="2024-12-13T14:27:33.230368022Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Dec 13 14:27:33.231586 env[1828]: time="2024-12-13T14:27:33.231425825Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
Dec 13 14:27:33.231586 env[1828]: time="2024-12-13T14:27:33.231457163Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Dec 13 14:27:33.231838 env[1828]: time="2024-12-13T14:27:33.231613623Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin"
Dec 13 14:27:33.231838 env[1828]: time="2024-12-13T14:27:33.231697133Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Dec 13 14:27:33.232308 env[1828]: time="2024-12-13T14:27:33.232205263Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Dec 13 14:27:33.237088 env[1828]: time="2024-12-13T14:27:33.232408740Z" level=info msg="Connect containerd service"
Dec 13 14:27:33.237088 env[1828]: time="2024-12-13T14:27:33.232477636Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Dec 13 14:27:33.237088 env[1828]: time="2024-12-13T14:27:33.233586863Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Dec 13 14:27:33.237088 env[1828]: time="2024-12-13T14:27:33.234465110Z" level=info msg="Start subscribing containerd event"
Dec 13 14:27:33.237088 env[1828]: time="2024-12-13T14:27:33.234526636Z" level=info msg="Start recovering state"
Dec 13 14:27:33.237088 env[1828]: time="2024-12-13T14:27:33.234608262Z" level=info msg="Start event monitor"
Dec 13 14:27:33.237088 env[1828]: time="2024-12-13T14:27:33.234630824Z" level=info msg="Start snapshots syncer"
Dec 13 14:27:33.237088 env[1828]: time="2024-12-13T14:27:33.234644558Z" level=info msg="Start cni network conf syncer for default"
Dec 13 14:27:33.237088 env[1828]: time="2024-12-13T14:27:33.234655469Z" level=info msg="Start streaming server"
Dec 13 14:27:33.238802 env[1828]: time="2024-12-13T14:27:33.238769592Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Dec 13 14:27:33.238963 env[1828]: time="2024-12-13T14:27:33.238929115Z" level=info msg=serving... address=/run/containerd/containerd.sock
Dec 13 14:27:33.239286 env[1828]: time="2024-12-13T14:27:33.239047797Z" level=info msg="containerd successfully booted in 0.383017s"
Dec 13 14:27:33.239174 systemd[1]: Started containerd.service.
Dec 13 14:27:33.320609 dbus-daemon[1803]: [system] Successfully activated service 'org.freedesktop.hostname1'
Dec 13 14:27:33.320797 systemd[1]: Started systemd-hostnamed.service.
Dec 13 14:27:33.323904 dbus-daemon[1803]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1871 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Dec 13 14:27:33.331581 systemd[1]: Starting polkit.service...
Dec 13 14:27:33.374422 polkitd[1926]: Started polkitd version 121
Dec 13 14:27:33.408951 polkitd[1926]: Loading rules from directory /etc/polkit-1/rules.d
Dec 13 14:27:33.409730 polkitd[1926]: Loading rules from directory /usr/share/polkit-1/rules.d
Dec 13 14:27:33.418472 polkitd[1926]: Finished loading, compiling and executing 2 rules
Dec 13 14:27:33.419459 dbus-daemon[1803]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Dec 13 14:27:33.419737 systemd[1]: Started polkit.service.
Dec 13 14:27:33.423009 polkitd[1926]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Dec 13 14:27:33.453015 systemd-hostnamed[1871]: Hostname set to (transient)
Dec 13 14:27:33.453140 systemd-resolved[1749]: System hostname changed to 'ip-172-31-23-203'.
Dec 13 14:27:33.546163 coreos-metadata[1801]: Dec 13 14:27:33.537 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Dec 13 14:27:33.551131 coreos-metadata[1801]: Dec 13 14:27:33.551 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-keys: Attempt #1 Dec 13 14:27:33.553045 coreos-metadata[1801]: Dec 13 14:27:33.552 INFO Fetch successful Dec 13 14:27:33.553045 coreos-metadata[1801]: Dec 13 14:27:33.552 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-keys/0/openssh-key: Attempt #1 Dec 13 14:27:33.554066 coreos-metadata[1801]: Dec 13 14:27:33.554 INFO Fetch successful Dec 13 14:27:33.558105 unknown[1801]: wrote ssh authorized keys file for user: core Dec 13 14:27:33.607295 update-ssh-keys[1951]: Updated "/home/core/.ssh/authorized_keys" Dec 13 14:27:33.608718 systemd[1]: Finished coreos-metadata-sshkeys@core.service. Dec 13 14:27:33.836302 amazon-ssm-agent[1799]: 2024-12-13 14:27:33 INFO Create new startup processor Dec 13 14:27:33.837056 amazon-ssm-agent[1799]: 2024-12-13 14:27:33 INFO [LongRunningPluginsManager] registered plugins: {} Dec 13 14:27:33.837160 amazon-ssm-agent[1799]: 2024-12-13 14:27:33 INFO Initializing bookkeeping folders Dec 13 14:27:33.837220 amazon-ssm-agent[1799]: 2024-12-13 14:27:33 INFO removing the completed state files Dec 13 14:27:33.837276 amazon-ssm-agent[1799]: 2024-12-13 14:27:33 INFO Initializing bookkeeping folders for long running plugins Dec 13 14:27:33.837349 amazon-ssm-agent[1799]: 2024-12-13 14:27:33 INFO Initializing replies folder for MDS reply requests that couldn't reach the service Dec 13 14:27:33.837412 amazon-ssm-agent[1799]: 2024-12-13 14:27:33 INFO Initializing healthcheck folders for long running plugins Dec 13 14:27:33.837486 amazon-ssm-agent[1799]: 2024-12-13 14:27:33 INFO Initializing locations for inventory plugin Dec 13 14:27:33.837590 amazon-ssm-agent[1799]: 2024-12-13 14:27:33 INFO Initializing default location for custom inventory
Dec 13 14:27:33.837652 amazon-ssm-agent[1799]: 2024-12-13 14:27:33 INFO Initializing default location for file inventory Dec 13 14:27:33.837710 amazon-ssm-agent[1799]: 2024-12-13 14:27:33 INFO Initializing default location for role inventory Dec 13 14:27:33.837771 amazon-ssm-agent[1799]: 2024-12-13 14:27:33 INFO Init the cloudwatchlogs publisher Dec 13 14:27:33.837927 amazon-ssm-agent[1799]: 2024-12-13 14:27:33 INFO [instanceID=i-063acc8dceae67f8b] Successfully loaded platform independent plugin aws:downloadContent Dec 13 14:27:33.838070 amazon-ssm-agent[1799]: 2024-12-13 14:27:33 INFO [instanceID=i-063acc8dceae67f8b] Successfully loaded platform independent plugin aws:runDocument Dec 13 14:27:33.838134 amazon-ssm-agent[1799]: 2024-12-13 14:27:33 INFO [instanceID=i-063acc8dceae67f8b] Successfully loaded platform independent plugin aws:softwareInventory Dec 13 14:27:33.838193 amazon-ssm-agent[1799]: 2024-12-13 14:27:33 INFO [instanceID=i-063acc8dceae67f8b] Successfully loaded platform independent plugin aws:updateSsmAgent Dec 13 14:27:33.838251 amazon-ssm-agent[1799]: 2024-12-13 14:27:33 INFO [instanceID=i-063acc8dceae67f8b] Successfully loaded platform independent plugin aws:runDockerAction Dec 13 14:27:33.838313 amazon-ssm-agent[1799]: 2024-12-13 14:27:33 INFO [instanceID=i-063acc8dceae67f8b] Successfully loaded platform independent plugin aws:refreshAssociation Dec 13 14:27:33.838372 amazon-ssm-agent[1799]: 2024-12-13 14:27:33 INFO [instanceID=i-063acc8dceae67f8b] Successfully loaded platform independent plugin aws:configurePackage Dec 13 14:27:33.838430 amazon-ssm-agent[1799]: 2024-12-13 14:27:33 INFO [instanceID=i-063acc8dceae67f8b] Successfully loaded platform independent plugin aws:runPowerShellScript Dec 13 14:27:33.838492 amazon-ssm-agent[1799]: 2024-12-13 14:27:33 INFO [instanceID=i-063acc8dceae67f8b] Successfully loaded platform independent plugin aws:configureDocker
Dec 13 14:27:33.838551 amazon-ssm-agent[1799]: 2024-12-13 14:27:33 INFO [instanceID=i-063acc8dceae67f8b] Successfully loaded platform dependent plugin aws:runShellScript Dec 13 14:27:33.838610 amazon-ssm-agent[1799]: 2024-12-13 14:27:33 INFO Starting Agent: amazon-ssm-agent - v2.3.1319.0 Dec 13 14:27:33.838670 amazon-ssm-agent[1799]: 2024-12-13 14:27:33 INFO OS: linux, Arch: amd64 Dec 13 14:27:33.839827 amazon-ssm-agent[1799]: 2024-12-13 14:27:33 INFO [MessagingDeliveryService] Starting document processing engine... Dec 13 14:27:33.845197 amazon-ssm-agent[1799]: datastore file /var/lib/amazon/ssm/i-063acc8dceae67f8b/longrunningplugins/datastore/store doesn't exist - no long running plugins to execute Dec 13 14:27:33.957305 amazon-ssm-agent[1799]: 2024-12-13 14:27:33 INFO [MessagingDeliveryService] [EngineProcessor] Starting Dec 13 14:27:34.052417 amazon-ssm-agent[1799]: 2024-12-13 14:27:33 INFO [MessagingDeliveryService] [EngineProcessor] Initial processing Dec 13 14:27:34.148003 amazon-ssm-agent[1799]: 2024-12-13 14:27:33 INFO [MessagingDeliveryService] Starting message polling Dec 13 14:27:34.242682 amazon-ssm-agent[1799]: 2024-12-13 14:27:33 INFO [MessagingDeliveryService] Starting send replies to MDS Dec 13 14:27:34.338007 amazon-ssm-agent[1799]: 2024-12-13 14:27:33 INFO [instanceID=i-063acc8dceae67f8b] Starting association polling Dec 13 14:27:34.433163 amazon-ssm-agent[1799]: 2024-12-13 14:27:33 INFO [MessagingDeliveryService] [Association] [EngineProcessor] Starting Dec 13 14:27:34.495985 tar[1824]: linux-amd64/LICENSE Dec 13 14:27:34.496465 tar[1824]: linux-amd64/README.md Dec 13 14:27:34.509164 systemd[1]: Finished prepare-helm.service.
Dec 13 14:27:34.528436 amazon-ssm-agent[1799]: 2024-12-13 14:27:33 INFO [MessagingDeliveryService] [Association] Launching response handler Dec 13 14:27:34.624011 amazon-ssm-agent[1799]: 2024-12-13 14:27:33 INFO [MessagingDeliveryService] [Association] [EngineProcessor] Initial processing Dec 13 14:27:34.694648 sshd_keygen[1850]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 13 14:27:34.720392 amazon-ssm-agent[1799]: 2024-12-13 14:27:33 INFO [MessagingDeliveryService] [Association] Initializing association scheduling service Dec 13 14:27:34.730561 systemd[1]: Finished sshd-keygen.service. Dec 13 14:27:34.734466 systemd[1]: Starting issuegen.service... Dec 13 14:27:34.749455 systemd[1]: issuegen.service: Deactivated successfully. Dec 13 14:27:34.750022 systemd[1]: Finished issuegen.service. Dec 13 14:27:34.755486 systemd[1]: Starting systemd-user-sessions.service... Dec 13 14:27:34.775587 systemd[1]: Finished systemd-user-sessions.service. Dec 13 14:27:34.779855 systemd[1]: Started getty@tty1.service. Dec 13 14:27:34.786371 systemd[1]: Started serial-getty@ttyS0.service. Dec 13 14:27:34.788622 systemd[1]: Reached target getty.target. Dec 13 14:27:34.814336 locksmithd[1886]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 13 14:27:34.816910 amazon-ssm-agent[1799]: 2024-12-13 14:27:33 INFO [MessagingDeliveryService] [Association] Association scheduling service initialized Dec 13 14:27:34.912604 amazon-ssm-agent[1799]: 2024-12-13 14:27:33 INFO [HealthCheck] HealthCheck reporting agent health. Dec 13 14:27:35.009162 amazon-ssm-agent[1799]: 2024-12-13 14:27:33 INFO [MessageGatewayService] Starting session document processing engine... Dec 13 14:27:35.081181 systemd[1]: Started kubelet.service. Dec 13 14:27:35.083132 systemd[1]: Reached target multi-user.target. Dec 13 14:27:35.088904 systemd[1]: Starting systemd-update-utmp-runlevel.service... 
Dec 13 14:27:35.104675 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Dec 13 14:27:35.105133 systemd[1]: Finished systemd-update-utmp-runlevel.service. Dec 13 14:27:35.116804 amazon-ssm-agent[1799]: 2024-12-13 14:27:33 INFO [MessageGatewayService] [EngineProcessor] Starting Dec 13 14:27:35.119790 systemd[1]: Startup finished in 10.005s (kernel) + 13.708s (userspace) = 23.714s. Dec 13 14:27:35.213691 amazon-ssm-agent[1799]: 2024-12-13 14:27:33 INFO [MessageGatewayService] SSM Agent is trying to setup control channel for Session Manager module. Dec 13 14:27:35.310618 amazon-ssm-agent[1799]: 2024-12-13 14:27:33 INFO [MessageGatewayService] Setting up websocket for controlchannel for instance: i-063acc8dceae67f8b, requestId: 56c8f953-7f78-4601-90da-059a8a0b02c6 Dec 13 14:27:35.407560 amazon-ssm-agent[1799]: 2024-12-13 14:27:33 INFO [OfflineService] Starting document processing engine... Dec 13 14:27:35.504944 amazon-ssm-agent[1799]: 2024-12-13 14:27:33 INFO [OfflineService] [EngineProcessor] Starting Dec 13 14:27:35.602677 amazon-ssm-agent[1799]: 2024-12-13 14:27:33 INFO [OfflineService] [EngineProcessor] Initial processing Dec 13 14:27:35.700129 amazon-ssm-agent[1799]: 2024-12-13 14:27:33 INFO [OfflineService] Starting message polling Dec 13 14:27:35.797974 amazon-ssm-agent[1799]: 2024-12-13 14:27:33 INFO [OfflineService] Starting send replies to MDS Dec 13 14:27:35.896785 amazon-ssm-agent[1799]: 2024-12-13 14:27:33 INFO [MessageGatewayService] listening reply. 
Dec 13 14:27:35.994591 amazon-ssm-agent[1799]: 2024-12-13 14:27:33 INFO [LongRunningPluginsManager] starting long running plugin manager Dec 13 14:27:36.093439 amazon-ssm-agent[1799]: 2024-12-13 14:27:33 INFO [LongRunningPluginsManager] there aren't any long running plugin to execute Dec 13 14:27:36.192105 amazon-ssm-agent[1799]: 2024-12-13 14:27:33 INFO [LongRunningPluginsManager] There are no long running plugins currently getting executed - skipping their healthcheck Dec 13 14:27:36.208895 kubelet[2052]: E1213 14:27:36.208807 2052 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 14:27:36.216422 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 14:27:36.217565 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
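[Editor's note] The repeated kubelet failures above (here and at 14:27:36, 14:27:47, 14:27:58, 14:28:10) all have the same cause: `/var/lib/kubelet/config.yaml` does not exist yet. That file is normally written when the node is bootstrapped with `kubeadm init` or `kubeadm join`; until then systemd keeps cycling the unit per its restart policy, producing the `Scheduled restart job` entries seen later. A minimal sketch of the existence check behind the error — the `KUBELET_CONFIG` override is hypothetical, added only so the snippet is self-contained:

```shell
#!/bin/sh
# Reproduce the check the kubelet fails: does its config file exist?
# Default path is taken from the log; KUBELET_CONFIG is a hypothetical override.
cfg="${KUBELET_CONFIG:-/var/lib/kubelet/config.yaml}"
if [ -f "$cfg" ]; then
  status="present"
else
  status="missing"   # corresponds to "open ...: no such file or directory" above
fi
echo "kubelet config $cfg: $status"
```

On a node like this one the check reports `missing` until a provisioning step (e.g. `kubeadm join`) writes the file, at which point the next scheduled restart succeeds.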
Dec 13 14:27:36.291016 amazon-ssm-agent[1799]: 2024-12-13 14:27:33 INFO [StartupProcessor] Executing startup processor tasks Dec 13 14:27:36.390118 amazon-ssm-agent[1799]: 2024-12-13 14:27:33 INFO [StartupProcessor] Write to serial port: Amazon SSM Agent v2.3.1319.0 is running Dec 13 14:27:36.489411 amazon-ssm-agent[1799]: 2024-12-13 14:27:33 INFO [StartupProcessor] Write to serial port: OsProductName: Flatcar Container Linux by Kinvolk Dec 13 14:27:36.588797 amazon-ssm-agent[1799]: 2024-12-13 14:27:33 INFO [StartupProcessor] Write to serial port: OsVersion: 3510.3.6 Dec 13 14:27:36.688588 amazon-ssm-agent[1799]: 2024-12-13 14:27:33 INFO [MessageGatewayService] Opening websocket connection to: wss://ssmmessages.us-west-2.amazonaws.com/v1/control-channel/i-063acc8dceae67f8b?role=subscribe&stream=input Dec 13 14:27:36.788666 amazon-ssm-agent[1799]: 2024-12-13 14:27:34 INFO [MessageGatewayService] Successfully opened websocket connection to: wss://ssmmessages.us-west-2.amazonaws.com/v1/control-channel/i-063acc8dceae67f8b?role=subscribe&stream=input Dec 13 14:27:36.888670 amazon-ssm-agent[1799]: 2024-12-13 14:27:34 INFO [MessageGatewayService] Starting receiving message from control channel Dec 13 14:27:36.988911 amazon-ssm-agent[1799]: 2024-12-13 14:27:34 INFO [MessageGatewayService] [EngineProcessor] Initial processing Dec 13 14:27:37.484664 amazon-ssm-agent[1799]: 2024-12-13 14:27:37 INFO [MessagingDeliveryService] [Association] No associations on boot. Requerying for associations after 30 seconds. Dec 13 14:27:40.917821 systemd[1]: Created slice system-sshd.slice. Dec 13 14:27:40.921335 systemd[1]: Started sshd@0-172.31.23.203:22-139.178.89.65:41082.service. 
Dec 13 14:27:41.149650 sshd[2061]: Accepted publickey for core from 139.178.89.65 port 41082 ssh2: RSA SHA256:kjZzhLCfrUb6HP3VZI7nfxYjuxqu9bKyQNrCGPkPDkk Dec 13 14:27:41.152555 sshd[2061]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:27:41.168463 systemd[1]: Created slice user-500.slice. Dec 13 14:27:41.170392 systemd[1]: Starting user-runtime-dir@500.service... Dec 13 14:27:41.173361 systemd-logind[1813]: New session 1 of user core. Dec 13 14:27:41.197785 systemd[1]: Finished user-runtime-dir@500.service. Dec 13 14:27:41.205501 systemd[1]: Starting user@500.service... Dec 13 14:27:41.214184 (systemd)[2066]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:27:41.329241 systemd[2066]: Queued start job for default target default.target. Dec 13 14:27:41.329565 systemd[2066]: Reached target paths.target. Dec 13 14:27:41.329590 systemd[2066]: Reached target sockets.target. Dec 13 14:27:41.329607 systemd[2066]: Reached target timers.target. Dec 13 14:27:41.329625 systemd[2066]: Reached target basic.target. Dec 13 14:27:41.329966 systemd[1]: Started user@500.service. Dec 13 14:27:41.331322 systemd[1]: Started session-1.scope. Dec 13 14:27:41.331625 systemd[2066]: Reached target default.target. Dec 13 14:27:41.331858 systemd[2066]: Startup finished in 104ms. Dec 13 14:27:41.479004 systemd[1]: Started sshd@1-172.31.23.203:22-139.178.89.65:41088.service. Dec 13 14:27:41.653635 sshd[2075]: Accepted publickey for core from 139.178.89.65 port 41088 ssh2: RSA SHA256:kjZzhLCfrUb6HP3VZI7nfxYjuxqu9bKyQNrCGPkPDkk Dec 13 14:27:41.655323 sshd[2075]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:27:41.670062 systemd-logind[1813]: New session 2 of user core. Dec 13 14:27:41.671206 systemd[1]: Started session-2.scope. 
Dec 13 14:27:41.805653 sshd[2075]: pam_unix(sshd:session): session closed for user core Dec 13 14:27:41.809798 systemd[1]: sshd@1-172.31.23.203:22-139.178.89.65:41088.service: Deactivated successfully. Dec 13 14:27:41.811252 systemd[1]: session-2.scope: Deactivated successfully. Dec 13 14:27:41.811270 systemd-logind[1813]: Session 2 logged out. Waiting for processes to exit. Dec 13 14:27:41.812784 systemd-logind[1813]: Removed session 2. Dec 13 14:27:41.831356 systemd[1]: Started sshd@2-172.31.23.203:22-139.178.89.65:41090.service. Dec 13 14:27:41.997669 sshd[2082]: Accepted publickey for core from 139.178.89.65 port 41090 ssh2: RSA SHA256:kjZzhLCfrUb6HP3VZI7nfxYjuxqu9bKyQNrCGPkPDkk Dec 13 14:27:42.002782 sshd[2082]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:27:42.024065 systemd-logind[1813]: New session 3 of user core. Dec 13 14:27:42.025013 systemd[1]: Started session-3.scope. Dec 13 14:27:42.157574 sshd[2082]: pam_unix(sshd:session): session closed for user core Dec 13 14:27:42.161125 systemd[1]: sshd@2-172.31.23.203:22-139.178.89.65:41090.service: Deactivated successfully. Dec 13 14:27:42.162289 systemd[1]: session-3.scope: Deactivated successfully. Dec 13 14:27:42.164017 systemd-logind[1813]: Session 3 logged out. Waiting for processes to exit. Dec 13 14:27:42.165774 systemd-logind[1813]: Removed session 3. Dec 13 14:27:42.180172 systemd[1]: Started sshd@3-172.31.23.203:22-139.178.89.65:41096.service. Dec 13 14:27:42.336901 sshd[2089]: Accepted publickey for core from 139.178.89.65 port 41096 ssh2: RSA SHA256:kjZzhLCfrUb6HP3VZI7nfxYjuxqu9bKyQNrCGPkPDkk Dec 13 14:27:42.338676 sshd[2089]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:27:42.345765 systemd[1]: Started session-4.scope. Dec 13 14:27:42.346206 systemd-logind[1813]: New session 4 of user core. 
Dec 13 14:27:42.469696 sshd[2089]: pam_unix(sshd:session): session closed for user core Dec 13 14:27:42.474418 systemd[1]: sshd@3-172.31.23.203:22-139.178.89.65:41096.service: Deactivated successfully. Dec 13 14:27:42.475834 systemd-logind[1813]: Session 4 logged out. Waiting for processes to exit. Dec 13 14:27:42.475933 systemd[1]: session-4.scope: Deactivated successfully. Dec 13 14:27:42.477500 systemd-logind[1813]: Removed session 4. Dec 13 14:27:42.495428 systemd[1]: Started sshd@4-172.31.23.203:22-139.178.89.65:41102.service. Dec 13 14:27:42.663796 sshd[2096]: Accepted publickey for core from 139.178.89.65 port 41102 ssh2: RSA SHA256:kjZzhLCfrUb6HP3VZI7nfxYjuxqu9bKyQNrCGPkPDkk Dec 13 14:27:42.667318 sshd[2096]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:27:42.674198 systemd[1]: Started session-5.scope. Dec 13 14:27:42.674521 systemd-logind[1813]: New session 5 of user core. Dec 13 14:27:42.843312 sudo[2100]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 13 14:27:42.844341 sudo[2100]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Dec 13 14:27:42.874099 systemd[1]: Starting docker.service... 
Dec 13 14:27:42.939696 env[2110]: time="2024-12-13T14:27:42.939655296Z" level=info msg="Starting up" Dec 13 14:27:42.941379 env[2110]: time="2024-12-13T14:27:42.941342262Z" level=info msg="parsed scheme: \"unix\"" module=grpc Dec 13 14:27:42.941379 env[2110]: time="2024-12-13T14:27:42.941366638Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Dec 13 14:27:42.941534 env[2110]: time="2024-12-13T14:27:42.941391046Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Dec 13 14:27:42.941534 env[2110]: time="2024-12-13T14:27:42.941405062Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Dec 13 14:27:42.943590 env[2110]: time="2024-12-13T14:27:42.943554001Z" level=info msg="parsed scheme: \"unix\"" module=grpc Dec 13 14:27:42.943590 env[2110]: time="2024-12-13T14:27:42.943578005Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Dec 13 14:27:42.943817 env[2110]: time="2024-12-13T14:27:42.943599406Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Dec 13 14:27:42.943817 env[2110]: time="2024-12-13T14:27:42.943613129Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Dec 13 14:27:42.953324 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport3352719098-merged.mount: Deactivated successfully. Dec 13 14:27:44.292654 env[2110]: time="2024-12-13T14:27:44.292612849Z" level=warning msg="Your kernel does not support cgroup blkio weight" Dec 13 14:27:44.292654 env[2110]: time="2024-12-13T14:27:44.292640296Z" level=warning msg="Your kernel does not support cgroup blkio weight_device" Dec 13 14:27:44.293284 env[2110]: time="2024-12-13T14:27:44.292880819Z" level=info msg="Loading containers: start." 
Dec 13 14:27:44.535217 kernel: Initializing XFRM netlink socket Dec 13 14:27:44.663226 env[2110]: time="2024-12-13T14:27:44.663178485Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Dec 13 14:27:44.664528 (udev-worker)[2120]: Network interface NamePolicy= disabled on kernel command line. Dec 13 14:27:44.801450 systemd-networkd[1507]: docker0: Link UP Dec 13 14:27:44.826852 env[2110]: time="2024-12-13T14:27:44.826811780Z" level=info msg="Loading containers: done." Dec 13 14:27:44.867755 env[2110]: time="2024-12-13T14:27:44.867701692Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 13 14:27:44.868010 env[2110]: time="2024-12-13T14:27:44.867938285Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Dec 13 14:27:44.868127 env[2110]: time="2024-12-13T14:27:44.868099796Z" level=info msg="Daemon has completed initialization" Dec 13 14:27:44.900110 systemd[1]: Started docker.service. Dec 13 14:27:44.912387 env[2110]: time="2024-12-13T14:27:44.912298861Z" level=info msg="API listen on /run/docker.sock" Dec 13 14:27:46.356880 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 13 14:27:46.357267 systemd[1]: Stopped kubelet.service. Dec 13 14:27:46.360178 systemd[1]: Starting kubelet.service... Dec 13 14:27:46.519365 env[1828]: time="2024-12-13T14:27:46.519314323Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\"" Dec 13 14:27:46.994197 systemd[1]: Started kubelet.service. 
Dec 13 14:27:47.099619 kubelet[2245]: E1213 14:27:47.099567 2245 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 14:27:47.107623 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 14:27:47.107845 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 14:27:47.405467 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4276979259.mount: Deactivated successfully. Dec 13 14:27:50.285322 env[1828]: time="2024-12-13T14:27:50.285226062Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:27:50.293305 env[1828]: time="2024-12-13T14:27:50.293105999Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:27:50.297251 env[1828]: time="2024-12-13T14:27:50.297210375Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:27:50.304773 env[1828]: time="2024-12-13T14:27:50.304720127Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:27:50.305766 env[1828]: time="2024-12-13T14:27:50.305722270Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\" returns image reference \"sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa\"" Dec 13 14:27:50.323416 env[1828]: time="2024-12-13T14:27:50.323374253Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\"" Dec 13 14:27:53.845317 env[1828]: time="2024-12-13T14:27:53.845260261Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:27:53.849898 env[1828]: time="2024-12-13T14:27:53.849850123Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:27:53.856057 env[1828]: time="2024-12-13T14:27:53.856009724Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:27:53.859251 env[1828]: time="2024-12-13T14:27:53.859206246Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:27:53.860571 env[1828]: time="2024-12-13T14:27:53.860523265Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\" returns image reference \"sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d\"" Dec 13 14:27:53.901343 env[1828]: time="2024-12-13T14:27:53.901299832Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\"" Dec 13 14:27:56.354343 env[1828]: time="2024-12-13T14:27:56.354281062Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:27:56.429318 env[1828]: time="2024-12-13T14:27:56.429270662Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:27:56.479398 env[1828]: time="2024-12-13T14:27:56.479347891Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:27:56.520768 env[1828]: time="2024-12-13T14:27:56.520721666Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:27:56.524807 env[1828]: time="2024-12-13T14:27:56.524756223Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\" returns image reference \"sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf\"" Dec 13 14:27:56.563250 env[1828]: time="2024-12-13T14:27:56.562416020Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\"" Dec 13 14:27:57.357029 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Dec 13 14:27:57.357272 systemd[1]: Stopped kubelet.service. Dec 13 14:27:57.359318 systemd[1]: Starting kubelet.service... Dec 13 14:27:58.130549 systemd[1]: Started kubelet.service.
Dec 13 14:27:58.238899 kubelet[2275]: E1213 14:27:58.238845 2275 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 14:27:58.243874 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 14:27:58.244545 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 14:27:58.283951 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount29253787.mount: Deactivated successfully. Dec 13 14:27:59.239848 env[1828]: time="2024-12-13T14:27:59.239625903Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:27:59.243878 env[1828]: time="2024-12-13T14:27:59.243829409Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:27:59.247912 env[1828]: time="2024-12-13T14:27:59.247864077Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:27:59.256724 env[1828]: time="2024-12-13T14:27:59.256672625Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:27:59.257644 env[1828]: time="2024-12-13T14:27:59.257552885Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\" returns image reference \"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\"" Dec 13 14:27:59.293041 env[1828]: time="2024-12-13T14:27:59.292835771Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Dec 13 14:27:59.908597 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2535881318.mount: Deactivated successfully. Dec 13 14:28:01.943145 env[1828]: time="2024-12-13T14:28:01.943068344Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:28:01.982305 env[1828]: time="2024-12-13T14:28:01.982252327Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:28:01.986120 env[1828]: time="2024-12-13T14:28:01.986068449Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:28:01.989903 env[1828]: time="2024-12-13T14:28:01.989851663Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:28:01.994132 env[1828]: time="2024-12-13T14:28:01.994022757Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Dec 13 14:28:02.048200 env[1828]: time="2024-12-13T14:28:02.047550056Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Dec 13 14:28:02.677600 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1441104017.mount: Deactivated successfully.
Dec 13 14:28:02.694104 env[1828]: time="2024-12-13T14:28:02.694043142Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:28:02.699203 env[1828]: time="2024-12-13T14:28:02.699152074Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:28:02.702946 env[1828]: time="2024-12-13T14:28:02.702897759Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:28:02.707214 env[1828]: time="2024-12-13T14:28:02.707112229Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:28:02.709738 env[1828]: time="2024-12-13T14:28:02.709633881Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Dec 13 14:28:02.760452 env[1828]: time="2024-12-13T14:28:02.760365833Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Dec 13 14:28:03.454789 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1434657622.mount: Deactivated successfully. Dec 13 14:28:03.488130 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
Dec 13 14:28:07.518895 amazon-ssm-agent[1799]: 2024-12-13 14:28:07 INFO [MessagingDeliveryService] [Association] Schedule manager refreshed with 0 associations, 0 new associations associated Dec 13 14:28:07.693182 env[1828]: time="2024-12-13T14:28:07.693121675Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.10-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:28:07.702738 env[1828]: time="2024-12-13T14:28:07.702685324Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:28:07.707346 env[1828]: time="2024-12-13T14:28:07.707302405Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.10-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:28:07.712587 env[1828]: time="2024-12-13T14:28:07.712538269Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:28:07.713500 env[1828]: time="2024-12-13T14:28:07.713457445Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\"" Dec 13 14:28:08.281002 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Dec 13 14:28:08.281886 systemd[1]: Stopped kubelet.service. Dec 13 14:28:08.286317 systemd[1]: Starting kubelet.service... Dec 13 14:28:10.521950 systemd[1]: Started kubelet.service. 
Dec 13 14:28:10.743430 kubelet[2377]: E1213 14:28:10.743371 2377 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 14:28:10.746515 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 14:28:10.746823 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 14:28:13.159728 systemd[1]: Stopped kubelet.service. Dec 13 14:28:13.167945 systemd[1]: Starting kubelet.service... Dec 13 14:28:13.208801 systemd[1]: Reloading. Dec 13 14:28:13.362667 /usr/lib/systemd/system-generators/torcx-generator[2411]: time="2024-12-13T14:28:13Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 14:28:13.362705 /usr/lib/systemd/system-generators/torcx-generator[2411]: time="2024-12-13T14:28:13Z" level=info msg="torcx already run" Dec 13 14:28:13.590509 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 14:28:13.590524 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 14:28:13.653285 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 14:28:13.774596 systemd[1]: Started kubelet.service. Dec 13 14:28:13.778835 systemd[1]: Stopping kubelet.service... 
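[Editor's note] The two warnings about `locksmithd.service` during the `Reloading.` pass flag legacy cgroup-v1 directives that systemd is phasing out: `CPUShares=` is superseded by `CPUWeight=` (scale 1..10000, default 100, vs. the old default of 1024) and `MemoryLimit=` by `MemoryMax=`. A sketch of a drop-in that would silence them — the filename and values are illustrative, not taken from the actual unit:

```ini
# /etc/systemd/system/locksmithd.service.d/10-cgroup-v2.conf (hypothetical drop-in)
[Service]
# Old: CPUShares=512  (default was 1024) -> CPUWeight uses a 1..10000 scale, default 100
CPUWeight=50
# Old: MemoryLimit=64M -> MemoryMax is the cgroup v2 hard limit
MemoryMax=64M
```

A drop-in overrides only the listed directives and survives OS updates, which is why it is preferred over editing the shipped unit under /usr/lib/systemd/system.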
Dec 13 14:28:13.780203 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 14:28:13.780553 systemd[1]: Stopped kubelet.service. Dec 13 14:28:13.783497 systemd[1]: Starting kubelet.service... Dec 13 14:28:14.649744 systemd[1]: Started kubelet.service. Dec 13 14:28:14.741106 kubelet[2484]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 14:28:14.741106 kubelet[2484]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 14:28:14.741106 kubelet[2484]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Dec 13 14:28:14.748510 kubelet[2484]: I1213 14:28:14.748373 2484 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 14:28:15.273923 kubelet[2484]: I1213 14:28:15.273843 2484 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Dec 13 14:28:15.273923 kubelet[2484]: I1213 14:28:15.273912 2484 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 14:28:15.274374 kubelet[2484]: I1213 14:28:15.274351 2484 server.go:919] "Client rotation is on, will bootstrap in background" Dec 13 14:28:15.343704 kubelet[2484]: E1213 14:28:15.343673 2484 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.23.203:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.23.203:6443: connect: connection refused Dec 13 14:28:15.346096 kubelet[2484]: I1213 14:28:15.346064 2484 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 14:28:15.362875 kubelet[2484]: I1213 14:28:15.362845 2484 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 13 14:28:15.366499 kubelet[2484]: I1213 14:28:15.366420 2484 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 14:28:15.366796 kubelet[2484]: I1213 14:28:15.366766 2484 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 14:28:15.366952 kubelet[2484]: I1213 14:28:15.366802 2484 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 14:28:15.366952 kubelet[2484]: I1213 14:28:15.366818 2484 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 14:28:15.367149 kubelet[2484]: 
I1213 14:28:15.366963 2484 state_mem.go:36] "Initialized new in-memory state store" Dec 13 14:28:15.368257 kubelet[2484]: W1213 14:28:15.368212 2484 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://172.31.23.203:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-23-203&limit=500&resourceVersion=0": dial tcp 172.31.23.203:6443: connect: connection refused Dec 13 14:28:15.368447 kubelet[2484]: E1213 14:28:15.368266 2484 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.23.203:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-23-203&limit=500&resourceVersion=0": dial tcp 172.31.23.203:6443: connect: connection refused Dec 13 14:28:15.368447 kubelet[2484]: I1213 14:28:15.368305 2484 kubelet.go:396] "Attempting to sync node with API server" Dec 13 14:28:15.368447 kubelet[2484]: I1213 14:28:15.368435 2484 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 14:28:15.368627 kubelet[2484]: I1213 14:28:15.368473 2484 kubelet.go:312] "Adding apiserver pod source" Dec 13 14:28:15.368627 kubelet[2484]: I1213 14:28:15.368495 2484 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 14:28:15.371177 kubelet[2484]: I1213 14:28:15.371153 2484 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Dec 13 14:28:15.391528 kubelet[2484]: W1213 14:28:15.391459 2484 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://172.31.23.203:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.23.203:6443: connect: connection refused Dec 13 14:28:15.391528 kubelet[2484]: E1213 14:28:15.391537 2484 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get 
"https://172.31.23.203:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.23.203:6443: connect: connection refused Dec 13 14:28:15.394461 kubelet[2484]: I1213 14:28:15.394391 2484 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 14:28:15.394860 kubelet[2484]: W1213 14:28:15.394845 2484 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Dec 13 14:28:15.396685 kubelet[2484]: I1213 14:28:15.396610 2484 server.go:1256] "Started kubelet" Dec 13 14:28:15.397248 kubelet[2484]: I1213 14:28:15.397222 2484 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 14:28:15.398869 kubelet[2484]: I1213 14:28:15.398360 2484 server.go:461] "Adding debug handlers to kubelet server" Dec 13 14:28:15.408812 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). Dec 13 14:28:15.409077 kubelet[2484]: I1213 14:28:15.408986 2484 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 14:28:15.409929 kubelet[2484]: I1213 14:28:15.409897 2484 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 14:28:15.410414 kubelet[2484]: I1213 14:28:15.410336 2484 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 14:28:15.417859 kubelet[2484]: E1213 14:28:15.417753 2484 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.23.203:6443/api/v1/namespaces/default/events\": dial tcp 172.31.23.203:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-23-203.1810c2d95f157f1f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-23-203,UID:ip-172-31-23-203,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting 
kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-23-203,},FirstTimestamp:2024-12-13 14:28:15.396577055 +0000 UTC m=+0.727785238,LastTimestamp:2024-12-13 14:28:15.396577055 +0000 UTC m=+0.727785238,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-23-203,}" Dec 13 14:28:15.420290 kubelet[2484]: I1213 14:28:15.420265 2484 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 14:28:15.421331 kubelet[2484]: I1213 14:28:15.421310 2484 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Dec 13 14:28:15.421536 kubelet[2484]: I1213 14:28:15.421523 2484 reconciler_new.go:29] "Reconciler: start to sync state" Dec 13 14:28:15.421761 kubelet[2484]: E1213 14:28:15.421748 2484 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.23.203:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-23-203?timeout=10s\": dial tcp 172.31.23.203:6443: connect: connection refused" interval="200ms" Dec 13 14:28:15.425289 kubelet[2484]: E1213 14:28:15.425223 2484 kubelet.go:1462] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 14:28:15.426651 kubelet[2484]: W1213 14:28:15.426605 2484 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://172.31.23.203:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.23.203:6443: connect: connection refused Dec 13 14:28:15.426799 kubelet[2484]: E1213 14:28:15.426788 2484 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.23.203:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.23.203:6443: connect: connection refused Dec 13 14:28:15.432549 kubelet[2484]: I1213 14:28:15.432520 2484 factory.go:221] Registration of the containerd container factory successfully Dec 13 14:28:15.432787 kubelet[2484]: I1213 14:28:15.432767 2484 factory.go:221] Registration of the systemd container factory successfully Dec 13 14:28:15.433014 kubelet[2484]: I1213 14:28:15.432978 2484 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 14:28:15.455822 kubelet[2484]: I1213 14:28:15.455773 2484 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 14:28:15.458181 kubelet[2484]: I1213 14:28:15.458153 2484 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Dec 13 14:28:15.458423 kubelet[2484]: I1213 14:28:15.458407 2484 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 14:28:15.458530 kubelet[2484]: I1213 14:28:15.458518 2484 kubelet.go:2329] "Starting kubelet main sync loop" Dec 13 14:28:15.458665 kubelet[2484]: E1213 14:28:15.458654 2484 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 14:28:15.469959 kubelet[2484]: W1213 14:28:15.469848 2484 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://172.31.23.203:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.23.203:6443: connect: connection refused Dec 13 14:28:15.470131 kubelet[2484]: E1213 14:28:15.469973 2484 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.23.203:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.23.203:6443: connect: connection refused Dec 13 14:28:15.494078 kubelet[2484]: I1213 14:28:15.493940 2484 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 14:28:15.494078 kubelet[2484]: I1213 14:28:15.493960 2484 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 14:28:15.494364 kubelet[2484]: I1213 14:28:15.494096 2484 state_mem.go:36] "Initialized new in-memory state store" Dec 13 14:28:15.499662 kubelet[2484]: I1213 14:28:15.499619 2484 policy_none.go:49] "None policy: Start" Dec 13 14:28:15.500954 kubelet[2484]: I1213 14:28:15.500906 2484 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 14:28:15.501119 kubelet[2484]: I1213 14:28:15.501111 2484 state_mem.go:35] "Initializing new in-memory state store" Dec 13 14:28:15.508952 kubelet[2484]: I1213 14:28:15.508917 2484 manager.go:479] "Failed to read data from 
checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 14:28:15.509606 kubelet[2484]: I1213 14:28:15.509580 2484 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 14:28:15.517102 kubelet[2484]: E1213 14:28:15.517077 2484 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-23-203\" not found" Dec 13 14:28:15.522893 kubelet[2484]: I1213 14:28:15.522872 2484 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-23-203" Dec 13 14:28:15.524084 kubelet[2484]: E1213 14:28:15.524015 2484 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.23.203:6443/api/v1/nodes\": dial tcp 172.31.23.203:6443: connect: connection refused" node="ip-172-31-23-203" Dec 13 14:28:15.559622 kubelet[2484]: I1213 14:28:15.559484 2484 topology_manager.go:215] "Topology Admit Handler" podUID="91ca8eac86021d28795ff8d5e2706570" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-23-203" Dec 13 14:28:15.564538 kubelet[2484]: I1213 14:28:15.564498 2484 topology_manager.go:215] "Topology Admit Handler" podUID="837be002c1b5108aa1509a0f1415eeea" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-23-203" Dec 13 14:28:15.578230 kubelet[2484]: I1213 14:28:15.578145 2484 topology_manager.go:215] "Topology Admit Handler" podUID="8bd2a85ea5ee90e4f208503eb13ab9e9" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-23-203" Dec 13 14:28:15.623643 kubelet[2484]: E1213 14:28:15.623578 2484 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.23.203:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-23-203?timeout=10s\": dial tcp 172.31.23.203:6443: connect: connection refused" interval="400ms" Dec 13 14:28:15.723585 kubelet[2484]: I1213 14:28:15.723541 2484 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/91ca8eac86021d28795ff8d5e2706570-ca-certs\") pod \"kube-controller-manager-ip-172-31-23-203\" (UID: \"91ca8eac86021d28795ff8d5e2706570\") " pod="kube-system/kube-controller-manager-ip-172-31-23-203" Dec 13 14:28:15.723585 kubelet[2484]: I1213 14:28:15.723605 2484 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/91ca8eac86021d28795ff8d5e2706570-k8s-certs\") pod \"kube-controller-manager-ip-172-31-23-203\" (UID: \"91ca8eac86021d28795ff8d5e2706570\") " pod="kube-system/kube-controller-manager-ip-172-31-23-203" Dec 13 14:28:15.723827 kubelet[2484]: I1213 14:28:15.723637 2484 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/91ca8eac86021d28795ff8d5e2706570-kubeconfig\") pod \"kube-controller-manager-ip-172-31-23-203\" (UID: \"91ca8eac86021d28795ff8d5e2706570\") " pod="kube-system/kube-controller-manager-ip-172-31-23-203" Dec 13 14:28:15.723827 kubelet[2484]: I1213 14:28:15.723665 2484 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/91ca8eac86021d28795ff8d5e2706570-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-23-203\" (UID: \"91ca8eac86021d28795ff8d5e2706570\") " pod="kube-system/kube-controller-manager-ip-172-31-23-203" Dec 13 14:28:15.723827 kubelet[2484]: I1213 14:28:15.723701 2484 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/91ca8eac86021d28795ff8d5e2706570-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-23-203\" (UID: \"91ca8eac86021d28795ff8d5e2706570\") " pod="kube-system/kube-controller-manager-ip-172-31-23-203" Dec 13 14:28:15.723827 kubelet[2484]: 
I1213 14:28:15.723730 2484 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/837be002c1b5108aa1509a0f1415eeea-kubeconfig\") pod \"kube-scheduler-ip-172-31-23-203\" (UID: \"837be002c1b5108aa1509a0f1415eeea\") " pod="kube-system/kube-scheduler-ip-172-31-23-203" Dec 13 14:28:15.723827 kubelet[2484]: I1213 14:28:15.723756 2484 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8bd2a85ea5ee90e4f208503eb13ab9e9-ca-certs\") pod \"kube-apiserver-ip-172-31-23-203\" (UID: \"8bd2a85ea5ee90e4f208503eb13ab9e9\") " pod="kube-system/kube-apiserver-ip-172-31-23-203" Dec 13 14:28:15.724095 kubelet[2484]: I1213 14:28:15.723783 2484 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8bd2a85ea5ee90e4f208503eb13ab9e9-k8s-certs\") pod \"kube-apiserver-ip-172-31-23-203\" (UID: \"8bd2a85ea5ee90e4f208503eb13ab9e9\") " pod="kube-system/kube-apiserver-ip-172-31-23-203" Dec 13 14:28:15.724095 kubelet[2484]: I1213 14:28:15.723818 2484 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8bd2a85ea5ee90e4f208503eb13ab9e9-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-23-203\" (UID: \"8bd2a85ea5ee90e4f208503eb13ab9e9\") " pod="kube-system/kube-apiserver-ip-172-31-23-203" Dec 13 14:28:15.726548 kubelet[2484]: I1213 14:28:15.726523 2484 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-23-203" Dec 13 14:28:15.727442 kubelet[2484]: E1213 14:28:15.727417 2484 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.23.203:6443/api/v1/nodes\": dial tcp 172.31.23.203:6443: connect: connection refused" node="ip-172-31-23-203" Dec 13 
14:28:15.886939 env[1828]: time="2024-12-13T14:28:15.886889761Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-23-203,Uid:91ca8eac86021d28795ff8d5e2706570,Namespace:kube-system,Attempt:0,}" Dec 13 14:28:15.918139 env[1828]: time="2024-12-13T14:28:15.918055214Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-23-203,Uid:8bd2a85ea5ee90e4f208503eb13ab9e9,Namespace:kube-system,Attempt:0,}" Dec 13 14:28:15.918139 env[1828]: time="2024-12-13T14:28:15.918055650Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-23-203,Uid:837be002c1b5108aa1509a0f1415eeea,Namespace:kube-system,Attempt:0,}" Dec 13 14:28:16.032403 kubelet[2484]: E1213 14:28:16.032356 2484 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.23.203:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-23-203?timeout=10s\": dial tcp 172.31.23.203:6443: connect: connection refused" interval="800ms" Dec 13 14:28:16.129722 kubelet[2484]: I1213 14:28:16.129698 2484 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-23-203" Dec 13 14:28:16.130290 kubelet[2484]: E1213 14:28:16.130263 2484 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.23.203:6443/api/v1/nodes\": dial tcp 172.31.23.203:6443: connect: connection refused" node="ip-172-31-23-203" Dec 13 14:28:16.245667 kubelet[2484]: W1213 14:28:16.245522 2484 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://172.31.23.203:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.23.203:6443: connect: connection refused Dec 13 14:28:16.245667 kubelet[2484]: E1213 14:28:16.245596 2484 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get 
"https://172.31.23.203:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.23.203:6443: connect: connection refused Dec 13 14:28:16.471126 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount924322614.mount: Deactivated successfully. Dec 13 14:28:16.475486 kubelet[2484]: W1213 14:28:16.475340 2484 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://172.31.23.203:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.23.203:6443: connect: connection refused Dec 13 14:28:16.475486 kubelet[2484]: E1213 14:28:16.475490 2484 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.23.203:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.23.203:6443: connect: connection refused Dec 13 14:28:16.486537 env[1828]: time="2024-12-13T14:28:16.486045876Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:28:16.496664 env[1828]: time="2024-12-13T14:28:16.496267108Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:28:16.501591 env[1828]: time="2024-12-13T14:28:16.501520997Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:28:16.505894 env[1828]: time="2024-12-13T14:28:16.503268850Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:28:16.511092 env[1828]: time="2024-12-13T14:28:16.511040724Z" 
level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:28:16.513946 env[1828]: time="2024-12-13T14:28:16.513898351Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:28:16.516679 env[1828]: time="2024-12-13T14:28:16.516631103Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:28:16.519006 env[1828]: time="2024-12-13T14:28:16.518945789Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:28:16.521265 env[1828]: time="2024-12-13T14:28:16.521224263Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:28:16.527656 env[1828]: time="2024-12-13T14:28:16.527607745Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:28:16.532086 env[1828]: time="2024-12-13T14:28:16.532044193Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:28:16.547881 env[1828]: time="2024-12-13T14:28:16.547825241Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:28:16.564241 env[1828]: time="2024-12-13T14:28:16.564157730Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:28:16.564241 env[1828]: time="2024-12-13T14:28:16.564202460Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:28:16.564545 env[1828]: time="2024-12-13T14:28:16.564497584Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:28:16.564879 env[1828]: time="2024-12-13T14:28:16.564834140Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7368b288b2e743ddf3dcdcc53c73c2d38c4ad3ded603b93436b169d05aee8524 pid=2523 runtime=io.containerd.runc.v2 Dec 13 14:28:16.628221 env[1828]: time="2024-12-13T14:28:16.628145727Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:28:16.628508 env[1828]: time="2024-12-13T14:28:16.628482102Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:28:16.628627 env[1828]: time="2024-12-13T14:28:16.628605909Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:28:16.629188 env[1828]: time="2024-12-13T14:28:16.629139850Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/5974d41a56660270046da292aa1847629a3370477bc73c3cf352801a322d2285 pid=2558 runtime=io.containerd.runc.v2 Dec 13 14:28:16.657020 env[1828]: time="2024-12-13T14:28:16.656943043Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-23-203,Uid:91ca8eac86021d28795ff8d5e2706570,Namespace:kube-system,Attempt:0,} returns sandbox id \"7368b288b2e743ddf3dcdcc53c73c2d38c4ad3ded603b93436b169d05aee8524\"" Dec 13 14:28:16.659439 env[1828]: time="2024-12-13T14:28:16.658906184Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:28:16.659439 env[1828]: time="2024-12-13T14:28:16.658952896Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:28:16.659439 env[1828]: time="2024-12-13T14:28:16.658969621Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:28:16.659439 env[1828]: time="2024-12-13T14:28:16.659156526Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7a23fdaf115b230fe1d4c15be0aab82084cb091f772c706ce7b652ec3d2029c5 pid=2578 runtime=io.containerd.runc.v2 Dec 13 14:28:16.663702 env[1828]: time="2024-12-13T14:28:16.663653748Z" level=info msg="CreateContainer within sandbox \"7368b288b2e743ddf3dcdcc53c73c2d38c4ad3ded603b93436b169d05aee8524\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 13 14:28:16.745751 env[1828]: time="2024-12-13T14:28:16.745704683Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-23-203,Uid:837be002c1b5108aa1509a0f1415eeea,Namespace:kube-system,Attempt:0,} returns sandbox id \"5974d41a56660270046da292aa1847629a3370477bc73c3cf352801a322d2285\"" Dec 13 14:28:16.749063 env[1828]: time="2024-12-13T14:28:16.747864937Z" level=info msg="CreateContainer within sandbox \"7368b288b2e743ddf3dcdcc53c73c2d38c4ad3ded603b93436b169d05aee8524\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"b27ebe01b52d5acab91953453c5c158022ffb31e8798e696313d3402d1aba78c\"" Dec 13 14:28:16.750861 env[1828]: time="2024-12-13T14:28:16.750822968Z" level=info msg="CreateContainer within sandbox \"5974d41a56660270046da292aa1847629a3370477bc73c3cf352801a322d2285\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 13 14:28:16.753099 env[1828]: time="2024-12-13T14:28:16.751629147Z" level=info msg="StartContainer for \"b27ebe01b52d5acab91953453c5c158022ffb31e8798e696313d3402d1aba78c\"" Dec 13 14:28:16.784951 env[1828]: time="2024-12-13T14:28:16.784832683Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-23-203,Uid:8bd2a85ea5ee90e4f208503eb13ab9e9,Namespace:kube-system,Attempt:0,} returns sandbox id 
\"7a23fdaf115b230fe1d4c15be0aab82084cb091f772c706ce7b652ec3d2029c5\""
Dec 13 14:28:16.790316 env[1828]: time="2024-12-13T14:28:16.790272421Z" level=info msg="CreateContainer within sandbox \"7a23fdaf115b230fe1d4c15be0aab82084cb091f772c706ce7b652ec3d2029c5\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Dec 13 14:28:16.791357 env[1828]: time="2024-12-13T14:28:16.790523608Z" level=info msg="CreateContainer within sandbox \"5974d41a56660270046da292aa1847629a3370477bc73c3cf352801a322d2285\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"17662b5bbd68639a3d8817be41b088ee8862d523950e7ce934efea46e2289b9c\""
Dec 13 14:28:16.795055 env[1828]: time="2024-12-13T14:28:16.794941169Z" level=info msg="StartContainer for \"17662b5bbd68639a3d8817be41b088ee8862d523950e7ce934efea46e2289b9c\""
Dec 13 14:28:16.833465 kubelet[2484]: E1213 14:28:16.833399 2484 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.23.203:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-23-203?timeout=10s\": dial tcp 172.31.23.203:6443: connect: connection refused" interval="1.6s"
Dec 13 14:28:16.858315 env[1828]: time="2024-12-13T14:28:16.858234116Z" level=info msg="CreateContainer within sandbox \"7a23fdaf115b230fe1d4c15be0aab82084cb091f772c706ce7b652ec3d2029c5\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"137ab0500b53ff3be96e4904d2fadb17f62d1b8da0d633cb2ecdd0d3b529155c\""
Dec 13 14:28:16.858929 env[1828]: time="2024-12-13T14:28:16.858893649Z" level=info msg="StartContainer for \"137ab0500b53ff3be96e4904d2fadb17f62d1b8da0d633cb2ecdd0d3b529155c\""
Dec 13 14:28:16.881209 env[1828]: time="2024-12-13T14:28:16.881159403Z" level=info msg="StartContainer for \"b27ebe01b52d5acab91953453c5c158022ffb31e8798e696313d3402d1aba78c\" returns successfully"
Dec 13 14:28:16.896039 kubelet[2484]: W1213 14:28:16.895155 2484 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://172.31.23.203:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-23-203&limit=500&resourceVersion=0": dial tcp 172.31.23.203:6443: connect: connection refused
Dec 13 14:28:16.896039 kubelet[2484]: E1213 14:28:16.895232 2484 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.23.203:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-23-203&limit=500&resourceVersion=0": dial tcp 172.31.23.203:6443: connect: connection refused
Dec 13 14:28:16.935197 kubelet[2484]: I1213 14:28:16.934680 2484 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-23-203"
Dec 13 14:28:16.935197 kubelet[2484]: E1213 14:28:16.935139 2484 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.23.203:6443/api/v1/nodes\": dial tcp 172.31.23.203:6443: connect: connection refused" node="ip-172-31-23-203"
Dec 13 14:28:16.958702 env[1828]: time="2024-12-13T14:28:16.958651172Z" level=info msg="StartContainer for \"17662b5bbd68639a3d8817be41b088ee8862d523950e7ce934efea46e2289b9c\" returns successfully"
Dec 13 14:28:16.967507 kubelet[2484]: W1213 14:28:16.967397 2484 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://172.31.23.203:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.23.203:6443: connect: connection refused
Dec 13 14:28:16.967507 kubelet[2484]: E1213 14:28:16.967484 2484 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.23.203:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.23.203:6443: connect: connection refused
Dec 13 14:28:17.065372 env[1828]: time="2024-12-13T14:28:17.065252451Z" level=info msg="StartContainer for \"137ab0500b53ff3be96e4904d2fadb17f62d1b8da0d633cb2ecdd0d3b529155c\" returns successfully"
Dec 13 14:28:17.391780 kubelet[2484]: E1213 14:28:17.391712 2484 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.23.203:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.23.203:6443: connect: connection refused
Dec 13 14:28:18.427173 update_engine[1815]: I1213 14:28:18.420106 1815 update_attempter.cc:509] Updating boot flags...
Dec 13 14:28:18.546716 kubelet[2484]: I1213 14:28:18.543949 2484 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-23-203"
Dec 13 14:28:20.871440 kubelet[2484]: E1213 14:28:20.871398 2484 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-23-203\" not found" node="ip-172-31-23-203"
Dec 13 14:28:20.942933 kubelet[2484]: I1213 14:28:20.942763 2484 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-23-203"
Dec 13 14:28:20.963263 kubelet[2484]: E1213 14:28:20.963212 2484 event.go:346] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-23-203.1810c2d95f157f1f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-23-203,UID:ip-172-31-23-203,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-23-203,},FirstTimestamp:2024-12-13 14:28:15.396577055 +0000 UTC m=+0.727785238,LastTimestamp:2024-12-13 14:28:15.396577055 +0000 UTC m=+0.727785238,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-23-203,}"
Dec 13 14:28:21.373295 kubelet[2484]: I1213 14:28:21.373238 2484 apiserver.go:52] "Watching apiserver"
Dec 13 14:28:21.422632 kubelet[2484]: I1213 14:28:21.422578 2484 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Dec 13 14:28:23.875772 systemd[1]: Reloading.
Dec 13 14:28:24.016179 /usr/lib/systemd/system-generators/torcx-generator[2956]: time="2024-12-13T14:28:24Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]"
Dec 13 14:28:24.016768 /usr/lib/systemd/system-generators/torcx-generator[2956]: time="2024-12-13T14:28:24Z" level=info msg="torcx already run"
Dec 13 14:28:24.157525 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Dec 13 14:28:24.157548 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 13 14:28:24.180109 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 14:28:24.310313 systemd[1]: Stopping kubelet.service...
Dec 13 14:28:24.327433 systemd[1]: kubelet.service: Deactivated successfully.
Dec 13 14:28:24.327865 systemd[1]: Stopped kubelet.service.
Dec 13 14:28:24.332720 systemd[1]: Starting kubelet.service...
Dec 13 14:28:26.577595 systemd[1]: Started kubelet.service.
Dec 13 14:28:26.744767 sudo[3034]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Dec 13 14:28:26.747887 sudo[3034]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Dec 13 14:28:26.774518 kubelet[3022]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 13 14:28:26.775168 kubelet[3022]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Dec 13 14:28:26.775261 kubelet[3022]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 13 14:28:26.775477 kubelet[3022]: I1213 14:28:26.775437 3022 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Dec 13 14:28:26.801700 kubelet[3022]: I1213 14:28:26.801667 3022 server.go:487] "Kubelet version" kubeletVersion="v1.29.2"
Dec 13 14:28:26.802032 kubelet[3022]: I1213 14:28:26.802018 3022 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Dec 13 14:28:26.802486 kubelet[3022]: I1213 14:28:26.802468 3022 server.go:919] "Client rotation is on, will bootstrap in background"
Dec 13 14:28:26.805231 kubelet[3022]: I1213 14:28:26.805209 3022 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Dec 13 14:28:26.831019 kubelet[3022]: I1213 14:28:26.825682 3022 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Dec 13 14:28:26.837827 kubelet[3022]: I1213 14:28:26.837795 3022 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Dec 13 14:28:26.839179 kubelet[3022]: I1213 14:28:26.839101 3022 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Dec 13 14:28:26.839352 kubelet[3022]: I1213 14:28:26.839333 3022 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Dec 13 14:28:26.839594 kubelet[3022]: I1213 14:28:26.839368 3022 topology_manager.go:138] "Creating topology manager with none policy"
Dec 13 14:28:26.839594 kubelet[3022]: I1213 14:28:26.839381 3022 container_manager_linux.go:301] "Creating device plugin manager"
Dec 13 14:28:26.839594 kubelet[3022]: I1213 14:28:26.839518 3022 state_mem.go:36] "Initialized new in-memory state store"
Dec 13 14:28:26.841091 kubelet[3022]: I1213 14:28:26.841072 3022 kubelet.go:396] "Attempting to sync node with API server"
Dec 13 14:28:26.841746 kubelet[3022]: I1213 14:28:26.841729 3022 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Dec 13 14:28:26.841869 kubelet[3022]: I1213 14:28:26.841861 3022 kubelet.go:312] "Adding apiserver pod source"
Dec 13 14:28:26.842179 kubelet[3022]: I1213 14:28:26.842165 3022 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Dec 13 14:28:26.855072 kubelet[3022]: I1213 14:28:26.852494 3022 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
Dec 13 14:28:26.855072 kubelet[3022]: I1213 14:28:26.852735 3022 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Dec 13 14:28:26.855072 kubelet[3022]: I1213 14:28:26.853242 3022 server.go:1256] "Started kubelet"
Dec 13 14:28:26.855984 kubelet[3022]: I1213 14:28:26.855963 3022 apiserver.go:52] "Watching apiserver"
Dec 13 14:28:26.864531 kubelet[3022]: I1213 14:28:26.864295 3022 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Dec 13 14:28:26.866681 kubelet[3022]: I1213 14:28:26.866652 3022 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Dec 13 14:28:26.872089 kubelet[3022]: I1213 14:28:26.869398 3022 server.go:461] "Adding debug handlers to kubelet server"
Dec 13 14:28:26.876819 kubelet[3022]: I1213 14:28:26.876780 3022 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Dec 13 14:28:26.877053 kubelet[3022]: I1213 14:28:26.877035 3022 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Dec 13 14:28:26.893014 kubelet[3022]: I1213 14:28:26.884089 3022 volume_manager.go:291] "Starting Kubelet Volume Manager"
Dec 13 14:28:26.905843 kubelet[3022]: I1213 14:28:26.905440 3022 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Dec 13 14:28:26.915629 kubelet[3022]: I1213 14:28:26.910049 3022 reconciler_new.go:29] "Reconciler: start to sync state"
Dec 13 14:28:26.915629 kubelet[3022]: I1213 14:28:26.912636 3022 factory.go:221] Registration of the systemd container factory successfully
Dec 13 14:28:26.915629 kubelet[3022]: I1213 14:28:26.913067 3022 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Dec 13 14:28:26.918383 kubelet[3022]: I1213 14:28:26.917604 3022 factory.go:221] Registration of the containerd container factory successfully
Dec 13 14:28:26.968584 kubelet[3022]: I1213 14:28:26.968197 3022 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Dec 13 14:28:26.987204 kubelet[3022]: I1213 14:28:26.985641 3022 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Dec 13 14:28:26.987204 kubelet[3022]: I1213 14:28:26.985677 3022 status_manager.go:217] "Starting to sync pod status with apiserver"
Dec 13 14:28:26.987204 kubelet[3022]: I1213 14:28:26.985699 3022 kubelet.go:2329] "Starting kubelet main sync loop"
Dec 13 14:28:26.987204 kubelet[3022]: E1213 14:28:26.986737 3022 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Dec 13 14:28:27.001260 kubelet[3022]: I1213 14:28:26.995850 3022 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-23-203"
Dec 13 14:28:27.017505 kubelet[3022]: I1213 14:28:27.016251 3022 kubelet_node_status.go:112] "Node was previously registered" node="ip-172-31-23-203"
Dec 13 14:28:27.017505 kubelet[3022]: I1213 14:28:27.016344 3022 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-23-203"
Dec 13 14:28:27.086849 kubelet[3022]: E1213 14:28:27.086816 3022 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Dec 13 14:28:27.104748 kubelet[3022]: I1213 14:28:27.104642 3022 cpu_manager.go:214] "Starting CPU manager" policy="none"
Dec 13 14:28:27.104748 kubelet[3022]: I1213 14:28:27.104671 3022 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Dec 13 14:28:27.104748 kubelet[3022]: I1213 14:28:27.104692 3022 state_mem.go:36] "Initialized new in-memory state store"
Dec 13 14:28:27.105130 kubelet[3022]: I1213 14:28:27.104868 3022 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Dec 13 14:28:27.105130 kubelet[3022]: I1213 14:28:27.105034 3022 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Dec 13 14:28:27.105130 kubelet[3022]: I1213 14:28:27.105049 3022 policy_none.go:49] "None policy: Start"
Dec 13 14:28:27.106110 kubelet[3022]: I1213 14:28:27.106089 3022 memory_manager.go:170] "Starting memorymanager" policy="None"
Dec 13 14:28:27.106209 kubelet[3022]: I1213 14:28:27.106132 3022 state_mem.go:35] "Initializing new in-memory state store"
Dec 13 14:28:27.106384 kubelet[3022]: I1213 14:28:27.106343 3022 state_mem.go:75] "Updated machine memory state"
Dec 13 14:28:27.108233 kubelet[3022]: I1213 14:28:27.108210 3022 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Dec 13 14:28:27.117923 kubelet[3022]: I1213 14:28:27.117080 3022 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Dec 13 14:28:27.287745 kubelet[3022]: I1213 14:28:27.287704 3022 topology_manager.go:215] "Topology Admit Handler" podUID="8bd2a85ea5ee90e4f208503eb13ab9e9" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-23-203"
Dec 13 14:28:27.288075 kubelet[3022]: I1213 14:28:27.288060 3022 topology_manager.go:215] "Topology Admit Handler" podUID="91ca8eac86021d28795ff8d5e2706570" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-23-203"
Dec 13 14:28:27.288229 kubelet[3022]: I1213 14:28:27.288209 3022 topology_manager.go:215] "Topology Admit Handler" podUID="837be002c1b5108aa1509a0f1415eeea" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-23-203"
Dec 13 14:28:27.314907 kubelet[3022]: I1213 14:28:27.314864 3022 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Dec 13 14:28:27.338451 kubelet[3022]: I1213 14:28:27.338418 3022 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/91ca8eac86021d28795ff8d5e2706570-kubeconfig\") pod \"kube-controller-manager-ip-172-31-23-203\" (UID: \"91ca8eac86021d28795ff8d5e2706570\") " pod="kube-system/kube-controller-manager-ip-172-31-23-203"
Dec 13 14:28:27.338738 kubelet[3022]: I1213 14:28:27.338706 3022 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/837be002c1b5108aa1509a0f1415eeea-kubeconfig\") pod \"kube-scheduler-ip-172-31-23-203\" (UID: \"837be002c1b5108aa1509a0f1415eeea\") " pod="kube-system/kube-scheduler-ip-172-31-23-203"
Dec 13 14:28:27.339070 kubelet[3022]: I1213 14:28:27.339054 3022 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8bd2a85ea5ee90e4f208503eb13ab9e9-k8s-certs\") pod \"kube-apiserver-ip-172-31-23-203\" (UID: \"8bd2a85ea5ee90e4f208503eb13ab9e9\") " pod="kube-system/kube-apiserver-ip-172-31-23-203"
Dec 13 14:28:27.339767 kubelet[3022]: I1213 14:28:27.339755 3022 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8bd2a85ea5ee90e4f208503eb13ab9e9-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-23-203\" (UID: \"8bd2a85ea5ee90e4f208503eb13ab9e9\") " pod="kube-system/kube-apiserver-ip-172-31-23-203"
Dec 13 14:28:27.339890 kubelet[3022]: I1213 14:28:27.339882 3022 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/91ca8eac86021d28795ff8d5e2706570-ca-certs\") pod \"kube-controller-manager-ip-172-31-23-203\" (UID: \"91ca8eac86021d28795ff8d5e2706570\") " pod="kube-system/kube-controller-manager-ip-172-31-23-203"
Dec 13 14:28:27.339976 kubelet[3022]: I1213 14:28:27.339969 3022 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/91ca8eac86021d28795ff8d5e2706570-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-23-203\" (UID: \"91ca8eac86021d28795ff8d5e2706570\") " pod="kube-system/kube-controller-manager-ip-172-31-23-203"
Dec 13 14:28:27.340422 kubelet[3022]: I1213 14:28:27.340407 3022 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/91ca8eac86021d28795ff8d5e2706570-k8s-certs\") pod \"kube-controller-manager-ip-172-31-23-203\" (UID: \"91ca8eac86021d28795ff8d5e2706570\") " pod="kube-system/kube-controller-manager-ip-172-31-23-203"
Dec 13 14:28:27.340534 kubelet[3022]: I1213 14:28:27.340483 3022 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-23-203" podStartSLOduration=5.340437379 podStartE2EDuration="5.340437379s" podCreationTimestamp="2024-12-13 14:28:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:28:27.326596683 +0000 UTC m=+0.702112539" watchObservedRunningTime="2024-12-13 14:28:27.340437379 +0000 UTC m=+0.715953235"
Dec 13 14:28:27.340660 kubelet[3022]: I1213 14:28:27.340644 3022 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-23-203" podStartSLOduration=3.340604077 podStartE2EDuration="3.340604077s" podCreationTimestamp="2024-12-13 14:28:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:28:27.339593561 +0000 UTC m=+0.715109418" watchObservedRunningTime="2024-12-13 14:28:27.340604077 +0000 UTC m=+0.716119934"
Dec 13 14:28:27.340858 kubelet[3022]: I1213 14:28:27.340844 3022 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/91ca8eac86021d28795ff8d5e2706570-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-23-203\" (UID: \"91ca8eac86021d28795ff8d5e2706570\") " pod="kube-system/kube-controller-manager-ip-172-31-23-203"
Dec 13 14:28:27.340984 kubelet[3022]: I1213 14:28:27.340975 3022 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8bd2a85ea5ee90e4f208503eb13ab9e9-ca-certs\") pod \"kube-apiserver-ip-172-31-23-203\" (UID: \"8bd2a85ea5ee90e4f208503eb13ab9e9\") " pod="kube-system/kube-apiserver-ip-172-31-23-203"
Dec 13 14:28:27.808759 sudo[3034]: pam_unix(sudo:session): session closed for user root
Dec 13 14:28:28.020963 kubelet[3022]: I1213 14:28:28.020927 3022 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-23-203" podStartSLOduration=1.020875794 podStartE2EDuration="1.020875794s" podCreationTimestamp="2024-12-13 14:28:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:28:27.367132063 +0000 UTC m=+0.742647921" watchObservedRunningTime="2024-12-13 14:28:28.020875794 +0000 UTC m=+1.396391649"
Dec 13 14:28:30.587808 sudo[2100]: pam_unix(sudo:session): session closed for user root
Dec 13 14:28:30.612311 sshd[2096]: pam_unix(sshd:session): session closed for user core
Dec 13 14:28:30.621167 systemd[1]: sshd@4-172.31.23.203:22-139.178.89.65:41102.service: Deactivated successfully.
Dec 13 14:28:30.624148 systemd-logind[1813]: Session 5 logged out. Waiting for processes to exit.
Dec 13 14:28:30.624941 systemd[1]: session-5.scope: Deactivated successfully.
Dec 13 14:28:30.627477 systemd-logind[1813]: Removed session 5.
Dec 13 14:28:39.728012 kubelet[3022]: I1213 14:28:39.727971 3022 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Dec 13 14:28:39.728737 env[1828]: time="2024-12-13T14:28:39.728639414Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Dec 13 14:28:39.729163 kubelet[3022]: I1213 14:28:39.729142 3022 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Dec 13 14:28:40.689148 kubelet[3022]: I1213 14:28:40.689111 3022 topology_manager.go:215] "Topology Admit Handler" podUID="8fa0dc61-8dc9-44e5-83ee-6d056850a14e" podNamespace="kube-system" podName="kube-proxy-kvv2n"
Dec 13 14:28:40.706303 kubelet[3022]: I1213 14:28:40.706270 3022 topology_manager.go:215] "Topology Admit Handler" podUID="b5a2ab86-e5b8-47b0-9f77-5077add6b195" podNamespace="kube-system" podName="cilium-26szp"
Dec 13 14:28:40.741881 kubelet[3022]: I1213 14:28:40.741837 3022 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/8fa0dc61-8dc9-44e5-83ee-6d056850a14e-kube-proxy\") pod \"kube-proxy-kvv2n\" (UID: \"8fa0dc61-8dc9-44e5-83ee-6d056850a14e\") " pod="kube-system/kube-proxy-kvv2n"
Dec 13 14:28:40.742440 kubelet[3022]: I1213 14:28:40.741908 3022 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8fa0dc61-8dc9-44e5-83ee-6d056850a14e-lib-modules\") pod \"kube-proxy-kvv2n\" (UID: \"8fa0dc61-8dc9-44e5-83ee-6d056850a14e\") " pod="kube-system/kube-proxy-kvv2n"
Dec 13 14:28:40.742440 kubelet[3022]: I1213 14:28:40.741934 3022 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8fa0dc61-8dc9-44e5-83ee-6d056850a14e-xtables-lock\") pod \"kube-proxy-kvv2n\" (UID: \"8fa0dc61-8dc9-44e5-83ee-6d056850a14e\") " pod="kube-system/kube-proxy-kvv2n"
Dec 13 14:28:40.742440 kubelet[3022]: I1213 14:28:40.741976 3022 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8m9k2\" (UniqueName: \"kubernetes.io/projected/8fa0dc61-8dc9-44e5-83ee-6d056850a14e-kube-api-access-8m9k2\") pod \"kube-proxy-kvv2n\" (UID: \"8fa0dc61-8dc9-44e5-83ee-6d056850a14e\") " pod="kube-system/kube-proxy-kvv2n"
Dec 13 14:28:40.842619 kubelet[3022]: I1213 14:28:40.842583 3022 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b5a2ab86-e5b8-47b0-9f77-5077add6b195-cilium-cgroup\") pod \"cilium-26szp\" (UID: \"b5a2ab86-e5b8-47b0-9f77-5077add6b195\") " pod="kube-system/cilium-26szp"
Dec 13 14:28:40.842795 kubelet[3022]: I1213 14:28:40.842635 3022 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b5a2ab86-e5b8-47b0-9f77-5077add6b195-xtables-lock\") pod \"cilium-26szp\" (UID: \"b5a2ab86-e5b8-47b0-9f77-5077add6b195\") " pod="kube-system/cilium-26szp"
Dec 13 14:28:40.842795 kubelet[3022]: I1213 14:28:40.842710 3022 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b5a2ab86-e5b8-47b0-9f77-5077add6b195-hostproc\") pod \"cilium-26szp\" (UID: \"b5a2ab86-e5b8-47b0-9f77-5077add6b195\") " pod="kube-system/cilium-26szp"
Dec 13 14:28:40.842795 kubelet[3022]: I1213 14:28:40.842767 3022 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b5a2ab86-e5b8-47b0-9f77-5077add6b195-cni-path\") pod \"cilium-26szp\" (UID: \"b5a2ab86-e5b8-47b0-9f77-5077add6b195\") " pod="kube-system/cilium-26szp"
Dec 13 14:28:40.843028 kubelet[3022]: I1213 14:28:40.842797 3022 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b5a2ab86-e5b8-47b0-9f77-5077add6b195-host-proc-sys-kernel\") pod \"cilium-26szp\" (UID: \"b5a2ab86-e5b8-47b0-9f77-5077add6b195\") " pod="kube-system/cilium-26szp"
Dec 13 14:28:40.843028 kubelet[3022]: I1213 14:28:40.842848 3022 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b5a2ab86-e5b8-47b0-9f77-5077add6b195-bpf-maps\") pod \"cilium-26szp\" (UID: \"b5a2ab86-e5b8-47b0-9f77-5077add6b195\") " pod="kube-system/cilium-26szp"
Dec 13 14:28:40.843028 kubelet[3022]: I1213 14:28:40.842896 3022 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b5a2ab86-e5b8-47b0-9f77-5077add6b195-lib-modules\") pod \"cilium-26szp\" (UID: \"b5a2ab86-e5b8-47b0-9f77-5077add6b195\") " pod="kube-system/cilium-26szp"
Dec 13 14:28:40.843028 kubelet[3022]: I1213 14:28:40.842928 3022 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b5a2ab86-e5b8-47b0-9f77-5077add6b195-hubble-tls\") pod \"cilium-26szp\" (UID: \"b5a2ab86-e5b8-47b0-9f77-5077add6b195\") " pod="kube-system/cilium-26szp"
Dec 13 14:28:40.843028 kubelet[3022]: I1213 14:28:40.842973 3022 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jtfx5\" (UniqueName: \"kubernetes.io/projected/b5a2ab86-e5b8-47b0-9f77-5077add6b195-kube-api-access-jtfx5\") pod \"cilium-26szp\" (UID: \"b5a2ab86-e5b8-47b0-9f77-5077add6b195\") " pod="kube-system/cilium-26szp"
Dec 13 14:28:40.843262 kubelet[3022]: I1213 14:28:40.843045 3022 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b5a2ab86-e5b8-47b0-9f77-5077add6b195-etc-cni-netd\") pod \"cilium-26szp\" (UID: \"b5a2ab86-e5b8-47b0-9f77-5077add6b195\") " pod="kube-system/cilium-26szp"
Dec 13 14:28:40.843262 kubelet[3022]: I1213 14:28:40.843095 3022 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b5a2ab86-e5b8-47b0-9f77-5077add6b195-host-proc-sys-net\") pod \"cilium-26szp\" (UID: \"b5a2ab86-e5b8-47b0-9f77-5077add6b195\") " pod="kube-system/cilium-26szp"
Dec 13 14:28:40.843262 kubelet[3022]: I1213 14:28:40.843146 3022 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b5a2ab86-e5b8-47b0-9f77-5077add6b195-cilium-run\") pod \"cilium-26szp\" (UID: \"b5a2ab86-e5b8-47b0-9f77-5077add6b195\") " pod="kube-system/cilium-26szp"
Dec 13 14:28:40.843262 kubelet[3022]: I1213 14:28:40.843195 3022 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b5a2ab86-e5b8-47b0-9f77-5077add6b195-clustermesh-secrets\") pod \"cilium-26szp\" (UID: \"b5a2ab86-e5b8-47b0-9f77-5077add6b195\") " pod="kube-system/cilium-26szp"
Dec 13 14:28:40.843262 kubelet[3022]: I1213 14:28:40.843244 3022 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b5a2ab86-e5b8-47b0-9f77-5077add6b195-cilium-config-path\") pod \"cilium-26szp\" (UID: \"b5a2ab86-e5b8-47b0-9f77-5077add6b195\") " pod="kube-system/cilium-26szp"
Dec 13 14:28:40.887407 kubelet[3022]: I1213 14:28:40.887367 3022 topology_manager.go:215] "Topology Admit Handler" podUID="de73ea01-7505-4165-a4f5-18a5c3f70754" podNamespace="kube-system" podName="cilium-operator-5cc964979-47hjr"
Dec 13 14:28:40.944460 kubelet[3022]: I1213 14:28:40.944121 3022 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/de73ea01-7505-4165-a4f5-18a5c3f70754-cilium-config-path\") pod \"cilium-operator-5cc964979-47hjr\" (UID: \"de73ea01-7505-4165-a4f5-18a5c3f70754\") " pod="kube-system/cilium-operator-5cc964979-47hjr"
Dec 13 14:28:40.944621 kubelet[3022]: I1213 14:28:40.944569 3022 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxl95\" (UniqueName: \"kubernetes.io/projected/de73ea01-7505-4165-a4f5-18a5c3f70754-kube-api-access-dxl95\") pod \"cilium-operator-5cc964979-47hjr\" (UID: \"de73ea01-7505-4165-a4f5-18a5c3f70754\") " pod="kube-system/cilium-operator-5cc964979-47hjr"
Dec 13 14:28:41.005383 env[1828]: time="2024-12-13T14:28:41.003863765Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-kvv2n,Uid:8fa0dc61-8dc9-44e5-83ee-6d056850a14e,Namespace:kube-system,Attempt:0,}"
Dec 13 14:28:41.016200 env[1828]: time="2024-12-13T14:28:41.016151063Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-26szp,Uid:b5a2ab86-e5b8-47b0-9f77-5077add6b195,Namespace:kube-system,Attempt:0,}"
Dec 13 14:28:41.086855 env[1828]: time="2024-12-13T14:28:41.086735482Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 14:28:41.087079 env[1828]: time="2024-12-13T14:28:41.086830119Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 14:28:41.087259 env[1828]: time="2024-12-13T14:28:41.087074379Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 14:28:41.087666 env[1828]: time="2024-12-13T14:28:41.087589627Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/9dacc6520981ae9330938c26d432d5c27ae3960ce4e6b9743de53762f6a24720 pid=3107 runtime=io.containerd.runc.v2
Dec 13 14:28:41.092387 env[1828]: time="2024-12-13T14:28:41.092299260Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 14:28:41.092564 env[1828]: time="2024-12-13T14:28:41.092425560Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 14:28:41.092564 env[1828]: time="2024-12-13T14:28:41.092459855Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 14:28:41.092772 env[1828]: time="2024-12-13T14:28:41.092711946Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/af94b16621806d991f822f80b6e546d55f2a53f97054babf069f4a7319a1f2d4 pid=3118 runtime=io.containerd.runc.v2
Dec 13 14:28:41.194397 env[1828]: time="2024-12-13T14:28:41.194356458Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-47hjr,Uid:de73ea01-7505-4165-a4f5-18a5c3f70754,Namespace:kube-system,Attempt:0,}"
Dec 13 14:28:41.199521 env[1828]: time="2024-12-13T14:28:41.199320575Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-kvv2n,Uid:8fa0dc61-8dc9-44e5-83ee-6d056850a14e,Namespace:kube-system,Attempt:0,} returns sandbox id \"9dacc6520981ae9330938c26d432d5c27ae3960ce4e6b9743de53762f6a24720\""
Dec 13 14:28:41.216541 env[1828]: time="2024-12-13T14:28:41.216492381Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-26szp,Uid:b5a2ab86-e5b8-47b0-9f77-5077add6b195,Namespace:kube-system,Attempt:0,} returns sandbox id \"af94b16621806d991f822f80b6e546d55f2a53f97054babf069f4a7319a1f2d4\""
Dec 13 14:28:41.226231 env[1828]: time="2024-12-13T14:28:41.226188272Z" level=info msg="CreateContainer within sandbox \"9dacc6520981ae9330938c26d432d5c27ae3960ce4e6b9743de53762f6a24720\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Dec 13 14:28:41.227898 env[1828]: time="2024-12-13T14:28:41.226808872Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Dec 13 14:28:41.242039 env[1828]: time="2024-12-13T14:28:41.241109709Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 14:28:41.242039 env[1828]: time="2024-12-13T14:28:41.241175568Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 14:28:41.242039 env[1828]: time="2024-12-13T14:28:41.241190005Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 14:28:41.242532 env[1828]: time="2024-12-13T14:28:41.242219226Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e8c8838c3813b01ca6e89f0326bf0dea492561e14857c28ac4750091aed8c635 pid=3189 runtime=io.containerd.runc.v2
Dec 13 14:28:41.280496 env[1828]: time="2024-12-13T14:28:41.280352192Z" level=info msg="CreateContainer within sandbox \"9dacc6520981ae9330938c26d432d5c27ae3960ce4e6b9743de53762f6a24720\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"d1f68e855f2dfc2b1b932e9b226b51efc8c8ef8e29566aa71d0a140d31e398f1\""
Dec 13 14:28:41.284519 env[1828]: time="2024-12-13T14:28:41.284470815Z" level=info msg="StartContainer for \"d1f68e855f2dfc2b1b932e9b226b51efc8c8ef8e29566aa71d0a140d31e398f1\""
Dec 13 14:28:41.382312 env[1828]: time="2024-12-13T14:28:41.381891325Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-47hjr,Uid:de73ea01-7505-4165-a4f5-18a5c3f70754,Namespace:kube-system,Attempt:0,} returns sandbox id \"e8c8838c3813b01ca6e89f0326bf0dea492561e14857c28ac4750091aed8c635\""
Dec 13 14:28:41.465352 env[1828]: time="2024-12-13T14:28:41.465255924Z" level=info msg="StartContainer for \"d1f68e855f2dfc2b1b932e9b226b51efc8c8ef8e29566aa71d0a140d31e398f1\" returns successfully"
Dec 13 14:28:48.949669 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2641637096.mount: Deactivated successfully.
Dec 13 14:28:53.203713 env[1828]: time="2024-12-13T14:28:53.203662612Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:28:53.208228 env[1828]: time="2024-12-13T14:28:53.207970769Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:28:53.211696 env[1828]: time="2024-12-13T14:28:53.211651402Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:28:53.212864 env[1828]: time="2024-12-13T14:28:53.212824614Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
Dec 13 14:28:53.214525 env[1828]: time="2024-12-13T14:28:53.214469407Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Dec 13 14:28:53.217079 env[1828]: time="2024-12-13T14:28:53.217037716Z" level=info msg="CreateContainer within sandbox \"af94b16621806d991f822f80b6e546d55f2a53f97054babf069f4a7319a1f2d4\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Dec 13 14:28:53.243393 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1465060331.mount: Deactivated successfully.
Dec 13 14:28:53.260524 env[1828]: time="2024-12-13T14:28:53.260354003Z" level=info msg="CreateContainer within sandbox \"af94b16621806d991f822f80b6e546d55f2a53f97054babf069f4a7319a1f2d4\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"947a1643f5065771cb097ba69c2a758448dabbd48c3156ea70bd68b48d109c4e\"" Dec 13 14:28:53.263849 env[1828]: time="2024-12-13T14:28:53.263809894Z" level=info msg="StartContainer for \"947a1643f5065771cb097ba69c2a758448dabbd48c3156ea70bd68b48d109c4e\"" Dec 13 14:28:53.337829 env[1828]: time="2024-12-13T14:28:53.335178398Z" level=info msg="StartContainer for \"947a1643f5065771cb097ba69c2a758448dabbd48c3156ea70bd68b48d109c4e\" returns successfully" Dec 13 14:28:53.859626 amazon-ssm-agent[1799]: 2024-12-13 14:28:53 INFO [HealthCheck] HealthCheck reporting agent health. Dec 13 14:28:53.892162 env[1828]: time="2024-12-13T14:28:53.892091023Z" level=info msg="shim disconnected" id=947a1643f5065771cb097ba69c2a758448dabbd48c3156ea70bd68b48d109c4e Dec 13 14:28:53.892544 env[1828]: time="2024-12-13T14:28:53.892517182Z" level=warning msg="cleaning up after shim disconnected" id=947a1643f5065771cb097ba69c2a758448dabbd48c3156ea70bd68b48d109c4e namespace=k8s.io Dec 13 14:28:53.892739 env[1828]: time="2024-12-13T14:28:53.892652067Z" level=info msg="cleaning up dead shim" Dec 13 14:28:53.902954 env[1828]: time="2024-12-13T14:28:53.902897668Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:28:53Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3427 runtime=io.containerd.runc.v2\n" Dec 13 14:28:54.086144 env[1828]: time="2024-12-13T14:28:54.084776634Z" level=info msg="CreateContainer within sandbox \"af94b16621806d991f822f80b6e546d55f2a53f97054babf069f4a7319a1f2d4\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Dec 13 14:28:54.105027 kubelet[3022]: I1213 14:28:54.104985 3022 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-kvv2n" 
podStartSLOduration=14.104944507 podStartE2EDuration="14.104944507s" podCreationTimestamp="2024-12-13 14:28:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:28:42.075543954 +0000 UTC m=+15.451059811" watchObservedRunningTime="2024-12-13 14:28:54.104944507 +0000 UTC m=+27.480460367" Dec 13 14:28:54.117239 env[1828]: time="2024-12-13T14:28:54.116600644Z" level=info msg="CreateContainer within sandbox \"af94b16621806d991f822f80b6e546d55f2a53f97054babf069f4a7319a1f2d4\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"ca191a1acb3823992f8bd1b4c2b3efb6d9baef0bb4d2dfc53a7245f0aa4f55b1\"" Dec 13 14:28:54.120271 env[1828]: time="2024-12-13T14:28:54.117781909Z" level=info msg="StartContainer for \"ca191a1acb3823992f8bd1b4c2b3efb6d9baef0bb4d2dfc53a7245f0aa4f55b1\"" Dec 13 14:28:54.179015 env[1828]: time="2024-12-13T14:28:54.178951591Z" level=info msg="StartContainer for \"ca191a1acb3823992f8bd1b4c2b3efb6d9baef0bb4d2dfc53a7245f0aa4f55b1\" returns successfully" Dec 13 14:28:54.192407 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 14:28:54.192796 systemd[1]: Stopped systemd-sysctl.service. Dec 13 14:28:54.193352 systemd[1]: Stopping systemd-sysctl.service... Dec 13 14:28:54.200339 systemd[1]: Starting systemd-sysctl.service... Dec 13 14:28:54.227746 systemd[1]: Finished systemd-sysctl.service. Dec 13 14:28:54.242380 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-947a1643f5065771cb097ba69c2a758448dabbd48c3156ea70bd68b48d109c4e-rootfs.mount: Deactivated successfully. Dec 13 14:28:54.249571 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ca191a1acb3823992f8bd1b4c2b3efb6d9baef0bb4d2dfc53a7245f0aa4f55b1-rootfs.mount: Deactivated successfully. 
Dec 13 14:28:54.273905 env[1828]: time="2024-12-13T14:28:54.273856451Z" level=info msg="shim disconnected" id=ca191a1acb3823992f8bd1b4c2b3efb6d9baef0bb4d2dfc53a7245f0aa4f55b1 Dec 13 14:28:54.275432 env[1828]: time="2024-12-13T14:28:54.275395031Z" level=warning msg="cleaning up after shim disconnected" id=ca191a1acb3823992f8bd1b4c2b3efb6d9baef0bb4d2dfc53a7245f0aa4f55b1 namespace=k8s.io Dec 13 14:28:54.275432 env[1828]: time="2024-12-13T14:28:54.275426857Z" level=info msg="cleaning up dead shim" Dec 13 14:28:54.293289 env[1828]: time="2024-12-13T14:28:54.293237323Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:28:54Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3497 runtime=io.containerd.runc.v2\n" Dec 13 14:28:54.817879 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3359923196.mount: Deactivated successfully. Dec 13 14:28:55.106523 env[1828]: time="2024-12-13T14:28:55.105913755Z" level=info msg="CreateContainer within sandbox \"af94b16621806d991f822f80b6e546d55f2a53f97054babf069f4a7319a1f2d4\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Dec 13 14:28:55.214244 env[1828]: time="2024-12-13T14:28:55.214176496Z" level=info msg="CreateContainer within sandbox \"af94b16621806d991f822f80b6e546d55f2a53f97054babf069f4a7319a1f2d4\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"4ccb926979084c501008f611deca1c34c7c2f1c11bafd730f67024e685b9303a\"" Dec 13 14:28:55.216198 env[1828]: time="2024-12-13T14:28:55.214915330Z" level=info msg="StartContainer for \"4ccb926979084c501008f611deca1c34c7c2f1c11bafd730f67024e685b9303a\"" Dec 13 14:28:55.241771 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4009673741.mount: Deactivated successfully. 
Dec 13 14:28:55.350963 env[1828]: time="2024-12-13T14:28:55.350911990Z" level=info msg="StartContainer for \"4ccb926979084c501008f611deca1c34c7c2f1c11bafd730f67024e685b9303a\" returns successfully" Dec 13 14:28:55.388169 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4ccb926979084c501008f611deca1c34c7c2f1c11bafd730f67024e685b9303a-rootfs.mount: Deactivated successfully. Dec 13 14:28:55.414832 env[1828]: time="2024-12-13T14:28:55.414776565Z" level=info msg="shim disconnected" id=4ccb926979084c501008f611deca1c34c7c2f1c11bafd730f67024e685b9303a Dec 13 14:28:55.414832 env[1828]: time="2024-12-13T14:28:55.414831334Z" level=warning msg="cleaning up after shim disconnected" id=4ccb926979084c501008f611deca1c34c7c2f1c11bafd730f67024e685b9303a namespace=k8s.io Dec 13 14:28:55.415302 env[1828]: time="2024-12-13T14:28:55.414844374Z" level=info msg="cleaning up dead shim" Dec 13 14:28:55.426814 env[1828]: time="2024-12-13T14:28:55.426769120Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:28:55Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3555 runtime=io.containerd.runc.v2\n" Dec 13 14:28:56.146042 env[1828]: time="2024-12-13T14:28:56.145976588Z" level=info msg="CreateContainer within sandbox \"af94b16621806d991f822f80b6e546d55f2a53f97054babf069f4a7319a1f2d4\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Dec 13 14:28:56.206353 env[1828]: time="2024-12-13T14:28:56.206310186Z" level=info msg="CreateContainer within sandbox \"af94b16621806d991f822f80b6e546d55f2a53f97054babf069f4a7319a1f2d4\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"2a4fbed9b13a8bf236e80188dc8e1082583159ae808120b012994b97ca23bd30\"" Dec 13 14:28:56.207453 env[1828]: time="2024-12-13T14:28:56.207354159Z" level=info msg="StartContainer for \"2a4fbed9b13a8bf236e80188dc8e1082583159ae808120b012994b97ca23bd30\"" Dec 13 14:28:56.242849 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2665933874.mount: 
Deactivated successfully. Dec 13 14:28:56.331098 env[1828]: time="2024-12-13T14:28:56.331052269Z" level=info msg="StartContainer for \"2a4fbed9b13a8bf236e80188dc8e1082583159ae808120b012994b97ca23bd30\" returns successfully" Dec 13 14:28:56.376134 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2a4fbed9b13a8bf236e80188dc8e1082583159ae808120b012994b97ca23bd30-rootfs.mount: Deactivated successfully. Dec 13 14:28:56.718913 env[1828]: time="2024-12-13T14:28:56.718859260Z" level=info msg="shim disconnected" id=2a4fbed9b13a8bf236e80188dc8e1082583159ae808120b012994b97ca23bd30 Dec 13 14:28:56.718913 env[1828]: time="2024-12-13T14:28:56.718910881Z" level=warning msg="cleaning up after shim disconnected" id=2a4fbed9b13a8bf236e80188dc8e1082583159ae808120b012994b97ca23bd30 namespace=k8s.io Dec 13 14:28:56.720107 env[1828]: time="2024-12-13T14:28:56.718923440Z" level=info msg="cleaning up dead shim" Dec 13 14:28:56.763724 env[1828]: time="2024-12-13T14:28:56.763670473Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:28:56Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3609 runtime=io.containerd.runc.v2\n" Dec 13 14:28:56.795579 env[1828]: time="2024-12-13T14:28:56.795023729Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:28:56.803621 env[1828]: time="2024-12-13T14:28:56.803578490Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:28:56.813485 env[1828]: time="2024-12-13T14:28:56.813439152Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:28:56.815056 env[1828]: time="2024-12-13T14:28:56.814803735Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Dec 13 14:28:56.826183 env[1828]: time="2024-12-13T14:28:56.825713020Z" level=info msg="CreateContainer within sandbox \"e8c8838c3813b01ca6e89f0326bf0dea492561e14857c28ac4750091aed8c635\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Dec 13 14:28:56.864393 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2890001482.mount: Deactivated successfully. Dec 13 14:28:56.873452 env[1828]: time="2024-12-13T14:28:56.873398868Z" level=info msg="CreateContainer within sandbox \"e8c8838c3813b01ca6e89f0326bf0dea492561e14857c28ac4750091aed8c635\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"29d64138db3021193dfacae08e5f50e8751337ae0054fc9053b1a45cef27040f\"" Dec 13 14:28:56.875803 env[1828]: time="2024-12-13T14:28:56.874566788Z" level=info msg="StartContainer for \"29d64138db3021193dfacae08e5f50e8751337ae0054fc9053b1a45cef27040f\"" Dec 13 14:28:56.943098 env[1828]: time="2024-12-13T14:28:56.943037334Z" level=info msg="StartContainer for \"29d64138db3021193dfacae08e5f50e8751337ae0054fc9053b1a45cef27040f\" returns successfully" Dec 13 14:28:57.122299 env[1828]: time="2024-12-13T14:28:57.122243388Z" level=info msg="CreateContainer within sandbox \"af94b16621806d991f822f80b6e546d55f2a53f97054babf069f4a7319a1f2d4\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Dec 13 14:28:57.156678 env[1828]: time="2024-12-13T14:28:57.156616443Z" level=info msg="CreateContainer within sandbox 
\"af94b16621806d991f822f80b6e546d55f2a53f97054babf069f4a7319a1f2d4\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"22866a7bc83f3387285696d9473f201fc35086c2a4f9de11ff78f2186088ef72\"" Dec 13 14:28:57.157771 env[1828]: time="2024-12-13T14:28:57.157735873Z" level=info msg="StartContainer for \"22866a7bc83f3387285696d9473f201fc35086c2a4f9de11ff78f2186088ef72\"" Dec 13 14:28:57.247489 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4070420984.mount: Deactivated successfully. Dec 13 14:28:57.438188 env[1828]: time="2024-12-13T14:28:57.438073283Z" level=info msg="StartContainer for \"22866a7bc83f3387285696d9473f201fc35086c2a4f9de11ff78f2186088ef72\" returns successfully" Dec 13 14:28:57.917713 kubelet[3022]: I1213 14:28:57.917685 3022 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Dec 13 14:28:58.190284 kubelet[3022]: I1213 14:28:58.190172 3022 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-5cc964979-47hjr" podStartSLOduration=2.763974455 podStartE2EDuration="18.190112449s" podCreationTimestamp="2024-12-13 14:28:40 +0000 UTC" firstStartedPulling="2024-12-13 14:28:41.389121486 +0000 UTC m=+14.764637333" lastFinishedPulling="2024-12-13 14:28:56.815259489 +0000 UTC m=+30.190775327" observedRunningTime="2024-12-13 14:28:57.233250706 +0000 UTC m=+30.608766562" watchObservedRunningTime="2024-12-13 14:28:58.190112449 +0000 UTC m=+31.565628305" Dec 13 14:28:58.190496 kubelet[3022]: I1213 14:28:58.190370 3022 topology_manager.go:215] "Topology Admit Handler" podUID="b85a1fe2-508d-4331-b6b7-108f740288a2" podNamespace="kube-system" podName="coredns-76f75df574-hhkkj" Dec 13 14:28:58.205288 kubelet[3022]: I1213 14:28:58.205252 3022 topology_manager.go:215] "Topology Admit Handler" podUID="d7e1375d-f055-45c3-b45e-9815ee581abf" podNamespace="kube-system" podName="coredns-76f75df574-q2r4z" Dec 13 14:28:58.234107 kubelet[3022]: W1213 14:28:58.234067 3022 
reflector.go:539] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:ip-172-31-23-203" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-23-203' and this object Dec 13 14:28:58.234332 kubelet[3022]: E1213 14:28:58.234318 3022 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:ip-172-31-23-203" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-23-203' and this object Dec 13 14:28:58.298237 kubelet[3022]: I1213 14:28:58.298132 3022 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-26szp" podStartSLOduration=6.305620838 podStartE2EDuration="18.297928643s" podCreationTimestamp="2024-12-13 14:28:40 +0000 UTC" firstStartedPulling="2024-12-13 14:28:41.221090073 +0000 UTC m=+14.596605906" lastFinishedPulling="2024-12-13 14:28:53.213397859 +0000 UTC m=+26.588913711" observedRunningTime="2024-12-13 14:28:58.297673328 +0000 UTC m=+31.673189184" watchObservedRunningTime="2024-12-13 14:28:58.297928643 +0000 UTC m=+31.673444512" Dec 13 14:28:58.361588 kubelet[3022]: I1213 14:28:58.361506 3022 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b85a1fe2-508d-4331-b6b7-108f740288a2-config-volume\") pod \"coredns-76f75df574-hhkkj\" (UID: \"b85a1fe2-508d-4331-b6b7-108f740288a2\") " pod="kube-system/coredns-76f75df574-hhkkj" Dec 13 14:28:58.361886 kubelet[3022]: I1213 14:28:58.361869 3022 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sr2f2\" (UniqueName: \"kubernetes.io/projected/b85a1fe2-508d-4331-b6b7-108f740288a2-kube-api-access-sr2f2\") pod 
\"coredns-76f75df574-hhkkj\" (UID: \"b85a1fe2-508d-4331-b6b7-108f740288a2\") " pod="kube-system/coredns-76f75df574-hhkkj" Dec 13 14:28:58.362094 kubelet[3022]: I1213 14:28:58.362080 3022 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d7e1375d-f055-45c3-b45e-9815ee581abf-config-volume\") pod \"coredns-76f75df574-q2r4z\" (UID: \"d7e1375d-f055-45c3-b45e-9815ee581abf\") " pod="kube-system/coredns-76f75df574-q2r4z" Dec 13 14:28:58.362266 kubelet[3022]: I1213 14:28:58.362246 3022 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dnhkb\" (UniqueName: \"kubernetes.io/projected/d7e1375d-f055-45c3-b45e-9815ee581abf-kube-api-access-dnhkb\") pod \"coredns-76f75df574-q2r4z\" (UID: \"d7e1375d-f055-45c3-b45e-9815ee581abf\") " pod="kube-system/coredns-76f75df574-q2r4z" Dec 13 14:28:59.408716 env[1828]: time="2024-12-13T14:28:59.408281714Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-hhkkj,Uid:b85a1fe2-508d-4331-b6b7-108f740288a2,Namespace:kube-system,Attempt:0,}" Dec 13 14:28:59.428185 env[1828]: time="2024-12-13T14:28:59.428127438Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-q2r4z,Uid:d7e1375d-f055-45c3-b45e-9815ee581abf,Namespace:kube-system,Attempt:0,}" Dec 13 14:29:00.894591 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready Dec 13 14:29:00.894750 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Dec 13 14:29:00.893828 systemd-networkd[1507]: cilium_host: Link UP Dec 13 14:29:00.894364 systemd-networkd[1507]: cilium_net: Link UP Dec 13 14:29:00.895829 systemd-networkd[1507]: cilium_net: Gained carrier Dec 13 14:29:00.896826 systemd-networkd[1507]: cilium_host: Gained carrier Dec 13 14:29:00.899797 (udev-worker)[3804]: Network interface NamePolicy= disabled on kernel command line. 
Dec 13 14:29:00.900291 (udev-worker)[3741]: Network interface NamePolicy= disabled on kernel command line. Dec 13 14:29:01.170532 (udev-worker)[3823]: Network interface NamePolicy= disabled on kernel command line. Dec 13 14:29:01.196111 systemd-networkd[1507]: cilium_vxlan: Link UP Dec 13 14:29:01.196120 systemd-networkd[1507]: cilium_vxlan: Gained carrier Dec 13 14:29:01.329560 systemd-networkd[1507]: cilium_net: Gained IPv6LL Dec 13 14:29:01.889172 systemd-networkd[1507]: cilium_host: Gained IPv6LL Dec 13 14:29:02.529677 systemd-networkd[1507]: cilium_vxlan: Gained IPv6LL Dec 13 14:29:02.667463 kernel: NET: Registered PF_ALG protocol family Dec 13 14:29:05.150804 (udev-worker)[3824]: Network interface NamePolicy= disabled on kernel command line. Dec 13 14:29:05.180808 systemd-networkd[1507]: lxc_health: Link UP Dec 13 14:29:05.195016 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Dec 13 14:29:05.195880 systemd-networkd[1507]: lxc_health: Gained carrier Dec 13 14:29:05.580902 (udev-worker)[4139]: Network interface NamePolicy= disabled on kernel command line. 
Dec 13 14:29:05.591511 systemd-networkd[1507]: lxc7f80ab5b3a33: Link UP Dec 13 14:29:05.599403 systemd-networkd[1507]: lxc4a1b69f92806: Link UP Dec 13 14:29:05.605129 kernel: eth0: renamed from tmpd5329 Dec 13 14:29:05.611026 kernel: eth0: renamed from tmp046ec Dec 13 14:29:05.617112 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc7f80ab5b3a33: link becomes ready Dec 13 14:29:05.617369 systemd-networkd[1507]: lxc7f80ab5b3a33: Gained carrier Dec 13 14:29:05.620633 systemd-networkd[1507]: lxc4a1b69f92806: Gained carrier Dec 13 14:29:05.622241 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc4a1b69f92806: link becomes ready Dec 13 14:29:06.927164 systemd-networkd[1507]: lxc7f80ab5b3a33: Gained IPv6LL Dec 13 14:29:06.927504 systemd-networkd[1507]: lxc_health: Gained IPv6LL Dec 13 14:29:07.265171 systemd-networkd[1507]: lxc4a1b69f92806: Gained IPv6LL Dec 13 14:29:12.593906 env[1828]: time="2024-12-13T14:29:12.593818189Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:29:12.594561 env[1828]: time="2024-12-13T14:29:12.594523084Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:29:12.595087 env[1828]: time="2024-12-13T14:29:12.595021583Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:29:12.595639 env[1828]: time="2024-12-13T14:29:12.595600069Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d53292e31ce5780847e770ce46d31367c147439babf7c86b79f22b8a3b4ac223 pid=4183 runtime=io.containerd.runc.v2 Dec 13 14:29:12.605323 env[1828]: time="2024-12-13T14:29:12.605232385Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:29:12.632259 env[1828]: time="2024-12-13T14:29:12.613056186Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:29:12.632259 env[1828]: time="2024-12-13T14:29:12.613171820Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:29:12.632259 env[1828]: time="2024-12-13T14:29:12.614129749Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/046ecc395a27d8b667d9dee25a68cb890e7bd05e6948fe076f3ad91a5bc36f98 pid=4193 runtime=io.containerd.runc.v2 Dec 13 14:29:12.672827 systemd[1]: run-containerd-runc-k8s.io-d53292e31ce5780847e770ce46d31367c147439babf7c86b79f22b8a3b4ac223-runc.BUUdXS.mount: Deactivated successfully. Dec 13 14:29:12.865484 env[1828]: time="2024-12-13T14:29:12.865439323Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-hhkkj,Uid:b85a1fe2-508d-4331-b6b7-108f740288a2,Namespace:kube-system,Attempt:0,} returns sandbox id \"046ecc395a27d8b667d9dee25a68cb890e7bd05e6948fe076f3ad91a5bc36f98\"" Dec 13 14:29:12.880841 env[1828]: time="2024-12-13T14:29:12.880794418Z" level=info msg="CreateContainer within sandbox \"046ecc395a27d8b667d9dee25a68cb890e7bd05e6948fe076f3ad91a5bc36f98\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 14:29:12.938688 env[1828]: time="2024-12-13T14:29:12.938640622Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-q2r4z,Uid:d7e1375d-f055-45c3-b45e-9815ee581abf,Namespace:kube-system,Attempt:0,} returns sandbox id \"d53292e31ce5780847e770ce46d31367c147439babf7c86b79f22b8a3b4ac223\"" Dec 13 14:29:12.957549 env[1828]: time="2024-12-13T14:29:12.957474639Z" level=info msg="CreateContainer within sandbox \"d53292e31ce5780847e770ce46d31367c147439babf7c86b79f22b8a3b4ac223\" 
for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 14:29:13.033776 env[1828]: time="2024-12-13T14:29:13.033676529Z" level=info msg="CreateContainer within sandbox \"046ecc395a27d8b667d9dee25a68cb890e7bd05e6948fe076f3ad91a5bc36f98\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9aae61fab634d3995bb1e4b9ac9e6bcac8c3ba223321fc08a09f692b138051f6\"" Dec 13 14:29:13.053246 env[1828]: time="2024-12-13T14:29:13.053200974Z" level=info msg="StartContainer for \"9aae61fab634d3995bb1e4b9ac9e6bcac8c3ba223321fc08a09f692b138051f6\"" Dec 13 14:29:13.064914 env[1828]: time="2024-12-13T14:29:13.064860739Z" level=info msg="CreateContainer within sandbox \"d53292e31ce5780847e770ce46d31367c147439babf7c86b79f22b8a3b4ac223\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3bf95aaf8bfa4013e861cb200082367dbf72c4ffc63cd08bb094f18ee8c6007a\"" Dec 13 14:29:13.066881 env[1828]: time="2024-12-13T14:29:13.066829319Z" level=info msg="StartContainer for \"3bf95aaf8bfa4013e861cb200082367dbf72c4ffc63cd08bb094f18ee8c6007a\"" Dec 13 14:29:13.191932 env[1828]: time="2024-12-13T14:29:13.191831941Z" level=info msg="StartContainer for \"9aae61fab634d3995bb1e4b9ac9e6bcac8c3ba223321fc08a09f692b138051f6\" returns successfully" Dec 13 14:29:13.201043 env[1828]: time="2024-12-13T14:29:13.200930468Z" level=info msg="StartContainer for \"3bf95aaf8bfa4013e861cb200082367dbf72c4ffc63cd08bb094f18ee8c6007a\" returns successfully" Dec 13 14:29:13.288314 kubelet[3022]: I1213 14:29:13.286142 3022 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-q2r4z" podStartSLOduration=33.285316581000004 podStartE2EDuration="33.285316581s" podCreationTimestamp="2024-12-13 14:28:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:29:13.284079162 +0000 UTC m=+46.659595019" watchObservedRunningTime="2024-12-13 
14:29:13.285316581 +0000 UTC m=+46.660832437" Dec 13 14:29:14.269660 kubelet[3022]: I1213 14:29:14.269618 3022 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-hhkkj" podStartSLOduration=34.269556552 podStartE2EDuration="34.269556552s" podCreationTimestamp="2024-12-13 14:28:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:29:13.330865329 +0000 UTC m=+46.706381187" watchObservedRunningTime="2024-12-13 14:29:14.269556552 +0000 UTC m=+47.645072409" Dec 13 14:29:16.005587 systemd[1]: Started sshd@5-172.31.23.203:22-139.178.89.65:44974.service. Dec 13 14:29:16.237131 sshd[4340]: Accepted publickey for core from 139.178.89.65 port 44974 ssh2: RSA SHA256:kjZzhLCfrUb6HP3VZI7nfxYjuxqu9bKyQNrCGPkPDkk Dec 13 14:29:16.239610 sshd[4340]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:29:16.251430 systemd[1]: Started session-6.scope. Dec 13 14:29:16.251939 systemd-logind[1813]: New session 6 of user core. Dec 13 14:29:16.622733 sshd[4340]: pam_unix(sshd:session): session closed for user core Dec 13 14:29:16.626572 systemd[1]: sshd@5-172.31.23.203:22-139.178.89.65:44974.service: Deactivated successfully. Dec 13 14:29:16.629100 systemd[1]: session-6.scope: Deactivated successfully. Dec 13 14:29:16.630038 systemd-logind[1813]: Session 6 logged out. Waiting for processes to exit. Dec 13 14:29:16.633252 systemd-logind[1813]: Removed session 6. Dec 13 14:29:21.647141 systemd[1]: Started sshd@6-172.31.23.203:22-139.178.89.65:33078.service. Dec 13 14:29:21.816562 sshd[4353]: Accepted publickey for core from 139.178.89.65 port 33078 ssh2: RSA SHA256:kjZzhLCfrUb6HP3VZI7nfxYjuxqu9bKyQNrCGPkPDkk Dec 13 14:29:21.819325 sshd[4353]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:29:21.836760 systemd[1]: Started session-7.scope. 
Dec 13 14:29:21.838612 systemd-logind[1813]: New session 7 of user core. Dec 13 14:29:22.120394 sshd[4353]: pam_unix(sshd:session): session closed for user core Dec 13 14:29:22.124943 systemd[1]: sshd@6-172.31.23.203:22-139.178.89.65:33078.service: Deactivated successfully. Dec 13 14:29:22.128299 systemd-logind[1813]: Session 7 logged out. Waiting for processes to exit. Dec 13 14:29:22.128502 systemd[1]: session-7.scope: Deactivated successfully. Dec 13 14:29:22.130970 systemd-logind[1813]: Removed session 7. Dec 13 14:29:27.147043 systemd[1]: Started sshd@7-172.31.23.203:22-139.178.89.65:33084.service. Dec 13 14:29:27.312807 sshd[4369]: Accepted publickey for core from 139.178.89.65 port 33084 ssh2: RSA SHA256:kjZzhLCfrUb6HP3VZI7nfxYjuxqu9bKyQNrCGPkPDkk Dec 13 14:29:27.314535 sshd[4369]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:29:27.324670 systemd[1]: Started session-8.scope. Dec 13 14:29:27.326066 systemd-logind[1813]: New session 8 of user core. Dec 13 14:29:27.572670 sshd[4369]: pam_unix(sshd:session): session closed for user core Dec 13 14:29:27.577100 systemd[1]: sshd@7-172.31.23.203:22-139.178.89.65:33084.service: Deactivated successfully. Dec 13 14:29:27.579792 systemd[1]: session-8.scope: Deactivated successfully. Dec 13 14:29:27.580692 systemd-logind[1813]: Session 8 logged out. Waiting for processes to exit. Dec 13 14:29:27.582646 systemd-logind[1813]: Removed session 8. Dec 13 14:29:32.588747 systemd[1]: Started sshd@8-172.31.23.203:22-139.178.89.65:35814.service. Dec 13 14:29:32.774374 sshd[4383]: Accepted publickey for core from 139.178.89.65 port 35814 ssh2: RSA SHA256:kjZzhLCfrUb6HP3VZI7nfxYjuxqu9bKyQNrCGPkPDkk Dec 13 14:29:32.776664 sshd[4383]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:29:32.793112 systemd-logind[1813]: New session 9 of user core. Dec 13 14:29:32.793341 systemd[1]: Started session-9.scope. 
Dec 13 14:29:33.034519 sshd[4383]: pam_unix(sshd:session): session closed for user core
Dec 13 14:29:33.039193 systemd-logind[1813]: Session 9 logged out. Waiting for processes to exit.
Dec 13 14:29:33.039534 systemd[1]: sshd@8-172.31.23.203:22-139.178.89.65:35814.service: Deactivated successfully.
Dec 13 14:29:33.041184 systemd[1]: session-9.scope: Deactivated successfully.
Dec 13 14:29:33.041940 systemd-logind[1813]: Removed session 9.
Dec 13 14:29:38.060579 systemd[1]: Started sshd@9-172.31.23.203:22-139.178.89.65:50302.service.
Dec 13 14:29:38.231651 sshd[4397]: Accepted publickey for core from 139.178.89.65 port 50302 ssh2: RSA SHA256:kjZzhLCfrUb6HP3VZI7nfxYjuxqu9bKyQNrCGPkPDkk
Dec 13 14:29:38.236595 sshd[4397]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:29:38.251033 systemd-logind[1813]: New session 10 of user core.
Dec 13 14:29:38.251571 systemd[1]: Started session-10.scope.
Dec 13 14:29:38.485892 sshd[4397]: pam_unix(sshd:session): session closed for user core
Dec 13 14:29:38.489592 systemd[1]: sshd@9-172.31.23.203:22-139.178.89.65:50302.service: Deactivated successfully.
Dec 13 14:29:38.491521 systemd[1]: session-10.scope: Deactivated successfully.
Dec 13 14:29:38.492046 systemd-logind[1813]: Session 10 logged out. Waiting for processes to exit.
Dec 13 14:29:38.496912 systemd-logind[1813]: Removed session 10.
Dec 13 14:29:43.510451 systemd[1]: Started sshd@10-172.31.23.203:22-139.178.89.65:50310.service.
Dec 13 14:29:43.674499 sshd[4413]: Accepted publickey for core from 139.178.89.65 port 50310 ssh2: RSA SHA256:kjZzhLCfrUb6HP3VZI7nfxYjuxqu9bKyQNrCGPkPDkk
Dec 13 14:29:43.676508 sshd[4413]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:29:43.683261 systemd[1]: Started session-11.scope.
Dec 13 14:29:43.684718 systemd-logind[1813]: New session 11 of user core.
Dec 13 14:29:43.895157 sshd[4413]: pam_unix(sshd:session): session closed for user core
Dec 13 14:29:43.899138 systemd[1]: sshd@10-172.31.23.203:22-139.178.89.65:50310.service: Deactivated successfully.
Dec 13 14:29:43.900450 systemd[1]: session-11.scope: Deactivated successfully.
Dec 13 14:29:43.901353 systemd-logind[1813]: Session 11 logged out. Waiting for processes to exit.
Dec 13 14:29:43.903029 systemd-logind[1813]: Removed session 11.
Dec 13 14:29:43.919830 systemd[1]: Started sshd@11-172.31.23.203:22-139.178.89.65:50314.service.
Dec 13 14:29:44.085168 sshd[4427]: Accepted publickey for core from 139.178.89.65 port 50314 ssh2: RSA SHA256:kjZzhLCfrUb6HP3VZI7nfxYjuxqu9bKyQNrCGPkPDkk
Dec 13 14:29:44.086900 sshd[4427]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:29:44.093563 systemd[1]: Started session-12.scope.
Dec 13 14:29:44.094172 systemd-logind[1813]: New session 12 of user core.
Dec 13 14:29:44.397900 sshd[4427]: pam_unix(sshd:session): session closed for user core
Dec 13 14:29:44.403713 systemd[1]: sshd@11-172.31.23.203:22-139.178.89.65:50314.service: Deactivated successfully.
Dec 13 14:29:44.404863 systemd[1]: session-12.scope: Deactivated successfully.
Dec 13 14:29:44.405745 systemd-logind[1813]: Session 12 logged out. Waiting for processes to exit.
Dec 13 14:29:44.407802 systemd-logind[1813]: Removed session 12.
Dec 13 14:29:44.425084 systemd[1]: Started sshd@12-172.31.23.203:22-139.178.89.65:50316.service.
Dec 13 14:29:44.624939 sshd[4437]: Accepted publickey for core from 139.178.89.65 port 50316 ssh2: RSA SHA256:kjZzhLCfrUb6HP3VZI7nfxYjuxqu9bKyQNrCGPkPDkk
Dec 13 14:29:44.627064 sshd[4437]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:29:44.633642 systemd[1]: Started session-13.scope.
Dec 13 14:29:44.634162 systemd-logind[1813]: New session 13 of user core.
Dec 13 14:29:44.889012 sshd[4437]: pam_unix(sshd:session): session closed for user core
Dec 13 14:29:44.893749 systemd[1]: sshd@12-172.31.23.203:22-139.178.89.65:50316.service: Deactivated successfully.
Dec 13 14:29:44.896418 systemd[1]: session-13.scope: Deactivated successfully.
Dec 13 14:29:44.897663 systemd-logind[1813]: Session 13 logged out. Waiting for processes to exit.
Dec 13 14:29:44.901556 systemd-logind[1813]: Removed session 13.
Dec 13 14:29:49.916030 systemd[1]: Started sshd@13-172.31.23.203:22-139.178.89.65:57206.service.
Dec 13 14:29:50.099853 sshd[4449]: Accepted publickey for core from 139.178.89.65 port 57206 ssh2: RSA SHA256:kjZzhLCfrUb6HP3VZI7nfxYjuxqu9bKyQNrCGPkPDkk
Dec 13 14:29:50.103600 sshd[4449]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:29:50.121594 systemd[1]: Started session-14.scope.
Dec 13 14:29:50.123879 systemd-logind[1813]: New session 14 of user core.
Dec 13 14:29:50.337618 sshd[4449]: pam_unix(sshd:session): session closed for user core
Dec 13 14:29:50.343720 systemd[1]: sshd@13-172.31.23.203:22-139.178.89.65:57206.service: Deactivated successfully.
Dec 13 14:29:50.348763 systemd-logind[1813]: Session 14 logged out. Waiting for processes to exit.
Dec 13 14:29:50.348854 systemd[1]: session-14.scope: Deactivated successfully.
Dec 13 14:29:50.352428 systemd-logind[1813]: Removed session 14.
Dec 13 14:29:55.364726 systemd[1]: Started sshd@14-172.31.23.203:22-139.178.89.65:57216.service.
Dec 13 14:29:55.579760 sshd[4462]: Accepted publickey for core from 139.178.89.65 port 57216 ssh2: RSA SHA256:kjZzhLCfrUb6HP3VZI7nfxYjuxqu9bKyQNrCGPkPDkk
Dec 13 14:29:55.582607 sshd[4462]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:29:55.598596 systemd[1]: Started session-15.scope.
Dec 13 14:29:55.599895 systemd-logind[1813]: New session 15 of user core.
Dec 13 14:29:55.865929 sshd[4462]: pam_unix(sshd:session): session closed for user core
Dec 13 14:29:55.872606 systemd[1]: sshd@14-172.31.23.203:22-139.178.89.65:57216.service: Deactivated successfully.
Dec 13 14:29:55.874551 systemd[1]: session-15.scope: Deactivated successfully.
Dec 13 14:29:55.875284 systemd-logind[1813]: Session 15 logged out. Waiting for processes to exit.
Dec 13 14:29:55.876621 systemd-logind[1813]: Removed session 15.
Dec 13 14:30:00.893159 systemd[1]: Started sshd@15-172.31.23.203:22-139.178.89.65:57298.service.
Dec 13 14:30:01.102494 sshd[4475]: Accepted publickey for core from 139.178.89.65 port 57298 ssh2: RSA SHA256:kjZzhLCfrUb6HP3VZI7nfxYjuxqu9bKyQNrCGPkPDkk
Dec 13 14:30:01.104849 sshd[4475]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:30:01.145132 systemd[1]: Started session-16.scope.
Dec 13 14:30:01.146918 systemd-logind[1813]: New session 16 of user core.
Dec 13 14:30:01.681840 sshd[4475]: pam_unix(sshd:session): session closed for user core
Dec 13 14:30:01.726256 systemd-logind[1813]: Session 16 logged out. Waiting for processes to exit.
Dec 13 14:30:01.739164 systemd[1]: Started sshd@16-172.31.23.203:22-139.178.89.65:57304.service.
Dec 13 14:30:01.749044 systemd[1]: sshd@15-172.31.23.203:22-139.178.89.65:57298.service: Deactivated successfully.
Dec 13 14:30:01.766804 systemd[1]: session-16.scope: Deactivated successfully.
Dec 13 14:30:01.767657 systemd-logind[1813]: Removed session 16.
Dec 13 14:30:02.103168 sshd[4487]: Accepted publickey for core from 139.178.89.65 port 57304 ssh2: RSA SHA256:kjZzhLCfrUb6HP3VZI7nfxYjuxqu9bKyQNrCGPkPDkk
Dec 13 14:30:02.104590 sshd[4487]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:30:02.133853 systemd[1]: Started session-17.scope.
Dec 13 14:30:02.134248 systemd-logind[1813]: New session 17 of user core.
Dec 13 14:30:03.736173 sshd[4487]: pam_unix(sshd:session): session closed for user core
Dec 13 14:30:03.743052 systemd[1]: sshd@16-172.31.23.203:22-139.178.89.65:57304.service: Deactivated successfully.
Dec 13 14:30:03.745867 systemd[1]: session-17.scope: Deactivated successfully.
Dec 13 14:30:03.745867 systemd-logind[1813]: Session 17 logged out. Waiting for processes to exit.
Dec 13 14:30:03.756759 systemd-logind[1813]: Removed session 17.
Dec 13 14:30:03.766201 systemd[1]: Started sshd@17-172.31.23.203:22-139.178.89.65:57314.service.
Dec 13 14:30:03.981965 sshd[4499]: Accepted publickey for core from 139.178.89.65 port 57314 ssh2: RSA SHA256:kjZzhLCfrUb6HP3VZI7nfxYjuxqu9bKyQNrCGPkPDkk
Dec 13 14:30:03.984698 sshd[4499]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:30:04.011854 systemd[1]: Started session-18.scope.
Dec 13 14:30:04.015412 systemd-logind[1813]: New session 18 of user core.
Dec 13 14:30:06.941815 sshd[4499]: pam_unix(sshd:session): session closed for user core
Dec 13 14:30:06.990039 systemd[1]: Started sshd@18-172.31.23.203:22-139.178.89.65:57324.service.
Dec 13 14:30:07.000189 systemd[1]: sshd@17-172.31.23.203:22-139.178.89.65:57314.service: Deactivated successfully.
Dec 13 14:30:07.011492 systemd[1]: session-18.scope: Deactivated successfully.
Dec 13 14:30:07.012539 systemd-logind[1813]: Session 18 logged out. Waiting for processes to exit.
Dec 13 14:30:07.024831 systemd-logind[1813]: Removed session 18.
Dec 13 14:30:07.201655 sshd[4516]: Accepted publickey for core from 139.178.89.65 port 57324 ssh2: RSA SHA256:kjZzhLCfrUb6HP3VZI7nfxYjuxqu9bKyQNrCGPkPDkk
Dec 13 14:30:07.204137 sshd[4516]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:30:07.218921 systemd[1]: Started session-19.scope.
Dec 13 14:30:07.219908 systemd-logind[1813]: New session 19 of user core.
Dec 13 14:30:07.893119 sshd[4516]: pam_unix(sshd:session): session closed for user core
Dec 13 14:30:07.898202 systemd-logind[1813]: Session 19 logged out. Waiting for processes to exit.
Dec 13 14:30:07.900723 systemd[1]: sshd@18-172.31.23.203:22-139.178.89.65:57324.service: Deactivated successfully.
Dec 13 14:30:07.903664 systemd[1]: session-19.scope: Deactivated successfully.
Dec 13 14:30:07.905124 systemd-logind[1813]: Removed session 19.
Dec 13 14:30:07.917602 systemd[1]: Started sshd@19-172.31.23.203:22-139.178.89.65:57336.service.
Dec 13 14:30:08.085460 sshd[4527]: Accepted publickey for core from 139.178.89.65 port 57336 ssh2: RSA SHA256:kjZzhLCfrUb6HP3VZI7nfxYjuxqu9bKyQNrCGPkPDkk
Dec 13 14:30:08.087184 sshd[4527]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:30:08.100193 systemd-logind[1813]: New session 20 of user core.
Dec 13 14:30:08.100924 systemd[1]: Started session-20.scope.
Dec 13 14:30:08.360854 sshd[4527]: pam_unix(sshd:session): session closed for user core
Dec 13 14:30:08.368292 systemd[1]: sshd@19-172.31.23.203:22-139.178.89.65:57336.service: Deactivated successfully.
Dec 13 14:30:08.370640 systemd[1]: session-20.scope: Deactivated successfully.
Dec 13 14:30:08.371172 systemd-logind[1813]: Session 20 logged out. Waiting for processes to exit.
Dec 13 14:30:08.376882 systemd-logind[1813]: Removed session 20.
Dec 13 14:30:13.384336 systemd[1]: Started sshd@20-172.31.23.203:22-139.178.89.65:44294.service.
Dec 13 14:30:13.558288 sshd[4541]: Accepted publickey for core from 139.178.89.65 port 44294 ssh2: RSA SHA256:kjZzhLCfrUb6HP3VZI7nfxYjuxqu9bKyQNrCGPkPDkk
Dec 13 14:30:13.560048 sshd[4541]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:30:13.573093 systemd[1]: Started session-21.scope.
Dec 13 14:30:13.576362 systemd-logind[1813]: New session 21 of user core.
Dec 13 14:30:13.819351 sshd[4541]: pam_unix(sshd:session): session closed for user core
Dec 13 14:30:13.827438 systemd[1]: sshd@20-172.31.23.203:22-139.178.89.65:44294.service: Deactivated successfully.
Dec 13 14:30:13.829710 systemd-logind[1813]: Session 21 logged out. Waiting for processes to exit.
Dec 13 14:30:13.836580 systemd[1]: session-21.scope: Deactivated successfully.
Dec 13 14:30:13.843232 systemd-logind[1813]: Removed session 21.
Dec 13 14:30:18.845391 systemd[1]: Started sshd@21-172.31.23.203:22-139.178.89.65:44244.service.
Dec 13 14:30:19.015458 sshd[4558]: Accepted publickey for core from 139.178.89.65 port 44244 ssh2: RSA SHA256:kjZzhLCfrUb6HP3VZI7nfxYjuxqu9bKyQNrCGPkPDkk
Dec 13 14:30:19.017459 sshd[4558]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:30:19.023623 systemd[1]: Started session-22.scope.
Dec 13 14:30:19.024703 systemd-logind[1813]: New session 22 of user core.
Dec 13 14:30:19.240203 sshd[4558]: pam_unix(sshd:session): session closed for user core
Dec 13 14:30:19.243958 systemd[1]: sshd@21-172.31.23.203:22-139.178.89.65:44244.service: Deactivated successfully.
Dec 13 14:30:19.245384 systemd-logind[1813]: Session 22 logged out. Waiting for processes to exit.
Dec 13 14:30:19.245485 systemd[1]: session-22.scope: Deactivated successfully.
Dec 13 14:30:19.248587 systemd-logind[1813]: Removed session 22.
Dec 13 14:30:24.266302 systemd[1]: Started sshd@22-172.31.23.203:22-139.178.89.65:44260.service.
Dec 13 14:30:24.432021 sshd[4571]: Accepted publickey for core from 139.178.89.65 port 44260 ssh2: RSA SHA256:kjZzhLCfrUb6HP3VZI7nfxYjuxqu9bKyQNrCGPkPDkk
Dec 13 14:30:24.434335 sshd[4571]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:30:24.440519 systemd[1]: Started session-23.scope.
Dec 13 14:30:24.440961 systemd-logind[1813]: New session 23 of user core.
Dec 13 14:30:24.635726 sshd[4571]: pam_unix(sshd:session): session closed for user core
Dec 13 14:30:24.639187 systemd[1]: sshd@22-172.31.23.203:22-139.178.89.65:44260.service: Deactivated successfully.
Dec 13 14:30:24.640597 systemd[1]: session-23.scope: Deactivated successfully.
Dec 13 14:30:24.641173 systemd-logind[1813]: Session 23 logged out. Waiting for processes to exit.
Dec 13 14:30:24.642359 systemd-logind[1813]: Removed session 23.
Dec 13 14:30:29.661398 systemd[1]: Started sshd@23-172.31.23.203:22-139.178.89.65:45988.service.
Dec 13 14:30:29.874924 sshd[4586]: Accepted publickey for core from 139.178.89.65 port 45988 ssh2: RSA SHA256:kjZzhLCfrUb6HP3VZI7nfxYjuxqu9bKyQNrCGPkPDkk
Dec 13 14:30:29.879468 sshd[4586]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:30:29.907818 systemd[1]: Started session-24.scope.
Dec 13 14:30:29.908061 systemd-logind[1813]: New session 24 of user core.
Dec 13 14:30:30.139906 sshd[4586]: pam_unix(sshd:session): session closed for user core
Dec 13 14:30:30.145197 systemd[1]: sshd@23-172.31.23.203:22-139.178.89.65:45988.service: Deactivated successfully.
Dec 13 14:30:30.147539 systemd[1]: session-24.scope: Deactivated successfully.
Dec 13 14:30:30.148307 systemd-logind[1813]: Session 24 logged out. Waiting for processes to exit.
Dec 13 14:30:30.150389 systemd-logind[1813]: Removed session 24.
Dec 13 14:30:30.172835 systemd[1]: Started sshd@24-172.31.23.203:22-139.178.89.65:46002.service.
Dec 13 14:30:30.340607 sshd[4599]: Accepted publickey for core from 139.178.89.65 port 46002 ssh2: RSA SHA256:kjZzhLCfrUb6HP3VZI7nfxYjuxqu9bKyQNrCGPkPDkk
Dec 13 14:30:30.343117 sshd[4599]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:30:30.351096 systemd[1]: Started session-25.scope.
Dec 13 14:30:30.352189 systemd-logind[1813]: New session 25 of user core.
Dec 13 14:30:32.725749 systemd[1]: run-containerd-runc-k8s.io-22866a7bc83f3387285696d9473f201fc35086c2a4f9de11ff78f2186088ef72-runc.THdEXy.mount: Deactivated successfully.
Dec 13 14:30:32.744101 env[1828]: time="2024-12-13T14:30:32.741969437Z" level=info msg="StopContainer for \"29d64138db3021193dfacae08e5f50e8751337ae0054fc9053b1a45cef27040f\" with timeout 30 (s)"
Dec 13 14:30:32.746336 env[1828]: time="2024-12-13T14:30:32.746289081Z" level=info msg="Stop container \"29d64138db3021193dfacae08e5f50e8751337ae0054fc9053b1a45cef27040f\" with signal terminated"
Dec 13 14:30:32.787383 env[1828]: time="2024-12-13T14:30:32.787219006Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Dec 13 14:30:32.796932 env[1828]: time="2024-12-13T14:30:32.796881776Z" level=info msg="StopContainer for \"22866a7bc83f3387285696d9473f201fc35086c2a4f9de11ff78f2186088ef72\" with timeout 2 (s)"
Dec 13 14:30:32.797463 env[1828]: time="2024-12-13T14:30:32.797432999Z" level=info msg="Stop container \"22866a7bc83f3387285696d9473f201fc35086c2a4f9de11ff78f2186088ef72\" with signal terminated"
Dec 13 14:30:32.802883 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-29d64138db3021193dfacae08e5f50e8751337ae0054fc9053b1a45cef27040f-rootfs.mount: Deactivated successfully.
Dec 13 14:30:32.814115 systemd-networkd[1507]: lxc_health: Link DOWN
Dec 13 14:30:32.814123 systemd-networkd[1507]: lxc_health: Lost carrier
Dec 13 14:30:32.949944 env[1828]: time="2024-12-13T14:30:32.947426040Z" level=info msg="shim disconnected" id=29d64138db3021193dfacae08e5f50e8751337ae0054fc9053b1a45cef27040f
Dec 13 14:30:32.949944 env[1828]: time="2024-12-13T14:30:32.947487892Z" level=warning msg="cleaning up after shim disconnected" id=29d64138db3021193dfacae08e5f50e8751337ae0054fc9053b1a45cef27040f namespace=k8s.io
Dec 13 14:30:32.949944 env[1828]: time="2024-12-13T14:30:32.947499523Z" level=info msg="cleaning up dead shim"
Dec 13 14:30:32.963903 env[1828]: time="2024-12-13T14:30:32.963858723Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:30:32Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4656 runtime=io.containerd.runc.v2\n"
Dec 13 14:30:32.969731 env[1828]: time="2024-12-13T14:30:32.969683755Z" level=info msg="StopContainer for \"29d64138db3021193dfacae08e5f50e8751337ae0054fc9053b1a45cef27040f\" returns successfully"
Dec 13 14:30:32.970401 env[1828]: time="2024-12-13T14:30:32.970363500Z" level=info msg="StopPodSandbox for \"e8c8838c3813b01ca6e89f0326bf0dea492561e14857c28ac4750091aed8c635\""
Dec 13 14:30:32.970529 env[1828]: time="2024-12-13T14:30:32.970450628Z" level=info msg="Container to stop \"29d64138db3021193dfacae08e5f50e8751337ae0054fc9053b1a45cef27040f\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 14:30:32.973569 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e8c8838c3813b01ca6e89f0326bf0dea492561e14857c28ac4750091aed8c635-shm.mount: Deactivated successfully.
Dec 13 14:30:33.002600 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-22866a7bc83f3387285696d9473f201fc35086c2a4f9de11ff78f2186088ef72-rootfs.mount: Deactivated successfully.
Dec 13 14:30:33.021878 env[1828]: time="2024-12-13T14:30:33.021823313Z" level=info msg="shim disconnected" id=22866a7bc83f3387285696d9473f201fc35086c2a4f9de11ff78f2186088ef72
Dec 13 14:30:33.021878 env[1828]: time="2024-12-13T14:30:33.021875743Z" level=warning msg="cleaning up after shim disconnected" id=22866a7bc83f3387285696d9473f201fc35086c2a4f9de11ff78f2186088ef72 namespace=k8s.io
Dec 13 14:30:33.022321 env[1828]: time="2024-12-13T14:30:33.021888003Z" level=info msg="cleaning up dead shim"
Dec 13 14:30:33.045529 env[1828]: time="2024-12-13T14:30:33.045464050Z" level=info msg="shim disconnected" id=e8c8838c3813b01ca6e89f0326bf0dea492561e14857c28ac4750091aed8c635
Dec 13 14:30:33.045529 env[1828]: time="2024-12-13T14:30:33.045526876Z" level=warning msg="cleaning up after shim disconnected" id=e8c8838c3813b01ca6e89f0326bf0dea492561e14857c28ac4750091aed8c635 namespace=k8s.io
Dec 13 14:30:33.045529 env[1828]: time="2024-12-13T14:30:33.045539987Z" level=info msg="cleaning up dead shim"
Dec 13 14:30:33.053308 env[1828]: time="2024-12-13T14:30:33.053232606Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:30:33Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4703 runtime=io.containerd.runc.v2\n"
Dec 13 14:30:33.057820 env[1828]: time="2024-12-13T14:30:33.057763678Z" level=info msg="StopContainer for \"22866a7bc83f3387285696d9473f201fc35086c2a4f9de11ff78f2186088ef72\" returns successfully"
Dec 13 14:30:33.058617 env[1828]: time="2024-12-13T14:30:33.058514842Z" level=info msg="StopPodSandbox for \"af94b16621806d991f822f80b6e546d55f2a53f97054babf069f4a7319a1f2d4\""
Dec 13 14:30:33.058752 env[1828]: time="2024-12-13T14:30:33.058661071Z" level=info msg="Container to stop \"2a4fbed9b13a8bf236e80188dc8e1082583159ae808120b012994b97ca23bd30\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 14:30:33.058752 env[1828]: time="2024-12-13T14:30:33.058684589Z" level=info msg="Container to stop \"22866a7bc83f3387285696d9473f201fc35086c2a4f9de11ff78f2186088ef72\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 14:30:33.058752 env[1828]: time="2024-12-13T14:30:33.058701040Z" level=info msg="Container to stop \"947a1643f5065771cb097ba69c2a758448dabbd48c3156ea70bd68b48d109c4e\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 14:30:33.058752 env[1828]: time="2024-12-13T14:30:33.058721090Z" level=info msg="Container to stop \"ca191a1acb3823992f8bd1b4c2b3efb6d9baef0bb4d2dfc53a7245f0aa4f55b1\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 14:30:33.058752 env[1828]: time="2024-12-13T14:30:33.058735631Z" level=info msg="Container to stop \"4ccb926979084c501008f611deca1c34c7c2f1c11bafd730f67024e685b9303a\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 14:30:33.065833 env[1828]: time="2024-12-13T14:30:33.065782493Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:30:33Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4716 runtime=io.containerd.runc.v2\n"
Dec 13 14:30:33.066292 env[1828]: time="2024-12-13T14:30:33.066187169Z" level=info msg="TearDown network for sandbox \"e8c8838c3813b01ca6e89f0326bf0dea492561e14857c28ac4750091aed8c635\" successfully"
Dec 13 14:30:33.066390 env[1828]: time="2024-12-13T14:30:33.066293487Z" level=info msg="StopPodSandbox for \"e8c8838c3813b01ca6e89f0326bf0dea492561e14857c28ac4750091aed8c635\" returns successfully"
Dec 13 14:30:33.113809 env[1828]: time="2024-12-13T14:30:33.113759776Z" level=info msg="shim disconnected" id=af94b16621806d991f822f80b6e546d55f2a53f97054babf069f4a7319a1f2d4
Dec 13 14:30:33.114290 env[1828]: time="2024-12-13T14:30:33.114074484Z" level=warning msg="cleaning up after shim disconnected" id=af94b16621806d991f822f80b6e546d55f2a53f97054babf069f4a7319a1f2d4 namespace=k8s.io
Dec 13 14:30:33.114290 env[1828]: time="2024-12-13T14:30:33.114095049Z" level=info msg="cleaning up dead shim"
Dec 13 14:30:33.125039 env[1828]: time="2024-12-13T14:30:33.124892563Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:30:33Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4752 runtime=io.containerd.runc.v2\n"
Dec 13 14:30:33.125381 env[1828]: time="2024-12-13T14:30:33.125351250Z" level=info msg="TearDown network for sandbox \"af94b16621806d991f822f80b6e546d55f2a53f97054babf069f4a7319a1f2d4\" successfully"
Dec 13 14:30:33.125472 env[1828]: time="2024-12-13T14:30:33.125378987Z" level=info msg="StopPodSandbox for \"af94b16621806d991f822f80b6e546d55f2a53f97054babf069f4a7319a1f2d4\" returns successfully"
Dec 13 14:30:33.160346 kubelet[3022]: I1213 14:30:33.160312 3022 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jtfx5\" (UniqueName: \"kubernetes.io/projected/b5a2ab86-e5b8-47b0-9f77-5077add6b195-kube-api-access-jtfx5\") pod \"b5a2ab86-e5b8-47b0-9f77-5077add6b195\" (UID: \"b5a2ab86-e5b8-47b0-9f77-5077add6b195\") "
Dec 13 14:30:33.160346 kubelet[3022]: I1213 14:30:33.160358 3022 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b5a2ab86-e5b8-47b0-9f77-5077add6b195-cni-path\") pod \"b5a2ab86-e5b8-47b0-9f77-5077add6b195\" (UID: \"b5a2ab86-e5b8-47b0-9f77-5077add6b195\") "
Dec 13 14:30:33.161024 kubelet[3022]: I1213 14:30:33.160383 3022 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b5a2ab86-e5b8-47b0-9f77-5077add6b195-lib-modules\") pod \"b5a2ab86-e5b8-47b0-9f77-5077add6b195\" (UID: \"b5a2ab86-e5b8-47b0-9f77-5077add6b195\") "
Dec 13 14:30:33.161024 kubelet[3022]: I1213 14:30:33.160407 3022 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b5a2ab86-e5b8-47b0-9f77-5077add6b195-etc-cni-netd\") pod \"b5a2ab86-e5b8-47b0-9f77-5077add6b195\" (UID: \"b5a2ab86-e5b8-47b0-9f77-5077add6b195\") "
Dec 13 14:30:33.161024 kubelet[3022]: I1213 14:30:33.160432 3022 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b5a2ab86-e5b8-47b0-9f77-5077add6b195-host-proc-sys-kernel\") pod \"b5a2ab86-e5b8-47b0-9f77-5077add6b195\" (UID: \"b5a2ab86-e5b8-47b0-9f77-5077add6b195\") "
Dec 13 14:30:33.161024 kubelet[3022]: I1213 14:30:33.160465 3022 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/de73ea01-7505-4165-a4f5-18a5c3f70754-cilium-config-path\") pod \"de73ea01-7505-4165-a4f5-18a5c3f70754\" (UID: \"de73ea01-7505-4165-a4f5-18a5c3f70754\") "
Dec 13 14:30:33.161024 kubelet[3022]: I1213 14:30:33.160496 3022 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b5a2ab86-e5b8-47b0-9f77-5077add6b195-clustermesh-secrets\") pod \"b5a2ab86-e5b8-47b0-9f77-5077add6b195\" (UID: \"b5a2ab86-e5b8-47b0-9f77-5077add6b195\") "
Dec 13 14:30:33.161024 kubelet[3022]: I1213 14:30:33.160608 3022 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b5a2ab86-e5b8-47b0-9f77-5077add6b195-xtables-lock\") pod \"b5a2ab86-e5b8-47b0-9f77-5077add6b195\" (UID: \"b5a2ab86-e5b8-47b0-9f77-5077add6b195\") "
Dec 13 14:30:33.161254 kubelet[3022]: I1213 14:30:33.160640 3022 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b5a2ab86-e5b8-47b0-9f77-5077add6b195-hostproc\") pod \"b5a2ab86-e5b8-47b0-9f77-5077add6b195\" (UID: \"b5a2ab86-e5b8-47b0-9f77-5077add6b195\") "
Dec 13 14:30:33.161254 kubelet[3022]: I1213 14:30:33.160671 3022 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b5a2ab86-e5b8-47b0-9f77-5077add6b195-hubble-tls\") pod \"b5a2ab86-e5b8-47b0-9f77-5077add6b195\" (UID: \"b5a2ab86-e5b8-47b0-9f77-5077add6b195\") "
Dec 13 14:30:33.161254 kubelet[3022]: I1213 14:30:33.160727 3022 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b5a2ab86-e5b8-47b0-9f77-5077add6b195-cilium-config-path\") pod \"b5a2ab86-e5b8-47b0-9f77-5077add6b195\" (UID: \"b5a2ab86-e5b8-47b0-9f77-5077add6b195\") "
Dec 13 14:30:33.161254 kubelet[3022]: I1213 14:30:33.160755 3022 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b5a2ab86-e5b8-47b0-9f77-5077add6b195-cilium-cgroup\") pod \"b5a2ab86-e5b8-47b0-9f77-5077add6b195\" (UID: \"b5a2ab86-e5b8-47b0-9f77-5077add6b195\") "
Dec 13 14:30:33.161254 kubelet[3022]: I1213 14:30:33.160782 3022 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b5a2ab86-e5b8-47b0-9f77-5077add6b195-bpf-maps\") pod \"b5a2ab86-e5b8-47b0-9f77-5077add6b195\" (UID: \"b5a2ab86-e5b8-47b0-9f77-5077add6b195\") "
Dec 13 14:30:33.161254 kubelet[3022]: I1213 14:30:33.160809 3022 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b5a2ab86-e5b8-47b0-9f77-5077add6b195-host-proc-sys-net\") pod \"b5a2ab86-e5b8-47b0-9f77-5077add6b195\" (UID: \"b5a2ab86-e5b8-47b0-9f77-5077add6b195\") "
Dec 13 14:30:33.161525 kubelet[3022]: I1213 14:30:33.160834 3022 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b5a2ab86-e5b8-47b0-9f77-5077add6b195-cilium-run\") pod \"b5a2ab86-e5b8-47b0-9f77-5077add6b195\" (UID: \"b5a2ab86-e5b8-47b0-9f77-5077add6b195\") "
Dec 13 14:30:33.161525 kubelet[3022]: I1213 14:30:33.160859 3022 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dxl95\" (UniqueName: \"kubernetes.io/projected/de73ea01-7505-4165-a4f5-18a5c3f70754-kube-api-access-dxl95\") pod \"de73ea01-7505-4165-a4f5-18a5c3f70754\" (UID: \"de73ea01-7505-4165-a4f5-18a5c3f70754\") "
Dec 13 14:30:33.163226 kubelet[3022]: I1213 14:30:33.162034 3022 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b5a2ab86-e5b8-47b0-9f77-5077add6b195-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "b5a2ab86-e5b8-47b0-9f77-5077add6b195" (UID: "b5a2ab86-e5b8-47b0-9f77-5077add6b195"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:30:33.163352 kubelet[3022]: I1213 14:30:33.163276 3022 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b5a2ab86-e5b8-47b0-9f77-5077add6b195-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "b5a2ab86-e5b8-47b0-9f77-5077add6b195" (UID: "b5a2ab86-e5b8-47b0-9f77-5077add6b195"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:30:33.163352 kubelet[3022]: I1213 14:30:33.163305 3022 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b5a2ab86-e5b8-47b0-9f77-5077add6b195-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "b5a2ab86-e5b8-47b0-9f77-5077add6b195" (UID: "b5a2ab86-e5b8-47b0-9f77-5077add6b195"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:30:33.163352 kubelet[3022]: I1213 14:30:33.163326 3022 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b5a2ab86-e5b8-47b0-9f77-5077add6b195-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "b5a2ab86-e5b8-47b0-9f77-5077add6b195" (UID: "b5a2ab86-e5b8-47b0-9f77-5077add6b195"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:30:33.163672 kubelet[3022]: I1213 14:30:33.163648 3022 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b5a2ab86-e5b8-47b0-9f77-5077add6b195-hostproc" (OuterVolumeSpecName: "hostproc") pod "b5a2ab86-e5b8-47b0-9f77-5077add6b195" (UID: "b5a2ab86-e5b8-47b0-9f77-5077add6b195"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:30:33.163741 kubelet[3022]: I1213 14:30:33.163677 3022 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b5a2ab86-e5b8-47b0-9f77-5077add6b195-cni-path" (OuterVolumeSpecName: "cni-path") pod "b5a2ab86-e5b8-47b0-9f77-5077add6b195" (UID: "b5a2ab86-e5b8-47b0-9f77-5077add6b195"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:30:33.164922 kubelet[3022]: I1213 14:30:33.161599 3022 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b5a2ab86-e5b8-47b0-9f77-5077add6b195-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "b5a2ab86-e5b8-47b0-9f77-5077add6b195" (UID: "b5a2ab86-e5b8-47b0-9f77-5077add6b195"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:30:33.165240 kubelet[3022]: I1213 14:30:33.164946 3022 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b5a2ab86-e5b8-47b0-9f77-5077add6b195-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "b5a2ab86-e5b8-47b0-9f77-5077add6b195" (UID: "b5a2ab86-e5b8-47b0-9f77-5077add6b195"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:30:33.165240 kubelet[3022]: I1213 14:30:33.164973 3022 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b5a2ab86-e5b8-47b0-9f77-5077add6b195-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "b5a2ab86-e5b8-47b0-9f77-5077add6b195" (UID: "b5a2ab86-e5b8-47b0-9f77-5077add6b195"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:30:33.165240 kubelet[3022]: I1213 14:30:33.165111 3022 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b5a2ab86-e5b8-47b0-9f77-5077add6b195-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "b5a2ab86-e5b8-47b0-9f77-5077add6b195" (UID: "b5a2ab86-e5b8-47b0-9f77-5077add6b195"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:30:33.171275 kubelet[3022]: I1213 14:30:33.171232 3022 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de73ea01-7505-4165-a4f5-18a5c3f70754-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "de73ea01-7505-4165-a4f5-18a5c3f70754" (UID: "de73ea01-7505-4165-a4f5-18a5c3f70754"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 13 14:30:33.174114 kubelet[3022]: I1213 14:30:33.174066 3022 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b5a2ab86-e5b8-47b0-9f77-5077add6b195-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "b5a2ab86-e5b8-47b0-9f77-5077add6b195" (UID: "b5a2ab86-e5b8-47b0-9f77-5077add6b195"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 13 14:30:33.174307 kubelet[3022]: I1213 14:30:33.174242 3022 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b5a2ab86-e5b8-47b0-9f77-5077add6b195-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "b5a2ab86-e5b8-47b0-9f77-5077add6b195" (UID: "b5a2ab86-e5b8-47b0-9f77-5077add6b195"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 14:30:33.177552 kubelet[3022]: I1213 14:30:33.177518 3022 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5a2ab86-e5b8-47b0-9f77-5077add6b195-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "b5a2ab86-e5b8-47b0-9f77-5077add6b195" (UID: "b5a2ab86-e5b8-47b0-9f77-5077add6b195"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 13 14:30:33.178403 kubelet[3022]: I1213 14:30:33.178367 3022 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b5a2ab86-e5b8-47b0-9f77-5077add6b195-kube-api-access-jtfx5" (OuterVolumeSpecName: "kube-api-access-jtfx5") pod "b5a2ab86-e5b8-47b0-9f77-5077add6b195" (UID: "b5a2ab86-e5b8-47b0-9f77-5077add6b195"). InnerVolumeSpecName "kube-api-access-jtfx5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 14:30:33.180924 kubelet[3022]: I1213 14:30:33.180891 3022 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de73ea01-7505-4165-a4f5-18a5c3f70754-kube-api-access-dxl95" (OuterVolumeSpecName: "kube-api-access-dxl95") pod "de73ea01-7505-4165-a4f5-18a5c3f70754" (UID: "de73ea01-7505-4165-a4f5-18a5c3f70754"). InnerVolumeSpecName "kube-api-access-dxl95".
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 14:30:33.261224 kubelet[3022]: I1213 14:30:33.261116 3022 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b5a2ab86-e5b8-47b0-9f77-5077add6b195-host-proc-sys-kernel\") on node \"ip-172-31-23-203\" DevicePath \"\"" Dec 13 14:30:33.261665 kubelet[3022]: I1213 14:30:33.261597 3022 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/de73ea01-7505-4165-a4f5-18a5c3f70754-cilium-config-path\") on node \"ip-172-31-23-203\" DevicePath \"\"" Dec 13 14:30:33.261803 kubelet[3022]: I1213 14:30:33.261787 3022 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b5a2ab86-e5b8-47b0-9f77-5077add6b195-clustermesh-secrets\") on node \"ip-172-31-23-203\" DevicePath \"\"" Dec 13 14:30:33.261884 kubelet[3022]: I1213 14:30:33.261876 3022 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b5a2ab86-e5b8-47b0-9f77-5077add6b195-hostproc\") on node \"ip-172-31-23-203\" DevicePath \"\"" Dec 13 14:30:33.261963 kubelet[3022]: I1213 14:30:33.261956 3022 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b5a2ab86-e5b8-47b0-9f77-5077add6b195-hubble-tls\") on node \"ip-172-31-23-203\" DevicePath \"\"" Dec 13 14:30:33.262062 kubelet[3022]: I1213 14:30:33.262054 3022 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b5a2ab86-e5b8-47b0-9f77-5077add6b195-cilium-config-path\") on node \"ip-172-31-23-203\" DevicePath \"\"" Dec 13 14:30:33.262150 kubelet[3022]: I1213 14:30:33.262143 3022 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b5a2ab86-e5b8-47b0-9f77-5077add6b195-xtables-lock\") on node \"ip-172-31-23-203\" 
DevicePath \"\"" Dec 13 14:30:33.262225 kubelet[3022]: I1213 14:30:33.262217 3022 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b5a2ab86-e5b8-47b0-9f77-5077add6b195-bpf-maps\") on node \"ip-172-31-23-203\" DevicePath \"\"" Dec 13 14:30:33.262301 kubelet[3022]: I1213 14:30:33.262294 3022 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b5a2ab86-e5b8-47b0-9f77-5077add6b195-host-proc-sys-net\") on node \"ip-172-31-23-203\" DevicePath \"\"" Dec 13 14:30:33.262373 kubelet[3022]: I1213 14:30:33.262366 3022 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b5a2ab86-e5b8-47b0-9f77-5077add6b195-cilium-run\") on node \"ip-172-31-23-203\" DevicePath \"\"" Dec 13 14:30:33.263112 kubelet[3022]: I1213 14:30:33.262504 3022 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-dxl95\" (UniqueName: \"kubernetes.io/projected/de73ea01-7505-4165-a4f5-18a5c3f70754-kube-api-access-dxl95\") on node \"ip-172-31-23-203\" DevicePath \"\"" Dec 13 14:30:33.263112 kubelet[3022]: I1213 14:30:33.262524 3022 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b5a2ab86-e5b8-47b0-9f77-5077add6b195-cilium-cgroup\") on node \"ip-172-31-23-203\" DevicePath \"\"" Dec 13 14:30:33.263112 kubelet[3022]: I1213 14:30:33.262543 3022 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b5a2ab86-e5b8-47b0-9f77-5077add6b195-cni-path\") on node \"ip-172-31-23-203\" DevicePath \"\"" Dec 13 14:30:33.263112 kubelet[3022]: I1213 14:30:33.262589 3022 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b5a2ab86-e5b8-47b0-9f77-5077add6b195-lib-modules\") on node \"ip-172-31-23-203\" DevicePath \"\"" Dec 13 14:30:33.263112 kubelet[3022]: I1213 
14:30:33.262606 3022 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-jtfx5\" (UniqueName: \"kubernetes.io/projected/b5a2ab86-e5b8-47b0-9f77-5077add6b195-kube-api-access-jtfx5\") on node \"ip-172-31-23-203\" DevicePath \"\"" Dec 13 14:30:33.263112 kubelet[3022]: I1213 14:30:33.262621 3022 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b5a2ab86-e5b8-47b0-9f77-5077add6b195-etc-cni-netd\") on node \"ip-172-31-23-203\" DevicePath \"\"" Dec 13 14:30:33.528800 kubelet[3022]: I1213 14:30:33.528672 3022 scope.go:117] "RemoveContainer" containerID="29d64138db3021193dfacae08e5f50e8751337ae0054fc9053b1a45cef27040f" Dec 13 14:30:33.544020 env[1828]: time="2024-12-13T14:30:33.543929170Z" level=info msg="RemoveContainer for \"29d64138db3021193dfacae08e5f50e8751337ae0054fc9053b1a45cef27040f\"" Dec 13 14:30:33.564094 env[1828]: time="2024-12-13T14:30:33.564046890Z" level=info msg="RemoveContainer for \"29d64138db3021193dfacae08e5f50e8751337ae0054fc9053b1a45cef27040f\" returns successfully" Dec 13 14:30:33.566383 kubelet[3022]: I1213 14:30:33.566357 3022 scope.go:117] "RemoveContainer" containerID="29d64138db3021193dfacae08e5f50e8751337ae0054fc9053b1a45cef27040f" Dec 13 14:30:33.567739 env[1828]: time="2024-12-13T14:30:33.567650366Z" level=error msg="ContainerStatus for \"29d64138db3021193dfacae08e5f50e8751337ae0054fc9053b1a45cef27040f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"29d64138db3021193dfacae08e5f50e8751337ae0054fc9053b1a45cef27040f\": not found" Dec 13 14:30:33.572431 kubelet[3022]: E1213 14:30:33.572394 3022 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"29d64138db3021193dfacae08e5f50e8751337ae0054fc9053b1a45cef27040f\": not found" containerID="29d64138db3021193dfacae08e5f50e8751337ae0054fc9053b1a45cef27040f" Dec 13 
14:30:33.587206 kubelet[3022]: I1213 14:30:33.587175 3022 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"29d64138db3021193dfacae08e5f50e8751337ae0054fc9053b1a45cef27040f"} err="failed to get container status \"29d64138db3021193dfacae08e5f50e8751337ae0054fc9053b1a45cef27040f\": rpc error: code = NotFound desc = an error occurred when try to find container \"29d64138db3021193dfacae08e5f50e8751337ae0054fc9053b1a45cef27040f\": not found" Dec 13 14:30:33.587459 kubelet[3022]: I1213 14:30:33.587442 3022 scope.go:117] "RemoveContainer" containerID="22866a7bc83f3387285696d9473f201fc35086c2a4f9de11ff78f2186088ef72" Dec 13 14:30:33.589790 env[1828]: time="2024-12-13T14:30:33.589438734Z" level=info msg="RemoveContainer for \"22866a7bc83f3387285696d9473f201fc35086c2a4f9de11ff78f2186088ef72\"" Dec 13 14:30:33.598058 env[1828]: time="2024-12-13T14:30:33.598010785Z" level=info msg="RemoveContainer for \"22866a7bc83f3387285696d9473f201fc35086c2a4f9de11ff78f2186088ef72\" returns successfully" Dec 13 14:30:33.598388 kubelet[3022]: I1213 14:30:33.598346 3022 scope.go:117] "RemoveContainer" containerID="2a4fbed9b13a8bf236e80188dc8e1082583159ae808120b012994b97ca23bd30" Dec 13 14:30:33.602069 env[1828]: time="2024-12-13T14:30:33.602032547Z" level=info msg="RemoveContainer for \"2a4fbed9b13a8bf236e80188dc8e1082583159ae808120b012994b97ca23bd30\"" Dec 13 14:30:33.610893 env[1828]: time="2024-12-13T14:30:33.610846328Z" level=info msg="RemoveContainer for \"2a4fbed9b13a8bf236e80188dc8e1082583159ae808120b012994b97ca23bd30\" returns successfully" Dec 13 14:30:33.611146 kubelet[3022]: I1213 14:30:33.611120 3022 scope.go:117] "RemoveContainer" containerID="4ccb926979084c501008f611deca1c34c7c2f1c11bafd730f67024e685b9303a" Dec 13 14:30:33.612409 env[1828]: time="2024-12-13T14:30:33.612375111Z" level=info msg="RemoveContainer for \"4ccb926979084c501008f611deca1c34c7c2f1c11bafd730f67024e685b9303a\"" Dec 13 14:30:33.618310 env[1828]: 
time="2024-12-13T14:30:33.618265224Z" level=info msg="RemoveContainer for \"4ccb926979084c501008f611deca1c34c7c2f1c11bafd730f67024e685b9303a\" returns successfully" Dec 13 14:30:33.618617 kubelet[3022]: I1213 14:30:33.618589 3022 scope.go:117] "RemoveContainer" containerID="ca191a1acb3823992f8bd1b4c2b3efb6d9baef0bb4d2dfc53a7245f0aa4f55b1" Dec 13 14:30:33.619926 env[1828]: time="2024-12-13T14:30:33.619883167Z" level=info msg="RemoveContainer for \"ca191a1acb3823992f8bd1b4c2b3efb6d9baef0bb4d2dfc53a7245f0aa4f55b1\"" Dec 13 14:30:33.630551 env[1828]: time="2024-12-13T14:30:33.630501145Z" level=info msg="RemoveContainer for \"ca191a1acb3823992f8bd1b4c2b3efb6d9baef0bb4d2dfc53a7245f0aa4f55b1\" returns successfully" Dec 13 14:30:33.632263 kubelet[3022]: I1213 14:30:33.632225 3022 scope.go:117] "RemoveContainer" containerID="947a1643f5065771cb097ba69c2a758448dabbd48c3156ea70bd68b48d109c4e" Dec 13 14:30:33.633881 env[1828]: time="2024-12-13T14:30:33.633847449Z" level=info msg="RemoveContainer for \"947a1643f5065771cb097ba69c2a758448dabbd48c3156ea70bd68b48d109c4e\"" Dec 13 14:30:33.639869 env[1828]: time="2024-12-13T14:30:33.639821566Z" level=info msg="RemoveContainer for \"947a1643f5065771cb097ba69c2a758448dabbd48c3156ea70bd68b48d109c4e\" returns successfully" Dec 13 14:30:33.640254 kubelet[3022]: I1213 14:30:33.640225 3022 scope.go:117] "RemoveContainer" containerID="22866a7bc83f3387285696d9473f201fc35086c2a4f9de11ff78f2186088ef72" Dec 13 14:30:33.640659 env[1828]: time="2024-12-13T14:30:33.640591125Z" level=error msg="ContainerStatus for \"22866a7bc83f3387285696d9473f201fc35086c2a4f9de11ff78f2186088ef72\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"22866a7bc83f3387285696d9473f201fc35086c2a4f9de11ff78f2186088ef72\": not found" Dec 13 14:30:33.640846 kubelet[3022]: E1213 14:30:33.640830 3022 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try 
to find container \"22866a7bc83f3387285696d9473f201fc35086c2a4f9de11ff78f2186088ef72\": not found" containerID="22866a7bc83f3387285696d9473f201fc35086c2a4f9de11ff78f2186088ef72" Dec 13 14:30:33.640956 kubelet[3022]: I1213 14:30:33.640939 3022 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"22866a7bc83f3387285696d9473f201fc35086c2a4f9de11ff78f2186088ef72"} err="failed to get container status \"22866a7bc83f3387285696d9473f201fc35086c2a4f9de11ff78f2186088ef72\": rpc error: code = NotFound desc = an error occurred when try to find container \"22866a7bc83f3387285696d9473f201fc35086c2a4f9de11ff78f2186088ef72\": not found" Dec 13 14:30:33.641057 kubelet[3022]: I1213 14:30:33.640959 3022 scope.go:117] "RemoveContainer" containerID="2a4fbed9b13a8bf236e80188dc8e1082583159ae808120b012994b97ca23bd30" Dec 13 14:30:33.641290 env[1828]: time="2024-12-13T14:30:33.641213910Z" level=error msg="ContainerStatus for \"2a4fbed9b13a8bf236e80188dc8e1082583159ae808120b012994b97ca23bd30\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2a4fbed9b13a8bf236e80188dc8e1082583159ae808120b012994b97ca23bd30\": not found" Dec 13 14:30:33.641479 kubelet[3022]: E1213 14:30:33.641454 3022 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2a4fbed9b13a8bf236e80188dc8e1082583159ae808120b012994b97ca23bd30\": not found" containerID="2a4fbed9b13a8bf236e80188dc8e1082583159ae808120b012994b97ca23bd30" Dec 13 14:30:33.641547 kubelet[3022]: I1213 14:30:33.641497 3022 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2a4fbed9b13a8bf236e80188dc8e1082583159ae808120b012994b97ca23bd30"} err="failed to get container status \"2a4fbed9b13a8bf236e80188dc8e1082583159ae808120b012994b97ca23bd30\": rpc error: code = NotFound desc = an error occurred when try to find 
container \"2a4fbed9b13a8bf236e80188dc8e1082583159ae808120b012994b97ca23bd30\": not found" Dec 13 14:30:33.641547 kubelet[3022]: I1213 14:30:33.641510 3022 scope.go:117] "RemoveContainer" containerID="4ccb926979084c501008f611deca1c34c7c2f1c11bafd730f67024e685b9303a" Dec 13 14:30:33.641754 env[1828]: time="2024-12-13T14:30:33.641705978Z" level=error msg="ContainerStatus for \"4ccb926979084c501008f611deca1c34c7c2f1c11bafd730f67024e685b9303a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4ccb926979084c501008f611deca1c34c7c2f1c11bafd730f67024e685b9303a\": not found" Dec 13 14:30:33.641866 kubelet[3022]: E1213 14:30:33.641845 3022 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4ccb926979084c501008f611deca1c34c7c2f1c11bafd730f67024e685b9303a\": not found" containerID="4ccb926979084c501008f611deca1c34c7c2f1c11bafd730f67024e685b9303a" Dec 13 14:30:33.641936 kubelet[3022]: I1213 14:30:33.641882 3022 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4ccb926979084c501008f611deca1c34c7c2f1c11bafd730f67024e685b9303a"} err="failed to get container status \"4ccb926979084c501008f611deca1c34c7c2f1c11bafd730f67024e685b9303a\": rpc error: code = NotFound desc = an error occurred when try to find container \"4ccb926979084c501008f611deca1c34c7c2f1c11bafd730f67024e685b9303a\": not found" Dec 13 14:30:33.641936 kubelet[3022]: I1213 14:30:33.641898 3022 scope.go:117] "RemoveContainer" containerID="ca191a1acb3823992f8bd1b4c2b3efb6d9baef0bb4d2dfc53a7245f0aa4f55b1" Dec 13 14:30:33.642143 env[1828]: time="2024-12-13T14:30:33.642095841Z" level=error msg="ContainerStatus for \"ca191a1acb3823992f8bd1b4c2b3efb6d9baef0bb4d2dfc53a7245f0aa4f55b1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"ca191a1acb3823992f8bd1b4c2b3efb6d9baef0bb4d2dfc53a7245f0aa4f55b1\": not found" Dec 13 14:30:33.642321 kubelet[3022]: E1213 14:30:33.642277 3022 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ca191a1acb3823992f8bd1b4c2b3efb6d9baef0bb4d2dfc53a7245f0aa4f55b1\": not found" containerID="ca191a1acb3823992f8bd1b4c2b3efb6d9baef0bb4d2dfc53a7245f0aa4f55b1" Dec 13 14:30:33.642321 kubelet[3022]: I1213 14:30:33.642309 3022 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ca191a1acb3823992f8bd1b4c2b3efb6d9baef0bb4d2dfc53a7245f0aa4f55b1"} err="failed to get container status \"ca191a1acb3823992f8bd1b4c2b3efb6d9baef0bb4d2dfc53a7245f0aa4f55b1\": rpc error: code = NotFound desc = an error occurred when try to find container \"ca191a1acb3823992f8bd1b4c2b3efb6d9baef0bb4d2dfc53a7245f0aa4f55b1\": not found" Dec 13 14:30:33.642321 kubelet[3022]: I1213 14:30:33.642322 3022 scope.go:117] "RemoveContainer" containerID="947a1643f5065771cb097ba69c2a758448dabbd48c3156ea70bd68b48d109c4e" Dec 13 14:30:33.642665 env[1828]: time="2024-12-13T14:30:33.642615280Z" level=error msg="ContainerStatus for \"947a1643f5065771cb097ba69c2a758448dabbd48c3156ea70bd68b48d109c4e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"947a1643f5065771cb097ba69c2a758448dabbd48c3156ea70bd68b48d109c4e\": not found" Dec 13 14:30:33.642828 kubelet[3022]: E1213 14:30:33.642800 3022 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"947a1643f5065771cb097ba69c2a758448dabbd48c3156ea70bd68b48d109c4e\": not found" containerID="947a1643f5065771cb097ba69c2a758448dabbd48c3156ea70bd68b48d109c4e" Dec 13 14:30:33.642895 kubelet[3022]: I1213 14:30:33.642846 3022 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"947a1643f5065771cb097ba69c2a758448dabbd48c3156ea70bd68b48d109c4e"} err="failed to get container status \"947a1643f5065771cb097ba69c2a758448dabbd48c3156ea70bd68b48d109c4e\": rpc error: code = NotFound desc = an error occurred when try to find container \"947a1643f5065771cb097ba69c2a758448dabbd48c3156ea70bd68b48d109c4e\": not found" Dec 13 14:30:33.712208 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e8c8838c3813b01ca6e89f0326bf0dea492561e14857c28ac4750091aed8c635-rootfs.mount: Deactivated successfully. Dec 13 14:30:33.712393 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-af94b16621806d991f822f80b6e546d55f2a53f97054babf069f4a7319a1f2d4-rootfs.mount: Deactivated successfully. Dec 13 14:30:33.712526 systemd[1]: var-lib-kubelet-pods-de73ea01\x2d7505\x2d4165\x2da4f5\x2d18a5c3f70754-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2ddxl95.mount: Deactivated successfully. Dec 13 14:30:33.712661 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-af94b16621806d991f822f80b6e546d55f2a53f97054babf069f4a7319a1f2d4-shm.mount: Deactivated successfully. Dec 13 14:30:33.712776 systemd[1]: var-lib-kubelet-pods-b5a2ab86\x2de5b8\x2d47b0\x2d9f77\x2d5077add6b195-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2djtfx5.mount: Deactivated successfully. Dec 13 14:30:33.712902 systemd[1]: var-lib-kubelet-pods-b5a2ab86\x2de5b8\x2d47b0\x2d9f77\x2d5077add6b195-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Dec 13 14:30:33.713057 systemd[1]: var-lib-kubelet-pods-b5a2ab86\x2de5b8\x2d47b0\x2d9f77\x2d5077add6b195-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Dec 13 14:30:34.545043 sshd[4599]: pam_unix(sshd:session): session closed for user core Dec 13 14:30:34.552633 systemd[1]: sshd@24-172.31.23.203:22-139.178.89.65:46002.service: Deactivated successfully. 
Dec 13 14:30:34.558377 systemd[1]: session-25.scope: Deactivated successfully. Dec 13 14:30:34.559423 systemd-logind[1813]: Session 25 logged out. Waiting for processes to exit. Dec 13 14:30:34.564554 systemd-logind[1813]: Removed session 25. Dec 13 14:30:34.570918 systemd[1]: Started sshd@25-172.31.23.203:22-139.178.89.65:46008.service. Dec 13 14:30:34.774574 sshd[4772]: Accepted publickey for core from 139.178.89.65 port 46008 ssh2: RSA SHA256:kjZzhLCfrUb6HP3VZI7nfxYjuxqu9bKyQNrCGPkPDkk Dec 13 14:30:34.776325 sshd[4772]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:30:34.785126 systemd-logind[1813]: New session 26 of user core. Dec 13 14:30:34.786526 systemd[1]: Started session-26.scope. Dec 13 14:30:34.995824 kubelet[3022]: I1213 14:30:34.995787 3022 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="b5a2ab86-e5b8-47b0-9f77-5077add6b195" path="/var/lib/kubelet/pods/b5a2ab86-e5b8-47b0-9f77-5077add6b195/volumes" Dec 13 14:30:34.999815 kubelet[3022]: I1213 14:30:34.999779 3022 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="de73ea01-7505-4165-a4f5-18a5c3f70754" path="/var/lib/kubelet/pods/de73ea01-7505-4165-a4f5-18a5c3f70754/volumes" Dec 13 14:30:36.004095 sshd[4772]: pam_unix(sshd:session): session closed for user core Dec 13 14:30:36.012507 systemd-logind[1813]: Session 26 logged out. Waiting for processes to exit. Dec 13 14:30:36.016242 systemd[1]: sshd@25-172.31.23.203:22-139.178.89.65:46008.service: Deactivated successfully. Dec 13 14:30:36.017685 systemd[1]: session-26.scope: Deactivated successfully. Dec 13 14:30:36.026230 systemd-logind[1813]: Removed session 26. Dec 13 14:30:36.038191 kubelet[3022]: I1213 14:30:36.038159 3022 topology_manager.go:215] "Topology Admit Handler" podUID="56376088-006a-4c71-b761-928e73197b5e" podNamespace="kube-system" podName="cilium-wln2m" Dec 13 14:30:36.039106 systemd[1]: Started sshd@26-172.31.23.203:22-139.178.89.65:46020.service. 
Dec 13 14:30:36.059768 kubelet[3022]: E1213 14:30:36.059727 3022 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b5a2ab86-e5b8-47b0-9f77-5077add6b195" containerName="mount-bpf-fs" Dec 13 14:30:36.060048 kubelet[3022]: E1213 14:30:36.060030 3022 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b5a2ab86-e5b8-47b0-9f77-5077add6b195" containerName="clean-cilium-state" Dec 13 14:30:36.060180 kubelet[3022]: E1213 14:30:36.060170 3022 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="de73ea01-7505-4165-a4f5-18a5c3f70754" containerName="cilium-operator" Dec 13 14:30:36.060345 kubelet[3022]: E1213 14:30:36.060254 3022 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b5a2ab86-e5b8-47b0-9f77-5077add6b195" containerName="cilium-agent" Dec 13 14:30:36.060450 kubelet[3022]: E1213 14:30:36.060440 3022 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b5a2ab86-e5b8-47b0-9f77-5077add6b195" containerName="mount-cgroup" Dec 13 14:30:36.060519 kubelet[3022]: E1213 14:30:36.060511 3022 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b5a2ab86-e5b8-47b0-9f77-5077add6b195" containerName="apply-sysctl-overwrites" Dec 13 14:30:36.071419 kubelet[3022]: I1213 14:30:36.071384 3022 memory_manager.go:354] "RemoveStaleState removing state" podUID="b5a2ab86-e5b8-47b0-9f77-5077add6b195" containerName="cilium-agent" Dec 13 14:30:36.071610 kubelet[3022]: I1213 14:30:36.071594 3022 memory_manager.go:354] "RemoveStaleState removing state" podUID="de73ea01-7505-4165-a4f5-18a5c3f70754" containerName="cilium-operator" Dec 13 14:30:36.191209 kubelet[3022]: I1213 14:30:36.191175 3022 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/56376088-006a-4c71-b761-928e73197b5e-etc-cni-netd\") pod \"cilium-wln2m\" (UID: \"56376088-006a-4c71-b761-928e73197b5e\") " pod="kube-system/cilium-wln2m" Dec 13 
14:30:36.192025 kubelet[3022]: I1213 14:30:36.191228 3022 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/56376088-006a-4c71-b761-928e73197b5e-xtables-lock\") pod \"cilium-wln2m\" (UID: \"56376088-006a-4c71-b761-928e73197b5e\") " pod="kube-system/cilium-wln2m" Dec 13 14:30:36.192025 kubelet[3022]: I1213 14:30:36.191256 3022 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/56376088-006a-4c71-b761-928e73197b5e-hubble-tls\") pod \"cilium-wln2m\" (UID: \"56376088-006a-4c71-b761-928e73197b5e\") " pod="kube-system/cilium-wln2m" Dec 13 14:30:36.192025 kubelet[3022]: I1213 14:30:36.191282 3022 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/56376088-006a-4c71-b761-928e73197b5e-host-proc-sys-kernel\") pod \"cilium-wln2m\" (UID: \"56376088-006a-4c71-b761-928e73197b5e\") " pod="kube-system/cilium-wln2m" Dec 13 14:30:36.192025 kubelet[3022]: I1213 14:30:36.191896 3022 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/56376088-006a-4c71-b761-928e73197b5e-cilium-ipsec-secrets\") pod \"cilium-wln2m\" (UID: \"56376088-006a-4c71-b761-928e73197b5e\") " pod="kube-system/cilium-wln2m" Dec 13 14:30:36.192025 kubelet[3022]: I1213 14:30:36.191954 3022 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2mjcv\" (UniqueName: \"kubernetes.io/projected/56376088-006a-4c71-b761-928e73197b5e-kube-api-access-2mjcv\") pod \"cilium-wln2m\" (UID: \"56376088-006a-4c71-b761-928e73197b5e\") " pod="kube-system/cilium-wln2m" Dec 13 14:30:36.192025 kubelet[3022]: I1213 14:30:36.191983 3022 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/56376088-006a-4c71-b761-928e73197b5e-cilium-run\") pod \"cilium-wln2m\" (UID: \"56376088-006a-4c71-b761-928e73197b5e\") " pod="kube-system/cilium-wln2m" Dec 13 14:30:36.192510 kubelet[3022]: I1213 14:30:36.192030 3022 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/56376088-006a-4c71-b761-928e73197b5e-bpf-maps\") pod \"cilium-wln2m\" (UID: \"56376088-006a-4c71-b761-928e73197b5e\") " pod="kube-system/cilium-wln2m" Dec 13 14:30:36.192510 kubelet[3022]: I1213 14:30:36.192089 3022 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/56376088-006a-4c71-b761-928e73197b5e-cni-path\") pod \"cilium-wln2m\" (UID: \"56376088-006a-4c71-b761-928e73197b5e\") " pod="kube-system/cilium-wln2m" Dec 13 14:30:36.192510 kubelet[3022]: I1213 14:30:36.192205 3022 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/56376088-006a-4c71-b761-928e73197b5e-hostproc\") pod \"cilium-wln2m\" (UID: \"56376088-006a-4c71-b761-928e73197b5e\") " pod="kube-system/cilium-wln2m" Dec 13 14:30:36.192510 kubelet[3022]: I1213 14:30:36.192273 3022 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/56376088-006a-4c71-b761-928e73197b5e-cilium-config-path\") pod \"cilium-wln2m\" (UID: \"56376088-006a-4c71-b761-928e73197b5e\") " pod="kube-system/cilium-wln2m" Dec 13 14:30:36.192510 kubelet[3022]: I1213 14:30:36.192305 3022 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: 
\"kubernetes.io/host-path/56376088-006a-4c71-b761-928e73197b5e-host-proc-sys-net\") pod \"cilium-wln2m\" (UID: \"56376088-006a-4c71-b761-928e73197b5e\") " pod="kube-system/cilium-wln2m" Dec 13 14:30:36.192510 kubelet[3022]: I1213 14:30:36.192337 3022 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/56376088-006a-4c71-b761-928e73197b5e-cilium-cgroup\") pod \"cilium-wln2m\" (UID: \"56376088-006a-4c71-b761-928e73197b5e\") " pod="kube-system/cilium-wln2m" Dec 13 14:30:36.193626 kubelet[3022]: I1213 14:30:36.192368 3022 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/56376088-006a-4c71-b761-928e73197b5e-lib-modules\") pod \"cilium-wln2m\" (UID: \"56376088-006a-4c71-b761-928e73197b5e\") " pod="kube-system/cilium-wln2m" Dec 13 14:30:36.193626 kubelet[3022]: I1213 14:30:36.192398 3022 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/56376088-006a-4c71-b761-928e73197b5e-clustermesh-secrets\") pod \"cilium-wln2m\" (UID: \"56376088-006a-4c71-b761-928e73197b5e\") " pod="kube-system/cilium-wln2m" Dec 13 14:30:36.271593 sshd[4783]: Accepted publickey for core from 139.178.89.65 port 46020 ssh2: RSA SHA256:kjZzhLCfrUb6HP3VZI7nfxYjuxqu9bKyQNrCGPkPDkk Dec 13 14:30:36.273060 sshd[4783]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:30:36.281605 systemd[1]: Started session-27.scope. Dec 13 14:30:36.282301 systemd-logind[1813]: New session 27 of user core. 
Dec 13 14:30:36.401499 env[1828]: time="2024-12-13T14:30:36.401447623Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wln2m,Uid:56376088-006a-4c71-b761-928e73197b5e,Namespace:kube-system,Attempt:0,}"
Dec 13 14:30:36.449351 env[1828]: time="2024-12-13T14:30:36.449246163Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 14:30:36.449351 env[1828]: time="2024-12-13T14:30:36.449294122Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 14:30:36.449351 env[1828]: time="2024-12-13T14:30:36.449309060Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 14:30:36.449813 env[1828]: time="2024-12-13T14:30:36.449759274Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/607787f0808839998c26333577d700958032f99d5d08088828a62a6587b24f3c pid=4805 runtime=io.containerd.runc.v2
Dec 13 14:30:36.514269 env[1828]: time="2024-12-13T14:30:36.514223998Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wln2m,Uid:56376088-006a-4c71-b761-928e73197b5e,Namespace:kube-system,Attempt:0,} returns sandbox id \"607787f0808839998c26333577d700958032f99d5d08088828a62a6587b24f3c\""
Dec 13 14:30:36.519498 env[1828]: time="2024-12-13T14:30:36.518784141Z" level=info msg="CreateContainer within sandbox \"607787f0808839998c26333577d700958032f99d5d08088828a62a6587b24f3c\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Dec 13 14:30:36.563091 env[1828]: time="2024-12-13T14:30:36.552318021Z" level=info msg="CreateContainer within sandbox \"607787f0808839998c26333577d700958032f99d5d08088828a62a6587b24f3c\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"29911e82bc48758eed00d5e38b242343f905479090c58236804f89f47cca63f1\""
Dec 13 14:30:36.563091 env[1828]: time="2024-12-13T14:30:36.557168751Z" level=info msg="StartContainer for \"29911e82bc48758eed00d5e38b242343f905479090c58236804f89f47cca63f1\""
Dec 13 14:30:36.655607 sshd[4783]: pam_unix(sshd:session): session closed for user core
Dec 13 14:30:36.667631 systemd-logind[1813]: Session 27 logged out. Waiting for processes to exit.
Dec 13 14:30:36.670498 systemd[1]: sshd@26-172.31.23.203:22-139.178.89.65:46020.service: Deactivated successfully.
Dec 13 14:30:36.675667 systemd[1]: Started sshd@27-172.31.23.203:22-139.178.89.65:46032.service.
Dec 13 14:30:36.678747 systemd[1]: session-27.scope: Deactivated successfully.
Dec 13 14:30:36.696408 systemd-logind[1813]: Removed session 27.
Dec 13 14:30:36.731243 env[1828]: time="2024-12-13T14:30:36.731155331Z" level=info msg="StartContainer for \"29911e82bc48758eed00d5e38b242343f905479090c58236804f89f47cca63f1\" returns successfully"
Dec 13 14:30:36.812797 env[1828]: time="2024-12-13T14:30:36.812742943Z" level=info msg="shim disconnected" id=29911e82bc48758eed00d5e38b242343f905479090c58236804f89f47cca63f1
Dec 13 14:30:36.812797 env[1828]: time="2024-12-13T14:30:36.812796000Z" level=warning msg="cleaning up after shim disconnected" id=29911e82bc48758eed00d5e38b242343f905479090c58236804f89f47cca63f1 namespace=k8s.io
Dec 13 14:30:36.813458 env[1828]: time="2024-12-13T14:30:36.812807859Z" level=info msg="cleaning up dead shim"
Dec 13 14:30:36.830500 env[1828]: time="2024-12-13T14:30:36.830438026Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:30:36Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4893 runtime=io.containerd.runc.v2\n"
Dec 13 14:30:36.876302 sshd[4866]: Accepted publickey for core from 139.178.89.65 port 46032 ssh2: RSA SHA256:kjZzhLCfrUb6HP3VZI7nfxYjuxqu9bKyQNrCGPkPDkk
Dec 13 14:30:36.878252 sshd[4866]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:30:36.886116 systemd[1]: Started session-28.scope.
Dec 13 14:30:36.886968 systemd-logind[1813]: New session 28 of user core.
Dec 13 14:30:37.241167 kubelet[3022]: E1213 14:30:37.241124 3022 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Dec 13 14:30:37.599503 env[1828]: time="2024-12-13T14:30:37.599391956Z" level=info msg="StopPodSandbox for \"607787f0808839998c26333577d700958032f99d5d08088828a62a6587b24f3c\""
Dec 13 14:30:37.600894 env[1828]: time="2024-12-13T14:30:37.600846836Z" level=info msg="Container to stop \"29911e82bc48758eed00d5e38b242343f905479090c58236804f89f47cca63f1\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 14:30:37.606636 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-607787f0808839998c26333577d700958032f99d5d08088828a62a6587b24f3c-shm.mount: Deactivated successfully.
Dec 13 14:30:37.676082 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-607787f0808839998c26333577d700958032f99d5d08088828a62a6587b24f3c-rootfs.mount: Deactivated successfully.
Dec 13 14:30:37.697798 env[1828]: time="2024-12-13T14:30:37.697045269Z" level=info msg="shim disconnected" id=607787f0808839998c26333577d700958032f99d5d08088828a62a6587b24f3c
Dec 13 14:30:37.698772 env[1828]: time="2024-12-13T14:30:37.697800421Z" level=warning msg="cleaning up after shim disconnected" id=607787f0808839998c26333577d700958032f99d5d08088828a62a6587b24f3c namespace=k8s.io
Dec 13 14:30:37.698772 env[1828]: time="2024-12-13T14:30:37.698034171Z" level=info msg="cleaning up dead shim"
Dec 13 14:30:37.714850 env[1828]: time="2024-12-13T14:30:37.714795505Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:30:37Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4935 runtime=io.containerd.runc.v2\n"
Dec 13 14:30:37.715181 env[1828]: time="2024-12-13T14:30:37.715145676Z" level=info msg="TearDown network for sandbox \"607787f0808839998c26333577d700958032f99d5d08088828a62a6587b24f3c\" successfully"
Dec 13 14:30:37.715272 env[1828]: time="2024-12-13T14:30:37.715179375Z" level=info msg="StopPodSandbox for \"607787f0808839998c26333577d700958032f99d5d08088828a62a6587b24f3c\" returns successfully"
Dec 13 14:30:37.817628 kubelet[3022]: I1213 14:30:37.816675 3022 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/56376088-006a-4c71-b761-928e73197b5e-lib-modules\") pod \"56376088-006a-4c71-b761-928e73197b5e\" (UID: \"56376088-006a-4c71-b761-928e73197b5e\") "
Dec 13 14:30:37.817628 kubelet[3022]: I1213 14:30:37.816730 3022 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/56376088-006a-4c71-b761-928e73197b5e-host-proc-sys-kernel\") pod \"56376088-006a-4c71-b761-928e73197b5e\" (UID: \"56376088-006a-4c71-b761-928e73197b5e\") "
Dec 13 14:30:37.817628 kubelet[3022]: I1213 14:30:37.816759 3022 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/56376088-006a-4c71-b761-928e73197b5e-cilium-run\") pod \"56376088-006a-4c71-b761-928e73197b5e\" (UID: \"56376088-006a-4c71-b761-928e73197b5e\") "
Dec 13 14:30:37.817628 kubelet[3022]: I1213 14:30:37.816753 3022 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/56376088-006a-4c71-b761-928e73197b5e-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "56376088-006a-4c71-b761-928e73197b5e" (UID: "56376088-006a-4c71-b761-928e73197b5e"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:30:37.817628 kubelet[3022]: I1213 14:30:37.816792 3022 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/56376088-006a-4c71-b761-928e73197b5e-clustermesh-secrets\") pod \"56376088-006a-4c71-b761-928e73197b5e\" (UID: \"56376088-006a-4c71-b761-928e73197b5e\") "
Dec 13 14:30:37.819141 kubelet[3022]: I1213 14:30:37.816812 3022 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/56376088-006a-4c71-b761-928e73197b5e-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "56376088-006a-4c71-b761-928e73197b5e" (UID: "56376088-006a-4c71-b761-928e73197b5e"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:30:37.819141 kubelet[3022]: I1213 14:30:37.816824 3022 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/56376088-006a-4c71-b761-928e73197b5e-cilium-ipsec-secrets\") pod \"56376088-006a-4c71-b761-928e73197b5e\" (UID: \"56376088-006a-4c71-b761-928e73197b5e\") "
Dec 13 14:30:37.819141 kubelet[3022]: I1213 14:30:37.816853 3022 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/56376088-006a-4c71-b761-928e73197b5e-etc-cni-netd\") pod \"56376088-006a-4c71-b761-928e73197b5e\" (UID: \"56376088-006a-4c71-b761-928e73197b5e\") "
Dec 13 14:30:37.819141 kubelet[3022]: I1213 14:30:37.816838 3022 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/56376088-006a-4c71-b761-928e73197b5e-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "56376088-006a-4c71-b761-928e73197b5e" (UID: "56376088-006a-4c71-b761-928e73197b5e"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:30:37.819141 kubelet[3022]: I1213 14:30:37.816883 3022 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/56376088-006a-4c71-b761-928e73197b5e-hubble-tls\") pod \"56376088-006a-4c71-b761-928e73197b5e\" (UID: \"56376088-006a-4c71-b761-928e73197b5e\") "
Dec 13 14:30:37.820071 kubelet[3022]: I1213 14:30:37.816908 3022 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/56376088-006a-4c71-b761-928e73197b5e-bpf-maps\") pod \"56376088-006a-4c71-b761-928e73197b5e\" (UID: \"56376088-006a-4c71-b761-928e73197b5e\") "
Dec 13 14:30:37.820071 kubelet[3022]: I1213 14:30:37.816955 3022 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/56376088-006a-4c71-b761-928e73197b5e-cni-path\") pod \"56376088-006a-4c71-b761-928e73197b5e\" (UID: \"56376088-006a-4c71-b761-928e73197b5e\") "
Dec 13 14:30:37.820071 kubelet[3022]: I1213 14:30:37.816981 3022 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/56376088-006a-4c71-b761-928e73197b5e-host-proc-sys-net\") pod \"56376088-006a-4c71-b761-928e73197b5e\" (UID: \"56376088-006a-4c71-b761-928e73197b5e\") "
Dec 13 14:30:37.820071 kubelet[3022]: I1213 14:30:37.817032 3022 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/56376088-006a-4c71-b761-928e73197b5e-hostproc\") pod \"56376088-006a-4c71-b761-928e73197b5e\" (UID: \"56376088-006a-4c71-b761-928e73197b5e\") "
Dec 13 14:30:37.820071 kubelet[3022]: I1213 14:30:37.817058 3022 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/56376088-006a-4c71-b761-928e73197b5e-xtables-lock\") pod \"56376088-006a-4c71-b761-928e73197b5e\" (UID: \"56376088-006a-4c71-b761-928e73197b5e\") "
Dec 13 14:30:37.820071 kubelet[3022]: I1213 14:30:37.817088 3022 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2mjcv\" (UniqueName: \"kubernetes.io/projected/56376088-006a-4c71-b761-928e73197b5e-kube-api-access-2mjcv\") pod \"56376088-006a-4c71-b761-928e73197b5e\" (UID: \"56376088-006a-4c71-b761-928e73197b5e\") "
Dec 13 14:30:37.820364 kubelet[3022]: I1213 14:30:37.817133 3022 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/56376088-006a-4c71-b761-928e73197b5e-cilium-config-path\") pod \"56376088-006a-4c71-b761-928e73197b5e\" (UID: \"56376088-006a-4c71-b761-928e73197b5e\") "
Dec 13 14:30:37.820364 kubelet[3022]: I1213 14:30:37.817176 3022 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/56376088-006a-4c71-b761-928e73197b5e-cilium-cgroup\") pod \"56376088-006a-4c71-b761-928e73197b5e\" (UID: \"56376088-006a-4c71-b761-928e73197b5e\") "
Dec 13 14:30:37.820364 kubelet[3022]: I1213 14:30:37.817246 3022 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/56376088-006a-4c71-b761-928e73197b5e-cilium-run\") on node \"ip-172-31-23-203\" DevicePath \"\""
Dec 13 14:30:37.820364 kubelet[3022]: I1213 14:30:37.817265 3022 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/56376088-006a-4c71-b761-928e73197b5e-host-proc-sys-kernel\") on node \"ip-172-31-23-203\" DevicePath \"\""
Dec 13 14:30:37.820364 kubelet[3022]: I1213 14:30:37.817282 3022 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/56376088-006a-4c71-b761-928e73197b5e-lib-modules\") on node \"ip-172-31-23-203\" DevicePath \"\""
Dec 13 14:30:37.820364 kubelet[3022]: I1213 14:30:37.817315 3022 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/56376088-006a-4c71-b761-928e73197b5e-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "56376088-006a-4c71-b761-928e73197b5e" (UID: "56376088-006a-4c71-b761-928e73197b5e"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:30:37.820642 kubelet[3022]: I1213 14:30:37.817596 3022 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/56376088-006a-4c71-b761-928e73197b5e-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "56376088-006a-4c71-b761-928e73197b5e" (UID: "56376088-006a-4c71-b761-928e73197b5e"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:30:37.820642 kubelet[3022]: I1213 14:30:37.817630 3022 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/56376088-006a-4c71-b761-928e73197b5e-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "56376088-006a-4c71-b761-928e73197b5e" (UID: "56376088-006a-4c71-b761-928e73197b5e"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:30:37.822379 kubelet[3022]: I1213 14:30:37.822228 3022 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/56376088-006a-4c71-b761-928e73197b5e-hostproc" (OuterVolumeSpecName: "hostproc") pod "56376088-006a-4c71-b761-928e73197b5e" (UID: "56376088-006a-4c71-b761-928e73197b5e"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:30:37.822550 kubelet[3022]: I1213 14:30:37.822423 3022 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/56376088-006a-4c71-b761-928e73197b5e-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "56376088-006a-4c71-b761-928e73197b5e" (UID: "56376088-006a-4c71-b761-928e73197b5e"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:30:37.824866 kubelet[3022]: I1213 14:30:37.824835 3022 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/56376088-006a-4c71-b761-928e73197b5e-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "56376088-006a-4c71-b761-928e73197b5e" (UID: "56376088-006a-4c71-b761-928e73197b5e"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:30:37.825224 kubelet[3022]: I1213 14:30:37.825120 3022 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/56376088-006a-4c71-b761-928e73197b5e-cni-path" (OuterVolumeSpecName: "cni-path") pod "56376088-006a-4c71-b761-928e73197b5e" (UID: "56376088-006a-4c71-b761-928e73197b5e"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:30:37.833341 systemd[1]: var-lib-kubelet-pods-56376088\x2d006a\x2d4c71\x2db761\x2d928e73197b5e-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully.
Dec 13 14:30:37.841480 kubelet[3022]: I1213 14:30:37.840422 3022 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56376088-006a-4c71-b761-928e73197b5e-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "56376088-006a-4c71-b761-928e73197b5e" (UID: "56376088-006a-4c71-b761-928e73197b5e"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 13 14:30:37.841480 kubelet[3022]: I1213 14:30:37.840999 3022 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56376088-006a-4c71-b761-928e73197b5e-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "56376088-006a-4c71-b761-928e73197b5e" (UID: "56376088-006a-4c71-b761-928e73197b5e"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 13 14:30:37.840537 systemd[1]: var-lib-kubelet-pods-56376088\x2d006a\x2d4c71\x2db761\x2d928e73197b5e-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Dec 13 14:30:37.846367 kubelet[3022]: I1213 14:30:37.846323 3022 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56376088-006a-4c71-b761-928e73197b5e-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "56376088-006a-4c71-b761-928e73197b5e" (UID: "56376088-006a-4c71-b761-928e73197b5e"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 13 14:30:37.846985 kubelet[3022]: I1213 14:30:37.846951 3022 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56376088-006a-4c71-b761-928e73197b5e-kube-api-access-2mjcv" (OuterVolumeSpecName: "kube-api-access-2mjcv") pod "56376088-006a-4c71-b761-928e73197b5e" (UID: "56376088-006a-4c71-b761-928e73197b5e"). InnerVolumeSpecName "kube-api-access-2mjcv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 14:30:37.847406 kubelet[3022]: I1213 14:30:37.847322 3022 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56376088-006a-4c71-b761-928e73197b5e-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "56376088-006a-4c71-b761-928e73197b5e" (UID: "56376088-006a-4c71-b761-928e73197b5e"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 14:30:37.917671 kubelet[3022]: I1213 14:30:37.917637 3022 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-2mjcv\" (UniqueName: \"kubernetes.io/projected/56376088-006a-4c71-b761-928e73197b5e-kube-api-access-2mjcv\") on node \"ip-172-31-23-203\" DevicePath \"\""
Dec 13 14:30:37.917671 kubelet[3022]: I1213 14:30:37.917674 3022 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/56376088-006a-4c71-b761-928e73197b5e-cilium-config-path\") on node \"ip-172-31-23-203\" DevicePath \"\""
Dec 13 14:30:37.918096 kubelet[3022]: I1213 14:30:37.917691 3022 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/56376088-006a-4c71-b761-928e73197b5e-cilium-cgroup\") on node \"ip-172-31-23-203\" DevicePath \"\""
Dec 13 14:30:37.918096 kubelet[3022]: I1213 14:30:37.917705 3022 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/56376088-006a-4c71-b761-928e73197b5e-clustermesh-secrets\") on node \"ip-172-31-23-203\" DevicePath \"\""
Dec 13 14:30:37.918096 kubelet[3022]: I1213 14:30:37.917720 3022 reconciler_common.go:300] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/56376088-006a-4c71-b761-928e73197b5e-cilium-ipsec-secrets\") on node \"ip-172-31-23-203\" DevicePath \"\""
Dec 13 14:30:37.918096 kubelet[3022]: I1213 14:30:37.917809 3022 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/56376088-006a-4c71-b761-928e73197b5e-etc-cni-netd\") on node \"ip-172-31-23-203\" DevicePath \"\""
Dec 13 14:30:37.918096 kubelet[3022]: I1213 14:30:37.917825 3022 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/56376088-006a-4c71-b761-928e73197b5e-hubble-tls\") on node \"ip-172-31-23-203\" DevicePath \"\""
Dec 13 14:30:37.918096 kubelet[3022]: I1213 14:30:37.917872 3022 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/56376088-006a-4c71-b761-928e73197b5e-bpf-maps\") on node \"ip-172-31-23-203\" DevicePath \"\""
Dec 13 14:30:37.918096 kubelet[3022]: I1213 14:30:37.917889 3022 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/56376088-006a-4c71-b761-928e73197b5e-cni-path\") on node \"ip-172-31-23-203\" DevicePath \"\""
Dec 13 14:30:37.918096 kubelet[3022]: I1213 14:30:37.917903 3022 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/56376088-006a-4c71-b761-928e73197b5e-host-proc-sys-net\") on node \"ip-172-31-23-203\" DevicePath \"\""
Dec 13 14:30:37.918332 kubelet[3022]: I1213 14:30:37.917915 3022 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/56376088-006a-4c71-b761-928e73197b5e-xtables-lock\") on node \"ip-172-31-23-203\" DevicePath \"\""
Dec 13 14:30:37.918332 kubelet[3022]: I1213 14:30:37.918007 3022 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/56376088-006a-4c71-b761-928e73197b5e-hostproc\") on node \"ip-172-31-23-203\" DevicePath \"\""
Dec 13 14:30:38.312626 systemd[1]: var-lib-kubelet-pods-56376088\x2d006a\x2d4c71\x2db761\x2d928e73197b5e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d2mjcv.mount: Deactivated successfully.
Dec 13 14:30:38.312826 systemd[1]: var-lib-kubelet-pods-56376088\x2d006a\x2d4c71\x2db761\x2d928e73197b5e-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Dec 13 14:30:38.604274 kubelet[3022]: I1213 14:30:38.604058 3022 scope.go:117] "RemoveContainer" containerID="29911e82bc48758eed00d5e38b242343f905479090c58236804f89f47cca63f1"
Dec 13 14:30:38.605618 env[1828]: time="2024-12-13T14:30:38.605575834Z" level=info msg="RemoveContainer for \"29911e82bc48758eed00d5e38b242343f905479090c58236804f89f47cca63f1\""
Dec 13 14:30:38.620310 env[1828]: time="2024-12-13T14:30:38.618178387Z" level=info msg="RemoveContainer for \"29911e82bc48758eed00d5e38b242343f905479090c58236804f89f47cca63f1\" returns successfully"
Dec 13 14:30:38.732934 kubelet[3022]: I1213 14:30:38.732896 3022 topology_manager.go:215] "Topology Admit Handler" podUID="a1343d67-597b-4fbe-a2de-85bec1628664" podNamespace="kube-system" podName="cilium-2xhzr"
Dec 13 14:30:38.733315 kubelet[3022]: E1213 14:30:38.733299 3022 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="56376088-006a-4c71-b761-928e73197b5e" containerName="mount-cgroup"
Dec 13 14:30:38.733467 kubelet[3022]: I1213 14:30:38.733455 3022 memory_manager.go:354] "RemoveStaleState removing state" podUID="56376088-006a-4c71-b761-928e73197b5e" containerName="mount-cgroup"
Dec 13 14:30:38.824178 kubelet[3022]: I1213 14:30:38.824139 3022 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a1343d67-597b-4fbe-a2de-85bec1628664-cilium-run\") pod \"cilium-2xhzr\" (UID: \"a1343d67-597b-4fbe-a2de-85bec1628664\") " pod="kube-system/cilium-2xhzr"
Dec 13 14:30:38.824512 kubelet[3022]: I1213 14:30:38.824489 3022 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a1343d67-597b-4fbe-a2de-85bec1628664-cilium-config-path\") pod \"cilium-2xhzr\" (UID: \"a1343d67-597b-4fbe-a2de-85bec1628664\") " pod="kube-system/cilium-2xhzr"
Dec 13 14:30:38.824700 kubelet[3022]: I1213 14:30:38.824690 3022 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a1343d67-597b-4fbe-a2de-85bec1628664-cilium-cgroup\") pod \"cilium-2xhzr\" (UID: \"a1343d67-597b-4fbe-a2de-85bec1628664\") " pod="kube-system/cilium-2xhzr"
Dec 13 14:30:38.825038 kubelet[3022]: I1213 14:30:38.824980 3022 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a1343d67-597b-4fbe-a2de-85bec1628664-clustermesh-secrets\") pod \"cilium-2xhzr\" (UID: \"a1343d67-597b-4fbe-a2de-85bec1628664\") " pod="kube-system/cilium-2xhzr"
Dec 13 14:30:38.825191 kubelet[3022]: I1213 14:30:38.825180 3022 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a1343d67-597b-4fbe-a2de-85bec1628664-bpf-maps\") pod \"cilium-2xhzr\" (UID: \"a1343d67-597b-4fbe-a2de-85bec1628664\") " pod="kube-system/cilium-2xhzr"
Dec 13 14:30:38.825573 kubelet[3022]: I1213 14:30:38.825521 3022 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a1343d67-597b-4fbe-a2de-85bec1628664-cni-path\") pod \"cilium-2xhzr\" (UID: \"a1343d67-597b-4fbe-a2de-85bec1628664\") " pod="kube-system/cilium-2xhzr"
Dec 13 14:30:38.826271 kubelet[3022]: I1213 14:30:38.825864 3022 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a1343d67-597b-4fbe-a2de-85bec1628664-lib-modules\") pod \"cilium-2xhzr\" (UID: \"a1343d67-597b-4fbe-a2de-85bec1628664\") " pod="kube-system/cilium-2xhzr"
Dec 13 14:30:38.826546 kubelet[3022]: I1213 14:30:38.826515 3022 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a1343d67-597b-4fbe-a2de-85bec1628664-etc-cni-netd\") pod \"cilium-2xhzr\" (UID: \"a1343d67-597b-4fbe-a2de-85bec1628664\") " pod="kube-system/cilium-2xhzr"
Dec 13 14:30:38.826690 kubelet[3022]: I1213 14:30:38.826681 3022 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a1343d67-597b-4fbe-a2de-85bec1628664-hubble-tls\") pod \"cilium-2xhzr\" (UID: \"a1343d67-597b-4fbe-a2de-85bec1628664\") " pod="kube-system/cilium-2xhzr"
Dec 13 14:30:38.827113 kubelet[3022]: I1213 14:30:38.827098 3022 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zz7fq\" (UniqueName: \"kubernetes.io/projected/a1343d67-597b-4fbe-a2de-85bec1628664-kube-api-access-zz7fq\") pod \"cilium-2xhzr\" (UID: \"a1343d67-597b-4fbe-a2de-85bec1628664\") " pod="kube-system/cilium-2xhzr"
Dec 13 14:30:38.827541 kubelet[3022]: I1213 14:30:38.827420 3022 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a1343d67-597b-4fbe-a2de-85bec1628664-xtables-lock\") pod \"cilium-2xhzr\" (UID: \"a1343d67-597b-4fbe-a2de-85bec1628664\") " pod="kube-system/cilium-2xhzr"
Dec 13 14:30:38.828001 kubelet[3022]: I1213 14:30:38.827682 3022 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a1343d67-597b-4fbe-a2de-85bec1628664-host-proc-sys-net\") pod \"cilium-2xhzr\" (UID: \"a1343d67-597b-4fbe-a2de-85bec1628664\") " pod="kube-system/cilium-2xhzr"
Dec 13 14:30:38.828195 kubelet[3022]: I1213 14:30:38.828146 3022 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a1343d67-597b-4fbe-a2de-85bec1628664-hostproc\") pod \"cilium-2xhzr\" (UID: \"a1343d67-597b-4fbe-a2de-85bec1628664\") " pod="kube-system/cilium-2xhzr"
Dec 13 14:30:38.828475 kubelet[3022]: I1213 14:30:38.828462 3022 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/a1343d67-597b-4fbe-a2de-85bec1628664-cilium-ipsec-secrets\") pod \"cilium-2xhzr\" (UID: \"a1343d67-597b-4fbe-a2de-85bec1628664\") " pod="kube-system/cilium-2xhzr"
Dec 13 14:30:38.828796 kubelet[3022]: I1213 14:30:38.828729 3022 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a1343d67-597b-4fbe-a2de-85bec1628664-host-proc-sys-kernel\") pod \"cilium-2xhzr\" (UID: \"a1343d67-597b-4fbe-a2de-85bec1628664\") " pod="kube-system/cilium-2xhzr"
Dec 13 14:30:38.989812 kubelet[3022]: I1213 14:30:38.989780 3022 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="56376088-006a-4c71-b761-928e73197b5e" path="/var/lib/kubelet/pods/56376088-006a-4c71-b761-928e73197b5e/volumes"
Dec 13 14:30:39.040485 env[1828]: time="2024-12-13T14:30:39.040417126Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2xhzr,Uid:a1343d67-597b-4fbe-a2de-85bec1628664,Namespace:kube-system,Attempt:0,}"
Dec 13 14:30:39.070127 env[1828]: time="2024-12-13T14:30:39.070048078Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 14:30:39.070314 env[1828]: time="2024-12-13T14:30:39.070141187Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 14:30:39.070314 env[1828]: time="2024-12-13T14:30:39.070172006Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 14:30:39.070425 env[1828]: time="2024-12-13T14:30:39.070346250Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2d6b31f1c38df4ffed7cbaa6fd98297c384a18562e77cbeba21e9298909305b5 pid=4963 runtime=io.containerd.runc.v2
Dec 13 14:30:39.080614 kubelet[3022]: I1213 14:30:39.080556 3022 setters.go:568] "Node became not ready" node="ip-172-31-23-203" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-12-13T14:30:39Z","lastTransitionTime":"2024-12-13T14:30:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Dec 13 14:30:39.164468 env[1828]: time="2024-12-13T14:30:39.164425957Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2xhzr,Uid:a1343d67-597b-4fbe-a2de-85bec1628664,Namespace:kube-system,Attempt:0,} returns sandbox id \"2d6b31f1c38df4ffed7cbaa6fd98297c384a18562e77cbeba21e9298909305b5\""
Dec 13 14:30:39.169272 env[1828]: time="2024-12-13T14:30:39.169154055Z" level=info msg="CreateContainer within sandbox \"2d6b31f1c38df4ffed7cbaa6fd98297c384a18562e77cbeba21e9298909305b5\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Dec 13 14:30:39.197665 env[1828]: time="2024-12-13T14:30:39.197620842Z" level=info msg="CreateContainer within sandbox \"2d6b31f1c38df4ffed7cbaa6fd98297c384a18562e77cbeba21e9298909305b5\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"e530ea42c21a6127fb92ba07b5c95a18dec004ffe4a63bdb992b3ab2ab55343d\""
Dec 13 14:30:39.198593 env[1828]: time="2024-12-13T14:30:39.198560150Z" level=info msg="StartContainer for \"e530ea42c21a6127fb92ba07b5c95a18dec004ffe4a63bdb992b3ab2ab55343d\""
Dec 13 14:30:39.285311 env[1828]: time="2024-12-13T14:30:39.285193151Z" level=info msg="StartContainer for \"e530ea42c21a6127fb92ba07b5c95a18dec004ffe4a63bdb992b3ab2ab55343d\" returns successfully"
Dec 13 14:30:39.336538 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e530ea42c21a6127fb92ba07b5c95a18dec004ffe4a63bdb992b3ab2ab55343d-rootfs.mount: Deactivated successfully.
Dec 13 14:30:39.361516 env[1828]: time="2024-12-13T14:30:39.361444947Z" level=info msg="shim disconnected" id=e530ea42c21a6127fb92ba07b5c95a18dec004ffe4a63bdb992b3ab2ab55343d
Dec 13 14:30:39.361516 env[1828]: time="2024-12-13T14:30:39.361499117Z" level=warning msg="cleaning up after shim disconnected" id=e530ea42c21a6127fb92ba07b5c95a18dec004ffe4a63bdb992b3ab2ab55343d namespace=k8s.io
Dec 13 14:30:39.361516 env[1828]: time="2024-12-13T14:30:39.361513889Z" level=info msg="cleaning up dead shim"
Dec 13 14:30:39.372162 env[1828]: time="2024-12-13T14:30:39.372117455Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:30:39Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5047 runtime=io.containerd.runc.v2\n"
Dec 13 14:30:39.626066 env[1828]: time="2024-12-13T14:30:39.626017867Z" level=info msg="CreateContainer within sandbox \"2d6b31f1c38df4ffed7cbaa6fd98297c384a18562e77cbeba21e9298909305b5\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Dec 13 14:30:39.675242 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1188339515.mount: Deactivated successfully.
Dec 13 14:30:39.693144 env[1828]: time="2024-12-13T14:30:39.693094074Z" level=info msg="CreateContainer within sandbox \"2d6b31f1c38df4ffed7cbaa6fd98297c384a18562e77cbeba21e9298909305b5\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"1c8ca7de53c1890a535ea58b11e06b9b28e67942b82a98a255d3585b95faf4ab\""
Dec 13 14:30:39.694247 env[1828]: time="2024-12-13T14:30:39.694029131Z" level=info msg="StartContainer for \"1c8ca7de53c1890a535ea58b11e06b9b28e67942b82a98a255d3585b95faf4ab\""
Dec 13 14:30:39.792814 env[1828]: time="2024-12-13T14:30:39.792758305Z" level=info msg="StartContainer for \"1c8ca7de53c1890a535ea58b11e06b9b28e67942b82a98a255d3585b95faf4ab\" returns successfully"
Dec 13 14:30:39.858341 env[1828]: time="2024-12-13T14:30:39.858284984Z" level=info msg="shim disconnected" id=1c8ca7de53c1890a535ea58b11e06b9b28e67942b82a98a255d3585b95faf4ab
Dec 13 14:30:39.858741 env[1828]: time="2024-12-13T14:30:39.858354838Z" level=warning msg="cleaning up after shim disconnected" id=1c8ca7de53c1890a535ea58b11e06b9b28e67942b82a98a255d3585b95faf4ab namespace=k8s.io
Dec 13 14:30:39.858741 env[1828]: time="2024-12-13T14:30:39.858367979Z" level=info msg="cleaning up dead shim"
Dec 13 14:30:39.871891 env[1828]: time="2024-12-13T14:30:39.871838089Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:30:39Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5112 runtime=io.containerd.runc.v2\n"
Dec 13 14:30:40.628505 env[1828]: time="2024-12-13T14:30:40.627821281Z" level=info msg="CreateContainer within sandbox \"2d6b31f1c38df4ffed7cbaa6fd98297c384a18562e77cbeba21e9298909305b5\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Dec 13 14:30:40.669259 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2622128782.mount: Deactivated successfully.
Dec 13 14:30:40.696111 env[1828]: time="2024-12-13T14:30:40.696050749Z" level=info msg="CreateContainer within sandbox \"2d6b31f1c38df4ffed7cbaa6fd98297c384a18562e77cbeba21e9298909305b5\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"2da47fe4c77f78ad137154750580db4acd1875c7b2347a06574b52c15ff224d2\""
Dec 13 14:30:40.697128 env[1828]: time="2024-12-13T14:30:40.697091168Z" level=info msg="StartContainer for \"2da47fe4c77f78ad137154750580db4acd1875c7b2347a06574b52c15ff224d2\""
Dec 13 14:30:40.802377 env[1828]: time="2024-12-13T14:30:40.802324469Z" level=info msg="StartContainer for \"2da47fe4c77f78ad137154750580db4acd1875c7b2347a06574b52c15ff224d2\" returns successfully"
Dec 13 14:30:40.874663 env[1828]: time="2024-12-13T14:30:40.874606840Z" level=info msg="shim disconnected" id=2da47fe4c77f78ad137154750580db4acd1875c7b2347a06574b52c15ff224d2
Dec 13 14:30:40.874663 env[1828]: time="2024-12-13T14:30:40.874662544Z" level=warning msg="cleaning up after shim disconnected" id=2da47fe4c77f78ad137154750580db4acd1875c7b2347a06574b52c15ff224d2 namespace=k8s.io
Dec 13 14:30:40.875119 env[1828]: time="2024-12-13T14:30:40.874674467Z" level=info msg="cleaning up dead shim"
Dec 13 14:30:40.886753 env[1828]: time="2024-12-13T14:30:40.886249374Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:30:40Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5171 runtime=io.containerd.runc.v2\n"
Dec 13 14:30:41.642863 env[1828]: time="2024-12-13T14:30:41.642750385Z" level=info msg="CreateContainer within sandbox \"2d6b31f1c38df4ffed7cbaa6fd98297c384a18562e77cbeba21e9298909305b5\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Dec 13 14:30:41.702781 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3787468750.mount: Deactivated successfully.
Dec 13 14:30:41.721404 env[1828]: time="2024-12-13T14:30:41.721349670Z" level=info msg="CreateContainer within sandbox \"2d6b31f1c38df4ffed7cbaa6fd98297c384a18562e77cbeba21e9298909305b5\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"1c0517895094b224124e007d180a6ffaf4106d865ab32fbce6f3cdf57ca55c2b\""
Dec 13 14:30:41.728290 env[1828]: time="2024-12-13T14:30:41.728245518Z" level=info msg="StartContainer for \"1c0517895094b224124e007d180a6ffaf4106d865ab32fbce6f3cdf57ca55c2b\""
Dec 13 14:30:41.816876 env[1828]: time="2024-12-13T14:30:41.816822101Z" level=info msg="StartContainer for \"1c0517895094b224124e007d180a6ffaf4106d865ab32fbce6f3cdf57ca55c2b\" returns successfully"
Dec 13 14:30:41.888122 env[1828]: time="2024-12-13T14:30:41.887636901Z" level=info msg="shim disconnected" id=1c0517895094b224124e007d180a6ffaf4106d865ab32fbce6f3cdf57ca55c2b
Dec 13 14:30:41.888482 env[1828]: time="2024-12-13T14:30:41.888135275Z" level=warning msg="cleaning up after shim disconnected" id=1c0517895094b224124e007d180a6ffaf4106d865ab32fbce6f3cdf57ca55c2b namespace=k8s.io
Dec 13 14:30:41.888482 env[1828]: time="2024-12-13T14:30:41.888158242Z" level=info msg="cleaning up dead shim"
Dec 13 14:30:41.901826 env[1828]: time="2024-12-13T14:30:41.901204989Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:30:41Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5225 runtime=io.containerd.runc.v2\n"
Dec 13 14:30:42.242413 kubelet[3022]: E1213 14:30:42.242296 3022 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Dec 13 14:30:42.317700 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1c0517895094b224124e007d180a6ffaf4106d865ab32fbce6f3cdf57ca55c2b-rootfs.mount: Deactivated successfully.
Dec 13 14:30:42.640828 env[1828]: time="2024-12-13T14:30:42.640767620Z" level=info msg="CreateContainer within sandbox \"2d6b31f1c38df4ffed7cbaa6fd98297c384a18562e77cbeba21e9298909305b5\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Dec 13 14:30:42.703499 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount518456254.mount: Deactivated successfully.
Dec 13 14:30:42.710334 env[1828]: time="2024-12-13T14:30:42.710281982Z" level=info msg="CreateContainer within sandbox \"2d6b31f1c38df4ffed7cbaa6fd98297c384a18562e77cbeba21e9298909305b5\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"e1e836b6073a4283882d7a5a9ac2cf7d9776cd05e4f3bc5c6e538116c51e2994\""
Dec 13 14:30:42.712676 env[1828]: time="2024-12-13T14:30:42.712605155Z" level=info msg="StartContainer for \"e1e836b6073a4283882d7a5a9ac2cf7d9776cd05e4f3bc5c6e538116c51e2994\""
Dec 13 14:30:42.836249 env[1828]: time="2024-12-13T14:30:42.836189841Z" level=info msg="StartContainer for \"e1e836b6073a4283882d7a5a9ac2cf7d9776cd05e4f3bc5c6e538116c51e2994\" returns successfully"
Dec 13 14:30:43.904023 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Dec 13 14:30:45.485706 systemd[1]: run-containerd-runc-k8s.io-e1e836b6073a4283882d7a5a9ac2cf7d9776cd05e4f3bc5c6e538116c51e2994-runc.aKBUsW.mount: Deactivated successfully.
Dec 13 14:30:47.403631 systemd-networkd[1507]: lxc_health: Link UP
Dec 13 14:30:47.409513 (udev-worker)[5794]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 14:30:47.410820 systemd-networkd[1507]: lxc_health: Gained carrier
Dec 13 14:30:47.411087 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Dec 13 14:30:47.822706 systemd[1]: run-containerd-runc-k8s.io-e1e836b6073a4283882d7a5a9ac2cf7d9776cd05e4f3bc5c6e538116c51e2994-runc.MKs7pn.mount: Deactivated successfully.
Dec 13 14:30:48.678020 systemd-networkd[1507]: lxc_health: Gained IPv6LL
Dec 13 14:30:49.081814 kubelet[3022]: I1213 14:30:49.081633 3022 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-2xhzr" podStartSLOduration=11.081580102 podStartE2EDuration="11.081580102s" podCreationTimestamp="2024-12-13 14:30:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:30:43.77320255 +0000 UTC m=+137.148718407" watchObservedRunningTime="2024-12-13 14:30:49.081580102 +0000 UTC m=+142.457095958"
Dec 13 14:30:50.084597 systemd[1]: run-containerd-runc-k8s.io-e1e836b6073a4283882d7a5a9ac2cf7d9776cd05e4f3bc5c6e538116c51e2994-runc.WXJAmR.mount: Deactivated successfully.
Dec 13 14:30:52.386549 systemd[1]: run-containerd-runc-k8s.io-e1e836b6073a4283882d7a5a9ac2cf7d9776cd05e4f3bc5c6e538116c51e2994-runc.kRQSkq.mount: Deactivated successfully.
Dec 13 14:30:56.825767 systemd[1]: run-containerd-runc-k8s.io-e1e836b6073a4283882d7a5a9ac2cf7d9776cd05e4f3bc5c6e538116c51e2994-runc.5l5r5e.mount: Deactivated successfully.
Dec 13 14:31:01.452699 systemd[1]: run-containerd-runc-k8s.io-e1e836b6073a4283882d7a5a9ac2cf7d9776cd05e4f3bc5c6e538116c51e2994-runc.OIEvwF.mount: Deactivated successfully.
Dec 13 14:31:01.615746 sshd[4866]: pam_unix(sshd:session): session closed for user core
Dec 13 14:31:01.632694 systemd[1]: sshd@27-172.31.23.203:22-139.178.89.65:46032.service: Deactivated successfully.
Dec 13 14:31:01.640899 systemd[1]: session-28.scope: Deactivated successfully.
Dec 13 14:31:01.648327 systemd-logind[1813]: Session 28 logged out. Waiting for processes to exit.
Dec 13 14:31:01.656080 systemd-logind[1813]: Removed session 28.
Dec 13 14:31:26.948694 env[1828]: time="2024-12-13T14:31:26.948651645Z" level=info msg="StopPodSandbox for \"af94b16621806d991f822f80b6e546d55f2a53f97054babf069f4a7319a1f2d4\""
Dec 13 14:31:26.949387 env[1828]: time="2024-12-13T14:31:26.948772707Z" level=info msg="TearDown network for sandbox \"af94b16621806d991f822f80b6e546d55f2a53f97054babf069f4a7319a1f2d4\" successfully"
Dec 13 14:31:26.949387 env[1828]: time="2024-12-13T14:31:26.948818242Z" level=info msg="StopPodSandbox for \"af94b16621806d991f822f80b6e546d55f2a53f97054babf069f4a7319a1f2d4\" returns successfully"
Dec 13 14:31:26.949387 env[1828]: time="2024-12-13T14:31:26.949262172Z" level=info msg="RemovePodSandbox for \"af94b16621806d991f822f80b6e546d55f2a53f97054babf069f4a7319a1f2d4\""
Dec 13 14:31:26.949387 env[1828]: time="2024-12-13T14:31:26.949298897Z" level=info msg="Forcibly stopping sandbox \"af94b16621806d991f822f80b6e546d55f2a53f97054babf069f4a7319a1f2d4\""
Dec 13 14:31:26.949571 env[1828]: time="2024-12-13T14:31:26.949390220Z" level=info msg="TearDown network for sandbox \"af94b16621806d991f822f80b6e546d55f2a53f97054babf069f4a7319a1f2d4\" successfully"
Dec 13 14:31:26.960234 env[1828]: time="2024-12-13T14:31:26.960174114Z" level=info msg="RemovePodSandbox \"af94b16621806d991f822f80b6e546d55f2a53f97054babf069f4a7319a1f2d4\" returns successfully"
Dec 13 14:31:26.961099 env[1828]: time="2024-12-13T14:31:26.961040912Z" level=info msg="StopPodSandbox for \"e8c8838c3813b01ca6e89f0326bf0dea492561e14857c28ac4750091aed8c635\""
Dec 13 14:31:26.961230 env[1828]: time="2024-12-13T14:31:26.961159750Z" level=info msg="TearDown network for sandbox \"e8c8838c3813b01ca6e89f0326bf0dea492561e14857c28ac4750091aed8c635\" successfully"
Dec 13 14:31:26.961230 env[1828]: time="2024-12-13T14:31:26.961208142Z" level=info msg="StopPodSandbox for \"e8c8838c3813b01ca6e89f0326bf0dea492561e14857c28ac4750091aed8c635\" returns successfully"
Dec 13 14:31:26.961737 env[1828]: time="2024-12-13T14:31:26.961707947Z" level=info msg="RemovePodSandbox for \"e8c8838c3813b01ca6e89f0326bf0dea492561e14857c28ac4750091aed8c635\""
Dec 13 14:31:26.961838 env[1828]: time="2024-12-13T14:31:26.961738814Z" level=info msg="Forcibly stopping sandbox \"e8c8838c3813b01ca6e89f0326bf0dea492561e14857c28ac4750091aed8c635\""
Dec 13 14:31:26.961838 env[1828]: time="2024-12-13T14:31:26.961825341Z" level=info msg="TearDown network for sandbox \"e8c8838c3813b01ca6e89f0326bf0dea492561e14857c28ac4750091aed8c635\" successfully"
Dec 13 14:31:26.969070 env[1828]: time="2024-12-13T14:31:26.969029427Z" level=info msg="RemovePodSandbox \"e8c8838c3813b01ca6e89f0326bf0dea492561e14857c28ac4750091aed8c635\" returns successfully"
Dec 13 14:31:26.969628 env[1828]: time="2024-12-13T14:31:26.969578690Z" level=info msg="StopPodSandbox for \"607787f0808839998c26333577d700958032f99d5d08088828a62a6587b24f3c\""
Dec 13 14:31:26.969829 env[1828]: time="2024-12-13T14:31:26.969778635Z" level=info msg="TearDown network for sandbox \"607787f0808839998c26333577d700958032f99d5d08088828a62a6587b24f3c\" successfully"
Dec 13 14:31:26.969900 env[1828]: time="2024-12-13T14:31:26.969823837Z" level=info msg="StopPodSandbox for \"607787f0808839998c26333577d700958032f99d5d08088828a62a6587b24f3c\" returns successfully"
Dec 13 14:31:26.970223 env[1828]: time="2024-12-13T14:31:26.970193875Z" level=info msg="RemovePodSandbox for \"607787f0808839998c26333577d700958032f99d5d08088828a62a6587b24f3c\""
Dec 13 14:31:26.970340 env[1828]: time="2024-12-13T14:31:26.970221885Z" level=info msg="Forcibly stopping sandbox \"607787f0808839998c26333577d700958032f99d5d08088828a62a6587b24f3c\""
Dec 13 14:31:26.970340 env[1828]: time="2024-12-13T14:31:26.970307832Z" level=info msg="TearDown network for sandbox \"607787f0808839998c26333577d700958032f99d5d08088828a62a6587b24f3c\" successfully"
Dec 13 14:31:26.976680 env[1828]: time="2024-12-13T14:31:26.976630425Z" level=info msg="RemovePodSandbox \"607787f0808839998c26333577d700958032f99d5d08088828a62a6587b24f3c\" returns successfully"
Dec 13 14:31:28.452873 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b27ebe01b52d5acab91953453c5c158022ffb31e8798e696313d3402d1aba78c-rootfs.mount: Deactivated successfully.
Dec 13 14:31:28.490756 env[1828]: time="2024-12-13T14:31:28.490674640Z" level=info msg="shim disconnected" id=b27ebe01b52d5acab91953453c5c158022ffb31e8798e696313d3402d1aba78c
Dec 13 14:31:28.491673 env[1828]: time="2024-12-13T14:31:28.491635061Z" level=warning msg="cleaning up after shim disconnected" id=b27ebe01b52d5acab91953453c5c158022ffb31e8798e696313d3402d1aba78c namespace=k8s.io
Dec 13 14:31:28.491673 env[1828]: time="2024-12-13T14:31:28.491660594Z" level=info msg="cleaning up dead shim"
Dec 13 14:31:28.502748 env[1828]: time="2024-12-13T14:31:28.502701085Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:31:28Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5991 runtime=io.containerd.runc.v2\n"
Dec 13 14:31:28.784089 kubelet[3022]: I1213 14:31:28.783709 3022 scope.go:117] "RemoveContainer" containerID="b27ebe01b52d5acab91953453c5c158022ffb31e8798e696313d3402d1aba78c"
Dec 13 14:31:28.789196 env[1828]: time="2024-12-13T14:31:28.789137453Z" level=info msg="CreateContainer within sandbox \"7368b288b2e743ddf3dcdcc53c73c2d38c4ad3ded603b93436b169d05aee8524\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Dec 13 14:31:28.828216 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3210361173.mount: Deactivated successfully.
Dec 13 14:31:28.837023 env[1828]: time="2024-12-13T14:31:28.836956238Z" level=info msg="CreateContainer within sandbox \"7368b288b2e743ddf3dcdcc53c73c2d38c4ad3ded603b93436b169d05aee8524\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"8cf50f3fd582a05bd395f16057967c1be1ad915d6e2e689be7d629f9b5fa7b60\""
Dec 13 14:31:28.837705 env[1828]: time="2024-12-13T14:31:28.837667185Z" level=info msg="StartContainer for \"8cf50f3fd582a05bd395f16057967c1be1ad915d6e2e689be7d629f9b5fa7b60\""
Dec 13 14:31:29.003982 env[1828]: time="2024-12-13T14:31:29.003923527Z" level=info msg="StartContainer for \"8cf50f3fd582a05bd395f16057967c1be1ad915d6e2e689be7d629f9b5fa7b60\" returns successfully"
Dec 13 14:31:30.552562 kubelet[3022]: E1213 14:31:30.552521 3022 controller.go:195] "Failed to update lease" err="Put \"https://172.31.23.203:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-23-203?timeout=10s\": context deadline exceeded"
Dec 13 14:31:33.443775 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-17662b5bbd68639a3d8817be41b088ee8862d523950e7ce934efea46e2289b9c-rootfs.mount: Deactivated successfully.
Dec 13 14:31:33.481445 env[1828]: time="2024-12-13T14:31:33.481366235Z" level=info msg="shim disconnected" id=17662b5bbd68639a3d8817be41b088ee8862d523950e7ce934efea46e2289b9c
Dec 13 14:31:33.482279 env[1828]: time="2024-12-13T14:31:33.481446228Z" level=warning msg="cleaning up after shim disconnected" id=17662b5bbd68639a3d8817be41b088ee8862d523950e7ce934efea46e2289b9c namespace=k8s.io
Dec 13 14:31:33.482279 env[1828]: time="2024-12-13T14:31:33.481641851Z" level=info msg="cleaning up dead shim"
Dec 13 14:31:33.499249 env[1828]: time="2024-12-13T14:31:33.499204465Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:31:33Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=6051 runtime=io.containerd.runc.v2\n"
Dec 13 14:31:33.803096 kubelet[3022]: I1213 14:31:33.802562 3022 scope.go:117] "RemoveContainer" containerID="17662b5bbd68639a3d8817be41b088ee8862d523950e7ce934efea46e2289b9c"
Dec 13 14:31:33.805942 env[1828]: time="2024-12-13T14:31:33.805902086Z" level=info msg="CreateContainer within sandbox \"5974d41a56660270046da292aa1847629a3370477bc73c3cf352801a322d2285\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Dec 13 14:31:33.840092 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2120194163.mount: Deactivated successfully.
Dec 13 14:31:33.857578 env[1828]: time="2024-12-13T14:31:33.857521954Z" level=info msg="CreateContainer within sandbox \"5974d41a56660270046da292aa1847629a3370477bc73c3cf352801a322d2285\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"2d12fcfa8ba8ae8aba1d3b1304f5ad6211e28ab783139c6d80153f253f260405\""
Dec 13 14:31:33.858262 env[1828]: time="2024-12-13T14:31:33.858213806Z" level=info msg="StartContainer for \"2d12fcfa8ba8ae8aba1d3b1304f5ad6211e28ab783139c6d80153f253f260405\""
Dec 13 14:31:33.950830 env[1828]: time="2024-12-13T14:31:33.950771386Z" level=info msg="StartContainer for \"2d12fcfa8ba8ae8aba1d3b1304f5ad6211e28ab783139c6d80153f253f260405\" returns successfully"
Dec 13 14:31:40.554575 kubelet[3022]: E1213 14:31:40.554354 3022 controller.go:195] "Failed to update lease" err="Put \"https://172.31.23.203:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-23-203?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"