Dec 13 14:13:36.989999 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083] Dec 13 14:13:36.990046 kernel: Linux version 5.15.173-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Fri Dec 13 12:58:58 -00 2024 Dec 13 14:13:36.990069 kernel: efi: EFI v2.70 by EDK II Dec 13 14:13:36.990096 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7b003a98 MEMRESERVE=0x7171cf98 Dec 13 14:13:36.990112 kernel: ACPI: Early table checksum verification disabled Dec 13 14:13:36.990126 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON) Dec 13 14:13:36.990142 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013) Dec 13 14:13:36.990156 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001) Dec 13 14:13:36.990170 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527) Dec 13 14:13:36.990183 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001) Dec 13 14:13:36.990202 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001) Dec 13 14:13:36.990216 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001) Dec 13 14:13:36.990229 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001) Dec 13 14:13:36.990244 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001) Dec 13 14:13:36.990260 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001) Dec 13 14:13:36.990279 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001) Dec 13 14:13:36.990294 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200 Dec 13 14:13:36.990308 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200') Dec 13 14:13:36.990322 kernel: printk: bootconsole [uart0] enabled Dec 13 14:13:36.990336 kernel: NUMA: Failed to initialise from firmware Dec 13 14:13:36.990351 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff] Dec 13 14:13:36.990366 kernel: NUMA: NODE_DATA [mem 0x4b5843900-0x4b5848fff] Dec 13 14:13:36.990425 kernel: Zone ranges: Dec 13 14:13:36.990443 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff] Dec 13 14:13:36.990457 kernel: DMA32 empty Dec 13 14:13:36.990472 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff] Dec 13 14:13:36.990492 kernel: Movable zone start for each node Dec 13 14:13:36.990506 kernel: Early memory node ranges Dec 13 14:13:36.990521 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff] Dec 13 14:13:36.990536 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff] Dec 13 14:13:36.990550 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff] Dec 13 14:13:36.990565 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff] Dec 13 14:13:36.990579 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff] Dec 13 14:13:36.990593 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff] Dec 13 14:13:36.990607 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff] Dec 13 14:13:36.990622 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff] Dec 13 14:13:36.990636 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff] Dec 13 14:13:36.990650 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges Dec 13 
14:13:36.990669 kernel: psci: probing for conduit method from ACPI. Dec 13 14:13:36.990683 kernel: psci: PSCIv1.0 detected in firmware. Dec 13 14:13:36.990704 kernel: psci: Using standard PSCI v0.2 function IDs Dec 13 14:13:36.990720 kernel: psci: Trusted OS migration not required Dec 13 14:13:36.990735 kernel: psci: SMC Calling Convention v1.1 Dec 13 14:13:36.990754 kernel: ACPI: SRAT not present Dec 13 14:13:36.990770 kernel: percpu: Embedded 30 pages/cpu s83032 r8192 d31656 u122880 Dec 13 14:13:36.990785 kernel: pcpu-alloc: s83032 r8192 d31656 u122880 alloc=30*4096 Dec 13 14:13:36.990801 kernel: pcpu-alloc: [0] 0 [0] 1 Dec 13 14:13:36.990816 kernel: Detected PIPT I-cache on CPU0 Dec 13 14:13:36.990831 kernel: CPU features: detected: GIC system register CPU interface Dec 13 14:13:36.990846 kernel: CPU features: detected: Spectre-v2 Dec 13 14:13:36.990861 kernel: CPU features: detected: Spectre-v3a Dec 13 14:13:36.990876 kernel: CPU features: detected: Spectre-BHB Dec 13 14:13:36.990891 kernel: CPU features: kernel page table isolation forced ON by KASLR Dec 13 14:13:36.990906 kernel: CPU features: detected: Kernel page table isolation (KPTI) Dec 13 14:13:36.990925 kernel: CPU features: detected: ARM erratum 1742098 Dec 13 14:13:36.990940 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923 Dec 13 14:13:36.990955 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872 Dec 13 14:13:36.990970 kernel: Policy zone: Normal Dec 13 14:13:36.990988 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=5997a8cf94b1df1856dc785f0a7074604bbf4c21fdcca24a1996021471a77601 Dec 13 14:13:36.991004 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Dec 13 14:13:36.991019 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Dec 13 14:13:36.991035 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Dec 13 14:13:36.991050 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Dec 13 14:13:36.991066 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB) Dec 13 14:13:36.991085 kernel: Memory: 3824524K/4030464K available (9792K kernel code, 2092K rwdata, 7576K rodata, 36416K init, 777K bss, 205940K reserved, 0K cma-reserved) Dec 13 14:13:36.991101 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Dec 13 14:13:36.991116 kernel: trace event string verifier disabled Dec 13 14:13:36.991131 kernel: rcu: Preemptible hierarchical RCU implementation. Dec 13 14:13:36.991148 kernel: rcu: RCU event tracing is enabled. Dec 13 14:13:36.991163 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Dec 13 14:13:36.991179 kernel: Trampoline variant of Tasks RCU enabled. Dec 13 14:13:36.991195 kernel: Tracing variant of Tasks RCU enabled. Dec 13 14:13:36.991210 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Dec 13 14:13:36.991225 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Dec 13 14:13:36.991240 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Dec 13 14:13:36.991255 kernel: GICv3: 96 SPIs implemented Dec 13 14:13:36.991274 kernel: GICv3: 0 Extended SPIs implemented Dec 13 14:13:36.991289 kernel: GICv3: Distributor has no Range Selector support Dec 13 14:13:36.991304 kernel: Root IRQ handler: gic_handle_irq Dec 13 14:13:36.991319 kernel: GICv3: 16 PPIs implemented Dec 13 14:13:36.991334 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000 Dec 13 14:13:36.991349 kernel: ACPI: SRAT not present Dec 13 14:13:36.991364 kernel: ITS [mem 0x10080000-0x1009ffff] Dec 13 14:13:36.991422 kernel: ITS@0x0000000010080000: allocated 8192 Devices @400090000 (indirect, esz 8, psz 64K, shr 1) Dec 13 14:13:36.991443 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000a0000 (flat, esz 8, psz 64K, shr 1) Dec 13 14:13:36.991459 kernel: GICv3: using LPI property table @0x00000004000b0000 Dec 13 14:13:36.991473 kernel: ITS: Using hypervisor restricted LPI range [128] Dec 13 14:13:36.991494 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000d0000 Dec 13 14:13:36.991510 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt). Dec 13 14:13:36.991525 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns Dec 13 14:13:36.991540 kernel: sched_clock: 56 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns Dec 13 14:13:36.991556 kernel: Console: colour dummy device 80x25 Dec 13 14:13:36.991572 kernel: printk: console [tty1] enabled Dec 13 14:13:36.991587 kernel: ACPI: Core revision 20210730 Dec 13 14:13:36.991603 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333) Dec 13 14:13:36.991618 kernel: pid_max: default: 32768 minimum: 301 Dec 13 14:13:36.991634 kernel: LSM: Security Framework initializing Dec 13 14:13:36.991653 kernel: SELinux: Initializing. Dec 13 14:13:36.991669 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Dec 13 14:13:36.991685 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Dec 13 14:13:36.991700 kernel: rcu: Hierarchical SRCU implementation. Dec 13 14:13:36.991715 kernel: Platform MSI: ITS@0x10080000 domain created Dec 13 14:13:36.991730 kernel: PCI/MSI: ITS@0x10080000 domain created Dec 13 14:13:36.991746 kernel: Remapping and enabling EFI services. Dec 13 14:13:36.991761 kernel: smp: Bringing up secondary CPUs ... Dec 13 14:13:36.991776 kernel: Detected PIPT I-cache on CPU1 Dec 13 14:13:36.991792 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000 Dec 13 14:13:36.991811 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000e0000 Dec 13 14:13:36.991827 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083] Dec 13 14:13:36.991842 kernel: smp: Brought up 1 node, 2 CPUs Dec 13 14:13:36.991858 kernel: SMP: Total of 2 processors activated. 
Dec 13 14:13:36.991873 kernel: CPU features: detected: 32-bit EL0 Support Dec 13 14:13:36.991888 kernel: CPU features: detected: 32-bit EL1 Support Dec 13 14:13:36.991903 kernel: CPU features: detected: CRC32 instructions Dec 13 14:13:36.991918 kernel: CPU: All CPU(s) started at EL1 Dec 13 14:13:36.991934 kernel: alternatives: patching kernel code Dec 13 14:13:36.991953 kernel: devtmpfs: initialized Dec 13 14:13:36.991969 kernel: KASLR disabled due to lack of seed Dec 13 14:13:36.991995 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Dec 13 14:13:36.992015 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Dec 13 14:13:36.992031 kernel: pinctrl core: initialized pinctrl subsystem Dec 13 14:13:36.992046 kernel: SMBIOS 3.0.0 present. Dec 13 14:13:36.992063 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018 Dec 13 14:13:36.992079 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Dec 13 14:13:36.992095 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Dec 13 14:13:36.992111 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Dec 13 14:13:36.992127 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Dec 13 14:13:36.992147 kernel: audit: initializing netlink subsys (disabled) Dec 13 14:13:36.992163 kernel: audit: type=2000 audit(0.247:1): state=initialized audit_enabled=0 res=1 Dec 13 14:13:36.992179 kernel: thermal_sys: Registered thermal governor 'step_wise' Dec 13 14:13:36.992195 kernel: cpuidle: using governor menu Dec 13 14:13:36.992211 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Dec 13 14:13:36.992231 kernel: ASID allocator initialised with 32768 entries Dec 13 14:13:36.992247 kernel: ACPI: bus type PCI registered Dec 13 14:13:36.992263 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Dec 13 14:13:36.992279 kernel: Serial: AMBA PL011 UART driver Dec 13 14:13:36.992295 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages Dec 13 14:13:36.992311 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages Dec 13 14:13:36.992327 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Dec 13 14:13:36.992343 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages Dec 13 14:13:36.992359 kernel: cryptd: max_cpu_qlen set to 1000 Dec 13 14:13:36.993331 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Dec 13 14:13:36.993359 kernel: ACPI: Added _OSI(Module Device) Dec 13 14:13:36.993391 kernel: ACPI: Added _OSI(Processor Device) Dec 13 14:13:37.003670 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Dec 13 14:13:37.003689 kernel: ACPI: Added _OSI(Processor Aggregator Device) Dec 13 14:13:37.003706 kernel: ACPI: Added _OSI(Linux-Dell-Video) Dec 13 14:13:37.003722 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Dec 13 14:13:37.003739 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Dec 13 14:13:37.003755 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Dec 13 14:13:37.003780 kernel: ACPI: Interpreter enabled Dec 13 14:13:37.003797 kernel: ACPI: Using GIC for interrupt routing Dec 13 14:13:37.003813 kernel: ACPI: MCFG table detected, 1 entries Dec 13 14:13:37.003829 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f]) Dec 13 14:13:37.004113 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Dec 13 14:13:37.004308 kernel: acpi PNP0A08:00: _OSC: platform does 
not support [LTR] Dec 13 14:13:37.004523 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Dec 13 14:13:37.004714 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00 Dec 13 14:13:37.004905 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f] Dec 13 14:13:37.004928 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window] Dec 13 14:13:37.004945 kernel: acpiphp: Slot [1] registered Dec 13 14:13:37.004961 kernel: acpiphp: Slot [2] registered Dec 13 14:13:37.004977 kernel: acpiphp: Slot [3] registered Dec 13 14:13:37.004993 kernel: acpiphp: Slot [4] registered Dec 13 14:13:37.005009 kernel: acpiphp: Slot [5] registered Dec 13 14:13:37.005025 kernel: acpiphp: Slot [6] registered Dec 13 14:13:37.005054 kernel: acpiphp: Slot [7] registered Dec 13 14:13:37.005080 kernel: acpiphp: Slot [8] registered Dec 13 14:13:37.005096 kernel: acpiphp: Slot [9] registered Dec 13 14:13:37.005112 kernel: acpiphp: Slot [10] registered Dec 13 14:13:37.005128 kernel: acpiphp: Slot [11] registered Dec 13 14:13:37.005144 kernel: acpiphp: Slot [12] registered Dec 13 14:13:37.005160 kernel: acpiphp: Slot [13] registered Dec 13 14:13:37.005176 kernel: acpiphp: Slot [14] registered Dec 13 14:13:37.005192 kernel: acpiphp: Slot [15] registered Dec 13 14:13:37.005208 kernel: acpiphp: Slot [16] registered Dec 13 14:13:37.005227 kernel: acpiphp: Slot [17] registered Dec 13 14:13:37.005244 kernel: acpiphp: Slot [18] registered Dec 13 14:13:37.005259 kernel: acpiphp: Slot [19] registered Dec 13 14:13:37.005275 kernel: acpiphp: Slot [20] registered Dec 13 14:13:37.005291 kernel: acpiphp: Slot [21] registered Dec 13 14:13:37.005307 kernel: acpiphp: Slot [22] registered Dec 13 14:13:37.005322 kernel: acpiphp: Slot [23] registered Dec 13 14:13:37.005338 kernel: acpiphp: Slot [24] registered Dec 13 14:13:37.005354 kernel: acpiphp: Slot [25] registered Dec 13 14:13:37.005370 kernel: acpiphp: Slot [26] registered Dec 13 14:13:37.005408 kernel: acpiphp: Slot [27] registered Dec 13 14:13:37.005424 kernel: acpiphp: Slot [28] registered Dec 13 14:13:37.005440 kernel: acpiphp: Slot [29] registered Dec 13 14:13:37.005456 kernel: acpiphp: Slot [30] registered Dec 13 14:13:37.005472 kernel: acpiphp: Slot [31] registered Dec 13 14:13:37.005488 kernel: PCI host bridge to bus 0000:00 Dec 13 14:13:37.005682 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window] Dec 13 14:13:37.005858 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Dec 13 14:13:37.006034 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window] Dec 13 14:13:37.006203 kernel: pci_bus 0000:00: root bus resource [bus 00-0f] Dec 13 14:13:37.006491 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000 Dec 13 14:13:37.006712 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003 Dec 13 14:13:37.007574 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff] Dec 13 14:13:37.007807 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 Dec 13 14:13:37.008005 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff] Dec 13 14:13:37.008203 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold Dec 13 14:13:37.008459 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 Dec 13 14:13:37.008663 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff] Dec 13 14:13:37.008861 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref] Dec 13 
14:13:37.009075 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff] Dec 13 14:13:37.009277 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold Dec 13 14:13:37.009573 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref] Dec 13 14:13:37.009767 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff] Dec 13 14:13:37.009958 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff] Dec 13 14:13:37.010149 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff] Dec 13 14:13:37.010341 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff] Dec 13 14:13:37.010562 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window] Dec 13 14:13:37.010736 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Dec 13 14:13:37.010913 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window] Dec 13 14:13:37.010936 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Dec 13 14:13:37.010953 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Dec 13 14:13:37.010970 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Dec 13 14:13:37.010987 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Dec 13 14:13:37.011003 kernel: iommu: Default domain type: Translated Dec 13 14:13:37.011020 kernel: iommu: DMA domain TLB invalidation policy: strict mode Dec 13 14:13:37.011036 kernel: vgaarb: loaded Dec 13 14:13:37.011052 kernel: pps_core: LinuxPPS API ver. 1 registered Dec 13 14:13:37.011073 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Dec 13 14:13:37.011090 kernel: PTP clock support registered Dec 13 14:13:37.011105 kernel: Registered efivars operations Dec 13 14:13:37.011121 kernel: clocksource: Switched to clocksource arch_sys_counter Dec 13 14:13:37.011137 kernel: VFS: Disk quotas dquot_6.6.0 Dec 13 14:13:37.011153 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Dec 13 14:13:37.011169 kernel: pnp: PnP ACPI init Dec 13 14:13:37.011360 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved Dec 13 14:13:37.011423 kernel: pnp: PnP ACPI: found 1 devices Dec 13 14:13:37.011443 kernel: NET: Registered PF_INET protocol family Dec 13 14:13:37.011459 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Dec 13 14:13:37.011476 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Dec 13 14:13:37.011492 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Dec 13 14:13:37.011509 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Dec 13 14:13:37.011525 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear) Dec 13 14:13:37.011541 kernel: TCP: Hash tables configured (established 32768 bind 32768) Dec 13 14:13:37.011557 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Dec 13 14:13:37.011578 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Dec 13 14:13:37.011595 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Dec 13 14:13:37.011611 kernel: PCI: CLS 0 bytes, default 64 Dec 13 14:13:37.011627 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available Dec 13 14:13:37.011643 kernel: kvm [1]: HYP mode not available Dec 13 14:13:37.011659 kernel: Initialise system trusted keyrings Dec 13 14:13:37.011676 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Dec 13 
14:13:37.011692 kernel: Key type asymmetric registered Dec 13 14:13:37.011707 kernel: Asymmetric key parser 'x509' registered Dec 13 14:13:37.011727 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Dec 13 14:13:37.011744 kernel: io scheduler mq-deadline registered Dec 13 14:13:37.011760 kernel: io scheduler kyber registered Dec 13 14:13:37.011775 kernel: io scheduler bfq registered Dec 13 14:13:37.011974 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered Dec 13 14:13:37.011998 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Dec 13 14:13:37.012015 kernel: ACPI: button: Power Button [PWRB] Dec 13 14:13:37.012031 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1 Dec 13 14:13:37.012052 kernel: ACPI: button: Sleep Button [SLPB] Dec 13 14:13:37.012069 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Dec 13 14:13:37.012086 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37 Dec 13 14:13:37.012274 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012) Dec 13 14:13:37.012297 kernel: printk: console [ttyS0] disabled Dec 13 14:13:37.012313 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A Dec 13 14:13:37.012330 kernel: printk: console [ttyS0] enabled Dec 13 14:13:37.012346 kernel: printk: bootconsole [uart0] disabled Dec 13 14:13:37.012361 kernel: thunder_xcv, ver 1.0 Dec 13 14:13:37.012412 kernel: thunder_bgx, ver 1.0 Dec 13 14:13:37.012439 kernel: nicpf, ver 1.0 Dec 13 14:13:37.012455 kernel: nicvf, ver 1.0 Dec 13 14:13:37.012681 kernel: rtc-efi rtc-efi.0: registered as rtc0 Dec 13 14:13:37.012867 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-12-13T14:13:36 UTC (1734099216) Dec 13 14:13:37.012890 kernel: hid: raw HID events driver (C) Jiri Kosina Dec 13 14:13:37.012906 kernel: NET: Registered PF_INET6 protocol family Dec 13 14:13:37.012922 kernel: Segment Routing with IPv6 Dec 13 14:13:37.012939 kernel: In-situ OAM (IOAM) with IPv6 Dec 13 14:13:37.012960 kernel: NET: Registered PF_PACKET protocol family Dec 13 14:13:37.012976 kernel: Key type dns_resolver registered Dec 13 14:13:37.012992 kernel: registered taskstats version 1 Dec 13 14:13:37.013008 kernel: Loading compiled-in X.509 certificates Dec 13 14:13:37.013025 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.173-flatcar: e011ba9949ade5a6d03f7a5e28171f7f59e70f8a' Dec 13 14:13:37.013057 kernel: Key type .fscrypt registered Dec 13 14:13:37.013079 kernel: Key type fscrypt-provisioning registered Dec 13 14:13:37.014198 kernel: ima: No TPM chip found, activating TPM-bypass! 
Dec 13 14:13:37.014222 kernel: ima: Allocated hash algorithm: sha1 Dec 13 14:13:37.014247 kernel: ima: No architecture policies found Dec 13 14:13:37.014263 kernel: clk: Disabling unused clocks Dec 13 14:13:37.014279 kernel: Freeing unused kernel memory: 36416K Dec 13 14:13:37.014295 kernel: Run /init as init process Dec 13 14:13:37.014311 kernel: with arguments: Dec 13 14:13:37.014327 kernel: /init Dec 13 14:13:37.014342 kernel: with environment: Dec 13 14:13:37.014358 kernel: HOME=/ Dec 13 14:13:37.014388 kernel: TERM=linux Dec 13 14:13:37.014455 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Dec 13 14:13:37.014477 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Dec 13 14:13:37.014499 systemd[1]: Detected virtualization amazon. Dec 13 14:13:37.014517 systemd[1]: Detected architecture arm64. Dec 13 14:13:37.014535 systemd[1]: Running in initrd. Dec 13 14:13:37.014552 systemd[1]: No hostname configured, using default hostname. Dec 13 14:13:37.014570 systemd[1]: Hostname set to . Dec 13 14:13:37.014592 systemd[1]: Initializing machine ID from VM UUID. Dec 13 14:13:37.014610 systemd[1]: Queued start job for default target initrd.target. Dec 13 14:13:37.014627 systemd[1]: Started systemd-ask-password-console.path. Dec 13 14:13:37.014644 systemd[1]: Reached target cryptsetup.target. Dec 13 14:13:37.014661 systemd[1]: Reached target paths.target. Dec 13 14:13:37.014678 systemd[1]: Reached target slices.target. Dec 13 14:13:37.014696 systemd[1]: Reached target swap.target. Dec 13 14:13:37.014713 systemd[1]: Reached target timers.target. Dec 13 14:13:37.014735 systemd[1]: Listening on iscsid.socket. Dec 13 14:13:37.014753 systemd[1]: Listening on iscsiuio.socket. Dec 13 14:13:37.014770 systemd[1]: Listening on systemd-journald-audit.socket. Dec 13 14:13:37.014788 systemd[1]: Listening on systemd-journald-dev-log.socket. Dec 13 14:13:37.014806 systemd[1]: Listening on systemd-journald.socket. Dec 13 14:13:37.014823 systemd[1]: Listening on systemd-networkd.socket. Dec 13 14:13:37.014841 systemd[1]: Listening on systemd-udevd-control.socket. Dec 13 14:13:37.014858 systemd[1]: Listening on systemd-udevd-kernel.socket. Dec 13 14:13:37.014880 systemd[1]: Reached target sockets.target. Dec 13 14:13:37.014898 systemd[1]: Starting kmod-static-nodes.service... Dec 13 14:13:37.014915 systemd[1]: Finished network-cleanup.service. Dec 13 14:13:37.014933 systemd[1]: Starting systemd-fsck-usr.service... Dec 13 14:13:37.014951 systemd[1]: Starting systemd-journald.service... Dec 13 14:13:37.014968 systemd[1]: Starting systemd-modules-load.service... Dec 13 14:13:37.014986 systemd[1]: Starting systemd-resolved.service... Dec 13 14:13:37.015004 systemd[1]: Starting systemd-vconsole-setup.service... Dec 13 14:13:37.015022 systemd[1]: Finished kmod-static-nodes.service. Dec 13 14:13:37.015043 systemd[1]: Finished systemd-fsck-usr.service. Dec 13 14:13:37.015061 systemd[1]: Finished systemd-vconsole-setup.service. Dec 13 14:13:37.015079 systemd[1]: Starting dracut-cmdline-ask.service... Dec 13 14:13:37.015096 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Dec 13 14:13:37.015114 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. 
Dec 13 14:13:37.015132 kernel: audit: type=1130 audit(1734099216.994:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:37.015202 systemd-journald[310]: Journal started Dec 13 14:13:37.015302 systemd-journald[310]: Runtime Journal (/run/log/journal/ec20ca936cb4ec663c84d31a057ab6cb) is 8.0M, max 75.4M, 67.4M free. Dec 13 14:13:37.024873 systemd[1]: Started systemd-journald.service. Dec 13 14:13:37.024933 kernel: audit: type=1130 audit(1734099217.015:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:36.994000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:37.015000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:36.943479 systemd-modules-load[311]: Inserted module 'overlay' Dec 13 14:13:37.034039 systemd-resolved[312]: Positive Trust Anchors: Dec 13 14:13:37.034065 systemd-resolved[312]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 14:13:37.034126 systemd-resolved[312]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Dec 13 14:13:37.065959 systemd[1]: Finished dracut-cmdline-ask.service. Dec 13 14:13:37.088254 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Dec 13 14:13:37.088292 kernel: audit: type=1130 audit(1734099217.072:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:37.072000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:37.075290 systemd[1]: Starting dracut-cmdline.service... 
Dec 13 14:13:37.102586 systemd-modules-load[311]: Inserted module 'br_netfilter' Dec 13 14:13:37.106251 kernel: Bridge firewalling registered Dec 13 14:13:37.111217 dracut-cmdline[328]: dracut-dracut-053 Dec 13 14:13:37.118090 dracut-cmdline[328]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=5997a8cf94b1df1856dc785f0a7074604bbf4c21fdcca24a1996021471a77601 Dec 13 14:13:37.151410 kernel: SCSI subsystem initialized Dec 13 14:13:37.179794 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Dec 13 14:13:37.179876 kernel: device-mapper: uevent: version 1.0.3 Dec 13 14:13:37.184511 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Dec 13 14:13:37.196366 systemd-modules-load[311]: Inserted module 'dm_multipath' Dec 13 14:13:37.198516 systemd[1]: Finished systemd-modules-load.service. Dec 13 14:13:37.204000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:37.206719 systemd[1]: Starting systemd-sysctl.service... Dec 13 14:13:37.222417 kernel: audit: type=1130 audit(1734099217.204:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:37.234006 systemd[1]: Finished systemd-sysctl.service. Dec 13 14:13:37.245816 kernel: audit: type=1130 audit(1734099217.236:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:37.236000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:37.287412 kernel: Loading iSCSI transport class v2.0-870. Dec 13 14:13:37.308418 kernel: iscsi: registered transport (tcp) Dec 13 14:13:37.334999 kernel: iscsi: registered transport (qla4xxx) Dec 13 14:13:37.335068 kernel: QLogic iSCSI HBA Driver Dec 13 14:13:37.536052 systemd-resolved[312]: Defaulting to hostname 'linux'. Dec 13 14:13:37.537833 kernel: random: crng init done Dec 13 14:13:37.539229 systemd[1]: Started systemd-resolved.service. Dec 13 14:13:37.550681 kernel: audit: type=1130 audit(1734099217.539:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:37.539000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:37.541064 systemd[1]: Reached target nss-lookup.target. Dec 13 14:13:37.566708 systemd[1]: Finished dracut-cmdline.service. 
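The dracut-cmdline entry above echoes the full kernel command line that later units in this log (verity-setup, parse-ip-for-networkd, the Ignition stages) pick their parameters from. Purely as an illustration of that string's key=value structure, and not of any parser actually used during boot, a minimal Python sketch (the cmdline value is abbreviated from the log):

```python
# Hypothetical sketch: split a kernel command line into its parameters.
# The string is abbreviated from the dracut-cmdline entry above; real
# consumers (dracut, systemd, Ignition) each read only the keys they own.
cmdline = (
    "BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr rootflags=rw "
    "mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 "
    "console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force "
    "flatcar.oem.id=ec2 net.ifnames=0 nvme_core.io_timeout=4294967295"
)

params: dict[str, list[str]] = {}
for token in cmdline.split():
    key, sep, value = token.partition("=")
    # Bare flags such as "earlycon" become empty strings; repeated keys
    # such as "console=" accumulate in order.
    params.setdefault(key, []).append(value if sep else "")

for key, values in params.items():
    print(key, values)
```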
Dec 13 14:13:37.565000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:37.576165 systemd[1]: Starting dracut-pre-udev.service... Dec 13 14:13:37.580407 kernel: audit: type=1130 audit(1734099217.565:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:37.640414 kernel: raid6: neonx8 gen() 6401 MB/s Dec 13 14:13:37.658409 kernel: raid6: neonx8 xor() 4749 MB/s Dec 13 14:13:37.676408 kernel: raid6: neonx4 gen() 6512 MB/s Dec 13 14:13:37.694408 kernel: raid6: neonx4 xor() 4929 MB/s Dec 13 14:13:37.712408 kernel: raid6: neonx2 gen() 5797 MB/s Dec 13 14:13:37.730408 kernel: raid6: neonx2 xor() 4519 MB/s Dec 13 14:13:37.748407 kernel: raid6: neonx1 gen() 4464 MB/s Dec 13 14:13:37.766408 kernel: raid6: neonx1 xor() 3684 MB/s Dec 13 14:13:37.784407 kernel: raid6: int64x8 gen() 3414 MB/s Dec 13 14:13:37.802408 kernel: raid6: int64x8 xor() 2088 MB/s Dec 13 14:13:37.820408 kernel: raid6: int64x4 gen() 3786 MB/s Dec 13 14:13:37.838408 kernel: raid6: int64x4 xor() 2191 MB/s Dec 13 14:13:37.856407 kernel: raid6: int64x2 gen() 3593 MB/s Dec 13 14:13:37.874408 kernel: raid6: int64x2 xor() 1946 MB/s Dec 13 14:13:37.892408 kernel: raid6: int64x1 gen() 2758 MB/s Dec 13 14:13:37.911498 kernel: raid6: int64x1 xor() 1451 MB/s Dec 13 14:13:37.911527 kernel: raid6: using algorithm neonx4 gen() 6512 MB/s Dec 13 14:13:37.911551 kernel: raid6: .... xor() 4929 MB/s, rmw enabled Dec 13 14:13:37.913092 kernel: raid6: using neon recovery algorithm Dec 13 14:13:37.932427 kernel: xor: measuring software checksum speed Dec 13 14:13:37.932486 kernel: 8regs : 9100 MB/sec Dec 13 14:13:37.934143 kernel: 32regs : 11101 MB/sec Dec 13 14:13:37.937571 kernel: arm64_neon : 8722 MB/sec Dec 13 14:13:37.937602 kernel: xor: using function: 32regs (11101 MB/sec) Dec 13 14:13:38.027435 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no Dec 13 14:13:38.044123 systemd[1]: Finished dracut-pre-udev.service. Dec 13 14:13:38.056807 kernel: audit: type=1130 audit(1734099218.044:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:38.056843 kernel: audit: type=1334 audit(1734099218.051:10): prog-id=7 op=LOAD Dec 13 14:13:38.044000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:38.051000 audit: BPF prog-id=7 op=LOAD Dec 13 14:13:38.051000 audit: BPF prog-id=8 op=LOAD Dec 13 14:13:38.057519 systemd[1]: Starting systemd-udevd.service... Dec 13 14:13:38.085340 systemd-udevd[509]: Using default interface naming scheme 'v252'. Dec 13 14:13:38.095613 systemd[1]: Started systemd-udevd.service. Dec 13 14:13:38.097000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:38.101151 systemd[1]: Starting dracut-pre-trigger.service... Dec 13 14:13:38.129467 dracut-pre-trigger[517]: rd.md=0: removing MD RAID activation Dec 13 14:13:38.189704 systemd[1]: Finished dracut-pre-trigger.service. 
Dec 13 14:13:38.190000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:38.192688 systemd[1]: Starting systemd-udev-trigger.service... Dec 13 14:13:38.294738 systemd[1]: Finished systemd-udev-trigger.service. Dec 13 14:13:38.295000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:38.415547 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Dec 13 14:13:38.415623 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012) Dec 13 14:13:38.431930 kernel: ena 0000:00:05.0: ENA device version: 0.10 Dec 13 14:13:38.432151 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Dec 13 14:13:38.432352 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Dec 13 14:13:38.432395 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:bd:25:7b:e1:45 Dec 13 14:13:38.432603 kernel: nvme nvme0: pci function 0000:00:04.0 Dec 13 14:13:38.436989 (udev-worker)[562]: Network interface NamePolicy= disabled on kernel command line. Dec 13 14:13:38.444688 kernel: nvme nvme0: 2/0/0 default/read/poll queues Dec 13 14:13:38.452102 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Dec 13 14:13:38.452157 kernel: GPT:9289727 != 16777215 Dec 13 14:13:38.454228 kernel: GPT:Alternate GPT header not at the end of the disk. Dec 13 14:13:38.455462 kernel: GPT:9289727 != 16777215 Dec 13 14:13:38.457153 kernel: GPT: Use GNU Parted to correct GPT errors. Dec 13 14:13:38.458596 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Dec 13 14:13:38.537411 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by (udev-worker) (572) Dec 13 14:13:38.550159 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Dec 13 14:13:38.599603 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Dec 13 14:13:38.647128 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Dec 13 14:13:38.659297 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Dec 13 14:13:38.664542 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Dec 13 14:13:38.681288 systemd[1]: Starting disk-uuid.service... Dec 13 14:13:38.693323 disk-uuid[669]: Primary Header is updated. Dec 13 14:13:38.693323 disk-uuid[669]: Secondary Entries is updated. Dec 13 14:13:38.693323 disk-uuid[669]: Secondary Header is updated. Dec 13 14:13:38.702403 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Dec 13 14:13:38.712422 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Dec 13 14:13:39.720574 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Dec 13 14:13:39.720831 disk-uuid[670]: The operation has completed successfully. Dec 13 14:13:39.881201 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 13 14:13:39.883345 systemd[1]: Finished disk-uuid.service. Dec 13 14:13:39.885000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:39.885000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:39.897491 systemd[1]: Starting verity-setup.service... 
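The GPT warnings a few entries above ("GPT:9289727 != 16777215", "Alternate GPT header not at the end of the disk") are the usual first-boot symptom of a volume that is larger than the image it was written from: the backup GPT header still sits at the last LBA of the original image rather than at the end of the grown EBS volume, and the log then shows disk-uuid.service updating the secondary entries and header. A small arithmetic check of those two LBAs, assuming the 512-byte logical sectors an EBS NVMe volume normally reports (the sector size is not stated in the log):

```python
# Hedged check of the GPT LBAs reported above, assuming 512-byte sectors.
SECTOR_BYTES = 512

alt_header_lba = 9_289_727    # where the backup GPT header currently is
last_lba = 16_777_215         # where the kernel expects it (last sector)

image_bytes = (alt_header_lba + 1) * SECTOR_BYTES
volume_bytes = (last_lba + 1) * SECTOR_BYTES

print(f"original image size : {image_bytes / 2**30:.2f} GiB")   # ~4.43 GiB
print(f"EBS volume size     : {volume_bytes / 2**30:.2f} GiB")  # 8.00 GiB
# Rewriting the backup header at the real end of the disk removes the
# mismatch on later scans (the kernel's "Use GNU Parted" hint; in this
# log, disk-uuid.service reports updating the secondary entries and
# header right afterwards).
```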
Dec 13 14:13:39.936084 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Dec 13 14:13:40.035910 systemd[1]: Found device dev-mapper-usr.device. Dec 13 14:13:40.040762 systemd[1]: Finished verity-setup.service. Dec 13 14:13:40.040000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:40.044956 systemd[1]: Mounting sysusr-usr.mount... Dec 13 14:13:40.131429 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Dec 13 14:13:40.132095 systemd[1]: Mounted sysusr-usr.mount. Dec 13 14:13:40.133840 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Dec 13 14:13:40.137527 systemd[1]: Starting ignition-setup.service... Dec 13 14:13:40.156697 systemd[1]: Starting parse-ip-for-networkd.service... Dec 13 14:13:40.168077 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Dec 13 14:13:40.168113 kernel: BTRFS info (device nvme0n1p6): using free space tree Dec 13 14:13:40.168144 kernel: BTRFS info (device nvme0n1p6): has skinny extents Dec 13 14:13:40.179411 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Dec 13 14:13:40.199768 systemd[1]: mnt-oem.mount: Deactivated successfully. Dec 13 14:13:40.225142 systemd[1]: Finished ignition-setup.service. Dec 13 14:13:40.226000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:40.229334 systemd[1]: Starting ignition-fetch-offline.service... Dec 13 14:13:40.301855 systemd[1]: Finished parse-ip-for-networkd.service. Dec 13 14:13:40.300000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:40.304000 audit: BPF prog-id=9 op=LOAD Dec 13 14:13:40.306583 systemd[1]: Starting systemd-networkd.service... Dec 13 14:13:40.352866 systemd-networkd[1109]: lo: Link UP Dec 13 14:13:40.352888 systemd-networkd[1109]: lo: Gained carrier Dec 13 14:13:40.356713 systemd-networkd[1109]: Enumeration completed Dec 13 14:13:40.357000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:40.356890 systemd[1]: Started systemd-networkd.service. Dec 13 14:13:40.358673 systemd[1]: Reached target network.target. Dec 13 14:13:40.360504 systemd-networkd[1109]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 14:13:40.361897 systemd[1]: Starting iscsiuio.service... Dec 13 14:13:40.377766 systemd[1]: Started iscsiuio.service. Dec 13 14:13:40.378000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:40.381134 systemd[1]: Starting iscsid.service... 
Dec 13 14:13:40.384498 systemd-networkd[1109]: eth0: Link UP Dec 13 14:13:40.384663 systemd-networkd[1109]: eth0: Gained carrier Dec 13 14:13:40.393517 iscsid[1114]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Dec 13 14:13:40.393517 iscsid[1114]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier]. Dec 13 14:13:40.393517 iscsid[1114]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Dec 13 14:13:40.393517 iscsid[1114]: If using hardware iscsi like qla4xxx this message can be ignored. Dec 13 14:13:40.393517 iscsid[1114]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Dec 13 14:13:40.393517 iscsid[1114]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Dec 13 14:13:40.394000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:40.425000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:40.392572 systemd[1]: Started iscsid.service. Dec 13 14:13:40.398779 systemd[1]: Starting dracut-initqueue.service... Dec 13 14:13:40.423211 systemd[1]: Finished dracut-initqueue.service. Dec 13 14:13:40.427311 systemd[1]: Reached target remote-fs-pre.target. Dec 13 14:13:40.428649 systemd[1]: Reached target remote-cryptsetup.target. Dec 13 14:13:40.432532 systemd[1]: Reached target remote-fs.target. Dec 13 14:13:40.450694 systemd[1]: Starting dracut-pre-mount.service... Dec 13 14:13:40.458344 systemd-networkd[1109]: eth0: DHCPv4 address 172.31.21.141/20, gateway 172.31.16.1 acquired from 172.31.16.1 Dec 13 14:13:40.479790 systemd[1]: Finished dracut-pre-mount.service. Dec 13 14:13:40.480000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:41.277630 ignition[1041]: Ignition 2.14.0 Dec 13 14:13:41.279136 ignition[1041]: Stage: fetch-offline Dec 13 14:13:41.279773 ignition[1041]: reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 14:13:41.279832 ignition[1041]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Dec 13 14:13:41.296312 ignition[1041]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 13 14:13:41.298553 ignition[1041]: Ignition finished successfully Dec 13 14:13:41.301735 systemd[1]: Finished ignition-fetch-offline.service. Dec 13 14:13:41.313661 kernel: kauditd_printk_skb: 15 callbacks suppressed Dec 13 14:13:41.313701 kernel: audit: type=1130 audit(1734099221.302:26): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:13:41.302000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:41.305645 systemd[1]: Starting ignition-fetch.service... Dec 13 14:13:41.322567 ignition[1133]: Ignition 2.14.0 Dec 13 14:13:41.323031 ignition[1133]: Stage: fetch Dec 13 14:13:41.323327 ignition[1133]: reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 14:13:41.323437 ignition[1133]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Dec 13 14:13:41.336804 ignition[1133]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 13 14:13:41.338922 ignition[1133]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 13 14:13:41.349495 ignition[1133]: INFO : PUT result: OK Dec 13 14:13:41.352659 ignition[1133]: DEBUG : parsed url from cmdline: "" Dec 13 14:13:41.352659 ignition[1133]: INFO : no config URL provided Dec 13 14:13:41.352659 ignition[1133]: INFO : reading system config file "/usr/lib/ignition/user.ign" Dec 13 14:13:41.358181 ignition[1133]: INFO : no config at "/usr/lib/ignition/user.ign" Dec 13 14:13:41.358181 ignition[1133]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 13 14:13:41.358181 ignition[1133]: INFO : PUT result: OK Dec 13 14:13:41.358181 ignition[1133]: INFO : GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Dec 13 14:13:41.365686 ignition[1133]: INFO : GET result: OK Dec 13 14:13:41.365686 ignition[1133]: DEBUG : parsing config with SHA512: 2acaa5eef07124189e7195eb2fb7fb4a8fa2443777ee7e654b74726c3cc5a9d31ce9f948054a3c94a389348d96427f98054326765f35d688aa3c1ec029fc2915 Dec 13 14:13:41.377158 unknown[1133]: fetched base config from "system" Dec 13 14:13:41.377192 unknown[1133]: fetched base config from "system" Dec 13 14:13:41.377217 unknown[1133]: fetched user config from "aws" Dec 13 14:13:41.384478 ignition[1133]: fetch: fetch complete Dec 13 14:13:41.384505 ignition[1133]: fetch: fetch passed Dec 13 14:13:41.384601 ignition[1133]: Ignition finished successfully Dec 13 14:13:41.390604 systemd[1]: Finished ignition-fetch.service. Dec 13 14:13:41.392000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:41.394819 systemd[1]: Starting ignition-kargs.service... Dec 13 14:13:41.404487 kernel: audit: type=1130 audit(1734099221.392:27): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:13:41.417817 ignition[1139]: Ignition 2.14.0 Dec 13 14:13:41.417842 ignition[1139]: Stage: kargs Dec 13 14:13:41.418136 ignition[1139]: reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 14:13:41.418199 ignition[1139]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Dec 13 14:13:41.431983 ignition[1139]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 13 14:13:41.434266 ignition[1139]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 13 14:13:41.437097 ignition[1139]: INFO : PUT result: OK Dec 13 14:13:41.441955 ignition[1139]: kargs: kargs passed Dec 13 14:13:41.442062 ignition[1139]: Ignition finished successfully Dec 13 14:13:41.446030 systemd[1]: Finished ignition-kargs.service. Dec 13 14:13:41.446000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:41.450408 systemd[1]: Starting ignition-disks.service... Dec 13 14:13:41.460418 kernel: audit: type=1130 audit(1734099221.446:28): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:41.464800 ignition[1145]: Ignition 2.14.0 Dec 13 14:13:41.465915 ignition[1145]: Stage: disks Dec 13 14:13:41.467538 ignition[1145]: reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 14:13:41.468302 ignition[1145]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Dec 13 14:13:41.479836 ignition[1145]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 13 14:13:41.482050 ignition[1145]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 13 14:13:41.484884 ignition[1145]: INFO : PUT result: OK Dec 13 14:13:41.490029 ignition[1145]: disks: disks passed Dec 13 14:13:41.490135 ignition[1145]: Ignition finished successfully Dec 13 14:13:41.494015 systemd[1]: Finished ignition-disks.service. Dec 13 14:13:41.496957 systemd[1]: Reached target initrd-root-device.target. Dec 13 14:13:41.513210 kernel: audit: type=1130 audit(1734099221.495:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:41.495000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:41.497098 systemd[1]: Reached target local-fs-pre.target. Dec 13 14:13:41.514702 systemd[1]: Reached target local-fs.target. Dec 13 14:13:41.516132 systemd[1]: Reached target sysinit.target. Dec 13 14:13:41.517530 systemd[1]: Reached target basic.target. Dec 13 14:13:41.520266 systemd[1]: Starting systemd-fsck-root.service... Dec 13 14:13:41.567199 systemd-fsck[1153]: ROOT: clean, 621/553520 files, 56020/553472 blocks Dec 13 14:13:41.578524 systemd[1]: Finished systemd-fsck-root.service. Dec 13 14:13:41.579000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:13:41.582764 systemd[1]: Mounting sysroot.mount... Dec 13 14:13:41.591818 kernel: audit: type=1130 audit(1734099221.579:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:41.608424 kernel: EXT4-fs (nvme0n1p9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Dec 13 14:13:41.608620 systemd[1]: Mounted sysroot.mount. Dec 13 14:13:41.613675 systemd[1]: Reached target initrd-root-fs.target. Dec 13 14:13:41.631598 systemd[1]: Mounting sysroot-usr.mount... Dec 13 14:13:41.636191 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Dec 13 14:13:41.636276 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Dec 13 14:13:41.636333 systemd[1]: Reached target ignition-diskful.target. Dec 13 14:13:41.648624 systemd[1]: Mounted sysroot-usr.mount. Dec 13 14:13:41.677209 systemd[1]: Mounting sysroot-usr-share-oem.mount... Dec 13 14:13:41.684963 systemd[1]: Starting initrd-setup-root.service... Dec 13 14:13:41.702537 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1170) Dec 13 14:13:41.706002 initrd-setup-root[1175]: cut: /sysroot/etc/passwd: No such file or directory Dec 13 14:13:41.710684 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Dec 13 14:13:41.710746 kernel: BTRFS info (device nvme0n1p6): using free space tree Dec 13 14:13:41.712969 kernel: BTRFS info (device nvme0n1p6): has skinny extents Dec 13 14:13:41.719405 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Dec 13 14:13:41.723486 systemd[1]: Mounted sysroot-usr-share-oem.mount. Dec 13 14:13:41.729199 initrd-setup-root[1201]: cut: /sysroot/etc/group: No such file or directory Dec 13 14:13:41.737427 initrd-setup-root[1209]: cut: /sysroot/etc/shadow: No such file or directory Dec 13 14:13:41.745067 initrd-setup-root[1217]: cut: /sysroot/etc/gshadow: No such file or directory Dec 13 14:13:41.995965 systemd[1]: Finished initrd-setup-root.service. Dec 13 14:13:41.997000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:42.004716 systemd[1]: Starting ignition-mount.service... Dec 13 14:13:42.007502 kernel: audit: type=1130 audit(1734099221.997:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:42.011625 systemd[1]: Starting sysroot-boot.service... Dec 13 14:13:42.024293 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. Dec 13 14:13:42.024498 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. 
Dec 13 14:13:42.032513 systemd-networkd[1109]: eth0: Gained IPv6LL Dec 13 14:13:42.056554 ignition[1236]: INFO : Ignition 2.14.0 Dec 13 14:13:42.058619 ignition[1236]: INFO : Stage: mount Dec 13 14:13:42.060886 ignition[1236]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 14:13:42.072058 ignition[1236]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Dec 13 14:13:42.060993 systemd[1]: Finished sysroot-boot.service. Dec 13 14:13:42.072000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:42.085222 ignition[1236]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 13 14:13:42.085222 ignition[1236]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 13 14:13:42.089623 kernel: audit: type=1130 audit(1734099222.072:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:42.089661 ignition[1236]: INFO : PUT result: OK Dec 13 14:13:42.108457 ignition[1236]: INFO : mount: mount passed Dec 13 14:13:42.109984 ignition[1236]: INFO : Ignition finished successfully Dec 13 14:13:42.112672 systemd[1]: Finished ignition-mount.service. Dec 13 14:13:42.114000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:42.116751 systemd[1]: Starting ignition-files.service... Dec 13 14:13:42.124685 kernel: audit: type=1130 audit(1734099222.114:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:42.132528 systemd[1]: Mounting sysroot-usr-share-oem.mount... Dec 13 14:13:42.150418 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/nvme0n1p6 scanned by mount (1246) Dec 13 14:13:42.156078 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Dec 13 14:13:42.156120 kernel: BTRFS info (device nvme0n1p6): using free space tree Dec 13 14:13:42.156143 kernel: BTRFS info (device nvme0n1p6): has skinny extents Dec 13 14:13:42.165413 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Dec 13 14:13:42.169521 systemd[1]: Mounted sysroot-usr-share-oem.mount. 
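Every Ignition stage in this log ("fetch", "kargs", "disks", "mount" above, and the "files" stage that follows) performs the same IMDSv2 handshake against the EC2 instance metadata service: a PUT to http://169.254.169.254/latest/api/token, then an authenticated GET (the fetch stage pulls /2019-10-01/user-data that way). As a rough sketch of that session-token flow only, not of Ignition's actual implementation, using the standard IMDSv2 headers:

```python
# Minimal sketch of the IMDSv2 flow the Ignition stages log above:
# PUT a session token, then GET user data with that token. This mirrors
# the request pattern in the log, not Ignition's own code.
import urllib.request

IMDS = "http://169.254.169.254"

def imds_token(ttl_seconds: int = 21600) -> str:
    req = urllib.request.Request(
        f"{IMDS}/latest/api/token",
        method="PUT",
        headers={"X-aws-ec2-metadata-token-ttl-seconds": str(ttl_seconds)},
    )
    with urllib.request.urlopen(req, timeout=5) as resp:
        return resp.read().decode()

def imds_user_data(token: str) -> bytes:
    req = urllib.request.Request(
        f"{IMDS}/2019-10-01/user-data",  # same path the fetch stage logs
        headers={"X-aws-ec2-metadata-token": token},
    )
    with urllib.request.urlopen(req, timeout=5) as resp:
        return resp.read()

if __name__ == "__main__":
    token = imds_token()
    print(imds_user_data(token)[:200])
```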
Dec 13 14:13:42.188442 ignition[1265]: INFO : Ignition 2.14.0 Dec 13 14:13:42.188442 ignition[1265]: INFO : Stage: files Dec 13 14:13:42.191586 ignition[1265]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 14:13:42.191586 ignition[1265]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Dec 13 14:13:42.205338 ignition[1265]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 13 14:13:42.207636 ignition[1265]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 13 14:13:42.210207 ignition[1265]: INFO : PUT result: OK Dec 13 14:13:42.214723 ignition[1265]: DEBUG : files: compiled without relabeling support, skipping Dec 13 14:13:42.218748 ignition[1265]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 13 14:13:42.221308 ignition[1265]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 13 14:13:42.259540 ignition[1265]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 13 14:13:42.262465 ignition[1265]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 13 14:13:42.265576 unknown[1265]: wrote ssh authorized keys file for user: core Dec 13 14:13:42.267675 ignition[1265]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 13 14:13:42.280937 ignition[1265]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Dec 13 14:13:42.284503 ignition[1265]: INFO : GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Dec 13 14:13:42.370314 ignition[1265]: INFO : GET result: OK Dec 13 14:13:42.526006 ignition[1265]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Dec 13 14:13:42.529778 ignition[1265]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 14:13:42.529778 ignition[1265]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 14:13:42.536060 ignition[1265]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/etc/eks/bootstrap.sh" Dec 13 14:13:42.536060 ignition[1265]: INFO : oem config not found in "/usr/share/oem", looking on oem partition Dec 13 14:13:42.549873 ignition[1265]: INFO : op(1): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1088630431" Dec 13 14:13:42.556154 ignition[1265]: CRITICAL : op(1): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1088630431": device or resource busy Dec 13 14:13:42.556154 ignition[1265]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem1088630431", trying btrfs: device or resource busy Dec 13 14:13:42.556154 ignition[1265]: INFO : op(2): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1088630431" Dec 13 14:13:42.564902 kernel: BTRFS info: devid 1 device path /dev/nvme0n1p6 changed to /dev/disk/by-label/OEM scanned by ignition (1265) Dec 13 14:13:42.564937 ignition[1265]: INFO : op(2): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1088630431" Dec 13 14:13:42.584566 ignition[1265]: INFO : op(3): [started] unmounting "/mnt/oem1088630431" Dec 13 14:13:42.586659 ignition[1265]: INFO : op(3): [finished] unmounting 
"/mnt/oem1088630431" Dec 13 14:13:42.586659 ignition[1265]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/etc/eks/bootstrap.sh" Dec 13 14:13:42.591861 ignition[1265]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Dec 13 14:13:42.595038 ignition[1265]: INFO : GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 Dec 13 14:13:43.062173 ignition[1265]: INFO : GET result: OK Dec 13 14:13:43.243774 ignition[1265]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Dec 13 14:13:43.247008 ignition[1265]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/install.sh" Dec 13 14:13:43.250397 ignition[1265]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/install.sh" Dec 13 14:13:43.253489 ignition[1265]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nginx.yaml" Dec 13 14:13:43.256754 ignition[1265]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nginx.yaml" Dec 13 14:13:43.259837 ignition[1265]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 14:13:43.262963 ignition[1265]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 14:13:43.266853 ignition[1265]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 14:13:43.270385 ignition[1265]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 14:13:43.270385 ignition[1265]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" Dec 13 14:13:43.270385 ignition[1265]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" Dec 13 14:13:43.282347 ignition[1265]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/etc/systemd/system/nvidia.service" Dec 13 14:13:43.285817 ignition[1265]: INFO : oem config not found in "/usr/share/oem", looking on oem partition Dec 13 14:13:43.295129 ignition[1265]: INFO : op(4): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3296885025" Dec 13 14:13:43.297843 ignition[1265]: CRITICAL : op(4): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3296885025": device or resource busy Dec 13 14:13:43.300816 ignition[1265]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem3296885025", trying btrfs: device or resource busy Dec 13 14:13:43.304083 ignition[1265]: INFO : op(5): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3296885025" Dec 13 14:13:43.318101 ignition[1265]: INFO : op(5): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3296885025" Dec 13 14:13:43.320743 ignition[1265]: INFO : op(6): [started] unmounting "/mnt/oem3296885025" Dec 13 14:13:43.322831 ignition[1265]: INFO : op(6): [finished] unmounting "/mnt/oem3296885025" Dec 13 14:13:43.322831 ignition[1265]: 
INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/etc/systemd/system/nvidia.service" Dec 13 14:13:43.328073 ignition[1265]: INFO : files: createFilesystemsFiles: createFiles: op(d): [started] writing file "/sysroot/etc/amazon/ssm/seelog.xml" Dec 13 14:13:43.328577 systemd[1]: mnt-oem3296885025.mount: Deactivated successfully. Dec 13 14:13:43.337041 ignition[1265]: INFO : oem config not found in "/usr/share/oem", looking on oem partition Dec 13 14:13:43.349371 ignition[1265]: INFO : op(7): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem898754957" Dec 13 14:13:43.352016 ignition[1265]: CRITICAL : op(7): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem898754957": device or resource busy Dec 13 14:13:43.352016 ignition[1265]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem898754957", trying btrfs: device or resource busy Dec 13 14:13:43.352016 ignition[1265]: INFO : op(8): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem898754957" Dec 13 14:13:43.360441 ignition[1265]: INFO : op(8): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem898754957" Dec 13 14:13:43.360441 ignition[1265]: INFO : op(9): [started] unmounting "/mnt/oem898754957" Dec 13 14:13:43.365322 ignition[1265]: INFO : op(9): [finished] unmounting "/mnt/oem898754957" Dec 13 14:13:43.367467 ignition[1265]: INFO : files: createFilesystemsFiles: createFiles: op(d): [finished] writing file "/sysroot/etc/amazon/ssm/seelog.xml" Dec 13 14:13:43.367467 ignition[1265]: INFO : files: createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" Dec 13 14:13:43.374393 ignition[1265]: INFO : GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-arm64.raw: attempt #1 Dec 13 14:13:43.868101 ignition[1265]: INFO : GET result: OK Dec 13 14:13:44.364828 ignition[1265]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" Dec 13 14:13:44.364828 ignition[1265]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/etc/amazon/ssm/amazon-ssm-agent.json" Dec 13 14:13:44.372404 ignition[1265]: INFO : oem config not found in "/usr/share/oem", looking on oem partition Dec 13 14:13:44.380247 ignition[1265]: INFO : op(a): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3205383002" Dec 13 14:13:44.382833 ignition[1265]: CRITICAL : op(a): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3205383002": device or resource busy Dec 13 14:13:44.382833 ignition[1265]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem3205383002", trying btrfs: device or resource busy Dec 13 14:13:44.382833 ignition[1265]: INFO : op(b): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3205383002" Dec 13 14:13:44.402498 ignition[1265]: INFO : op(b): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3205383002" Dec 13 14:13:44.402498 ignition[1265]: INFO : op(c): [started] unmounting "/mnt/oem3205383002" Dec 13 14:13:44.402498 ignition[1265]: INFO : op(c): [finished] unmounting "/mnt/oem3205383002" Dec 13 14:13:44.402498 ignition[1265]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/etc/amazon/ssm/amazon-ssm-agent.json" Dec 13 14:13:44.402498 ignition[1265]: INFO : files: op(10): [started] processing unit "coreos-metadata-sshkeys@.service" Dec 13 14:13:44.402498 
ignition[1265]: INFO : files: op(10): [finished] processing unit "coreos-metadata-sshkeys@.service" Dec 13 14:13:44.402498 ignition[1265]: INFO : files: op(11): [started] processing unit "amazon-ssm-agent.service" Dec 13 14:13:44.402498 ignition[1265]: INFO : files: op(11): op(12): [started] writing unit "amazon-ssm-agent.service" at "/sysroot/etc/systemd/system/amazon-ssm-agent.service" Dec 13 14:13:44.402498 ignition[1265]: INFO : files: op(11): op(12): [finished] writing unit "amazon-ssm-agent.service" at "/sysroot/etc/systemd/system/amazon-ssm-agent.service" Dec 13 14:13:44.402498 ignition[1265]: INFO : files: op(11): [finished] processing unit "amazon-ssm-agent.service" Dec 13 14:13:44.402498 ignition[1265]: INFO : files: op(13): [started] processing unit "nvidia.service" Dec 13 14:13:44.402498 ignition[1265]: INFO : files: op(13): [finished] processing unit "nvidia.service" Dec 13 14:13:44.402498 ignition[1265]: INFO : files: op(14): [started] processing unit "prepare-helm.service" Dec 13 14:13:44.402498 ignition[1265]: INFO : files: op(14): op(15): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 14:13:44.402498 ignition[1265]: INFO : files: op(14): op(15): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 14:13:44.402498 ignition[1265]: INFO : files: op(14): [finished] processing unit "prepare-helm.service" Dec 13 14:13:44.402498 ignition[1265]: INFO : files: op(16): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service " Dec 13 14:13:44.402498 ignition[1265]: INFO : files: op(16): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service " Dec 13 14:13:44.402498 ignition[1265]: INFO : files: op(17): [started] setting preset to enabled for "amazon-ssm-agent.service" Dec 13 14:13:44.402498 ignition[1265]: INFO : files: op(17): [finished] setting preset to enabled for "amazon-ssm-agent.service" Dec 13 14:13:44.402498 ignition[1265]: INFO : files: op(18): [started] setting preset to enabled for "nvidia.service" Dec 13 14:13:44.395819 systemd[1]: mnt-oem3205383002.mount: Deactivated successfully. Dec 13 14:13:44.468891 ignition[1265]: INFO : files: op(18): [finished] setting preset to enabled for "nvidia.service" Dec 13 14:13:44.468891 ignition[1265]: INFO : files: op(19): [started] setting preset to enabled for "prepare-helm.service" Dec 13 14:13:44.468891 ignition[1265]: INFO : files: op(19): [finished] setting preset to enabled for "prepare-helm.service" Dec 13 14:13:44.468891 ignition[1265]: INFO : files: createResultFile: createFiles: op(1a): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 13 14:13:44.468891 ignition[1265]: INFO : files: createResultFile: createFiles: op(1a): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 13 14:13:44.468891 ignition[1265]: INFO : files: files passed Dec 13 14:13:44.468891 ignition[1265]: INFO : Ignition finished successfully Dec 13 14:13:44.507512 kernel: audit: type=1130 audit(1734099224.482:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:44.482000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:44.480307 systemd[1]: Finished ignition-files.service. 
Dec 13 14:13:44.493760 systemd[1]: Starting initrd-setup-root-after-ignition.service... Dec 13 14:13:44.513540 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Dec 13 14:13:44.518187 systemd[1]: Starting ignition-quench.service... Dec 13 14:13:44.524507 initrd-setup-root-after-ignition[1290]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 14:13:44.524562 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 13 14:13:44.537754 kernel: audit: type=1130 audit(1734099224.527:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:44.527000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:44.527000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:44.524751 systemd[1]: Finished ignition-quench.service. Dec 13 14:13:44.540507 systemd[1]: Finished initrd-setup-root-after-ignition.service. Dec 13 14:13:44.539000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:44.540947 systemd[1]: Reached target ignition-complete.target. Dec 13 14:13:44.544316 systemd[1]: Starting initrd-parse-etc.service... Dec 13 14:13:44.575614 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 13 14:13:44.577616 systemd[1]: Finished initrd-parse-etc.service. Dec 13 14:13:44.579000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:44.579000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:44.580764 systemd[1]: Reached target initrd-fs.target. Dec 13 14:13:44.583526 systemd[1]: Reached target initrd.target. Dec 13 14:13:44.586282 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Dec 13 14:13:44.590016 systemd[1]: Starting dracut-pre-pivot.service... Dec 13 14:13:44.613204 systemd[1]: Finished dracut-pre-pivot.service. Dec 13 14:13:44.612000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:44.617182 systemd[1]: Starting initrd-cleanup.service... Dec 13 14:13:44.638149 systemd[1]: Stopped target nss-lookup.target. Dec 13 14:13:44.641482 systemd[1]: Stopped target remote-cryptsetup.target. Dec 13 14:13:44.644909 systemd[1]: Stopped target timers.target. Dec 13 14:13:44.647783 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 13 14:13:44.649682 systemd[1]: Stopped dracut-pre-pivot.service. 
Dec 13 14:13:44.651000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:44.653037 systemd[1]: Stopped target initrd.target. Dec 13 14:13:44.655981 systemd[1]: Stopped target basic.target. Dec 13 14:13:44.658753 systemd[1]: Stopped target ignition-complete.target. Dec 13 14:13:44.664109 systemd[1]: Stopped target ignition-diskful.target. Dec 13 14:13:44.667313 systemd[1]: Stopped target initrd-root-device.target. Dec 13 14:13:44.670496 systemd[1]: Stopped target remote-fs.target. Dec 13 14:13:44.675053 systemd[1]: Stopped target remote-fs-pre.target. Dec 13 14:13:44.678173 systemd[1]: Stopped target sysinit.target. Dec 13 14:13:44.681042 systemd[1]: Stopped target local-fs.target. Dec 13 14:13:44.683992 systemd[1]: Stopped target local-fs-pre.target. Dec 13 14:13:44.687105 systemd[1]: Stopped target swap.target. Dec 13 14:13:44.691873 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 13 14:13:44.693728 systemd[1]: Stopped dracut-pre-mount.service. Dec 13 14:13:44.695000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:44.696755 systemd[1]: Stopped target cryptsetup.target. Dec 13 14:13:44.699944 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 13 14:13:44.701861 systemd[1]: Stopped dracut-initqueue.service. Dec 13 14:13:44.703000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:44.704960 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 13 14:13:44.707214 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Dec 13 14:13:44.709000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:44.710743 systemd[1]: ignition-files.service: Deactivated successfully. Dec 13 14:13:44.712652 systemd[1]: Stopped ignition-files.service. Dec 13 14:13:44.714000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:44.724000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:44.727000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:44.730644 iscsid[1114]: iscsid shutting down. Dec 13 14:13:44.717055 systemd[1]: Stopping ignition-mount.service... Dec 13 14:13:44.718951 systemd[1]: Stopping iscsid.service... Dec 13 14:13:44.722163 systemd[1]: Stopping sysroot-boot.service... Dec 13 14:13:44.723549 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 13 14:13:44.723895 systemd[1]: Stopped systemd-udev-trigger.service. Dec 13 14:13:44.725920 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. 
Dec 13 14:13:44.726223 systemd[1]: Stopped dracut-pre-trigger.service. Dec 13 14:13:44.748319 ignition[1303]: INFO : Ignition 2.14.0 Dec 13 14:13:44.750031 ignition[1303]: INFO : Stage: umount Dec 13 14:13:44.750031 ignition[1303]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 14:13:44.750031 ignition[1303]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Dec 13 14:13:44.757697 systemd[1]: iscsid.service: Deactivated successfully. Dec 13 14:13:44.759416 systemd[1]: Stopped iscsid.service. Dec 13 14:13:44.774576 ignition[1303]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 13 14:13:44.774576 ignition[1303]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 13 14:13:44.775000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:44.779986 ignition[1303]: INFO : PUT result: OK Dec 13 14:13:44.785353 ignition[1303]: INFO : umount: umount passed Dec 13 14:13:44.787058 ignition[1303]: INFO : Ignition finished successfully Dec 13 14:13:44.791143 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 13 14:13:44.792965 systemd[1]: Stopped ignition-mount.service. Dec 13 14:13:44.793000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:44.798000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:44.799000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:44.797279 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 13 14:13:44.797523 systemd[1]: Finished initrd-cleanup.service. Dec 13 14:13:44.805679 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 13 14:13:44.805806 systemd[1]: Stopped ignition-disks.service. Dec 13 14:13:44.807000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:44.810213 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 13 14:13:44.810305 systemd[1]: Stopped ignition-kargs.service. Dec 13 14:13:44.811000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:44.813000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:44.813403 systemd[1]: ignition-fetch.service: Deactivated successfully. Dec 13 14:13:44.813485 systemd[1]: Stopped ignition-fetch.service. Dec 13 14:13:44.815240 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 13 14:13:44.815317 systemd[1]: Stopped ignition-fetch-offline.service. 
Dec 13 14:13:44.823000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:44.824555 systemd[1]: Stopped target paths.target. Dec 13 14:13:44.825883 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 13 14:13:44.829429 systemd[1]: Stopped systemd-ask-password-console.path. Dec 13 14:13:44.831189 systemd[1]: Stopped target slices.target. Dec 13 14:13:44.832544 systemd[1]: Stopped target sockets.target. Dec 13 14:13:44.834083 systemd[1]: iscsid.socket: Deactivated successfully. Dec 13 14:13:44.834160 systemd[1]: Closed iscsid.socket. Dec 13 14:13:44.835882 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 13 14:13:44.835965 systemd[1]: Stopped ignition-setup.service. Dec 13 14:13:44.844000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:44.848926 systemd[1]: Stopping iscsiuio.service... Dec 13 14:13:44.854120 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 13 14:13:44.856421 systemd[1]: iscsiuio.service: Deactivated successfully. Dec 13 14:13:44.858141 systemd[1]: Stopped iscsiuio.service. Dec 13 14:13:44.858000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:44.859786 systemd[1]: Stopped target network.target. Dec 13 14:13:44.862828 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 13 14:13:44.862925 systemd[1]: Closed iscsiuio.socket. Dec 13 14:13:44.864835 systemd[1]: Stopping systemd-networkd.service... Dec 13 14:13:44.868706 systemd[1]: Stopping systemd-resolved.service... Dec 13 14:13:44.871932 systemd-networkd[1109]: eth0: DHCPv6 lease lost Dec 13 14:13:44.874875 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 13 14:13:44.883000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:44.875073 systemd[1]: Stopped sysroot-boot.service. Dec 13 14:13:44.886577 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 13 14:13:44.899000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:44.903000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:44.903000 audit: BPF prog-id=6 op=UNLOAD Dec 13 14:13:44.887695 systemd[1]: Stopped systemd-resolved.service. Dec 13 14:13:44.906000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:44.907000 audit: BPF prog-id=9 op=UNLOAD Dec 13 14:13:44.902778 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 13 14:13:44.915000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Dec 13 14:13:44.917000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:44.903001 systemd[1]: Stopped systemd-networkd.service. Dec 13 14:13:44.920000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:44.905093 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 13 14:13:44.905161 systemd[1]: Closed systemd-networkd.socket. Dec 13 14:13:44.906693 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 13 14:13:44.906773 systemd[1]: Stopped initrd-setup-root.service. Dec 13 14:13:44.909982 systemd[1]: Stopping network-cleanup.service... Dec 13 14:13:44.913908 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 13 14:13:44.914182 systemd[1]: Stopped parse-ip-for-networkd.service. Dec 13 14:13:44.945000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:44.917303 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 14:13:44.917402 systemd[1]: Stopped systemd-sysctl.service. Dec 13 14:13:44.919062 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 13 14:13:44.919208 systemd[1]: Stopped systemd-modules-load.service. Dec 13 14:13:44.922228 systemd[1]: Stopping systemd-udevd.service... Dec 13 14:13:44.925578 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Dec 13 14:13:44.941774 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 13 14:13:44.943980 systemd[1]: Stopped network-cleanup.service. Dec 13 14:13:44.964787 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 13 14:13:44.966000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:44.965092 systemd[1]: Stopped systemd-udevd.service. Dec 13 14:13:44.968755 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 13 14:13:44.968833 systemd[1]: Closed systemd-udevd-control.socket. Dec 13 14:13:44.975872 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 13 14:13:44.980000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:44.975947 systemd[1]: Closed systemd-udevd-kernel.socket. Dec 13 14:13:44.977502 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 13 14:13:44.977582 systemd[1]: Stopped dracut-pre-udev.service. Dec 13 14:13:44.989000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:44.981712 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 13 14:13:44.981790 systemd[1]: Stopped dracut-cmdline.service. Dec 13 14:13:44.991421 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 14:13:44.991503 systemd[1]: Stopped dracut-cmdline-ask.service. 
Dec 13 14:13:45.000000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:45.013335 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Dec 13 14:13:45.016851 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Dec 13 14:13:45.017968 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. Dec 13 14:13:45.020000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:45.022537 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 13 14:13:45.024484 systemd[1]: Stopped kmod-static-nodes.service. Dec 13 14:13:45.026000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:45.027628 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 14:13:45.029638 systemd[1]: Stopped systemd-vconsole-setup.service. Dec 13 14:13:45.031000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:45.035251 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Dec 13 14:13:45.045890 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 13 14:13:45.047904 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Dec 13 14:13:45.048000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:45.048000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:45.051826 systemd[1]: Reached target initrd-switch-root.target. Dec 13 14:13:45.060424 systemd[1]: Starting initrd-switch-root.service... Dec 13 14:13:45.075628 systemd[1]: Switching root. Dec 13 14:13:45.103170 systemd-journald[310]: Journal stopped Dec 13 14:13:51.907472 systemd-journald[310]: Received SIGTERM from PID 1 (systemd). Dec 13 14:13:51.907588 kernel: SELinux: Class mctp_socket not defined in policy. Dec 13 14:13:51.907632 kernel: SELinux: Class anon_inode not defined in policy. Dec 13 14:13:51.907664 kernel: SELinux: the above unknown classes and permissions will be allowed Dec 13 14:13:51.907700 kernel: SELinux: policy capability network_peer_controls=1 Dec 13 14:13:51.907732 kernel: SELinux: policy capability open_perms=1 Dec 13 14:13:51.907767 kernel: SELinux: policy capability extended_socket_class=1 Dec 13 14:13:51.907799 kernel: SELinux: policy capability always_check_network=0 Dec 13 14:13:51.907828 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 13 14:13:51.907858 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 13 14:13:51.907887 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 13 14:13:51.907915 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 13 14:13:51.907944 systemd[1]: Successfully loaded SELinux policy in 122.098ms. 
Dec 13 14:13:51.907999 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 30.184ms. Dec 13 14:13:51.908034 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Dec 13 14:13:51.908065 systemd[1]: Detected virtualization amazon. Dec 13 14:13:51.908094 systemd[1]: Detected architecture arm64. Dec 13 14:13:51.908125 systemd[1]: Detected first boot. Dec 13 14:13:51.908158 systemd[1]: Initializing machine ID from VM UUID. Dec 13 14:13:51.908189 kernel: kauditd_printk_skb: 42 callbacks suppressed Dec 13 14:13:51.908223 kernel: audit: type=1400 audit(1734099226.495:78): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Dec 13 14:13:51.908256 kernel: audit: type=1400 audit(1734099226.496:79): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Dec 13 14:13:51.908290 kernel: audit: type=1334 audit(1734099226.497:80): prog-id=10 op=LOAD Dec 13 14:13:51.908320 kernel: audit: type=1334 audit(1734099226.497:81): prog-id=10 op=UNLOAD Dec 13 14:13:51.908359 kernel: audit: type=1334 audit(1734099226.503:82): prog-id=11 op=LOAD Dec 13 14:13:51.908426 kernel: audit: type=1334 audit(1734099226.503:83): prog-id=11 op=UNLOAD Dec 13 14:13:51.908482 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Dec 13 14:13:51.908521 kernel: audit: type=1400 audit(1734099226.782:84): avc: denied { associate } for pid=1337 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Dec 13 14:13:51.908557 kernel: audit: type=1300 audit(1734099226.782:84): arch=c00000b7 syscall=5 success=yes exit=0 a0=40001458ac a1=40000c6de0 a2=40000cd0c0 a3=32 items=0 ppid=1320 pid=1337 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:13:51.908598 kernel: audit: type=1327 audit(1734099226.782:84): proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Dec 13 14:13:51.908702 kernel: audit: type=1400 audit(1734099226.790:85): avc: denied { associate } for pid=1337 comm="torcx-generator" name="lib" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Dec 13 14:13:51.909243 systemd[1]: Populated /etc with preset unit settings. Dec 13 14:13:51.909284 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 14:13:51.909318 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. 
Support for MemoryLimit= will be removed soon. Dec 13 14:13:51.909356 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 14:13:51.909898 systemd[1]: initrd-switch-root.service: Deactivated successfully. Dec 13 14:13:51.909936 systemd[1]: Stopped initrd-switch-root.service. Dec 13 14:13:51.909978 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Dec 13 14:13:51.910012 systemd[1]: Created slice system-addon\x2dconfig.slice. Dec 13 14:13:51.910044 systemd[1]: Created slice system-addon\x2drun.slice. Dec 13 14:13:51.910077 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice. Dec 13 14:13:51.910112 systemd[1]: Created slice system-getty.slice. Dec 13 14:13:51.910144 systemd[1]: Created slice system-modprobe.slice. Dec 13 14:13:51.910175 systemd[1]: Created slice system-serial\x2dgetty.slice. Dec 13 14:13:51.910207 systemd[1]: Created slice system-system\x2dcloudinit.slice. Dec 13 14:13:51.910239 systemd[1]: Created slice system-systemd\x2dfsck.slice. Dec 13 14:13:51.910271 systemd[1]: Created slice user.slice. Dec 13 14:13:51.910300 systemd[1]: Started systemd-ask-password-console.path. Dec 13 14:13:51.910329 systemd[1]: Started systemd-ask-password-wall.path. Dec 13 14:13:51.910358 systemd[1]: Set up automount boot.automount. Dec 13 14:13:51.910432 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Dec 13 14:13:51.910489 systemd[1]: Stopped target initrd-switch-root.target. Dec 13 14:13:51.910524 systemd[1]: Stopped target initrd-fs.target. Dec 13 14:13:51.910554 systemd[1]: Stopped target initrd-root-fs.target. Dec 13 14:13:51.910584 systemd[1]: Reached target integritysetup.target. Dec 13 14:13:51.910614 systemd[1]: Reached target remote-cryptsetup.target. Dec 13 14:13:51.910646 systemd[1]: Reached target remote-fs.target. Dec 13 14:13:51.910676 systemd[1]: Reached target slices.target. Dec 13 14:13:51.910707 systemd[1]: Reached target swap.target. Dec 13 14:13:51.910742 systemd[1]: Reached target torcx.target. Dec 13 14:13:51.910774 systemd[1]: Reached target veritysetup.target. Dec 13 14:13:51.910804 systemd[1]: Listening on systemd-coredump.socket. Dec 13 14:13:51.910833 systemd[1]: Listening on systemd-initctl.socket. Dec 13 14:13:51.910868 systemd[1]: Listening on systemd-networkd.socket. Dec 13 14:13:51.910900 systemd[1]: Listening on systemd-udevd-control.socket. Dec 13 14:13:51.910930 systemd[1]: Listening on systemd-udevd-kernel.socket. Dec 13 14:13:51.910960 systemd[1]: Listening on systemd-userdbd.socket. Dec 13 14:13:51.910989 systemd[1]: Mounting dev-hugepages.mount... Dec 13 14:13:51.911022 systemd[1]: Mounting dev-mqueue.mount... Dec 13 14:13:51.911055 systemd[1]: Mounting media.mount... Dec 13 14:13:51.911095 systemd[1]: Mounting sys-kernel-debug.mount... Dec 13 14:13:51.911127 systemd[1]: Mounting sys-kernel-tracing.mount... Dec 13 14:13:51.911158 systemd[1]: Mounting tmp.mount... Dec 13 14:13:51.911187 systemd[1]: Starting flatcar-tmpfiles.service... Dec 13 14:13:51.911217 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 14:13:51.911247 systemd[1]: Starting kmod-static-nodes.service... Dec 13 14:13:51.911279 systemd[1]: Starting modprobe@configfs.service... Dec 13 14:13:51.911310 systemd[1]: Starting modprobe@dm_mod.service... 
Dec 13 14:13:51.911344 systemd[1]: Starting modprobe@drm.service... Dec 13 14:13:51.911417 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 14:13:51.911454 systemd[1]: Starting modprobe@fuse.service... Dec 13 14:13:51.911484 systemd[1]: Starting modprobe@loop.service... Dec 13 14:13:51.911515 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 13 14:13:51.911548 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Dec 13 14:13:51.911578 kernel: fuse: init (API version 7.34) Dec 13 14:13:51.911610 systemd[1]: Stopped systemd-fsck-root.service. Dec 13 14:13:51.911645 kernel: kauditd_printk_skb: 21 callbacks suppressed Dec 13 14:13:51.911675 kernel: audit: type=1131 audit(1734099231.698:102): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:51.911705 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Dec 13 14:13:51.911735 systemd[1]: Stopped systemd-fsck-usr.service. Dec 13 14:13:51.911770 kernel: audit: type=1131 audit(1734099231.713:103): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:51.911801 systemd[1]: Stopped systemd-journald.service. Dec 13 14:13:51.911830 kernel: audit: type=1130 audit(1734099231.723:104): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:51.911859 systemd[1]: Starting systemd-journald.service... Dec 13 14:13:51.911893 kernel: audit: type=1131 audit(1734099231.723:105): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:51.911921 kernel: audit: type=1334 audit(1734099231.732:106): prog-id=18 op=LOAD Dec 13 14:13:51.911951 systemd[1]: Starting systemd-modules-load.service... Dec 13 14:13:51.911980 kernel: audit: type=1334 audit(1734099231.732:107): prog-id=19 op=LOAD Dec 13 14:13:51.912011 kernel: audit: type=1334 audit(1734099231.732:108): prog-id=20 op=LOAD Dec 13 14:13:51.912040 systemd[1]: Starting systemd-network-generator.service... Dec 13 14:13:51.912071 kernel: audit: type=1334 audit(1734099231.732:109): prog-id=16 op=UNLOAD Dec 13 14:13:51.912100 kernel: audit: type=1334 audit(1734099231.732:110): prog-id=17 op=UNLOAD Dec 13 14:13:51.912133 systemd[1]: Starting systemd-remount-fs.service... Dec 13 14:13:51.912163 systemd[1]: Starting systemd-udev-trigger.service... Dec 13 14:13:51.912194 kernel: loop: module loaded Dec 13 14:13:51.912224 systemd[1]: verity-setup.service: Deactivated successfully. Dec 13 14:13:51.912256 systemd[1]: Stopped verity-setup.service. Dec 13 14:13:51.912286 kernel: audit: type=1131 audit(1734099231.792:111): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:51.912317 systemd[1]: Mounted dev-hugepages.mount. Dec 13 14:13:51.912346 systemd[1]: Mounted dev-mqueue.mount. Dec 13 14:13:51.912406 systemd[1]: Mounted media.mount. 
Dec 13 14:13:51.912448 systemd[1]: Mounted sys-kernel-debug.mount. Dec 13 14:13:51.912478 systemd[1]: Mounted sys-kernel-tracing.mount. Dec 13 14:13:51.912508 systemd[1]: Mounted tmp.mount. Dec 13 14:13:51.912537 systemd[1]: Finished kmod-static-nodes.service. Dec 13 14:13:51.927477 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 13 14:13:51.927527 systemd[1]: Finished modprobe@configfs.service. Dec 13 14:13:51.927560 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 14:13:51.927599 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 14:13:51.927630 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 14:13:51.927661 systemd[1]: Finished modprobe@drm.service. Dec 13 14:13:51.927695 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 14:13:51.927726 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 14:13:51.927757 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 13 14:13:51.927786 systemd[1]: Finished modprobe@fuse.service. Dec 13 14:13:51.927816 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 14:13:51.927845 systemd[1]: Finished modprobe@loop.service. Dec 13 14:13:51.927875 systemd[1]: Finished systemd-modules-load.service. Dec 13 14:13:51.927906 systemd[1]: Finished systemd-network-generator.service. Dec 13 14:13:51.927936 systemd[1]: Finished systemd-remount-fs.service. Dec 13 14:13:51.927969 systemd[1]: Reached target network-pre.target. Dec 13 14:13:51.927999 systemd[1]: Mounting sys-fs-fuse-connections.mount... Dec 13 14:13:51.928031 systemd[1]: Mounting sys-kernel-config.mount... Dec 13 14:13:51.928064 systemd-journald[1415]: Journal started Dec 13 14:13:51.928162 systemd-journald[1415]: Runtime Journal (/run/log/journal/ec20ca936cb4ec663c84d31a057ab6cb) is 8.0M, max 75.4M, 67.4M free. 
Dec 13 14:13:46.303000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Dec 13 14:13:46.495000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Dec 13 14:13:46.496000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Dec 13 14:13:46.497000 audit: BPF prog-id=10 op=LOAD Dec 13 14:13:46.497000 audit: BPF prog-id=10 op=UNLOAD Dec 13 14:13:46.503000 audit: BPF prog-id=11 op=LOAD Dec 13 14:13:46.503000 audit: BPF prog-id=11 op=UNLOAD Dec 13 14:13:46.782000 audit[1337]: AVC avc: denied { associate } for pid=1337 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Dec 13 14:13:46.782000 audit[1337]: SYSCALL arch=c00000b7 syscall=5 success=yes exit=0 a0=40001458ac a1=40000c6de0 a2=40000cd0c0 a3=32 items=0 ppid=1320 pid=1337 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:13:46.782000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Dec 13 14:13:46.790000 audit[1337]: AVC avc: denied { associate } for pid=1337 comm="torcx-generator" name="lib" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Dec 13 14:13:46.790000 audit[1337]: SYSCALL arch=c00000b7 syscall=34 success=yes exit=0 a0=ffffffffffffff9c a1=4000145985 a2=1ed a3=0 items=2 ppid=1320 pid=1337 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:13:46.790000 audit: CWD cwd="/" Dec 13 14:13:46.790000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:13:46.790000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:13:46.790000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Dec 13 14:13:51.454000 audit: BPF prog-id=12 op=LOAD Dec 13 14:13:51.454000 audit: BPF prog-id=3 op=UNLOAD Dec 13 14:13:51.455000 audit: BPF prog-id=13 op=LOAD Dec 13 14:13:51.455000 audit: BPF prog-id=14 op=LOAD Dec 13 14:13:51.455000 audit: BPF prog-id=4 op=UNLOAD Dec 13 14:13:51.455000 audit: BPF prog-id=5 op=UNLOAD Dec 13 14:13:51.458000 audit: BPF prog-id=15 op=LOAD Dec 13 14:13:51.458000 audit: BPF prog-id=12 op=UNLOAD Dec 13 
14:13:51.458000 audit: BPF prog-id=16 op=LOAD Dec 13 14:13:51.458000 audit: BPF prog-id=17 op=LOAD Dec 13 14:13:51.458000 audit: BPF prog-id=13 op=UNLOAD Dec 13 14:13:51.458000 audit: BPF prog-id=14 op=UNLOAD Dec 13 14:13:51.460000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:51.467000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:51.467000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:51.471000 audit: BPF prog-id=15 op=UNLOAD Dec 13 14:13:51.698000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:51.713000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:51.723000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:51.723000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:51.732000 audit: BPF prog-id=18 op=LOAD Dec 13 14:13:51.732000 audit: BPF prog-id=19 op=LOAD Dec 13 14:13:51.732000 audit: BPF prog-id=20 op=LOAD Dec 13 14:13:51.732000 audit: BPF prog-id=16 op=UNLOAD Dec 13 14:13:51.732000 audit: BPF prog-id=17 op=UNLOAD Dec 13 14:13:51.792000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:51.829000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:51.839000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:51.839000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:51.851000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:13:51.851000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:51.859000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:51.859000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:51.867000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:51.867000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:51.876000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:51.876000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:51.883000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:51.883000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:51.943531 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 13 14:13:51.943588 systemd[1]: Starting systemd-hwdb-update.service... Dec 13 14:13:51.943629 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 14:13:51.888000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:51.898000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:51.902000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:13:51.903000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Dec 13 14:13:51.903000 audit[1415]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=3 a1=ffffeb273d80 a2=4000 a3=1 items=0 ppid=1 pid=1415 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:13:51.903000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Dec 13 14:13:51.453721 systemd[1]: Queued start job for default target multi-user.target. Dec 13 14:13:46.780772 /usr/lib/systemd/system-generators/torcx-generator[1337]: time="2024-12-13T14:13:46Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 14:13:51.460903 systemd[1]: systemd-journald.service: Deactivated successfully. Dec 13 14:13:46.781463 /usr/lib/systemd/system-generators/torcx-generator[1337]: time="2024-12-13T14:13:46Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Dec 13 14:13:46.781511 /usr/lib/systemd/system-generators/torcx-generator[1337]: time="2024-12-13T14:13:46Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Dec 13 14:13:46.781576 /usr/lib/systemd/system-generators/torcx-generator[1337]: time="2024-12-13T14:13:46Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Dec 13 14:13:46.781602 /usr/lib/systemd/system-generators/torcx-generator[1337]: time="2024-12-13T14:13:46Z" level=debug msg="skipped missing lower profile" missing profile=oem Dec 13 14:13:46.781659 /usr/lib/systemd/system-generators/torcx-generator[1337]: time="2024-12-13T14:13:46Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Dec 13 14:13:46.781688 /usr/lib/systemd/system-generators/torcx-generator[1337]: time="2024-12-13T14:13:46Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Dec 13 14:13:46.782101 /usr/lib/systemd/system-generators/torcx-generator[1337]: time="2024-12-13T14:13:46Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Dec 13 14:13:46.782177 /usr/lib/systemd/system-generators/torcx-generator[1337]: time="2024-12-13T14:13:46Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Dec 13 14:13:46.782214 /usr/lib/systemd/system-generators/torcx-generator[1337]: time="2024-12-13T14:13:46Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Dec 13 14:13:46.783079 /usr/lib/systemd/system-generators/torcx-generator[1337]: time="2024-12-13T14:13:46Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Dec 13 14:13:51.956814 systemd[1]: Starting systemd-random-seed.service... Dec 13 14:13:51.956899 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. 
Dec 13 14:13:46.783160 /usr/lib/systemd/system-generators/torcx-generator[1337]: time="2024-12-13T14:13:46Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Dec 13 14:13:46.783203 /usr/lib/systemd/system-generators/torcx-generator[1337]: time="2024-12-13T14:13:46Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.6: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.6 Dec 13 14:13:46.783243 /usr/lib/systemd/system-generators/torcx-generator[1337]: time="2024-12-13T14:13:46Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Dec 13 14:13:46.783286 /usr/lib/systemd/system-generators/torcx-generator[1337]: time="2024-12-13T14:13:46Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.6: no such file or directory" path=/var/lib/torcx/store/3510.3.6 Dec 13 14:13:46.783324 /usr/lib/systemd/system-generators/torcx-generator[1337]: time="2024-12-13T14:13:46Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Dec 13 14:13:50.512184 /usr/lib/systemd/system-generators/torcx-generator[1337]: time="2024-12-13T14:13:50Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Dec 13 14:13:50.512728 /usr/lib/systemd/system-generators/torcx-generator[1337]: time="2024-12-13T14:13:50Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Dec 13 14:13:50.512972 /usr/lib/systemd/system-generators/torcx-generator[1337]: time="2024-12-13T14:13:50Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Dec 13 14:13:50.513463 /usr/lib/systemd/system-generators/torcx-generator[1337]: time="2024-12-13T14:13:50Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Dec 13 14:13:50.513576 /usr/lib/systemd/system-generators/torcx-generator[1337]: time="2024-12-13T14:13:50Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Dec 13 14:13:51.962448 systemd[1]: Starting systemd-sysctl.service... Dec 13 14:13:50.513709 /usr/lib/systemd/system-generators/torcx-generator[1337]: time="2024-12-13T14:13:50Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Dec 13 14:13:51.976637 systemd[1]: Started systemd-journald.service. Dec 13 14:13:51.967000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:13:51.973435 systemd[1]: Mounted sys-fs-fuse-connections.mount. Dec 13 14:13:51.977727 systemd[1]: Mounted sys-kernel-config.mount. Dec 13 14:13:51.982450 systemd[1]: Starting systemd-journal-flush.service... Dec 13 14:13:51.995448 systemd[1]: Finished systemd-random-seed.service. Dec 13 14:13:51.996000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:51.997588 systemd[1]: Reached target first-boot-complete.target. Dec 13 14:13:52.009023 systemd[1]: Finished flatcar-tmpfiles.service. Dec 13 14:13:52.009000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:52.013205 systemd[1]: Starting systemd-sysusers.service... Dec 13 14:13:52.030723 systemd[1]: Finished systemd-sysctl.service. Dec 13 14:13:52.031000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:52.034073 systemd-journald[1415]: Time spent on flushing to /var/log/journal/ec20ca936cb4ec663c84d31a057ab6cb is 52.781ms for 1158 entries. Dec 13 14:13:52.034073 systemd-journald[1415]: System Journal (/var/log/journal/ec20ca936cb4ec663c84d31a057ab6cb) is 8.0M, max 195.6M, 187.6M free. Dec 13 14:13:52.115440 systemd-journald[1415]: Received client request to flush runtime journal. Dec 13 14:13:52.109000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:52.118000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:52.108480 systemd[1]: Finished systemd-udev-trigger.service. Dec 13 14:13:52.112905 systemd[1]: Starting systemd-udev-settle.service... Dec 13 14:13:52.117360 systemd[1]: Finished systemd-journal-flush.service. Dec 13 14:13:52.133368 udevadm[1456]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Dec 13 14:13:52.280162 systemd[1]: Finished systemd-sysusers.service. Dec 13 14:13:52.280000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:52.284049 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Dec 13 14:13:52.416054 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Dec 13 14:13:52.417000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:52.885838 systemd[1]: Finished systemd-hwdb-update.service. 
Dec 13 14:13:52.886000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:52.887000 audit: BPF prog-id=21 op=LOAD Dec 13 14:13:52.887000 audit: BPF prog-id=22 op=LOAD Dec 13 14:13:52.887000 audit: BPF prog-id=7 op=UNLOAD Dec 13 14:13:52.887000 audit: BPF prog-id=8 op=UNLOAD Dec 13 14:13:52.889742 systemd[1]: Starting systemd-udevd.service... Dec 13 14:13:52.928758 systemd-udevd[1459]: Using default interface naming scheme 'v252'. Dec 13 14:13:52.974743 systemd[1]: Started systemd-udevd.service. Dec 13 14:13:52.975000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:52.976000 audit: BPF prog-id=23 op=LOAD Dec 13 14:13:52.979307 systemd[1]: Starting systemd-networkd.service... Dec 13 14:13:52.985000 audit: BPF prog-id=24 op=LOAD Dec 13 14:13:52.986000 audit: BPF prog-id=25 op=LOAD Dec 13 14:13:52.986000 audit: BPF prog-id=26 op=LOAD Dec 13 14:13:52.988801 systemd[1]: Starting systemd-userdbd.service... Dec 13 14:13:53.050073 (udev-worker)[1460]: Network interface NamePolicy= disabled on kernel command line. Dec 13 14:13:53.077716 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped. Dec 13 14:13:53.093587 systemd[1]: Started systemd-userdbd.service. Dec 13 14:13:53.094000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:53.250869 systemd-networkd[1462]: lo: Link UP Dec 13 14:13:53.250890 systemd-networkd[1462]: lo: Gained carrier Dec 13 14:13:53.251832 systemd-networkd[1462]: Enumeration completed Dec 13 14:13:53.252016 systemd[1]: Started systemd-networkd.service. Dec 13 14:13:53.252000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:53.255880 systemd[1]: Starting systemd-networkd-wait-online.service... Dec 13 14:13:53.258113 systemd-networkd[1462]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 14:13:53.266408 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Dec 13 14:13:53.266920 systemd-networkd[1462]: eth0: Link UP Dec 13 14:13:53.267276 systemd-networkd[1462]: eth0: Gained carrier Dec 13 14:13:53.275634 systemd-networkd[1462]: eth0: DHCPv4 address 172.31.21.141/20, gateway 172.31.16.1 acquired from 172.31.16.1 Dec 13 14:13:53.323424 kernel: BTRFS info: devid 1 device path /dev/disk/by-label/OEM changed to /dev/nvme0n1p6 scanned by (udev-worker) (1472) Dec 13 14:13:53.433597 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Dec 13 14:13:53.436145 systemd[1]: Finished systemd-udev-settle.service. Dec 13 14:13:53.436000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:53.440028 systemd[1]: Starting lvm2-activation-early.service... Dec 13 14:13:53.521475 lvm[1578]: WARNING: Failed to connect to lvmetad. 
Falling back to device scanning. Dec 13 14:13:53.558949 systemd[1]: Finished lvm2-activation-early.service. Dec 13 14:13:53.559000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:53.560871 systemd[1]: Reached target cryptsetup.target. Dec 13 14:13:53.564695 systemd[1]: Starting lvm2-activation.service... Dec 13 14:13:53.573117 lvm[1579]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 14:13:53.606075 systemd[1]: Finished lvm2-activation.service. Dec 13 14:13:53.606000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:53.607925 systemd[1]: Reached target local-fs-pre.target. Dec 13 14:13:53.609607 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 13 14:13:53.609662 systemd[1]: Reached target local-fs.target. Dec 13 14:13:53.611242 systemd[1]: Reached target machines.target. Dec 13 14:13:53.614950 systemd[1]: Starting ldconfig.service... Dec 13 14:13:53.617038 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 14:13:53.617181 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:13:53.619560 systemd[1]: Starting systemd-boot-update.service... Dec 13 14:13:53.623363 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Dec 13 14:13:53.627875 systemd[1]: Starting systemd-machine-id-commit.service... Dec 13 14:13:53.633295 systemd[1]: Starting systemd-sysext.service... Dec 13 14:13:53.643880 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1581 (bootctl) Dec 13 14:13:53.646422 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Dec 13 14:13:53.668912 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Dec 13 14:13:53.669000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:53.685335 systemd[1]: Unmounting usr-share-oem.mount... Dec 13 14:13:53.697233 systemd[1]: usr-share-oem.mount: Deactivated successfully. Dec 13 14:13:53.697612 systemd[1]: Unmounted usr-share-oem.mount. Dec 13 14:13:53.732425 kernel: loop0: detected capacity change from 0 to 189592 Dec 13 14:13:53.785555 systemd-fsck[1590]: fsck.fat 4.2 (2021-01-31) Dec 13 14:13:53.785555 systemd-fsck[1590]: /dev/nvme0n1p1: 236 files, 117175/258078 clusters Dec 13 14:13:53.788840 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Dec 13 14:13:53.789000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:53.793498 systemd[1]: Mounting boot.mount... Dec 13 14:13:53.812761 systemd[1]: Mounted boot.mount. 
Dec 13 14:13:53.837316 systemd[1]: Finished systemd-boot-update.service. Dec 13 14:13:53.837000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:54.024420 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 13 14:13:54.052504 kernel: loop1: detected capacity change from 0 to 189592 Dec 13 14:13:54.068118 (sd-sysext)[1607]: Using extensions 'kubernetes'. Dec 13 14:13:54.069477 (sd-sysext)[1607]: Merged extensions into '/usr'. Dec 13 14:13:54.109314 systemd[1]: Mounting usr-share-oem.mount... Dec 13 14:13:54.111136 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 14:13:54.115503 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 14:13:54.122601 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 14:13:54.127123 systemd[1]: Starting modprobe@loop.service... Dec 13 14:13:54.128736 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 14:13:54.129054 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:13:54.135358 systemd[1]: Mounted usr-share-oem.mount. Dec 13 14:13:54.137764 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 14:13:54.138076 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 14:13:54.138000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:54.138000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:54.140638 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 14:13:54.140920 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 14:13:54.141000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:54.141000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:54.143810 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 14:13:54.144095 systemd[1]: Finished modprobe@loop.service. Dec 13 14:13:54.144000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:54.144000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:54.146746 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
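The '(sd-sysext)' lines above show systemd-sysext activating the 'kubernetes' extension image by stacking an overlay onto /usr. As a minimal sketch, the merged state can be inspected on a running node with standard systemd/util-linux tools (these commands are an aside, not taken from this log):

    systemd-sysext status    # lists merged extension images per hierarchy (systemd 248+)
    findmnt /usr             # shows the overlay mount created by the merge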
Dec 13 14:13:54.146920 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 14:13:54.149044 systemd[1]: Finished systemd-sysext.service. Dec 13 14:13:54.149000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:54.153643 systemd[1]: Starting ensure-sysext.service... Dec 13 14:13:54.157319 systemd[1]: Starting systemd-tmpfiles-setup.service... Dec 13 14:13:54.176006 systemd[1]: Reloading. Dec 13 14:13:54.200168 systemd-tmpfiles[1614]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Dec 13 14:13:54.202614 systemd-tmpfiles[1614]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 13 14:13:54.211516 systemd-tmpfiles[1614]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Dec 13 14:13:54.276245 /usr/lib/systemd/system-generators/torcx-generator[1633]: time="2024-12-13T14:13:54Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 14:13:54.276310 /usr/lib/systemd/system-generators/torcx-generator[1633]: time="2024-12-13T14:13:54Z" level=info msg="torcx already run" Dec 13 14:13:54.521997 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 14:13:54.522036 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 14:13:54.567320 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 14:13:54.706000 audit: BPF prog-id=27 op=LOAD Dec 13 14:13:54.707000 audit: BPF prog-id=23 op=UNLOAD Dec 13 14:13:54.707000 audit: BPF prog-id=28 op=LOAD Dec 13 14:13:54.708000 audit: BPF prog-id=29 op=LOAD Dec 13 14:13:54.708000 audit: BPF prog-id=21 op=UNLOAD Dec 13 14:13:54.708000 audit: BPF prog-id=22 op=UNLOAD Dec 13 14:13:54.715000 audit: BPF prog-id=30 op=LOAD Dec 13 14:13:54.715000 audit: BPF prog-id=24 op=UNLOAD Dec 13 14:13:54.716000 audit: BPF prog-id=31 op=LOAD Dec 13 14:13:54.716000 audit: BPF prog-id=32 op=LOAD Dec 13 14:13:54.716000 audit: BPF prog-id=25 op=UNLOAD Dec 13 14:13:54.716000 audit: BPF prog-id=26 op=UNLOAD Dec 13 14:13:54.720000 audit: BPF prog-id=33 op=LOAD Dec 13 14:13:54.720000 audit: BPF prog-id=18 op=UNLOAD Dec 13 14:13:54.720000 audit: BPF prog-id=34 op=LOAD Dec 13 14:13:54.720000 audit: BPF prog-id=35 op=LOAD Dec 13 14:13:54.721000 audit: BPF prog-id=19 op=UNLOAD Dec 13 14:13:54.721000 audit: BPF prog-id=20 op=UNLOAD Dec 13 14:13:54.734670 systemd[1]: Finished systemd-tmpfiles-setup.service. Dec 13 14:13:54.735000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:54.744967 systemd[1]: Starting audit-rules.service... 
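The reload above also surfaces unit hygiene warnings: locksmithd.service still carries the cgroup-v1 directives CPUShares= and MemoryLimit=, and docker.socket listens on the legacy /var/run/docker.sock path. A hedged sketch of drop-ins that would address those warnings follows; the values are illustrative, since the actual limits in the shipped unit are not shown in this log:

    # /etc/systemd/system/locksmithd.service.d/10-cgroup-v2.conf (illustrative)
    [Service]
    CPUWeight=100       # successor to CPUShares= (default weight 100 vs. default shares 1024)
    MemoryMax=128M      # successor to MemoryLimit=

    # /etc/systemd/system/docker.socket.d/10-runtime-dir.conf (illustrative)
    [Socket]
    ListenStream=                      # empty assignment clears the inherited /var/run/docker.sock entry
    ListenStream=/run/docker.sock      # re-add the socket under the canonical /run path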
Dec 13 14:13:54.749280 systemd[1]: Starting clean-ca-certificates.service... Dec 13 14:13:54.757491 systemd[1]: Starting systemd-journal-catalog-update.service... Dec 13 14:13:54.759000 audit: BPF prog-id=36 op=LOAD Dec 13 14:13:54.762999 systemd[1]: Starting systemd-resolved.service... Dec 13 14:13:54.765000 audit: BPF prog-id=37 op=LOAD Dec 13 14:13:54.769907 systemd[1]: Starting systemd-timesyncd.service... Dec 13 14:13:54.775656 systemd[1]: Starting systemd-update-utmp.service... Dec 13 14:13:54.799000 audit[1696]: SYSTEM_BOOT pid=1696 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Dec 13 14:13:54.786599 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 14:13:54.792315 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 14:13:54.796596 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 14:13:54.802900 systemd[1]: Starting modprobe@loop.service... Dec 13 14:13:54.804563 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 14:13:54.804849 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:13:54.812000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:54.812125 systemd[1]: Finished clean-ca-certificates.service. Dec 13 14:13:54.817651 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 14:13:54.819505 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 14:13:54.820190 systemd[1]: Finished modprobe@loop.service. Dec 13 14:13:54.821000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:54.821000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:54.826000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:54.825982 systemd[1]: Finished systemd-update-utmp.service. Dec 13 14:13:54.831588 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 14:13:54.835226 systemd[1]: Starting modprobe@loop.service... Dec 13 14:13:54.838015 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 14:13:54.838291 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). 
Dec 13 14:13:54.838543 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 14:13:54.845107 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 14:13:54.852157 systemd[1]: Starting modprobe@drm.service... Dec 13 14:13:54.857000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:54.857000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:54.863000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:54.853974 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 14:13:54.870000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:54.870000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:54.874000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:54.874000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:54.879000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:54.881000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:54.881000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:54.854250 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:13:54.854550 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 14:13:54.856661 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 14:13:54.856954 systemd[1]: Finished modprobe@loop.service. Dec 13 14:13:54.863329 systemd[1]: Finished ensure-sysext.service. 
Dec 13 14:13:54.869532 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 14:13:54.869823 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 14:13:54.871822 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 14:13:54.873772 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 14:13:54.874038 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 14:13:54.876020 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 14:13:54.878709 systemd[1]: Finished systemd-journal-catalog-update.service. Dec 13 14:13:54.881140 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 14:13:54.881467 systemd[1]: Finished modprobe@drm.service. Dec 13 14:13:54.896531 systemd-networkd[1462]: eth0: Gained IPv6LL Dec 13 14:13:54.903000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:13:54.902888 systemd[1]: Finished systemd-networkd-wait-online.service. Dec 13 14:13:54.973000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Dec 13 14:13:54.973000 audit[1715]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffcb9a2880 a2=420 a3=0 items=0 ppid=1690 pid=1715 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:13:54.973000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 13 14:13:54.975294 augenrules[1715]: No rules Dec 13 14:13:54.976950 systemd[1]: Finished audit-rules.service. Dec 13 14:13:54.990829 systemd[1]: Started systemd-timesyncd.service. Dec 13 14:13:54.992647 systemd[1]: Reached target time-set.target. Dec 13 14:13:55.012106 systemd-resolved[1694]: Positive Trust Anchors: Dec 13 14:13:55.012739 systemd-resolved[1694]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 14:13:55.012917 systemd-resolved[1694]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Dec 13 14:13:55.019715 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 13 14:13:55.020780 systemd[1]: Finished systemd-machine-id-commit.service. Dec 13 14:13:55.060108 systemd-resolved[1694]: Defaulting to hostname 'linux'. Dec 13 14:13:55.064124 systemd[1]: Started systemd-resolved.service. Dec 13 14:13:55.065882 systemd[1]: Reached target network.target. Dec 13 14:13:55.067425 systemd[1]: Reached target network-online.target. Dec 13 14:13:55.069002 systemd[1]: Reached target nss-lookup.target. Dec 13 14:13:55.195464 ldconfig[1580]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 13 14:13:55.213541 systemd[1]: Finished ldconfig.service. 
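The PROCTITLE record above carries the audited command line hex-encoded, with NUL bytes separating the arguments; decoded, it is simply the augenrules invocation of auditctl. One way to decode such a field with standard tools (shown as an aside, not part of the boot flow):

    # hex -> bytes, then NUL separators -> spaces
    echo 2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 \
      | xxd -r -p | tr '\0' ' '; echo
    # -> /sbin/auditctl -R /etc/audit/audit.rules

ausearch -i performs the same field decoding automatically.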
Dec 13 14:13:55.217670 systemd[1]: Starting systemd-update-done.service... Dec 13 14:13:55.231195 systemd[1]: Finished systemd-update-done.service. Dec 13 14:13:55.233118 systemd-timesyncd[1695]: Contacted time server 172.245.210.108:123 (0.flatcar.pool.ntp.org). Dec 13 14:13:55.233235 systemd-timesyncd[1695]: Initial clock synchronization to Fri 2024-12-13 14:13:55.580302 UTC. Dec 13 14:13:55.233325 systemd[1]: Reached target sysinit.target. Dec 13 14:13:55.235033 systemd[1]: Started motdgen.path. Dec 13 14:13:55.236619 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Dec 13 14:13:55.238968 systemd[1]: Started logrotate.timer. Dec 13 14:13:55.240563 systemd[1]: Started mdadm.timer. Dec 13 14:13:55.241874 systemd[1]: Started systemd-tmpfiles-clean.timer. Dec 13 14:13:55.243591 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 13 14:13:55.243655 systemd[1]: Reached target paths.target. Dec 13 14:13:55.245226 systemd[1]: Reached target timers.target. Dec 13 14:13:55.247225 systemd[1]: Listening on dbus.socket. Dec 13 14:13:55.250944 systemd[1]: Starting docker.socket... Dec 13 14:13:55.258084 systemd[1]: Listening on sshd.socket. Dec 13 14:13:55.259790 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:13:55.260716 systemd[1]: Listening on docker.socket. Dec 13 14:13:55.262328 systemd[1]: Reached target sockets.target. Dec 13 14:13:55.264022 systemd[1]: Reached target basic.target. Dec 13 14:13:55.265609 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Dec 13 14:13:55.265677 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Dec 13 14:13:55.267779 systemd[1]: Started amazon-ssm-agent.service. Dec 13 14:13:55.273485 systemd[1]: Starting containerd.service... Dec 13 14:13:55.276845 systemd[1]: Starting coreos-metadata-sshkeys@core.service... Dec 13 14:13:55.280975 systemd[1]: Starting dbus.service... Dec 13 14:13:55.284597 systemd[1]: Starting enable-oem-cloudinit.service... Dec 13 14:13:55.288435 systemd[1]: Starting extend-filesystems.service... Dec 13 14:13:55.290022 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Dec 13 14:13:55.295213 systemd[1]: Starting kubelet.service... Dec 13 14:13:55.299005 systemd[1]: Starting motdgen.service... Dec 13 14:13:55.304036 systemd[1]: Started nvidia.service. Dec 13 14:13:55.307973 systemd[1]: Starting prepare-helm.service... Dec 13 14:13:55.312778 systemd[1]: Starting ssh-key-proc-cmdline.service... Dec 13 14:13:55.318075 systemd[1]: Starting sshd-keygen.service... Dec 13 14:13:55.330692 systemd[1]: Starting systemd-logind.service... Dec 13 14:13:55.332248 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:13:55.332411 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 13 14:13:55.333969 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. 
Dec 13 14:13:55.335552 systemd[1]: Starting update-engine.service... Dec 13 14:13:55.347420 systemd[1]: Starting update-ssh-keys-after-ignition.service... Dec 13 14:13:55.368054 jq[1729]: false Dec 13 14:13:55.382162 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 13 14:13:55.382615 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Dec 13 14:13:55.388695 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 13 14:13:55.389033 systemd[1]: Finished ssh-key-proc-cmdline.service. Dec 13 14:13:55.437074 jq[1740]: true Dec 13 14:13:55.515959 systemd[1]: motdgen.service: Deactivated successfully. Dec 13 14:13:55.516345 systemd[1]: Finished motdgen.service. Dec 13 14:13:55.539494 tar[1750]: linux-arm64/helm Dec 13 14:13:55.569074 jq[1754]: true Dec 13 14:13:55.594702 amazon-ssm-agent[1725]: 2024/12/13 14:13:55 Failed to load instance info from vault. RegistrationKey does not exist. Dec 13 14:13:55.603798 amazon-ssm-agent[1725]: Initializing new seelog logger Dec 13 14:13:55.608610 amazon-ssm-agent[1725]: New Seelog Logger Creation Complete Dec 13 14:13:55.608961 amazon-ssm-agent[1725]: 2024/12/13 14:13:55 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Dec 13 14:13:55.610176 amazon-ssm-agent[1725]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Dec 13 14:13:55.610719 amazon-ssm-agent[1725]: 2024/12/13 14:13:55 processing appconfig overrides Dec 13 14:13:55.614530 dbus-daemon[1728]: [system] SELinux support is enabled Dec 13 14:13:55.614817 systemd[1]: Started dbus.service. Dec 13 14:13:55.619527 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 13 14:13:55.619585 systemd[1]: Reached target system-config.target. Dec 13 14:13:55.621326 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 13 14:13:55.621371 systemd[1]: Reached target user-config.target. Dec 13 14:13:55.631827 extend-filesystems[1730]: Found loop1 Dec 13 14:13:55.633670 extend-filesystems[1730]: Found nvme0n1 Dec 13 14:13:55.633670 extend-filesystems[1730]: Found nvme0n1p1 Dec 13 14:13:55.633670 extend-filesystems[1730]: Found nvme0n1p2 Dec 13 14:13:55.633670 extend-filesystems[1730]: Found nvme0n1p3 Dec 13 14:13:55.633670 extend-filesystems[1730]: Found usr Dec 13 14:13:55.633670 extend-filesystems[1730]: Found nvme0n1p4 Dec 13 14:13:55.633670 extend-filesystems[1730]: Found nvme0n1p6 Dec 13 14:13:55.633670 extend-filesystems[1730]: Found nvme0n1p7 Dec 13 14:13:55.633670 extend-filesystems[1730]: Found nvme0n1p9 Dec 13 14:13:55.633670 extend-filesystems[1730]: Checking size of /dev/nvme0n1p9 Dec 13 14:13:55.684743 dbus-daemon[1728]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.3' (uid=244 pid=1462 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Dec 13 14:13:55.690986 systemd[1]: Starting systemd-hostnamed.service... 
Dec 13 14:13:55.733896 extend-filesystems[1730]: Resized partition /dev/nvme0n1p9 Dec 13 14:13:55.743412 extend-filesystems[1794]: resize2fs 1.46.5 (30-Dec-2021) Dec 13 14:13:55.755431 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Dec 13 14:13:55.807261 update_engine[1738]: I1213 14:13:55.806797 1738 main.cc:92] Flatcar Update Engine starting Dec 13 14:13:55.833036 update_engine[1738]: I1213 14:13:55.817027 1738 update_check_scheduler.cc:74] Next update check in 5m4s Dec 13 14:13:55.811793 systemd[1]: Started update-engine.service. Dec 13 14:13:55.816553 systemd[1]: Started locksmithd.service. Dec 13 14:13:55.843417 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Dec 13 14:13:55.880513 extend-filesystems[1794]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Dec 13 14:13:55.880513 extend-filesystems[1794]: old_desc_blocks = 1, new_desc_blocks = 1 Dec 13 14:13:55.880513 extend-filesystems[1794]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Dec 13 14:13:55.913412 extend-filesystems[1730]: Resized filesystem in /dev/nvme0n1p9 Dec 13 14:13:55.915557 bash[1798]: Updated "/home/core/.ssh/authorized_keys" Dec 13 14:13:55.882795 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 13 14:13:55.915867 env[1746]: time="2024-12-13T14:13:55.893075476Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Dec 13 14:13:55.883181 systemd[1]: Finished extend-filesystems.service. Dec 13 14:13:55.897691 systemd[1]: Finished update-ssh-keys-after-ignition.service. Dec 13 14:13:56.002328 systemd[1]: nvidia.service: Deactivated successfully. Dec 13 14:13:56.022682 systemd-logind[1737]: Watching system buttons on /dev/input/event0 (Power Button) Dec 13 14:13:56.030083 systemd-logind[1737]: Watching system buttons on /dev/input/event1 (Sleep Button) Dec 13 14:13:56.032952 systemd-logind[1737]: New seat seat0. Dec 13 14:13:56.044910 systemd[1]: Started systemd-logind.service. Dec 13 14:13:56.077710 dbus-daemon[1728]: [system] Successfully activated service 'org.freedesktop.hostname1' Dec 13 14:13:56.077966 systemd[1]: Started systemd-hostnamed.service. Dec 13 14:13:56.080846 dbus-daemon[1728]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1784 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Dec 13 14:13:56.085732 systemd[1]: Starting polkit.service... Dec 13 14:13:56.129185 polkitd[1826]: Started polkitd version 121 Dec 13 14:13:56.152487 polkitd[1826]: Loading rules from directory /etc/polkit-1/rules.d Dec 13 14:13:56.152625 polkitd[1826]: Loading rules from directory /usr/share/polkit-1/rules.d Dec 13 14:13:56.162597 polkitd[1826]: Finished loading, compiling and executing 2 rules Dec 13 14:13:56.163486 dbus-daemon[1728]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Dec 13 14:13:56.163735 systemd[1]: Started polkit.service. Dec 13 14:13:56.166719 polkitd[1826]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Dec 13 14:13:56.187060 env[1746]: time="2024-12-13T14:13:56.186986539Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Dec 13 14:13:56.187299 env[1746]: time="2024-12-13T14:13:56.187253324Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
type=io.containerd.snapshotter.v1 Dec 13 14:13:56.201468 systemd-hostnamed[1784]: Hostname set to (transient) Dec 13 14:13:56.201643 systemd-resolved[1694]: System hostname changed to 'ip-172-31-21-141'. Dec 13 14:13:56.205888 env[1746]: time="2024-12-13T14:13:56.205777688Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.173-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Dec 13 14:13:56.206028 env[1746]: time="2024-12-13T14:13:56.205897952Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Dec 13 14:13:56.206523 env[1746]: time="2024-12-13T14:13:56.206441691Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 14:13:56.206619 env[1746]: time="2024-12-13T14:13:56.206518295Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Dec 13 14:13:56.206619 env[1746]: time="2024-12-13T14:13:56.206554293Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Dec 13 14:13:56.206619 env[1746]: time="2024-12-13T14:13:56.206608822Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Dec 13 14:13:56.206930 env[1746]: time="2024-12-13T14:13:56.206888028Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Dec 13 14:13:56.207662 env[1746]: time="2024-12-13T14:13:56.207611682Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Dec 13 14:13:56.208272 env[1746]: time="2024-12-13T14:13:56.208210651Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 14:13:56.208366 env[1746]: time="2024-12-13T14:13:56.208269562Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Dec 13 14:13:56.208595 env[1746]: time="2024-12-13T14:13:56.208513647Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Dec 13 14:13:56.208595 env[1746]: time="2024-12-13T14:13:56.208542959Z" level=info msg="metadata content store policy set" policy=shared Dec 13 14:13:56.219224 env[1746]: time="2024-12-13T14:13:56.219147953Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Dec 13 14:13:56.219374 env[1746]: time="2024-12-13T14:13:56.219229115Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Dec 13 14:13:56.219374 env[1746]: time="2024-12-13T14:13:56.219264099Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Dec 13 14:13:56.219374 env[1746]: time="2024-12-13T14:13:56.219348803Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." 
type=io.containerd.service.v1 Dec 13 14:13:56.219666 env[1746]: time="2024-12-13T14:13:56.219387093Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Dec 13 14:13:56.219666 env[1746]: time="2024-12-13T14:13:56.219525638Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Dec 13 14:13:56.219666 env[1746]: time="2024-12-13T14:13:56.219561999Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Dec 13 14:13:56.220158 env[1746]: time="2024-12-13T14:13:56.220096360Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Dec 13 14:13:56.220255 env[1746]: time="2024-12-13T14:13:56.220161031Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Dec 13 14:13:56.220255 env[1746]: time="2024-12-13T14:13:56.220197166Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Dec 13 14:13:56.220255 env[1746]: time="2024-12-13T14:13:56.220232025Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Dec 13 14:13:56.220448 env[1746]: time="2024-12-13T14:13:56.220265894Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Dec 13 14:13:56.220564 env[1746]: time="2024-12-13T14:13:56.220518744Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Dec 13 14:13:56.220757 env[1746]: time="2024-12-13T14:13:56.220711030Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Dec 13 14:13:56.221500 env[1746]: time="2024-12-13T14:13:56.221443323Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Dec 13 14:13:56.221599 env[1746]: time="2024-12-13T14:13:56.221560056Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Dec 13 14:13:56.221665 env[1746]: time="2024-12-13T14:13:56.221600374Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Dec 13 14:13:56.221757 env[1746]: time="2024-12-13T14:13:56.221719812Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Dec 13 14:13:56.221889 env[1746]: time="2024-12-13T14:13:56.221752981Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Dec 13 14:13:56.221958 env[1746]: time="2024-12-13T14:13:56.221891476Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Dec 13 14:13:56.221958 env[1746]: time="2024-12-13T14:13:56.221925170Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Dec 13 14:13:56.222077 env[1746]: time="2024-12-13T14:13:56.221955659Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Dec 13 14:13:56.222077 env[1746]: time="2024-12-13T14:13:56.221989152Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Dec 13 14:13:56.222077 env[1746]: time="2024-12-13T14:13:56.222019466Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." 
type=io.containerd.grpc.v1 Dec 13 14:13:56.222077 env[1746]: time="2024-12-13T14:13:56.222049692Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Dec 13 14:13:56.222290 env[1746]: time="2024-12-13T14:13:56.222084337Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Dec 13 14:13:56.222455 env[1746]: time="2024-12-13T14:13:56.222393294Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Dec 13 14:13:56.222544 env[1746]: time="2024-12-13T14:13:56.222460294Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Dec 13 14:13:56.222544 env[1746]: time="2024-12-13T14:13:56.222494915Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Dec 13 14:13:56.222544 env[1746]: time="2024-12-13T14:13:56.222526205Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Dec 13 14:13:56.222693 env[1746]: time="2024-12-13T14:13:56.222560788Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Dec 13 14:13:56.222693 env[1746]: time="2024-12-13T14:13:56.222591202Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Dec 13 14:13:56.222693 env[1746]: time="2024-12-13T14:13:56.222631206Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Dec 13 14:13:56.222861 env[1746]: time="2024-12-13T14:13:56.222697668Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Dec 13 14:13:56.223165 env[1746]: time="2024-12-13T14:13:56.223047255Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Dec 13 14:13:56.224384 env[1746]: time="2024-12-13T14:13:56.223166393Z" level=info msg="Connect containerd service" Dec 13 14:13:56.224384 env[1746]: time="2024-12-13T14:13:56.223240142Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Dec 13 14:13:56.231766 env[1746]: time="2024-12-13T14:13:56.231689289Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 14:13:56.232201 env[1746]: time="2024-12-13T14:13:56.232155760Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 13 14:13:56.232284 env[1746]: time="2024-12-13T14:13:56.232263954Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 13 14:13:56.232481 systemd[1]: Started containerd.service. 
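The cri plugin error above ("no network config found in /etc/cni/net.d: cni plugin not initialized") is the expected state on a node where no pod network has been set up yet: containerd still serves its sockets, and a pod network addon installed later typically drops a conflist into /etc/cni/net.d. A purely illustrative example of the kind of file that resolves this state, not what this node actually installs:

    # /etc/cni/net.d/10-bridge.conflist (illustrative only)
    {
      "cniVersion": "0.4.0",
      "name": "bridge-net",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "cni0",
          "isGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.85.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }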
Dec 13 14:13:56.234215 env[1746]: time="2024-12-13T14:13:56.232854084Z" level=info msg="containerd successfully booted in 0.374018s" Dec 13 14:13:56.240499 env[1746]: time="2024-12-13T14:13:56.239685763Z" level=info msg="Start subscribing containerd event" Dec 13 14:13:56.240499 env[1746]: time="2024-12-13T14:13:56.239874317Z" level=info msg="Start recovering state" Dec 13 14:13:56.240499 env[1746]: time="2024-12-13T14:13:56.240151958Z" level=info msg="Start event monitor" Dec 13 14:13:56.240499 env[1746]: time="2024-12-13T14:13:56.240203921Z" level=info msg="Start snapshots syncer" Dec 13 14:13:56.240499 env[1746]: time="2024-12-13T14:13:56.240231680Z" level=info msg="Start cni network conf syncer for default" Dec 13 14:13:56.240499 env[1746]: time="2024-12-13T14:13:56.240261280Z" level=info msg="Start streaming server" Dec 13 14:13:56.605895 coreos-metadata[1727]: Dec 13 14:13:56.605 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Dec 13 14:13:56.610939 coreos-metadata[1727]: Dec 13 14:13:56.610 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-keys: Attempt #1 Dec 13 14:13:56.613233 coreos-metadata[1727]: Dec 13 14:13:56.613 INFO Fetch successful Dec 13 14:13:56.613370 coreos-metadata[1727]: Dec 13 14:13:56.613 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-keys/0/openssh-key: Attempt #1 Dec 13 14:13:56.614730 coreos-metadata[1727]: Dec 13 14:13:56.614 INFO Fetch successful Dec 13 14:13:56.617715 unknown[1727]: wrote ssh authorized keys file for user: core Dec 13 14:13:56.646114 update-ssh-keys[1904]: Updated "/home/core/.ssh/authorized_keys" Dec 13 14:13:56.647668 systemd[1]: Finished coreos-metadata-sshkeys@core.service. Dec 13 14:13:56.710621 amazon-ssm-agent[1725]: 2024-12-13 14:13:56 INFO Create new startup processor Dec 13 14:13:56.717465 amazon-ssm-agent[1725]: 2024-12-13 14:13:56 INFO [LongRunningPluginsManager] registered plugins: {} Dec 13 14:13:56.721704 amazon-ssm-agent[1725]: 2024-12-13 14:13:56 INFO Initializing bookkeeping folders Dec 13 14:13:56.722029 amazon-ssm-agent[1725]: 2024-12-13 14:13:56 INFO removing the completed state files Dec 13 14:13:56.723647 amazon-ssm-agent[1725]: 2024-12-13 14:13:56 INFO Initializing bookkeeping folders for long running plugins Dec 13 14:13:56.723819 amazon-ssm-agent[1725]: 2024-12-13 14:13:56 INFO Initializing replies folder for MDS reply requests that couldn't reach the service Dec 13 14:13:56.723959 amazon-ssm-agent[1725]: 2024-12-13 14:13:56 INFO Initializing healthcheck folders for long running plugins Dec 13 14:13:56.724121 amazon-ssm-agent[1725]: 2024-12-13 14:13:56 INFO Initializing locations for inventory plugin Dec 13 14:13:56.724281 amazon-ssm-agent[1725]: 2024-12-13 14:13:56 INFO Initializing default location for custom inventory Dec 13 14:13:56.724493 amazon-ssm-agent[1725]: 2024-12-13 14:13:56 INFO Initializing default location for file inventory Dec 13 14:13:56.724618 amazon-ssm-agent[1725]: 2024-12-13 14:13:56 INFO Initializing default location for role inventory Dec 13 14:13:56.724773 amazon-ssm-agent[1725]: 2024-12-13 14:13:56 INFO Init the cloudwatchlogs publisher Dec 13 14:13:56.724914 amazon-ssm-agent[1725]: 2024-12-13 14:13:56 INFO [instanceID=i-02a9ce597abf9abd2] Successfully loaded platform independent plugin aws:runPowerShellScript Dec 13 14:13:56.725052 amazon-ssm-agent[1725]: 2024-12-13 14:13:56 INFO [instanceID=i-02a9ce597abf9abd2] Successfully loaded platform independent plugin aws:updateSsmAgent Dec 13 14:13:56.725190 amazon-ssm-agent[1725]: 2024-12-13 
14:13:56 INFO [instanceID=i-02a9ce597abf9abd2] Successfully loaded platform independent plugin aws:configurePackage Dec 13 14:13:56.725331 amazon-ssm-agent[1725]: 2024-12-13 14:13:56 INFO [instanceID=i-02a9ce597abf9abd2] Successfully loaded platform independent plugin aws:downloadContent Dec 13 14:13:56.725483 amazon-ssm-agent[1725]: 2024-12-13 14:13:56 INFO [instanceID=i-02a9ce597abf9abd2] Successfully loaded platform independent plugin aws:softwareInventory Dec 13 14:13:56.725620 amazon-ssm-agent[1725]: 2024-12-13 14:13:56 INFO [instanceID=i-02a9ce597abf9abd2] Successfully loaded platform independent plugin aws:configureDocker Dec 13 14:13:56.725771 amazon-ssm-agent[1725]: 2024-12-13 14:13:56 INFO [instanceID=i-02a9ce597abf9abd2] Successfully loaded platform independent plugin aws:runDockerAction Dec 13 14:13:56.728578 amazon-ssm-agent[1725]: 2024-12-13 14:13:56 INFO [instanceID=i-02a9ce597abf9abd2] Successfully loaded platform independent plugin aws:refreshAssociation Dec 13 14:13:56.730740 amazon-ssm-agent[1725]: 2024-12-13 14:13:56 INFO [instanceID=i-02a9ce597abf9abd2] Successfully loaded platform independent plugin aws:runDocument Dec 13 14:13:56.730925 amazon-ssm-agent[1725]: 2024-12-13 14:13:56 INFO [instanceID=i-02a9ce597abf9abd2] Successfully loaded platform dependent plugin aws:runShellScript Dec 13 14:13:56.731092 amazon-ssm-agent[1725]: 2024-12-13 14:13:56 INFO Starting Agent: amazon-ssm-agent - v2.3.1319.0 Dec 13 14:13:56.731244 amazon-ssm-agent[1725]: 2024-12-13 14:13:56 INFO OS: linux, Arch: arm64 Dec 13 14:13:56.735984 amazon-ssm-agent[1725]: datastore file /var/lib/amazon/ssm/i-02a9ce597abf9abd2/longrunningplugins/datastore/store doesn't exist - no long running plugins to execute Dec 13 14:13:56.833469 amazon-ssm-agent[1725]: 2024-12-13 14:13:56 INFO [MessageGatewayService] Starting session document processing engine... Dec 13 14:13:56.928266 amazon-ssm-agent[1725]: 2024-12-13 14:13:56 INFO [MessageGatewayService] [EngineProcessor] Starting Dec 13 14:13:57.022584 amazon-ssm-agent[1725]: 2024-12-13 14:13:56 INFO [MessageGatewayService] SSM Agent is trying to setup control channel for Session Manager module. Dec 13 14:13:57.117137 amazon-ssm-agent[1725]: 2024-12-13 14:13:56 INFO [MessageGatewayService] Setting up websocket for controlchannel for instance: i-02a9ce597abf9abd2, requestId: 3234d53b-8023-42cb-a1cc-34439e40f0c7 Dec 13 14:13:57.189211 tar[1750]: linux-arm64/LICENSE Dec 13 14:13:57.189835 tar[1750]: linux-arm64/README.md Dec 13 14:13:57.198496 systemd[1]: Finished prepare-helm.service. Dec 13 14:13:57.212565 amazon-ssm-agent[1725]: 2024-12-13 14:13:56 INFO [MessagingDeliveryService] Starting document processing engine... Dec 13 14:13:57.307568 amazon-ssm-agent[1725]: 2024-12-13 14:13:56 INFO [MessagingDeliveryService] [EngineProcessor] Starting Dec 13 14:13:57.402699 amazon-ssm-agent[1725]: 2024-12-13 14:13:56 INFO [MessagingDeliveryService] [EngineProcessor] Initial processing Dec 13 14:13:57.498836 amazon-ssm-agent[1725]: 2024-12-13 14:13:56 INFO [MessagingDeliveryService] Starting message polling Dec 13 14:13:57.529553 locksmithd[1802]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 13 14:13:57.594322 amazon-ssm-agent[1725]: 2024-12-13 14:13:56 INFO [MessagingDeliveryService] Starting send replies to MDS Dec 13 14:13:57.690010 amazon-ssm-agent[1725]: 2024-12-13 14:13:56 INFO [instanceID=i-02a9ce597abf9abd2] Starting association polling Dec 13 14:13:57.777799 systemd[1]: Started kubelet.service. 
Dec 13 14:13:57.785939 amazon-ssm-agent[1725]: 2024-12-13 14:13:56 INFO [MessagingDeliveryService] [Association] [EngineProcessor] Starting Dec 13 14:13:57.882057 amazon-ssm-agent[1725]: 2024-12-13 14:13:56 INFO [MessagingDeliveryService] [Association] Launching response handler Dec 13 14:13:57.978393 amazon-ssm-agent[1725]: 2024-12-13 14:13:56 INFO [MessagingDeliveryService] [Association] [EngineProcessor] Initial processing Dec 13 14:13:58.075003 amazon-ssm-agent[1725]: 2024-12-13 14:13:56 INFO [MessagingDeliveryService] [Association] Initializing association scheduling service Dec 13 14:13:58.171624 amazon-ssm-agent[1725]: 2024-12-13 14:13:56 INFO [MessagingDeliveryService] [Association] Association scheduling service initialized Dec 13 14:13:58.268563 amazon-ssm-agent[1725]: 2024-12-13 14:13:56 INFO [MessageGatewayService] listening reply. Dec 13 14:13:58.365724 amazon-ssm-agent[1725]: 2024-12-13 14:13:56 INFO [HealthCheck] HealthCheck reporting agent health. Dec 13 14:13:58.462961 amazon-ssm-agent[1725]: 2024-12-13 14:13:56 INFO [OfflineService] Starting document processing engine... Dec 13 14:13:58.519345 kubelet[1934]: E1213 14:13:58.519270 1934 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 14:13:58.522898 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 14:13:58.523219 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 14:13:58.523695 systemd[1]: kubelet.service: Consumed 1.386s CPU time. Dec 13 14:13:58.560481 amazon-ssm-agent[1725]: 2024-12-13 14:13:56 INFO [OfflineService] [EngineProcessor] Starting Dec 13 14:13:58.658589 amazon-ssm-agent[1725]: 2024-12-13 14:13:56 INFO [OfflineService] [EngineProcessor] Initial processing Dec 13 14:13:58.756521 amazon-ssm-agent[1725]: 2024-12-13 14:13:56 INFO [LongRunningPluginsManager] starting long running plugin manager Dec 13 14:13:58.855886 amazon-ssm-agent[1725]: 2024-12-13 14:13:56 INFO [LongRunningPluginsManager] there aren't any long running plugin to execute Dec 13 14:13:58.955542 amazon-ssm-agent[1725]: 2024-12-13 14:13:56 INFO [OfflineService] Starting message polling Dec 13 14:13:59.056027 amazon-ssm-agent[1725]: 2024-12-13 14:13:56 INFO [OfflineService] Starting send replies to MDS Dec 13 14:13:59.154887 amazon-ssm-agent[1725]: 2024-12-13 14:13:56 INFO [LongRunningPluginsManager] There are no long running plugins currently getting executed - skipping their healthcheck Dec 13 14:13:59.253788 amazon-ssm-agent[1725]: 2024-12-13 14:13:56 INFO [StartupProcessor] Executing startup processor tasks Dec 13 14:13:59.353305 amazon-ssm-agent[1725]: 2024-12-13 14:13:56 INFO [StartupProcessor] Write to serial port: Amazon SSM Agent v2.3.1319.0 is running Dec 13 14:13:59.452819 amazon-ssm-agent[1725]: 2024-12-13 14:13:56 INFO [StartupProcessor] Write to serial port: OsProductName: Flatcar Container Linux by Kinvolk Dec 13 14:13:59.553981 amazon-ssm-agent[1725]: 2024-12-13 14:13:56 INFO [StartupProcessor] Write to serial port: OsVersion: 3510.3.6 Dec 13 14:13:59.655673 amazon-ssm-agent[1725]: 2024-12-13 14:13:56 INFO [MessageGatewayService] Opening websocket connection to: wss://ssmmessages.us-west-2.amazonaws.com/v1/control-channel/i-02a9ce597abf9abd2?role=subscribe&stream=input Dec 13 14:13:59.755609 
amazon-ssm-agent[1725]: 2024-12-13 14:13:56 INFO [MessageGatewayService] Successfully opened websocket connection to: wss://ssmmessages.us-west-2.amazonaws.com/v1/control-channel/i-02a9ce597abf9abd2?role=subscribe&stream=input Dec 13 14:13:59.856212 amazon-ssm-agent[1725]: 2024-12-13 14:13:56 INFO [MessageGatewayService] Starting receiving message from control channel Dec 13 14:13:59.957393 amazon-ssm-agent[1725]: 2024-12-13 14:13:56 INFO [MessageGatewayService] [EngineProcessor] Initial processing Dec 13 14:14:00.770098 sshd_keygen[1763]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 13 14:14:00.809607 systemd[1]: Finished sshd-keygen.service. Dec 13 14:14:00.814578 systemd[1]: Starting issuegen.service... Dec 13 14:14:00.825486 systemd[1]: issuegen.service: Deactivated successfully. Dec 13 14:14:00.825855 systemd[1]: Finished issuegen.service. Dec 13 14:14:00.830364 systemd[1]: Starting systemd-user-sessions.service... Dec 13 14:14:00.845019 systemd[1]: Finished systemd-user-sessions.service. Dec 13 14:14:00.850141 systemd[1]: Started getty@tty1.service. Dec 13 14:14:00.854902 systemd[1]: Started serial-getty@ttyS0.service. Dec 13 14:14:00.857680 systemd[1]: Reached target getty.target. Dec 13 14:14:00.859521 systemd[1]: Reached target multi-user.target. Dec 13 14:14:00.864232 systemd[1]: Starting systemd-update-utmp-runlevel.service... Dec 13 14:14:00.879679 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Dec 13 14:14:00.880068 systemd[1]: Finished systemd-update-utmp-runlevel.service. Dec 13 14:14:00.882159 systemd[1]: Startup finished in 1.116s (kernel) + 9.533s (initrd) + 14.758s (userspace) = 25.408s. Dec 13 14:14:04.129442 systemd[1]: Created slice system-sshd.slice. Dec 13 14:14:04.131842 systemd[1]: Started sshd@0-172.31.21.141:22-139.178.89.65:56812.service. Dec 13 14:14:04.387586 sshd[1955]: Accepted publickey for core from 139.178.89.65 port 56812 ssh2: RSA SHA256:07jB2DPJgjjhgg3L8Uh349EZ0zHZFrUiRWNbK+Fdo0Q Dec 13 14:14:04.391760 sshd[1955]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:14:04.409523 systemd[1]: Created slice user-500.slice. Dec 13 14:14:04.412176 systemd[1]: Starting user-runtime-dir@500.service... Dec 13 14:14:04.421525 systemd-logind[1737]: New session 1 of user core. Dec 13 14:14:04.435045 systemd[1]: Finished user-runtime-dir@500.service. Dec 13 14:14:04.438015 systemd[1]: Starting user@500.service... Dec 13 14:14:04.445562 (systemd)[1958]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:14:04.639365 systemd[1958]: Queued start job for default target default.target. Dec 13 14:14:04.640441 systemd[1958]: Reached target paths.target. Dec 13 14:14:04.640496 systemd[1958]: Reached target sockets.target. Dec 13 14:14:04.640529 systemd[1958]: Reached target timers.target. Dec 13 14:14:04.640559 systemd[1958]: Reached target basic.target. Dec 13 14:14:04.640652 systemd[1958]: Reached target default.target. Dec 13 14:14:04.640721 systemd[1958]: Startup finished in 183ms. Dec 13 14:14:04.641554 systemd[1]: Started user@500.service. Dec 13 14:14:04.643620 systemd[1]: Started session-1.scope. Dec 13 14:14:04.794295 systemd[1]: Started sshd@1-172.31.21.141:22-139.178.89.65:56816.service. 
Dec 13 14:14:04.970238 sshd[1967]: Accepted publickey for core from 139.178.89.65 port 56816 ssh2: RSA SHA256:07jB2DPJgjjhgg3L8Uh349EZ0zHZFrUiRWNbK+Fdo0Q Dec 13 14:14:04.972809 sshd[1967]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:14:04.980076 systemd-logind[1737]: New session 2 of user core. Dec 13 14:14:04.982017 systemd[1]: Started session-2.scope. Dec 13 14:14:05.113811 sshd[1967]: pam_unix(sshd:session): session closed for user core Dec 13 14:14:05.119425 systemd-logind[1737]: Session 2 logged out. Waiting for processes to exit. Dec 13 14:14:05.121435 systemd[1]: sshd@1-172.31.21.141:22-139.178.89.65:56816.service: Deactivated successfully. Dec 13 14:14:05.122726 systemd[1]: session-2.scope: Deactivated successfully. Dec 13 14:14:05.123683 systemd-logind[1737]: Removed session 2. Dec 13 14:14:05.144746 systemd[1]: Started sshd@2-172.31.21.141:22-139.178.89.65:56832.service. Dec 13 14:14:05.317164 sshd[1973]: Accepted publickey for core from 139.178.89.65 port 56832 ssh2: RSA SHA256:07jB2DPJgjjhgg3L8Uh349EZ0zHZFrUiRWNbK+Fdo0Q Dec 13 14:14:05.320325 sshd[1973]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:14:05.328970 systemd[1]: Started session-3.scope. Dec 13 14:14:05.329973 systemd-logind[1737]: New session 3 of user core. Dec 13 14:14:05.454256 sshd[1973]: pam_unix(sshd:session): session closed for user core Dec 13 14:14:05.459279 systemd[1]: sshd@2-172.31.21.141:22-139.178.89.65:56832.service: Deactivated successfully. Dec 13 14:14:05.460594 systemd[1]: session-3.scope: Deactivated successfully. Dec 13 14:14:05.461874 systemd-logind[1737]: Session 3 logged out. Waiting for processes to exit. Dec 13 14:14:05.463772 systemd-logind[1737]: Removed session 3. Dec 13 14:14:05.482374 systemd[1]: Started sshd@3-172.31.21.141:22-139.178.89.65:56836.service. Dec 13 14:14:05.652729 sshd[1979]: Accepted publickey for core from 139.178.89.65 port 56836 ssh2: RSA SHA256:07jB2DPJgjjhgg3L8Uh349EZ0zHZFrUiRWNbK+Fdo0Q Dec 13 14:14:05.655717 sshd[1979]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:14:05.664167 systemd[1]: Started session-4.scope. Dec 13 14:14:05.664941 systemd-logind[1737]: New session 4 of user core. Dec 13 14:14:05.795635 sshd[1979]: pam_unix(sshd:session): session closed for user core Dec 13 14:14:05.800771 systemd[1]: session-4.scope: Deactivated successfully. Dec 13 14:14:05.800787 systemd-logind[1737]: Session 4 logged out. Waiting for processes to exit. Dec 13 14:14:05.802502 systemd[1]: sshd@3-172.31.21.141:22-139.178.89.65:56836.service: Deactivated successfully. Dec 13 14:14:05.804008 systemd-logind[1737]: Removed session 4. Dec 13 14:14:05.824134 systemd[1]: Started sshd@4-172.31.21.141:22-139.178.89.65:56850.service. Dec 13 14:14:05.999069 sshd[1985]: Accepted publickey for core from 139.178.89.65 port 56850 ssh2: RSA SHA256:07jB2DPJgjjhgg3L8Uh349EZ0zHZFrUiRWNbK+Fdo0Q Dec 13 14:14:06.001964 sshd[1985]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:14:06.010109 systemd-logind[1737]: New session 5 of user core. Dec 13 14:14:06.011046 systemd[1]: Started session-5.scope. Dec 13 14:14:06.164539 sudo[1988]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 13 14:14:06.165604 sudo[1988]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Dec 13 14:14:06.213703 systemd[1]: Starting docker.service... 
Dec 13 14:14:06.291831 env[1998]: time="2024-12-13T14:14:06.291678339Z" level=info msg="Starting up" Dec 13 14:14:06.295253 env[1998]: time="2024-12-13T14:14:06.295200603Z" level=info msg="parsed scheme: \"unix\"" module=grpc Dec 13 14:14:06.295253 env[1998]: time="2024-12-13T14:14:06.295243726Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Dec 13 14:14:06.295502 env[1998]: time="2024-12-13T14:14:06.295282722Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Dec 13 14:14:06.295502 env[1998]: time="2024-12-13T14:14:06.295308586Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Dec 13 14:14:06.299243 env[1998]: time="2024-12-13T14:14:06.299197498Z" level=info msg="parsed scheme: \"unix\"" module=grpc Dec 13 14:14:06.299497 env[1998]: time="2024-12-13T14:14:06.299468932Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Dec 13 14:14:06.299630 env[1998]: time="2024-12-13T14:14:06.299596238Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Dec 13 14:14:06.299739 env[1998]: time="2024-12-13T14:14:06.299712062Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Dec 13 14:14:06.310956 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport3581527267-merged.mount: Deactivated successfully. Dec 13 14:14:06.933484 env[1998]: time="2024-12-13T14:14:06.933424039Z" level=info msg="Loading containers: start." Dec 13 14:14:07.210437 kernel: Initializing XFRM netlink socket Dec 13 14:14:07.253781 env[1998]: time="2024-12-13T14:14:07.253710303Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Dec 13 14:14:07.256202 (udev-worker)[2008]: Network interface NamePolicy= disabled on kernel command line. Dec 13 14:14:07.388626 systemd-networkd[1462]: docker0: Link UP Dec 13 14:14:07.410810 env[1998]: time="2024-12-13T14:14:07.410755049Z" level=info msg="Loading containers: done." Dec 13 14:14:07.433345 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3500868487-merged.mount: Deactivated successfully. Dec 13 14:14:07.451644 env[1998]: time="2024-12-13T14:14:07.451565052Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 13 14:14:07.452303 env[1998]: time="2024-12-13T14:14:07.452270614Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Dec 13 14:14:07.452669 env[1998]: time="2024-12-13T14:14:07.452643605Z" level=info msg="Daemon has completed initialization" Dec 13 14:14:07.479237 systemd[1]: Started docker.service. Dec 13 14:14:07.490415 env[1998]: time="2024-12-13T14:14:07.490299127Z" level=info msg="API listen on /run/docker.sock" Dec 13 14:14:08.504851 env[1746]: time="2024-12-13T14:14:08.504798515Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.4\"" Dec 13 14:14:08.652117 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 13 14:14:08.652483 systemd[1]: Stopped kubelet.service. Dec 13 14:14:08.652555 systemd[1]: kubelet.service: Consumed 1.386s CPU time. Dec 13 14:14:08.655111 systemd[1]: Starting kubelet.service... 
Dec 13 14:14:09.244147 systemd[1]: Started kubelet.service. Dec 13 14:14:09.362634 kubelet[2124]: E1213 14:14:09.362574 2124 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 14:14:09.370084 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 14:14:09.370453 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 14:14:09.381307 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount445908753.mount: Deactivated successfully. Dec 13 14:14:11.850720 env[1746]: time="2024-12-13T14:14:11.850640604Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.31.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:14:11.854368 env[1746]: time="2024-12-13T14:14:11.854307334Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e1123d6ebadbafa6eb77a9047f23f20befbbe2f177eb473a81b27a5de8c2ec5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:14:11.858178 env[1746]: time="2024-12-13T14:14:11.858125437Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.31.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:14:11.861505 env[1746]: time="2024-12-13T14:14:11.861457035Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:ace6a943b058439bd6daeb74f152e7c36e6fc0b5e481cdff9364cd6ca0473e5e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:14:11.863067 env[1746]: time="2024-12-13T14:14:11.862996469Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.4\" returns image reference \"sha256:3e1123d6ebadbafa6eb77a9047f23f20befbbe2f177eb473a81b27a5de8c2ec5\"" Dec 13 14:14:11.863981 env[1746]: time="2024-12-13T14:14:11.863933753Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.4\"" Dec 13 14:14:14.646182 env[1746]: time="2024-12-13T14:14:14.646122521Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.31.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:14:14.649399 env[1746]: time="2024-12-13T14:14:14.649329122Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:d5369864a42bf2c01d3ad462832526b7d3e40620c0e75fecefbffc203562ad55,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:14:14.652617 env[1746]: time="2024-12-13T14:14:14.652556179Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.31.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:14:14.656029 env[1746]: time="2024-12-13T14:14:14.655979109Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:4bd1d4a449e7a1a4f375bd7c71abf48a95f8949b38f725ded255077329f21f7b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:14:14.657705 env[1746]: time="2024-12-13T14:14:14.657657149Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.4\" returns image reference 
\"sha256:d5369864a42bf2c01d3ad462832526b7d3e40620c0e75fecefbffc203562ad55\"" Dec 13 14:14:14.658523 env[1746]: time="2024-12-13T14:14:14.658479749Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.4\"" Dec 13 14:14:16.343570 env[1746]: time="2024-12-13T14:14:16.343488746Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.31.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:14:16.346656 env[1746]: time="2024-12-13T14:14:16.346596096Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:d99fc9a32f6b42ab5537eec09d599efae0f61c109406dae1ba255cec288fcb95,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:14:16.350084 env[1746]: time="2024-12-13T14:14:16.350021137Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.31.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:14:16.354885 env[1746]: time="2024-12-13T14:14:16.354821792Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:1a3081cb7d21763d22eb2c0781cc462d89f501ed523ad558dea1226f128fbfdd,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:14:16.356640 env[1746]: time="2024-12-13T14:14:16.356589889Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.4\" returns image reference \"sha256:d99fc9a32f6b42ab5537eec09d599efae0f61c109406dae1ba255cec288fcb95\"" Dec 13 14:14:16.357430 env[1746]: time="2024-12-13T14:14:16.357353999Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.4\"" Dec 13 14:14:17.772853 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount884969627.mount: Deactivated successfully. Dec 13 14:14:18.791120 env[1746]: time="2024-12-13T14:14:18.791036281Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.31.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:14:18.850872 env[1746]: time="2024-12-13T14:14:18.850803634Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:34e142197cb996099cc1e98902c112642b3fb3dc559140c0a95279aa8d254d3a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:14:18.892006 env[1746]: time="2024-12-13T14:14:18.891940834Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.31.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:14:18.940192 env[1746]: time="2024-12-13T14:14:18.940125673Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:1739b3febca392035bf6edfe31efdfa55226be7b57389b2001ae357f7dcb99cf,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:14:18.942022 env[1746]: time="2024-12-13T14:14:18.941166013Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.4\" returns image reference \"sha256:34e142197cb996099cc1e98902c112642b3fb3dc559140c0a95279aa8d254d3a\"" Dec 13 14:14:18.943057 env[1746]: time="2024-12-13T14:14:18.942991623Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Dec 13 14:14:19.402079 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Dec 13 14:14:19.402425 systemd[1]: Stopped kubelet.service. Dec 13 14:14:19.404969 systemd[1]: Starting kubelet.service... 
Dec 13 14:14:19.862747 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2814223060.mount: Deactivated successfully. Dec 13 14:14:20.112656 systemd[1]: Started kubelet.service. Dec 13 14:14:20.233682 kubelet[2134]: E1213 14:14:20.233603 2134 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 14:14:20.238651 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 14:14:20.238992 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 14:14:21.544917 env[1746]: time="2024-12-13T14:14:21.544824668Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:14:21.572628 env[1746]: time="2024-12-13T14:14:21.572556529Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:14:21.626258 env[1746]: time="2024-12-13T14:14:21.624518544Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:14:21.651594 env[1746]: time="2024-12-13T14:14:21.651539307Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:14:21.652335 env[1746]: time="2024-12-13T14:14:21.652262653Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Dec 13 14:14:21.653106 env[1746]: time="2024-12-13T14:14:21.653060332Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Dec 13 14:14:21.929096 amazon-ssm-agent[1725]: 2024-12-13 14:14:21 INFO [MessagingDeliveryService] [Association] No associations on boot. Requerying for associations after 30 seconds. Dec 13 14:14:22.773881 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3896664767.mount: Deactivated successfully. 
Dec 13 14:14:22.813651 env[1746]: time="2024-12-13T14:14:22.813574153Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:14:22.817089 env[1746]: time="2024-12-13T14:14:22.817028001Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:14:22.820279 env[1746]: time="2024-12-13T14:14:22.820232345Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:14:22.822980 env[1746]: time="2024-12-13T14:14:22.822920618Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:14:22.825571 env[1746]: time="2024-12-13T14:14:22.825516089Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Dec 13 14:14:22.826598 env[1746]: time="2024-12-13T14:14:22.826553086Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Dec 13 14:14:23.431641 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount269233316.mount: Deactivated successfully. Dec 13 14:14:26.235909 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Dec 13 14:14:26.497471 env[1746]: time="2024-12-13T14:14:26.497036466Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:14:26.501727 env[1746]: time="2024-12-13T14:14:26.501663036Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:14:26.506352 env[1746]: time="2024-12-13T14:14:26.506302625Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:14:26.509971 env[1746]: time="2024-12-13T14:14:26.509906432Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:14:26.512021 env[1746]: time="2024-12-13T14:14:26.511946123Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\"" Dec 13 14:14:30.402063 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Dec 13 14:14:30.402371 systemd[1]: Stopped kubelet.service. Dec 13 14:14:30.408806 systemd[1]: Starting kubelet.service... Dec 13 14:14:30.959690 systemd[1]: Started kubelet.service. 
Dec 13 14:14:31.041057 kubelet[2162]: E1213 14:14:31.040997 2162 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 14:14:31.045325 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 14:14:31.045655 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 14:14:34.548705 systemd[1]: Stopped kubelet.service. Dec 13 14:14:34.555068 systemd[1]: Starting kubelet.service... Dec 13 14:14:34.613598 systemd[1]: Reloading. Dec 13 14:14:34.790115 /usr/lib/systemd/system-generators/torcx-generator[2194]: time="2024-12-13T14:14:34Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 14:14:34.803155 /usr/lib/systemd/system-generators/torcx-generator[2194]: time="2024-12-13T14:14:34Z" level=info msg="torcx already run" Dec 13 14:14:34.990464 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 14:14:34.990504 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 14:14:35.029027 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 14:14:35.234044 systemd[1]: Started kubelet.service. Dec 13 14:14:35.237081 systemd[1]: Stopping kubelet.service... Dec 13 14:14:35.239198 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 14:14:35.239622 systemd[1]: Stopped kubelet.service. Dec 13 14:14:35.243163 systemd[1]: Starting kubelet.service... Dec 13 14:14:35.663178 systemd[1]: Started kubelet.service. Dec 13 14:14:35.734145 kubelet[2257]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 14:14:35.734145 kubelet[2257]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 14:14:35.734145 kubelet[2257]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Dec 13 14:14:35.734810 kubelet[2257]: I1213 14:14:35.734281 2257 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 14:14:36.416629 kubelet[2257]: I1213 14:14:36.416582 2257 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Dec 13 14:14:36.416850 kubelet[2257]: I1213 14:14:36.416828 2257 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 14:14:36.417358 kubelet[2257]: I1213 14:14:36.417334 2257 server.go:929] "Client rotation is on, will bootstrap in background" Dec 13 14:14:36.476796 kubelet[2257]: E1213 14:14:36.476741 2257 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.21.141:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.21.141:6443: connect: connection refused" logger="UnhandledError" Dec 13 14:14:36.485279 kubelet[2257]: I1213 14:14:36.485209 2257 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 14:14:36.498481 kubelet[2257]: E1213 14:14:36.498433 2257 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Dec 13 14:14:36.498788 kubelet[2257]: I1213 14:14:36.498766 2257 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Dec 13 14:14:36.506194 kubelet[2257]: I1213 14:14:36.506131 2257 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 13 14:14:36.513554 kubelet[2257]: I1213 14:14:36.513521 2257 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Dec 13 14:14:36.514114 kubelet[2257]: I1213 14:14:36.514070 2257 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 14:14:36.514538 kubelet[2257]: I1213 14:14:36.514220 2257 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-21-141","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 13 14:14:36.514772 kubelet[2257]: I1213 14:14:36.514749 2257 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 14:14:36.514899 kubelet[2257]: I1213 14:14:36.514880 2257 container_manager_linux.go:300] "Creating device plugin manager" Dec 13 14:14:36.515185 kubelet[2257]: I1213 14:14:36.515166 2257 state_mem.go:36] "Initialized new in-memory state store" Dec 13 14:14:36.524020 kubelet[2257]: I1213 14:14:36.523966 2257 kubelet.go:408] "Attempting to sync node with API server" Dec 13 14:14:36.524020 kubelet[2257]: I1213 14:14:36.524018 2257 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 14:14:36.524245 kubelet[2257]: I1213 14:14:36.524064 2257 kubelet.go:314] "Adding apiserver pod source" Dec 13 14:14:36.524245 kubelet[2257]: I1213 14:14:36.524085 2257 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 14:14:36.532255 kubelet[2257]: W1213 14:14:36.532178 2257 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.21.141:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-21-141&limit=500&resourceVersion=0": dial tcp 172.31.21.141:6443: connect: connection refused Dec 13 14:14:36.532586 kubelet[2257]: E1213 14:14:36.532552 2257 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://172.31.21.141:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-21-141&limit=500&resourceVersion=0\": dial tcp 172.31.21.141:6443: connect: connection refused" logger="UnhandledError" Dec 13 14:14:36.532846 kubelet[2257]: I1213 14:14:36.532819 2257 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Dec 13 14:14:36.536087 kubelet[2257]: I1213 14:14:36.536050 2257 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 14:14:36.540663 kubelet[2257]: W1213 14:14:36.540630 2257 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Dec 13 14:14:36.542306 kubelet[2257]: I1213 14:14:36.542265 2257 server.go:1269] "Started kubelet" Dec 13 14:14:36.557664 kubelet[2257]: E1213 14:14:36.557625 2257 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 14:14:36.558703 kubelet[2257]: I1213 14:14:36.558650 2257 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 14:14:36.558823 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). Dec 13 14:14:36.559128 kubelet[2257]: I1213 14:14:36.559083 2257 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 14:14:36.562259 kubelet[2257]: I1213 14:14:36.562210 2257 server.go:460] "Adding debug handlers to kubelet server" Dec 13 14:14:36.566888 kubelet[2257]: I1213 14:14:36.566804 2257 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 14:14:36.567465 kubelet[2257]: I1213 14:14:36.567436 2257 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 14:14:36.572605 kubelet[2257]: I1213 14:14:36.572543 2257 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 13 14:14:36.574230 kubelet[2257]: I1213 14:14:36.574199 2257 volume_manager.go:289] "Starting Kubelet Volume Manager" Dec 13 14:14:36.574854 kubelet[2257]: E1213 14:14:36.574822 2257 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-21-141\" not found" Dec 13 14:14:36.576252 kubelet[2257]: W1213 14:14:36.576158 2257 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.21.141:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.21.141:6443: connect: connection refused Dec 13 14:14:36.576413 kubelet[2257]: E1213 14:14:36.576262 2257 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.21.141:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.21.141:6443: connect: connection refused" logger="UnhandledError" Dec 13 14:14:36.576542 kubelet[2257]: E1213 14:14:36.568861 2257 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.21.141:6443/api/v1/namespaces/default/events\": dial tcp 172.31.21.141:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-21-141.1810c21ab78fb6b5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-21-141,UID:ip-172-31-21-141,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-21-141,},FirstTimestamp:2024-12-13 14:14:36.542228149 +0000 UTC m=+0.868518032,LastTimestamp:2024-12-13 14:14:36.542228149 +0000 UTC m=+0.868518032,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-21-141,}" Dec 13 14:14:36.578300 kubelet[2257]: E1213 14:14:36.578216 2257 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.21.141:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-21-141?timeout=10s\": dial tcp 172.31.21.141:6443: connect: connection refused" interval="200ms" Dec 13 14:14:36.581729 kubelet[2257]: I1213 14:14:36.581676 2257 factory.go:221] Registration of the containerd container factory successfully Dec 13 14:14:36.581901 kubelet[2257]: I1213 14:14:36.581737 2257 factory.go:221] Registration of the systemd container factory successfully Dec 13 14:14:36.581901 kubelet[2257]: I1213 14:14:36.581864 2257 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 14:14:36.583648 kubelet[2257]: I1213 14:14:36.583614 2257 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Dec 13 14:14:36.583914 kubelet[2257]: I1213 14:14:36.583894 2257 reconciler.go:26] "Reconciler: start to sync state" Dec 13 14:14:36.600655 kubelet[2257]: W1213 14:14:36.600571 2257 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.21.141:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.21.141:6443: connect: connection refused Dec 13 14:14:36.601369 kubelet[2257]: E1213 14:14:36.601300 2257 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.21.141:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.21.141:6443: connect: connection refused" logger="UnhandledError" Dec 13 14:14:36.616124 kubelet[2257]: I1213 14:14:36.615910 2257 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 14:14:36.618264 kubelet[2257]: I1213 14:14:36.618202 2257 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Dec 13 14:14:36.618264 kubelet[2257]: I1213 14:14:36.618253 2257 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 14:14:36.618483 kubelet[2257]: I1213 14:14:36.618287 2257 kubelet.go:2321] "Starting kubelet main sync loop" Dec 13 14:14:36.618483 kubelet[2257]: E1213 14:14:36.618366 2257 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 14:14:36.623485 kubelet[2257]: W1213 14:14:36.623420 2257 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.21.141:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.21.141:6443: connect: connection refused Dec 13 14:14:36.623673 kubelet[2257]: E1213 14:14:36.623495 2257 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.21.141:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.21.141:6443: connect: connection refused" logger="UnhandledError" Dec 13 14:14:36.624355 kubelet[2257]: I1213 14:14:36.624320 2257 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 14:14:36.624355 kubelet[2257]: I1213 14:14:36.624352 2257 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 14:14:36.624612 kubelet[2257]: I1213 14:14:36.624412 2257 state_mem.go:36] "Initialized new in-memory state store" Dec 13 14:14:36.654606 kubelet[2257]: I1213 14:14:36.654561 2257 policy_none.go:49] "None policy: Start" Dec 13 14:14:36.655941 kubelet[2257]: I1213 14:14:36.655908 2257 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 14:14:36.656084 kubelet[2257]: I1213 14:14:36.655955 2257 state_mem.go:35] "Initializing new in-memory state store" Dec 13 14:14:36.665881 systemd[1]: Created slice kubepods.slice. Dec 13 14:14:36.675038 systemd[1]: Created slice kubepods-burstable.slice. Dec 13 14:14:36.678119 kubelet[2257]: E1213 14:14:36.676242 2257 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-21-141\" not found" Dec 13 14:14:36.682700 systemd[1]: Created slice kubepods-besteffort.slice. Dec 13 14:14:36.694038 kubelet[2257]: I1213 14:14:36.693025 2257 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 14:14:36.694038 kubelet[2257]: I1213 14:14:36.693606 2257 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 13 14:14:36.694038 kubelet[2257]: I1213 14:14:36.693741 2257 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 13 14:14:36.696630 kubelet[2257]: E1213 14:14:36.696565 2257 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-21-141\" not found" Dec 13 14:14:36.697001 kubelet[2257]: I1213 14:14:36.696980 2257 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 14:14:36.734864 systemd[1]: Created slice kubepods-burstable-poddd75423d81639e3d145dc0fc8f685d39.slice. Dec 13 14:14:36.757841 systemd[1]: Created slice kubepods-burstable-pod9dec585eb85cc3ce820cf6a10f10e5c8.slice. Dec 13 14:14:36.767516 systemd[1]: Created slice kubepods-burstable-podc2142cff50cb60fe77977ffa374187e7.slice. 
Dec 13 14:14:36.779479 kubelet[2257]: E1213 14:14:36.779360 2257 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.21.141:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-21-141?timeout=10s\": dial tcp 172.31.21.141:6443: connect: connection refused" interval="400ms" Dec 13 14:14:36.799845 kubelet[2257]: I1213 14:14:36.799779 2257 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-21-141" Dec 13 14:14:36.800567 kubelet[2257]: E1213 14:14:36.800508 2257 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.21.141:6443/api/v1/nodes\": dial tcp 172.31.21.141:6443: connect: connection refused" node="ip-172-31-21-141" Dec 13 14:14:36.884745 kubelet[2257]: I1213 14:14:36.884663 2257 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9dec585eb85cc3ce820cf6a10f10e5c8-ca-certs\") pod \"kube-controller-manager-ip-172-31-21-141\" (UID: \"9dec585eb85cc3ce820cf6a10f10e5c8\") " pod="kube-system/kube-controller-manager-ip-172-31-21-141" Dec 13 14:14:36.884888 kubelet[2257]: I1213 14:14:36.884776 2257 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/9dec585eb85cc3ce820cf6a10f10e5c8-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-21-141\" (UID: \"9dec585eb85cc3ce820cf6a10f10e5c8\") " pod="kube-system/kube-controller-manager-ip-172-31-21-141" Dec 13 14:14:36.884888 kubelet[2257]: I1213 14:14:36.884819 2257 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9dec585eb85cc3ce820cf6a10f10e5c8-k8s-certs\") pod \"kube-controller-manager-ip-172-31-21-141\" (UID: \"9dec585eb85cc3ce820cf6a10f10e5c8\") " pod="kube-system/kube-controller-manager-ip-172-31-21-141" Dec 13 14:14:36.885046 kubelet[2257]: I1213 14:14:36.884893 2257 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c2142cff50cb60fe77977ffa374187e7-kubeconfig\") pod \"kube-scheduler-ip-172-31-21-141\" (UID: \"c2142cff50cb60fe77977ffa374187e7\") " pod="kube-system/kube-scheduler-ip-172-31-21-141" Dec 13 14:14:36.885046 kubelet[2257]: I1213 14:14:36.884965 2257 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/dd75423d81639e3d145dc0fc8f685d39-ca-certs\") pod \"kube-apiserver-ip-172-31-21-141\" (UID: \"dd75423d81639e3d145dc0fc8f685d39\") " pod="kube-system/kube-apiserver-ip-172-31-21-141" Dec 13 14:14:36.885046 kubelet[2257]: I1213 14:14:36.885010 2257 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/dd75423d81639e3d145dc0fc8f685d39-k8s-certs\") pod \"kube-apiserver-ip-172-31-21-141\" (UID: \"dd75423d81639e3d145dc0fc8f685d39\") " pod="kube-system/kube-apiserver-ip-172-31-21-141" Dec 13 14:14:36.885209 kubelet[2257]: I1213 14:14:36.885070 2257 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9dec585eb85cc3ce820cf6a10f10e5c8-kubeconfig\") pod \"kube-controller-manager-ip-172-31-21-141\" (UID: \"9dec585eb85cc3ce820cf6a10f10e5c8\") " 
pod="kube-system/kube-controller-manager-ip-172-31-21-141" Dec 13 14:14:36.885209 kubelet[2257]: I1213 14:14:36.885130 2257 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9dec585eb85cc3ce820cf6a10f10e5c8-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-21-141\" (UID: \"9dec585eb85cc3ce820cf6a10f10e5c8\") " pod="kube-system/kube-controller-manager-ip-172-31-21-141" Dec 13 14:14:36.885209 kubelet[2257]: I1213 14:14:36.885176 2257 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/dd75423d81639e3d145dc0fc8f685d39-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-21-141\" (UID: \"dd75423d81639e3d145dc0fc8f685d39\") " pod="kube-system/kube-apiserver-ip-172-31-21-141" Dec 13 14:14:37.003767 kubelet[2257]: I1213 14:14:37.002457 2257 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-21-141" Dec 13 14:14:37.003767 kubelet[2257]: E1213 14:14:37.003744 2257 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.21.141:6443/api/v1/nodes\": dial tcp 172.31.21.141:6443: connect: connection refused" node="ip-172-31-21-141" Dec 13 14:14:37.058582 env[1746]: time="2024-12-13T14:14:37.058497205Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-21-141,Uid:dd75423d81639e3d145dc0fc8f685d39,Namespace:kube-system,Attempt:0,}" Dec 13 14:14:37.065572 env[1746]: time="2024-12-13T14:14:37.065174494Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-21-141,Uid:9dec585eb85cc3ce820cf6a10f10e5c8,Namespace:kube-system,Attempt:0,}" Dec 13 14:14:37.072994 env[1746]: time="2024-12-13T14:14:37.072903903Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-21-141,Uid:c2142cff50cb60fe77977ffa374187e7,Namespace:kube-system,Attempt:0,}" Dec 13 14:14:37.180869 kubelet[2257]: E1213 14:14:37.180760 2257 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.21.141:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-21-141?timeout=10s\": dial tcp 172.31.21.141:6443: connect: connection refused" interval="800ms" Dec 13 14:14:37.406464 kubelet[2257]: I1213 14:14:37.406411 2257 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-21-141" Dec 13 14:14:37.406952 kubelet[2257]: E1213 14:14:37.406909 2257 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.21.141:6443/api/v1/nodes\": dial tcp 172.31.21.141:6443: connect: connection refused" node="ip-172-31-21-141" Dec 13 14:14:37.429653 kubelet[2257]: W1213 14:14:37.429583 2257 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.21.141:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.21.141:6443: connect: connection refused Dec 13 14:14:37.429800 kubelet[2257]: E1213 14:14:37.429661 2257 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.21.141:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.21.141:6443: connect: connection refused" logger="UnhandledError" Dec 13 14:14:37.589400 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount2171864309.mount: Deactivated successfully. Dec 13 14:14:37.605087 env[1746]: time="2024-12-13T14:14:37.605030996Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:14:37.606821 env[1746]: time="2024-12-13T14:14:37.606778397Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:14:37.612575 env[1746]: time="2024-12-13T14:14:37.612527146Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:14:37.614718 env[1746]: time="2024-12-13T14:14:37.614675280Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:14:37.617293 env[1746]: time="2024-12-13T14:14:37.617236233Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:14:37.620237 env[1746]: time="2024-12-13T14:14:37.620171081Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:14:37.621688 env[1746]: time="2024-12-13T14:14:37.621626888Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:14:37.622874 kubelet[2257]: W1213 14:14:37.622820 2257 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.21.141:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.21.141:6443: connect: connection refused Dec 13 14:14:37.623010 kubelet[2257]: E1213 14:14:37.622894 2257 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.21.141:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.21.141:6443: connect: connection refused" logger="UnhandledError" Dec 13 14:14:37.623838 env[1746]: time="2024-12-13T14:14:37.623791973Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:14:37.630477 env[1746]: time="2024-12-13T14:14:37.630416152Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:14:37.635913 env[1746]: time="2024-12-13T14:14:37.635853341Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:14:37.640751 env[1746]: time="2024-12-13T14:14:37.640689719Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:14:37.654779 env[1746]: time="2024-12-13T14:14:37.654709522Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:14:37.669627 env[1746]: time="2024-12-13T14:14:37.668437263Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:14:37.670317 env[1746]: time="2024-12-13T14:14:37.670205051Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:14:37.670317 env[1746]: time="2024-12-13T14:14:37.670276288Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:14:37.671149 env[1746]: time="2024-12-13T14:14:37.671073706Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/79974de89c470bd729fbbe645163441c325a79beed72c1a236fa110c179d2f87 pid=2296 runtime=io.containerd.runc.v2 Dec 13 14:14:37.702706 env[1746]: time="2024-12-13T14:14:37.702328174Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:14:37.702706 env[1746]: time="2024-12-13T14:14:37.702430394Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:14:37.702706 env[1746]: time="2024-12-13T14:14:37.702457640Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:14:37.705284 env[1746]: time="2024-12-13T14:14:37.704962827Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6619fef1bcd458736e67d45b45027c7feb200dd0a827fce890e554c0ec84ae13 pid=2319 runtime=io.containerd.runc.v2 Dec 13 14:14:37.712065 env[1746]: time="2024-12-13T14:14:37.711699041Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:14:37.712065 env[1746]: time="2024-12-13T14:14:37.711781198Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:14:37.712065 env[1746]: time="2024-12-13T14:14:37.711807843Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:14:37.721055 env[1746]: time="2024-12-13T14:14:37.720912763Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/45f6a5a721725f6ccd7af6fbde36972c5f60ad21b95680f3dce75af8c7a484a3 pid=2338 runtime=io.containerd.runc.v2 Dec 13 14:14:37.727751 systemd[1]: Started cri-containerd-79974de89c470bd729fbbe645163441c325a79beed72c1a236fa110c179d2f87.scope. Dec 13 14:14:37.767525 systemd[1]: Started cri-containerd-45f6a5a721725f6ccd7af6fbde36972c5f60ad21b95680f3dce75af8c7a484a3.scope. Dec 13 14:14:37.795319 systemd[1]: Started cri-containerd-6619fef1bcd458736e67d45b45027c7feb200dd0a827fce890e554c0ec84ae13.scope. 
Dec 13 14:14:37.895338 kubelet[2257]: W1213 14:14:37.893034 2257 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.21.141:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-21-141&limit=500&resourceVersion=0": dial tcp 172.31.21.141:6443: connect: connection refused Dec 13 14:14:37.895338 kubelet[2257]: E1213 14:14:37.893151 2257 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.21.141:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-21-141&limit=500&resourceVersion=0\": dial tcp 172.31.21.141:6443: connect: connection refused" logger="UnhandledError" Dec 13 14:14:37.922703 env[1746]: time="2024-12-13T14:14:37.921099495Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-21-141,Uid:c2142cff50cb60fe77977ffa374187e7,Namespace:kube-system,Attempt:0,} returns sandbox id \"79974de89c470bd729fbbe645163441c325a79beed72c1a236fa110c179d2f87\"" Dec 13 14:14:37.927343 env[1746]: time="2024-12-13T14:14:37.927277399Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-21-141,Uid:9dec585eb85cc3ce820cf6a10f10e5c8,Namespace:kube-system,Attempt:0,} returns sandbox id \"45f6a5a721725f6ccd7af6fbde36972c5f60ad21b95680f3dce75af8c7a484a3\"" Dec 13 14:14:37.933096 env[1746]: time="2024-12-13T14:14:37.932609089Z" level=info msg="CreateContainer within sandbox \"79974de89c470bd729fbbe645163441c325a79beed72c1a236fa110c179d2f87\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 13 14:14:37.936092 env[1746]: time="2024-12-13T14:14:37.936037820Z" level=info msg="CreateContainer within sandbox \"45f6a5a721725f6ccd7af6fbde36972c5f60ad21b95680f3dce75af8c7a484a3\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 13 14:14:37.946697 env[1746]: time="2024-12-13T14:14:37.946509038Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-21-141,Uid:dd75423d81639e3d145dc0fc8f685d39,Namespace:kube-system,Attempt:0,} returns sandbox id \"6619fef1bcd458736e67d45b45027c7feb200dd0a827fce890e554c0ec84ae13\"" Dec 13 14:14:37.948923 kubelet[2257]: W1213 14:14:37.948834 2257 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.21.141:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.21.141:6443: connect: connection refused Dec 13 14:14:37.949124 kubelet[2257]: E1213 14:14:37.948938 2257 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.21.141:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.21.141:6443: connect: connection refused" logger="UnhandledError" Dec 13 14:14:37.955968 env[1746]: time="2024-12-13T14:14:37.955881358Z" level=info msg="CreateContainer within sandbox \"6619fef1bcd458736e67d45b45027c7feb200dd0a827fce890e554c0ec84ae13\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 13 14:14:37.981786 kubelet[2257]: E1213 14:14:37.981699 2257 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.21.141:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-21-141?timeout=10s\": dial tcp 172.31.21.141:6443: connect: connection refused" interval="1.6s" Dec 13 14:14:37.991821 
env[1746]: time="2024-12-13T14:14:37.991759223Z" level=info msg="CreateContainer within sandbox \"79974de89c470bd729fbbe645163441c325a79beed72c1a236fa110c179d2f87\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"07784aa187cf0ac17765e4bc54e1618a79b4c9b538efd50600e375aef22d2a5b\"" Dec 13 14:14:37.993225 env[1746]: time="2024-12-13T14:14:37.993160527Z" level=info msg="StartContainer for \"07784aa187cf0ac17765e4bc54e1618a79b4c9b538efd50600e375aef22d2a5b\"" Dec 13 14:14:37.997750 env[1746]: time="2024-12-13T14:14:37.997667518Z" level=info msg="CreateContainer within sandbox \"45f6a5a721725f6ccd7af6fbde36972c5f60ad21b95680f3dce75af8c7a484a3\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"b03f79f0e8b4fbaaa707b7f1b2cdbf07ca78d6ff0a8779c3b65145db6f1eda43\"" Dec 13 14:14:37.998788 env[1746]: time="2024-12-13T14:14:37.998725872Z" level=info msg="StartContainer for \"b03f79f0e8b4fbaaa707b7f1b2cdbf07ca78d6ff0a8779c3b65145db6f1eda43\"" Dec 13 14:14:38.005106 env[1746]: time="2024-12-13T14:14:38.005027045Z" level=info msg="CreateContainer within sandbox \"6619fef1bcd458736e67d45b45027c7feb200dd0a827fce890e554c0ec84ae13\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"f733a5f4abb6428b411e6092b0f74ec2f76775992832546742fe7c0c9cd03b23\"" Dec 13 14:14:38.005832 env[1746]: time="2024-12-13T14:14:38.005772248Z" level=info msg="StartContainer for \"f733a5f4abb6428b411e6092b0f74ec2f76775992832546742fe7c0c9cd03b23\"" Dec 13 14:14:38.033726 systemd[1]: Started cri-containerd-07784aa187cf0ac17765e4bc54e1618a79b4c9b538efd50600e375aef22d2a5b.scope. Dec 13 14:14:38.055078 systemd[1]: Started cri-containerd-b03f79f0e8b4fbaaa707b7f1b2cdbf07ca78d6ff0a8779c3b65145db6f1eda43.scope. Dec 13 14:14:38.078150 systemd[1]: Started cri-containerd-f733a5f4abb6428b411e6092b0f74ec2f76775992832546742fe7c0c9cd03b23.scope. Dec 13 14:14:38.188315 env[1746]: time="2024-12-13T14:14:38.185353609Z" level=info msg="StartContainer for \"07784aa187cf0ac17765e4bc54e1618a79b4c9b538efd50600e375aef22d2a5b\" returns successfully" Dec 13 14:14:38.210803 kubelet[2257]: I1213 14:14:38.210294 2257 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-21-141" Dec 13 14:14:38.210803 kubelet[2257]: E1213 14:14:38.210785 2257 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.21.141:6443/api/v1/nodes\": dial tcp 172.31.21.141:6443: connect: connection refused" node="ip-172-31-21-141" Dec 13 14:14:38.211721 env[1746]: time="2024-12-13T14:14:38.211666333Z" level=info msg="StartContainer for \"b03f79f0e8b4fbaaa707b7f1b2cdbf07ca78d6ff0a8779c3b65145db6f1eda43\" returns successfully" Dec 13 14:14:38.231194 env[1746]: time="2024-12-13T14:14:38.231130086Z" level=info msg="StartContainer for \"f733a5f4abb6428b411e6092b0f74ec2f76775992832546742fe7c0c9cd03b23\" returns successfully" Dec 13 14:14:39.813435 kubelet[2257]: I1213 14:14:39.813371 2257 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-21-141" Dec 13 14:14:41.477740 update_engine[1738]: I1213 14:14:41.477674 1738 update_attempter.cc:509] Updating boot flags... 
Dec 13 14:14:43.283225 kubelet[2257]: E1213 14:14:43.283143 2257 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-21-141\" not found" node="ip-172-31-21-141" Dec 13 14:14:43.367716 kubelet[2257]: I1213 14:14:43.367670 2257 kubelet_node_status.go:75] "Successfully registered node" node="ip-172-31-21-141" Dec 13 14:14:43.412399 kubelet[2257]: E1213 14:14:43.412235 2257 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-21-141.1810c21ab78fb6b5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-21-141,UID:ip-172-31-21-141,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-21-141,},FirstTimestamp:2024-12-13 14:14:36.542228149 +0000 UTC m=+0.868518032,LastTimestamp:2024-12-13 14:14:36.542228149 +0000 UTC m=+0.868518032,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-21-141,}" Dec 13 14:14:43.469102 kubelet[2257]: E1213 14:14:43.468964 2257 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-21-141.1810c21ab87a4e72 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-21-141,UID:ip-172-31-21-141,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:ip-172-31-21-141,},FirstTimestamp:2024-12-13 14:14:36.557602418 +0000 UTC m=+0.883892325,LastTimestamp:2024-12-13 14:14:36.557602418 +0000 UTC m=+0.883892325,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-21-141,}" Dec 13 14:14:43.532194 kubelet[2257]: E1213 14:14:43.532057 2257 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-21-141.1810c21abc5a240a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-21-141,UID:ip-172-31-21-141,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node ip-172-31-21-141 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:ip-172-31-21-141,},FirstTimestamp:2024-12-13 14:14:36.622603274 +0000 UTC m=+0.948893145,LastTimestamp:2024-12-13 14:14:36.622603274 +0000 UTC m=+0.948893145,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-21-141,}" Dec 13 14:14:43.548603 kubelet[2257]: I1213 14:14:43.548447 2257 apiserver.go:52] "Watching apiserver" Dec 13 14:14:43.584091 kubelet[2257]: I1213 14:14:43.584027 2257 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Dec 13 14:14:45.664528 systemd[1]: Reloading. 
Dec 13 14:14:45.838688 /usr/lib/systemd/system-generators/torcx-generator[2649]: time="2024-12-13T14:14:45Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 14:14:45.839414 /usr/lib/systemd/system-generators/torcx-generator[2649]: time="2024-12-13T14:14:45Z" level=info msg="torcx already run" Dec 13 14:14:46.051893 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 14:14:46.052537 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 14:14:46.094654 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 14:14:46.353820 kubelet[2257]: I1213 14:14:46.353612 2257 dynamic_cafile_content.go:174] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 14:14:46.354194 systemd[1]: Stopping kubelet.service... Dec 13 14:14:46.376450 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 14:14:46.376829 systemd[1]: Stopped kubelet.service. Dec 13 14:14:46.376906 systemd[1]: kubelet.service: Consumed 1.558s CPU time. Dec 13 14:14:46.380575 systemd[1]: Starting kubelet.service... Dec 13 14:14:46.707692 systemd[1]: Started kubelet.service. Dec 13 14:14:46.811590 kubelet[2706]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 14:14:46.811590 kubelet[2706]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 14:14:46.811590 kubelet[2706]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 14:14:46.812285 kubelet[2706]: I1213 14:14:46.811789 2706 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 14:14:46.836428 kubelet[2706]: I1213 14:14:46.834185 2706 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Dec 13 14:14:46.836428 kubelet[2706]: I1213 14:14:46.834237 2706 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 14:14:46.836428 kubelet[2706]: I1213 14:14:46.834674 2706 server.go:929] "Client rotation is on, will bootstrap in background" Dec 13 14:14:46.839607 kubelet[2706]: I1213 14:14:46.839554 2706 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Dec 13 14:14:46.843817 kubelet[2706]: I1213 14:14:46.843709 2706 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 14:14:46.851200 kubelet[2706]: E1213 14:14:46.851146 2706 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Dec 13 14:14:46.851469 kubelet[2706]: I1213 14:14:46.851441 2706 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Dec 13 14:14:46.858666 kubelet[2706]: I1213 14:14:46.858621 2706 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Dec 13 14:14:46.858866 kubelet[2706]: I1213 14:14:46.858838 2706 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Dec 13 14:14:46.859154 kubelet[2706]: I1213 14:14:46.859105 2706 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 14:14:46.864282 kubelet[2706]: I1213 14:14:46.859155 2706 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-21-141","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 13 14:14:46.864282 kubelet[2706]: I1213 14:14:46.859486 2706 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 14:14:46.864282 kubelet[2706]: I1213 14:14:46.859510 2706 container_manager_linux.go:300] "Creating device plugin manager" Dec 13 14:14:46.864282 kubelet[2706]: I1213 14:14:46.859570 2706 state_mem.go:36] "Initialized new in-memory state store" Dec 13 14:14:46.864282 kubelet[2706]: I1213 14:14:46.859752 2706 kubelet.go:408] "Attempting to sync node with API server" Dec 13 14:14:46.864814 kubelet[2706]: I1213 14:14:46.859775 2706 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 14:14:46.864814 kubelet[2706]: I1213 
14:14:46.859819 2706 kubelet.go:314] "Adding apiserver pod source" Dec 13 14:14:46.864814 kubelet[2706]: I1213 14:14:46.859840 2706 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 14:14:46.864419 sudo[2720]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Dec 13 14:14:46.865233 sudo[2720]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Dec 13 14:14:46.898426 kubelet[2706]: I1213 14:14:46.898009 2706 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Dec 13 14:14:46.901514 kubelet[2706]: I1213 14:14:46.901461 2706 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 14:14:46.902215 kubelet[2706]: I1213 14:14:46.902171 2706 server.go:1269] "Started kubelet" Dec 13 14:14:46.920766 kubelet[2706]: I1213 14:14:46.920713 2706 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 14:14:46.932868 kubelet[2706]: E1213 14:14:46.932830 2706 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 14:14:46.939674 kubelet[2706]: I1213 14:14:46.936998 2706 volume_manager.go:289] "Starting Kubelet Volume Manager" Dec 13 14:14:46.939674 kubelet[2706]: I1213 14:14:46.938744 2706 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Dec 13 14:14:46.940936 kubelet[2706]: I1213 14:14:46.939783 2706 reconciler.go:26] "Reconciler: start to sync state" Dec 13 14:14:46.947583 kubelet[2706]: I1213 14:14:46.947520 2706 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 14:14:46.949056 kubelet[2706]: I1213 14:14:46.948984 2706 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 14:14:46.952436 kubelet[2706]: I1213 14:14:46.951043 2706 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Dec 13 14:14:46.952436 kubelet[2706]: I1213 14:14:46.951091 2706 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 14:14:46.952436 kubelet[2706]: I1213 14:14:46.951123 2706 kubelet.go:2321] "Starting kubelet main sync loop" Dec 13 14:14:46.952436 kubelet[2706]: E1213 14:14:46.951199 2706 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 14:14:46.953002 kubelet[2706]: I1213 14:14:46.952968 2706 server.go:460] "Adding debug handlers to kubelet server" Dec 13 14:14:46.961496 kubelet[2706]: I1213 14:14:46.956713 2706 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 14:14:46.962032 kubelet[2706]: I1213 14:14:46.962000 2706 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 14:14:46.962619 kubelet[2706]: I1213 14:14:46.962584 2706 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 13 14:14:46.968416 kubelet[2706]: I1213 14:14:46.965289 2706 factory.go:221] Registration of the systemd container factory successfully Dec 13 14:14:46.968416 kubelet[2706]: I1213 14:14:46.965492 2706 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 14:14:46.977429 kubelet[2706]: I1213 14:14:46.976365 2706 factory.go:221] Registration of the containerd container factory successfully Dec 13 14:14:47.051914 kubelet[2706]: E1213 14:14:47.051866 2706 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 13 14:14:47.084838 kubelet[2706]: I1213 14:14:47.084806 2706 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 14:14:47.085245 kubelet[2706]: I1213 14:14:47.085221 2706 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 14:14:47.085473 kubelet[2706]: I1213 14:14:47.085357 2706 state_mem.go:36] "Initialized new in-memory state store" Dec 13 14:14:47.085977 kubelet[2706]: I1213 14:14:47.085950 2706 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 13 14:14:47.086202 kubelet[2706]: I1213 14:14:47.086159 2706 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 13 14:14:47.086329 kubelet[2706]: I1213 14:14:47.086309 2706 policy_none.go:49] "None policy: Start" Dec 13 14:14:47.087930 kubelet[2706]: I1213 14:14:47.087900 2706 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 14:14:47.088117 kubelet[2706]: I1213 14:14:47.088096 2706 state_mem.go:35] "Initializing new in-memory state store" Dec 13 14:14:47.088582 kubelet[2706]: I1213 14:14:47.088558 2706 state_mem.go:75] "Updated machine memory state" Dec 13 14:14:47.100095 kubelet[2706]: I1213 14:14:47.100037 2706 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 14:14:47.103279 kubelet[2706]: I1213 14:14:47.103218 2706 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 13 14:14:47.112738 kubelet[2706]: I1213 14:14:47.104095 2706 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 13 14:14:47.113435 kubelet[2706]: I1213 14:14:47.113391 2706 plugin_manager.go:118] 
"Starting Kubelet Plugin Manager" Dec 13 14:14:47.225813 kubelet[2706]: I1213 14:14:47.225678 2706 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-21-141" Dec 13 14:14:47.235930 kubelet[2706]: I1213 14:14:47.235877 2706 kubelet_node_status.go:111] "Node was previously registered" node="ip-172-31-21-141" Dec 13 14:14:47.236081 kubelet[2706]: I1213 14:14:47.236010 2706 kubelet_node_status.go:75] "Successfully registered node" node="ip-172-31-21-141" Dec 13 14:14:47.343654 kubelet[2706]: I1213 14:14:47.343590 2706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/dd75423d81639e3d145dc0fc8f685d39-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-21-141\" (UID: \"dd75423d81639e3d145dc0fc8f685d39\") " pod="kube-system/kube-apiserver-ip-172-31-21-141" Dec 13 14:14:47.343940 kubelet[2706]: I1213 14:14:47.343909 2706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9dec585eb85cc3ce820cf6a10f10e5c8-kubeconfig\") pod \"kube-controller-manager-ip-172-31-21-141\" (UID: \"9dec585eb85cc3ce820cf6a10f10e5c8\") " pod="kube-system/kube-controller-manager-ip-172-31-21-141" Dec 13 14:14:47.344119 kubelet[2706]: I1213 14:14:47.344092 2706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c2142cff50cb60fe77977ffa374187e7-kubeconfig\") pod \"kube-scheduler-ip-172-31-21-141\" (UID: \"c2142cff50cb60fe77977ffa374187e7\") " pod="kube-system/kube-scheduler-ip-172-31-21-141" Dec 13 14:14:47.344276 kubelet[2706]: I1213 14:14:47.344250 2706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9dec585eb85cc3ce820cf6a10f10e5c8-k8s-certs\") pod \"kube-controller-manager-ip-172-31-21-141\" (UID: \"9dec585eb85cc3ce820cf6a10f10e5c8\") " pod="kube-system/kube-controller-manager-ip-172-31-21-141" Dec 13 14:14:47.344450 kubelet[2706]: I1213 14:14:47.344418 2706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9dec585eb85cc3ce820cf6a10f10e5c8-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-21-141\" (UID: \"9dec585eb85cc3ce820cf6a10f10e5c8\") " pod="kube-system/kube-controller-manager-ip-172-31-21-141" Dec 13 14:14:47.344613 kubelet[2706]: I1213 14:14:47.344587 2706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/dd75423d81639e3d145dc0fc8f685d39-ca-certs\") pod \"kube-apiserver-ip-172-31-21-141\" (UID: \"dd75423d81639e3d145dc0fc8f685d39\") " pod="kube-system/kube-apiserver-ip-172-31-21-141" Dec 13 14:14:47.344759 kubelet[2706]: I1213 14:14:47.344734 2706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/dd75423d81639e3d145dc0fc8f685d39-k8s-certs\") pod \"kube-apiserver-ip-172-31-21-141\" (UID: \"dd75423d81639e3d145dc0fc8f685d39\") " pod="kube-system/kube-apiserver-ip-172-31-21-141" Dec 13 14:14:47.344915 kubelet[2706]: I1213 14:14:47.344889 2706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/9dec585eb85cc3ce820cf6a10f10e5c8-ca-certs\") pod \"kube-controller-manager-ip-172-31-21-141\" (UID: \"9dec585eb85cc3ce820cf6a10f10e5c8\") " pod="kube-system/kube-controller-manager-ip-172-31-21-141" Dec 13 14:14:47.345064 kubelet[2706]: I1213 14:14:47.345033 2706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/9dec585eb85cc3ce820cf6a10f10e5c8-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-21-141\" (UID: \"9dec585eb85cc3ce820cf6a10f10e5c8\") " pod="kube-system/kube-controller-manager-ip-172-31-21-141" Dec 13 14:14:47.878418 kubelet[2706]: I1213 14:14:47.878368 2706 apiserver.go:52] "Watching apiserver" Dec 13 14:14:47.938987 kubelet[2706]: I1213 14:14:47.938939 2706 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Dec 13 14:14:47.953519 sudo[2720]: pam_unix(sudo:session): session closed for user root Dec 13 14:14:47.961087 kubelet[2706]: I1213 14:14:47.960892 2706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-21-141" podStartSLOduration=0.960872387 podStartE2EDuration="960.872387ms" podCreationTimestamp="2024-12-13 14:14:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:14:47.960606998 +0000 UTC m=+1.243927779" watchObservedRunningTime="2024-12-13 14:14:47.960872387 +0000 UTC m=+1.244193132" Dec 13 14:14:47.996728 kubelet[2706]: I1213 14:14:47.996645 2706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-21-141" podStartSLOduration=0.996618461 podStartE2EDuration="996.618461ms" podCreationTimestamp="2024-12-13 14:14:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:14:47.984150619 +0000 UTC m=+1.267471388" watchObservedRunningTime="2024-12-13 14:14:47.996618461 +0000 UTC m=+1.279939218" Dec 13 14:14:48.015654 kubelet[2706]: I1213 14:14:48.015566 2706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-21-141" podStartSLOduration=1.015544487 podStartE2EDuration="1.015544487s" podCreationTimestamp="2024-12-13 14:14:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:14:47.99755876 +0000 UTC m=+1.280879529" watchObservedRunningTime="2024-12-13 14:14:48.015544487 +0000 UTC m=+1.298865232" Dec 13 14:14:48.063950 kubelet[2706]: E1213 14:14:48.063864 2706 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ip-172-31-21-141\" already exists" pod="kube-system/kube-scheduler-ip-172-31-21-141" Dec 13 14:14:48.065091 kubelet[2706]: E1213 14:14:48.065046 2706 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ip-172-31-21-141\" already exists" pod="kube-system/kube-controller-manager-ip-172-31-21-141" Dec 13 14:14:48.065987 kubelet[2706]: E1213 14:14:48.065925 2706 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ip-172-31-21-141\" already exists" pod="kube-system/kube-apiserver-ip-172-31-21-141" Dec 13 14:14:50.990905 kubelet[2706]: I1213 14:14:50.990849 2706 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" 
CIDR="192.168.0.0/24" Dec 13 14:14:50.993097 env[1746]: time="2024-12-13T14:14:50.993028263Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Dec 13 14:14:50.994107 kubelet[2706]: I1213 14:14:50.994066 2706 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 13 14:14:51.919473 systemd[1]: Created slice kubepods-besteffort-pod6a4566e6_34de_472e_87f5_addcbc6ae758.slice. Dec 13 14:14:51.955513 systemd[1]: Created slice kubepods-burstable-pod8868f266_efe6_4915_9761_ebc6f7d23357.slice. Dec 13 14:14:51.957604 amazon-ssm-agent[1725]: 2024-12-13 14:14:51 INFO [MessagingDeliveryService] [Association] Schedule manager refreshed with 0 associations, 0 new associations associated Dec 13 14:14:51.976660 kubelet[2706]: I1213 14:14:51.976573 2706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8868f266-efe6-4915-9761-ebc6f7d23357-clustermesh-secrets\") pod \"cilium-mthmf\" (UID: \"8868f266-efe6-4915-9761-ebc6f7d23357\") " pod="kube-system/cilium-mthmf" Dec 13 14:14:51.976660 kubelet[2706]: I1213 14:14:51.976659 2706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8868f266-efe6-4915-9761-ebc6f7d23357-host-proc-sys-kernel\") pod \"cilium-mthmf\" (UID: \"8868f266-efe6-4915-9761-ebc6f7d23357\") " pod="kube-system/cilium-mthmf" Dec 13 14:14:51.976926 kubelet[2706]: I1213 14:14:51.976710 2706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8868f266-efe6-4915-9761-ebc6f7d23357-hostproc\") pod \"cilium-mthmf\" (UID: \"8868f266-efe6-4915-9761-ebc6f7d23357\") " pod="kube-system/cilium-mthmf" Dec 13 14:14:51.976926 kubelet[2706]: I1213 14:14:51.976782 2706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8868f266-efe6-4915-9761-ebc6f7d23357-host-proc-sys-net\") pod \"cilium-mthmf\" (UID: \"8868f266-efe6-4915-9761-ebc6f7d23357\") " pod="kube-system/cilium-mthmf" Dec 13 14:14:51.976926 kubelet[2706]: I1213 14:14:51.976867 2706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/6a4566e6-34de-472e-87f5-addcbc6ae758-kube-proxy\") pod \"kube-proxy-z7fc8\" (UID: \"6a4566e6-34de-472e-87f5-addcbc6ae758\") " pod="kube-system/kube-proxy-z7fc8" Dec 13 14:14:51.976926 kubelet[2706]: I1213 14:14:51.976916 2706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8868f266-efe6-4915-9761-ebc6f7d23357-cilium-run\") pod \"cilium-mthmf\" (UID: \"8868f266-efe6-4915-9761-ebc6f7d23357\") " pod="kube-system/cilium-mthmf" Dec 13 14:14:51.977184 kubelet[2706]: I1213 14:14:51.976966 2706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8868f266-efe6-4915-9761-ebc6f7d23357-cilium-cgroup\") pod \"cilium-mthmf\" (UID: \"8868f266-efe6-4915-9761-ebc6f7d23357\") " pod="kube-system/cilium-mthmf" Dec 13 14:14:51.977184 kubelet[2706]: I1213 14:14:51.977014 2706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8868f266-efe6-4915-9761-ebc6f7d23357-etc-cni-netd\") pod \"cilium-mthmf\" (UID: \"8868f266-efe6-4915-9761-ebc6f7d23357\") " pod="kube-system/cilium-mthmf" Dec 13 14:14:51.977184 kubelet[2706]: I1213 14:14:51.977060 2706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8868f266-efe6-4915-9761-ebc6f7d23357-xtables-lock\") pod \"cilium-mthmf\" (UID: \"8868f266-efe6-4915-9761-ebc6f7d23357\") " pod="kube-system/cilium-mthmf" Dec 13 14:14:51.977184 kubelet[2706]: I1213 14:14:51.977102 2706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8868f266-efe6-4915-9761-ebc6f7d23357-hubble-tls\") pod \"cilium-mthmf\" (UID: \"8868f266-efe6-4915-9761-ebc6f7d23357\") " pod="kube-system/cilium-mthmf" Dec 13 14:14:51.977184 kubelet[2706]: I1213 14:14:51.977150 2706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rpzq4\" (UniqueName: \"kubernetes.io/projected/8868f266-efe6-4915-9761-ebc6f7d23357-kube-api-access-rpzq4\") pod \"cilium-mthmf\" (UID: \"8868f266-efe6-4915-9761-ebc6f7d23357\") " pod="kube-system/cilium-mthmf" Dec 13 14:14:51.977527 kubelet[2706]: I1213 14:14:51.977211 2706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6a4566e6-34de-472e-87f5-addcbc6ae758-xtables-lock\") pod \"kube-proxy-z7fc8\" (UID: \"6a4566e6-34de-472e-87f5-addcbc6ae758\") " pod="kube-system/kube-proxy-z7fc8" Dec 13 14:14:51.977527 kubelet[2706]: I1213 14:14:51.977256 2706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6a4566e6-34de-472e-87f5-addcbc6ae758-lib-modules\") pod \"kube-proxy-z7fc8\" (UID: \"6a4566e6-34de-472e-87f5-addcbc6ae758\") " pod="kube-system/kube-proxy-z7fc8" Dec 13 14:14:51.977527 kubelet[2706]: I1213 14:14:51.977298 2706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vcmt7\" (UniqueName: \"kubernetes.io/projected/6a4566e6-34de-472e-87f5-addcbc6ae758-kube-api-access-vcmt7\") pod \"kube-proxy-z7fc8\" (UID: \"6a4566e6-34de-472e-87f5-addcbc6ae758\") " pod="kube-system/kube-proxy-z7fc8" Dec 13 14:14:51.977527 kubelet[2706]: I1213 14:14:51.977344 2706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8868f266-efe6-4915-9761-ebc6f7d23357-cni-path\") pod \"cilium-mthmf\" (UID: \"8868f266-efe6-4915-9761-ebc6f7d23357\") " pod="kube-system/cilium-mthmf" Dec 13 14:14:51.977527 kubelet[2706]: I1213 14:14:51.977410 2706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8868f266-efe6-4915-9761-ebc6f7d23357-lib-modules\") pod \"cilium-mthmf\" (UID: \"8868f266-efe6-4915-9761-ebc6f7d23357\") " pod="kube-system/cilium-mthmf" Dec 13 14:14:51.977835 kubelet[2706]: I1213 14:14:51.977460 2706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8868f266-efe6-4915-9761-ebc6f7d23357-cilium-config-path\") pod \"cilium-mthmf\" (UID: 
\"8868f266-efe6-4915-9761-ebc6f7d23357\") " pod="kube-system/cilium-mthmf" Dec 13 14:14:51.977835 kubelet[2706]: I1213 14:14:51.977511 2706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8868f266-efe6-4915-9761-ebc6f7d23357-bpf-maps\") pod \"cilium-mthmf\" (UID: \"8868f266-efe6-4915-9761-ebc6f7d23357\") " pod="kube-system/cilium-mthmf" Dec 13 14:14:52.054689 kubelet[2706]: E1213 14:14:52.054603 2706 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[bpf-maps cilium-cgroup cilium-config-path cilium-run clustermesh-secrets cni-path etc-cni-netd host-proc-sys-kernel host-proc-sys-net hostproc hubble-tls kube-api-access-rpzq4 lib-modules xtables-lock], unattached volumes=[], failed to process volumes=[]: context canceled" pod="kube-system/cilium-mthmf" podUID="8868f266-efe6-4915-9761-ebc6f7d23357" Dec 13 14:14:52.078966 kubelet[2706]: I1213 14:14:52.078907 2706 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Dec 13 14:14:52.182799 systemd[1]: Created slice kubepods-besteffort-podf50ce617_4989_437b_ac3c_3c48e2f2d3f6.slice. Dec 13 14:14:52.233618 env[1746]: time="2024-12-13T14:14:52.232808213Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-z7fc8,Uid:6a4566e6-34de-472e-87f5-addcbc6ae758,Namespace:kube-system,Attempt:0,}" Dec 13 14:14:52.275608 env[1746]: time="2024-12-13T14:14:52.275491163Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:14:52.275928 env[1746]: time="2024-12-13T14:14:52.275860061Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:14:52.276131 env[1746]: time="2024-12-13T14:14:52.276066296Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:14:52.276643 env[1746]: time="2024-12-13T14:14:52.276572152Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b52939442a52fd083b01837865197edc7d96724167ae30657ebb076f67a04020 pid=2771 runtime=io.containerd.runc.v2 Dec 13 14:14:52.279853 kubelet[2706]: I1213 14:14:52.279680 2706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f50ce617-4989-437b-ac3c-3c48e2f2d3f6-cilium-config-path\") pod \"cilium-operator-5d85765b45-6lwk5\" (UID: \"f50ce617-4989-437b-ac3c-3c48e2f2d3f6\") " pod="kube-system/cilium-operator-5d85765b45-6lwk5" Dec 13 14:14:52.279853 kubelet[2706]: I1213 14:14:52.279756 2706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-frg8g\" (UniqueName: \"kubernetes.io/projected/f50ce617-4989-437b-ac3c-3c48e2f2d3f6-kube-api-access-frg8g\") pod \"cilium-operator-5d85765b45-6lwk5\" (UID: \"f50ce617-4989-437b-ac3c-3c48e2f2d3f6\") " pod="kube-system/cilium-operator-5d85765b45-6lwk5" Dec 13 14:14:52.310880 systemd[1]: Started cri-containerd-b52939442a52fd083b01837865197edc7d96724167ae30657ebb076f67a04020.scope. 
Dec 13 14:14:52.484524 env[1746]: time="2024-12-13T14:14:52.483632351Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-z7fc8,Uid:6a4566e6-34de-472e-87f5-addcbc6ae758,Namespace:kube-system,Attempt:0,} returns sandbox id \"b52939442a52fd083b01837865197edc7d96724167ae30657ebb076f67a04020\"" Dec 13 14:14:52.493777 env[1746]: time="2024-12-13T14:14:52.493694965Z" level=info msg="CreateContainer within sandbox \"b52939442a52fd083b01837865197edc7d96724167ae30657ebb076f67a04020\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 13 14:14:52.494297 env[1746]: time="2024-12-13T14:14:52.494250900Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-6lwk5,Uid:f50ce617-4989-437b-ac3c-3c48e2f2d3f6,Namespace:kube-system,Attempt:0,}" Dec 13 14:14:52.530047 env[1746]: time="2024-12-13T14:14:52.529988029Z" level=info msg="CreateContainer within sandbox \"b52939442a52fd083b01837865197edc7d96724167ae30657ebb076f67a04020\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"9b58641c4420933aaa86d06e370b7a1432da993cc50e952885b56153d1b83e2b\"" Dec 13 14:14:52.531760 env[1746]: time="2024-12-13T14:14:52.531708939Z" level=info msg="StartContainer for \"9b58641c4420933aaa86d06e370b7a1432da993cc50e952885b56153d1b83e2b\"" Dec 13 14:14:52.539935 env[1746]: time="2024-12-13T14:14:52.538988431Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:14:52.539935 env[1746]: time="2024-12-13T14:14:52.539060313Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:14:52.539935 env[1746]: time="2024-12-13T14:14:52.539086139Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:14:52.540626 env[1746]: time="2024-12-13T14:14:52.540424283Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f536c8cf7cfedc49bc0a35148a0cf6e35008b67dcbdfcf96b1476a611e7c1e98 pid=2814 runtime=io.containerd.runc.v2 Dec 13 14:14:52.574005 systemd[1]: Started cri-containerd-f536c8cf7cfedc49bc0a35148a0cf6e35008b67dcbdfcf96b1476a611e7c1e98.scope. Dec 13 14:14:52.591962 systemd[1]: Started cri-containerd-9b58641c4420933aaa86d06e370b7a1432da993cc50e952885b56153d1b83e2b.scope. 
Dec 13 14:14:52.673845 env[1746]: time="2024-12-13T14:14:52.673765328Z" level=info msg="StartContainer for \"9b58641c4420933aaa86d06e370b7a1432da993cc50e952885b56153d1b83e2b\" returns successfully" Dec 13 14:14:52.687443 env[1746]: time="2024-12-13T14:14:52.686491538Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-6lwk5,Uid:f50ce617-4989-437b-ac3c-3c48e2f2d3f6,Namespace:kube-system,Attempt:0,} returns sandbox id \"f536c8cf7cfedc49bc0a35148a0cf6e35008b67dcbdfcf96b1476a611e7c1e98\"" Dec 13 14:14:52.692546 env[1746]: time="2024-12-13T14:14:52.692475617Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Dec 13 14:14:53.086510 kubelet[2706]: I1213 14:14:53.085645 2706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8868f266-efe6-4915-9761-ebc6f7d23357-lib-modules\") pod \"8868f266-efe6-4915-9761-ebc6f7d23357\" (UID: \"8868f266-efe6-4915-9761-ebc6f7d23357\") " Dec 13 14:14:53.086510 kubelet[2706]: I1213 14:14:53.085715 2706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rpzq4\" (UniqueName: \"kubernetes.io/projected/8868f266-efe6-4915-9761-ebc6f7d23357-kube-api-access-rpzq4\") pod \"8868f266-efe6-4915-9761-ebc6f7d23357\" (UID: \"8868f266-efe6-4915-9761-ebc6f7d23357\") " Dec 13 14:14:53.086510 kubelet[2706]: I1213 14:14:53.085754 2706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8868f266-efe6-4915-9761-ebc6f7d23357-xtables-lock\") pod \"8868f266-efe6-4915-9761-ebc6f7d23357\" (UID: \"8868f266-efe6-4915-9761-ebc6f7d23357\") " Dec 13 14:14:53.086510 kubelet[2706]: I1213 14:14:53.085754 2706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8868f266-efe6-4915-9761-ebc6f7d23357-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "8868f266-efe6-4915-9761-ebc6f7d23357" (UID: "8868f266-efe6-4915-9761-ebc6f7d23357"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:14:53.086510 kubelet[2706]: I1213 14:14:53.085798 2706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8868f266-efe6-4915-9761-ebc6f7d23357-hubble-tls\") pod \"8868f266-efe6-4915-9761-ebc6f7d23357\" (UID: \"8868f266-efe6-4915-9761-ebc6f7d23357\") " Dec 13 14:14:53.086510 kubelet[2706]: I1213 14:14:53.085833 2706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8868f266-efe6-4915-9761-ebc6f7d23357-cni-path\") pod \"8868f266-efe6-4915-9761-ebc6f7d23357\" (UID: \"8868f266-efe6-4915-9761-ebc6f7d23357\") " Dec 13 14:14:53.087461 kubelet[2706]: I1213 14:14:53.085873 2706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8868f266-efe6-4915-9761-ebc6f7d23357-cilium-config-path\") pod \"8868f266-efe6-4915-9761-ebc6f7d23357\" (UID: \"8868f266-efe6-4915-9761-ebc6f7d23357\") " Dec 13 14:14:53.087461 kubelet[2706]: I1213 14:14:53.085912 2706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8868f266-efe6-4915-9761-ebc6f7d23357-clustermesh-secrets\") pod \"8868f266-efe6-4915-9761-ebc6f7d23357\" (UID: \"8868f266-efe6-4915-9761-ebc6f7d23357\") " Dec 13 14:14:53.087461 kubelet[2706]: I1213 14:14:53.085951 2706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8868f266-efe6-4915-9761-ebc6f7d23357-cilium-run\") pod \"8868f266-efe6-4915-9761-ebc6f7d23357\" (UID: \"8868f266-efe6-4915-9761-ebc6f7d23357\") " Dec 13 14:14:53.087461 kubelet[2706]: I1213 14:14:53.085984 2706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8868f266-efe6-4915-9761-ebc6f7d23357-cilium-cgroup\") pod \"8868f266-efe6-4915-9761-ebc6f7d23357\" (UID: \"8868f266-efe6-4915-9761-ebc6f7d23357\") " Dec 13 14:14:53.087461 kubelet[2706]: I1213 14:14:53.086022 2706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8868f266-efe6-4915-9761-ebc6f7d23357-host-proc-sys-kernel\") pod \"8868f266-efe6-4915-9761-ebc6f7d23357\" (UID: \"8868f266-efe6-4915-9761-ebc6f7d23357\") " Dec 13 14:14:53.087461 kubelet[2706]: I1213 14:14:53.086069 2706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8868f266-efe6-4915-9761-ebc6f7d23357-hostproc\") pod \"8868f266-efe6-4915-9761-ebc6f7d23357\" (UID: \"8868f266-efe6-4915-9761-ebc6f7d23357\") " Dec 13 14:14:53.087820 kubelet[2706]: I1213 14:14:53.086154 2706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8868f266-efe6-4915-9761-ebc6f7d23357-hostproc" (OuterVolumeSpecName: "hostproc") pod "8868f266-efe6-4915-9761-ebc6f7d23357" (UID: "8868f266-efe6-4915-9761-ebc6f7d23357"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:14:53.087820 kubelet[2706]: I1213 14:14:53.086201 2706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8868f266-efe6-4915-9761-ebc6f7d23357-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "8868f266-efe6-4915-9761-ebc6f7d23357" (UID: "8868f266-efe6-4915-9761-ebc6f7d23357"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:14:53.090576 kubelet[2706]: I1213 14:14:53.090514 2706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8868f266-efe6-4915-9761-ebc6f7d23357-host-proc-sys-net\") pod \"8868f266-efe6-4915-9761-ebc6f7d23357\" (UID: \"8868f266-efe6-4915-9761-ebc6f7d23357\") " Dec 13 14:14:53.090770 kubelet[2706]: I1213 14:14:53.090584 2706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8868f266-efe6-4915-9761-ebc6f7d23357-etc-cni-netd\") pod \"8868f266-efe6-4915-9761-ebc6f7d23357\" (UID: \"8868f266-efe6-4915-9761-ebc6f7d23357\") " Dec 13 14:14:53.090770 kubelet[2706]: I1213 14:14:53.090623 2706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8868f266-efe6-4915-9761-ebc6f7d23357-bpf-maps\") pod \"8868f266-efe6-4915-9761-ebc6f7d23357\" (UID: \"8868f266-efe6-4915-9761-ebc6f7d23357\") " Dec 13 14:14:53.092821 kubelet[2706]: I1213 14:14:53.092752 2706 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8868f266-efe6-4915-9761-ebc6f7d23357-lib-modules\") on node \"ip-172-31-21-141\" DevicePath \"\"" Dec 13 14:14:53.094880 kubelet[2706]: I1213 14:14:53.094809 2706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8868f266-efe6-4915-9761-ebc6f7d23357-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "8868f266-efe6-4915-9761-ebc6f7d23357" (UID: "8868f266-efe6-4915-9761-ebc6f7d23357"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:14:53.133456 systemd[1]: var-lib-kubelet-pods-8868f266\x2defe6\x2d4915\x2d9761\x2debc6f7d23357-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2drpzq4.mount: Deactivated successfully. Dec 13 14:14:53.138166 kubelet[2706]: I1213 14:14:53.095144 2706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8868f266-efe6-4915-9761-ebc6f7d23357-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "8868f266-efe6-4915-9761-ebc6f7d23357" (UID: "8868f266-efe6-4915-9761-ebc6f7d23357"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:14:53.138166 kubelet[2706]: I1213 14:14:53.095209 2706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8868f266-efe6-4915-9761-ebc6f7d23357-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "8868f266-efe6-4915-9761-ebc6f7d23357" (UID: "8868f266-efe6-4915-9761-ebc6f7d23357"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:14:53.138166 kubelet[2706]: I1213 14:14:53.106679 2706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8868f266-efe6-4915-9761-ebc6f7d23357-cni-path" (OuterVolumeSpecName: "cni-path") pod "8868f266-efe6-4915-9761-ebc6f7d23357" (UID: "8868f266-efe6-4915-9761-ebc6f7d23357"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:14:53.138465 kubelet[2706]: I1213 14:14:53.111099 2706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8868f266-efe6-4915-9761-ebc6f7d23357-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "8868f266-efe6-4915-9761-ebc6f7d23357" (UID: "8868f266-efe6-4915-9761-ebc6f7d23357"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 14:14:53.138465 kubelet[2706]: I1213 14:14:53.114241 2706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8868f266-efe6-4915-9761-ebc6f7d23357-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "8868f266-efe6-4915-9761-ebc6f7d23357" (UID: "8868f266-efe6-4915-9761-ebc6f7d23357"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:14:53.138465 kubelet[2706]: I1213 14:14:53.114288 2706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8868f266-efe6-4915-9761-ebc6f7d23357-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "8868f266-efe6-4915-9761-ebc6f7d23357" (UID: "8868f266-efe6-4915-9761-ebc6f7d23357"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:14:53.138465 kubelet[2706]: I1213 14:14:53.114336 2706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8868f266-efe6-4915-9761-ebc6f7d23357-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "8868f266-efe6-4915-9761-ebc6f7d23357" (UID: "8868f266-efe6-4915-9761-ebc6f7d23357"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:14:53.138704 kubelet[2706]: I1213 14:14:53.115310 2706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-z7fc8" podStartSLOduration=2.115285851 podStartE2EDuration="2.115285851s" podCreationTimestamp="2024-12-13 14:14:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:14:53.114690812 +0000 UTC m=+6.398011569" watchObservedRunningTime="2024-12-13 14:14:53.115285851 +0000 UTC m=+6.398606644" Dec 13 14:14:53.140264 kubelet[2706]: I1213 14:14:53.140183 2706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8868f266-efe6-4915-9761-ebc6f7d23357-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "8868f266-efe6-4915-9761-ebc6f7d23357" (UID: "8868f266-efe6-4915-9761-ebc6f7d23357"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 14:14:53.142493 kubelet[2706]: I1213 14:14:53.140358 2706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8868f266-efe6-4915-9761-ebc6f7d23357-kube-api-access-rpzq4" (OuterVolumeSpecName: "kube-api-access-rpzq4") pod "8868f266-efe6-4915-9761-ebc6f7d23357" (UID: "8868f266-efe6-4915-9761-ebc6f7d23357"). InnerVolumeSpecName "kube-api-access-rpzq4". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 14:14:53.146572 systemd[1]: var-lib-kubelet-pods-8868f266\x2defe6\x2d4915\x2d9761\x2debc6f7d23357-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Dec 13 14:14:53.157515 kubelet[2706]: I1213 14:14:53.157083 2706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8868f266-efe6-4915-9761-ebc6f7d23357-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "8868f266-efe6-4915-9761-ebc6f7d23357" (UID: "8868f266-efe6-4915-9761-ebc6f7d23357"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 14:14:53.159687 systemd[1]: var-lib-kubelet-pods-8868f266\x2defe6\x2d4915\x2d9761\x2debc6f7d23357-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Dec 13 14:14:53.194230 kubelet[2706]: I1213 14:14:53.194178 2706 reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8868f266-efe6-4915-9761-ebc6f7d23357-cni-path\") on node \"ip-172-31-21-141\" DevicePath \"\"" Dec 13 14:14:53.194230 kubelet[2706]: I1213 14:14:53.194229 2706 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8868f266-efe6-4915-9761-ebc6f7d23357-cilium-config-path\") on node \"ip-172-31-21-141\" DevicePath \"\"" Dec 13 14:14:53.194512 kubelet[2706]: I1213 14:14:53.194255 2706 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8868f266-efe6-4915-9761-ebc6f7d23357-clustermesh-secrets\") on node \"ip-172-31-21-141\" DevicePath \"\"" Dec 13 14:14:53.194512 kubelet[2706]: I1213 14:14:53.194276 2706 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8868f266-efe6-4915-9761-ebc6f7d23357-cilium-run\") on node \"ip-172-31-21-141\" DevicePath \"\"" Dec 13 14:14:53.194512 kubelet[2706]: I1213 14:14:53.194298 2706 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8868f266-efe6-4915-9761-ebc6f7d23357-cilium-cgroup\") on node \"ip-172-31-21-141\" DevicePath \"\"" Dec 13 14:14:53.194512 kubelet[2706]: I1213 14:14:53.194319 2706 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8868f266-efe6-4915-9761-ebc6f7d23357-host-proc-sys-kernel\") on node \"ip-172-31-21-141\" DevicePath \"\"" Dec 13 14:14:53.194512 kubelet[2706]: I1213 14:14:53.194339 2706 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8868f266-efe6-4915-9761-ebc6f7d23357-hostproc\") on node \"ip-172-31-21-141\" DevicePath \"\"" Dec 13 14:14:53.194512 kubelet[2706]: I1213 14:14:53.194360 2706 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8868f266-efe6-4915-9761-ebc6f7d23357-host-proc-sys-net\") on node \"ip-172-31-21-141\" DevicePath \"\"" Dec 13 
14:14:53.194512 kubelet[2706]: I1213 14:14:53.194416 2706 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8868f266-efe6-4915-9761-ebc6f7d23357-etc-cni-netd\") on node \"ip-172-31-21-141\" DevicePath \"\"" Dec 13 14:14:53.194512 kubelet[2706]: I1213 14:14:53.194439 2706 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8868f266-efe6-4915-9761-ebc6f7d23357-bpf-maps\") on node \"ip-172-31-21-141\" DevicePath \"\"" Dec 13 14:14:53.194993 kubelet[2706]: I1213 14:14:53.194459 2706 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-rpzq4\" (UniqueName: \"kubernetes.io/projected/8868f266-efe6-4915-9761-ebc6f7d23357-kube-api-access-rpzq4\") on node \"ip-172-31-21-141\" DevicePath \"\"" Dec 13 14:14:53.194993 kubelet[2706]: I1213 14:14:53.194481 2706 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8868f266-efe6-4915-9761-ebc6f7d23357-xtables-lock\") on node \"ip-172-31-21-141\" DevicePath \"\"" Dec 13 14:14:53.194993 kubelet[2706]: I1213 14:14:53.194503 2706 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8868f266-efe6-4915-9761-ebc6f7d23357-hubble-tls\") on node \"ip-172-31-21-141\" DevicePath \"\"" Dec 13 14:14:54.067748 systemd[1]: Removed slice kubepods-burstable-pod8868f266_efe6_4915_9761_ebc6f7d23357.slice. Dec 13 14:14:54.094420 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount98226907.mount: Deactivated successfully. Dec 13 14:14:54.149022 systemd[1]: Created slice kubepods-burstable-pod881100f3_24f0_4af6_9db8_5191e46ad111.slice. Dec 13 14:14:54.201067 kubelet[2706]: I1213 14:14:54.201016 2706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/881100f3-24f0-4af6-9db8-5191e46ad111-cilium-run\") pod \"cilium-dss7k\" (UID: \"881100f3-24f0-4af6-9db8-5191e46ad111\") " pod="kube-system/cilium-dss7k" Dec 13 14:14:54.201834 kubelet[2706]: I1213 14:14:54.201804 2706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/881100f3-24f0-4af6-9db8-5191e46ad111-cni-path\") pod \"cilium-dss7k\" (UID: \"881100f3-24f0-4af6-9db8-5191e46ad111\") " pod="kube-system/cilium-dss7k" Dec 13 14:14:54.202075 kubelet[2706]: I1213 14:14:54.202051 2706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ccjg7\" (UniqueName: \"kubernetes.io/projected/881100f3-24f0-4af6-9db8-5191e46ad111-kube-api-access-ccjg7\") pod \"cilium-dss7k\" (UID: \"881100f3-24f0-4af6-9db8-5191e46ad111\") " pod="kube-system/cilium-dss7k" Dec 13 14:14:54.202266 kubelet[2706]: I1213 14:14:54.202240 2706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/881100f3-24f0-4af6-9db8-5191e46ad111-bpf-maps\") pod \"cilium-dss7k\" (UID: \"881100f3-24f0-4af6-9db8-5191e46ad111\") " pod="kube-system/cilium-dss7k" Dec 13 14:14:54.202493 kubelet[2706]: I1213 14:14:54.202468 2706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/881100f3-24f0-4af6-9db8-5191e46ad111-lib-modules\") pod \"cilium-dss7k\" (UID: \"881100f3-24f0-4af6-9db8-5191e46ad111\") " 
pod="kube-system/cilium-dss7k" Dec 13 14:14:54.202689 kubelet[2706]: I1213 14:14:54.202664 2706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/881100f3-24f0-4af6-9db8-5191e46ad111-hostproc\") pod \"cilium-dss7k\" (UID: \"881100f3-24f0-4af6-9db8-5191e46ad111\") " pod="kube-system/cilium-dss7k" Dec 13 14:14:54.202877 kubelet[2706]: I1213 14:14:54.202853 2706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/881100f3-24f0-4af6-9db8-5191e46ad111-cilium-cgroup\") pod \"cilium-dss7k\" (UID: \"881100f3-24f0-4af6-9db8-5191e46ad111\") " pod="kube-system/cilium-dss7k" Dec 13 14:14:54.203041 kubelet[2706]: I1213 14:14:54.203017 2706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/881100f3-24f0-4af6-9db8-5191e46ad111-etc-cni-netd\") pod \"cilium-dss7k\" (UID: \"881100f3-24f0-4af6-9db8-5191e46ad111\") " pod="kube-system/cilium-dss7k" Dec 13 14:14:54.203233 kubelet[2706]: I1213 14:14:54.203208 2706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/881100f3-24f0-4af6-9db8-5191e46ad111-xtables-lock\") pod \"cilium-dss7k\" (UID: \"881100f3-24f0-4af6-9db8-5191e46ad111\") " pod="kube-system/cilium-dss7k" Dec 13 14:14:54.203426 kubelet[2706]: I1213 14:14:54.203400 2706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/881100f3-24f0-4af6-9db8-5191e46ad111-cilium-config-path\") pod \"cilium-dss7k\" (UID: \"881100f3-24f0-4af6-9db8-5191e46ad111\") " pod="kube-system/cilium-dss7k" Dec 13 14:14:54.203602 kubelet[2706]: I1213 14:14:54.203577 2706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/881100f3-24f0-4af6-9db8-5191e46ad111-host-proc-sys-net\") pod \"cilium-dss7k\" (UID: \"881100f3-24f0-4af6-9db8-5191e46ad111\") " pod="kube-system/cilium-dss7k" Dec 13 14:14:54.203779 kubelet[2706]: I1213 14:14:54.203756 2706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/881100f3-24f0-4af6-9db8-5191e46ad111-hubble-tls\") pod \"cilium-dss7k\" (UID: \"881100f3-24f0-4af6-9db8-5191e46ad111\") " pod="kube-system/cilium-dss7k" Dec 13 14:14:54.203961 kubelet[2706]: I1213 14:14:54.203936 2706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/881100f3-24f0-4af6-9db8-5191e46ad111-clustermesh-secrets\") pod \"cilium-dss7k\" (UID: \"881100f3-24f0-4af6-9db8-5191e46ad111\") " pod="kube-system/cilium-dss7k" Dec 13 14:14:54.204215 kubelet[2706]: I1213 14:14:54.204190 2706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/881100f3-24f0-4af6-9db8-5191e46ad111-host-proc-sys-kernel\") pod \"cilium-dss7k\" (UID: \"881100f3-24f0-4af6-9db8-5191e46ad111\") " pod="kube-system/cilium-dss7k" Dec 13 14:14:54.457039 env[1746]: time="2024-12-13T14:14:54.456973347Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:cilium-dss7k,Uid:881100f3-24f0-4af6-9db8-5191e46ad111,Namespace:kube-system,Attempt:0,}" Dec 13 14:14:54.492067 env[1746]: time="2024-12-13T14:14:54.491916257Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:14:54.492251 env[1746]: time="2024-12-13T14:14:54.492131464Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:14:54.492251 env[1746]: time="2024-12-13T14:14:54.492213825Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:14:54.492704 env[1746]: time="2024-12-13T14:14:54.492635823Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/5a460b8b0675c693746815cb5640e83a258271e80ebc03e358c8df18ce9c947e pid=3028 runtime=io.containerd.runc.v2 Dec 13 14:14:54.522307 systemd[1]: Started cri-containerd-5a460b8b0675c693746815cb5640e83a258271e80ebc03e358c8df18ce9c947e.scope. Dec 13 14:14:54.574659 env[1746]: time="2024-12-13T14:14:54.574585861Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-dss7k,Uid:881100f3-24f0-4af6-9db8-5191e46ad111,Namespace:kube-system,Attempt:0,} returns sandbox id \"5a460b8b0675c693746815cb5640e83a258271e80ebc03e358c8df18ce9c947e\"" Dec 13 14:14:54.957924 kubelet[2706]: I1213 14:14:54.957833 2706 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8868f266-efe6-4915-9761-ebc6f7d23357" path="/var/lib/kubelet/pods/8868f266-efe6-4915-9761-ebc6f7d23357/volumes" Dec 13 14:14:55.760506 env[1746]: time="2024-12-13T14:14:55.760440312Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:14:55.766010 env[1746]: time="2024-12-13T14:14:55.765943241Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:14:55.770457 env[1746]: time="2024-12-13T14:14:55.770370647Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:14:55.774007 env[1746]: time="2024-12-13T14:14:55.773897214Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Dec 13 14:14:55.781647 env[1746]: time="2024-12-13T14:14:55.781584215Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Dec 13 14:14:55.785432 env[1746]: time="2024-12-13T14:14:55.785333572Z" level=info msg="CreateContainer within sandbox \"f536c8cf7cfedc49bc0a35148a0cf6e35008b67dcbdfcf96b1476a611e7c1e98\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Dec 13 14:14:55.811336 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1637020313.mount: Deactivated successfully. 
Dec 13 14:14:55.824458 env[1746]: time="2024-12-13T14:14:55.823363553Z" level=info msg="CreateContainer within sandbox \"f536c8cf7cfedc49bc0a35148a0cf6e35008b67dcbdfcf96b1476a611e7c1e98\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"3ac21a421384192965cfd613ce1926ac626f7541ca3189e87c9d1d52d6c6e42f\"" Dec 13 14:14:55.832707 env[1746]: time="2024-12-13T14:14:55.830503116Z" level=info msg="StartContainer for \"3ac21a421384192965cfd613ce1926ac626f7541ca3189e87c9d1d52d6c6e42f\"" Dec 13 14:14:55.873083 systemd[1]: Started cri-containerd-3ac21a421384192965cfd613ce1926ac626f7541ca3189e87c9d1d52d6c6e42f.scope. Dec 13 14:14:55.950541 env[1746]: time="2024-12-13T14:14:55.950451380Z" level=info msg="StartContainer for \"3ac21a421384192965cfd613ce1926ac626f7541ca3189e87c9d1d52d6c6e42f\" returns successfully" Dec 13 14:15:06.055242 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount495922081.mount: Deactivated successfully. Dec 13 14:15:10.319984 env[1746]: time="2024-12-13T14:15:10.319854644Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:15:10.401632 env[1746]: time="2024-12-13T14:15:10.401569654Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:15:10.441188 env[1746]: time="2024-12-13T14:15:10.441121911Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:15:10.442683 env[1746]: time="2024-12-13T14:15:10.442609729Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Dec 13 14:15:10.449715 env[1746]: time="2024-12-13T14:15:10.449633530Z" level=info msg="CreateContainer within sandbox \"5a460b8b0675c693746815cb5640e83a258271e80ebc03e358c8df18ce9c947e\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 14:15:10.651828 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2180047459.mount: Deactivated successfully. Dec 13 14:15:10.685246 env[1746]: time="2024-12-13T14:15:10.685154658Z" level=info msg="CreateContainer within sandbox \"5a460b8b0675c693746815cb5640e83a258271e80ebc03e358c8df18ce9c947e\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"873ae6b84cff3fe5a6de775afec51c2ce04a7fcbb261052b0fb1c932697b5757\"" Dec 13 14:15:10.686704 env[1746]: time="2024-12-13T14:15:10.686592637Z" level=info msg="StartContainer for \"873ae6b84cff3fe5a6de775afec51c2ce04a7fcbb261052b0fb1c932697b5757\"" Dec 13 14:15:10.728662 systemd[1]: Started cri-containerd-873ae6b84cff3fe5a6de775afec51c2ce04a7fcbb261052b0fb1c932697b5757.scope. Dec 13 14:15:10.801888 env[1746]: time="2024-12-13T14:15:10.801803226Z" level=info msg="StartContainer for \"873ae6b84cff3fe5a6de775afec51c2ce04a7fcbb261052b0fb1c932697b5757\" returns successfully" Dec 13 14:15:10.820562 systemd[1]: cri-containerd-873ae6b84cff3fe5a6de775afec51c2ce04a7fcbb261052b0fb1c932697b5757.scope: Deactivated successfully. 
Dec 13 14:15:11.076173 env[1746]: time="2024-12-13T14:15:11.076004020Z" level=info msg="shim disconnected" id=873ae6b84cff3fe5a6de775afec51c2ce04a7fcbb261052b0fb1c932697b5757 Dec 13 14:15:11.076667 env[1746]: time="2024-12-13T14:15:11.076617163Z" level=warning msg="cleaning up after shim disconnected" id=873ae6b84cff3fe5a6de775afec51c2ce04a7fcbb261052b0fb1c932697b5757 namespace=k8s.io Dec 13 14:15:11.076851 env[1746]: time="2024-12-13T14:15:11.076818019Z" level=info msg="cleaning up dead shim" Dec 13 14:15:11.093887 env[1746]: time="2024-12-13T14:15:11.093831205Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:15:11Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3152 runtime=io.containerd.runc.v2\n" Dec 13 14:15:11.139426 env[1746]: time="2024-12-13T14:15:11.135183329Z" level=info msg="CreateContainer within sandbox \"5a460b8b0675c693746815cb5640e83a258271e80ebc03e358c8df18ce9c947e\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Dec 13 14:15:11.159422 kubelet[2706]: I1213 14:15:11.159296 2706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-6lwk5" podStartSLOduration=16.069954989 podStartE2EDuration="19.159247829s" podCreationTimestamp="2024-12-13 14:14:52 +0000 UTC" firstStartedPulling="2024-12-13 14:14:52.689605954 +0000 UTC m=+5.972926699" lastFinishedPulling="2024-12-13 14:14:55.778898782 +0000 UTC m=+9.062219539" observedRunningTime="2024-12-13 14:14:56.101934696 +0000 UTC m=+9.385255442" watchObservedRunningTime="2024-12-13 14:15:11.159247829 +0000 UTC m=+24.442568658" Dec 13 14:15:11.165431 env[1746]: time="2024-12-13T14:15:11.165280735Z" level=info msg="CreateContainer within sandbox \"5a460b8b0675c693746815cb5640e83a258271e80ebc03e358c8df18ce9c947e\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"52a789867f3cc36a2d17e27adae99f779e0359486ac0132d3125fa96ea7bcace\"" Dec 13 14:15:11.166511 env[1746]: time="2024-12-13T14:15:11.166437281Z" level=info msg="StartContainer for \"52a789867f3cc36a2d17e27adae99f779e0359486ac0132d3125fa96ea7bcace\"" Dec 13 14:15:11.204164 systemd[1]: Started cri-containerd-52a789867f3cc36a2d17e27adae99f779e0359486ac0132d3125fa96ea7bcace.scope. Dec 13 14:15:11.267087 env[1746]: time="2024-12-13T14:15:11.267019430Z" level=info msg="StartContainer for \"52a789867f3cc36a2d17e27adae99f779e0359486ac0132d3125fa96ea7bcace\" returns successfully" Dec 13 14:15:11.293284 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 14:15:11.294997 systemd[1]: Stopped systemd-sysctl.service. Dec 13 14:15:11.295732 systemd[1]: Stopping systemd-sysctl.service... Dec 13 14:15:11.306997 systemd[1]: Starting systemd-sysctl.service... Dec 13 14:15:11.307856 systemd[1]: cri-containerd-52a789867f3cc36a2d17e27adae99f779e0359486ac0132d3125fa96ea7bcace.scope: Deactivated successfully. Dec 13 14:15:11.328464 systemd[1]: Finished systemd-sysctl.service. 
Dec 13 14:15:11.369353 env[1746]: time="2024-12-13T14:15:11.369284277Z" level=info msg="shim disconnected" id=52a789867f3cc36a2d17e27adae99f779e0359486ac0132d3125fa96ea7bcace Dec 13 14:15:11.370324 env[1746]: time="2024-12-13T14:15:11.370262137Z" level=warning msg="cleaning up after shim disconnected" id=52a789867f3cc36a2d17e27adae99f779e0359486ac0132d3125fa96ea7bcace namespace=k8s.io Dec 13 14:15:11.370551 env[1746]: time="2024-12-13T14:15:11.370516613Z" level=info msg="cleaning up dead shim" Dec 13 14:15:11.387219 env[1746]: time="2024-12-13T14:15:11.387157928Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:15:11Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3218 runtime=io.containerd.runc.v2\n" Dec 13 14:15:11.644835 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-873ae6b84cff3fe5a6de775afec51c2ce04a7fcbb261052b0fb1c932697b5757-rootfs.mount: Deactivated successfully. Dec 13 14:15:12.139024 env[1746]: time="2024-12-13T14:15:12.138931605Z" level=info msg="CreateContainer within sandbox \"5a460b8b0675c693746815cb5640e83a258271e80ebc03e358c8df18ce9c947e\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Dec 13 14:15:12.175863 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3614331392.mount: Deactivated successfully. Dec 13 14:15:12.188999 env[1746]: time="2024-12-13T14:15:12.188892139Z" level=info msg="CreateContainer within sandbox \"5a460b8b0675c693746815cb5640e83a258271e80ebc03e358c8df18ce9c947e\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"41f78e7cb9b5743a0cfa6ed7a8f716d010785117365515491920c6516014f198\"" Dec 13 14:15:12.190605 env[1746]: time="2024-12-13T14:15:12.190504534Z" level=info msg="StartContainer for \"41f78e7cb9b5743a0cfa6ed7a8f716d010785117365515491920c6516014f198\"" Dec 13 14:15:12.250783 systemd[1]: Started cri-containerd-41f78e7cb9b5743a0cfa6ed7a8f716d010785117365515491920c6516014f198.scope. Dec 13 14:15:12.342855 env[1746]: time="2024-12-13T14:15:12.342745100Z" level=info msg="StartContainer for \"41f78e7cb9b5743a0cfa6ed7a8f716d010785117365515491920c6516014f198\" returns successfully" Dec 13 14:15:12.347711 systemd[1]: cri-containerd-41f78e7cb9b5743a0cfa6ed7a8f716d010785117365515491920c6516014f198.scope: Deactivated successfully. Dec 13 14:15:12.405657 env[1746]: time="2024-12-13T14:15:12.405590971Z" level=info msg="shim disconnected" id=41f78e7cb9b5743a0cfa6ed7a8f716d010785117365515491920c6516014f198 Dec 13 14:15:12.406562 env[1746]: time="2024-12-13T14:15:12.406495879Z" level=warning msg="cleaning up after shim disconnected" id=41f78e7cb9b5743a0cfa6ed7a8f716d010785117365515491920c6516014f198 namespace=k8s.io Dec 13 14:15:12.406562 env[1746]: time="2024-12-13T14:15:12.406550735Z" level=info msg="cleaning up dead shim" Dec 13 14:15:12.421291 env[1746]: time="2024-12-13T14:15:12.421214680Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:15:12Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3275 runtime=io.containerd.runc.v2\n" Dec 13 14:15:12.644879 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-41f78e7cb9b5743a0cfa6ed7a8f716d010785117365515491920c6516014f198-rootfs.mount: Deactivated successfully. 
Dec 13 14:15:13.152744 env[1746]: time="2024-12-13T14:15:13.152443323Z" level=info msg="CreateContainer within sandbox \"5a460b8b0675c693746815cb5640e83a258271e80ebc03e358c8df18ce9c947e\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Dec 13 14:15:13.182940 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4223001799.mount: Deactivated successfully. Dec 13 14:15:13.202923 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1435931124.mount: Deactivated successfully. Dec 13 14:15:13.209887 env[1746]: time="2024-12-13T14:15:13.209781937Z" level=info msg="CreateContainer within sandbox \"5a460b8b0675c693746815cb5640e83a258271e80ebc03e358c8df18ce9c947e\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"6aee9cec6c333f11c00337b41af542337347aeb0d8e33718fedc4c70488321b4\"" Dec 13 14:15:13.211660 env[1746]: time="2024-12-13T14:15:13.211600206Z" level=info msg="StartContainer for \"6aee9cec6c333f11c00337b41af542337347aeb0d8e33718fedc4c70488321b4\"" Dec 13 14:15:13.246040 systemd[1]: Started cri-containerd-6aee9cec6c333f11c00337b41af542337347aeb0d8e33718fedc4c70488321b4.scope. Dec 13 14:15:13.317278 systemd[1]: cri-containerd-6aee9cec6c333f11c00337b41af542337347aeb0d8e33718fedc4c70488321b4.scope: Deactivated successfully. Dec 13 14:15:13.321865 env[1746]: time="2024-12-13T14:15:13.321758102Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod881100f3_24f0_4af6_9db8_5191e46ad111.slice/cri-containerd-6aee9cec6c333f11c00337b41af542337347aeb0d8e33718fedc4c70488321b4.scope/memory.events\": no such file or directory" Dec 13 14:15:13.324966 env[1746]: time="2024-12-13T14:15:13.324890712Z" level=info msg="StartContainer for \"6aee9cec6c333f11c00337b41af542337347aeb0d8e33718fedc4c70488321b4\" returns successfully" Dec 13 14:15:13.378397 env[1746]: time="2024-12-13T14:15:13.378300815Z" level=info msg="shim disconnected" id=6aee9cec6c333f11c00337b41af542337347aeb0d8e33718fedc4c70488321b4 Dec 13 14:15:13.378745 env[1746]: time="2024-12-13T14:15:13.378545229Z" level=warning msg="cleaning up after shim disconnected" id=6aee9cec6c333f11c00337b41af542337347aeb0d8e33718fedc4c70488321b4 namespace=k8s.io Dec 13 14:15:13.378745 env[1746]: time="2024-12-13T14:15:13.378586953Z" level=info msg="cleaning up dead shim" Dec 13 14:15:13.395301 env[1746]: time="2024-12-13T14:15:13.395230577Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:15:13Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3331 runtime=io.containerd.runc.v2\n" Dec 13 14:15:14.161761 env[1746]: time="2024-12-13T14:15:14.161689566Z" level=info msg="CreateContainer within sandbox \"5a460b8b0675c693746815cb5640e83a258271e80ebc03e358c8df18ce9c947e\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Dec 13 14:15:14.214071 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4154044929.mount: Deactivated successfully. 
Dec 13 14:15:14.222105 env[1746]: time="2024-12-13T14:15:14.221994837Z" level=info msg="CreateContainer within sandbox \"5a460b8b0675c693746815cb5640e83a258271e80ebc03e358c8df18ce9c947e\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"301113eab2d84a4832383b28e9ef8621416034c477315001f2bac58103b94cc0\"" Dec 13 14:15:14.223689 env[1746]: time="2024-12-13T14:15:14.223624980Z" level=info msg="StartContainer for \"301113eab2d84a4832383b28e9ef8621416034c477315001f2bac58103b94cc0\"" Dec 13 14:15:14.273776 systemd[1]: Started cri-containerd-301113eab2d84a4832383b28e9ef8621416034c477315001f2bac58103b94cc0.scope. Dec 13 14:15:14.381962 env[1746]: time="2024-12-13T14:15:14.381890506Z" level=info msg="StartContainer for \"301113eab2d84a4832383b28e9ef8621416034c477315001f2bac58103b94cc0\" returns successfully" Dec 13 14:15:14.571095 kubelet[2706]: I1213 14:15:14.570268 2706 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Dec 13 14:15:14.626211 systemd[1]: Created slice kubepods-burstable-pod4a3250d2_7fb2_474a_9afd_98791cad70fc.slice. Dec 13 14:15:14.639326 systemd[1]: Created slice kubepods-burstable-pod91923059_528a_4973_a83c_d103af26a689.slice. Dec 13 14:15:14.670101 kubelet[2706]: I1213 14:15:14.670044 2706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w7b56\" (UniqueName: \"kubernetes.io/projected/91923059-528a-4973-a83c-d103af26a689-kube-api-access-w7b56\") pod \"coredns-6f6b679f8f-fgk6v\" (UID: \"91923059-528a-4973-a83c-d103af26a689\") " pod="kube-system/coredns-6f6b679f8f-fgk6v" Dec 13 14:15:14.670493 kubelet[2706]: I1213 14:15:14.670452 2706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4a3250d2-7fb2-474a-9afd-98791cad70fc-config-volume\") pod \"coredns-6f6b679f8f-khzp9\" (UID: \"4a3250d2-7fb2-474a-9afd-98791cad70fc\") " pod="kube-system/coredns-6f6b679f8f-khzp9" Dec 13 14:15:14.670768 kubelet[2706]: I1213 14:15:14.670735 2706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xtqh9\" (UniqueName: \"kubernetes.io/projected/4a3250d2-7fb2-474a-9afd-98791cad70fc-kube-api-access-xtqh9\") pod \"coredns-6f6b679f8f-khzp9\" (UID: \"4a3250d2-7fb2-474a-9afd-98791cad70fc\") " pod="kube-system/coredns-6f6b679f8f-khzp9" Dec 13 14:15:14.670971 kubelet[2706]: I1213 14:15:14.670941 2706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/91923059-528a-4973-a83c-d103af26a689-config-volume\") pod \"coredns-6f6b679f8f-fgk6v\" (UID: \"91923059-528a-4973-a83c-d103af26a689\") " pod="kube-system/coredns-6f6b679f8f-fgk6v" Dec 13 14:15:14.683464 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! Dec 13 14:15:14.936455 env[1746]: time="2024-12-13T14:15:14.936334849Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-khzp9,Uid:4a3250d2-7fb2-474a-9afd-98791cad70fc,Namespace:kube-system,Attempt:0,}" Dec 13 14:15:14.971903 env[1746]: time="2024-12-13T14:15:14.971492100Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-fgk6v,Uid:91923059-528a-4973-a83c-d103af26a689,Namespace:kube-system,Attempt:0,}" Dec 13 14:15:15.569428 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! 
Dec 13 14:15:16.412190 systemd[1]: run-containerd-runc-k8s.io-301113eab2d84a4832383b28e9ef8621416034c477315001f2bac58103b94cc0-runc.NeXRfh.mount: Deactivated successfully. Dec 13 14:15:17.382941 (udev-worker)[3458]: Network interface NamePolicy= disabled on kernel command line. Dec 13 14:15:17.391054 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready Dec 13 14:15:17.391128 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Dec 13 14:15:17.387348 systemd-networkd[1462]: cilium_host: Link UP Dec 13 14:15:17.387646 systemd-networkd[1462]: cilium_net: Link UP Dec 13 14:15:17.387961 systemd-networkd[1462]: cilium_net: Gained carrier Dec 13 14:15:17.389469 (udev-worker)[3519]: Network interface NamePolicy= disabled on kernel command line. Dec 13 14:15:17.392559 systemd-networkd[1462]: cilium_host: Gained carrier Dec 13 14:15:17.563838 (udev-worker)[3533]: Network interface NamePolicy= disabled on kernel command line. Dec 13 14:15:17.569025 systemd-networkd[1462]: cilium_host: Gained IPv6LL Dec 13 14:15:17.581872 systemd-networkd[1462]: cilium_vxlan: Link UP Dec 13 14:15:17.582083 systemd-networkd[1462]: cilium_vxlan: Gained carrier Dec 13 14:15:17.673120 systemd-networkd[1462]: cilium_net: Gained IPv6LL Dec 13 14:15:18.059449 kernel: NET: Registered PF_ALG protocol family Dec 13 14:15:18.642651 systemd[1]: run-containerd-runc-k8s.io-301113eab2d84a4832383b28e9ef8621416034c477315001f2bac58103b94cc0-runc.vzef4C.mount: Deactivated successfully. Dec 13 14:15:19.057147 systemd-networkd[1462]: cilium_vxlan: Gained IPv6LL Dec 13 14:15:19.609739 systemd-networkd[1462]: lxc_health: Link UP Dec 13 14:15:19.625664 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Dec 13 14:15:19.625251 systemd-networkd[1462]: lxc_health: Gained carrier Dec 13 14:15:20.044865 systemd-networkd[1462]: lxc2f4bdbd20df4: Link UP Dec 13 14:15:20.051804 kernel: eth0: renamed from tmp7d676 Dec 13 14:15:20.060549 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc2f4bdbd20df4: link becomes ready Dec 13 14:15:20.059257 systemd-networkd[1462]: lxc2f4bdbd20df4: Gained carrier Dec 13 14:15:20.105345 systemd-networkd[1462]: lxcb29c9b0044d2: Link UP Dec 13 14:15:20.124636 kernel: eth0: renamed from tmp47779 Dec 13 14:15:20.153619 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcb29c9b0044d2: link becomes ready Dec 13 14:15:20.153318 systemd-networkd[1462]: lxcb29c9b0044d2: Gained carrier Dec 13 14:15:20.548205 kubelet[2706]: I1213 14:15:20.548082 2706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-dss7k" podStartSLOduration=10.680007203 podStartE2EDuration="26.548056507s" podCreationTimestamp="2024-12-13 14:14:54 +0000 UTC" firstStartedPulling="2024-12-13 14:14:54.577273625 +0000 UTC m=+7.860594370" lastFinishedPulling="2024-12-13 14:15:10.445322929 +0000 UTC m=+23.728643674" observedRunningTime="2024-12-13 14:15:15.199874672 +0000 UTC m=+28.483195441" watchObservedRunningTime="2024-12-13 14:15:20.548056507 +0000 UTC m=+33.831377288" Dec 13 14:15:20.914207 systemd[1]: run-containerd-runc-k8s.io-301113eab2d84a4832383b28e9ef8621416034c477315001f2bac58103b94cc0-runc.PteWUy.mount: Deactivated successfully. 
Dec 13 14:15:21.105266 systemd-networkd[1462]: lxc_health: Gained IPv6LL Dec 13 14:15:21.169096 systemd-networkd[1462]: lxc2f4bdbd20df4: Gained IPv6LL Dec 13 14:15:22.129268 systemd-networkd[1462]: lxcb29c9b0044d2: Gained IPv6LL Dec 13 14:15:23.222931 systemd[1]: run-containerd-runc-k8s.io-301113eab2d84a4832383b28e9ef8621416034c477315001f2bac58103b94cc0-runc.hrTZd2.mount: Deactivated successfully. Dec 13 14:15:25.466116 systemd[1]: run-containerd-runc-k8s.io-301113eab2d84a4832383b28e9ef8621416034c477315001f2bac58103b94cc0-runc.ip3qaX.mount: Deactivated successfully. Dec 13 14:15:25.824839 sudo[1988]: pam_unix(sudo:session): session closed for user root Dec 13 14:15:25.851003 sshd[1985]: pam_unix(sshd:session): session closed for user core Dec 13 14:15:25.857239 systemd[1]: sshd@4-172.31.21.141:22-139.178.89.65:56850.service: Deactivated successfully. Dec 13 14:15:25.858861 systemd[1]: session-5.scope: Deactivated successfully. Dec 13 14:15:25.859245 systemd[1]: session-5.scope: Consumed 12.880s CPU time. Dec 13 14:15:25.862711 systemd-logind[1737]: Session 5 logged out. Waiting for processes to exit. Dec 13 14:15:25.864946 systemd-logind[1737]: Removed session 5. Dec 13 14:15:29.259932 env[1746]: time="2024-12-13T14:15:29.259797871Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:15:29.260565 env[1746]: time="2024-12-13T14:15:29.259961120Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:15:29.260565 env[1746]: time="2024-12-13T14:15:29.260029548Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:15:29.261135 env[1746]: time="2024-12-13T14:15:29.261007737Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7d67671044327bdc56b9294c8f64677c65f9b53b3c81dcecb449a0c7ceddd366 pid=3997 runtime=io.containerd.runc.v2 Dec 13 14:15:29.334726 systemd[1]: run-containerd-runc-k8s.io-7d67671044327bdc56b9294c8f64677c65f9b53b3c81dcecb449a0c7ceddd366-runc.NhYgcI.mount: Deactivated successfully. Dec 13 14:15:29.340742 systemd[1]: Started cri-containerd-7d67671044327bdc56b9294c8f64677c65f9b53b3c81dcecb449a0c7ceddd366.scope. Dec 13 14:15:29.374910 env[1746]: time="2024-12-13T14:15:29.374776314Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:15:29.375421 env[1746]: time="2024-12-13T14:15:29.375315346Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:15:29.375803 env[1746]: time="2024-12-13T14:15:29.375711881Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:15:29.376774 env[1746]: time="2024-12-13T14:15:29.376675919Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/47779e1e33fa92573a0468fca399cc591626f97e4f581cfac3b1c14e8ac01c84 pid=4028 runtime=io.containerd.runc.v2 Dec 13 14:15:29.435536 systemd[1]: Started cri-containerd-47779e1e33fa92573a0468fca399cc591626f97e4f581cfac3b1c14e8ac01c84.scope. 
Dec 13 14:15:29.512685 env[1746]: time="2024-12-13T14:15:29.512529416Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-khzp9,Uid:4a3250d2-7fb2-474a-9afd-98791cad70fc,Namespace:kube-system,Attempt:0,} returns sandbox id \"7d67671044327bdc56b9294c8f64677c65f9b53b3c81dcecb449a0c7ceddd366\"" Dec 13 14:15:29.519437 env[1746]: time="2024-12-13T14:15:29.519347300Z" level=info msg="CreateContainer within sandbox \"7d67671044327bdc56b9294c8f64677c65f9b53b3c81dcecb449a0c7ceddd366\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 14:15:29.564957 env[1746]: time="2024-12-13T14:15:29.564863036Z" level=info msg="CreateContainer within sandbox \"7d67671044327bdc56b9294c8f64677c65f9b53b3c81dcecb449a0c7ceddd366\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"fe13a57e00a547ff94ce2ff13ed8f1509da2c8cf0ea286681ff1e0c3ced45581\"" Dec 13 14:15:29.567963 env[1746]: time="2024-12-13T14:15:29.567903433Z" level=info msg="StartContainer for \"fe13a57e00a547ff94ce2ff13ed8f1509da2c8cf0ea286681ff1e0c3ced45581\"" Dec 13 14:15:29.589686 env[1746]: time="2024-12-13T14:15:29.589613099Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-fgk6v,Uid:91923059-528a-4973-a83c-d103af26a689,Namespace:kube-system,Attempt:0,} returns sandbox id \"47779e1e33fa92573a0468fca399cc591626f97e4f581cfac3b1c14e8ac01c84\"" Dec 13 14:15:29.602512 env[1746]: time="2024-12-13T14:15:29.602430769Z" level=info msg="CreateContainer within sandbox \"47779e1e33fa92573a0468fca399cc591626f97e4f581cfac3b1c14e8ac01c84\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 14:15:29.628113 env[1746]: time="2024-12-13T14:15:29.628029788Z" level=info msg="CreateContainer within sandbox \"47779e1e33fa92573a0468fca399cc591626f97e4f581cfac3b1c14e8ac01c84\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9b737f227371e78005e61bdba7654d055b8662564e15f0dba39ddfe86c349c92\"" Dec 13 14:15:29.631536 env[1746]: time="2024-12-13T14:15:29.631457646Z" level=info msg="StartContainer for \"9b737f227371e78005e61bdba7654d055b8662564e15f0dba39ddfe86c349c92\"" Dec 13 14:15:29.650531 systemd[1]: Started cri-containerd-fe13a57e00a547ff94ce2ff13ed8f1509da2c8cf0ea286681ff1e0c3ced45581.scope. Dec 13 14:15:29.703658 systemd[1]: Started cri-containerd-9b737f227371e78005e61bdba7654d055b8662564e15f0dba39ddfe86c349c92.scope. Dec 13 14:15:29.808913 env[1746]: time="2024-12-13T14:15:29.808735364Z" level=info msg="StartContainer for \"fe13a57e00a547ff94ce2ff13ed8f1509da2c8cf0ea286681ff1e0c3ced45581\" returns successfully" Dec 13 14:15:29.820534 env[1746]: time="2024-12-13T14:15:29.820372718Z" level=info msg="StartContainer for \"9b737f227371e78005e61bdba7654d055b8662564e15f0dba39ddfe86c349c92\" returns successfully" Dec 13 14:15:30.256955 kubelet[2706]: I1213 14:15:30.256819 2706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-fgk6v" podStartSLOduration=38.256793135 podStartE2EDuration="38.256793135s" podCreationTimestamp="2024-12-13 14:14:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:15:30.231430188 +0000 UTC m=+43.514750969" watchObservedRunningTime="2024-12-13 14:15:30.256793135 +0000 UTC m=+43.540113892" Dec 13 14:15:30.277004 systemd[1]: run-containerd-runc-k8s.io-47779e1e33fa92573a0468fca399cc591626f97e4f581cfac3b1c14e8ac01c84-runc.ONtSKq.mount: Deactivated successfully. 
Dec 13 14:15:30.305836 kubelet[2706]: I1213 14:15:30.305755 2706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-khzp9" podStartSLOduration=38.305713961 podStartE2EDuration="38.305713961s" podCreationTimestamp="2024-12-13 14:14:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:15:30.258207899 +0000 UTC m=+43.541528656" watchObservedRunningTime="2024-12-13 14:15:30.305713961 +0000 UTC m=+43.589034706" Dec 13 14:16:06.178649 systemd[1]: Started sshd@5-172.31.21.141:22-139.178.89.65:33538.service. Dec 13 14:16:06.350526 sshd[4168]: Accepted publickey for core from 139.178.89.65 port 33538 ssh2: RSA SHA256:07jB2DPJgjjhgg3L8Uh349EZ0zHZFrUiRWNbK+Fdo0Q Dec 13 14:16:06.352774 sshd[4168]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:16:06.361545 systemd[1]: Started session-6.scope. Dec 13 14:16:06.362527 systemd-logind[1737]: New session 6 of user core. Dec 13 14:16:06.617925 sshd[4168]: pam_unix(sshd:session): session closed for user core Dec 13 14:16:06.622781 systemd-logind[1737]: Session 6 logged out. Waiting for processes to exit. Dec 13 14:16:06.623358 systemd[1]: sshd@5-172.31.21.141:22-139.178.89.65:33538.service: Deactivated successfully. Dec 13 14:16:06.624717 systemd[1]: session-6.scope: Deactivated successfully. Dec 13 14:16:06.626608 systemd-logind[1737]: Removed session 6. Dec 13 14:16:11.647617 systemd[1]: Started sshd@6-172.31.21.141:22-139.178.89.65:38124.service. Dec 13 14:16:11.818810 sshd[4181]: Accepted publickey for core from 139.178.89.65 port 38124 ssh2: RSA SHA256:07jB2DPJgjjhgg3L8Uh349EZ0zHZFrUiRWNbK+Fdo0Q Dec 13 14:16:11.822315 sshd[4181]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:16:11.831251 systemd[1]: Started session-7.scope. Dec 13 14:16:11.832540 systemd-logind[1737]: New session 7 of user core. Dec 13 14:16:12.073982 sshd[4181]: pam_unix(sshd:session): session closed for user core Dec 13 14:16:12.079286 systemd[1]: sshd@6-172.31.21.141:22-139.178.89.65:38124.service: Deactivated successfully. Dec 13 14:16:12.080672 systemd[1]: session-7.scope: Deactivated successfully. Dec 13 14:16:12.081958 systemd-logind[1737]: Session 7 logged out. Waiting for processes to exit. Dec 13 14:16:12.084236 systemd-logind[1737]: Removed session 7. Dec 13 14:16:17.102059 systemd[1]: Started sshd@7-172.31.21.141:22-139.178.89.65:38138.service. Dec 13 14:16:17.268935 sshd[4193]: Accepted publickey for core from 139.178.89.65 port 38138 ssh2: RSA SHA256:07jB2DPJgjjhgg3L8Uh349EZ0zHZFrUiRWNbK+Fdo0Q Dec 13 14:16:17.272066 sshd[4193]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:16:17.279551 systemd-logind[1737]: New session 8 of user core. Dec 13 14:16:17.280806 systemd[1]: Started session-8.scope. Dec 13 14:16:17.524946 sshd[4193]: pam_unix(sshd:session): session closed for user core Dec 13 14:16:17.529837 systemd-logind[1737]: Session 8 logged out. Waiting for processes to exit. Dec 13 14:16:17.531315 systemd[1]: sshd@7-172.31.21.141:22-139.178.89.65:38138.service: Deactivated successfully. Dec 13 14:16:17.532686 systemd[1]: session-8.scope: Deactivated successfully. Dec 13 14:16:17.534895 systemd-logind[1737]: Removed session 8. Dec 13 14:16:22.553717 systemd[1]: Started sshd@8-172.31.21.141:22-139.178.89.65:38298.service. 
Dec 13 14:16:22.724306 sshd[4205]: Accepted publickey for core from 139.178.89.65 port 38298 ssh2: RSA SHA256:07jB2DPJgjjhgg3L8Uh349EZ0zHZFrUiRWNbK+Fdo0Q Dec 13 14:16:22.726897 sshd[4205]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:16:22.735732 systemd[1]: Started session-9.scope. Dec 13 14:16:22.736552 systemd-logind[1737]: New session 9 of user core. Dec 13 14:16:23.000284 sshd[4205]: pam_unix(sshd:session): session closed for user core Dec 13 14:16:23.005720 systemd[1]: sshd@8-172.31.21.141:22-139.178.89.65:38298.service: Deactivated successfully. Dec 13 14:16:23.008582 systemd[1]: session-9.scope: Deactivated successfully. Dec 13 14:16:23.010133 systemd-logind[1737]: Session 9 logged out. Waiting for processes to exit. Dec 13 14:16:23.012048 systemd-logind[1737]: Removed session 9. Dec 13 14:16:26.732667 amazon-ssm-agent[1725]: 2024-12-13 14:16:26 INFO [HealthCheck] HealthCheck reporting agent health. Dec 13 14:16:28.029019 systemd[1]: Started sshd@9-172.31.21.141:22-139.178.89.65:42206.service. Dec 13 14:16:28.197751 sshd[4222]: Accepted publickey for core from 139.178.89.65 port 42206 ssh2: RSA SHA256:07jB2DPJgjjhgg3L8Uh349EZ0zHZFrUiRWNbK+Fdo0Q Dec 13 14:16:28.201284 sshd[4222]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:16:28.208508 systemd-logind[1737]: New session 10 of user core. Dec 13 14:16:28.209820 systemd[1]: Started session-10.scope. Dec 13 14:16:28.462784 sshd[4222]: pam_unix(sshd:session): session closed for user core Dec 13 14:16:28.467365 systemd-logind[1737]: Session 10 logged out. Waiting for processes to exit. Dec 13 14:16:28.467551 systemd[1]: session-10.scope: Deactivated successfully. Dec 13 14:16:28.469319 systemd[1]: sshd@9-172.31.21.141:22-139.178.89.65:42206.service: Deactivated successfully. Dec 13 14:16:28.472022 systemd-logind[1737]: Removed session 10. Dec 13 14:16:28.491163 systemd[1]: Started sshd@10-172.31.21.141:22-139.178.89.65:42216.service. Dec 13 14:16:28.662865 sshd[4235]: Accepted publickey for core from 139.178.89.65 port 42216 ssh2: RSA SHA256:07jB2DPJgjjhgg3L8Uh349EZ0zHZFrUiRWNbK+Fdo0Q Dec 13 14:16:28.665996 sshd[4235]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:16:28.674605 systemd[1]: Started session-11.scope. Dec 13 14:16:28.676007 systemd-logind[1737]: New session 11 of user core. Dec 13 14:16:29.014510 sshd[4235]: pam_unix(sshd:session): session closed for user core Dec 13 14:16:29.020788 systemd[1]: sshd@10-172.31.21.141:22-139.178.89.65:42216.service: Deactivated successfully. Dec 13 14:16:29.022218 systemd[1]: session-11.scope: Deactivated successfully. Dec 13 14:16:29.025520 systemd-logind[1737]: Session 11 logged out. Waiting for processes to exit. Dec 13 14:16:29.028128 systemd-logind[1737]: Removed session 11. Dec 13 14:16:29.054897 systemd[1]: Started sshd@11-172.31.21.141:22-139.178.89.65:42220.service. Dec 13 14:16:29.219016 sshd[4245]: Accepted publickey for core from 139.178.89.65 port 42220 ssh2: RSA SHA256:07jB2DPJgjjhgg3L8Uh349EZ0zHZFrUiRWNbK+Fdo0Q Dec 13 14:16:29.221608 sshd[4245]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:16:29.230501 systemd[1]: Started session-12.scope. Dec 13 14:16:29.231485 systemd-logind[1737]: New session 12 of user core. Dec 13 14:16:29.485263 sshd[4245]: pam_unix(sshd:session): session closed for user core Dec 13 14:16:29.490482 systemd[1]: session-12.scope: Deactivated successfully. 
Dec 13 14:16:29.491656 systemd[1]: sshd@11-172.31.21.141:22-139.178.89.65:42220.service: Deactivated successfully. Dec 13 14:16:29.493502 systemd-logind[1737]: Session 12 logged out. Waiting for processes to exit. Dec 13 14:16:29.495580 systemd-logind[1737]: Removed session 12. Dec 13 14:16:34.518718 systemd[1]: Started sshd@12-172.31.21.141:22-139.178.89.65:42222.service. Dec 13 14:16:34.691564 sshd[4257]: Accepted publickey for core from 139.178.89.65 port 42222 ssh2: RSA SHA256:07jB2DPJgjjhgg3L8Uh349EZ0zHZFrUiRWNbK+Fdo0Q Dec 13 14:16:34.694634 sshd[4257]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:16:34.703237 systemd[1]: Started session-13.scope. Dec 13 14:16:34.704228 systemd-logind[1737]: New session 13 of user core. Dec 13 14:16:34.956285 sshd[4257]: pam_unix(sshd:session): session closed for user core Dec 13 14:16:34.963207 systemd[1]: session-13.scope: Deactivated successfully. Dec 13 14:16:34.965331 systemd[1]: sshd@12-172.31.21.141:22-139.178.89.65:42222.service: Deactivated successfully. Dec 13 14:16:34.966935 systemd-logind[1737]: Session 13 logged out. Waiting for processes to exit. Dec 13 14:16:34.969156 systemd-logind[1737]: Removed session 13. Dec 13 14:16:39.984815 systemd[1]: Started sshd@13-172.31.21.141:22-139.178.89.65:38158.service. Dec 13 14:16:40.155729 sshd[4271]: Accepted publickey for core from 139.178.89.65 port 38158 ssh2: RSA SHA256:07jB2DPJgjjhgg3L8Uh349EZ0zHZFrUiRWNbK+Fdo0Q Dec 13 14:16:40.159314 sshd[4271]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:16:40.168367 systemd[1]: Started session-14.scope. Dec 13 14:16:40.169650 systemd-logind[1737]: New session 14 of user core. Dec 13 14:16:40.419058 sshd[4271]: pam_unix(sshd:session): session closed for user core Dec 13 14:16:40.424521 systemd[1]: sshd@13-172.31.21.141:22-139.178.89.65:38158.service: Deactivated successfully. Dec 13 14:16:40.425825 systemd[1]: session-14.scope: Deactivated successfully. Dec 13 14:16:40.428496 systemd-logind[1737]: Session 14 logged out. Waiting for processes to exit. Dec 13 14:16:40.430433 systemd-logind[1737]: Removed session 14. Dec 13 14:16:45.447110 systemd[1]: Started sshd@14-172.31.21.141:22-139.178.89.65:38168.service. Dec 13 14:16:45.613974 sshd[4283]: Accepted publickey for core from 139.178.89.65 port 38168 ssh2: RSA SHA256:07jB2DPJgjjhgg3L8Uh349EZ0zHZFrUiRWNbK+Fdo0Q Dec 13 14:16:45.617069 sshd[4283]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:16:45.624571 systemd-logind[1737]: New session 15 of user core. Dec 13 14:16:45.626149 systemd[1]: Started session-15.scope. Dec 13 14:16:45.874657 sshd[4283]: pam_unix(sshd:session): session closed for user core Dec 13 14:16:45.881015 systemd[1]: sshd@14-172.31.21.141:22-139.178.89.65:38168.service: Deactivated successfully. Dec 13 14:16:45.882327 systemd[1]: session-15.scope: Deactivated successfully. Dec 13 14:16:45.883710 systemd-logind[1737]: Session 15 logged out. Waiting for processes to exit. Dec 13 14:16:45.885330 systemd-logind[1737]: Removed session 15. Dec 13 14:16:50.903213 systemd[1]: Started sshd@15-172.31.21.141:22-139.178.89.65:54572.service. Dec 13 14:16:51.071545 sshd[4297]: Accepted publickey for core from 139.178.89.65 port 54572 ssh2: RSA SHA256:07jB2DPJgjjhgg3L8Uh349EZ0zHZFrUiRWNbK+Fdo0Q Dec 13 14:16:51.074179 sshd[4297]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:16:51.081634 systemd-logind[1737]: New session 16 of user core. 
Dec 13 14:16:51.083817 systemd[1]: Started session-16.scope. Dec 13 14:16:51.334512 sshd[4297]: pam_unix(sshd:session): session closed for user core Dec 13 14:16:51.339424 systemd[1]: session-16.scope: Deactivated successfully. Dec 13 14:16:51.340552 systemd[1]: sshd@15-172.31.21.141:22-139.178.89.65:54572.service: Deactivated successfully. Dec 13 14:16:51.342340 systemd-logind[1737]: Session 16 logged out. Waiting for processes to exit. Dec 13 14:16:51.344932 systemd-logind[1737]: Removed session 16. Dec 13 14:16:51.365086 systemd[1]: Started sshd@16-172.31.21.141:22-139.178.89.65:54582.service. Dec 13 14:16:51.535350 sshd[4309]: Accepted publickey for core from 139.178.89.65 port 54582 ssh2: RSA SHA256:07jB2DPJgjjhgg3L8Uh349EZ0zHZFrUiRWNbK+Fdo0Q Dec 13 14:16:51.538089 sshd[4309]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:16:51.547059 systemd[1]: Started session-17.scope. Dec 13 14:16:51.547852 systemd-logind[1737]: New session 17 of user core. Dec 13 14:16:51.843916 sshd[4309]: pam_unix(sshd:session): session closed for user core Dec 13 14:16:51.849010 systemd-logind[1737]: Session 17 logged out. Waiting for processes to exit. Dec 13 14:16:51.850645 systemd[1]: sshd@16-172.31.21.141:22-139.178.89.65:54582.service: Deactivated successfully. Dec 13 14:16:51.851991 systemd[1]: session-17.scope: Deactivated successfully. Dec 13 14:16:51.853085 systemd-logind[1737]: Removed session 17. Dec 13 14:16:51.873216 systemd[1]: Started sshd@17-172.31.21.141:22-139.178.89.65:54594.service. Dec 13 14:16:52.043359 sshd[4318]: Accepted publickey for core from 139.178.89.65 port 54594 ssh2: RSA SHA256:07jB2DPJgjjhgg3L8Uh349EZ0zHZFrUiRWNbK+Fdo0Q Dec 13 14:16:52.046007 sshd[4318]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:16:52.053468 systemd-logind[1737]: New session 18 of user core. Dec 13 14:16:52.055176 systemd[1]: Started session-18.scope. Dec 13 14:16:54.616553 sshd[4318]: pam_unix(sshd:session): session closed for user core Dec 13 14:16:54.623095 systemd-logind[1737]: Session 18 logged out. Waiting for processes to exit. Dec 13 14:16:54.624369 systemd[1]: session-18.scope: Deactivated successfully. Dec 13 14:16:54.625668 systemd[1]: sshd@17-172.31.21.141:22-139.178.89.65:54594.service: Deactivated successfully. Dec 13 14:16:54.629106 systemd-logind[1737]: Removed session 18. Dec 13 14:16:54.652074 systemd[1]: Started sshd@18-172.31.21.141:22-139.178.89.65:54602.service. Dec 13 14:16:54.823601 sshd[4338]: Accepted publickey for core from 139.178.89.65 port 54602 ssh2: RSA SHA256:07jB2DPJgjjhgg3L8Uh349EZ0zHZFrUiRWNbK+Fdo0Q Dec 13 14:16:54.826122 sshd[4338]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:16:54.835477 systemd-logind[1737]: New session 19 of user core. Dec 13 14:16:54.835534 systemd[1]: Started session-19.scope. Dec 13 14:16:55.334080 sshd[4338]: pam_unix(sshd:session): session closed for user core Dec 13 14:16:55.340273 systemd[1]: sshd@18-172.31.21.141:22-139.178.89.65:54602.service: Deactivated successfully. Dec 13 14:16:55.341619 systemd[1]: session-19.scope: Deactivated successfully. Dec 13 14:16:55.343077 systemd-logind[1737]: Session 19 logged out. Waiting for processes to exit. Dec 13 14:16:55.344843 systemd-logind[1737]: Removed session 19. Dec 13 14:16:55.364076 systemd[1]: Started sshd@19-172.31.21.141:22-139.178.89.65:54616.service. 
Dec 13 14:16:55.539167 sshd[4347]: Accepted publickey for core from 139.178.89.65 port 54616 ssh2: RSA SHA256:07jB2DPJgjjhgg3L8Uh349EZ0zHZFrUiRWNbK+Fdo0Q Dec 13 14:16:55.541733 sshd[4347]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:16:55.550876 systemd[1]: Started session-20.scope. Dec 13 14:16:55.551925 systemd-logind[1737]: New session 20 of user core. Dec 13 14:16:55.797885 sshd[4347]: pam_unix(sshd:session): session closed for user core Dec 13 14:16:55.802980 systemd-logind[1737]: Session 20 logged out. Waiting for processes to exit. Dec 13 14:16:55.804453 systemd[1]: sshd@19-172.31.21.141:22-139.178.89.65:54616.service: Deactivated successfully. Dec 13 14:16:55.806476 systemd[1]: session-20.scope: Deactivated successfully. Dec 13 14:16:55.808459 systemd-logind[1737]: Removed session 20. Dec 13 14:17:00.826941 systemd[1]: Started sshd@20-172.31.21.141:22-139.178.89.65:40136.service. Dec 13 14:17:00.996484 sshd[4359]: Accepted publickey for core from 139.178.89.65 port 40136 ssh2: RSA SHA256:07jB2DPJgjjhgg3L8Uh349EZ0zHZFrUiRWNbK+Fdo0Q Dec 13 14:17:00.998220 sshd[4359]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:17:01.006893 systemd[1]: Started session-21.scope. Dec 13 14:17:01.007669 systemd-logind[1737]: New session 21 of user core. Dec 13 14:17:01.242639 sshd[4359]: pam_unix(sshd:session): session closed for user core Dec 13 14:17:01.247623 systemd-logind[1737]: Session 21 logged out. Waiting for processes to exit. Dec 13 14:17:01.248203 systemd[1]: sshd@20-172.31.21.141:22-139.178.89.65:40136.service: Deactivated successfully. Dec 13 14:17:01.249480 systemd[1]: session-21.scope: Deactivated successfully. Dec 13 14:17:01.251613 systemd-logind[1737]: Removed session 21. Dec 13 14:17:06.270914 systemd[1]: Started sshd@21-172.31.21.141:22-139.178.89.65:40142.service. Dec 13 14:17:06.435096 sshd[4374]: Accepted publickey for core from 139.178.89.65 port 40142 ssh2: RSA SHA256:07jB2DPJgjjhgg3L8Uh349EZ0zHZFrUiRWNbK+Fdo0Q Dec 13 14:17:06.437659 sshd[4374]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:17:06.446724 systemd[1]: Started session-22.scope. Dec 13 14:17:06.448028 systemd-logind[1737]: New session 22 of user core. Dec 13 14:17:06.690141 sshd[4374]: pam_unix(sshd:session): session closed for user core Dec 13 14:17:06.695203 systemd-logind[1737]: Session 22 logged out. Waiting for processes to exit. Dec 13 14:17:06.695675 systemd[1]: sshd@21-172.31.21.141:22-139.178.89.65:40142.service: Deactivated successfully. Dec 13 14:17:06.697017 systemd[1]: session-22.scope: Deactivated successfully. Dec 13 14:17:06.698634 systemd-logind[1737]: Removed session 22. Dec 13 14:17:11.720096 systemd[1]: Started sshd@22-172.31.21.141:22-139.178.89.65:35808.service. Dec 13 14:17:11.886479 sshd[4386]: Accepted publickey for core from 139.178.89.65 port 35808 ssh2: RSA SHA256:07jB2DPJgjjhgg3L8Uh349EZ0zHZFrUiRWNbK+Fdo0Q Dec 13 14:17:11.889590 sshd[4386]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:17:11.899240 systemd[1]: Started session-23.scope. Dec 13 14:17:11.901491 systemd-logind[1737]: New session 23 of user core. Dec 13 14:17:12.150765 sshd[4386]: pam_unix(sshd:session): session closed for user core Dec 13 14:17:12.156169 systemd[1]: session-23.scope: Deactivated successfully. Dec 13 14:17:12.157799 systemd-logind[1737]: Session 23 logged out. Waiting for processes to exit. 
Dec 13 14:17:12.158129 systemd[1]: sshd@22-172.31.21.141:22-139.178.89.65:35808.service: Deactivated successfully. Dec 13 14:17:12.161954 systemd-logind[1737]: Removed session 23. Dec 13 14:17:17.181121 systemd[1]: Started sshd@23-172.31.21.141:22-139.178.89.65:35820.service. Dec 13 14:17:17.350520 sshd[4398]: Accepted publickey for core from 139.178.89.65 port 35820 ssh2: RSA SHA256:07jB2DPJgjjhgg3L8Uh349EZ0zHZFrUiRWNbK+Fdo0Q Dec 13 14:17:17.352994 sshd[4398]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:17:17.360931 systemd-logind[1737]: New session 24 of user core. Dec 13 14:17:17.361786 systemd[1]: Started session-24.scope. Dec 13 14:17:17.599119 sshd[4398]: pam_unix(sshd:session): session closed for user core Dec 13 14:17:17.603868 systemd[1]: session-24.scope: Deactivated successfully. Dec 13 14:17:17.605035 systemd[1]: sshd@23-172.31.21.141:22-139.178.89.65:35820.service: Deactivated successfully. Dec 13 14:17:17.606836 systemd-logind[1737]: Session 24 logged out. Waiting for processes to exit. Dec 13 14:17:17.609910 systemd-logind[1737]: Removed session 24. Dec 13 14:17:17.629841 systemd[1]: Started sshd@24-172.31.21.141:22-139.178.89.65:35824.service. Dec 13 14:17:17.798822 sshd[4410]: Accepted publickey for core from 139.178.89.65 port 35824 ssh2: RSA SHA256:07jB2DPJgjjhgg3L8Uh349EZ0zHZFrUiRWNbK+Fdo0Q Dec 13 14:17:17.801416 sshd[4410]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:17:17.809489 systemd-logind[1737]: New session 25 of user core. Dec 13 14:17:17.810325 systemd[1]: Started session-25.scope. Dec 13 14:17:19.980125 env[1746]: time="2024-12-13T14:17:19.980047293Z" level=info msg="StopContainer for \"3ac21a421384192965cfd613ce1926ac626f7541ca3189e87c9d1d52d6c6e42f\" with timeout 30 (s)" Dec 13 14:17:19.983861 env[1746]: time="2024-12-13T14:17:19.983800490Z" level=info msg="Stop container \"3ac21a421384192965cfd613ce1926ac626f7541ca3189e87c9d1d52d6c6e42f\" with signal terminated" Dec 13 14:17:20.021002 systemd[1]: cri-containerd-3ac21a421384192965cfd613ce1926ac626f7541ca3189e87c9d1d52d6c6e42f.scope: Deactivated successfully. Dec 13 14:17:20.024332 env[1746]: time="2024-12-13T14:17:20.024230934Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 14:17:20.042596 env[1746]: time="2024-12-13T14:17:20.042538236Z" level=info msg="StopContainer for \"301113eab2d84a4832383b28e9ef8621416034c477315001f2bac58103b94cc0\" with timeout 2 (s)" Dec 13 14:17:20.043744 env[1746]: time="2024-12-13T14:17:20.043690699Z" level=info msg="Stop container \"301113eab2d84a4832383b28e9ef8621416034c477315001f2bac58103b94cc0\" with signal terminated" Dec 13 14:17:20.071859 systemd-networkd[1462]: lxc_health: Link DOWN Dec 13 14:17:20.071882 systemd-networkd[1462]: lxc_health: Lost carrier Dec 13 14:17:20.092248 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3ac21a421384192965cfd613ce1926ac626f7541ca3189e87c9d1d52d6c6e42f-rootfs.mount: Deactivated successfully. Dec 13 14:17:20.110230 systemd[1]: cri-containerd-301113eab2d84a4832383b28e9ef8621416034c477315001f2bac58103b94cc0.scope: Deactivated successfully. Dec 13 14:17:20.110806 systemd[1]: cri-containerd-301113eab2d84a4832383b28e9ef8621416034c477315001f2bac58103b94cc0.scope: Consumed 15.554s CPU time. 
Dec 13 14:17:20.129267 env[1746]: time="2024-12-13T14:17:20.129189417Z" level=info msg="shim disconnected" id=3ac21a421384192965cfd613ce1926ac626f7541ca3189e87c9d1d52d6c6e42f Dec 13 14:17:20.129267 env[1746]: time="2024-12-13T14:17:20.129257777Z" level=warning msg="cleaning up after shim disconnected" id=3ac21a421384192965cfd613ce1926ac626f7541ca3189e87c9d1d52d6c6e42f namespace=k8s.io Dec 13 14:17:20.129698 env[1746]: time="2024-12-13T14:17:20.129281024Z" level=info msg="cleaning up dead shim" Dec 13 14:17:20.148905 env[1746]: time="2024-12-13T14:17:20.148829312Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:17:20Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4471 runtime=io.containerd.runc.v2\n" Dec 13 14:17:20.152359 env[1746]: time="2024-12-13T14:17:20.152296721Z" level=info msg="StopContainer for \"3ac21a421384192965cfd613ce1926ac626f7541ca3189e87c9d1d52d6c6e42f\" returns successfully" Dec 13 14:17:20.153794 env[1746]: time="2024-12-13T14:17:20.153745583Z" level=info msg="StopPodSandbox for \"f536c8cf7cfedc49bc0a35148a0cf6e35008b67dcbdfcf96b1476a611e7c1e98\"" Dec 13 14:17:20.154084 env[1746]: time="2024-12-13T14:17:20.154047131Z" level=info msg="Container to stop \"3ac21a421384192965cfd613ce1926ac626f7541ca3189e87c9d1d52d6c6e42f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 14:17:20.157639 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f536c8cf7cfedc49bc0a35148a0cf6e35008b67dcbdfcf96b1476a611e7c1e98-shm.mount: Deactivated successfully. Dec 13 14:17:20.167700 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-301113eab2d84a4832383b28e9ef8621416034c477315001f2bac58103b94cc0-rootfs.mount: Deactivated successfully. Dec 13 14:17:20.176502 systemd[1]: cri-containerd-f536c8cf7cfedc49bc0a35148a0cf6e35008b67dcbdfcf96b1476a611e7c1e98.scope: Deactivated successfully. 
Dec 13 14:17:20.181865 env[1746]: time="2024-12-13T14:17:20.181764226Z" level=info msg="shim disconnected" id=301113eab2d84a4832383b28e9ef8621416034c477315001f2bac58103b94cc0 Dec 13 14:17:20.182145 env[1746]: time="2024-12-13T14:17:20.182110096Z" level=warning msg="cleaning up after shim disconnected" id=301113eab2d84a4832383b28e9ef8621416034c477315001f2bac58103b94cc0 namespace=k8s.io Dec 13 14:17:20.182298 env[1746]: time="2024-12-13T14:17:20.182269451Z" level=info msg="cleaning up dead shim" Dec 13 14:17:20.201525 env[1746]: time="2024-12-13T14:17:20.201465808Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:17:20Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4496 runtime=io.containerd.runc.v2\n" Dec 13 14:17:20.205196 env[1746]: time="2024-12-13T14:17:20.205130893Z" level=info msg="StopContainer for \"301113eab2d84a4832383b28e9ef8621416034c477315001f2bac58103b94cc0\" returns successfully" Dec 13 14:17:20.206059 env[1746]: time="2024-12-13T14:17:20.205986644Z" level=info msg="StopPodSandbox for \"5a460b8b0675c693746815cb5640e83a258271e80ebc03e358c8df18ce9c947e\"" Dec 13 14:17:20.206214 env[1746]: time="2024-12-13T14:17:20.206085764Z" level=info msg="Container to stop \"52a789867f3cc36a2d17e27adae99f779e0359486ac0132d3125fa96ea7bcace\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 14:17:20.206214 env[1746]: time="2024-12-13T14:17:20.206117628Z" level=info msg="Container to stop \"41f78e7cb9b5743a0cfa6ed7a8f716d010785117365515491920c6516014f198\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 14:17:20.206214 env[1746]: time="2024-12-13T14:17:20.206148184Z" level=info msg="Container to stop \"6aee9cec6c333f11c00337b41af542337347aeb0d8e33718fedc4c70488321b4\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 14:17:20.206214 env[1746]: time="2024-12-13T14:17:20.206176231Z" level=info msg="Container to stop \"873ae6b84cff3fe5a6de775afec51c2ce04a7fcbb261052b0fb1c932697b5757\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 14:17:20.206214 env[1746]: time="2024-12-13T14:17:20.206203342Z" level=info msg="Container to stop \"301113eab2d84a4832383b28e9ef8621416034c477315001f2bac58103b94cc0\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 14:17:20.223668 systemd[1]: cri-containerd-5a460b8b0675c693746815cb5640e83a258271e80ebc03e358c8df18ce9c947e.scope: Deactivated successfully. 
Dec 13 14:17:20.237469 env[1746]: time="2024-12-13T14:17:20.235475752Z" level=info msg="shim disconnected" id=f536c8cf7cfedc49bc0a35148a0cf6e35008b67dcbdfcf96b1476a611e7c1e98 Dec 13 14:17:20.237469 env[1746]: time="2024-12-13T14:17:20.235549501Z" level=warning msg="cleaning up after shim disconnected" id=f536c8cf7cfedc49bc0a35148a0cf6e35008b67dcbdfcf96b1476a611e7c1e98 namespace=k8s.io Dec 13 14:17:20.237469 env[1746]: time="2024-12-13T14:17:20.235572375Z" level=info msg="cleaning up dead shim" Dec 13 14:17:20.264855 env[1746]: time="2024-12-13T14:17:20.264788006Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:17:20Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4531 runtime=io.containerd.runc.v2\n" Dec 13 14:17:20.265484 env[1746]: time="2024-12-13T14:17:20.265364472Z" level=info msg="TearDown network for sandbox \"f536c8cf7cfedc49bc0a35148a0cf6e35008b67dcbdfcf96b1476a611e7c1e98\" successfully" Dec 13 14:17:20.265484 env[1746]: time="2024-12-13T14:17:20.265484258Z" level=info msg="StopPodSandbox for \"f536c8cf7cfedc49bc0a35148a0cf6e35008b67dcbdfcf96b1476a611e7c1e98\" returns successfully" Dec 13 14:17:20.289467 env[1746]: time="2024-12-13T14:17:20.289365859Z" level=info msg="shim disconnected" id=5a460b8b0675c693746815cb5640e83a258271e80ebc03e358c8df18ce9c947e Dec 13 14:17:20.289819 env[1746]: time="2024-12-13T14:17:20.289785778Z" level=warning msg="cleaning up after shim disconnected" id=5a460b8b0675c693746815cb5640e83a258271e80ebc03e358c8df18ce9c947e namespace=k8s.io Dec 13 14:17:20.289972 env[1746]: time="2024-12-13T14:17:20.289944293Z" level=info msg="cleaning up dead shim" Dec 13 14:17:20.304803 kubelet[2706]: I1213 14:17:20.304731 2706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f50ce617-4989-437b-ac3c-3c48e2f2d3f6-cilium-config-path\") pod \"f50ce617-4989-437b-ac3c-3c48e2f2d3f6\" (UID: \"f50ce617-4989-437b-ac3c-3c48e2f2d3f6\") " Dec 13 14:17:20.305433 kubelet[2706]: I1213 14:17:20.304816 2706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-frg8g\" (UniqueName: \"kubernetes.io/projected/f50ce617-4989-437b-ac3c-3c48e2f2d3f6-kube-api-access-frg8g\") pod \"f50ce617-4989-437b-ac3c-3c48e2f2d3f6\" (UID: \"f50ce617-4989-437b-ac3c-3c48e2f2d3f6\") " Dec 13 14:17:20.312127 kubelet[2706]: I1213 14:17:20.312064 2706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f50ce617-4989-437b-ac3c-3c48e2f2d3f6-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "f50ce617-4989-437b-ac3c-3c48e2f2d3f6" (UID: "f50ce617-4989-437b-ac3c-3c48e2f2d3f6"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 14:17:20.315054 kubelet[2706]: I1213 14:17:20.314982 2706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f50ce617-4989-437b-ac3c-3c48e2f2d3f6-kube-api-access-frg8g" (OuterVolumeSpecName: "kube-api-access-frg8g") pod "f50ce617-4989-437b-ac3c-3c48e2f2d3f6" (UID: "f50ce617-4989-437b-ac3c-3c48e2f2d3f6"). InnerVolumeSpecName "kube-api-access-frg8g". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 14:17:20.315550 env[1746]: time="2024-12-13T14:17:20.315502904Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:17:20Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4556 runtime=io.containerd.runc.v2\n" Dec 13 14:17:20.316235 env[1746]: time="2024-12-13T14:17:20.316192519Z" level=info msg="TearDown network for sandbox \"5a460b8b0675c693746815cb5640e83a258271e80ebc03e358c8df18ce9c947e\" successfully" Dec 13 14:17:20.316371 env[1746]: time="2024-12-13T14:17:20.316337952Z" level=info msg="StopPodSandbox for \"5a460b8b0675c693746815cb5640e83a258271e80ebc03e358c8df18ce9c947e\" returns successfully" Dec 13 14:17:20.405262 kubelet[2706]: I1213 14:17:20.405092 2706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/881100f3-24f0-4af6-9db8-5191e46ad111-cilium-run\") pod \"881100f3-24f0-4af6-9db8-5191e46ad111\" (UID: \"881100f3-24f0-4af6-9db8-5191e46ad111\") " Dec 13 14:17:20.405661 kubelet[2706]: I1213 14:17:20.405568 2706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/881100f3-24f0-4af6-9db8-5191e46ad111-host-proc-sys-net\") pod \"881100f3-24f0-4af6-9db8-5191e46ad111\" (UID: \"881100f3-24f0-4af6-9db8-5191e46ad111\") " Dec 13 14:17:20.405859 kubelet[2706]: I1213 14:17:20.405822 2706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/881100f3-24f0-4af6-9db8-5191e46ad111-cilium-config-path\") pod \"881100f3-24f0-4af6-9db8-5191e46ad111\" (UID: \"881100f3-24f0-4af6-9db8-5191e46ad111\") " Dec 13 14:17:20.406547 kubelet[2706]: I1213 14:17:20.406475 2706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ccjg7\" (UniqueName: \"kubernetes.io/projected/881100f3-24f0-4af6-9db8-5191e46ad111-kube-api-access-ccjg7\") pod \"881100f3-24f0-4af6-9db8-5191e46ad111\" (UID: \"881100f3-24f0-4af6-9db8-5191e46ad111\") " Dec 13 14:17:20.406672 kubelet[2706]: I1213 14:17:20.406559 2706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/881100f3-24f0-4af6-9db8-5191e46ad111-bpf-maps\") pod \"881100f3-24f0-4af6-9db8-5191e46ad111\" (UID: \"881100f3-24f0-4af6-9db8-5191e46ad111\") " Dec 13 14:17:20.406672 kubelet[2706]: I1213 14:17:20.406604 2706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/881100f3-24f0-4af6-9db8-5191e46ad111-host-proc-sys-kernel\") pod \"881100f3-24f0-4af6-9db8-5191e46ad111\" (UID: \"881100f3-24f0-4af6-9db8-5191e46ad111\") " Dec 13 14:17:20.406672 kubelet[2706]: I1213 14:17:20.406638 2706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/881100f3-24f0-4af6-9db8-5191e46ad111-cilium-cgroup\") pod \"881100f3-24f0-4af6-9db8-5191e46ad111\" (UID: \"881100f3-24f0-4af6-9db8-5191e46ad111\") " Dec 13 14:17:20.406861 kubelet[2706]: I1213 14:17:20.406671 2706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/881100f3-24f0-4af6-9db8-5191e46ad111-etc-cni-netd\") pod \"881100f3-24f0-4af6-9db8-5191e46ad111\" (UID: \"881100f3-24f0-4af6-9db8-5191e46ad111\") " Dec 13 14:17:20.406861 kubelet[2706]: 
I1213 14:17:20.406704 2706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/881100f3-24f0-4af6-9db8-5191e46ad111-lib-modules\") pod \"881100f3-24f0-4af6-9db8-5191e46ad111\" (UID: \"881100f3-24f0-4af6-9db8-5191e46ad111\") " Dec 13 14:17:20.406861 kubelet[2706]: I1213 14:17:20.406738 2706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/881100f3-24f0-4af6-9db8-5191e46ad111-cni-path\") pod \"881100f3-24f0-4af6-9db8-5191e46ad111\" (UID: \"881100f3-24f0-4af6-9db8-5191e46ad111\") " Dec 13 14:17:20.406861 kubelet[2706]: I1213 14:17:20.406770 2706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/881100f3-24f0-4af6-9db8-5191e46ad111-hostproc\") pod \"881100f3-24f0-4af6-9db8-5191e46ad111\" (UID: \"881100f3-24f0-4af6-9db8-5191e46ad111\") " Dec 13 14:17:20.406861 kubelet[2706]: I1213 14:17:20.406809 2706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/881100f3-24f0-4af6-9db8-5191e46ad111-clustermesh-secrets\") pod \"881100f3-24f0-4af6-9db8-5191e46ad111\" (UID: \"881100f3-24f0-4af6-9db8-5191e46ad111\") " Dec 13 14:17:20.406861 kubelet[2706]: I1213 14:17:20.406845 2706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/881100f3-24f0-4af6-9db8-5191e46ad111-xtables-lock\") pod \"881100f3-24f0-4af6-9db8-5191e46ad111\" (UID: \"881100f3-24f0-4af6-9db8-5191e46ad111\") " Dec 13 14:17:20.407226 kubelet[2706]: I1213 14:17:20.406881 2706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/881100f3-24f0-4af6-9db8-5191e46ad111-hubble-tls\") pod \"881100f3-24f0-4af6-9db8-5191e46ad111\" (UID: \"881100f3-24f0-4af6-9db8-5191e46ad111\") " Dec 13 14:17:20.407226 kubelet[2706]: I1213 14:17:20.406943 2706 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-frg8g\" (UniqueName: \"kubernetes.io/projected/f50ce617-4989-437b-ac3c-3c48e2f2d3f6-kube-api-access-frg8g\") on node \"ip-172-31-21-141\" DevicePath \"\"" Dec 13 14:17:20.407226 kubelet[2706]: I1213 14:17:20.406969 2706 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f50ce617-4989-437b-ac3c-3c48e2f2d3f6-cilium-config-path\") on node \"ip-172-31-21-141\" DevicePath \"\"" Dec 13 14:17:20.407714 kubelet[2706]: I1213 14:17:20.405193 2706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/881100f3-24f0-4af6-9db8-5191e46ad111-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "881100f3-24f0-4af6-9db8-5191e46ad111" (UID: "881100f3-24f0-4af6-9db8-5191e46ad111"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:17:20.407714 kubelet[2706]: I1213 14:17:20.405629 2706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/881100f3-24f0-4af6-9db8-5191e46ad111-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "881100f3-24f0-4af6-9db8-5191e46ad111" (UID: "881100f3-24f0-4af6-9db8-5191e46ad111"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:17:20.412114 kubelet[2706]: I1213 14:17:20.412028 2706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/881100f3-24f0-4af6-9db8-5191e46ad111-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "881100f3-24f0-4af6-9db8-5191e46ad111" (UID: "881100f3-24f0-4af6-9db8-5191e46ad111"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:17:20.412371 kubelet[2706]: I1213 14:17:20.412337 2706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/881100f3-24f0-4af6-9db8-5191e46ad111-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "881100f3-24f0-4af6-9db8-5191e46ad111" (UID: "881100f3-24f0-4af6-9db8-5191e46ad111"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:17:20.412654 kubelet[2706]: I1213 14:17:20.412563 2706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/881100f3-24f0-4af6-9db8-5191e46ad111-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "881100f3-24f0-4af6-9db8-5191e46ad111" (UID: "881100f3-24f0-4af6-9db8-5191e46ad111"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:17:20.412838 kubelet[2706]: I1213 14:17:20.412811 2706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/881100f3-24f0-4af6-9db8-5191e46ad111-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "881100f3-24f0-4af6-9db8-5191e46ad111" (UID: "881100f3-24f0-4af6-9db8-5191e46ad111"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:17:20.413007 kubelet[2706]: I1213 14:17:20.412982 2706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/881100f3-24f0-4af6-9db8-5191e46ad111-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "881100f3-24f0-4af6-9db8-5191e46ad111" (UID: "881100f3-24f0-4af6-9db8-5191e46ad111"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:17:20.413273 kubelet[2706]: I1213 14:17:20.413247 2706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/881100f3-24f0-4af6-9db8-5191e46ad111-cni-path" (OuterVolumeSpecName: "cni-path") pod "881100f3-24f0-4af6-9db8-5191e46ad111" (UID: "881100f3-24f0-4af6-9db8-5191e46ad111"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:17:20.413510 kubelet[2706]: I1213 14:17:20.413448 2706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/881100f3-24f0-4af6-9db8-5191e46ad111-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "881100f3-24f0-4af6-9db8-5191e46ad111" (UID: "881100f3-24f0-4af6-9db8-5191e46ad111"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:17:20.413812 kubelet[2706]: I1213 14:17:20.413734 2706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/881100f3-24f0-4af6-9db8-5191e46ad111-hostproc" (OuterVolumeSpecName: "hostproc") pod "881100f3-24f0-4af6-9db8-5191e46ad111" (UID: "881100f3-24f0-4af6-9db8-5191e46ad111"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:17:20.414085 kubelet[2706]: I1213 14:17:20.414055 2706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/881100f3-24f0-4af6-9db8-5191e46ad111-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "881100f3-24f0-4af6-9db8-5191e46ad111" (UID: "881100f3-24f0-4af6-9db8-5191e46ad111"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 14:17:20.417842 kubelet[2706]: I1213 14:17:20.417709 2706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/881100f3-24f0-4af6-9db8-5191e46ad111-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "881100f3-24f0-4af6-9db8-5191e46ad111" (UID: "881100f3-24f0-4af6-9db8-5191e46ad111"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 14:17:20.419784 kubelet[2706]: I1213 14:17:20.419695 2706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/881100f3-24f0-4af6-9db8-5191e46ad111-kube-api-access-ccjg7" (OuterVolumeSpecName: "kube-api-access-ccjg7") pod "881100f3-24f0-4af6-9db8-5191e46ad111" (UID: "881100f3-24f0-4af6-9db8-5191e46ad111"). InnerVolumeSpecName "kube-api-access-ccjg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 14:17:20.424613 kubelet[2706]: I1213 14:17:20.424548 2706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/881100f3-24f0-4af6-9db8-5191e46ad111-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "881100f3-24f0-4af6-9db8-5191e46ad111" (UID: "881100f3-24f0-4af6-9db8-5191e46ad111"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 14:17:20.495103 kubelet[2706]: I1213 14:17:20.492266 2706 scope.go:117] "RemoveContainer" containerID="3ac21a421384192965cfd613ce1926ac626f7541ca3189e87c9d1d52d6c6e42f" Dec 13 14:17:20.498243 env[1746]: time="2024-12-13T14:17:20.497776068Z" level=info msg="RemoveContainer for \"3ac21a421384192965cfd613ce1926ac626f7541ca3189e87c9d1d52d6c6e42f\"" Dec 13 14:17:20.507405 kubelet[2706]: I1213 14:17:20.507234 2706 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/881100f3-24f0-4af6-9db8-5191e46ad111-cilium-run\") on node \"ip-172-31-21-141\" DevicePath \"\"" Dec 13 14:17:20.507405 kubelet[2706]: I1213 14:17:20.507285 2706 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/881100f3-24f0-4af6-9db8-5191e46ad111-cilium-config-path\") on node \"ip-172-31-21-141\" DevicePath \"\"" Dec 13 14:17:20.507405 kubelet[2706]: I1213 14:17:20.507311 2706 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/881100f3-24f0-4af6-9db8-5191e46ad111-host-proc-sys-net\") on node \"ip-172-31-21-141\" DevicePath \"\"" Dec 13 14:17:20.507405 kubelet[2706]: I1213 14:17:20.507334 2706 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-ccjg7\" (UniqueName: \"kubernetes.io/projected/881100f3-24f0-4af6-9db8-5191e46ad111-kube-api-access-ccjg7\") on node \"ip-172-31-21-141\" DevicePath \"\"" Dec 13 14:17:20.507405 kubelet[2706]: I1213 14:17:20.507356 2706 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/881100f3-24f0-4af6-9db8-5191e46ad111-bpf-maps\") on node \"ip-172-31-21-141\" DevicePath \"\"" Dec 13 14:17:20.507405 kubelet[2706]: I1213 14:17:20.507399 2706 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/881100f3-24f0-4af6-9db8-5191e46ad111-host-proc-sys-kernel\") on node \"ip-172-31-21-141\" DevicePath \"\"" Dec 13 14:17:20.507984 kubelet[2706]: I1213 14:17:20.507425 2706 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/881100f3-24f0-4af6-9db8-5191e46ad111-cilium-cgroup\") on node \"ip-172-31-21-141\" DevicePath \"\"" Dec 13 14:17:20.507984 kubelet[2706]: I1213 14:17:20.507445 2706 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/881100f3-24f0-4af6-9db8-5191e46ad111-etc-cni-netd\") on node \"ip-172-31-21-141\" DevicePath \"\"" Dec 13 14:17:20.507984 kubelet[2706]: I1213 14:17:20.507464 2706 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/881100f3-24f0-4af6-9db8-5191e46ad111-lib-modules\") on node \"ip-172-31-21-141\" DevicePath \"\"" Dec 13 14:17:20.507984 kubelet[2706]: I1213 14:17:20.507484 2706 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/881100f3-24f0-4af6-9db8-5191e46ad111-hostproc\") on node \"ip-172-31-21-141\" DevicePath \"\"" Dec 13 14:17:20.507984 kubelet[2706]: I1213 14:17:20.507505 2706 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/881100f3-24f0-4af6-9db8-5191e46ad111-clustermesh-secrets\") on node \"ip-172-31-21-141\" DevicePath \"\"" Dec 13 14:17:20.507984 kubelet[2706]: I1213 14:17:20.507527 2706 
reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/881100f3-24f0-4af6-9db8-5191e46ad111-cni-path\") on node \"ip-172-31-21-141\" DevicePath \"\"" Dec 13 14:17:20.507984 kubelet[2706]: I1213 14:17:20.507546 2706 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/881100f3-24f0-4af6-9db8-5191e46ad111-hubble-tls\") on node \"ip-172-31-21-141\" DevicePath \"\"" Dec 13 14:17:20.507984 kubelet[2706]: I1213 14:17:20.507565 2706 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/881100f3-24f0-4af6-9db8-5191e46ad111-xtables-lock\") on node \"ip-172-31-21-141\" DevicePath \"\"" Dec 13 14:17:20.510525 env[1746]: time="2024-12-13T14:17:20.510235367Z" level=info msg="RemoveContainer for \"3ac21a421384192965cfd613ce1926ac626f7541ca3189e87c9d1d52d6c6e42f\" returns successfully" Dec 13 14:17:20.515992 systemd[1]: Removed slice kubepods-besteffort-podf50ce617_4989_437b_ac3c_3c48e2f2d3f6.slice. Dec 13 14:17:20.516967 kubelet[2706]: I1213 14:17:20.516621 2706 scope.go:117] "RemoveContainer" containerID="3ac21a421384192965cfd613ce1926ac626f7541ca3189e87c9d1d52d6c6e42f" Dec 13 14:17:20.517844 env[1746]: time="2024-12-13T14:17:20.517661705Z" level=error msg="ContainerStatus for \"3ac21a421384192965cfd613ce1926ac626f7541ca3189e87c9d1d52d6c6e42f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3ac21a421384192965cfd613ce1926ac626f7541ca3189e87c9d1d52d6c6e42f\": not found" Dec 13 14:17:20.518289 kubelet[2706]: E1213 14:17:20.518241 2706 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3ac21a421384192965cfd613ce1926ac626f7541ca3189e87c9d1d52d6c6e42f\": not found" containerID="3ac21a421384192965cfd613ce1926ac626f7541ca3189e87c9d1d52d6c6e42f" Dec 13 14:17:20.518444 kubelet[2706]: I1213 14:17:20.518302 2706 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3ac21a421384192965cfd613ce1926ac626f7541ca3189e87c9d1d52d6c6e42f"} err="failed to get container status \"3ac21a421384192965cfd613ce1926ac626f7541ca3189e87c9d1d52d6c6e42f\": rpc error: code = NotFound desc = an error occurred when try to find container \"3ac21a421384192965cfd613ce1926ac626f7541ca3189e87c9d1d52d6c6e42f\": not found" Dec 13 14:17:20.518532 kubelet[2706]: I1213 14:17:20.518453 2706 scope.go:117] "RemoveContainer" containerID="301113eab2d84a4832383b28e9ef8621416034c477315001f2bac58103b94cc0" Dec 13 14:17:20.522784 env[1746]: time="2024-12-13T14:17:20.522698807Z" level=info msg="RemoveContainer for \"301113eab2d84a4832383b28e9ef8621416034c477315001f2bac58103b94cc0\"" Dec 13 14:17:20.527360 systemd[1]: Removed slice kubepods-burstable-pod881100f3_24f0_4af6_9db8_5191e46ad111.slice. Dec 13 14:17:20.527597 systemd[1]: kubepods-burstable-pod881100f3_24f0_4af6_9db8_5191e46ad111.slice: Consumed 15.784s CPU time. 
Dec 13 14:17:20.530216 env[1746]: time="2024-12-13T14:17:20.530147527Z" level=info msg="RemoveContainer for \"301113eab2d84a4832383b28e9ef8621416034c477315001f2bac58103b94cc0\" returns successfully" Dec 13 14:17:20.532736 kubelet[2706]: I1213 14:17:20.532618 2706 scope.go:117] "RemoveContainer" containerID="6aee9cec6c333f11c00337b41af542337347aeb0d8e33718fedc4c70488321b4" Dec 13 14:17:20.539642 env[1746]: time="2024-12-13T14:17:20.539564856Z" level=info msg="RemoveContainer for \"6aee9cec6c333f11c00337b41af542337347aeb0d8e33718fedc4c70488321b4\"" Dec 13 14:17:20.547078 env[1746]: time="2024-12-13T14:17:20.546979312Z" level=info msg="RemoveContainer for \"6aee9cec6c333f11c00337b41af542337347aeb0d8e33718fedc4c70488321b4\" returns successfully" Dec 13 14:17:20.547460 kubelet[2706]: I1213 14:17:20.547363 2706 scope.go:117] "RemoveContainer" containerID="41f78e7cb9b5743a0cfa6ed7a8f716d010785117365515491920c6516014f198" Dec 13 14:17:20.556057 env[1746]: time="2024-12-13T14:17:20.554809006Z" level=info msg="RemoveContainer for \"41f78e7cb9b5743a0cfa6ed7a8f716d010785117365515491920c6516014f198\"" Dec 13 14:17:20.563310 env[1746]: time="2024-12-13T14:17:20.563252406Z" level=info msg="RemoveContainer for \"41f78e7cb9b5743a0cfa6ed7a8f716d010785117365515491920c6516014f198\" returns successfully" Dec 13 14:17:20.563909 kubelet[2706]: I1213 14:17:20.563873 2706 scope.go:117] "RemoveContainer" containerID="52a789867f3cc36a2d17e27adae99f779e0359486ac0132d3125fa96ea7bcace" Dec 13 14:17:20.566005 env[1746]: time="2024-12-13T14:17:20.565952850Z" level=info msg="RemoveContainer for \"52a789867f3cc36a2d17e27adae99f779e0359486ac0132d3125fa96ea7bcace\"" Dec 13 14:17:20.577537 env[1746]: time="2024-12-13T14:17:20.577479277Z" level=info msg="RemoveContainer for \"52a789867f3cc36a2d17e27adae99f779e0359486ac0132d3125fa96ea7bcace\" returns successfully" Dec 13 14:17:20.578016 kubelet[2706]: I1213 14:17:20.577985 2706 scope.go:117] "RemoveContainer" containerID="873ae6b84cff3fe5a6de775afec51c2ce04a7fcbb261052b0fb1c932697b5757" Dec 13 14:17:20.581032 env[1746]: time="2024-12-13T14:17:20.580973678Z" level=info msg="RemoveContainer for \"873ae6b84cff3fe5a6de775afec51c2ce04a7fcbb261052b0fb1c932697b5757\"" Dec 13 14:17:20.586677 env[1746]: time="2024-12-13T14:17:20.586599090Z" level=info msg="RemoveContainer for \"873ae6b84cff3fe5a6de775afec51c2ce04a7fcbb261052b0fb1c932697b5757\" returns successfully" Dec 13 14:17:20.590962 kubelet[2706]: I1213 14:17:20.590920 2706 scope.go:117] "RemoveContainer" containerID="301113eab2d84a4832383b28e9ef8621416034c477315001f2bac58103b94cc0" Dec 13 14:17:20.592093 env[1746]: time="2024-12-13T14:17:20.591985986Z" level=error msg="ContainerStatus for \"301113eab2d84a4832383b28e9ef8621416034c477315001f2bac58103b94cc0\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"301113eab2d84a4832383b28e9ef8621416034c477315001f2bac58103b94cc0\": not found" Dec 13 14:17:20.592685 kubelet[2706]: E1213 14:17:20.592448 2706 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"301113eab2d84a4832383b28e9ef8621416034c477315001f2bac58103b94cc0\": not found" containerID="301113eab2d84a4832383b28e9ef8621416034c477315001f2bac58103b94cc0" Dec 13 14:17:20.592685 kubelet[2706]: I1213 14:17:20.592499 2706 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"301113eab2d84a4832383b28e9ef8621416034c477315001f2bac58103b94cc0"} err="failed to 
get container status \"301113eab2d84a4832383b28e9ef8621416034c477315001f2bac58103b94cc0\": rpc error: code = NotFound desc = an error occurred when try to find container \"301113eab2d84a4832383b28e9ef8621416034c477315001f2bac58103b94cc0\": not found" Dec 13 14:17:20.592685 kubelet[2706]: I1213 14:17:20.592538 2706 scope.go:117] "RemoveContainer" containerID="6aee9cec6c333f11c00337b41af542337347aeb0d8e33718fedc4c70488321b4" Dec 13 14:17:20.592946 env[1746]: time="2024-12-13T14:17:20.592875005Z" level=error msg="ContainerStatus for \"6aee9cec6c333f11c00337b41af542337347aeb0d8e33718fedc4c70488321b4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6aee9cec6c333f11c00337b41af542337347aeb0d8e33718fedc4c70488321b4\": not found" Dec 13 14:17:20.593370 kubelet[2706]: E1213 14:17:20.593146 2706 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6aee9cec6c333f11c00337b41af542337347aeb0d8e33718fedc4c70488321b4\": not found" containerID="6aee9cec6c333f11c00337b41af542337347aeb0d8e33718fedc4c70488321b4" Dec 13 14:17:20.593370 kubelet[2706]: I1213 14:17:20.593190 2706 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6aee9cec6c333f11c00337b41af542337347aeb0d8e33718fedc4c70488321b4"} err="failed to get container status \"6aee9cec6c333f11c00337b41af542337347aeb0d8e33718fedc4c70488321b4\": rpc error: code = NotFound desc = an error occurred when try to find container \"6aee9cec6c333f11c00337b41af542337347aeb0d8e33718fedc4c70488321b4\": not found" Dec 13 14:17:20.593370 kubelet[2706]: I1213 14:17:20.593223 2706 scope.go:117] "RemoveContainer" containerID="41f78e7cb9b5743a0cfa6ed7a8f716d010785117365515491920c6516014f198" Dec 13 14:17:20.593619 env[1746]: time="2024-12-13T14:17:20.593522831Z" level=error msg="ContainerStatus for \"41f78e7cb9b5743a0cfa6ed7a8f716d010785117365515491920c6516014f198\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"41f78e7cb9b5743a0cfa6ed7a8f716d010785117365515491920c6516014f198\": not found" Dec 13 14:17:20.594003 kubelet[2706]: E1213 14:17:20.593815 2706 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"41f78e7cb9b5743a0cfa6ed7a8f716d010785117365515491920c6516014f198\": not found" containerID="41f78e7cb9b5743a0cfa6ed7a8f716d010785117365515491920c6516014f198" Dec 13 14:17:20.594003 kubelet[2706]: I1213 14:17:20.593856 2706 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"41f78e7cb9b5743a0cfa6ed7a8f716d010785117365515491920c6516014f198"} err="failed to get container status \"41f78e7cb9b5743a0cfa6ed7a8f716d010785117365515491920c6516014f198\": rpc error: code = NotFound desc = an error occurred when try to find container \"41f78e7cb9b5743a0cfa6ed7a8f716d010785117365515491920c6516014f198\": not found" Dec 13 14:17:20.594003 kubelet[2706]: I1213 14:17:20.593886 2706 scope.go:117] "RemoveContainer" containerID="52a789867f3cc36a2d17e27adae99f779e0359486ac0132d3125fa96ea7bcace" Dec 13 14:17:20.594239 env[1746]: time="2024-12-13T14:17:20.594161116Z" level=error msg="ContainerStatus for \"52a789867f3cc36a2d17e27adae99f779e0359486ac0132d3125fa96ea7bcace\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"52a789867f3cc36a2d17e27adae99f779e0359486ac0132d3125fa96ea7bcace\": not found" Dec 13 14:17:20.594649 kubelet[2706]: E1213 14:17:20.594442 2706 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"52a789867f3cc36a2d17e27adae99f779e0359486ac0132d3125fa96ea7bcace\": not found" containerID="52a789867f3cc36a2d17e27adae99f779e0359486ac0132d3125fa96ea7bcace" Dec 13 14:17:20.594649 kubelet[2706]: I1213 14:17:20.594484 2706 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"52a789867f3cc36a2d17e27adae99f779e0359486ac0132d3125fa96ea7bcace"} err="failed to get container status \"52a789867f3cc36a2d17e27adae99f779e0359486ac0132d3125fa96ea7bcace\": rpc error: code = NotFound desc = an error occurred when try to find container \"52a789867f3cc36a2d17e27adae99f779e0359486ac0132d3125fa96ea7bcace\": not found" Dec 13 14:17:20.594649 kubelet[2706]: I1213 14:17:20.594513 2706 scope.go:117] "RemoveContainer" containerID="873ae6b84cff3fe5a6de775afec51c2ce04a7fcbb261052b0fb1c932697b5757" Dec 13 14:17:20.594867 env[1746]: time="2024-12-13T14:17:20.594787519Z" level=error msg="ContainerStatus for \"873ae6b84cff3fe5a6de775afec51c2ce04a7fcbb261052b0fb1c932697b5757\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"873ae6b84cff3fe5a6de775afec51c2ce04a7fcbb261052b0fb1c932697b5757\": not found" Dec 13 14:17:20.595171 kubelet[2706]: E1213 14:17:20.595067 2706 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"873ae6b84cff3fe5a6de775afec51c2ce04a7fcbb261052b0fb1c932697b5757\": not found" containerID="873ae6b84cff3fe5a6de775afec51c2ce04a7fcbb261052b0fb1c932697b5757" Dec 13 14:17:20.595171 kubelet[2706]: I1213 14:17:20.595107 2706 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"873ae6b84cff3fe5a6de775afec51c2ce04a7fcbb261052b0fb1c932697b5757"} err="failed to get container status \"873ae6b84cff3fe5a6de775afec51c2ce04a7fcbb261052b0fb1c932697b5757\": rpc error: code = NotFound desc = an error occurred when try to find container \"873ae6b84cff3fe5a6de775afec51c2ce04a7fcbb261052b0fb1c932697b5757\": not found" Dec 13 14:17:20.952816 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5a460b8b0675c693746815cb5640e83a258271e80ebc03e358c8df18ce9c947e-rootfs.mount: Deactivated successfully. Dec 13 14:17:20.952998 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5a460b8b0675c693746815cb5640e83a258271e80ebc03e358c8df18ce9c947e-shm.mount: Deactivated successfully. Dec 13 14:17:20.953130 systemd[1]: var-lib-kubelet-pods-881100f3\x2d24f0\x2d4af6\x2d9db8\x2d5191e46ad111-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dccjg7.mount: Deactivated successfully. Dec 13 14:17:20.953266 systemd[1]: var-lib-kubelet-pods-881100f3\x2d24f0\x2d4af6\x2d9db8\x2d5191e46ad111-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Dec 13 14:17:20.953425 systemd[1]: var-lib-kubelet-pods-881100f3\x2d24f0\x2d4af6\x2d9db8\x2d5191e46ad111-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Dec 13 14:17:20.953749 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f536c8cf7cfedc49bc0a35148a0cf6e35008b67dcbdfcf96b1476a611e7c1e98-rootfs.mount: Deactivated successfully. 
Dec 13 14:17:20.953948 systemd[1]: var-lib-kubelet-pods-f50ce617\x2d4989\x2d437b\x2dac3c\x2d3c48e2f2d3f6-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dfrg8g.mount: Deactivated successfully. Dec 13 14:17:20.963055 kubelet[2706]: I1213 14:17:20.962992 2706 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="881100f3-24f0-4af6-9db8-5191e46ad111" path="/var/lib/kubelet/pods/881100f3-24f0-4af6-9db8-5191e46ad111/volumes" Dec 13 14:17:20.964596 kubelet[2706]: I1213 14:17:20.964548 2706 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f50ce617-4989-437b-ac3c-3c48e2f2d3f6" path="/var/lib/kubelet/pods/f50ce617-4989-437b-ac3c-3c48e2f2d3f6/volumes" Dec 13 14:17:21.878599 sshd[4410]: pam_unix(sshd:session): session closed for user core Dec 13 14:17:21.884092 systemd[1]: sshd@24-172.31.21.141:22-139.178.89.65:35824.service: Deactivated successfully. Dec 13 14:17:21.887471 systemd[1]: session-25.scope: Deactivated successfully. Dec 13 14:17:21.887786 systemd[1]: session-25.scope: Consumed 1.369s CPU time. Dec 13 14:17:21.888825 systemd-logind[1737]: Session 25 logged out. Waiting for processes to exit. Dec 13 14:17:21.890529 systemd-logind[1737]: Removed session 25. Dec 13 14:17:21.909040 systemd[1]: Started sshd@25-172.31.21.141:22-139.178.89.65:48288.service. Dec 13 14:17:22.087319 sshd[4575]: Accepted publickey for core from 139.178.89.65 port 48288 ssh2: RSA SHA256:07jB2DPJgjjhgg3L8Uh349EZ0zHZFrUiRWNbK+Fdo0Q Dec 13 14:17:22.089890 sshd[4575]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:17:22.102114 systemd[1]: Started session-26.scope. Dec 13 14:17:22.104024 systemd-logind[1737]: New session 26 of user core. Dec 13 14:17:22.171047 kubelet[2706]: E1213 14:17:22.170979 2706 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Dec 13 14:17:23.649028 sshd[4575]: pam_unix(sshd:session): session closed for user core Dec 13 14:17:23.656267 systemd[1]: sshd@25-172.31.21.141:22-139.178.89.65:48288.service: Deactivated successfully. Dec 13 14:17:23.657612 systemd[1]: session-26.scope: Deactivated successfully. Dec 13 14:17:23.657933 systemd[1]: session-26.scope: Consumed 1.330s CPU time. Dec 13 14:17:23.660149 systemd-logind[1737]: Session 26 logged out. Waiting for processes to exit. Dec 13 14:17:23.661976 systemd-logind[1737]: Removed session 26. 
Dec 13 14:17:23.684170 kubelet[2706]: E1213 14:17:23.684110 2706 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="881100f3-24f0-4af6-9db8-5191e46ad111" containerName="apply-sysctl-overwrites" Dec 13 14:17:23.684170 kubelet[2706]: E1213 14:17:23.684160 2706 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="881100f3-24f0-4af6-9db8-5191e46ad111" containerName="mount-bpf-fs" Dec 13 14:17:23.684825 kubelet[2706]: E1213 14:17:23.684182 2706 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f50ce617-4989-437b-ac3c-3c48e2f2d3f6" containerName="cilium-operator" Dec 13 14:17:23.684825 kubelet[2706]: E1213 14:17:23.684199 2706 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="881100f3-24f0-4af6-9db8-5191e46ad111" containerName="mount-cgroup" Dec 13 14:17:23.684825 kubelet[2706]: E1213 14:17:23.684215 2706 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="881100f3-24f0-4af6-9db8-5191e46ad111" containerName="clean-cilium-state" Dec 13 14:17:23.684825 kubelet[2706]: E1213 14:17:23.684229 2706 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="881100f3-24f0-4af6-9db8-5191e46ad111" containerName="cilium-agent" Dec 13 14:17:23.684825 kubelet[2706]: I1213 14:17:23.684283 2706 memory_manager.go:354] "RemoveStaleState removing state" podUID="f50ce617-4989-437b-ac3c-3c48e2f2d3f6" containerName="cilium-operator" Dec 13 14:17:23.684825 kubelet[2706]: I1213 14:17:23.684300 2706 memory_manager.go:354] "RemoveStaleState removing state" podUID="881100f3-24f0-4af6-9db8-5191e46ad111" containerName="cilium-agent" Dec 13 14:17:23.691038 systemd[1]: Started sshd@26-172.31.21.141:22-139.178.89.65:48296.service. Dec 13 14:17:23.706717 systemd[1]: Created slice kubepods-burstable-pod72a302db_f51a_45f2_8cd9_7595f3b03bdd.slice. 
Dec 13 14:17:23.726781 kubelet[2706]: I1213 14:17:23.726722 2706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/72a302db-f51a-45f2-8cd9-7595f3b03bdd-cni-path\") pod \"cilium-gg4cw\" (UID: \"72a302db-f51a-45f2-8cd9-7595f3b03bdd\") " pod="kube-system/cilium-gg4cw" Dec 13 14:17:23.726941 kubelet[2706]: I1213 14:17:23.726790 2706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/72a302db-f51a-45f2-8cd9-7595f3b03bdd-hubble-tls\") pod \"cilium-gg4cw\" (UID: \"72a302db-f51a-45f2-8cd9-7595f3b03bdd\") " pod="kube-system/cilium-gg4cw" Dec 13 14:17:23.726941 kubelet[2706]: I1213 14:17:23.726831 2706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/72a302db-f51a-45f2-8cd9-7595f3b03bdd-xtables-lock\") pod \"cilium-gg4cw\" (UID: \"72a302db-f51a-45f2-8cd9-7595f3b03bdd\") " pod="kube-system/cilium-gg4cw" Dec 13 14:17:23.726941 kubelet[2706]: I1213 14:17:23.726866 2706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/72a302db-f51a-45f2-8cd9-7595f3b03bdd-cilium-ipsec-secrets\") pod \"cilium-gg4cw\" (UID: \"72a302db-f51a-45f2-8cd9-7595f3b03bdd\") " pod="kube-system/cilium-gg4cw" Dec 13 14:17:23.726941 kubelet[2706]: I1213 14:17:23.726908 2706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/72a302db-f51a-45f2-8cd9-7595f3b03bdd-host-proc-sys-kernel\") pod \"cilium-gg4cw\" (UID: \"72a302db-f51a-45f2-8cd9-7595f3b03bdd\") " pod="kube-system/cilium-gg4cw" Dec 13 14:17:23.727210 kubelet[2706]: I1213 14:17:23.726945 2706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/72a302db-f51a-45f2-8cd9-7595f3b03bdd-cilium-cgroup\") pod \"cilium-gg4cw\" (UID: \"72a302db-f51a-45f2-8cd9-7595f3b03bdd\") " pod="kube-system/cilium-gg4cw" Dec 13 14:17:23.727210 kubelet[2706]: I1213 14:17:23.726978 2706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/72a302db-f51a-45f2-8cd9-7595f3b03bdd-lib-modules\") pod \"cilium-gg4cw\" (UID: \"72a302db-f51a-45f2-8cd9-7595f3b03bdd\") " pod="kube-system/cilium-gg4cw" Dec 13 14:17:23.727210 kubelet[2706]: I1213 14:17:23.727014 2706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/72a302db-f51a-45f2-8cd9-7595f3b03bdd-bpf-maps\") pod \"cilium-gg4cw\" (UID: \"72a302db-f51a-45f2-8cd9-7595f3b03bdd\") " pod="kube-system/cilium-gg4cw" Dec 13 14:17:23.727210 kubelet[2706]: I1213 14:17:23.727053 2706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/72a302db-f51a-45f2-8cd9-7595f3b03bdd-etc-cni-netd\") pod \"cilium-gg4cw\" (UID: \"72a302db-f51a-45f2-8cd9-7595f3b03bdd\") " pod="kube-system/cilium-gg4cw" Dec 13 14:17:23.727210 kubelet[2706]: I1213 14:17:23.727089 2706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: 
\"kubernetes.io/host-path/72a302db-f51a-45f2-8cd9-7595f3b03bdd-cilium-run\") pod \"cilium-gg4cw\" (UID: \"72a302db-f51a-45f2-8cd9-7595f3b03bdd\") " pod="kube-system/cilium-gg4cw" Dec 13 14:17:23.727210 kubelet[2706]: I1213 14:17:23.727129 2706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/72a302db-f51a-45f2-8cd9-7595f3b03bdd-cilium-config-path\") pod \"cilium-gg4cw\" (UID: \"72a302db-f51a-45f2-8cd9-7595f3b03bdd\") " pod="kube-system/cilium-gg4cw" Dec 13 14:17:23.727605 kubelet[2706]: I1213 14:17:23.727164 2706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/72a302db-f51a-45f2-8cd9-7595f3b03bdd-hostproc\") pod \"cilium-gg4cw\" (UID: \"72a302db-f51a-45f2-8cd9-7595f3b03bdd\") " pod="kube-system/cilium-gg4cw" Dec 13 14:17:23.727605 kubelet[2706]: I1213 14:17:23.727198 2706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/72a302db-f51a-45f2-8cd9-7595f3b03bdd-clustermesh-secrets\") pod \"cilium-gg4cw\" (UID: \"72a302db-f51a-45f2-8cd9-7595f3b03bdd\") " pod="kube-system/cilium-gg4cw" Dec 13 14:17:23.727605 kubelet[2706]: I1213 14:17:23.727232 2706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/72a302db-f51a-45f2-8cd9-7595f3b03bdd-host-proc-sys-net\") pod \"cilium-gg4cw\" (UID: \"72a302db-f51a-45f2-8cd9-7595f3b03bdd\") " pod="kube-system/cilium-gg4cw" Dec 13 14:17:23.727605 kubelet[2706]: I1213 14:17:23.727269 2706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x7xsv\" (UniqueName: \"kubernetes.io/projected/72a302db-f51a-45f2-8cd9-7595f3b03bdd-kube-api-access-x7xsv\") pod \"cilium-gg4cw\" (UID: \"72a302db-f51a-45f2-8cd9-7595f3b03bdd\") " pod="kube-system/cilium-gg4cw" Dec 13 14:17:23.912644 sshd[4589]: Accepted publickey for core from 139.178.89.65 port 48296 ssh2: RSA SHA256:07jB2DPJgjjhgg3L8Uh349EZ0zHZFrUiRWNbK+Fdo0Q Dec 13 14:17:23.914317 sshd[4589]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:17:23.925768 systemd[1]: Started session-27.scope. Dec 13 14:17:23.928834 systemd-logind[1737]: New session 27 of user core. Dec 13 14:17:24.015113 env[1746]: time="2024-12-13T14:17:24.015045236Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-gg4cw,Uid:72a302db-f51a-45f2-8cd9-7595f3b03bdd,Namespace:kube-system,Attempt:0,}" Dec 13 14:17:24.063894 env[1746]: time="2024-12-13T14:17:24.063772368Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:17:24.064163 env[1746]: time="2024-12-13T14:17:24.064100549Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:17:24.064485 env[1746]: time="2024-12-13T14:17:24.064432917Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:17:24.065014 env[1746]: time="2024-12-13T14:17:24.064940099Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/19b19e9416e4b3f416a1bf7dd27f4f3ef2b21a3d8adbddbca1607e2b38622462 pid=4610 runtime=io.containerd.runc.v2 Dec 13 14:17:24.094943 systemd[1]: Started cri-containerd-19b19e9416e4b3f416a1bf7dd27f4f3ef2b21a3d8adbddbca1607e2b38622462.scope. Dec 13 14:17:24.165646 env[1746]: time="2024-12-13T14:17:24.164678664Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-gg4cw,Uid:72a302db-f51a-45f2-8cd9-7595f3b03bdd,Namespace:kube-system,Attempt:0,} returns sandbox id \"19b19e9416e4b3f416a1bf7dd27f4f3ef2b21a3d8adbddbca1607e2b38622462\"" Dec 13 14:17:24.172975 env[1746]: time="2024-12-13T14:17:24.172915931Z" level=info msg="CreateContainer within sandbox \"19b19e9416e4b3f416a1bf7dd27f4f3ef2b21a3d8adbddbca1607e2b38622462\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 14:17:24.203927 env[1746]: time="2024-12-13T14:17:24.203865843Z" level=info msg="CreateContainer within sandbox \"19b19e9416e4b3f416a1bf7dd27f4f3ef2b21a3d8adbddbca1607e2b38622462\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"6f01fd532795e5bfc1c03dc2e8555dcddf7150080a19badbb7ef4c1efadc9608\"" Dec 13 14:17:24.205461 env[1746]: time="2024-12-13T14:17:24.205410143Z" level=info msg="StartContainer for \"6f01fd532795e5bfc1c03dc2e8555dcddf7150080a19badbb7ef4c1efadc9608\"" Dec 13 14:17:24.243412 systemd[1]: Started cri-containerd-6f01fd532795e5bfc1c03dc2e8555dcddf7150080a19badbb7ef4c1efadc9608.scope. Dec 13 14:17:24.293801 sshd[4589]: pam_unix(sshd:session): session closed for user core Dec 13 14:17:24.298953 systemd-logind[1737]: Session 27 logged out. Waiting for processes to exit. Dec 13 14:17:24.300222 systemd[1]: session-27.scope: Deactivated successfully. Dec 13 14:17:24.301528 systemd[1]: sshd@26-172.31.21.141:22-139.178.89.65:48296.service: Deactivated successfully. Dec 13 14:17:24.307365 systemd-logind[1737]: Removed session 27. Dec 13 14:17:24.331240 systemd[1]: Started sshd@27-172.31.21.141:22-139.178.89.65:48308.service. Dec 13 14:17:24.351691 systemd[1]: cri-containerd-6f01fd532795e5bfc1c03dc2e8555dcddf7150080a19badbb7ef4c1efadc9608.scope: Deactivated successfully. 
Dec 13 14:17:24.392925 env[1746]: time="2024-12-13T14:17:24.392852742Z" level=info msg="shim disconnected" id=6f01fd532795e5bfc1c03dc2e8555dcddf7150080a19badbb7ef4c1efadc9608 Dec 13 14:17:24.393571 env[1746]: time="2024-12-13T14:17:24.393526168Z" level=warning msg="cleaning up after shim disconnected" id=6f01fd532795e5bfc1c03dc2e8555dcddf7150080a19badbb7ef4c1efadc9608 namespace=k8s.io Dec 13 14:17:24.393785 env[1746]: time="2024-12-13T14:17:24.393753224Z" level=info msg="cleaning up dead shim" Dec 13 14:17:24.423719 env[1746]: time="2024-12-13T14:17:24.422572035Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:17:24Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4673 runtime=io.containerd.runc.v2\ntime=\"2024-12-13T14:17:24Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/6f01fd532795e5bfc1c03dc2e8555dcddf7150080a19badbb7ef4c1efadc9608/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Dec 13 14:17:24.424587 env[1746]: time="2024-12-13T14:17:24.424431122Z" level=error msg="copy shim log" error="read /proc/self/fd/48: file already closed" Dec 13 14:17:24.432568 env[1746]: time="2024-12-13T14:17:24.425050338Z" level=error msg="Failed to pipe stdout of container \"6f01fd532795e5bfc1c03dc2e8555dcddf7150080a19badbb7ef4c1efadc9608\"" error="reading from a closed fifo" Dec 13 14:17:24.434600 env[1746]: time="2024-12-13T14:17:24.430511414Z" level=error msg="Failed to pipe stderr of container \"6f01fd532795e5bfc1c03dc2e8555dcddf7150080a19badbb7ef4c1efadc9608\"" error="reading from a closed fifo" Dec 13 14:17:24.437776 env[1746]: time="2024-12-13T14:17:24.437647750Z" level=error msg="StartContainer for \"6f01fd532795e5bfc1c03dc2e8555dcddf7150080a19badbb7ef4c1efadc9608\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Dec 13 14:17:24.439341 kubelet[2706]: E1213 14:17:24.438393 2706 log.go:32] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="6f01fd532795e5bfc1c03dc2e8555dcddf7150080a19badbb7ef4c1efadc9608" Dec 13 14:17:24.439341 kubelet[2706]: E1213 14:17:24.438602 2706 kuberuntime_manager.go:1272] "Unhandled Error" err=< Dec 13 14:17:24.439341 kubelet[2706]: init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Dec 13 14:17:24.439341 kubelet[2706]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Dec 13 14:17:24.439341 kubelet[2706]: rm /hostbin/cilium-mount Dec 13 14:17:24.439862 kubelet[2706]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x7xsv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:&AppArmorProfile{Type:Unconfined,LocalhostProfile:nil,},},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-gg4cw_kube-system(72a302db-f51a-45f2-8cd9-7595f3b03bdd): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Dec 13 14:17:24.439862 kubelet[2706]: > logger="UnhandledError" Dec 13 14:17:24.440085 kubelet[2706]: E1213 14:17:24.440042 2706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-gg4cw" podUID="72a302db-f51a-45f2-8cd9-7595f3b03bdd" Dec 13 14:17:24.525434 env[1746]: time="2024-12-13T14:17:24.525352852Z" level=info msg="StopPodSandbox for \"19b19e9416e4b3f416a1bf7dd27f4f3ef2b21a3d8adbddbca1607e2b38622462\"" Dec 13 14:17:24.525639 env[1746]: time="2024-12-13T14:17:24.525468318Z" level=info msg="Container to stop \"6f01fd532795e5bfc1c03dc2e8555dcddf7150080a19badbb7ef4c1efadc9608\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 14:17:24.538875 systemd[1]: cri-containerd-19b19e9416e4b3f416a1bf7dd27f4f3ef2b21a3d8adbddbca1607e2b38622462.scope: Deactivated successfully. Dec 13 14:17:24.545470 sshd[4671]: Accepted publickey for core from 139.178.89.65 port 48308 ssh2: RSA SHA256:07jB2DPJgjjhgg3L8Uh349EZ0zHZFrUiRWNbK+Fdo0Q Dec 13 14:17:24.548183 sshd[4671]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:17:24.563440 systemd[1]: Started session-28.scope. Dec 13 14:17:24.564233 systemd-logind[1737]: New session 28 of user core. 
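The repeated runc failure above, "write /proc/self/attr/keycreate: invalid argument", is the OCI runtime trying to apply the SELinux options carried by the mount-cgroup init container spec (Type:spc_t) and having the kernel reject the label write; a common cause is an SELinux label in the container spec on a node where no SELinux policy is loaded, in which case the write returns EINVAL. A minimal Go probe, run on the node, that checks for selinuxfs and attempts the same kind of write (the paths are standard kernel interfaces and the label string is illustrative, not taken from the log):

    package main

    import (
        "errors"
        "fmt"
        "os"
        "syscall"
    )

    func main() {
        // selinuxfs is mounted at /sys/fs/selinux only when SELinux is enabled.
        if _, err := os.Stat("/sys/fs/selinux/enforce"); err != nil {
            fmt.Println("selinuxfs not available:", err)
        }
        // Attempt the same kind of write runc performs to set the key-creation label.
        // With no SELinux policy loaded, any value is rejected with EINVAL.
        err := os.WriteFile("/proc/self/attr/keycreate", []byte("system_u:system_r:spc_t:s0"), 0)
        switch {
        case err == nil:
            fmt.Println("keycreate label accepted")
        case errors.Is(err, syscall.EINVAL):
            fmt.Println("EINVAL: matches the runc error in the log")
        default:
            fmt.Println("keycreate write failed:", err)
        }
    }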
Dec 13 14:17:24.620539 env[1746]: time="2024-12-13T14:17:24.620454338Z" level=info msg="shim disconnected" id=19b19e9416e4b3f416a1bf7dd27f4f3ef2b21a3d8adbddbca1607e2b38622462 Dec 13 14:17:24.620815 env[1746]: time="2024-12-13T14:17:24.620545837Z" level=warning msg="cleaning up after shim disconnected" id=19b19e9416e4b3f416a1bf7dd27f4f3ef2b21a3d8adbddbca1607e2b38622462 namespace=k8s.io Dec 13 14:17:24.620815 env[1746]: time="2024-12-13T14:17:24.620571304Z" level=info msg="cleaning up dead shim" Dec 13 14:17:24.635423 env[1746]: time="2024-12-13T14:17:24.635312970Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:17:24Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4708 runtime=io.containerd.runc.v2\n" Dec 13 14:17:24.636027 env[1746]: time="2024-12-13T14:17:24.635975271Z" level=info msg="TearDown network for sandbox \"19b19e9416e4b3f416a1bf7dd27f4f3ef2b21a3d8adbddbca1607e2b38622462\" successfully" Dec 13 14:17:24.636155 env[1746]: time="2024-12-13T14:17:24.636028090Z" level=info msg="StopPodSandbox for \"19b19e9416e4b3f416a1bf7dd27f4f3ef2b21a3d8adbddbca1607e2b38622462\" returns successfully" Dec 13 14:17:24.746724 kubelet[2706]: I1213 14:17:24.746573 2706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/72a302db-f51a-45f2-8cd9-7595f3b03bdd-bpf-maps\") pod \"72a302db-f51a-45f2-8cd9-7595f3b03bdd\" (UID: \"72a302db-f51a-45f2-8cd9-7595f3b03bdd\") " Dec 13 14:17:24.747329 kubelet[2706]: I1213 14:17:24.746696 2706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/72a302db-f51a-45f2-8cd9-7595f3b03bdd-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "72a302db-f51a-45f2-8cd9-7595f3b03bdd" (UID: "72a302db-f51a-45f2-8cd9-7595f3b03bdd"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:17:24.747329 kubelet[2706]: I1213 14:17:24.746830 2706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/72a302db-f51a-45f2-8cd9-7595f3b03bdd-cilium-config-path\") pod \"72a302db-f51a-45f2-8cd9-7595f3b03bdd\" (UID: \"72a302db-f51a-45f2-8cd9-7595f3b03bdd\") " Dec 13 14:17:24.747329 kubelet[2706]: I1213 14:17:24.746877 2706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/72a302db-f51a-45f2-8cd9-7595f3b03bdd-xtables-lock\") pod \"72a302db-f51a-45f2-8cd9-7595f3b03bdd\" (UID: \"72a302db-f51a-45f2-8cd9-7595f3b03bdd\") " Dec 13 14:17:24.747329 kubelet[2706]: I1213 14:17:24.746942 2706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/72a302db-f51a-45f2-8cd9-7595f3b03bdd-cilium-ipsec-secrets\") pod \"72a302db-f51a-45f2-8cd9-7595f3b03bdd\" (UID: \"72a302db-f51a-45f2-8cd9-7595f3b03bdd\") " Dec 13 14:17:24.747329 kubelet[2706]: I1213 14:17:24.747004 2706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/72a302db-f51a-45f2-8cd9-7595f3b03bdd-clustermesh-secrets\") pod \"72a302db-f51a-45f2-8cd9-7595f3b03bdd\" (UID: \"72a302db-f51a-45f2-8cd9-7595f3b03bdd\") " Dec 13 14:17:24.747329 kubelet[2706]: I1213 14:17:24.747040 2706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/72a302db-f51a-45f2-8cd9-7595f3b03bdd-host-proc-sys-net\") pod \"72a302db-f51a-45f2-8cd9-7595f3b03bdd\" (UID: \"72a302db-f51a-45f2-8cd9-7595f3b03bdd\") " Dec 13 14:17:24.747769 kubelet[2706]: I1213 14:17:24.747098 2706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/72a302db-f51a-45f2-8cd9-7595f3b03bdd-cni-path\") pod \"72a302db-f51a-45f2-8cd9-7595f3b03bdd\" (UID: \"72a302db-f51a-45f2-8cd9-7595f3b03bdd\") " Dec 13 14:17:24.747769 kubelet[2706]: I1213 14:17:24.747131 2706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/72a302db-f51a-45f2-8cd9-7595f3b03bdd-cilium-run\") pod \"72a302db-f51a-45f2-8cd9-7595f3b03bdd\" (UID: \"72a302db-f51a-45f2-8cd9-7595f3b03bdd\") " Dec 13 14:17:24.747769 kubelet[2706]: I1213 14:17:24.747188 2706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/72a302db-f51a-45f2-8cd9-7595f3b03bdd-etc-cni-netd\") pod \"72a302db-f51a-45f2-8cd9-7595f3b03bdd\" (UID: \"72a302db-f51a-45f2-8cd9-7595f3b03bdd\") " Dec 13 14:17:24.747769 kubelet[2706]: I1213 14:17:24.747228 2706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7xsv\" (UniqueName: \"kubernetes.io/projected/72a302db-f51a-45f2-8cd9-7595f3b03bdd-kube-api-access-x7xsv\") pod \"72a302db-f51a-45f2-8cd9-7595f3b03bdd\" (UID: \"72a302db-f51a-45f2-8cd9-7595f3b03bdd\") " Dec 13 14:17:24.747769 kubelet[2706]: I1213 14:17:24.747286 2706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/72a302db-f51a-45f2-8cd9-7595f3b03bdd-host-proc-sys-kernel\") pod \"72a302db-f51a-45f2-8cd9-7595f3b03bdd\" (UID: 
\"72a302db-f51a-45f2-8cd9-7595f3b03bdd\") " Dec 13 14:17:24.747769 kubelet[2706]: I1213 14:17:24.747318 2706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/72a302db-f51a-45f2-8cd9-7595f3b03bdd-cilium-cgroup\") pod \"72a302db-f51a-45f2-8cd9-7595f3b03bdd\" (UID: \"72a302db-f51a-45f2-8cd9-7595f3b03bdd\") " Dec 13 14:17:24.748140 kubelet[2706]: I1213 14:17:24.747414 2706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/72a302db-f51a-45f2-8cd9-7595f3b03bdd-hostproc\") pod \"72a302db-f51a-45f2-8cd9-7595f3b03bdd\" (UID: \"72a302db-f51a-45f2-8cd9-7595f3b03bdd\") " Dec 13 14:17:24.748140 kubelet[2706]: I1213 14:17:24.747459 2706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/72a302db-f51a-45f2-8cd9-7595f3b03bdd-hubble-tls\") pod \"72a302db-f51a-45f2-8cd9-7595f3b03bdd\" (UID: \"72a302db-f51a-45f2-8cd9-7595f3b03bdd\") " Dec 13 14:17:24.748140 kubelet[2706]: I1213 14:17:24.747517 2706 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/72a302db-f51a-45f2-8cd9-7595f3b03bdd-lib-modules\") pod \"72a302db-f51a-45f2-8cd9-7595f3b03bdd\" (UID: \"72a302db-f51a-45f2-8cd9-7595f3b03bdd\") " Dec 13 14:17:24.748140 kubelet[2706]: I1213 14:17:24.747605 2706 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/72a302db-f51a-45f2-8cd9-7595f3b03bdd-bpf-maps\") on node \"ip-172-31-21-141\" DevicePath \"\"" Dec 13 14:17:24.748140 kubelet[2706]: I1213 14:17:24.747672 2706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/72a302db-f51a-45f2-8cd9-7595f3b03bdd-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "72a302db-f51a-45f2-8cd9-7595f3b03bdd" (UID: "72a302db-f51a-45f2-8cd9-7595f3b03bdd"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:17:24.748548 kubelet[2706]: I1213 14:17:24.748509 2706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/72a302db-f51a-45f2-8cd9-7595f3b03bdd-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "72a302db-f51a-45f2-8cd9-7595f3b03bdd" (UID: "72a302db-f51a-45f2-8cd9-7595f3b03bdd"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:17:24.749671 kubelet[2706]: I1213 14:17:24.749614 2706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/72a302db-f51a-45f2-8cd9-7595f3b03bdd-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "72a302db-f51a-45f2-8cd9-7595f3b03bdd" (UID: "72a302db-f51a-45f2-8cd9-7595f3b03bdd"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:17:24.749817 kubelet[2706]: I1213 14:17:24.749750 2706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/72a302db-f51a-45f2-8cd9-7595f3b03bdd-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "72a302db-f51a-45f2-8cd9-7595f3b03bdd" (UID: "72a302db-f51a-45f2-8cd9-7595f3b03bdd"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:17:24.749817 kubelet[2706]: I1213 14:17:24.749792 2706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/72a302db-f51a-45f2-8cd9-7595f3b03bdd-cni-path" (OuterVolumeSpecName: "cni-path") pod "72a302db-f51a-45f2-8cd9-7595f3b03bdd" (UID: "72a302db-f51a-45f2-8cd9-7595f3b03bdd"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:17:24.750007 kubelet[2706]: I1213 14:17:24.749835 2706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/72a302db-f51a-45f2-8cd9-7595f3b03bdd-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "72a302db-f51a-45f2-8cd9-7595f3b03bdd" (UID: "72a302db-f51a-45f2-8cd9-7595f3b03bdd"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:17:24.750007 kubelet[2706]: I1213 14:17:24.749877 2706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/72a302db-f51a-45f2-8cd9-7595f3b03bdd-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "72a302db-f51a-45f2-8cd9-7595f3b03bdd" (UID: "72a302db-f51a-45f2-8cd9-7595f3b03bdd"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:17:24.750007 kubelet[2706]: I1213 14:17:24.749914 2706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/72a302db-f51a-45f2-8cd9-7595f3b03bdd-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "72a302db-f51a-45f2-8cd9-7595f3b03bdd" (UID: "72a302db-f51a-45f2-8cd9-7595f3b03bdd"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:17:24.750007 kubelet[2706]: I1213 14:17:24.749960 2706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/72a302db-f51a-45f2-8cd9-7595f3b03bdd-hostproc" (OuterVolumeSpecName: "hostproc") pod "72a302db-f51a-45f2-8cd9-7595f3b03bdd" (UID: "72a302db-f51a-45f2-8cd9-7595f3b03bdd"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:17:24.762362 kubelet[2706]: I1213 14:17:24.762288 2706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72a302db-f51a-45f2-8cd9-7595f3b03bdd-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "72a302db-f51a-45f2-8cd9-7595f3b03bdd" (UID: "72a302db-f51a-45f2-8cd9-7595f3b03bdd"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 14:17:24.763286 kubelet[2706]: I1213 14:17:24.763232 2706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72a302db-f51a-45f2-8cd9-7595f3b03bdd-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "72a302db-f51a-45f2-8cd9-7595f3b03bdd" (UID: "72a302db-f51a-45f2-8cd9-7595f3b03bdd"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 14:17:24.765979 kubelet[2706]: I1213 14:17:24.765925 2706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/72a302db-f51a-45f2-8cd9-7595f3b03bdd-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "72a302db-f51a-45f2-8cd9-7595f3b03bdd" (UID: "72a302db-f51a-45f2-8cd9-7595f3b03bdd"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 14:17:24.770373 kubelet[2706]: I1213 14:17:24.770318 2706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/72a302db-f51a-45f2-8cd9-7595f3b03bdd-kube-api-access-x7xsv" (OuterVolumeSpecName: "kube-api-access-x7xsv") pod "72a302db-f51a-45f2-8cd9-7595f3b03bdd" (UID: "72a302db-f51a-45f2-8cd9-7595f3b03bdd"). InnerVolumeSpecName "kube-api-access-x7xsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 14:17:24.773503 kubelet[2706]: I1213 14:17:24.773419 2706 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/72a302db-f51a-45f2-8cd9-7595f3b03bdd-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "72a302db-f51a-45f2-8cd9-7595f3b03bdd" (UID: "72a302db-f51a-45f2-8cd9-7595f3b03bdd"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 14:17:24.847603 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-19b19e9416e4b3f416a1bf7dd27f4f3ef2b21a3d8adbddbca1607e2b38622462-rootfs.mount: Deactivated successfully. Dec 13 14:17:24.848131 kubelet[2706]: I1213 14:17:24.847763 2706 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/72a302db-f51a-45f2-8cd9-7595f3b03bdd-lib-modules\") on node \"ip-172-31-21-141\" DevicePath \"\"" Dec 13 14:17:24.848131 kubelet[2706]: I1213 14:17:24.847805 2706 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/72a302db-f51a-45f2-8cd9-7595f3b03bdd-hubble-tls\") on node \"ip-172-31-21-141\" DevicePath \"\"" Dec 13 14:17:24.848131 kubelet[2706]: I1213 14:17:24.847827 2706 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/72a302db-f51a-45f2-8cd9-7595f3b03bdd-xtables-lock\") on node \"ip-172-31-21-141\" DevicePath \"\"" Dec 13 14:17:24.848131 kubelet[2706]: I1213 14:17:24.847849 2706 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/72a302db-f51a-45f2-8cd9-7595f3b03bdd-cilium-config-path\") on node \"ip-172-31-21-141\" DevicePath \"\"" Dec 13 14:17:24.848131 kubelet[2706]: I1213 14:17:24.847879 2706 reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/72a302db-f51a-45f2-8cd9-7595f3b03bdd-cni-path\") on node \"ip-172-31-21-141\" DevicePath \"\"" Dec 13 14:17:24.848131 kubelet[2706]: I1213 14:17:24.847900 2706 reconciler_common.go:288] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/72a302db-f51a-45f2-8cd9-7595f3b03bdd-cilium-ipsec-secrets\") on node \"ip-172-31-21-141\" DevicePath \"\"" Dec 13 14:17:24.848131 kubelet[2706]: I1213 14:17:24.847920 2706 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/72a302db-f51a-45f2-8cd9-7595f3b03bdd-clustermesh-secrets\") on node \"ip-172-31-21-141\" DevicePath \"\"" Dec 13 14:17:24.848131 kubelet[2706]: I1213 14:17:24.847940 2706 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/72a302db-f51a-45f2-8cd9-7595f3b03bdd-host-proc-sys-net\") on node \"ip-172-31-21-141\" DevicePath \"\"" Dec 13 14:17:24.848674 kubelet[2706]: I1213 14:17:24.847960 2706 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/72a302db-f51a-45f2-8cd9-7595f3b03bdd-etc-cni-netd\") on node \"ip-172-31-21-141\" DevicePath \"\"" Dec 13 14:17:24.848674 kubelet[2706]: I1213 14:17:24.847980 2706 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/72a302db-f51a-45f2-8cd9-7595f3b03bdd-cilium-run\") on node \"ip-172-31-21-141\" DevicePath \"\"" Dec 13 14:17:24.848674 kubelet[2706]: I1213 14:17:24.847999 2706 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-x7xsv\" (UniqueName: \"kubernetes.io/projected/72a302db-f51a-45f2-8cd9-7595f3b03bdd-kube-api-access-x7xsv\") on node \"ip-172-31-21-141\" DevicePath \"\"" Dec 13 14:17:24.848674 kubelet[2706]: I1213 14:17:24.848020 2706 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/72a302db-f51a-45f2-8cd9-7595f3b03bdd-host-proc-sys-kernel\") on node \"ip-172-31-21-141\" DevicePath \"\"" Dec 13 14:17:24.848674 kubelet[2706]: I1213 14:17:24.848041 2706 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/72a302db-f51a-45f2-8cd9-7595f3b03bdd-cilium-cgroup\") on node \"ip-172-31-21-141\" DevicePath \"\"" Dec 13 14:17:24.848674 kubelet[2706]: I1213 14:17:24.848064 2706 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/72a302db-f51a-45f2-8cd9-7595f3b03bdd-hostproc\") on node \"ip-172-31-21-141\" DevicePath \"\"" Dec 13 14:17:24.849303 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-19b19e9416e4b3f416a1bf7dd27f4f3ef2b21a3d8adbddbca1607e2b38622462-shm.mount: Deactivated successfully. Dec 13 14:17:24.849652 systemd[1]: var-lib-kubelet-pods-72a302db\x2df51a\x2d45f2\x2d8cd9\x2d7595f3b03bdd-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dx7xsv.mount: Deactivated successfully. Dec 13 14:17:24.849962 systemd[1]: var-lib-kubelet-pods-72a302db\x2df51a\x2d45f2\x2d8cd9\x2d7595f3b03bdd-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Dec 13 14:17:24.850303 systemd[1]: var-lib-kubelet-pods-72a302db\x2df51a\x2d45f2\x2d8cd9\x2d7595f3b03bdd-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Dec 13 14:17:24.850699 systemd[1]: var-lib-kubelet-pods-72a302db\x2df51a\x2d45f2\x2d8cd9\x2d7595f3b03bdd-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Dec 13 14:17:24.964469 systemd[1]: Removed slice kubepods-burstable-pod72a302db_f51a_45f2_8cd9_7595f3b03bdd.slice. 
Dec 13 14:17:25.529612 kubelet[2706]: I1213 14:17:25.529575 2706 scope.go:117] "RemoveContainer" containerID="6f01fd532795e5bfc1c03dc2e8555dcddf7150080a19badbb7ef4c1efadc9608" Dec 13 14:17:25.538216 env[1746]: time="2024-12-13T14:17:25.537688656Z" level=info msg="RemoveContainer for \"6f01fd532795e5bfc1c03dc2e8555dcddf7150080a19badbb7ef4c1efadc9608\"" Dec 13 14:17:25.543625 env[1746]: time="2024-12-13T14:17:25.543540742Z" level=info msg="RemoveContainer for \"6f01fd532795e5bfc1c03dc2e8555dcddf7150080a19badbb7ef4c1efadc9608\" returns successfully" Dec 13 14:17:25.603920 kubelet[2706]: E1213 14:17:25.603870 2706 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="72a302db-f51a-45f2-8cd9-7595f3b03bdd" containerName="mount-cgroup" Dec 13 14:17:25.604213 kubelet[2706]: I1213 14:17:25.604187 2706 memory_manager.go:354] "RemoveStaleState removing state" podUID="72a302db-f51a-45f2-8cd9-7595f3b03bdd" containerName="mount-cgroup" Dec 13 14:17:25.615514 systemd[1]: Created slice kubepods-burstable-pod16ef3b71_af43_4644_b36f_35a762a207db.slice. Dec 13 14:17:25.617604 kubelet[2706]: W1213 14:17:25.617548 2706 reflector.go:561] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ip-172-31-21-141" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-21-141' and this object Dec 13 14:17:25.617764 kubelet[2706]: E1213 14:17:25.617613 2706 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"cilium-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"cilium-config\" is forbidden: User \"system:node:ip-172-31-21-141\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-21-141' and this object" logger="UnhandledError" Dec 13 14:17:25.617764 kubelet[2706]: W1213 14:17:25.617707 2706 reflector.go:561] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ip-172-31-21-141" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-21-141' and this object Dec 13 14:17:25.617764 kubelet[2706]: E1213 14:17:25.617736 2706 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"cilium-clustermesh\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cilium-clustermesh\" is forbidden: User \"system:node:ip-172-31-21-141\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-21-141' and this object" logger="UnhandledError" Dec 13 14:17:25.617982 kubelet[2706]: W1213 14:17:25.617815 2706 reflector.go:561] object-"kube-system"/"cilium-ipsec-keys": failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:ip-172-31-21-141" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-21-141' and this object Dec 13 14:17:25.617982 kubelet[2706]: E1213 14:17:25.617840 2706 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"cilium-ipsec-keys\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cilium-ipsec-keys\" is forbidden: User \"system:node:ip-172-31-21-141\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between 
node 'ip-172-31-21-141' and this object" logger="UnhandledError" Dec 13 14:17:25.617982 kubelet[2706]: W1213 14:17:25.617907 2706 reflector.go:561] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ip-172-31-21-141" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-21-141' and this object Dec 13 14:17:25.617982 kubelet[2706]: E1213 14:17:25.617932 2706 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"hubble-server-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"hubble-server-certs\" is forbidden: User \"system:node:ip-172-31-21-141\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-21-141' and this object" logger="UnhandledError" Dec 13 14:17:25.753105 kubelet[2706]: I1213 14:17:25.753056 2706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/16ef3b71-af43-4644-b36f-35a762a207db-cilium-ipsec-secrets\") pod \"cilium-hskf5\" (UID: \"16ef3b71-af43-4644-b36f-35a762a207db\") " pod="kube-system/cilium-hskf5" Dec 13 14:17:25.753814 kubelet[2706]: I1213 14:17:25.753784 2706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/16ef3b71-af43-4644-b36f-35a762a207db-xtables-lock\") pod \"cilium-hskf5\" (UID: \"16ef3b71-af43-4644-b36f-35a762a207db\") " pod="kube-system/cilium-hskf5" Dec 13 14:17:25.754021 kubelet[2706]: I1213 14:17:25.753994 2706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/16ef3b71-af43-4644-b36f-35a762a207db-clustermesh-secrets\") pod \"cilium-hskf5\" (UID: \"16ef3b71-af43-4644-b36f-35a762a207db\") " pod="kube-system/cilium-hskf5" Dec 13 14:17:25.754203 kubelet[2706]: I1213 14:17:25.754151 2706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/16ef3b71-af43-4644-b36f-35a762a207db-bpf-maps\") pod \"cilium-hskf5\" (UID: \"16ef3b71-af43-4644-b36f-35a762a207db\") " pod="kube-system/cilium-hskf5" Dec 13 14:17:25.754359 kubelet[2706]: I1213 14:17:25.754333 2706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/16ef3b71-af43-4644-b36f-35a762a207db-cilium-run\") pod \"cilium-hskf5\" (UID: \"16ef3b71-af43-4644-b36f-35a762a207db\") " pod="kube-system/cilium-hskf5" Dec 13 14:17:25.754547 kubelet[2706]: I1213 14:17:25.754522 2706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/16ef3b71-af43-4644-b36f-35a762a207db-cni-path\") pod \"cilium-hskf5\" (UID: \"16ef3b71-af43-4644-b36f-35a762a207db\") " pod="kube-system/cilium-hskf5" Dec 13 14:17:25.754724 kubelet[2706]: I1213 14:17:25.754696 2706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/16ef3b71-af43-4644-b36f-35a762a207db-cilium-config-path\") pod \"cilium-hskf5\" (UID: \"16ef3b71-af43-4644-b36f-35a762a207db\") " pod="kube-system/cilium-hskf5" Dec 13 
14:17:25.754873 kubelet[2706]: I1213 14:17:25.754847 2706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/16ef3b71-af43-4644-b36f-35a762a207db-host-proc-sys-net\") pod \"cilium-hskf5\" (UID: \"16ef3b71-af43-4644-b36f-35a762a207db\") " pod="kube-system/cilium-hskf5" Dec 13 14:17:25.755027 kubelet[2706]: I1213 14:17:25.755001 2706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/16ef3b71-af43-4644-b36f-35a762a207db-hostproc\") pod \"cilium-hskf5\" (UID: \"16ef3b71-af43-4644-b36f-35a762a207db\") " pod="kube-system/cilium-hskf5" Dec 13 14:17:25.755181 kubelet[2706]: I1213 14:17:25.755154 2706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/16ef3b71-af43-4644-b36f-35a762a207db-etc-cni-netd\") pod \"cilium-hskf5\" (UID: \"16ef3b71-af43-4644-b36f-35a762a207db\") " pod="kube-system/cilium-hskf5" Dec 13 14:17:25.755357 kubelet[2706]: I1213 14:17:25.755319 2706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/16ef3b71-af43-4644-b36f-35a762a207db-hubble-tls\") pod \"cilium-hskf5\" (UID: \"16ef3b71-af43-4644-b36f-35a762a207db\") " pod="kube-system/cilium-hskf5" Dec 13 14:17:25.755547 kubelet[2706]: I1213 14:17:25.755520 2706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2bktq\" (UniqueName: \"kubernetes.io/projected/16ef3b71-af43-4644-b36f-35a762a207db-kube-api-access-2bktq\") pod \"cilium-hskf5\" (UID: \"16ef3b71-af43-4644-b36f-35a762a207db\") " pod="kube-system/cilium-hskf5" Dec 13 14:17:25.755692 kubelet[2706]: I1213 14:17:25.755668 2706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/16ef3b71-af43-4644-b36f-35a762a207db-cilium-cgroup\") pod \"cilium-hskf5\" (UID: \"16ef3b71-af43-4644-b36f-35a762a207db\") " pod="kube-system/cilium-hskf5" Dec 13 14:17:25.755857 kubelet[2706]: I1213 14:17:25.755819 2706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/16ef3b71-af43-4644-b36f-35a762a207db-lib-modules\") pod \"cilium-hskf5\" (UID: \"16ef3b71-af43-4644-b36f-35a762a207db\") " pod="kube-system/cilium-hskf5" Dec 13 14:17:25.756000 kubelet[2706]: I1213 14:17:25.755974 2706 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/16ef3b71-af43-4644-b36f-35a762a207db-host-proc-sys-kernel\") pod \"cilium-hskf5\" (UID: \"16ef3b71-af43-4644-b36f-35a762a207db\") " pod="kube-system/cilium-hskf5" Dec 13 14:17:26.858330 kubelet[2706]: E1213 14:17:26.858268 2706 configmap.go:193] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition Dec 13 14:17:26.858936 kubelet[2706]: E1213 14:17:26.858417 2706 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/16ef3b71-af43-4644-b36f-35a762a207db-cilium-config-path podName:16ef3b71-af43-4644-b36f-35a762a207db nodeName:}" failed. 
No retries permitted until 2024-12-13 14:17:27.358362221 +0000 UTC m=+160.641682954 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/16ef3b71-af43-4644-b36f-35a762a207db-cilium-config-path") pod "cilium-hskf5" (UID: "16ef3b71-af43-4644-b36f-35a762a207db") : failed to sync configmap cache: timed out waiting for the condition Dec 13 14:17:26.956422 kubelet[2706]: I1213 14:17:26.956349 2706 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="72a302db-f51a-45f2-8cd9-7595f3b03bdd" path="/var/lib/kubelet/pods/72a302db-f51a-45f2-8cd9-7595f3b03bdd/volumes" Dec 13 14:17:27.172408 kubelet[2706]: E1213 14:17:27.172320 2706 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Dec 13 14:17:27.426667 env[1746]: time="2024-12-13T14:17:27.426518501Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-hskf5,Uid:16ef3b71-af43-4644-b36f-35a762a207db,Namespace:kube-system,Attempt:0,}" Dec 13 14:17:27.458234 env[1746]: time="2024-12-13T14:17:27.458097270Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:17:27.458234 env[1746]: time="2024-12-13T14:17:27.458177236Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:17:27.458628 env[1746]: time="2024-12-13T14:17:27.458205020Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:17:27.459084 env[1746]: time="2024-12-13T14:17:27.459004306Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e95aadcd3cdc6039fa655932c76b5f755a17beb1cbda2226b8443b9782ebcfb4 pid=4744 runtime=io.containerd.runc.v2 Dec 13 14:17:27.501161 kubelet[2706]: W1213 14:17:27.501077 2706 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod72a302db_f51a_45f2_8cd9_7595f3b03bdd.slice/cri-containerd-6f01fd532795e5bfc1c03dc2e8555dcddf7150080a19badbb7ef4c1efadc9608.scope WatchSource:0}: container "6f01fd532795e5bfc1c03dc2e8555dcddf7150080a19badbb7ef4c1efadc9608" in namespace "k8s.io": not found Dec 13 14:17:27.502073 systemd[1]: run-containerd-runc-k8s.io-e95aadcd3cdc6039fa655932c76b5f755a17beb1cbda2226b8443b9782ebcfb4-runc.TRL8Cf.mount: Deactivated successfully. Dec 13 14:17:27.508280 systemd[1]: Started cri-containerd-e95aadcd3cdc6039fa655932c76b5f755a17beb1cbda2226b8443b9782ebcfb4.scope. 
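The MountVolume.SetUp failure for cilium-config-path above is transient: the node was only granted access to the configmap once cilium-hskf5 was bound here, so the cache had not synced yet, and kubelet schedules a retry with durationBeforeRetry 500ms. A rough sketch of that kind of doubling backoff schedule; the 500ms initial delay comes from the log, while the factor and cap are assumptions for illustration only:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        delay := 500 * time.Millisecond // first retry delay seen in the log
        maxDelay := 2 * time.Minute     // assumed upper bound, not taken from the log
        for attempt := 1; attempt <= 8; attempt++ {
            fmt.Printf("attempt %d: wait %v\n", attempt, delay)
            delay *= 2
            if delay > maxDelay {
                delay = maxDelay
            }
        }
    }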
Dec 13 14:17:27.555277 env[1746]: time="2024-12-13T14:17:27.555209140Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-hskf5,Uid:16ef3b71-af43-4644-b36f-35a762a207db,Namespace:kube-system,Attempt:0,} returns sandbox id \"e95aadcd3cdc6039fa655932c76b5f755a17beb1cbda2226b8443b9782ebcfb4\"" Dec 13 14:17:27.560928 env[1746]: time="2024-12-13T14:17:27.560709096Z" level=info msg="CreateContainer within sandbox \"e95aadcd3cdc6039fa655932c76b5f755a17beb1cbda2226b8443b9782ebcfb4\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 14:17:27.588719 env[1746]: time="2024-12-13T14:17:27.588629225Z" level=info msg="CreateContainer within sandbox \"e95aadcd3cdc6039fa655932c76b5f755a17beb1cbda2226b8443b9782ebcfb4\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"9eb099e540a99400001aae6756df99313caa18c4849636f925bffb7e8c60cf25\"" Dec 13 14:17:27.589587 env[1746]: time="2024-12-13T14:17:27.589536286Z" level=info msg="StartContainer for \"9eb099e540a99400001aae6756df99313caa18c4849636f925bffb7e8c60cf25\"" Dec 13 14:17:27.619768 systemd[1]: Started cri-containerd-9eb099e540a99400001aae6756df99313caa18c4849636f925bffb7e8c60cf25.scope. Dec 13 14:17:27.680745 env[1746]: time="2024-12-13T14:17:27.679571061Z" level=info msg="StartContainer for \"9eb099e540a99400001aae6756df99313caa18c4849636f925bffb7e8c60cf25\" returns successfully" Dec 13 14:17:27.696323 systemd[1]: cri-containerd-9eb099e540a99400001aae6756df99313caa18c4849636f925bffb7e8c60cf25.scope: Deactivated successfully. Dec 13 14:17:27.755424 env[1746]: time="2024-12-13T14:17:27.755336806Z" level=info msg="shim disconnected" id=9eb099e540a99400001aae6756df99313caa18c4849636f925bffb7e8c60cf25 Dec 13 14:17:27.755761 env[1746]: time="2024-12-13T14:17:27.755726854Z" level=warning msg="cleaning up after shim disconnected" id=9eb099e540a99400001aae6756df99313caa18c4849636f925bffb7e8c60cf25 namespace=k8s.io Dec 13 14:17:27.755889 env[1746]: time="2024-12-13T14:17:27.755861271Z" level=info msg="cleaning up dead shim" Dec 13 14:17:27.776031 env[1746]: time="2024-12-13T14:17:27.775930139Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:17:27Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4827 runtime=io.containerd.runc.v2\n" Dec 13 14:17:28.442669 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount768881103.mount: Deactivated successfully. Dec 13 14:17:28.545473 env[1746]: time="2024-12-13T14:17:28.544641835Z" level=info msg="CreateContainer within sandbox \"e95aadcd3cdc6039fa655932c76b5f755a17beb1cbda2226b8443b9782ebcfb4\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Dec 13 14:17:28.569924 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3769006640.mount: Deactivated successfully. Dec 13 14:17:28.586940 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2429217549.mount: Deactivated successfully. 
Dec 13 14:17:28.590878 env[1746]: time="2024-12-13T14:17:28.590770777Z" level=info msg="CreateContainer within sandbox \"e95aadcd3cdc6039fa655932c76b5f755a17beb1cbda2226b8443b9782ebcfb4\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"95bdaa6c99419115c3dabd5f32c654be7ffaaaf492a27e397d0f388eeb9c5a16\"" Dec 13 14:17:28.592444 env[1746]: time="2024-12-13T14:17:28.592103146Z" level=info msg="StartContainer for \"95bdaa6c99419115c3dabd5f32c654be7ffaaaf492a27e397d0f388eeb9c5a16\"" Dec 13 14:17:28.625832 systemd[1]: Started cri-containerd-95bdaa6c99419115c3dabd5f32c654be7ffaaaf492a27e397d0f388eeb9c5a16.scope. Dec 13 14:17:28.703720 env[1746]: time="2024-12-13T14:17:28.703582288Z" level=info msg="StartContainer for \"95bdaa6c99419115c3dabd5f32c654be7ffaaaf492a27e397d0f388eeb9c5a16\" returns successfully" Dec 13 14:17:28.722765 systemd[1]: cri-containerd-95bdaa6c99419115c3dabd5f32c654be7ffaaaf492a27e397d0f388eeb9c5a16.scope: Deactivated successfully. Dec 13 14:17:28.774166 env[1746]: time="2024-12-13T14:17:28.774096993Z" level=info msg="shim disconnected" id=95bdaa6c99419115c3dabd5f32c654be7ffaaaf492a27e397d0f388eeb9c5a16 Dec 13 14:17:28.774166 env[1746]: time="2024-12-13T14:17:28.774168438Z" level=warning msg="cleaning up after shim disconnected" id=95bdaa6c99419115c3dabd5f32c654be7ffaaaf492a27e397d0f388eeb9c5a16 namespace=k8s.io Dec 13 14:17:28.774644 env[1746]: time="2024-12-13T14:17:28.774191901Z" level=info msg="cleaning up dead shim" Dec 13 14:17:28.789656 env[1746]: time="2024-12-13T14:17:28.789575648Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:17:28Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4889 runtime=io.containerd.runc.v2\n" Dec 13 14:17:29.551670 env[1746]: time="2024-12-13T14:17:29.551559727Z" level=info msg="CreateContainer within sandbox \"e95aadcd3cdc6039fa655932c76b5f755a17beb1cbda2226b8443b9782ebcfb4\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Dec 13 14:17:29.593118 env[1746]: time="2024-12-13T14:17:29.592927203Z" level=info msg="CreateContainer within sandbox \"e95aadcd3cdc6039fa655932c76b5f755a17beb1cbda2226b8443b9782ebcfb4\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"248091940f2ecba2f56eb58661d44f31c2dc2a08e9b20f1888b472c418d0040c\"" Dec 13 14:17:29.596334 env[1746]: time="2024-12-13T14:17:29.593746521Z" level=info msg="StartContainer for \"248091940f2ecba2f56eb58661d44f31c2dc2a08e9b20f1888b472c418d0040c\"" Dec 13 14:17:29.643026 systemd[1]: Started cri-containerd-248091940f2ecba2f56eb58661d44f31c2dc2a08e9b20f1888b472c418d0040c.scope. Dec 13 14:17:29.706113 env[1746]: time="2024-12-13T14:17:29.706050367Z" level=info msg="StartContainer for \"248091940f2ecba2f56eb58661d44f31c2dc2a08e9b20f1888b472c418d0040c\" returns successfully" Dec 13 14:17:29.710058 systemd[1]: cri-containerd-248091940f2ecba2f56eb58661d44f31c2dc2a08e9b20f1888b472c418d0040c.scope: Deactivated successfully. 
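The apply-sysctl-overwrites and mount-bpf-fs init containers above each exit within a second of starting, so the "scope: Deactivated" and "shim disconnected" messages that follow them are normal teardown rather than failures; mount-bpf-fs in particular is only expected to leave a BPF filesystem mounted. A quick node-side check that bpffs is present at the usual mount point (the path and magic value are standard kernel constants, not taken from the log):

    package main

    import (
        "fmt"
        "syscall"
    )

    const bpfFSMagic = 0xcafe4a11 // BPF_FS_MAGIC from the kernel headers

    func main() {
        var st syscall.Statfs_t
        if err := syscall.Statfs("/sys/fs/bpf", &st); err != nil {
            fmt.Println("statfs /sys/fs/bpf:", err)
            return
        }
        fmt.Println("bpffs mounted:", st.Type == bpfFSMagic)
    }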
Dec 13 14:17:29.775084 env[1746]: time="2024-12-13T14:17:29.775015719Z" level=info msg="shim disconnected" id=248091940f2ecba2f56eb58661d44f31c2dc2a08e9b20f1888b472c418d0040c Dec 13 14:17:29.775500 env[1746]: time="2024-12-13T14:17:29.775464250Z" level=warning msg="cleaning up after shim disconnected" id=248091940f2ecba2f56eb58661d44f31c2dc2a08e9b20f1888b472c418d0040c namespace=k8s.io Dec 13 14:17:29.775662 env[1746]: time="2024-12-13T14:17:29.775633039Z" level=info msg="cleaning up dead shim" Dec 13 14:17:29.789321 env[1746]: time="2024-12-13T14:17:29.789262450Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:17:29Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4950 runtime=io.containerd.runc.v2\n" Dec 13 14:17:30.442817 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-248091940f2ecba2f56eb58661d44f31c2dc2a08e9b20f1888b472c418d0040c-rootfs.mount: Deactivated successfully. Dec 13 14:17:30.558744 env[1746]: time="2024-12-13T14:17:30.558669713Z" level=info msg="CreateContainer within sandbox \"e95aadcd3cdc6039fa655932c76b5f755a17beb1cbda2226b8443b9782ebcfb4\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Dec 13 14:17:30.589881 env[1746]: time="2024-12-13T14:17:30.589794624Z" level=info msg="CreateContainer within sandbox \"e95aadcd3cdc6039fa655932c76b5f755a17beb1cbda2226b8443b9782ebcfb4\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"fa928b806c08cb2386d3b875eebcd96f82b699b8d40b59e5d44e52af3d7f998d\"" Dec 13 14:17:30.593660 env[1746]: time="2024-12-13T14:17:30.593608243Z" level=info msg="StartContainer for \"fa928b806c08cb2386d3b875eebcd96f82b699b8d40b59e5d44e52af3d7f998d\"" Dec 13 14:17:30.639638 systemd[1]: Started cri-containerd-fa928b806c08cb2386d3b875eebcd96f82b699b8d40b59e5d44e52af3d7f998d.scope. Dec 13 14:17:30.697125 systemd[1]: cri-containerd-fa928b806c08cb2386d3b875eebcd96f82b699b8d40b59e5d44e52af3d7f998d.scope: Deactivated successfully. 
Dec 13 14:17:30.702795 env[1746]: time="2024-12-13T14:17:30.702530434Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod16ef3b71_af43_4644_b36f_35a762a207db.slice/cri-containerd-fa928b806c08cb2386d3b875eebcd96f82b699b8d40b59e5d44e52af3d7f998d.scope/memory.events\": no such file or directory" Dec 13 14:17:30.706020 env[1746]: time="2024-12-13T14:17:30.705963510Z" level=info msg="StartContainer for \"fa928b806c08cb2386d3b875eebcd96f82b699b8d40b59e5d44e52af3d7f998d\" returns successfully" Dec 13 14:17:30.751776 env[1746]: time="2024-12-13T14:17:30.751713300Z" level=info msg="shim disconnected" id=fa928b806c08cb2386d3b875eebcd96f82b699b8d40b59e5d44e52af3d7f998d Dec 13 14:17:30.752249 env[1746]: time="2024-12-13T14:17:30.752206874Z" level=warning msg="cleaning up after shim disconnected" id=fa928b806c08cb2386d3b875eebcd96f82b699b8d40b59e5d44e52af3d7f998d namespace=k8s.io Dec 13 14:17:30.752431 env[1746]: time="2024-12-13T14:17:30.752403819Z" level=info msg="cleaning up dead shim" Dec 13 14:17:30.765994 env[1746]: time="2024-12-13T14:17:30.765936815Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:17:30Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5007 runtime=io.containerd.runc.v2\n" Dec 13 14:17:30.902907 kubelet[2706]: I1213 14:17:30.902731 2706 setters.go:600] "Node became not ready" node="ip-172-31-21-141" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-12-13T14:17:30Z","lastTransitionTime":"2024-12-13T14:17:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Dec 13 14:17:31.442884 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fa928b806c08cb2386d3b875eebcd96f82b699b8d40b59e5d44e52af3d7f998d-rootfs.mount: Deactivated successfully. Dec 13 14:17:31.561982 env[1746]: time="2024-12-13T14:17:31.561883480Z" level=info msg="CreateContainer within sandbox \"e95aadcd3cdc6039fa655932c76b5f755a17beb1cbda2226b8443b9782ebcfb4\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Dec 13 14:17:31.598466 env[1746]: time="2024-12-13T14:17:31.598357916Z" level=info msg="CreateContainer within sandbox \"e95aadcd3cdc6039fa655932c76b5f755a17beb1cbda2226b8443b9782ebcfb4\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"fa2fb49055a25fa196733a0a888dbead7f7c3329b1b11561f04cf6784c5e22b0\"" Dec 13 14:17:31.601315 env[1746]: time="2024-12-13T14:17:31.601263083Z" level=info msg="StartContainer for \"fa2fb49055a25fa196733a0a888dbead7f7c3329b1b11561f04cf6784c5e22b0\"" Dec 13 14:17:31.645050 systemd[1]: Started cri-containerd-fa2fb49055a25fa196733a0a888dbead7f7c3329b1b11561f04cf6784c5e22b0.scope. Dec 13 14:17:31.713341 env[1746]: time="2024-12-13T14:17:31.713188269Z" level=info msg="StartContainer for \"fa2fb49055a25fa196733a0a888dbead7f7c3329b1b11561f04cf6784c5e22b0\" returns successfully" Dec 13 14:17:32.488430 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm-aes-ce))) Dec 13 14:17:35.416413 systemd[1]: run-containerd-runc-k8s.io-fa2fb49055a25fa196733a0a888dbead7f7c3329b1b11561f04cf6784c5e22b0-runc.nyrVn2.mount: Deactivated successfully. Dec 13 14:17:36.795206 (udev-worker)[5576]: Network interface NamePolicy= disabled on kernel command line. 
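The kernel message at 14:17:32, "alg: No test for seqiv(rfc4106(gcm(aes)))", shows the AES-GCM ESP transform being instantiated for the first time, consistent with this pod mounting cilium-ipsec-secrets; the "no test" wording only means the crypto self-test has no vectors for that composite transform and is generally harmless. A node-side look at the registered transform through procfs (a standard interface, nothing here is log-specific):

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    func main() {
        f, err := os.Open("/proc/crypto")
        if err != nil {
            fmt.Println(err)
            return
        }
        defer f.Close()
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            line := sc.Text()
            // Each registered transform appears as a "name : ..." record.
            if strings.HasPrefix(line, "name") && strings.Contains(line, "rfc4106(gcm(aes))") {
                fmt.Println(line)
            }
        }
    }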
Dec 13 14:17:36.802642 systemd-networkd[1462]: lxc_health: Link UP Dec 13 14:17:36.812831 (udev-worker)[5577]: Network interface NamePolicy= disabled on kernel command line. Dec 13 14:17:36.839500 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Dec 13 14:17:36.841238 systemd-networkd[1462]: lxc_health: Gained carrier Dec 13 14:17:37.468665 kubelet[2706]: I1213 14:17:37.468574 2706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-hskf5" podStartSLOduration=12.468528135 podStartE2EDuration="12.468528135s" podCreationTimestamp="2024-12-13 14:17:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:17:32.620295851 +0000 UTC m=+165.903616632" watchObservedRunningTime="2024-12-13 14:17:37.468528135 +0000 UTC m=+170.751848904" Dec 13 14:17:38.192625 systemd-networkd[1462]: lxc_health: Gained IPv6LL Dec 13 14:17:40.108521 systemd[1]: run-containerd-runc-k8s.io-fa2fb49055a25fa196733a0a888dbead7f7c3329b1b11561f04cf6784c5e22b0-runc.YhqT4l.mount: Deactivated successfully. Dec 13 14:17:42.406779 systemd[1]: run-containerd-runc-k8s.io-fa2fb49055a25fa196733a0a888dbead7f7c3329b1b11561f04cf6784c5e22b0-runc.MxD91u.mount: Deactivated successfully. Dec 13 14:17:42.558445 sshd[4671]: pam_unix(sshd:session): session closed for user core Dec 13 14:17:42.563987 systemd[1]: sshd@27-172.31.21.141:22-139.178.89.65:48308.service: Deactivated successfully. Dec 13 14:17:42.565269 systemd[1]: session-28.scope: Deactivated successfully. Dec 13 14:17:42.567931 systemd-logind[1737]: Session 28 logged out. Waiting for processes to exit. Dec 13 14:17:42.570022 systemd-logind[1737]: Removed session 28. Dec 13 14:17:46.941003 env[1746]: time="2024-12-13T14:17:46.940903285Z" level=info msg="StopPodSandbox for \"5a460b8b0675c693746815cb5640e83a258271e80ebc03e358c8df18ce9c947e\"" Dec 13 14:17:46.941644 env[1746]: time="2024-12-13T14:17:46.941130688Z" level=info msg="TearDown network for sandbox \"5a460b8b0675c693746815cb5640e83a258271e80ebc03e358c8df18ce9c947e\" successfully" Dec 13 14:17:46.941644 env[1746]: time="2024-12-13T14:17:46.941228968Z" level=info msg="StopPodSandbox for \"5a460b8b0675c693746815cb5640e83a258271e80ebc03e358c8df18ce9c947e\" returns successfully" Dec 13 14:17:46.944305 env[1746]: time="2024-12-13T14:17:46.942069944Z" level=info msg="RemovePodSandbox for \"5a460b8b0675c693746815cb5640e83a258271e80ebc03e358c8df18ce9c947e\"" Dec 13 14:17:46.944305 env[1746]: time="2024-12-13T14:17:46.942125742Z" level=info msg="Forcibly stopping sandbox \"5a460b8b0675c693746815cb5640e83a258271e80ebc03e358c8df18ce9c947e\"" Dec 13 14:17:46.944305 env[1746]: time="2024-12-13T14:17:46.942252023Z" level=info msg="TearDown network for sandbox \"5a460b8b0675c693746815cb5640e83a258271e80ebc03e358c8df18ce9c947e\" successfully" Dec 13 14:17:46.949472 env[1746]: time="2024-12-13T14:17:46.949416396Z" level=info msg="RemovePodSandbox \"5a460b8b0675c693746815cb5640e83a258271e80ebc03e358c8df18ce9c947e\" returns successfully" Dec 13 14:17:46.950473 env[1746]: time="2024-12-13T14:17:46.950429625Z" level=info msg="StopPodSandbox for \"f536c8cf7cfedc49bc0a35148a0cf6e35008b67dcbdfcf96b1476a611e7c1e98\"" Dec 13 14:17:46.950795 env[1746]: time="2024-12-13T14:17:46.950730978Z" level=info msg="TearDown network for sandbox \"f536c8cf7cfedc49bc0a35148a0cf6e35008b67dcbdfcf96b1476a611e7c1e98\" successfully" Dec 13 14:17:46.950916 env[1746]: time="2024-12-13T14:17:46.950883951Z" level=info 
msg="StopPodSandbox for \"f536c8cf7cfedc49bc0a35148a0cf6e35008b67dcbdfcf96b1476a611e7c1e98\" returns successfully" Dec 13 14:17:46.952653 env[1746]: time="2024-12-13T14:17:46.952605159Z" level=info msg="RemovePodSandbox for \"f536c8cf7cfedc49bc0a35148a0cf6e35008b67dcbdfcf96b1476a611e7c1e98\"" Dec 13 14:17:46.952906 env[1746]: time="2024-12-13T14:17:46.952850078Z" level=info msg="Forcibly stopping sandbox \"f536c8cf7cfedc49bc0a35148a0cf6e35008b67dcbdfcf96b1476a611e7c1e98\"" Dec 13 14:17:46.953148 env[1746]: time="2024-12-13T14:17:46.953097828Z" level=info msg="TearDown network for sandbox \"f536c8cf7cfedc49bc0a35148a0cf6e35008b67dcbdfcf96b1476a611e7c1e98\" successfully" Dec 13 14:17:46.961498 env[1746]: time="2024-12-13T14:17:46.961434930Z" level=info msg="RemovePodSandbox \"f536c8cf7cfedc49bc0a35148a0cf6e35008b67dcbdfcf96b1476a611e7c1e98\" returns successfully" Dec 13 14:17:46.962355 env[1746]: time="2024-12-13T14:17:46.962313638Z" level=info msg="StopPodSandbox for \"19b19e9416e4b3f416a1bf7dd27f4f3ef2b21a3d8adbddbca1607e2b38622462\"" Dec 13 14:17:46.962699 env[1746]: time="2024-12-13T14:17:46.962635037Z" level=info msg="TearDown network for sandbox \"19b19e9416e4b3f416a1bf7dd27f4f3ef2b21a3d8adbddbca1607e2b38622462\" successfully" Dec 13 14:17:46.962824 env[1746]: time="2024-12-13T14:17:46.962789510Z" level=info msg="StopPodSandbox for \"19b19e9416e4b3f416a1bf7dd27f4f3ef2b21a3d8adbddbca1607e2b38622462\" returns successfully" Dec 13 14:17:46.964849 env[1746]: time="2024-12-13T14:17:46.964778851Z" level=info msg="RemovePodSandbox for \"19b19e9416e4b3f416a1bf7dd27f4f3ef2b21a3d8adbddbca1607e2b38622462\"" Dec 13 14:17:46.965017 env[1746]: time="2024-12-13T14:17:46.964848625Z" level=info msg="Forcibly stopping sandbox \"19b19e9416e4b3f416a1bf7dd27f4f3ef2b21a3d8adbddbca1607e2b38622462\"" Dec 13 14:17:46.965017 env[1746]: time="2024-12-13T14:17:46.964982451Z" level=info msg="TearDown network for sandbox \"19b19e9416e4b3f416a1bf7dd27f4f3ef2b21a3d8adbddbca1607e2b38622462\" successfully" Dec 13 14:17:46.975507 env[1746]: time="2024-12-13T14:17:46.975450375Z" level=info msg="RemovePodSandbox \"19b19e9416e4b3f416a1bf7dd27f4f3ef2b21a3d8adbddbca1607e2b38622462\" returns successfully" Dec 13 14:17:57.086806 systemd[1]: cri-containerd-b03f79f0e8b4fbaaa707b7f1b2cdbf07ca78d6ff0a8779c3b65145db6f1eda43.scope: Deactivated successfully. Dec 13 14:17:57.087344 systemd[1]: cri-containerd-b03f79f0e8b4fbaaa707b7f1b2cdbf07ca78d6ff0a8779c3b65145db6f1eda43.scope: Consumed 5.509s CPU time. Dec 13 14:17:57.124953 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b03f79f0e8b4fbaaa707b7f1b2cdbf07ca78d6ff0a8779c3b65145db6f1eda43-rootfs.mount: Deactivated successfully. 
Dec 13 14:17:57.138747 env[1746]: time="2024-12-13T14:17:57.138682397Z" level=info msg="shim disconnected" id=b03f79f0e8b4fbaaa707b7f1b2cdbf07ca78d6ff0a8779c3b65145db6f1eda43 Dec 13 14:17:57.139515 env[1746]: time="2024-12-13T14:17:57.139474817Z" level=warning msg="cleaning up after shim disconnected" id=b03f79f0e8b4fbaaa707b7f1b2cdbf07ca78d6ff0a8779c3b65145db6f1eda43 namespace=k8s.io Dec 13 14:17:57.139649 env[1746]: time="2024-12-13T14:17:57.139621008Z" level=info msg="cleaning up dead shim" Dec 13 14:17:57.158506 env[1746]: time="2024-12-13T14:17:57.158447102Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:17:57Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5693 runtime=io.containerd.runc.v2\n" Dec 13 14:17:57.646862 kubelet[2706]: I1213 14:17:57.646812 2706 scope.go:117] "RemoveContainer" containerID="b03f79f0e8b4fbaaa707b7f1b2cdbf07ca78d6ff0a8779c3b65145db6f1eda43" Dec 13 14:17:57.650135 env[1746]: time="2024-12-13T14:17:57.650081690Z" level=info msg="CreateContainer within sandbox \"45f6a5a721725f6ccd7af6fbde36972c5f60ad21b95680f3dce75af8c7a484a3\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Dec 13 14:17:57.678440 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount94081941.mount: Deactivated successfully. Dec 13 14:17:57.685262 env[1746]: time="2024-12-13T14:17:57.685197542Z" level=info msg="CreateContainer within sandbox \"45f6a5a721725f6ccd7af6fbde36972c5f60ad21b95680f3dce75af8c7a484a3\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"896541790928fa1ecd23f9208817e99fdc0dc67b2f6e241ee38076d541f7ef7a\"" Dec 13 14:17:57.686198 env[1746]: time="2024-12-13T14:17:57.686121180Z" level=info msg="StartContainer for \"896541790928fa1ecd23f9208817e99fdc0dc67b2f6e241ee38076d541f7ef7a\"" Dec 13 14:17:57.721695 systemd[1]: Started cri-containerd-896541790928fa1ecd23f9208817e99fdc0dc67b2f6e241ee38076d541f7ef7a.scope. Dec 13 14:17:57.807309 env[1746]: time="2024-12-13T14:17:57.807226156Z" level=info msg="StartContainer for \"896541790928fa1ecd23f9208817e99fdc0dc67b2f6e241ee38076d541f7ef7a\" returns successfully" Dec 13 14:18:00.638582 kubelet[2706]: E1213 14:18:00.638512 2706 controller.go:195] "Failed to update lease" err="Put \"https://172.31.21.141:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-21-141?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 13 14:18:01.521368 systemd[1]: cri-containerd-07784aa187cf0ac17765e4bc54e1618a79b4c9b538efd50600e375aef22d2a5b.scope: Deactivated successfully. Dec 13 14:18:01.521989 systemd[1]: cri-containerd-07784aa187cf0ac17765e4bc54e1618a79b4c9b538efd50600e375aef22d2a5b.scope: Consumed 4.921s CPU time. Dec 13 14:18:01.561266 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-07784aa187cf0ac17765e4bc54e1618a79b4c9b538efd50600e375aef22d2a5b-rootfs.mount: Deactivated successfully. 
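systemd records how much CPU the two stopped control-plane container scopes had used (5.509s and 4.921s above, the containers kubelet then recreates with Attempt:1); the same accounting is readable from the unified cgroup hierarchy on the node. A sketch that reads usage_usec from a scope's cpu.stat; the path placeholders are hypothetical and must be replaced with the real slice and cri-containerd-<id>.scope directories found under /sys/fs/cgroup:

    package main

    import (
        "fmt"
        "os"
        "strconv"
        "strings"
    )

    func main() {
        // Hypothetical path for illustration only.
        path := "/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/<pod>.slice/cri-containerd-<id>.scope/cpu.stat"
        data, err := os.ReadFile(path)
        if err != nil {
            fmt.Println(err)
            return
        }
        for _, line := range strings.Split(string(data), "\n") {
            fields := strings.Fields(line)
            if len(fields) == 2 && fields[0] == "usage_usec" {
                usec, _ := strconv.ParseInt(fields[1], 10, 64)
                fmt.Printf("CPU consumed: %.3fs\n", float64(usec)/1e6)
            }
        }
    }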
Dec 13 14:18:01.587183 env[1746]: time="2024-12-13T14:18:01.587119838Z" level=info msg="shim disconnected" id=07784aa187cf0ac17765e4bc54e1618a79b4c9b538efd50600e375aef22d2a5b Dec 13 14:18:01.587978 env[1746]: time="2024-12-13T14:18:01.587937311Z" level=warning msg="cleaning up after shim disconnected" id=07784aa187cf0ac17765e4bc54e1618a79b4c9b538efd50600e375aef22d2a5b namespace=k8s.io Dec 13 14:18:01.588115 env[1746]: time="2024-12-13T14:18:01.588086469Z" level=info msg="cleaning up dead shim" Dec 13 14:18:01.602446 env[1746]: time="2024-12-13T14:18:01.602358029Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:18:01Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5755 runtime=io.containerd.runc.v2\n" Dec 13 14:18:01.661210 kubelet[2706]: I1213 14:18:01.661144 2706 scope.go:117] "RemoveContainer" containerID="07784aa187cf0ac17765e4bc54e1618a79b4c9b538efd50600e375aef22d2a5b" Dec 13 14:18:01.664601 env[1746]: time="2024-12-13T14:18:01.664546427Z" level=info msg="CreateContainer within sandbox \"79974de89c470bd729fbbe645163441c325a79beed72c1a236fa110c179d2f87\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Dec 13 14:18:01.683918 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2185015904.mount: Deactivated successfully. Dec 13 14:18:01.703326 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3105857517.mount: Deactivated successfully. Dec 13 14:18:01.713098 env[1746]: time="2024-12-13T14:18:01.713014603Z" level=info msg="CreateContainer within sandbox \"79974de89c470bd729fbbe645163441c325a79beed72c1a236fa110c179d2f87\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"764a3bd7bc3cec4b24652aa128d9c98f27dc08b7a136317a608846622ba67033\"" Dec 13 14:18:01.713982 env[1746]: time="2024-12-13T14:18:01.713919156Z" level=info msg="StartContainer for \"764a3bd7bc3cec4b24652aa128d9c98f27dc08b7a136317a608846622ba67033\"" Dec 13 14:18:01.746729 systemd[1]: Started cri-containerd-764a3bd7bc3cec4b24652aa128d9c98f27dc08b7a136317a608846622ba67033.scope. Dec 13 14:18:01.828091 env[1746]: time="2024-12-13T14:18:01.827915643Z" level=info msg="StartContainer for \"764a3bd7bc3cec4b24652aa128d9c98f27dc08b7a136317a608846622ba67033\" returns successfully" Dec 13 14:18:10.639695 kubelet[2706]: E1213 14:18:10.639628 2706 controller.go:195] "Failed to update lease" err="Put \"https://172.31.21.141:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-21-141?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
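Both lease-update failures above hit the 10s client timeout against the node-local API server at 172.31.21.141:6443, suggesting it was answering slowly around the time its controller-manager and scheduler containers were being restarted. A probe with the same time budget; the address comes from the log, /readyz is the standard apiserver health path, and skipping certificate verification is purely for this sketch:

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 10 * time.Second, // same budget as the lease update in the log
            Transport: &http.Transport{
                // For this sketch only; a real check should trust the cluster CA.
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        resp, err := client.Get("https://172.31.21.141:6443/readyz")
        if err != nil {
            fmt.Println("readyz:", err)
            return
        }
        defer resp.Body.Close()
        fmt.Println("readyz:", resp.Status)
    }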