Feb 9 19:14:41.946999 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
Feb 9 19:14:41.947036 kernel: Linux version 5.15.148-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Fri Feb 9 17:24:35 -00 2024
Feb 9 19:14:41.947058 kernel: efi: EFI v2.70 by EDK II
Feb 9 19:14:41.947074 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7ac1aa98 MEMRESERVE=0x71a8cf98
Feb 9 19:14:41.947088 kernel: ACPI: Early table checksum verification disabled
Feb 9 19:14:41.947101 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
Feb 9 19:14:41.947117 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
Feb 9 19:14:41.947131 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
Feb 9 19:14:41.947145 kernel: ACPI: DSDT 0x0000000078640000 00154F (v02 AMAZON AMZNDSDT 00000001 INTL 20160527)
Feb 9 19:14:41.947158 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Feb 9 19:14:41.947176 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
Feb 9 19:14:41.947190 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
Feb 9 19:14:41.947203 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
Feb 9 19:14:41.947217 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Feb 9 19:14:41.947233 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
Feb 9 19:14:41.947252 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
Feb 9 19:14:41.947267 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
Feb 9 19:14:41.947281 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
Feb 9 19:14:41.947295 kernel: printk: bootconsole [uart0] enabled
Feb 9 19:14:41.947310 kernel: NUMA: Failed to initialise from firmware
Feb 9 19:14:41.947325 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
Feb 9 19:14:41.947339 kernel: NUMA: NODE_DATA [mem 0x4b5841900-0x4b5846fff]
Feb 9 19:14:41.947353 kernel: Zone ranges:
Feb 9 19:14:41.947368 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Feb 9 19:14:41.947382 kernel: DMA32 empty
Feb 9 19:14:41.947396 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff]
Feb 9 19:14:41.947415 kernel: Movable zone start for each node
Feb 9 19:14:41.947429 kernel: Early memory node ranges
Feb 9 19:14:41.947444 kernel: node 0: [mem 0x0000000040000000-0x00000000786effff]
Feb 9 19:14:41.947458 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff]
Feb 9 19:14:41.947472 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff]
Feb 9 19:14:41.947486 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff]
Feb 9 19:14:41.947501 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff]
Feb 9 19:14:41.947515 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff]
Feb 9 19:14:41.947529 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
Feb 9 19:14:41.947544 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
Feb 9 19:14:41.947654 kernel: psci: probing for conduit method from ACPI.
Feb 9 19:14:41.947672 kernel: psci: PSCIv1.0 detected in firmware.
Feb 9 19:14:41.947694 kernel: psci: Using standard PSCI v0.2 function IDs
Feb 9 19:14:41.947709 kernel: psci: Trusted OS migration not required
Feb 9 19:14:41.947731 kernel: psci: SMC Calling Convention v1.1
Feb 9 19:14:41.947747 kernel: ACPI: SRAT not present
Feb 9 19:14:41.947763 kernel: percpu: Embedded 29 pages/cpu s79960 r8192 d30632 u118784
Feb 9 19:14:41.947783 kernel: pcpu-alloc: s79960 r8192 d30632 u118784 alloc=29*4096
Feb 9 19:14:41.947799 kernel: pcpu-alloc: [0] 0 [0] 1
Feb 9 19:14:41.947815 kernel: Detected PIPT I-cache on CPU0
Feb 9 19:14:41.947830 kernel: CPU features: detected: GIC system register CPU interface
Feb 9 19:14:41.947845 kernel: CPU features: detected: Spectre-v2
Feb 9 19:14:41.947861 kernel: CPU features: detected: Spectre-v3a
Feb 9 19:14:41.947876 kernel: CPU features: detected: Spectre-BHB
Feb 9 19:14:41.947891 kernel: CPU features: kernel page table isolation forced ON by KASLR
Feb 9 19:14:41.947907 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Feb 9 19:14:41.947922 kernel: CPU features: detected: ARM erratum 1742098
Feb 9 19:14:41.947937 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
Feb 9 19:14:41.947957 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872
Feb 9 19:14:41.947972 kernel: Policy zone: Normal
Feb 9 19:14:41.947990 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=680ffc8c0dfb23738bd19ec96ea37b5bbadfb5cebf23767d1d52c89a6d5c00b4
Feb 9 19:14:41.948007 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 9 19:14:41.948022 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 9 19:14:41.948051 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 9 19:14:41.948074 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 9 19:14:41.948091 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB)
Feb 9 19:14:41.948108 kernel: Memory: 3826316K/4030464K available (9792K kernel code, 2092K rwdata, 7556K rodata, 34688K init, 778K bss, 204148K reserved, 0K cma-reserved)
Feb 9 19:14:41.948124 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Feb 9 19:14:41.948144 kernel: trace event string verifier disabled
Feb 9 19:14:41.948160 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 9 19:14:41.948176 kernel: rcu: RCU event tracing is enabled.
Feb 9 19:14:41.948192 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Feb 9 19:14:41.948208 kernel: Trampoline variant of Tasks RCU enabled.
Feb 9 19:14:41.948223 kernel: Tracing variant of Tasks RCU enabled.
Feb 9 19:14:41.948239 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
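[Editor's note: the kernel command line above is a space-separated list of key=value tokens, with bare flags such as `earlycon` carrying no value. A minimal Python sketch of splitting such a line into a dict; the sample string is a shortened excerpt of the parameters logged above, and real kernel parsing additionally handles quoting that this sketch ignores.]

```python
def parse_cmdline(cmdline: str) -> dict:
    """Split a kernel command line into {key: value}; bare flags map to None."""
    params = {}
    for token in cmdline.split():
        key, sep, value = token.partition("=")  # split on the FIRST '=' only
        params[key] = value if sep else None
    return params

# Shortened sample taken from the "Kernel command line:" entry above.
sample = ("BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr "
          "root=LABEL=ROOT earlycon flatcar.oem.id=ec2 net.ifnames=0")
args = parse_cmdline(sample)
assert args["root"] == "LABEL=ROOT"   # value itself may contain '='
assert args["earlycon"] is None       # bare flag, no value
```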
Feb 9 19:14:41.948254 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Feb 9 19:14:41.948270 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Feb 9 19:14:41.948285 kernel: GICv3: 96 SPIs implemented
Feb 9 19:14:41.948300 kernel: GICv3: 0 Extended SPIs implemented
Feb 9 19:14:41.948315 kernel: GICv3: Distributor has no Range Selector support
Feb 9 19:14:41.948335 kernel: Root IRQ handler: gic_handle_irq
Feb 9 19:14:41.948350 kernel: GICv3: 16 PPIs implemented
Feb 9 19:14:41.948365 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
Feb 9 19:14:41.948380 kernel: ACPI: SRAT not present
Feb 9 19:14:41.948395 kernel: ITS [mem 0x10080000-0x1009ffff]
Feb 9 19:14:41.948410 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000a0000 (indirect, esz 8, psz 64K, shr 1)
Feb 9 19:14:41.948426 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000b0000 (flat, esz 8, psz 64K, shr 1)
Feb 9 19:14:41.948441 kernel: GICv3: using LPI property table @0x00000004000c0000
Feb 9 19:14:41.948457 kernel: ITS: Using hypervisor restricted LPI range [128]
Feb 9 19:14:41.948472 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000d0000
Feb 9 19:14:41.948487 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
Feb 9 19:14:41.948507 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
Feb 9 19:14:41.948522 kernel: sched_clock: 56 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
Feb 9 19:14:41.948538 kernel: Console: colour dummy device 80x25
Feb 9 19:14:41.948579 kernel: printk: console [tty1] enabled
Feb 9 19:14:41.948597 kernel: ACPI: Core revision 20210730
Feb 9 19:14:41.948613 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
Feb 9 19:14:41.948628 kernel: pid_max: default: 32768 minimum: 301
Feb 9 19:14:41.948644 kernel: LSM: Security Framework initializing
Feb 9 19:14:41.948660 kernel: SELinux: Initializing.
Feb 9 19:14:41.948675 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 9 19:14:41.948697 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 9 19:14:41.948713 kernel: rcu: Hierarchical SRCU implementation.
Feb 9 19:14:41.948728 kernel: Platform MSI: ITS@0x10080000 domain created
Feb 9 19:14:41.948744 kernel: PCI/MSI: ITS@0x10080000 domain created
Feb 9 19:14:41.948759 kernel: Remapping and enabling EFI services.
Feb 9 19:14:41.948775 kernel: smp: Bringing up secondary CPUs ...
Feb 9 19:14:41.948791 kernel: Detected PIPT I-cache on CPU1
Feb 9 19:14:41.948806 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
Feb 9 19:14:41.948822 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000e0000
Feb 9 19:14:41.948842 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
Feb 9 19:14:41.948858 kernel: smp: Brought up 1 node, 2 CPUs
Feb 9 19:14:41.948873 kernel: SMP: Total of 2 processors activated.
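[Editor's note: the "166.66 BogoMIPS (lpj=83333)" figure above is not measured; with a fixed-frequency arch timer the kernel derives loops_per_jiffy from the 83.33 MHz clock. A quick arithmetic check in Python, assuming CONFIG_HZ=1000 (the assumption that makes these numbers agree) and the kernel's usual lpj-to-BogoMIPS conversion:]

```python
# Numbers from the log: arch timer at 83.33 MHz, lpj=83333, 166.66 BogoMIPS.
HZ = 1000            # assumed tick rate; implied by lpj = timer_hz / HZ
timer_hz = 83.33e6   # rounded in the log; actual clock is ~83.3333 MHz

lpj = timer_hz / HZ               # loops_per_jiffy derived from the timer
bogomips = lpj / (500_000 / HZ)   # conventional BogoMIPS formula

print(f"lpj ≈ {lpj:.0f}")            # ≈ 83330, close to the logged lpj=83333
print(f"BogoMIPS ≈ {bogomips:.2f}")  # ≈ 166.66, matching the log
```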
Feb 9 19:14:41.948889 kernel: CPU features: detected: 32-bit EL0 Support
Feb 9 19:14:41.948904 kernel: CPU features: detected: 32-bit EL1 Support
Feb 9 19:14:41.948920 kernel: CPU features: detected: CRC32 instructions
Feb 9 19:14:41.948936 kernel: CPU: All CPU(s) started at EL1
Feb 9 19:14:41.948951 kernel: alternatives: patching kernel code
Feb 9 19:14:41.948967 kernel: devtmpfs: initialized
Feb 9 19:14:41.948988 kernel: KASLR disabled due to lack of seed
Feb 9 19:14:41.949004 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 9 19:14:41.949020 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Feb 9 19:14:41.949046 kernel: pinctrl core: initialized pinctrl subsystem
Feb 9 19:14:41.949066 kernel: SMBIOS 3.0.0 present.
Feb 9 19:14:41.949082 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
Feb 9 19:14:41.949098 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 9 19:14:41.949114 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Feb 9 19:14:41.949131 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Feb 9 19:14:41.949147 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Feb 9 19:14:41.949163 kernel: audit: initializing netlink subsys (disabled)
Feb 9 19:14:41.949180 kernel: audit: type=2000 audit(0.249:1): state=initialized audit_enabled=0 res=1
Feb 9 19:14:41.949201 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 9 19:14:41.949217 kernel: cpuidle: using governor menu
Feb 9 19:14:41.949233 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Feb 9 19:14:41.949249 kernel: ASID allocator initialised with 32768 entries
Feb 9 19:14:41.949265 kernel: ACPI: bus type PCI registered
Feb 9 19:14:41.949286 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 9 19:14:41.949302 kernel: Serial: AMBA PL011 UART driver
Feb 9 19:14:41.949318 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Feb 9 19:14:41.949335 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages
Feb 9 19:14:41.949351 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Feb 9 19:14:41.949367 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages
Feb 9 19:14:41.949383 kernel: cryptd: max_cpu_qlen set to 1000
Feb 9 19:14:41.949399 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Feb 9 19:14:41.949416 kernel: ACPI: Added _OSI(Module Device)
Feb 9 19:14:41.949436 kernel: ACPI: Added _OSI(Processor Device)
Feb 9 19:14:41.949452 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 9 19:14:41.949468 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 9 19:14:41.949484 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Feb 9 19:14:41.949500 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Feb 9 19:14:41.949516 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Feb 9 19:14:41.949532 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 9 19:14:41.949567 kernel: ACPI: Interpreter enabled
Feb 9 19:14:41.949588 kernel: ACPI: Using GIC for interrupt routing
Feb 9 19:14:41.949609 kernel: ACPI: MCFG table detected, 1 entries
Feb 9 19:14:41.949626 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f])
Feb 9 19:14:41.949924 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Feb 9 19:14:41.950128 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Feb 9 19:14:41.950325 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Feb 9 19:14:41.950519 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00
Feb 9 19:14:41.950740 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f]
Feb 9 19:14:41.950769 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
Feb 9 19:14:41.950787 kernel: acpiphp: Slot [1] registered
Feb 9 19:14:41.950804 kernel: acpiphp: Slot [2] registered
Feb 9 19:14:41.950820 kernel: acpiphp: Slot [3] registered
Feb 9 19:14:41.950836 kernel: acpiphp: Slot [4] registered
Feb 9 19:14:41.950852 kernel: acpiphp: Slot [5] registered
Feb 9 19:14:41.950868 kernel: acpiphp: Slot [6] registered
Feb 9 19:14:41.950885 kernel: acpiphp: Slot [7] registered
Feb 9 19:14:41.954617 kernel: acpiphp: Slot [8] registered
Feb 9 19:14:41.954646 kernel: acpiphp: Slot [9] registered
Feb 9 19:14:41.954663 kernel: acpiphp: Slot [10] registered
Feb 9 19:14:41.954679 kernel: acpiphp: Slot [11] registered
Feb 9 19:14:41.954696 kernel: acpiphp: Slot [12] registered
Feb 9 19:14:41.954712 kernel: acpiphp: Slot [13] registered
Feb 9 19:14:41.954728 kernel: acpiphp: Slot [14] registered
Feb 9 19:14:41.954744 kernel: acpiphp: Slot [15] registered
Feb 9 19:14:41.954760 kernel: acpiphp: Slot [16] registered
Feb 9 19:14:41.954776 kernel: acpiphp: Slot [17] registered
Feb 9 19:14:41.954792 kernel: acpiphp: Slot [18] registered
Feb 9 19:14:41.954813 kernel: acpiphp: Slot [19] registered
Feb 9 19:14:41.954829 kernel: acpiphp: Slot [20] registered
Feb 9 19:14:41.954844 kernel: acpiphp: Slot [21] registered
Feb 9 19:14:41.954860 kernel: acpiphp: Slot [22] registered
Feb 9 19:14:41.954876 kernel: acpiphp: Slot [23] registered
Feb 9 19:14:41.954892 kernel: acpiphp: Slot [24] registered
Feb 9 19:14:41.954908 kernel: acpiphp: Slot [25] registered
Feb 9 19:14:41.954925 kernel: acpiphp: Slot [26] registered
Feb 9 19:14:41.954941 kernel: acpiphp: Slot [27] registered
Feb 9 19:14:41.954961 kernel: acpiphp: Slot [28] registered
Feb 9 19:14:41.954977 kernel: acpiphp: Slot [29] registered
Feb 9 19:14:41.954993 kernel: acpiphp: Slot [30] registered
Feb 9 19:14:41.955009 kernel: acpiphp: Slot [31] registered
Feb 9 19:14:41.955025 kernel: PCI host bridge to bus 0000:00
Feb 9 19:14:41.955261 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
Feb 9 19:14:41.955446 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Feb 9 19:14:41.955674 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
Feb 9 19:14:41.955864 kernel: pci_bus 0000:00: root bus resource [bus 00-0f]
Feb 9 19:14:41.956106 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000
Feb 9 19:14:41.956332 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003
Feb 9 19:14:41.956542 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff]
Feb 9 19:14:41.956785 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Feb 9 19:14:41.956991 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff]
Feb 9 19:14:41.957200 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
Feb 9 19:14:41.957416 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Feb 9 19:14:41.961791 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff]
Feb 9 19:14:41.962029 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref]
Feb 9 19:14:41.962246 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff]
Feb 9 19:14:41.962456 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
Feb 9 19:14:41.962780 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref]
Feb 9 19:14:41.962998 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff]
Feb 9 19:14:41.963206 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff]
Feb 9 19:14:41.963413 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff]
Feb 9 19:14:41.974972 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff]
Feb 9 19:14:41.975183 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
Feb 9 19:14:41.975364 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Feb 9 19:14:41.975544 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
Feb 9 19:14:41.975635 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Feb 9 19:14:41.975654 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Feb 9 19:14:41.975671 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Feb 9 19:14:41.975688 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Feb 9 19:14:41.975705 kernel: iommu: Default domain type: Translated
Feb 9 19:14:41.975721 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Feb 9 19:14:41.975737 kernel: vgaarb: loaded
Feb 9 19:14:41.975753 kernel: pps_core: LinuxPPS API ver. 1 registered
Feb 9 19:14:41.975770 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Feb 9 19:14:41.975790 kernel: PTP clock support registered
Feb 9 19:14:41.975807 kernel: Registered efivars operations
Feb 9 19:14:41.975823 kernel: clocksource: Switched to clocksource arch_sys_counter
Feb 9 19:14:41.975839 kernel: VFS: Disk quotas dquot_6.6.0
Feb 9 19:14:41.975855 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 9 19:14:41.975871 kernel: pnp: PnP ACPI init
Feb 9 19:14:41.976116 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
Feb 9 19:14:41.976142 kernel: pnp: PnP ACPI: found 1 devices
Feb 9 19:14:41.976160 kernel: NET: Registered PF_INET protocol family
Feb 9 19:14:41.976181 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 9 19:14:41.976199 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Feb 9 19:14:41.976215 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 9 19:14:41.976232 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 9 19:14:41.976248 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
Feb 9 19:14:41.976264 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Feb 9 19:14:41.976281 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 9 19:14:41.976297 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 9 19:14:41.976313 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 9 19:14:41.976334 kernel: PCI: CLS 0 bytes, default 64
Feb 9 19:14:41.976351 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available
Feb 9 19:14:41.976367 kernel: kvm [1]: HYP mode not available
Feb 9 19:14:41.976383 kernel: Initialise system trusted keyrings
Feb 9 19:14:41.976400 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Feb 9 19:14:41.976417 kernel: Key type asymmetric registered
Feb 9 19:14:41.976433 kernel: Asymmetric key parser 'x509' registered
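[Editor's note: each BAR window above is printed as a [start-end] range, so its size is end − start + 1. A small sketch decoding a few of the ranges shown above; `bar_size` is a hypothetical helper for illustration, not part of any tool in this log.]

```python
import re

def bar_size(bar_range: str) -> int:
    """Return the size in bytes of a '[mem 0xSTART-0xEND ...]' range."""
    start, end = (int(x, 16) for x in re.findall(r"0x[0-9a-f]+", bar_range)[:2])
    return end - start + 1

# Ranges copied from the BAR assignment lines above.
print(bar_size("[mem 0x80000000-0x800fffff pref]"))  # 1048576 bytes = 1 MiB
print(bar_size("[mem 0x80114000-0x80117fff]"))       # 16384 bytes = 16 KiB
print(bar_size("[mem 0x80118000-0x80118fff]"))       # 4096 bytes = one 4 KiB page
```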
Feb 9 19:14:41.976450 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Feb 9 19:14:41.976466 kernel: io scheduler mq-deadline registered
Feb 9 19:14:41.976487 kernel: io scheduler kyber registered
Feb 9 19:14:41.976503 kernel: io scheduler bfq registered
Feb 9 19:14:41.976727 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
Feb 9 19:14:41.976754 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Feb 9 19:14:41.976771 kernel: ACPI: button: Power Button [PWRB]
Feb 9 19:14:41.976787 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 9 19:14:41.976805 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Feb 9 19:14:41.977007 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
Feb 9 19:14:41.977036 kernel: printk: console [ttyS0] disabled
Feb 9 19:14:41.977053 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
Feb 9 19:14:41.977070 kernel: printk: console [ttyS0] enabled
Feb 9 19:14:41.977086 kernel: printk: bootconsole [uart0] disabled
Feb 9 19:14:41.977102 kernel: thunder_xcv, ver 1.0
Feb 9 19:14:41.977118 kernel: thunder_bgx, ver 1.0
Feb 9 19:14:41.977134 kernel: nicpf, ver 1.0
Feb 9 19:14:41.977150 kernel: nicvf, ver 1.0
Feb 9 19:14:41.977362 kernel: rtc-efi rtc-efi.0: registered as rtc0
Feb 9 19:14:41.977611 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-02-09T19:14:41 UTC (1707506081)
Feb 9 19:14:41.977638 kernel: hid: raw HID events driver (C) Jiri Kosina
Feb 9 19:14:41.977655 kernel: NET: Registered PF_INET6 protocol family
Feb 9 19:14:41.977672 kernel: Segment Routing with IPv6
Feb 9 19:14:41.977688 kernel: In-situ OAM (IOAM) with IPv6
Feb 9 19:14:41.977705 kernel: NET: Registered PF_PACKET protocol family
Feb 9 19:14:41.977721 kernel: Key type dns_resolver registered
Feb 9 19:14:41.977737 kernel: registered taskstats version 1
Feb 9 19:14:41.977759 kernel: Loading compiled-in X.509 certificates
Feb 9 19:14:41.977777 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.148-flatcar: 947a80114e81e2815f6db72a0d388260762488f9'
Feb 9 19:14:41.977793 kernel: Key type .fscrypt registered
Feb 9 19:14:41.977809 kernel: Key type fscrypt-provisioning registered
Feb 9 19:14:41.977824 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 9 19:14:41.977841 kernel: ima: Allocated hash algorithm: sha1
Feb 9 19:14:41.977857 kernel: ima: No architecture policies found
Feb 9 19:14:41.977873 kernel: Freeing unused kernel memory: 34688K
Feb 9 19:14:41.977890 kernel: Run /init as init process
Feb 9 19:14:41.977910 kernel: with arguments:
Feb 9 19:14:41.977926 kernel: /init
Feb 9 19:14:41.977942 kernel: with environment:
Feb 9 19:14:41.977958 kernel: HOME=/
Feb 9 19:14:41.977974 kernel: TERM=linux
Feb 9 19:14:41.977990 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 9 19:14:41.978011 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Feb 9 19:14:41.978032 systemd[1]: Detected virtualization amazon.
Feb 9 19:14:41.978054 systemd[1]: Detected architecture arm64.
Feb 9 19:14:41.978072 systemd[1]: Running in initrd.
Feb 9 19:14:41.978090 systemd[1]: No hostname configured, using default hostname.
Feb 9 19:14:41.978107 systemd[1]: Hostname set to <localhost>.
Feb 9 19:14:41.978125 systemd[1]: Initializing machine ID from VM UUID.
Feb 9 19:14:41.978143 systemd[1]: Queued start job for default target initrd.target.
Feb 9 19:14:41.978160 systemd[1]: Started systemd-ask-password-console.path.
Feb 9 19:14:41.978177 systemd[1]: Reached target cryptsetup.target.
Feb 9 19:14:41.978199 systemd[1]: Reached target paths.target.
Feb 9 19:14:41.978216 systemd[1]: Reached target slices.target.
Feb 9 19:14:41.978234 systemd[1]: Reached target swap.target.
Feb 9 19:14:41.978251 systemd[1]: Reached target timers.target.
Feb 9 19:14:41.978269 systemd[1]: Listening on iscsid.socket.
Feb 9 19:14:41.978287 systemd[1]: Listening on iscsiuio.socket.
Feb 9 19:14:41.978304 systemd[1]: Listening on systemd-journald-audit.socket.
Feb 9 19:14:41.978322 systemd[1]: Listening on systemd-journald-dev-log.socket.
Feb 9 19:14:41.978344 systemd[1]: Listening on systemd-journald.socket.
Feb 9 19:14:41.978361 systemd[1]: Listening on systemd-networkd.socket.
Feb 9 19:14:41.978379 systemd[1]: Listening on systemd-udevd-control.socket.
Feb 9 19:14:41.978396 systemd[1]: Listening on systemd-udevd-kernel.socket.
Feb 9 19:14:41.978414 systemd[1]: Reached target sockets.target.
Feb 9 19:14:41.978432 systemd[1]: Starting kmod-static-nodes.service...
Feb 9 19:14:41.978449 systemd[1]: Finished network-cleanup.service.
Feb 9 19:14:41.978467 systemd[1]: Starting systemd-fsck-usr.service...
Feb 9 19:14:41.978485 systemd[1]: Starting systemd-journald.service...
Feb 9 19:14:41.978507 systemd[1]: Starting systemd-modules-load.service...
Feb 9 19:14:41.978524 systemd[1]: Starting systemd-resolved.service...
Feb 9 19:14:41.978542 systemd[1]: Starting systemd-vconsole-setup.service...
Feb 9 19:14:41.978585 systemd[1]: Finished kmod-static-nodes.service.
Feb 9 19:14:41.978604 kernel: audit: type=1130 audit(1707506081.966:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:14:41.978648 systemd[1]: Finished systemd-fsck-usr.service.
Feb 9 19:14:41.978670 systemd-journald[308]: Journal started
Feb 9 19:14:41.978766 systemd-journald[308]: Runtime Journal (/run/log/journal/ec2e76cedf484bb0260609000e6fb769) is 8.0M, max 75.4M, 67.4M free.
Feb 9 19:14:41.966000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:14:41.951380 systemd-modules-load[309]: Inserted module 'overlay'
Feb 9 19:14:41.994574 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 9 19:14:41.997000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:14:42.011539 kernel: Bridge firewalling registered
Feb 9 19:14:42.011591 systemd[1]: Started systemd-journald.service.
Feb 9 19:14:42.011620 kernel: audit: type=1130 audit(1707506081.997:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:14:42.012763 systemd[1]: Finished systemd-vconsole-setup.service.
Feb 9 19:14:42.016629 systemd-modules-load[309]: Inserted module 'br_netfilter'
Feb 9 19:14:42.031722 kernel: audit: type=1130 audit(1707506082.010:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:14:42.010000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:14:42.033203 systemd[1]: Starting dracut-cmdline-ask.service...
Feb 9 19:14:42.041762 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Feb 9 19:14:42.061731 kernel: audit: type=1130 audit(1707506082.019:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:14:42.019000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:14:42.069013 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Feb 9 19:14:42.071000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:14:42.086988 kernel: audit: type=1130 audit(1707506082.071:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:14:42.089584 kernel: SCSI subsystem initialized
Feb 9 19:14:42.098253 systemd-resolved[310]: Positive Trust Anchors:
Feb 9 19:14:42.098275 systemd-resolved[310]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 9 19:14:42.098328 systemd-resolved[310]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Feb 9 19:14:42.139115 systemd[1]: Finished dracut-cmdline-ask.service.
Feb 9 19:14:42.148020 systemd[1]: Starting dracut-cmdline.service...
Feb 9 19:14:42.145000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:14:42.166583 kernel: audit: type=1130 audit(1707506082.145:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:14:42.166647 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 9 19:14:42.170567 kernel: device-mapper: uevent: version 1.0.3
Feb 9 19:14:42.174539 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Feb 9 19:14:42.179853 systemd-modules-load[309]: Inserted module 'dm_multipath'
Feb 9 19:14:42.181193 systemd[1]: Finished systemd-modules-load.service.
Feb 9 19:14:42.192963 dracut-cmdline[325]: dracut-dracut-053
Feb 9 19:14:42.188000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:14:42.199666 systemd[1]: Starting systemd-sysctl.service...
Feb 9 19:14:42.212085 kernel: audit: type=1130 audit(1707506082.188:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:14:42.214099 dracut-cmdline[325]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=680ffc8c0dfb23738bd19ec96ea37b5bbadfb5cebf23767d1d52c89a6d5c00b4
Feb 9 19:14:42.245003 systemd[1]: Finished systemd-sysctl.service.
Feb 9 19:14:42.246000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:14:42.256579 kernel: audit: type=1130 audit(1707506082.246:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:14:42.375574 kernel: Loading iSCSI transport class v2.0-870.
Feb 9 19:14:42.386581 kernel: iscsi: registered transport (tcp)
Feb 9 19:14:42.409581 kernel: iscsi: registered transport (qla4xxx)
Feb 9 19:14:42.409652 kernel: QLogic iSCSI HBA Driver
Feb 9 19:14:42.589112 systemd-resolved[310]: Defaulting to hostname 'linux'.
Feb 9 19:14:42.591832 kernel: random: crng init done
Feb 9 19:14:42.592414 systemd[1]: Started systemd-resolved.service.
Feb 9 19:14:42.605177 kernel: audit: type=1130 audit(1707506082.593:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:14:42.593000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:14:42.594834 systemd[1]: Reached target nss-lookup.target.
Feb 9 19:14:42.623335 systemd[1]: Finished dracut-cmdline.service.
Feb 9 19:14:42.624000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:14:42.627426 systemd[1]: Starting dracut-pre-udev.service...
Feb 9 19:14:42.691599 kernel: raid6: neonx8 gen() 6433 MB/s
Feb 9 19:14:42.709580 kernel: raid6: neonx8 xor() 4708 MB/s
Feb 9 19:14:42.727582 kernel: raid6: neonx4 gen() 6635 MB/s
Feb 9 19:14:42.745577 kernel: raid6: neonx4 xor() 4887 MB/s
Feb 9 19:14:42.763577 kernel: raid6: neonx2 gen() 5819 MB/s
Feb 9 19:14:42.781578 kernel: raid6: neonx2 xor() 4505 MB/s
Feb 9 19:14:42.799577 kernel: raid6: neonx1 gen() 4513 MB/s
Feb 9 19:14:42.817583 kernel: raid6: neonx1 xor() 3664 MB/s
Feb 9 19:14:42.835577 kernel: raid6: int64x8 gen() 3439 MB/s
Feb 9 19:14:42.853578 kernel: raid6: int64x8 xor() 2081 MB/s
Feb 9 19:14:42.871577 kernel: raid6: int64x4 gen() 3858 MB/s
Feb 9 19:14:42.889578 kernel: raid6: int64x4 xor() 2189 MB/s
Feb 9 19:14:42.907577 kernel: raid6: int64x2 gen() 3624 MB/s
Feb 9 19:14:42.925582 kernel: raid6: int64x2 xor() 1944 MB/s
Feb 9 19:14:42.943577 kernel: raid6: int64x1 gen() 2778 MB/s
Feb 9 19:14:42.963078 kernel: raid6: int64x1 xor() 1446 MB/s
Feb 9 19:14:42.963109 kernel: raid6: using algorithm neonx4 gen() 6635 MB/s
Feb 9 19:14:42.963132 kernel: raid6: .... xor() 4887 MB/s, rmw enabled
Feb 9 19:14:42.964908 kernel: raid6: using neon recovery algorithm
Feb 9 19:14:42.983584 kernel: xor: measuring software checksum speed
Feb 9 19:14:42.986579 kernel: 8regs : 9289 MB/sec
Feb 9 19:14:42.986608 kernel: 32regs : 11107 MB/sec
Feb 9 19:14:42.992618 kernel: arm64_neon : 9614 MB/sec
Feb 9 19:14:42.992651 kernel: xor: using function: 32regs (11107 MB/sec)
Feb 9 19:14:43.082593 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no
Feb 9 19:14:43.099918 systemd[1]: Finished dracut-pre-udev.service.
Feb 9 19:14:43.100000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:14:43.100000 audit: BPF prog-id=7 op=LOAD
Feb 9 19:14:43.100000 audit: BPF prog-id=8 op=LOAD
Feb 9 19:14:43.104104 systemd[1]: Starting systemd-udevd.service...
Feb 9 19:14:43.132161 systemd-udevd[509]: Using default interface naming scheme 'v252'.
Feb 9 19:14:43.143179 systemd[1]: Started systemd-udevd.service.
Feb 9 19:14:43.143000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:14:43.146323 systemd[1]: Starting dracut-pre-trigger.service...
Feb 9 19:14:43.178272 dracut-pre-trigger[514]: rd.md=0: removing MD RAID activation
Feb 9 19:14:43.238279 systemd[1]: Finished dracut-pre-trigger.service.
Feb 9 19:14:43.239000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:14:43.242781 systemd[1]: Starting systemd-udev-trigger.service...
Feb 9 19:14:43.346022 systemd[1]: Finished systemd-udev-trigger.service.
Feb 9 19:14:43.346000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
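[Editor's note: the raid6 block above is a benchmark table; the kernel times each gen()/xor() candidate and keeps the fastest, here neonx4 at 6635 MB/s, and does the same for the xor checksum functions. A toy Python version of that selection, using the gen() figures straight from the log:]

```python
# gen() throughputs (MB/s) copied from the raid6 benchmark lines above.
gen_speeds = {
    "neonx8": 6433, "neonx4": 6635, "neonx2": 5819, "neonx1": 4513,
    "int64x8": 3439, "int64x4": 3858, "int64x2": 3624, "int64x1": 2778,
}
best = max(gen_speeds, key=gen_speeds.get)  # pick the fastest candidate
print(f"raid6: using algorithm {best} gen() {gen_speeds[best]} MB/s")
# -> raid6: using algorithm neonx4 gen() 6635 MB/s, matching the log
```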
Feb 9 19:14:43.457584 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Feb 9 19:14:43.457651 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012)
Feb 9 19:14:43.476008 kernel: ena 0000:00:05.0: ENA device version: 0.10
Feb 9 19:14:43.476333 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Feb 9 19:14:43.482573 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Feb 9 19:14:43.488212 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:64:9e:26:8b:f3
Feb 9 19:14:43.488491 kernel: nvme nvme0: pci function 0000:00:04.0
Feb 9 19:14:43.490817 (udev-worker)[574]: Network interface NamePolicy= disabled on kernel command line.
Feb 9 19:14:43.498616 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Feb 9 19:14:43.505091 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Feb 9 19:14:43.505158 kernel: GPT:9289727 != 16777215
Feb 9 19:14:43.507467 kernel: GPT:Alternate GPT header not at the end of the disk.
Feb 9 19:14:43.510705 kernel: GPT:9289727 != 16777215
Feb 9 19:14:43.510757 kernel: GPT: Use GNU Parted to correct GPT errors.
Feb 9 19:14:43.512310 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 9 19:14:43.584590 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by (udev-worker) (573)
Feb 9 19:14:43.603435 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
Feb 9 19:14:43.679471 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Feb 9 19:14:43.695303 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device.
Feb 9 19:14:43.696013 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
Feb 9 19:14:43.718644 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
Feb 9 19:14:43.731681 systemd[1]: Starting disk-uuid.service...
Feb 9 19:14:43.742996 disk-uuid[672]: Primary Header is updated.
Feb 9 19:14:43.742996 disk-uuid[672]: Secondary Entries is updated.
Feb 9 19:14:43.742996 disk-uuid[672]: Secondary Header is updated.
Feb 9 19:14:43.751584 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 9 19:14:43.760584 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 9 19:14:43.769587 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 9 19:14:44.767590 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 9 19:14:44.768475 disk-uuid[673]: The operation has completed successfully.
Feb 9 19:14:44.926053 systemd[1]: disk-uuid.service: Deactivated successfully.
Feb 9 19:14:44.926635 systemd[1]: Finished disk-uuid.service.
Feb 9 19:14:44.928000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:14:44.929000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:14:44.951029 systemd[1]: Starting verity-setup.service...
Feb 9 19:14:44.982812 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Feb 9 19:14:45.073654 systemd[1]: Found device dev-mapper-usr.device.
Feb 9 19:14:45.079486 systemd[1]: Mounting sysusr-usr.mount...
Feb 9 19:14:45.085314 systemd[1]: Finished verity-setup.service.
Feb 9 19:14:45.086000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
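[Editor's note: the "GPT:9289727 != 16777215" complaint above is the usual grown-disk case: the backup GPT header was written at the last LBA of the original disk image, but the EBS volume is larger, so the header is no longer at the end of the disk; disk-uuid then rewrites the secondary header and entries, as logged. A quick arithmetic check of those two numbers in Python, assuming the standard 512-byte logical sector size:]

```python
SECTOR = 512  # assumed logical block size

image_alt_lba = 9_289_727    # where the primary header says the backup GPT lives
disk_last_lba = 16_777_215   # actual last LBA of the NVMe volume

# GPT requires the backup header to sit at the final LBA of the disk.
print((disk_last_lba + 1) * SECTOR / 2**30)  # 8.0 -> an 8 GiB volume
print((image_alt_lba + 1) * SECTOR / 2**30)  # ~4.43 -> size of the original image
print(image_alt_lba == disk_last_lba)        # False: hence "Alternate GPT header
                                             # not at the end of the disk"
```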
Feb 9 19:14:45.185813 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none.
Feb 9 19:14:45.184406 systemd[1]: Mounted sysusr-usr.mount.
Feb 9 19:14:45.186226 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met.
Feb 9 19:14:45.187480 systemd[1]: Starting ignition-setup.service...
Feb 9 19:14:45.191811 systemd[1]: Starting parse-ip-for-networkd.service...
Feb 9 19:14:45.224917 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Feb 9 19:14:45.224994 kernel: BTRFS info (device nvme0n1p6): using free space tree
Feb 9 19:14:45.227375 kernel: BTRFS info (device nvme0n1p6): has skinny extents
Feb 9 19:14:45.239579 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Feb 9 19:14:45.259825 systemd[1]: mnt-oem.mount: Deactivated successfully.
Feb 9 19:14:45.295944 systemd[1]: Finished ignition-setup.service.
Feb 9 19:14:45.296000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:14:45.300007 systemd[1]: Starting ignition-fetch-offline.service...
Feb 9 19:14:45.355534 systemd[1]: Finished parse-ip-for-networkd.service.
Feb 9 19:14:45.357000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:14:45.370000 audit: BPF prog-id=9 op=LOAD
Feb 9 19:14:45.373308 systemd[1]: Starting systemd-networkd.service...
Feb 9 19:14:45.421380 systemd-networkd[1185]: lo: Link UP
Feb 9 19:14:45.421402 systemd-networkd[1185]: lo: Gained carrier
Feb 9 19:14:45.425858 systemd-networkd[1185]: Enumeration completed
Feb 9 19:14:45.427371 systemd[1]: Started systemd-networkd.service.
Feb 9 19:14:45.428644 systemd-networkd[1185]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 9 19:14:45.433000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:14:45.434680 systemd[1]: Reached target network.target.
Feb 9 19:14:45.435354 systemd-networkd[1185]: eth0: Link UP
Feb 9 19:14:45.435362 systemd-networkd[1185]: eth0: Gained carrier
Feb 9 19:14:45.442619 systemd[1]: Starting iscsiuio.service...
Feb 9 19:14:45.454217 systemd[1]: Started iscsiuio.service.
Feb 9 19:14:45.454000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:14:45.458746 systemd-networkd[1185]: eth0: DHCPv4 address 172.31.28.78/20, gateway 172.31.16.1 acquired from 172.31.16.1
Feb 9 19:14:45.461009 systemd[1]: Starting iscsid.service...
Feb 9 19:14:45.470451 iscsid[1190]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
Feb 9 19:14:45.470451 iscsid[1190]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier].
Feb 9 19:14:45.470451 iscsid[1190]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
Feb 9 19:14:45.470451 iscsid[1190]: If using hardware iscsi like qla4xxx this message can be ignored.
Feb 9 19:14:45.470451 iscsid[1190]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
Feb 9 19:14:45.489808 iscsid[1190]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf
Feb 9 19:14:45.495226 systemd[1]: Started iscsid.service.
Feb 9 19:14:45.495000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:14:45.498971 systemd[1]: Starting dracut-initqueue.service...
Feb 9 19:14:45.521887 systemd[1]: Finished dracut-initqueue.service.
Feb 9 19:14:45.525498 systemd[1]: Reached target remote-fs-pre.target.
Feb 9 19:14:45.523000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:14:45.527269 systemd[1]: Reached target remote-cryptsetup.target.
Feb 9 19:14:45.530294 systemd[1]: Reached target remote-fs.target.
Feb 9 19:14:45.536414 systemd[1]: Starting dracut-pre-mount.service...
Feb 9 19:14:45.554928 systemd[1]: Finished dracut-pre-mount.service.
Feb 9 19:14:45.553000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:14:45.803105 ignition[1141]: Ignition 2.14.0
Feb 9 19:14:45.804793 ignition[1141]: Stage: fetch-offline
Feb 9 19:14:45.806354 ignition[1141]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 9 19:14:45.806455 ignition[1141]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Feb 9 19:14:45.823842 ignition[1141]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 9 19:14:45.824721 ignition[1141]: Ignition finished successfully
Feb 9 19:14:45.829635 systemd[1]: Finished ignition-fetch-offline.service.
Feb 9 19:14:45.852712 kernel: kauditd_printk_skb: 18 callbacks suppressed
Feb 9 19:14:45.852751 kernel: audit: type=1130 audit(1707506085.828:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:14:45.828000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:14:45.833595 systemd[1]: Starting ignition-fetch.service...
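[Editor's note: iscsid's warning above spells out the fix itself: write an InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier] line to /etc/iscsi/initiatorname.iscsi. A hedged sketch of generating such a line; the domain and identifier below are invented examples, not values from this system, and in a real IQN the date should reflect when the naming authority registered the domain rather than today:]

```python
from datetime import date

def make_initiator_name(domain: str, identifier: str = "") -> str:
    """Build an iSCSI IQN line: iqn.yyyy-mm.<reversed domain name>[:identifier]."""
    reversed_domain = ".".join(reversed(domain.split(".")))  # example.com -> com.example
    today = date.today()  # simplification; see note above about the date field
    iqn = f"iqn.{today.year}-{today.month:02d}.{reversed_domain}"
    return f"InitiatorName={iqn}:{identifier}" if identifier else f"InitiatorName={iqn}"

# Hypothetical values; compare the log's example InitiatorName=iqn.2001-04.com.redhat:fc6
print(make_initiator_name("example.com", "node1"))
# e.g. InitiatorName=iqn.2025-06.com.example:node1
```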
Feb 9 19:14:45.857578 ignition[1209]: Ignition 2.14.0
Feb 9 19:14:45.857606 ignition[1209]: Stage: fetch
Feb 9 19:14:45.857885 ignition[1209]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 9 19:14:45.857937 ignition[1209]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Feb 9 19:14:45.872613 ignition[1209]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 9 19:14:45.875929 ignition[1209]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 9 19:14:45.881066 ignition[1209]: INFO : PUT result: OK
Feb 9 19:14:45.884058 ignition[1209]: DEBUG : parsed url from cmdline: ""
Feb 9 19:14:45.884058 ignition[1209]: INFO : no config URL provided
Feb 9 19:14:45.884058 ignition[1209]: INFO : reading system config file "/usr/lib/ignition/user.ign"
Feb 9 19:14:45.889818 ignition[1209]: INFO : no config at "/usr/lib/ignition/user.ign"
Feb 9 19:14:45.889818 ignition[1209]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 9 19:14:45.894112 ignition[1209]: INFO : PUT result: OK
Feb 9 19:14:45.894112 ignition[1209]: INFO : GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Feb 9 19:14:45.898439 ignition[1209]: INFO : GET result: OK
Feb 9 19:14:45.899950 ignition[1209]: DEBUG : parsing config with SHA512: 3a498153253df4d91be47813b900d601d96047927d9fd8e9b84eb0b1c891ffbd80f3d3c97157ce460ec59a01ee4dc3716e3ea56f97da9c9ff6afbca605e0ce8e
Feb 9 19:14:45.961296 unknown[1209]: fetched base config from "system"
Feb 9 19:14:45.961541 unknown[1209]: fetched base config from "system"
Feb 9 19:14:45.963085 ignition[1209]: fetch: fetch complete
Feb 9 19:14:45.961580 unknown[1209]: fetched user config from "aws"
Feb 9 19:14:45.963098 ignition[1209]: fetch: fetch passed
Feb 9 19:14:45.963207 ignition[1209]: Ignition finished successfully
Feb 9 19:14:45.973344 systemd[1]: Finished ignition-fetch.service.
Feb 9 19:14:45.985701 kernel: audit: type=1130 audit(1707506085.973:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:14:45.973000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:14:45.976691 systemd[1]: Starting ignition-kargs.service...
Feb 9 19:14:45.999316 ignition[1215]: Ignition 2.14.0
Feb 9 19:14:45.999341 ignition[1215]: Stage: kargs
Feb 9 19:14:45.999661 ignition[1215]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 9 19:14:45.999720 ignition[1215]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Feb 9 19:14:46.012825 ignition[1215]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 9 19:14:46.015089 ignition[1215]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 9 19:14:46.018105 ignition[1215]: INFO : PUT result: OK
Feb 9 19:14:46.023873 ignition[1215]: kargs: kargs passed
Feb 9 19:14:46.023979 ignition[1215]: Ignition finished successfully
Feb 9 19:14:46.027754 systemd[1]: Finished ignition-kargs.service.
Feb 9 19:14:46.032029 systemd[1]: Starting ignition-disks.service...
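[Editor's note: the PUT-then-GET pairs above are the IMDSv2 flow: Ignition first PUTs to the token endpoint, then presents the returned session token on the user-data GET. A minimal standalone sketch of the same exchange; it is only meaningful when run inside an EC2 instance, the TTL value is an arbitrary example, and Ignition's own HTTP client differs in retry and error handling:]

```python
import urllib.request

IMDS = "http://169.254.169.254"

# Step 1: PUT /latest/api/token to obtain a session token
# (logged above as "PUT http://169.254.169.254/latest/api/token: attempt #1").
token_req = urllib.request.Request(
    f"{IMDS}/latest/api/token", method="PUT",
    headers={"X-aws-ec2-metadata-token-ttl-seconds": "21600"})  # TTL: arbitrary choice
token = urllib.request.urlopen(token_req).read().decode()

# Step 2: GET the user data, presenting the token
# (logged above as "GET http://169.254.169.254/2019-10-01/user-data").
data_req = urllib.request.Request(
    f"{IMDS}/2019-10-01/user-data",
    headers={"X-aws-ec2-metadata-token": token})
user_data = urllib.request.urlopen(data_req).read()
print(f"fetched {len(user_data)} bytes of user data")
```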
Feb 9 19:14:46.026000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:14:46.053329 ignition[1221]: Ignition 2.14.0
Feb 9 19:14:46.053493 ignition[1221]: Stage: disks
Feb 9 19:14:46.054023 ignition[1221]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 9 19:14:46.072285 kernel: audit: type=1130 audit(1707506086.026:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:14:46.072397 ignition[1221]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 9 19:14:46.054106 ignition[1221]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Feb 9 19:14:46.071679 ignition[1221]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 9 19:14:46.077175 ignition[1221]: INFO : PUT result: OK
Feb 9 19:14:46.083663 ignition[1221]: disks: disks passed
Feb 9 19:14:46.083982 ignition[1221]: Ignition finished successfully
Feb 9 19:14:46.088137 systemd[1]: Finished ignition-disks.service.
Feb 9 19:14:46.103103 kernel: audit: type=1130 audit(1707506086.093:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:14:46.093000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:14:46.102426 systemd[1]: Reached target initrd-root-device.target.
Feb 9 19:14:46.104857 systemd[1]: Reached target local-fs-pre.target.
Feb 9 19:14:46.120356 systemd[1]: Reached target local-fs.target.
Feb 9 19:14:46.136485 systemd[1]: Reached target sysinit.target.
Feb 9 19:14:46.139865 systemd[1]: Reached target basic.target.
Feb 9 19:14:46.143870 systemd[1]: Starting systemd-fsck-root.service...
Feb 9 19:14:46.175730 systemd-fsck[1229]: ROOT: clean, 602/553520 files, 56013/553472 blocks
Feb 9 19:14:46.192613 systemd[1]: Finished systemd-fsck-root.service.
Feb 9 19:14:46.193000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:14:46.203873 systemd[1]: Mounting sysroot.mount...
Feb 9 19:14:46.209659 kernel: audit: type=1130 audit(1707506086.193:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:14:46.224586 kernel: EXT4-fs (nvme0n1p9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
Feb 9 19:14:46.225620 systemd[1]: Mounted sysroot.mount.
Feb 9 19:14:46.226419 systemd[1]: Reached target initrd-root-fs.target.
Feb 9 19:14:46.239188 systemd[1]: Mounting sysroot-usr.mount...
Feb 9 19:14:46.241644 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met.
Feb 9 19:14:46.241932 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb 9 19:14:46.242004 systemd[1]: Reached target ignition-diskful.target.
Feb 9 19:14:46.252807 systemd[1]: Mounted sysroot-usr.mount.
Feb 9 19:14:46.268559 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Feb 9 19:14:46.274004 systemd[1]: Starting initrd-setup-root.service...
Feb 9 19:14:46.291606 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1246)
Feb 9 19:14:46.294755 initrd-setup-root[1251]: cut: /sysroot/etc/passwd: No such file or directory
Feb 9 19:14:46.303110 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Feb 9 19:14:46.303204 kernel: BTRFS info (device nvme0n1p6): using free space tree
Feb 9 19:14:46.303230 kernel: BTRFS info (device nvme0n1p6): has skinny extents
Feb 9 19:14:46.312620 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Feb 9 19:14:46.316544 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Feb 9 19:14:46.319684 initrd-setup-root[1277]: cut: /sysroot/etc/group: No such file or directory
Feb 9 19:14:46.327778 initrd-setup-root[1285]: cut: /sysroot/etc/shadow: No such file or directory
Feb 9 19:14:46.336581 initrd-setup-root[1293]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 9 19:14:46.491852 systemd[1]: Finished initrd-setup-root.service.
Feb 9 19:14:46.493000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:14:46.496322 systemd[1]: Starting ignition-mount.service...
Feb 9 19:14:46.506413 systemd[1]: Starting sysroot-boot.service...
Feb 9 19:14:46.510598 kernel: audit: type=1130 audit(1707506086.493:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:14:46.520285 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully.
Feb 9 19:14:46.520459 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully.
Feb 9 19:14:46.548148 systemd[1]: Finished sysroot-boot.service.
Feb 9 19:14:46.550000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:14:46.563587 kernel: audit: type=1130 audit(1707506086.550:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:14:46.565724 ignition[1312]: INFO : Ignition 2.14.0
Feb 9 19:14:46.569343 ignition[1312]: INFO : Stage: mount
Feb 9 19:14:46.569343 ignition[1312]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 9 19:14:46.569343 ignition[1312]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Feb 9 19:14:46.584936 ignition[1312]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 9 19:14:46.587747 ignition[1312]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 9 19:14:46.590530 ignition[1312]: INFO : PUT result: OK
Feb 9 19:14:46.595954 ignition[1312]: INFO : mount: mount passed
Feb 9 19:14:46.597607 ignition[1312]: INFO : Ignition finished successfully
Feb 9 19:14:46.600375 systemd[1]: Finished ignition-mount.service.
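[Editor's note: every Ignition stage above logs the SHA512 of each config it parses ("parsing config with SHA512: 6629d8e8..."), and the files stage below checks a downloaded artifact against an expected digest the same way. A sketch of that digest-and-compare step with Python's hashlib; the streaming helper is hypothetical, while the path and digest are the ones logged above:]

```python
import hashlib

def sha512_hex(path: str) -> str:
    """Stream a file through SHA-512, as Ignition logs before parsing a config."""
    h = hashlib.sha512()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):  # read in 64 KiB chunks
            h.update(chunk)
    return h.hexdigest()

# Digest copied from the "parsing config with SHA512:" lines above.
expected = ("6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8"
            "b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b")
if sha512_hex("/usr/lib/ignition/base.d/base.ign") != expected:
    raise ValueError("config digest mismatch")
```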
Feb 9 19:14:46.601000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:46.611875 systemd[1]: Starting ignition-files.service... Feb 9 19:14:46.619586 kernel: audit: type=1130 audit(1707506086.601:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:46.625917 systemd[1]: Mounting sysroot-usr-share-oem.mount... Feb 9 19:14:46.647207 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/nvme0n1p6 scanned by mount (1321) Feb 9 19:14:46.647266 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Feb 9 19:14:46.649527 kernel: BTRFS info (device nvme0n1p6): using free space tree Feb 9 19:14:46.651656 kernel: BTRFS info (device nvme0n1p6): has skinny extents Feb 9 19:14:46.658600 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Feb 9 19:14:46.662678 systemd[1]: Mounted sysroot-usr-share-oem.mount. Feb 9 19:14:46.693323 ignition[1340]: INFO : Ignition 2.14.0 Feb 9 19:14:46.693323 ignition[1340]: INFO : Stage: files Feb 9 19:14:46.696663 ignition[1340]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 19:14:46.696663 ignition[1340]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Feb 9 19:14:46.713440 ignition[1340]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Feb 9 19:14:46.715936 ignition[1340]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Feb 9 19:14:46.719225 ignition[1340]: INFO : PUT result: OK Feb 9 19:14:46.724448 ignition[1340]: DEBUG : files: compiled without relabeling support, skipping Feb 9 19:14:46.728453 ignition[1340]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Feb 9 19:14:46.728453 ignition[1340]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Feb 9 19:14:46.753492 ignition[1340]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Feb 9 19:14:46.756480 ignition[1340]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Feb 9 19:14:46.759814 unknown[1340]: wrote ssh authorized keys file for user: core Feb 9 19:14:46.762757 ignition[1340]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Feb 9 19:14:46.765380 ignition[1340]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/cni-plugins-linux-arm64-v1.1.1.tgz" Feb 9 19:14:46.765380 ignition[1340]: INFO : GET https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-arm64-v1.1.1.tgz: attempt #1 Feb 9 19:14:47.241036 ignition[1340]: INFO : GET result: OK Feb 9 19:14:47.272714 systemd-networkd[1185]: eth0: Gained IPv6LL Feb 9 19:14:47.653428 ignition[1340]: DEBUG : file matches expected sum of: 6b5df61a53601926e4b5a9174828123d555f592165439f541bc117c68781f41c8bd30dccd52367e406d104df849bcbcfb72d9c4bafda4b045c59ce95d0ca0742 Feb 9 19:14:47.658010 ignition[1340]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/cni-plugins-linux-arm64-v1.1.1.tgz" Feb 9 19:14:47.658010 ignition[1340]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file 
"/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Feb 9 19:14:47.658010 ignition[1340]: INFO : GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Feb 9 19:14:47.696293 ignition[1340]: INFO : GET result: OK Feb 9 19:14:47.804437 ignition[1340]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Feb 9 19:14:47.808869 ignition[1340]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/crictl-v1.26.0-linux-arm64.tar.gz" Feb 9 19:14:47.808869 ignition[1340]: INFO : GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.26.0/crictl-v1.26.0-linux-arm64.tar.gz: attempt #1 Feb 9 19:14:48.164179 ignition[1340]: INFO : GET result: OK Feb 9 19:14:48.409673 ignition[1340]: DEBUG : file matches expected sum of: 4c7e4541123cbd6f1d6fec1f827395cd58d65716c0998de790f965485738b6d6257c0dc46fd7f66403166c299f6d5bf9ff30b6e1ff9afbb071f17005e834518c Feb 9 19:14:48.414212 ignition[1340]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/crictl-v1.26.0-linux-arm64.tar.gz" Feb 9 19:14:48.418145 ignition[1340]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/bin/kubectl" Feb 9 19:14:48.421407 ignition[1340]: INFO : GET https://dl.k8s.io/release/v1.26.5/bin/linux/arm64/kubectl: attempt #1 Feb 9 19:14:48.540369 ignition[1340]: INFO : GET result: OK Feb 9 19:14:49.192709 ignition[1340]: DEBUG : file matches expected sum of: 3672fda0beebbbd636a2088f427463cbad32683ea4fbb1df61650552e63846b6a47db803ccb70c3db0a8f24746a23a5632bdc15a3fb78f4f7d833e7f86763c2a Feb 9 19:14:49.197596 ignition[1340]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/bin/kubectl" Feb 9 19:14:49.200768 ignition[1340]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/etc/docker/daemon.json" Feb 9 19:14:49.204435 ignition[1340]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/etc/docker/daemon.json" Feb 9 19:14:49.207907 ignition[1340]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/eks/bootstrap.sh" Feb 9 19:14:49.211318 ignition[1340]: INFO : oem config not found in "/usr/share/oem", looking on oem partition Feb 9 19:14:49.221725 ignition[1340]: INFO : op(1): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2078047167" Feb 9 19:14:49.228510 kernel: BTRFS info: devid 1 device path /dev/nvme0n1p6 changed to /dev/disk/by-label/OEM scanned by ignition (1340) Feb 9 19:14:49.228569 ignition[1340]: CRITICAL : op(1): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2078047167": device or resource busy Feb 9 19:14:49.228569 ignition[1340]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem2078047167", trying btrfs: device or resource busy Feb 9 19:14:49.228569 ignition[1340]: INFO : op(2): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2078047167" Feb 9 19:14:49.228569 ignition[1340]: INFO : op(2): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2078047167" Feb 9 19:14:49.257638 ignition[1340]: INFO : op(3): [started] unmounting "/mnt/oem2078047167" Feb 9 19:14:49.260326 ignition[1340]: INFO : op(3): [finished] unmounting "/mnt/oem2078047167" Feb 9 19:14:49.262045 systemd[1]: mnt-oem2078047167.mount: Deactivated successfully. 
Feb 9 19:14:49.265144 ignition[1340]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/eks/bootstrap.sh" Feb 9 19:14:49.265144 ignition[1340]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/opt/bin/kubeadm" Feb 9 19:14:49.265144 ignition[1340]: INFO : GET https://dl.k8s.io/release/v1.26.5/bin/linux/arm64/kubeadm: attempt #1 Feb 9 19:14:49.331867 ignition[1340]: INFO : GET result: OK Feb 9 19:14:49.941434 ignition[1340]: DEBUG : file matches expected sum of: 46c9f489062bdb84574703f7339d140d7e42c9c71b367cd860071108a3c1d38fabda2ef69f9c0ff88f7c80e88d38f96ab2248d4c9a6c9c60b0a4c20fd640d0db Feb 9 19:14:49.946041 ignition[1340]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/opt/bin/kubeadm" Feb 9 19:14:49.946041 ignition[1340]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/bin/kubelet" Feb 9 19:14:49.946041 ignition[1340]: INFO : GET https://dl.k8s.io/release/v1.26.5/bin/linux/arm64/kubelet: attempt #1 Feb 9 19:14:50.014534 ignition[1340]: INFO : GET result: OK Feb 9 19:14:51.433255 ignition[1340]: DEBUG : file matches expected sum of: 0e4ee1f23bf768c49d09beb13a6b5fad6efc8e3e685e7c5610188763e3af55923fb46158b5e76973a0f9a055f9b30d525b467c53415f965536adc2f04d9cf18d Feb 9 19:14:51.438219 ignition[1340]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/bin/kubelet" Feb 9 19:14:51.438219 ignition[1340]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/etc/flatcar/update.conf" Feb 9 19:14:51.438219 ignition[1340]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/etc/flatcar/update.conf" Feb 9 19:14:51.438219 ignition[1340]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 9 19:14:51.438219 ignition[1340]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 9 19:14:51.438219 ignition[1340]: INFO : files: createFilesystemsFiles: createFiles: op(d): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 9 19:14:51.438219 ignition[1340]: INFO : files: createFilesystemsFiles: createFiles: op(d): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 9 19:14:51.438219 ignition[1340]: INFO : files: createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Feb 9 19:14:51.438219 ignition[1340]: INFO : GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 Feb 9 19:14:51.855715 ignition[1340]: INFO : GET result: OK Feb 9 19:14:51.961847 ignition[1340]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Feb 9 19:14:51.967318 ignition[1340]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/home/core/install.sh" Feb 9 19:14:51.967318 ignition[1340]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/home/core/install.sh" Feb 9 19:14:51.967318 ignition[1340]: INFO : files: createFilesystemsFiles: createFiles: op(10): [started] writing file "/sysroot/home/core/nginx.yaml" Feb 9 19:14:51.967318 ignition[1340]: INFO : files: createFilesystemsFiles: createFiles: op(10): [finished] writing file "/sysroot/home/core/nginx.yaml" 
Feb 9 19:14:51.967318 ignition[1340]: INFO : files: createFilesystemsFiles: createFiles: op(11): [started] writing file "/sysroot/etc/systemd/system/nvidia.service" Feb 9 19:14:51.967318 ignition[1340]: INFO : oem config not found in "/usr/share/oem", looking on oem partition Feb 9 19:14:52.014770 ignition[1340]: INFO : op(4): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3398933354" Feb 9 19:14:52.014770 ignition[1340]: CRITICAL : op(4): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3398933354": device or resource busy Feb 9 19:14:52.014770 ignition[1340]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem3398933354", trying btrfs: device or resource busy Feb 9 19:14:52.014770 ignition[1340]: INFO : op(5): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3398933354" Feb 9 19:14:52.014770 ignition[1340]: INFO : op(5): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3398933354" Feb 9 19:14:52.014770 ignition[1340]: INFO : op(6): [started] unmounting "/mnt/oem3398933354" Feb 9 19:14:52.014770 ignition[1340]: INFO : op(6): [finished] unmounting "/mnt/oem3398933354" Feb 9 19:14:52.014770 ignition[1340]: INFO : files: createFilesystemsFiles: createFiles: op(11): [finished] writing file "/sysroot/etc/systemd/system/nvidia.service" Feb 9 19:14:52.014770 ignition[1340]: INFO : files: createFilesystemsFiles: createFiles: op(12): [started] writing file "/sysroot/etc/amazon/ssm/seelog.xml" Feb 9 19:14:52.014770 ignition[1340]: INFO : oem config not found in "/usr/share/oem", looking on oem partition Feb 9 19:14:52.014770 ignition[1340]: INFO : op(7): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem416525198" Feb 9 19:14:52.014770 ignition[1340]: CRITICAL : op(7): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem416525198": device or resource busy Feb 9 19:14:52.014770 ignition[1340]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem416525198", trying btrfs: device or resource busy Feb 9 19:14:52.014770 ignition[1340]: INFO : op(8): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem416525198" Feb 9 19:14:52.014770 ignition[1340]: INFO : op(8): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem416525198" Feb 9 19:14:52.014770 ignition[1340]: INFO : op(9): [started] unmounting "/mnt/oem416525198" Feb 9 19:14:52.014770 ignition[1340]: INFO : op(9): [finished] unmounting "/mnt/oem416525198" Feb 9 19:14:52.014770 ignition[1340]: INFO : files: createFilesystemsFiles: createFiles: op(12): [finished] writing file "/sysroot/etc/amazon/ssm/seelog.xml" Feb 9 19:14:52.014770 ignition[1340]: INFO : files: createFilesystemsFiles: createFiles: op(13): [started] writing file "/sysroot/etc/amazon/ssm/amazon-ssm-agent.json" Feb 9 19:14:52.014770 ignition[1340]: INFO : oem config not found in "/usr/share/oem", looking on oem partition Feb 9 19:14:52.122939 kernel: audit: type=1130 audit(1707506092.063:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:52.063000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:52.058836 systemd[1]: Finished ignition-files.service. 
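The op(4)/op(5) and op(7)/op(8) pairs above show Ignition's filesystem-type fallback when reading files off the OEM partition: the ext4 mount of /dev/disk/by-label/OEM fails with "device or resource busy", and the btrfs retry succeeds. A rough sketch of that retry loop around mount(8), to be run as root; the device path matches the log, but the helper itself is hypothetical, not Ignition's code:

import subprocess
import tempfile

def mount_with_fallback(device, fstypes=("ext4", "btrfs")):
    # Try each filesystem type in order, as the log shows Ignition doing.
    target = tempfile.mkdtemp(prefix="oem")
    for fstype in fstypes:
        result = subprocess.run(
            ["mount", "-t", fstype, device, target],
            capture_output=True, text=True,
        )
        if result.returncode == 0:
            return target, fstype
        print(f"mount as {fstype} failed: {result.stderr.strip()}")
    raise RuntimeError(f"could not mount {device}")

mountpoint, fstype = mount_with_fallback("/dev/disk/by-label/OEM")
print(f"mounted at {mountpoint} as {fstype}")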
Feb 9 19:14:52.124931 ignition[1340]: INFO : op(a): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2316568540" Feb 9 19:14:52.124931 ignition[1340]: CRITICAL : op(a): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2316568540": device or resource busy Feb 9 19:14:52.124931 ignition[1340]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem2316568540", trying btrfs: device or resource busy Feb 9 19:14:52.124931 ignition[1340]: INFO : op(b): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2316568540" Feb 9 19:14:52.124931 ignition[1340]: INFO : op(b): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2316568540" Feb 9 19:14:52.124931 ignition[1340]: INFO : op(c): [started] unmounting "/mnt/oem2316568540" Feb 9 19:14:52.124931 ignition[1340]: INFO : op(c): [finished] unmounting "/mnt/oem2316568540" Feb 9 19:14:52.124931 ignition[1340]: INFO : files: createFilesystemsFiles: createFiles: op(13): [finished] writing file "/sysroot/etc/amazon/ssm/amazon-ssm-agent.json" Feb 9 19:14:52.124931 ignition[1340]: INFO : files: op(14): [started] processing unit "coreos-metadata-sshkeys@.service" Feb 9 19:14:52.124931 ignition[1340]: INFO : files: op(14): [finished] processing unit "coreos-metadata-sshkeys@.service" Feb 9 19:14:52.124931 ignition[1340]: INFO : files: op(15): [started] processing unit "amazon-ssm-agent.service" Feb 9 19:14:52.124931 ignition[1340]: INFO : files: op(15): op(16): [started] writing unit "amazon-ssm-agent.service" at "/sysroot/etc/systemd/system/amazon-ssm-agent.service" Feb 9 19:14:52.124931 ignition[1340]: INFO : files: op(15): op(16): [finished] writing unit "amazon-ssm-agent.service" at "/sysroot/etc/systemd/system/amazon-ssm-agent.service" Feb 9 19:14:52.124931 ignition[1340]: INFO : files: op(15): [finished] processing unit "amazon-ssm-agent.service" Feb 9 19:14:52.124931 ignition[1340]: INFO : files: op(17): [started] processing unit "nvidia.service" Feb 9 19:14:52.124931 ignition[1340]: INFO : files: op(17): [finished] processing unit "nvidia.service" Feb 9 19:14:52.124931 ignition[1340]: INFO : files: op(18): [started] processing unit "prepare-cni-plugins.service" Feb 9 19:14:52.124931 ignition[1340]: INFO : files: op(18): op(19): [started] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Feb 9 19:14:52.124931 ignition[1340]: INFO : files: op(18): op(19): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Feb 9 19:14:52.124931 ignition[1340]: INFO : files: op(18): [finished] processing unit "prepare-cni-plugins.service" Feb 9 19:14:52.202156 kernel: audit: type=1130 audit(1707506092.170:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:52.202203 kernel: audit: type=1131 audit(1707506092.170:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:52.170000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:52.170000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Feb 9 19:14:52.076033 systemd[1]: Starting initrd-setup-root-after-ignition.service... Feb 9 19:14:52.205258 ignition[1340]: INFO : files: op(1a): [started] processing unit "prepare-critools.service" Feb 9 19:14:52.205258 ignition[1340]: INFO : files: op(1a): op(1b): [started] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Feb 9 19:14:52.205258 ignition[1340]: INFO : files: op(1a): op(1b): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Feb 9 19:14:52.205258 ignition[1340]: INFO : files: op(1a): [finished] processing unit "prepare-critools.service" Feb 9 19:14:52.205258 ignition[1340]: INFO : files: op(1c): [started] processing unit "prepare-helm.service" Feb 9 19:14:52.205258 ignition[1340]: INFO : files: op(1c): op(1d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 9 19:14:52.205258 ignition[1340]: INFO : files: op(1c): op(1d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 9 19:14:52.205258 ignition[1340]: INFO : files: op(1c): [finished] processing unit "prepare-helm.service" Feb 9 19:14:52.205258 ignition[1340]: INFO : files: op(1e): [started] setting preset to enabled for "prepare-cni-plugins.service" Feb 9 19:14:52.205258 ignition[1340]: INFO : files: op(1e): [finished] setting preset to enabled for "prepare-cni-plugins.service" Feb 9 19:14:52.205258 ignition[1340]: INFO : files: op(1f): [started] setting preset to enabled for "prepare-critools.service" Feb 9 19:14:52.205258 ignition[1340]: INFO : files: op(1f): [finished] setting preset to enabled for "prepare-critools.service" Feb 9 19:14:52.205258 ignition[1340]: INFO : files: op(20): [started] setting preset to enabled for "prepare-helm.service" Feb 9 19:14:52.205258 ignition[1340]: INFO : files: op(20): [finished] setting preset to enabled for "prepare-helm.service" Feb 9 19:14:52.205258 ignition[1340]: INFO : files: op(21): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service " Feb 9 19:14:52.205258 ignition[1340]: INFO : files: op(21): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service " Feb 9 19:14:52.205258 ignition[1340]: INFO : files: op(22): [started] setting preset to enabled for "amazon-ssm-agent.service" Feb 9 19:14:52.205258 ignition[1340]: INFO : files: op(22): [finished] setting preset to enabled for "amazon-ssm-agent.service" Feb 9 19:14:52.205258 ignition[1340]: INFO : files: op(23): [started] setting preset to enabled for "nvidia.service" Feb 9 19:14:52.205258 ignition[1340]: INFO : files: op(23): [finished] setting preset to enabled for "nvidia.service" Feb 9 19:14:52.280974 kernel: audit: type=1130 audit(1707506092.255:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:52.255000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:52.106906 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). 
Feb 9 19:14:52.284970 ignition[1340]: INFO : files: createResultFile: createFiles: op(24): [started] writing file "/sysroot/etc/.ignition-result.json" Feb 9 19:14:52.284970 ignition[1340]: INFO : files: createResultFile: createFiles: op(24): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 9 19:14:52.284970 ignition[1340]: INFO : files: files passed Feb 9 19:14:52.284970 ignition[1340]: INFO : Ignition finished successfully Feb 9 19:14:52.294652 initrd-setup-root-after-ignition[1365]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 9 19:14:52.109742 systemd[1]: Starting ignition-quench.service... Feb 9 19:14:52.171970 systemd[1]: ignition-quench.service: Deactivated successfully. Feb 9 19:14:52.172182 systemd[1]: Finished ignition-quench.service. Feb 9 19:14:52.254065 systemd[1]: Finished initrd-setup-root-after-ignition.service. Feb 9 19:14:52.257015 systemd[1]: Reached target ignition-complete.target. Feb 9 19:14:52.268324 systemd[1]: Starting initrd-parse-etc.service... Feb 9 19:14:52.331698 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 9 19:14:52.332302 systemd[1]: Finished initrd-parse-etc.service. Feb 9 19:14:52.351102 kernel: audit: type=1130 audit(1707506092.334:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:52.352636 kernel: audit: type=1131 audit(1707506092.334:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:52.334000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:52.334000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:52.343462 systemd[1]: Reached target initrd-fs.target. Feb 9 19:14:52.352682 systemd[1]: Reached target initrd.target. Feb 9 19:14:52.354455 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Feb 9 19:14:52.356566 systemd[1]: Starting dracut-pre-pivot.service... Feb 9 19:14:52.381258 systemd[1]: Finished dracut-pre-pivot.service. Feb 9 19:14:52.384000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:52.387750 systemd[1]: Starting initrd-cleanup.service... Feb 9 19:14:52.398117 kernel: audit: type=1130 audit(1707506092.384:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:52.408869 systemd[1]: Stopped target nss-lookup.target. Feb 9 19:14:52.412362 systemd[1]: Stopped target remote-cryptsetup.target. Feb 9 19:14:52.415958 systemd[1]: Stopped target timers.target. Feb 9 19:14:52.418964 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 9 19:14:52.420311 systemd[1]: Stopped dracut-pre-pivot.service. Feb 9 19:14:52.437926 systemd[1]: Stopped target initrd.target. Feb 9 19:14:52.441014 systemd[1]: Stopped target basic.target. 
Feb 9 19:14:52.446202 systemd[1]: Stopped target ignition-complete.target. Feb 9 19:14:52.449635 systemd[1]: Stopped target ignition-diskful.target. Feb 9 19:14:52.454490 systemd[1]: Stopped target initrd-root-device.target. Feb 9 19:14:52.436000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:52.457873 systemd[1]: Stopped target remote-fs.target. Feb 9 19:14:52.467995 kernel: audit: type=1131 audit(1707506092.436:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:52.468046 systemd[1]: Stopped target remote-fs-pre.target. Feb 9 19:14:52.471483 systemd[1]: Stopped target sysinit.target. Feb 9 19:14:52.485895 systemd[1]: Stopped target local-fs.target. Feb 9 19:14:52.489019 systemd[1]: Stopped target local-fs-pre.target. Feb 9 19:14:52.492254 systemd[1]: Stopped target swap.target. Feb 9 19:14:52.495066 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 9 19:14:52.497198 systemd[1]: Stopped dracut-pre-mount.service. Feb 9 19:14:52.512000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:52.514086 systemd[1]: Stopped target cryptsetup.target. Feb 9 19:14:52.524013 kernel: audit: type=1131 audit(1707506092.512:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:52.524160 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 9 19:14:52.525138 systemd[1]: Stopped dracut-initqueue.service. Feb 9 19:14:52.538875 kernel: audit: type=1131 audit(1707506092.526:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:52.526000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:52.536000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:52.527926 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 9 19:14:52.528155 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Feb 9 19:14:52.539000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:52.537763 systemd[1]: ignition-files.service: Deactivated successfully. 
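The kernel "audit: type=1130" and "type=1131" lines interleaved through this section are SERVICE_START and SERVICE_STOP records for the unit transitions being logged. Assuming the audit userspace tools are installed on the booted system (they are not part of this initrd), the same records can be pulled back out of the audit trail, for example:

import subprocess

# -m selects record types by name; -i interprets numeric fields into names.
out = subprocess.run(
    ["ausearch", "-m", "SERVICE_START,SERVICE_STOP", "-i"],
    capture_output=True, text=True,
)
print(out.stdout or out.stderr)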
Feb 9 19:14:52.566340 ignition[1378]: INFO : Ignition 2.14.0 Feb 9 19:14:52.566340 ignition[1378]: INFO : Stage: umount Feb 9 19:14:52.566340 ignition[1378]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 19:14:52.566340 ignition[1378]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Feb 9 19:14:52.569000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:52.537953 systemd[1]: Stopped ignition-files.service. Feb 9 19:14:52.593000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:52.595000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:52.542718 systemd[1]: Stopping ignition-mount.service... Feb 9 19:14:52.608000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:52.554873 systemd[1]: Stopping iscsiuio.service... Feb 9 19:14:52.567637 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 9 19:14:52.567932 systemd[1]: Stopped kmod-static-nodes.service. Feb 9 19:14:52.572506 systemd[1]: Stopping sysroot-boot.service... Feb 9 19:14:52.591198 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 9 19:14:52.623000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:52.624000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:52.592224 systemd[1]: Stopped systemd-udev-trigger.service. Feb 9 19:14:52.594757 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 9 19:14:52.595058 systemd[1]: Stopped dracut-pre-trigger.service. Feb 9 19:14:52.602835 systemd[1]: iscsiuio.service: Deactivated successfully. Feb 9 19:14:52.637000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:52.607200 systemd[1]: Stopped iscsiuio.service. Feb 9 19:14:52.614211 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 9 19:14:52.617268 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 9 19:14:52.617481 systemd[1]: Finished initrd-cleanup.service. Feb 9 19:14:52.634334 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 9 19:14:52.634595 systemd[1]: Stopped sysroot-boot.service. 
Feb 9 19:14:52.651428 ignition[1378]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Feb 9 19:14:52.654181 ignition[1378]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Feb 9 19:14:52.656523 ignition[1378]: INFO : PUT result: OK Feb 9 19:14:52.661971 ignition[1378]: INFO : umount: umount passed Feb 9 19:14:52.664362 ignition[1378]: INFO : Ignition finished successfully Feb 9 19:14:52.664000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:52.663932 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 9 19:14:52.669000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:52.671000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:52.672000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:52.664151 systemd[1]: Stopped ignition-mount.service. Feb 9 19:14:52.667625 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 9 19:14:52.677000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:52.667715 systemd[1]: Stopped ignition-disks.service. Feb 9 19:14:52.670989 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 9 19:14:52.671065 systemd[1]: Stopped ignition-kargs.service. Feb 9 19:14:52.672712 systemd[1]: ignition-fetch.service: Deactivated successfully. Feb 9 19:14:52.696000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:52.698000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:52.672786 systemd[1]: Stopped ignition-fetch.service. Feb 9 19:14:52.674418 systemd[1]: Stopped target network.target. Feb 9 19:14:52.676003 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 9 19:14:52.676251 systemd[1]: Stopped ignition-fetch-offline.service. Feb 9 19:14:52.679184 systemd[1]: Stopped target paths.target. Feb 9 19:14:52.680646 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 9 19:14:52.684625 systemd[1]: Stopped systemd-ask-password-console.path. Feb 9 19:14:52.685901 systemd[1]: Stopped target slices.target. Feb 9 19:14:52.690269 systemd[1]: Stopped target sockets.target. Feb 9 19:14:52.693431 systemd[1]: iscsid.socket: Deactivated successfully. Feb 9 19:14:52.693488 systemd[1]: Closed iscsid.socket. Feb 9 19:14:52.694853 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 9 19:14:52.694921 systemd[1]: Closed iscsiuio.socket. Feb 9 19:14:52.696325 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 9 19:14:52.696409 systemd[1]: Stopped ignition-setup.service. 
Feb 9 19:14:52.698029 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 9 19:14:52.698103 systemd[1]: Stopped initrd-setup-root.service. Feb 9 19:14:52.700135 systemd[1]: Stopping systemd-networkd.service... Feb 9 19:14:52.701756 systemd[1]: Stopping systemd-resolved.service... Feb 9 19:14:52.706646 systemd-networkd[1185]: eth0: DHCPv6 lease lost Feb 9 19:14:52.708958 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 9 19:14:52.709156 systemd[1]: Stopped systemd-networkd.service. Feb 9 19:14:52.738000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:52.740879 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 9 19:14:52.742815 systemd[1]: Stopped systemd-resolved.service. Feb 9 19:14:52.744000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:52.744000 audit: BPF prog-id=9 op=UNLOAD Feb 9 19:14:52.746448 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 9 19:14:52.746542 systemd[1]: Closed systemd-networkd.socket. Feb 9 19:14:52.748000 audit: BPF prog-id=6 op=UNLOAD Feb 9 19:14:52.752587 systemd[1]: Stopping network-cleanup.service... Feb 9 19:14:52.759661 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 9 19:14:52.760000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:52.760000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:52.765000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:52.759797 systemd[1]: Stopped parse-ip-for-networkd.service. Feb 9 19:14:52.763081 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 9 19:14:52.763168 systemd[1]: Stopped systemd-sysctl.service. Feb 9 19:14:52.764958 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 9 19:14:52.765038 systemd[1]: Stopped systemd-modules-load.service. Feb 9 19:14:52.768853 systemd[1]: Stopping systemd-udevd.service... Feb 9 19:14:52.772268 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Feb 9 19:14:52.783227 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 9 19:14:52.783696 systemd[1]: Stopped network-cleanup.service. Feb 9 19:14:52.789000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:52.795571 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 9 19:14:52.796053 systemd[1]: Stopped systemd-udevd.service. Feb 9 19:14:52.797000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:52.799669 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. 
Feb 9 19:14:52.799754 systemd[1]: Closed systemd-udevd-control.socket. Feb 9 19:14:52.802000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:52.802000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:52.808000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:52.824000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:52.802343 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 9 19:14:52.830000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:52.830000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:52.802417 systemd[1]: Closed systemd-udevd-kernel.socket. Feb 9 19:14:52.805676 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 9 19:14:52.805759 systemd[1]: Stopped dracut-pre-udev.service. Feb 9 19:14:52.807421 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 9 19:14:52.807497 systemd[1]: Stopped dracut-cmdline.service. Feb 9 19:14:52.809119 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 9 19:14:52.809192 systemd[1]: Stopped dracut-cmdline-ask.service. Feb 9 19:14:52.812111 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Feb 9 19:14:52.819708 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 9 19:14:52.819828 systemd[1]: Stopped systemd-vconsole-setup.service. Feb 9 19:14:52.829166 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 9 19:14:52.829376 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Feb 9 19:14:52.832830 systemd[1]: Reached target initrd-switch-root.target. Feb 9 19:14:52.895719 systemd-journald[308]: Received SIGTERM from PID 1 (n/a). Feb 9 19:14:52.895775 iscsid[1190]: iscsid shutting down. Feb 9 19:14:52.836045 systemd[1]: Starting initrd-switch-root.service... Feb 9 19:14:52.852965 systemd[1]: Switching root. Feb 9 19:14:52.908217 systemd-journald[308]: Journal stopped Feb 9 19:14:58.013757 kernel: SELinux: Class mctp_socket not defined in policy. Feb 9 19:14:58.013877 kernel: SELinux: Class anon_inode not defined in policy. 
Feb 9 19:14:58.013918 kernel: SELinux: the above unknown classes and permissions will be allowed Feb 9 19:14:58.013991 kernel: SELinux: policy capability network_peer_controls=1 Feb 9 19:14:58.014025 kernel: SELinux: policy capability open_perms=1 Feb 9 19:14:58.014056 kernel: SELinux: policy capability extended_socket_class=1 Feb 9 19:14:58.014091 kernel: SELinux: policy capability always_check_network=0 Feb 9 19:14:58.014123 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 9 19:14:58.014153 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 9 19:14:58.014183 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 9 19:14:58.014213 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 9 19:14:58.014245 systemd[1]: Successfully loaded SELinux policy in 95.751ms. Feb 9 19:14:58.014308 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 19.695ms. Feb 9 19:14:58.014343 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Feb 9 19:14:58.014376 systemd[1]: Detected virtualization amazon. Feb 9 19:14:58.014410 systemd[1]: Detected architecture arm64. Feb 9 19:14:58.014442 systemd[1]: Detected first boot. Feb 9 19:14:58.014474 systemd[1]: Initializing machine ID from VM UUID. Feb 9 19:14:58.014505 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Feb 9 19:14:58.014537 systemd[1]: Populated /etc with preset unit settings. Feb 9 19:14:58.014599 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 19:14:58.014636 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 19:14:58.014675 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 19:14:58.014716 kernel: kauditd_printk_skb: 47 callbacks suppressed Feb 9 19:14:58.014748 kernel: audit: type=1334 audit(1707506097.610:87): prog-id=12 op=LOAD Feb 9 19:14:58.014785 kernel: audit: type=1334 audit(1707506097.610:88): prog-id=3 op=UNLOAD Feb 9 19:14:58.014812 kernel: audit: type=1334 audit(1707506097.612:89): prog-id=13 op=LOAD Feb 9 19:14:58.014841 kernel: audit: type=1334 audit(1707506097.615:90): prog-id=14 op=LOAD Feb 9 19:14:58.014871 kernel: audit: type=1334 audit(1707506097.615:91): prog-id=4 op=UNLOAD Feb 9 19:14:58.014901 systemd[1]: iscsid.service: Deactivated successfully. Feb 9 19:14:58.014931 kernel: audit: type=1334 audit(1707506097.615:92): prog-id=5 op=UNLOAD Feb 9 19:14:58.014963 systemd[1]: Stopped iscsid.service. Feb 9 19:14:58.014993 kernel: audit: type=1334 audit(1707506097.617:93): prog-id=15 op=LOAD Feb 9 19:14:58.015023 kernel: audit: type=1334 audit(1707506097.617:94): prog-id=12 op=UNLOAD Feb 9 19:14:58.015056 kernel: audit: type=1334 audit(1707506097.619:95): prog-id=16 op=LOAD Feb 9 19:14:58.015086 kernel: audit: type=1334 audit(1707506097.622:96): prog-id=17 op=LOAD Feb 9 19:14:58.015118 systemd[1]: initrd-switch-root.service: Deactivated successfully. 
Feb 9 19:14:58.015156 systemd[1]: Stopped initrd-switch-root.service. Feb 9 19:14:58.015188 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Feb 9 19:14:58.015220 systemd[1]: Created slice system-addon\x2dconfig.slice. Feb 9 19:14:58.015257 systemd[1]: Created slice system-addon\x2drun.slice. Feb 9 19:14:58.015288 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice. Feb 9 19:14:58.015319 systemd[1]: Created slice system-getty.slice. Feb 9 19:14:58.015349 systemd[1]: Created slice system-modprobe.slice. Feb 9 19:14:58.015381 systemd[1]: Created slice system-serial\x2dgetty.slice. Feb 9 19:14:58.015411 systemd[1]: Created slice system-system\x2dcloudinit.slice. Feb 9 19:14:58.015442 systemd[1]: Created slice system-systemd\x2dfsck.slice. Feb 9 19:14:58.015475 systemd[1]: Created slice user.slice. Feb 9 19:14:58.015506 systemd[1]: Started systemd-ask-password-console.path. Feb 9 19:14:58.015536 systemd[1]: Started systemd-ask-password-wall.path. Feb 9 19:14:58.015584 systemd[1]: Set up automount boot.automount. Feb 9 19:14:58.015618 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Feb 9 19:14:58.015648 systemd[1]: Stopped target initrd-switch-root.target. Feb 9 19:14:58.015677 systemd[1]: Stopped target initrd-fs.target. Feb 9 19:14:58.015707 systemd[1]: Stopped target initrd-root-fs.target. Feb 9 19:14:58.015742 systemd[1]: Reached target integritysetup.target. Feb 9 19:14:58.015776 systemd[1]: Reached target remote-cryptsetup.target. Feb 9 19:14:58.015808 systemd[1]: Reached target remote-fs.target. Feb 9 19:14:58.015839 systemd[1]: Reached target slices.target. Feb 9 19:14:58.017152 systemd[1]: Reached target swap.target. Feb 9 19:14:58.017186 systemd[1]: Reached target torcx.target. Feb 9 19:14:58.017219 systemd[1]: Reached target veritysetup.target. Feb 9 19:14:58.017249 systemd[1]: Listening on systemd-coredump.socket. Feb 9 19:14:58.017278 systemd[1]: Listening on systemd-initctl.socket. Feb 9 19:14:58.017309 systemd[1]: Listening on systemd-networkd.socket. Feb 9 19:14:58.017340 systemd[1]: Listening on systemd-udevd-control.socket. Feb 9 19:14:58.017375 systemd[1]: Listening on systemd-udevd-kernel.socket. Feb 9 19:14:58.017406 systemd[1]: Listening on systemd-userdbd.socket. Feb 9 19:14:58.017437 systemd[1]: Mounting dev-hugepages.mount... Feb 9 19:14:58.017468 systemd[1]: Mounting dev-mqueue.mount... Feb 9 19:14:58.017497 systemd[1]: Mounting media.mount... Feb 9 19:14:58.017527 systemd[1]: Mounting sys-kernel-debug.mount... Feb 9 19:14:58.017615 systemd[1]: Mounting sys-kernel-tracing.mount... Feb 9 19:14:58.017650 systemd[1]: Mounting tmp.mount... Feb 9 19:14:58.017682 systemd[1]: Starting flatcar-tmpfiles.service... Feb 9 19:14:58.017744 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Feb 9 19:14:58.017810 systemd[1]: Starting kmod-static-nodes.service... Feb 9 19:14:58.017842 systemd[1]: Starting modprobe@configfs.service... Feb 9 19:14:58.017876 systemd[1]: Starting modprobe@dm_mod.service... Feb 9 19:14:58.017906 systemd[1]: Starting modprobe@drm.service... Feb 9 19:14:58.017935 systemd[1]: Starting modprobe@efi_pstore.service... Feb 9 19:14:58.017965 systemd[1]: Starting modprobe@fuse.service... Feb 9 19:14:58.017994 systemd[1]: Starting modprobe@loop.service... Feb 9 19:14:58.018028 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). 
Feb 9 19:14:58.018064 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Feb 9 19:14:58.018095 systemd[1]: Stopped systemd-fsck-root.service. Feb 9 19:14:58.018124 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Feb 9 19:14:58.018157 systemd[1]: Stopped systemd-fsck-usr.service. Feb 9 19:14:58.018186 systemd[1]: Stopped systemd-journald.service. Feb 9 19:14:58.018215 kernel: fuse: init (API version 7.34) Feb 9 19:14:58.018242 kernel: loop: module loaded Feb 9 19:14:58.018271 systemd[1]: Starting systemd-journald.service... Feb 9 19:14:58.018303 systemd[1]: Starting systemd-modules-load.service... Feb 9 19:14:58.018337 systemd[1]: Starting systemd-network-generator.service... Feb 9 19:14:58.018367 systemd[1]: Starting systemd-remount-fs.service... Feb 9 19:14:58.018398 systemd[1]: Starting systemd-udev-trigger.service... Feb 9 19:14:58.018431 systemd[1]: verity-setup.service: Deactivated successfully. Feb 9 19:14:58.018466 systemd[1]: Stopped verity-setup.service. Feb 9 19:14:58.018495 systemd[1]: Mounted dev-hugepages.mount. Feb 9 19:14:58.018524 systemd[1]: Mounted dev-mqueue.mount. Feb 9 19:14:58.018595 systemd[1]: Mounted media.mount. Feb 9 19:14:58.018629 systemd[1]: Mounted sys-kernel-debug.mount. Feb 9 19:14:58.018664 systemd[1]: Mounted sys-kernel-tracing.mount. Feb 9 19:14:58.018703 systemd[1]: Mounted tmp.mount. Feb 9 19:14:58.018737 systemd[1]: Finished kmod-static-nodes.service. Feb 9 19:14:58.018768 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 9 19:14:58.018801 systemd[1]: Finished modprobe@configfs.service. Feb 9 19:14:58.018834 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 9 19:14:58.018864 systemd[1]: Finished modprobe@dm_mod.service. Feb 9 19:14:58.018895 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 9 19:14:58.018925 systemd[1]: Finished modprobe@drm.service. Feb 9 19:14:58.018956 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 9 19:14:58.018991 systemd-journald[1489]: Journal started Feb 9 19:14:58.019087 systemd-journald[1489]: Runtime Journal (/run/log/journal/ec2e76cedf484bb0260609000e6fb769) is 8.0M, max 75.4M, 67.4M free. 
Feb 9 19:14:53.570000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Feb 9 19:14:53.708000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 9 19:14:53.708000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 9 19:14:53.708000 audit: BPF prog-id=10 op=LOAD Feb 9 19:14:53.708000 audit: BPF prog-id=10 op=UNLOAD Feb 9 19:14:53.708000 audit: BPF prog-id=11 op=LOAD Feb 9 19:14:53.708000 audit: BPF prog-id=11 op=UNLOAD Feb 9 19:14:53.873000 audit[1411]: AVC avc: denied { associate } for pid=1411 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Feb 9 19:14:53.873000 audit[1411]: SYSCALL arch=c00000b7 syscall=5 success=yes exit=0 a0=40001458d4 a1=40000c6de0 a2=40000cd0c0 a3=32 items=0 ppid=1394 pid=1411 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:14:53.873000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Feb 9 19:14:53.877000 audit[1411]: AVC avc: denied { associate } for pid=1411 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Feb 9 19:14:53.877000 audit[1411]: SYSCALL arch=c00000b7 syscall=34 success=yes exit=0 a0=ffffffffffffff9c a1=40001459b9 a2=1ed a3=0 items=2 ppid=1394 pid=1411 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:14:53.877000 audit: CWD cwd="/" Feb 9 19:14:53.877000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:14:53.877000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:14:53.877000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Feb 9 19:14:57.610000 audit: BPF prog-id=12 op=LOAD Feb 9 19:14:57.610000 audit: BPF prog-id=3 op=UNLOAD Feb 9 19:14:57.612000 audit: BPF prog-id=13 op=LOAD Feb 9 19:14:57.615000 audit: BPF prog-id=14 op=LOAD Feb 9 19:14:57.615000 audit: BPF prog-id=4 op=UNLOAD Feb 9 19:14:57.615000 audit: BPF prog-id=5 op=UNLOAD Feb 9 19:14:57.617000 audit: BPF prog-id=15 op=LOAD Feb 9 19:14:57.617000 audit: BPF prog-id=12 op=UNLOAD Feb 9 19:14:57.619000 audit: BPF 
prog-id=16 op=LOAD Feb 9 19:14:57.622000 audit: BPF prog-id=17 op=LOAD Feb 9 19:14:57.622000 audit: BPF prog-id=13 op=UNLOAD Feb 9 19:14:57.622000 audit: BPF prog-id=14 op=UNLOAD Feb 9 19:14:57.624000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:57.639000 audit: BPF prog-id=15 op=UNLOAD Feb 9 19:14:57.642000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:57.650000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:57.650000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:57.888000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:58.027110 systemd[1]: Finished modprobe@efi_pstore.service. Feb 9 19:14:57.895000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:57.906000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:57.906000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:57.907000 audit: BPF prog-id=18 op=LOAD Feb 9 19:14:57.908000 audit: BPF prog-id=19 op=LOAD Feb 9 19:14:57.908000 audit: BPF prog-id=20 op=LOAD Feb 9 19:14:57.908000 audit: BPF prog-id=16 op=UNLOAD Feb 9 19:14:57.908000 audit: BPF prog-id=17 op=UNLOAD Feb 9 19:14:57.950000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:57.989000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:57.999000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:57.999000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:14:58.003000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Feb 9 19:14:58.003000 audit[1489]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=6 a1=ffffd3ad1ee0 a2=4000 a3=1 items=0 ppid=1 pid=1489 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:14:58.003000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Feb 9 19:14:58.008000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:58.008000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:58.016000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:58.016000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:58.025000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:58.025000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:57.607760 systemd[1]: Queued start job for default target multi-user.target. Feb 9 19:14:53.855502 /usr/lib/systemd/system-generators/torcx-generator[1411]: time="2024-02-09T19:14:53Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 19:14:57.625277 systemd[1]: systemd-journald.service: Deactivated successfully. 
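
(Aside: the audit PROCTITLE values in this stretch are hex-encoded command lines, with argv entries separated by NUL bytes; the log truncates them mid-argument, hence the dangling "...6C61". A minimal Python sketch decoding the leading, untruncated portion of the torcx-generator record above:)

    # Audit PROCTITLE records are hex(argv entries joined by NUL bytes).
    # The hex below is the leading portion of the torcx-generator record above.
    raw = bytes.fromhex(
        "2F7573722F6C69622F73797374656D642F73797374656D2D67"   # /usr/lib/systemd/system-g...
        "656E657261746F72732F746F7263782D67656E657261746F72"   # ...enerators/torcx-generator
        "00"                                                   # argv separator
        "2F72756E2F73797374656D642F67656E657261746F72"         # /run/systemd/generator
    )
    print([part.decode() for part in raw.split(b"\x00")])
    # ['/usr/lib/systemd/system-generators/torcx-generator',
    #  '/run/systemd/generator']
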
Feb 9 19:14:53.863620 /usr/lib/systemd/system-generators/torcx-generator[1411]: time="2024-02-09T19:14:53Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Feb 9 19:14:53.863675 /usr/lib/systemd/system-generators/torcx-generator[1411]: time="2024-02-09T19:14:53Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Feb 9 19:14:53.863773 /usr/lib/systemd/system-generators/torcx-generator[1411]: time="2024-02-09T19:14:53Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Feb 9 19:14:53.863801 /usr/lib/systemd/system-generators/torcx-generator[1411]: time="2024-02-09T19:14:53Z" level=debug msg="skipped missing lower profile" missing profile=oem Feb 9 19:14:53.863869 /usr/lib/systemd/system-generators/torcx-generator[1411]: time="2024-02-09T19:14:53Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Feb 9 19:14:53.863952 /usr/lib/systemd/system-generators/torcx-generator[1411]: time="2024-02-09T19:14:53Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Feb 9 19:14:53.864816 /usr/lib/systemd/system-generators/torcx-generator[1411]: time="2024-02-09T19:14:53Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Feb 9 19:14:53.864911 /usr/lib/systemd/system-generators/torcx-generator[1411]: time="2024-02-09T19:14:53Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Feb 9 19:14:53.864948 /usr/lib/systemd/system-generators/torcx-generator[1411]: time="2024-02-09T19:14:53Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Feb 9 19:14:58.031679 systemd[1]: Started systemd-journald.service. Feb 9 19:14:53.874357 /usr/lib/systemd/system-generators/torcx-generator[1411]: time="2024-02-09T19:14:53Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Feb 9 19:14:58.030000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:58.033000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:58.033000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:58.036000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:58.036000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:14:53.874447 /usr/lib/systemd/system-generators/torcx-generator[1411]: time="2024-02-09T19:14:53Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Feb 9 19:14:58.032896 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 9 19:14:53.874495 /usr/lib/systemd/system-generators/torcx-generator[1411]: time="2024-02-09T19:14:53Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.2: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.2 Feb 9 19:14:58.033193 systemd[1]: Finished modprobe@fuse.service. Feb 9 19:14:53.874537 /usr/lib/systemd/system-generators/torcx-generator[1411]: time="2024-02-09T19:14:53Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Feb 9 19:14:58.035607 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 9 19:14:53.874619 /usr/lib/systemd/system-generators/torcx-generator[1411]: time="2024-02-09T19:14:53Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.2: no such file or directory" path=/var/lib/torcx/store/3510.3.2 Feb 9 19:14:58.035901 systemd[1]: Finished modprobe@loop.service. Feb 9 19:14:53.874658 /usr/lib/systemd/system-generators/torcx-generator[1411]: time="2024-02-09T19:14:53Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Feb 9 19:14:58.038306 systemd[1]: Finished systemd-modules-load.service. Feb 9 19:14:56.787975 /usr/lib/systemd/system-generators/torcx-generator[1411]: time="2024-02-09T19:14:56Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 9 19:14:56.788522 /usr/lib/systemd/system-generators/torcx-generator[1411]: time="2024-02-09T19:14:56Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 9 19:14:56.788787 /usr/lib/systemd/system-generators/torcx-generator[1411]: time="2024-02-09T19:14:56Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 9 19:14:56.789228 /usr/lib/systemd/system-generators/torcx-generator[1411]: time="2024-02-09T19:14:56Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 9 19:14:56.789333 /usr/lib/systemd/system-generators/torcx-generator[1411]: time="2024-02-09T19:14:56Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Feb 9 19:14:56.789471 /usr/lib/systemd/system-generators/torcx-generator[1411]: time="2024-02-09T19:14:56Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Feb 9 19:14:58.051000 
audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:58.053638 systemd[1]: Finished systemd-network-generator.service. Feb 9 19:14:58.054000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:58.056494 systemd[1]: Finished systemd-remount-fs.service. Feb 9 19:14:58.057000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:58.060031 systemd[1]: Reached target network-pre.target. Feb 9 19:14:58.064398 systemd[1]: Mounting sys-fs-fuse-connections.mount... Feb 9 19:14:58.074356 systemd[1]: Mounting sys-kernel-config.mount... Feb 9 19:14:58.076985 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 9 19:14:58.082882 systemd[1]: Starting systemd-hwdb-update.service... Feb 9 19:14:58.086921 systemd[1]: Starting systemd-journal-flush.service... Feb 9 19:14:58.088750 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 9 19:14:58.090961 systemd[1]: Starting systemd-random-seed.service... Feb 9 19:14:58.092841 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Feb 9 19:14:58.095384 systemd[1]: Starting systemd-sysctl.service... Feb 9 19:14:58.103070 systemd[1]: Finished flatcar-tmpfiles.service. Feb 9 19:14:58.103000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:58.105175 systemd[1]: Mounted sys-fs-fuse-connections.mount. Feb 9 19:14:58.108189 systemd[1]: Mounted sys-kernel-config.mount. Feb 9 19:14:58.126680 systemd[1]: Starting systemd-sysusers.service... Feb 9 19:14:58.135443 systemd-journald[1489]: Time spent on flushing to /var/log/journal/ec2e76cedf484bb0260609000e6fb769 is 62.738ms for 1176 entries. Feb 9 19:14:58.135443 systemd-journald[1489]: System Journal (/var/log/journal/ec2e76cedf484bb0260609000e6fb769) is 8.0M, max 195.6M, 187.6M free. Feb 9 19:14:58.247871 systemd-journald[1489]: Received client request to flush runtime journal. Feb 9 19:14:58.154000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:58.178000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:58.153873 systemd[1]: Finished systemd-random-seed.service. Feb 9 19:14:58.156047 systemd[1]: Reached target first-boot-complete.target. Feb 9 19:14:58.178204 systemd[1]: Finished systemd-sysctl.service. Feb 9 19:14:58.249897 systemd[1]: Finished systemd-journal-flush.service. 
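
(For scale, the journald flush statistics above work out to roughly 53 µs per entry; a one-line check using only the figures printed in the log:)

    # 62.738 ms to flush 1176 entries into the persistent journal:
    print(62.738 / 1176 * 1000)  # ~53.3 microseconds per journal entry
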
Feb 9 19:14:58.250000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:58.267872 systemd[1]: Finished systemd-udev-trigger.service. Feb 9 19:14:58.268000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:58.272123 systemd[1]: Starting systemd-udev-settle.service... Feb 9 19:14:58.283749 systemd[1]: Finished systemd-sysusers.service. Feb 9 19:14:58.284000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:58.295707 udevadm[1529]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Feb 9 19:14:59.030000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:59.031000 audit: BPF prog-id=21 op=LOAD Feb 9 19:14:59.031000 audit: BPF prog-id=22 op=LOAD Feb 9 19:14:59.031000 audit: BPF prog-id=7 op=UNLOAD Feb 9 19:14:59.031000 audit: BPF prog-id=8 op=UNLOAD Feb 9 19:14:59.029375 systemd[1]: Finished systemd-hwdb-update.service. Feb 9 19:14:59.034097 systemd[1]: Starting systemd-udevd.service... Feb 9 19:14:59.069947 systemd-udevd[1530]: Using default interface naming scheme 'v252'. Feb 9 19:14:59.121534 systemd[1]: Started systemd-udevd.service. Feb 9 19:14:59.122000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:59.123000 audit: BPF prog-id=23 op=LOAD Feb 9 19:14:59.128900 systemd[1]: Starting systemd-networkd.service... Feb 9 19:14:59.136000 audit: BPF prog-id=24 op=LOAD Feb 9 19:14:59.137000 audit: BPF prog-id=25 op=LOAD Feb 9 19:14:59.137000 audit: BPF prog-id=26 op=LOAD Feb 9 19:14:59.140145 systemd[1]: Starting systemd-userdbd.service... Feb 9 19:14:59.220248 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped. Feb 9 19:14:59.237283 systemd[1]: Started systemd-userdbd.service. Feb 9 19:14:59.237000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:59.254972 (udev-worker)[1536]: Network interface NamePolicy= disabled on kernel command line. Feb 9 19:14:59.385250 systemd-networkd[1537]: lo: Link UP Feb 9 19:14:59.385272 systemd-networkd[1537]: lo: Gained carrier Feb 9 19:14:59.386278 systemd-networkd[1537]: Enumeration completed Feb 9 19:14:59.386445 systemd[1]: Started systemd-networkd.service. Feb 9 19:14:59.386000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:59.390291 systemd[1]: Starting systemd-networkd-wait-online.service... 
Feb 9 19:14:59.393629 systemd-networkd[1537]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 9 19:14:59.398595 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 9 19:14:59.399078 systemd-networkd[1537]: eth0: Link UP Feb 9 19:14:59.399372 systemd-networkd[1537]: eth0: Gained carrier Feb 9 19:14:59.413941 systemd-networkd[1537]: eth0: DHCPv4 address 172.31.28.78/20, gateway 172.31.16.1 acquired from 172.31.16.1 Feb 9 19:14:59.472638 kernel: BTRFS info: devid 1 device path /dev/disk/by-label/OEM changed to /dev/nvme0n1p6 scanned by (udev-worker) (1544) Feb 9 19:14:59.584797 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Feb 9 19:14:59.587265 systemd[1]: Finished systemd-udev-settle.service. Feb 9 19:14:59.587000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:59.595316 systemd[1]: Starting lvm2-activation-early.service... Feb 9 19:14:59.636455 lvm[1649]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 9 19:14:59.672218 systemd[1]: Finished lvm2-activation-early.service. Feb 9 19:14:59.672000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:59.674255 systemd[1]: Reached target cryptsetup.target. Feb 9 19:14:59.678172 systemd[1]: Starting lvm2-activation.service... Feb 9 19:14:59.686415 lvm[1650]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 9 19:14:59.720238 systemd[1]: Finished lvm2-activation.service. Feb 9 19:14:59.722169 systemd[1]: Reached target local-fs-pre.target. Feb 9 19:14:59.720000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:59.723939 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 9 19:14:59.723988 systemd[1]: Reached target local-fs.target. Feb 9 19:14:59.725655 systemd[1]: Reached target machines.target. Feb 9 19:14:59.729485 systemd[1]: Starting ldconfig.service... Feb 9 19:14:59.732333 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Feb 9 19:14:59.732494 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 9 19:14:59.734812 systemd[1]: Starting systemd-boot-update.service... Feb 9 19:14:59.739543 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Feb 9 19:14:59.747833 systemd[1]: Starting systemd-machine-id-commit.service... Feb 9 19:14:59.749934 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met. Feb 9 19:14:59.750066 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met. Feb 9 19:14:59.752844 systemd[1]: Starting systemd-tmpfiles-setup.service... Feb 9 19:14:59.760654 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1652 (bootctl) Feb 9 19:14:59.763223 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... 
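
(eth0 above is picked up by the catch-all zz-default.network unit. A sketch of what such a unit typically contains, written from the behaviour visible in the log; the file Flatcar actually ships may differ in detail:)

    # /usr/lib/systemd/network/zz-default.network (sketch, not verbatim)
    [Match]
    Name=*

    [Network]
    DHCP=yes
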
Feb 9 19:14:59.790797 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Feb 9 19:14:59.789000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:59.806080 systemd-tmpfiles[1655]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Feb 9 19:14:59.816154 systemd-tmpfiles[1655]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 9 19:14:59.834738 systemd-tmpfiles[1655]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 9 19:14:59.884323 systemd-fsck[1660]: fsck.fat 4.2 (2021-01-31) Feb 9 19:14:59.884323 systemd-fsck[1660]: /dev/nvme0n1p1: 236 files, 113719/258078 clusters Feb 9 19:14:59.889266 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Feb 9 19:14:59.890000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:14:59.894082 systemd[1]: Mounting boot.mount... Feb 9 19:14:59.927933 systemd[1]: Mounted boot.mount. Feb 9 19:14:59.958768 systemd[1]: Finished systemd-boot-update.service. Feb 9 19:14:59.959000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:15:00.202541 systemd[1]: Finished systemd-tmpfiles-setup.service. Feb 9 19:15:00.203000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:15:00.207096 systemd[1]: Starting audit-rules.service... Feb 9 19:15:00.212077 systemd[1]: Starting clean-ca-certificates.service... Feb 9 19:15:00.217443 systemd[1]: Starting systemd-journal-catalog-update.service... Feb 9 19:15:00.220000 audit: BPF prog-id=27 op=LOAD Feb 9 19:15:00.227028 systemd[1]: Starting systemd-resolved.service... Feb 9 19:15:00.228000 audit: BPF prog-id=28 op=LOAD Feb 9 19:15:00.234455 systemd[1]: Starting systemd-timesyncd.service... Feb 9 19:15:00.238886 systemd[1]: Starting systemd-update-utmp.service... Feb 9 19:15:00.245481 systemd[1]: Finished clean-ca-certificates.service. Feb 9 19:15:00.247668 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 9 19:15:00.245000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:15:00.278000 audit[1683]: SYSTEM_BOOT pid=1683 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Feb 9 19:15:00.286959 systemd[1]: Finished systemd-update-utmp.service. 
Feb 9 19:15:00.287000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:15:00.333000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:15:00.332531 systemd[1]: Finished systemd-journal-catalog-update.service. Feb 9 19:15:00.374000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Feb 9 19:15:00.374000 audit[1697]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffdc858c40 a2=420 a3=0 items=0 ppid=1677 pid=1697 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:15:00.374000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Feb 9 19:15:00.376893 augenrules[1697]: No rules Feb 9 19:15:00.378364 systemd[1]: Finished audit-rules.service. Feb 9 19:15:00.413676 systemd[1]: Started systemd-timesyncd.service. Feb 9 19:15:00.415645 systemd[1]: Reached target time-set.target. Feb 9 19:15:00.451374 systemd-resolved[1681]: Positive Trust Anchors: Feb 9 19:15:00.452226 systemd-resolved[1681]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 9 19:15:00.452283 systemd-resolved[1681]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Feb 9 19:15:00.520743 systemd-networkd[1537]: eth0: Gained IPv6LL Feb 9 19:15:00.524438 systemd[1]: Finished systemd-networkd-wait-online.service. Feb 9 19:15:00.546847 systemd-timesyncd[1682]: Contacted time server 50.203.248.23:123 (0.flatcar.pool.ntp.org). Feb 9 19:15:00.546970 systemd-timesyncd[1682]: Initial clock synchronization to Fri 2024-02-09 19:15:00.869238 UTC. Feb 9 19:15:00.697856 systemd-resolved[1681]: Defaulting to hostname 'linux'. Feb 9 19:15:00.700761 systemd[1]: Started systemd-resolved.service. Feb 9 19:15:00.702696 systemd[1]: Reached target network.target. Feb 9 19:15:00.704420 systemd[1]: Reached target network-online.target. Feb 9 19:15:00.706183 systemd[1]: Reached target nss-lookup.target. Feb 9 19:15:01.456492 ldconfig[1651]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 9 19:15:01.467074 systemd[1]: Finished ldconfig.service. Feb 9 19:15:01.471130 systemd[1]: Starting systemd-update-done.service... Feb 9 19:15:01.494616 systemd[1]: Finished systemd-update-done.service. Feb 9 19:15:01.496735 systemd[1]: Reached target sysinit.target. Feb 9 19:15:01.498666 systemd[1]: Started motdgen.path. Feb 9 19:15:01.500550 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Feb 9 19:15:01.503239 systemd[1]: Started logrotate.timer. Feb 9 19:15:01.505011 systemd[1]: Started mdadm.timer. 
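
(The clock synchronization above is a single SNTP round trip against 0.flatcar.pool.ntp.org. A rough stdlib-only Python sketch of the same exchange — a toy client for illustration, not how systemd-timesyncd is implemented:)

    import socket, struct, time

    NTP_EPOCH_OFFSET = 2208988800  # seconds from 1900-01-01 to 1970-01-01

    def sntp_time(server="0.flatcar.pool.ntp.org", timeout=5.0):
        packet = b"\x1b" + 47 * b"\x00"  # LI=0, version 3, mode 3 (client)
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
            s.settimeout(timeout)
            s.sendto(packet, (server, 123))
            data, _ = s.recvfrom(48)
        secs = struct.unpack("!I", data[40:44])[0]  # server transmit timestamp
        return secs - NTP_EPOCH_OFFSET

    print(time.ctime(sntp_time()))
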
Feb 9 19:15:01.506461 systemd[1]: Started systemd-tmpfiles-clean.timer. Feb 9 19:15:01.508295 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 9 19:15:01.508348 systemd[1]: Reached target paths.target. Feb 9 19:15:01.510394 systemd[1]: Reached target timers.target. Feb 9 19:15:01.512531 systemd[1]: Listening on dbus.socket. Feb 9 19:15:01.516403 systemd[1]: Starting docker.socket... Feb 9 19:15:01.523404 systemd[1]: Listening on sshd.socket. Feb 9 19:15:01.525320 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 9 19:15:01.526214 systemd[1]: Listening on docker.socket. Feb 9 19:15:01.528105 systemd[1]: Reached target sockets.target. Feb 9 19:15:01.529837 systemd[1]: Reached target basic.target. Feb 9 19:15:01.531603 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 9 19:15:01.531663 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 9 19:15:01.533898 systemd[1]: Started amazon-ssm-agent.service. Feb 9 19:15:01.538378 systemd[1]: Starting containerd.service... Feb 9 19:15:01.543249 systemd[1]: Starting coreos-metadata-sshkeys@core.service... Feb 9 19:15:01.557926 systemd[1]: Starting dbus.service... Feb 9 19:15:01.588795 systemd[1]: Starting enable-oem-cloudinit.service... Feb 9 19:15:01.652967 jq[1717]: false Feb 9 19:15:01.595152 systemd[1]: Starting extend-filesystems.service... Feb 9 19:15:01.596946 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Feb 9 19:15:01.599358 systemd[1]: Starting motdgen.service... Feb 9 19:15:01.603537 systemd[1]: Started nvidia.service. Feb 9 19:15:01.608013 systemd[1]: Starting prepare-cni-plugins.service... Feb 9 19:15:01.612745 systemd[1]: Starting prepare-critools.service... Feb 9 19:15:01.616813 systemd[1]: Starting prepare-helm.service... Feb 9 19:15:01.621940 systemd[1]: Starting ssh-key-proc-cmdline.service... Feb 9 19:15:01.626344 systemd[1]: Starting sshd-keygen.service... Feb 9 19:15:01.693078 jq[1730]: true Feb 9 19:15:01.635921 systemd[1]: Starting systemd-logind.service... Feb 9 19:15:01.637590 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 9 19:15:01.637729 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 9 19:15:01.638675 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 9 19:15:01.748023 tar[1734]: ./ Feb 9 19:15:01.748023 tar[1734]: ./macvlan Feb 9 19:15:01.651044 systemd[1]: Starting update-engine.service... Feb 9 19:15:01.785477 jq[1735]: true Feb 9 19:15:01.655073 systemd[1]: Starting update-ssh-keys-after-ignition.service... Feb 9 19:15:01.785992 tar[1736]: linux-arm64/helm Feb 9 19:15:01.659737 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 9 19:15:01.664620 systemd[1]: Finished systemd-machine-id-commit.service. Feb 9 19:15:01.787054 tar[1742]: crictl Feb 9 19:15:01.670448 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. 
Feb 9 19:15:01.670919 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Feb 9 19:15:01.759363 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 9 19:15:01.759720 systemd[1]: Finished ssh-key-proc-cmdline.service. Feb 9 19:15:01.854241 dbus-daemon[1716]: [system] SELinux support is enabled Feb 9 19:15:01.857974 systemd[1]: Started dbus.service. Feb 9 19:15:01.863083 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 9 19:15:01.863151 systemd[1]: Reached target system-config.target. Feb 9 19:15:01.865076 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 9 19:15:01.865120 systemd[1]: Reached target user-config.target. Feb 9 19:15:01.886024 extend-filesystems[1718]: Found nvme0n1 Feb 9 19:15:01.893996 extend-filesystems[1718]: Found nvme0n1p2 Feb 9 19:15:01.897112 extend-filesystems[1718]: Found nvme0n1p1 Feb 9 19:15:01.899494 extend-filesystems[1718]: Found nvme0n1p3 Feb 9 19:15:01.901905 extend-filesystems[1718]: Found usr Feb 9 19:15:01.905949 extend-filesystems[1718]: Found nvme0n1p4 Feb 9 19:15:01.908433 extend-filesystems[1718]: Found nvme0n1p6 Feb 9 19:15:01.910827 dbus-daemon[1716]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.2' (uid=244 pid=1537 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Feb 9 19:15:01.911573 extend-filesystems[1718]: Found nvme0n1p7 Feb 9 19:15:01.914629 dbus-daemon[1716]: [system] Successfully activated service 'org.freedesktop.systemd1' Feb 9 19:15:01.914962 extend-filesystems[1718]: Found nvme0n1p9 Feb 9 19:15:01.916924 extend-filesystems[1718]: Checking size of /dev/nvme0n1p9 Feb 9 19:15:01.923114 systemd[1]: motdgen.service: Deactivated successfully. Feb 9 19:15:01.923484 systemd[1]: Finished motdgen.service. Feb 9 19:15:01.932564 systemd[1]: Starting systemd-hostnamed.service... Feb 9 19:15:02.014255 update_engine[1729]: I0209 19:15:02.013708 1729 main.cc:92] Flatcar Update Engine starting Feb 9 19:15:02.015379 extend-filesystems[1718]: Resized partition /dev/nvme0n1p9 Feb 9 19:15:02.026313 systemd[1]: Started update-engine.service. Feb 9 19:15:02.028143 update_engine[1729]: I0209 19:15:02.027424 1729 update_check_scheduler.cc:74] Next update check in 5m10s Feb 9 19:15:02.036397 extend-filesystems[1781]: resize2fs 1.46.5 (30-Dec-2021) Feb 9 19:15:02.039296 systemd[1]: Started locksmithd.service. Feb 9 19:15:02.050027 bash[1777]: Updated "/home/core/.ssh/authorized_keys" Feb 9 19:15:02.051503 systemd[1]: Finished update-ssh-keys-after-ignition.service. Feb 9 19:15:02.083629 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Feb 9 19:15:02.117346 amazon-ssm-agent[1713]: 2024/02/09 19:15:02 Failed to load instance info from vault. RegistrationKey does not exist. Feb 9 19:15:02.134759 amazon-ssm-agent[1713]: Initializing new seelog logger Feb 9 19:15:02.140741 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Feb 9 19:15:02.163954 amazon-ssm-agent[1713]: New Seelog Logger Creation Complete Feb 9 19:15:02.163954 amazon-ssm-agent[1713]: 2024/02/09 19:15:02 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. 
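
(The EXT4 on-line resize logged above, restated in more familiar units — 4 KiB blocks, figures taken straight from the kernel lines:)

    old_blocks, new_blocks, block_size = 553472, 1489915, 4096
    gib = 2 ** 30
    print(old_blocks * block_size / gib, "GiB ->",
          new_blocks * block_size / gib, "GiB")  # ~2.11 GiB -> ~5.68 GiB
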
Feb 9 19:15:02.163954 amazon-ssm-agent[1713]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Feb 9 19:15:02.168911 amazon-ssm-agent[1713]: 2024/02/09 19:15:02 processing appconfig overrides Feb 9 19:15:02.169053 extend-filesystems[1781]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Feb 9 19:15:02.169053 extend-filesystems[1781]: old_desc_blocks = 1, new_desc_blocks = 1 Feb 9 19:15:02.169053 extend-filesystems[1781]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Feb 9 19:15:02.191730 extend-filesystems[1718]: Resized filesystem in /dev/nvme0n1p9 Feb 9 19:15:02.188034 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 9 19:15:02.203785 env[1740]: time="2024-02-09T19:15:02.200639938Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Feb 9 19:15:02.188396 systemd[1]: Finished extend-filesystems.service. Feb 9 19:15:02.234384 tar[1734]: ./static Feb 9 19:15:02.248550 systemd-logind[1727]: Watching system buttons on /dev/input/event0 (Power Button) Feb 9 19:15:02.255875 systemd-logind[1727]: New seat seat0. Feb 9 19:15:02.268854 systemd[1]: Started systemd-logind.service. Feb 9 19:15:02.342281 systemd[1]: nvidia.service: Deactivated successfully. Feb 9 19:15:02.416538 dbus-daemon[1716]: [system] Successfully activated service 'org.freedesktop.hostname1' Feb 9 19:15:02.416823 systemd[1]: Started systemd-hostnamed.service. Feb 9 19:15:02.424121 dbus-daemon[1716]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1772 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Feb 9 19:15:02.429068 systemd[1]: Starting polkit.service... Feb 9 19:15:02.445547 env[1740]: time="2024-02-09T19:15:02.445484142Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 9 19:15:02.455064 env[1740]: time="2024-02-09T19:15:02.455000110Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 9 19:15:02.466134 polkitd[1811]: Started polkitd version 121 Feb 9 19:15:02.469274 tar[1734]: ./vlan Feb 9 19:15:02.487282 env[1740]: time="2024-02-09T19:15:02.487201849Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.148-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 9 19:15:02.487547 env[1740]: time="2024-02-09T19:15:02.487510272Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 9 19:15:02.488346 env[1740]: time="2024-02-09T19:15:02.488297267Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 9 19:15:02.488524 env[1740]: time="2024-02-09T19:15:02.488492463Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
type=io.containerd.snapshotter.v1 Feb 9 19:15:02.488677 env[1740]: time="2024-02-09T19:15:02.488644252Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Feb 9 19:15:02.489133 env[1740]: time="2024-02-09T19:15:02.489096214Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 9 19:15:02.489846 env[1740]: time="2024-02-09T19:15:02.489782556Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 9 19:15:02.491630 env[1740]: time="2024-02-09T19:15:02.491580131Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 9 19:15:02.496183 env[1740]: time="2024-02-09T19:15:02.496102275Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 9 19:15:02.497515 env[1740]: time="2024-02-09T19:15:02.497439552Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 9 19:15:02.499351 polkitd[1811]: Loading rules from directory /etc/polkit-1/rules.d Feb 9 19:15:02.499487 polkitd[1811]: Loading rules from directory /usr/share/polkit-1/rules.d Feb 9 19:15:02.500230 env[1740]: time="2024-02-09T19:15:02.500155574Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Feb 9 19:15:02.502716 env[1740]: time="2024-02-09T19:15:02.502648199Z" level=info msg="metadata content store policy set" policy=shared Feb 9 19:15:02.505682 polkitd[1811]: Finished loading, compiling and executing 2 rules Feb 9 19:15:02.508652 dbus-daemon[1716]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Feb 9 19:15:02.508943 systemd[1]: Started polkit.service. Feb 9 19:15:02.511900 polkitd[1811]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Feb 9 19:15:02.523619 env[1740]: time="2024-02-09T19:15:02.523452340Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 9 19:15:02.523850 env[1740]: time="2024-02-09T19:15:02.523803971Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 9 19:15:02.524086 env[1740]: time="2024-02-09T19:15:02.524039667Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 9 19:15:02.524312 env[1740]: time="2024-02-09T19:15:02.524247983Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 9 19:15:02.524468 env[1740]: time="2024-02-09T19:15:02.524437141Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 9 19:15:02.524819 env[1740]: time="2024-02-09T19:15:02.524781156Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 9 19:15:02.525002 env[1740]: time="2024-02-09T19:15:02.524970240Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 9 19:15:02.525858 env[1740]: time="2024-02-09T19:15:02.525815489Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." 
type=io.containerd.service.v1 Feb 9 19:15:02.526040 env[1740]: time="2024-02-09T19:15:02.526009306Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Feb 9 19:15:02.526190 env[1740]: time="2024-02-09T19:15:02.526159206Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 9 19:15:02.526341 env[1740]: time="2024-02-09T19:15:02.526309306Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 9 19:15:02.526523 env[1740]: time="2024-02-09T19:15:02.526492799Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 9 19:15:02.526978 env[1740]: time="2024-02-09T19:15:02.526942624Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 9 19:15:02.527436 env[1740]: time="2024-02-09T19:15:02.527396251Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 9 19:15:02.528123 env[1740]: time="2024-02-09T19:15:02.528083487Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 9 19:15:02.529395 env[1740]: time="2024-02-09T19:15:02.529304667Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 9 19:15:02.530002 env[1740]: time="2024-02-09T19:15:02.529958360Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 9 19:15:02.530813 env[1740]: time="2024-02-09T19:15:02.530491174Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 9 19:15:02.531191 env[1740]: time="2024-02-09T19:15:02.531153178Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 9 19:15:02.531781 env[1740]: time="2024-02-09T19:15:02.531738853Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 9 19:15:02.534198 env[1740]: time="2024-02-09T19:15:02.534146180Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 9 19:15:02.536613 env[1740]: time="2024-02-09T19:15:02.536527541Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 9 19:15:02.536876 env[1740]: time="2024-02-09T19:15:02.536841306Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 9 19:15:02.539157 env[1740]: time="2024-02-09T19:15:02.539108894Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 9 19:15:02.539462 env[1740]: time="2024-02-09T19:15:02.539395787Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 9 19:15:02.539755 env[1740]: time="2024-02-09T19:15:02.539706496Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 9 19:15:02.543401 env[1740]: time="2024-02-09T19:15:02.543301124Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 9 19:15:02.543673 env[1740]: time="2024-02-09T19:15:02.543640232Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." 
type=io.containerd.grpc.v1 Feb 9 19:15:02.543827 env[1740]: time="2024-02-09T19:15:02.543796829Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 9 19:15:02.543975 env[1740]: time="2024-02-09T19:15:02.543945487Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 9 19:15:02.544133 env[1740]: time="2024-02-09T19:15:02.544093536Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Feb 9 19:15:02.544279 env[1740]: time="2024-02-09T19:15:02.544248803Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 9 19:15:02.544436 env[1740]: time="2024-02-09T19:15:02.544404456Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Feb 9 19:15:02.544681 env[1740]: time="2024-02-09T19:15:02.544647270Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Feb 9 19:15:02.545780 env[1740]: time="2024-02-09T19:15:02.545622617Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 9 19:15:02.549501 env[1740]: time="2024-02-09T19:15:02.549405186Z" level=info msg="Connect containerd service" Feb 9 19:15:02.554989 env[1740]: time="2024-02-09T19:15:02.554927287Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 9 
19:15:02.556699 env[1740]: time="2024-02-09T19:15:02.556621040Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 9 19:15:02.559950 systemd-resolved[1681]: System hostname changed to 'ip-172-31-28-78'. Feb 9 19:15:02.559951 systemd-hostnamed[1772]: Hostname set to (transient) Feb 9 19:15:02.563835 env[1740]: time="2024-02-09T19:15:02.563755534Z" level=info msg="Start subscribing containerd event" Feb 9 19:15:02.570725 env[1740]: time="2024-02-09T19:15:02.570667699Z" level=info msg="Start recovering state" Feb 9 19:15:02.571010 env[1740]: time="2024-02-09T19:15:02.570982048Z" level=info msg="Start event monitor" Feb 9 19:15:02.571158 env[1740]: time="2024-02-09T19:15:02.571127612Z" level=info msg="Start snapshots syncer" Feb 9 19:15:02.571299 env[1740]: time="2024-02-09T19:15:02.571270183Z" level=info msg="Start cni network conf syncer for default" Feb 9 19:15:02.571430 env[1740]: time="2024-02-09T19:15:02.571402467Z" level=info msg="Start streaming server" Feb 9 19:15:02.572364 env[1740]: time="2024-02-09T19:15:02.572301074Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 9 19:15:02.572733 env[1740]: time="2024-02-09T19:15:02.572702560Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 9 19:15:02.588386 env[1740]: time="2024-02-09T19:15:02.588335212Z" level=info msg="containerd successfully booted in 0.427947s" Feb 9 19:15:02.588466 systemd[1]: Started containerd.service. Feb 9 19:15:02.715246 tar[1734]: ./portmap Feb 9 19:15:02.833499 coreos-metadata[1715]: Feb 09 19:15:02.833 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Feb 9 19:15:02.834976 coreos-metadata[1715]: Feb 09 19:15:02.834 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-keys: Attempt #1 Feb 9 19:15:02.836261 coreos-metadata[1715]: Feb 09 19:15:02.835 INFO Fetch successful Feb 9 19:15:02.836261 coreos-metadata[1715]: Feb 09 19:15:02.836 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-keys/0/openssh-key: Attempt #1 Feb 9 19:15:02.839630 coreos-metadata[1715]: Feb 09 19:15:02.839 INFO Fetch successful Feb 9 19:15:02.845507 unknown[1715]: wrote ssh authorized keys file for user: core Feb 9 19:15:02.891650 update-ssh-keys[1884]: Updated "/home/core/.ssh/authorized_keys" Feb 9 19:15:02.892698 systemd[1]: Finished coreos-metadata-sshkeys@core.service. 
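
(The coreos-metadata fetches above follow the AWS IMDSv2 pattern: PUT for a short-lived session token, then GET with the token header. A minimal standalone sketch of the same two requests — this is the documented IMDS API driven from Python, not the coreos-metadata source, and it only works from inside an EC2 instance:)

    import urllib.request

    IMDS = "http://169.254.169.254"

    def imds_get(path):
        # Step 1: PUT /latest/api/token to obtain a session token.
        tok_req = urllib.request.Request(
            IMDS + "/latest/api/token", method="PUT",
            headers={"X-aws-ec2-metadata-token-ttl-seconds": "60"})
        token = urllib.request.urlopen(tok_req).read().decode()
        # Step 2: GET the metadata path, presenting the token.
        req = urllib.request.Request(
            IMDS + path, headers={"X-aws-ec2-metadata-token": token})
        return urllib.request.urlopen(req).read().decode()

    print(imds_get("/2019-10-01/meta-data/public-keys/0/openssh-key"))
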
Feb 9 19:15:02.915374 tar[1734]: ./host-local Feb 9 19:15:03.080116 tar[1734]: ./vrf Feb 9 19:15:03.088281 amazon-ssm-agent[1713]: 2024-02-09 19:15:03 INFO Create new startup processor Feb 9 19:15:03.100347 amazon-ssm-agent[1713]: 2024-02-09 19:15:03 INFO [LongRunningPluginsManager] registered plugins: {} Feb 9 19:15:03.108656 amazon-ssm-agent[1713]: 2024-02-09 19:15:03 INFO Initializing bookkeeping folders Feb 9 19:15:03.109418 amazon-ssm-agent[1713]: 2024-02-09 19:15:03 INFO removing the completed state files Feb 9 19:15:03.109595 amazon-ssm-agent[1713]: 2024-02-09 19:15:03 INFO Initializing bookkeeping folders for long running plugins Feb 9 19:15:03.109747 amazon-ssm-agent[1713]: 2024-02-09 19:15:03 INFO Initializing replies folder for MDS reply requests that couldn't reach the service Feb 9 19:15:03.109946 amazon-ssm-agent[1713]: 2024-02-09 19:15:03 INFO Initializing healthcheck folders for long running plugins Feb 9 19:15:03.110124 amazon-ssm-agent[1713]: 2024-02-09 19:15:03 INFO Initializing locations for inventory plugin Feb 9 19:15:03.111774 amazon-ssm-agent[1713]: 2024-02-09 19:15:03 INFO Initializing default location for custom inventory Feb 9 19:15:03.117501 amazon-ssm-agent[1713]: 2024-02-09 19:15:03 INFO Initializing default location for file inventory Feb 9 19:15:03.118110 amazon-ssm-agent[1713]: 2024-02-09 19:15:03 INFO Initializing default location for role inventory Feb 9 19:15:03.118254 amazon-ssm-agent[1713]: 2024-02-09 19:15:03 INFO Init the cloudwatchlogs publisher Feb 9 19:15:03.118392 amazon-ssm-agent[1713]: 2024-02-09 19:15:03 INFO [instanceID=i-0343ed934ed0c69f5] Successfully loaded platform independent plugin aws:updateSsmAgent Feb 9 19:15:03.118529 amazon-ssm-agent[1713]: 2024-02-09 19:15:03 INFO [instanceID=i-0343ed934ed0c69f5] Successfully loaded platform independent plugin aws:runDockerAction Feb 9 19:15:03.118673 amazon-ssm-agent[1713]: 2024-02-09 19:15:03 INFO [instanceID=i-0343ed934ed0c69f5] Successfully loaded platform independent plugin aws:refreshAssociation Feb 9 19:15:03.118809 amazon-ssm-agent[1713]: 2024-02-09 19:15:03 INFO [instanceID=i-0343ed934ed0c69f5] Successfully loaded platform independent plugin aws:downloadContent Feb 9 19:15:03.118970 amazon-ssm-agent[1713]: 2024-02-09 19:15:03 INFO [instanceID=i-0343ed934ed0c69f5] Successfully loaded platform independent plugin aws:runDocument Feb 9 19:15:03.119111 amazon-ssm-agent[1713]: 2024-02-09 19:15:03 INFO [instanceID=i-0343ed934ed0c69f5] Successfully loaded platform independent plugin aws:softwareInventory Feb 9 19:15:03.119247 amazon-ssm-agent[1713]: 2024-02-09 19:15:03 INFO [instanceID=i-0343ed934ed0c69f5] Successfully loaded platform independent plugin aws:runPowerShellScript Feb 9 19:15:03.119391 amazon-ssm-agent[1713]: 2024-02-09 19:15:03 INFO [instanceID=i-0343ed934ed0c69f5] Successfully loaded platform independent plugin aws:configureDocker Feb 9 19:15:03.119525 amazon-ssm-agent[1713]: 2024-02-09 19:15:03 INFO [instanceID=i-0343ed934ed0c69f5] Successfully loaded platform independent plugin aws:configurePackage Feb 9 19:15:03.119665 amazon-ssm-agent[1713]: 2024-02-09 19:15:03 INFO [instanceID=i-0343ed934ed0c69f5] Successfully loaded platform dependent plugin aws:runShellScript Feb 9 19:15:03.119827 amazon-ssm-agent[1713]: 2024-02-09 19:15:03 INFO Starting Agent: amazon-ssm-agent - v2.3.1319.0 Feb 9 19:15:03.119966 amazon-ssm-agent[1713]: 2024-02-09 19:15:03 INFO OS: linux, Arch: arm64 Feb 9 19:15:03.125071 amazon-ssm-agent[1713]: 2024-02-09 19:15:03 INFO [HealthCheck] HealthCheck reporting 
agent health. Feb 9 19:15:03.134729 amazon-ssm-agent[1713]: datastore file /var/lib/amazon/ssm/i-0343ed934ed0c69f5/longrunningplugins/datastore/store doesn't exist - no long running plugins to execute Feb 9 19:15:03.223117 tar[1734]: ./bridge Feb 9 19:15:03.232696 amazon-ssm-agent[1713]: 2024-02-09 19:15:03 INFO [MessagingDeliveryService] Starting document processing engine... Feb 9 19:15:03.327810 amazon-ssm-agent[1713]: 2024-02-09 19:15:03 INFO [MessagingDeliveryService] [EngineProcessor] Starting Feb 9 19:15:03.376501 tar[1734]: ./tuning Feb 9 19:15:03.422358 amazon-ssm-agent[1713]: 2024-02-09 19:15:03 INFO [MessagingDeliveryService] [EngineProcessor] Initial processing Feb 9 19:15:03.435332 tar[1734]: ./firewall Feb 9 19:15:03.517099 amazon-ssm-agent[1713]: 2024-02-09 19:15:03 INFO [MessagingDeliveryService] Starting message polling Feb 9 19:15:03.541372 tar[1734]: ./host-device Feb 9 19:15:03.612043 amazon-ssm-agent[1713]: 2024-02-09 19:15:03 INFO [MessagingDeliveryService] Starting send replies to MDS Feb 9 19:15:03.664668 tar[1734]: ./sbr Feb 9 19:15:03.707169 amazon-ssm-agent[1713]: 2024-02-09 19:15:03 INFO [instanceID=i-0343ed934ed0c69f5] Starting association polling Feb 9 19:15:03.764100 tar[1734]: ./loopback Feb 9 19:15:03.802521 amazon-ssm-agent[1713]: 2024-02-09 19:15:03 INFO [MessagingDeliveryService] [Association] [EngineProcessor] Starting Feb 9 19:15:03.863408 tar[1734]: ./dhcp Feb 9 19:15:03.898035 amazon-ssm-agent[1713]: 2024-02-09 19:15:03 INFO [MessagingDeliveryService] [Association] Launching response handler Feb 9 19:15:03.993782 amazon-ssm-agent[1713]: 2024-02-09 19:15:03 INFO [MessagingDeliveryService] [Association] [EngineProcessor] Initial processing Feb 9 19:15:04.065749 systemd[1]: Finished prepare-critools.service. Feb 9 19:15:04.089761 amazon-ssm-agent[1713]: 2024-02-09 19:15:03 INFO [MessagingDeliveryService] [Association] Initializing association scheduling service Feb 9 19:15:04.141883 tar[1734]: ./ptp Feb 9 19:15:04.185888 amazon-ssm-agent[1713]: 2024-02-09 19:15:03 INFO [MessagingDeliveryService] [Association] Association scheduling service initialized Feb 9 19:15:04.218215 tar[1736]: linux-arm64/LICENSE Feb 9 19:15:04.218928 tar[1736]: linux-arm64/README.md Feb 9 19:15:04.235450 systemd[1]: Finished prepare-helm.service. Feb 9 19:15:04.252313 tar[1734]: ./ipvlan Feb 9 19:15:04.282188 amazon-ssm-agent[1713]: 2024-02-09 19:15:03 INFO [MessageGatewayService] Starting session document processing engine... Feb 9 19:15:04.314409 tar[1734]: ./bandwidth Feb 9 19:15:04.379353 amazon-ssm-agent[1713]: 2024-02-09 19:15:03 INFO [MessageGatewayService] [EngineProcessor] Starting Feb 9 19:15:04.392075 systemd[1]: Finished prepare-cni-plugins.service. Feb 9 19:15:04.476106 amazon-ssm-agent[1713]: 2024-02-09 19:15:03 INFO [MessageGatewayService] SSM Agent is trying to setup control channel for Session Manager module. Feb 9 19:15:04.504363 locksmithd[1782]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 9 19:15:04.573011 amazon-ssm-agent[1713]: 2024-02-09 19:15:03 INFO [MessageGatewayService] Setting up websocket for controlchannel for instance: i-0343ed934ed0c69f5, requestId: c1dd9fe2-67c5-484d-b227-a08d28bd3bdd Feb 9 19:15:04.671458 amazon-ssm-agent[1713]: 2024-02-09 19:15:03 INFO [OfflineService] Starting document processing engine... 
Feb 9 19:15:04.770931 amazon-ssm-agent[1713]: 2024-02-09 19:15:03 INFO [OfflineService] [EngineProcessor] Starting Feb 9 19:15:04.870822 amazon-ssm-agent[1713]: 2024-02-09 19:15:03 INFO [OfflineService] [EngineProcessor] Initial processing Feb 9 19:15:04.969909 amazon-ssm-agent[1713]: 2024-02-09 19:15:03 INFO [OfflineService] Starting message polling Feb 9 19:15:05.070104 amazon-ssm-agent[1713]: 2024-02-09 19:15:03 INFO [OfflineService] Starting send replies to MDS Feb 9 19:15:05.145892 sshd_keygen[1753]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 9 19:15:05.168167 amazon-ssm-agent[1713]: 2024-02-09 19:15:03 INFO [LongRunningPluginsManager] starting long running plugin manager Feb 9 19:15:05.182375 systemd[1]: Finished sshd-keygen.service. Feb 9 19:15:05.187005 systemd[1]: Starting issuegen.service... Feb 9 19:15:05.198403 systemd[1]: issuegen.service: Deactivated successfully. Feb 9 19:15:05.198811 systemd[1]: Finished issuegen.service. Feb 9 19:15:05.203451 systemd[1]: Starting systemd-user-sessions.service... Feb 9 19:15:05.216982 systemd[1]: Finished systemd-user-sessions.service. Feb 9 19:15:05.222075 systemd[1]: Started getty@tty1.service. Feb 9 19:15:05.226543 systemd[1]: Started serial-getty@ttyS0.service. Feb 9 19:15:05.228924 systemd[1]: Reached target getty.target. Feb 9 19:15:05.230773 systemd[1]: Reached target multi-user.target. Feb 9 19:15:05.235350 systemd[1]: Starting systemd-update-utmp-runlevel.service... Feb 9 19:15:05.251406 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Feb 9 19:15:05.251786 systemd[1]: Finished systemd-update-utmp-runlevel.service. Feb 9 19:15:05.254016 systemd[1]: Startup finished in 1.134s (kernel) + 11.851s (initrd) + 11.810s (userspace) = 24.796s. Feb 9 19:15:05.266462 amazon-ssm-agent[1713]: 2024-02-09 19:15:03 INFO [LongRunningPluginsManager] there aren't any long running plugin to execute Feb 9 19:15:05.364907 amazon-ssm-agent[1713]: 2024-02-09 19:15:03 INFO [MessageGatewayService] listening reply. 
Feb 9 19:15:05.465822 amazon-ssm-agent[1713]: 2024-02-09 19:15:03 INFO [LongRunningPluginsManager] There are no long running plugins currently getting executed - skipping their healthcheck Feb 9 19:15:05.565074 amazon-ssm-agent[1713]: 2024-02-09 19:15:03 INFO [StartupProcessor] Executing startup processor tasks Feb 9 19:15:05.664233 amazon-ssm-agent[1713]: 2024-02-09 19:15:03 INFO [StartupProcessor] Write to serial port: Amazon SSM Agent v2.3.1319.0 is running Feb 9 19:15:05.765732 amazon-ssm-agent[1713]: 2024-02-09 19:15:03 INFO [StartupProcessor] Write to serial port: OsProductName: Flatcar Container Linux by Kinvolk Feb 9 19:15:05.865835 amazon-ssm-agent[1713]: 2024-02-09 19:15:03 INFO [StartupProcessor] Write to serial port: OsVersion: 3510.3.2 Feb 9 19:15:05.967775 amazon-ssm-agent[1713]: 2024-02-09 19:15:03 INFO [MessageGatewayService] Opening websocket connection to: wss://ssmmessages.us-west-2.amazonaws.com/v1/control-channel/i-0343ed934ed0c69f5?role=subscribe&stream=input Feb 9 19:15:06.067619 amazon-ssm-agent[1713]: 2024-02-09 19:15:03 INFO [MessageGatewayService] Successfully opened websocket connection to: wss://ssmmessages.us-west-2.amazonaws.com/v1/control-channel/i-0343ed934ed0c69f5?role=subscribe&stream=input Feb 9 19:15:06.169218 amazon-ssm-agent[1713]: 2024-02-09 19:15:03 INFO [MessageGatewayService] Starting receiving message from control channel Feb 9 19:15:06.269803 amazon-ssm-agent[1713]: 2024-02-09 19:15:03 INFO [MessageGatewayService] [EngineProcessor] Initial processing Feb 9 19:15:07.203541 amazon-ssm-agent[1713]: 2024-02-09 19:15:07 INFO [MessagingDeliveryService] [Association] No associations on boot. Requerying for associations after 30 seconds. Feb 9 19:15:10.211214 systemd[1]: Created slice system-sshd.slice. Feb 9 19:15:10.213531 systemd[1]: Started sshd@0-172.31.28.78:22-147.75.109.163:43114.service. Feb 9 19:15:10.402962 sshd[1931]: Accepted publickey for core from 147.75.109.163 port 43114 ssh2: RSA SHA256:vbbYXSA+vx4OxGE8RCTI42TSNHgOaZKYEuMHy2EWP78 Feb 9 19:15:10.405874 sshd[1931]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:15:10.423184 systemd[1]: Created slice user-500.slice. Feb 9 19:15:10.425619 systemd[1]: Starting user-runtime-dir@500.service... Feb 9 19:15:10.434663 systemd-logind[1727]: New session 1 of user core. Feb 9 19:15:10.445545 systemd[1]: Finished user-runtime-dir@500.service. Feb 9 19:15:10.448773 systemd[1]: Starting user@500.service... Feb 9 19:15:10.455187 (systemd)[1934]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:15:10.628802 systemd[1934]: Queued start job for default target default.target. Feb 9 19:15:10.629842 systemd[1934]: Reached target paths.target. Feb 9 19:15:10.629894 systemd[1934]: Reached target sockets.target. Feb 9 19:15:10.629927 systemd[1934]: Reached target timers.target. Feb 9 19:15:10.629957 systemd[1934]: Reached target basic.target. Feb 9 19:15:10.630048 systemd[1934]: Reached target default.target. Feb 9 19:15:10.630113 systemd[1934]: Startup finished in 163ms. Feb 9 19:15:10.630127 systemd[1]: Started user@500.service. Feb 9 19:15:10.632145 systemd[1]: Started session-1.scope. Feb 9 19:15:10.787872 systemd[1]: Started sshd@1-172.31.28.78:22-147.75.109.163:43126.service. 
Feb 9 19:15:10.956681 sshd[1943]: Accepted publickey for core from 147.75.109.163 port 43126 ssh2: RSA SHA256:vbbYXSA+vx4OxGE8RCTI42TSNHgOaZKYEuMHy2EWP78 Feb 9 19:15:10.959157 sshd[1943]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:15:10.967691 systemd[1]: Started session-2.scope. Feb 9 19:15:10.968616 systemd-logind[1727]: New session 2 of user core. Feb 9 19:15:11.100049 sshd[1943]: pam_unix(sshd:session): session closed for user core Feb 9 19:15:11.105503 systemd-logind[1727]: Session 2 logged out. Waiting for processes to exit. Feb 9 19:15:11.106778 systemd[1]: sshd@1-172.31.28.78:22-147.75.109.163:43126.service: Deactivated successfully. Feb 9 19:15:11.108025 systemd[1]: session-2.scope: Deactivated successfully. Feb 9 19:15:11.109436 systemd-logind[1727]: Removed session 2. Feb 9 19:15:11.128927 systemd[1]: Started sshd@2-172.31.28.78:22-147.75.109.163:43142.service. Feb 9 19:15:11.302298 sshd[1949]: Accepted publickey for core from 147.75.109.163 port 43142 ssh2: RSA SHA256:vbbYXSA+vx4OxGE8RCTI42TSNHgOaZKYEuMHy2EWP78 Feb 9 19:15:11.305287 sshd[1949]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:15:11.313443 systemd[1]: Started session-3.scope. Feb 9 19:15:11.314631 systemd-logind[1727]: New session 3 of user core. Feb 9 19:15:11.437366 sshd[1949]: pam_unix(sshd:session): session closed for user core Feb 9 19:15:11.442802 systemd[1]: sshd@2-172.31.28.78:22-147.75.109.163:43142.service: Deactivated successfully. Feb 9 19:15:11.443171 systemd-logind[1727]: Session 3 logged out. Waiting for processes to exit. Feb 9 19:15:11.443998 systemd[1]: session-3.scope: Deactivated successfully. Feb 9 19:15:11.446389 systemd-logind[1727]: Removed session 3. Feb 9 19:15:11.465155 systemd[1]: Started sshd@3-172.31.28.78:22-147.75.109.163:43152.service. Feb 9 19:15:11.637272 sshd[1955]: Accepted publickey for core from 147.75.109.163 port 43152 ssh2: RSA SHA256:vbbYXSA+vx4OxGE8RCTI42TSNHgOaZKYEuMHy2EWP78 Feb 9 19:15:11.640230 sshd[1955]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:15:11.648454 systemd[1]: Started session-4.scope. Feb 9 19:15:11.649662 systemd-logind[1727]: New session 4 of user core. Feb 9 19:15:11.780474 sshd[1955]: pam_unix(sshd:session): session closed for user core Feb 9 19:15:11.785831 systemd-logind[1727]: Session 4 logged out. Waiting for processes to exit. Feb 9 19:15:11.785998 systemd[1]: session-4.scope: Deactivated successfully. Feb 9 19:15:11.787100 systemd[1]: sshd@3-172.31.28.78:22-147.75.109.163:43152.service: Deactivated successfully. Feb 9 19:15:11.789073 systemd-logind[1727]: Removed session 4. Feb 9 19:15:11.809019 systemd[1]: Started sshd@4-172.31.28.78:22-147.75.109.163:43168.service. Feb 9 19:15:11.984801 sshd[1961]: Accepted publickey for core from 147.75.109.163 port 43168 ssh2: RSA SHA256:vbbYXSA+vx4OxGE8RCTI42TSNHgOaZKYEuMHy2EWP78 Feb 9 19:15:11.987237 sshd[1961]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:15:11.995319 systemd-logind[1727]: New session 5 of user core. Feb 9 19:15:11.996222 systemd[1]: Started session-5.scope. Feb 9 19:15:12.118250 sudo[1964]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 9 19:15:12.119240 sudo[1964]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Feb 9 19:15:12.809718 systemd[1]: Starting docker.service... 
Feb 9 19:15:12.881673 env[1979]: time="2024-02-09T19:15:12.881575972Z" level=info msg="Starting up" Feb 9 19:15:12.886202 env[1979]: time="2024-02-09T19:15:12.886155441Z" level=info msg="parsed scheme: \"unix\"" module=grpc Feb 9 19:15:12.886408 env[1979]: time="2024-02-09T19:15:12.886378799Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Feb 9 19:15:12.886538 env[1979]: time="2024-02-09T19:15:12.886506272Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Feb 9 19:15:12.887339 env[1979]: time="2024-02-09T19:15:12.887303067Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Feb 9 19:15:12.890715 env[1979]: time="2024-02-09T19:15:12.890651403Z" level=info msg="parsed scheme: \"unix\"" module=grpc Feb 9 19:15:12.890715 env[1979]: time="2024-02-09T19:15:12.890696906Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Feb 9 19:15:12.890927 env[1979]: time="2024-02-09T19:15:12.890735093Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Feb 9 19:15:12.890927 env[1979]: time="2024-02-09T19:15:12.890760793Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Feb 9 19:15:12.899277 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport2532510948-merged.mount: Deactivated successfully. Feb 9 19:15:12.979526 env[1979]: time="2024-02-09T19:15:12.979477188Z" level=info msg="Loading containers: start." Feb 9 19:15:13.142752 kernel: Initializing XFRM netlink socket Feb 9 19:15:13.183211 env[1979]: time="2024-02-09T19:15:13.183166022Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Feb 9 19:15:13.187274 (udev-worker)[1989]: Network interface NamePolicy= disabled on kernel command line. Feb 9 19:15:13.281122 systemd-networkd[1537]: docker0: Link UP Feb 9 19:15:13.299827 env[1979]: time="2024-02-09T19:15:13.299781667Z" level=info msg="Loading containers: done." Feb 9 19:15:13.319942 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1050032549-merged.mount: Deactivated successfully. Feb 9 19:15:13.338770 env[1979]: time="2024-02-09T19:15:13.338698665Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 9 19:15:13.339411 env[1979]: time="2024-02-09T19:15:13.339360432Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Feb 9 19:15:13.339794 env[1979]: time="2024-02-09T19:15:13.339767051Z" level=info msg="Daemon has completed initialization" Feb 9 19:15:13.363844 systemd[1]: Started docker.service. Feb 9 19:15:13.373463 env[1979]: time="2024-02-09T19:15:13.373367676Z" level=info msg="API listen on /run/docker.sock" Feb 9 19:15:13.408625 systemd[1]: Reloading. 
Feb 9 19:15:13.534656 /usr/lib/systemd/system-generators/torcx-generator[2116]: time="2024-02-09T19:15:13Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 19:15:13.536731 /usr/lib/systemd/system-generators/torcx-generator[2116]: time="2024-02-09T19:15:13Z" level=info msg="torcx already run" Feb 9 19:15:13.697128 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 19:15:13.697169 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 19:15:13.735703 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 19:15:13.938965 systemd[1]: Started kubelet.service. Feb 9 19:15:14.093933 kubelet[2170]: E0209 19:15:14.093851 2170 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set" Feb 9 19:15:14.098467 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 9 19:15:14.098854 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 9 19:15:14.520988 env[1740]: time="2024-02-09T19:15:14.520926219Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.26.13\"" Feb 9 19:15:15.260161 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount63401579.mount: Deactivated successfully. 
Feb 9 19:15:17.581661 env[1740]: time="2024-02-09T19:15:17.581601004Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:15:17.584388 env[1740]: time="2024-02-09T19:15:17.584339057Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:d88fbf485621d26e515136c1848b666d7dfe0fa84ca7ebd826447b039d306d88,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:15:17.589132 env[1740]: time="2024-02-09T19:15:17.589082066Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:15:17.594152 env[1740]: time="2024-02-09T19:15:17.594104450Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:2f28bed4096abd572a56595ac0304238bdc271dcfe22c650707c09bf97ec16fd,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:15:17.596114 env[1740]: time="2024-02-09T19:15:17.596060165Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.26.13\" returns image reference \"sha256:d88fbf485621d26e515136c1848b666d7dfe0fa84ca7ebd826447b039d306d88\"" Feb 9 19:15:17.614515 env[1740]: time="2024-02-09T19:15:17.614441677Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.26.13\"" Feb 9 19:15:20.008247 env[1740]: time="2024-02-09T19:15:20.008150353Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:15:20.029741 env[1740]: time="2024-02-09T19:15:20.029675660Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:71d8e883014e0849ca9a3161bd1feac09ad210dea2f4140732e218f04a6826c2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:15:20.034356 env[1740]: time="2024-02-09T19:15:20.034303412Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:15:20.039228 env[1740]: time="2024-02-09T19:15:20.039165838Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:fda420c6c15cdd01c4eba3404f0662fe486a9c7f38fa13c741a21334673841a2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:15:20.040728 env[1740]: time="2024-02-09T19:15:20.040659138Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.26.13\" returns image reference \"sha256:71d8e883014e0849ca9a3161bd1feac09ad210dea2f4140732e218f04a6826c2\"" Feb 9 19:15:20.058220 env[1740]: time="2024-02-09T19:15:20.058170668Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.26.13\"" Feb 9 19:15:21.542233 env[1740]: time="2024-02-09T19:15:21.542177252Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:15:21.546430 env[1740]: time="2024-02-09T19:15:21.546352800Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:a636f3d6300bad4775ea80ad544e38f486a039732c4871bddc1db3a5336c871a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:15:21.550482 env[1740]: 
time="2024-02-09T19:15:21.550421707Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:15:21.553932 env[1740]: time="2024-02-09T19:15:21.553867736Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:c3c7303ee6d01c8e5a769db28661cf854b55175aa72c67e9b6a7b9d47ac42af3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:15:21.555769 env[1740]: time="2024-02-09T19:15:21.555706261Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.26.13\" returns image reference \"sha256:a636f3d6300bad4775ea80ad544e38f486a039732c4871bddc1db3a5336c871a\"" Feb 9 19:15:21.570975 env[1740]: time="2024-02-09T19:15:21.570927539Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.13\"" Feb 9 19:15:22.875679 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1658636898.mount: Deactivated successfully. Feb 9 19:15:23.544249 env[1740]: time="2024-02-09T19:15:23.544170184Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:15:23.547332 env[1740]: time="2024-02-09T19:15:23.547270490Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:95874282cd4f2ad9bc384735e604f0380cff88d61a2ca9db65890e6d9df46926,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:15:23.549933 env[1740]: time="2024-02-09T19:15:23.549881256Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:15:23.552144 env[1740]: time="2024-02-09T19:15:23.552099914Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:f6e0de32a002b910b9b2e0e8d769e2d7b05208240559c745ce4781082ab15f22,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:15:23.553064 env[1740]: time="2024-02-09T19:15:23.553020453Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.13\" returns image reference \"sha256:95874282cd4f2ad9bc384735e604f0380cff88d61a2ca9db65890e6d9df46926\"" Feb 9 19:15:23.569785 env[1740]: time="2024-02-09T19:15:23.569736247Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Feb 9 19:15:24.043987 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1929582809.mount: Deactivated successfully. 
Feb 9 19:15:24.054346 env[1740]: time="2024-02-09T19:15:24.054271673Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:15:24.058494 env[1740]: time="2024-02-09T19:15:24.058431712Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:15:24.062266 env[1740]: time="2024-02-09T19:15:24.062208824Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:15:24.064991 env[1740]: time="2024-02-09T19:15:24.064941283Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:15:24.066026 env[1740]: time="2024-02-09T19:15:24.065982570Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Feb 9 19:15:24.081923 env[1740]: time="2024-02-09T19:15:24.081855741Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.6-0\"" Feb 9 19:15:24.171962 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 9 19:15:24.172298 systemd[1]: Stopped kubelet.service. Feb 9 19:15:24.174971 systemd[1]: Started kubelet.service. Feb 9 19:15:24.265789 kubelet[2213]: E0209 19:15:24.265686 2213 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set" Feb 9 19:15:24.273643 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 9 19:15:24.273995 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 9 19:15:25.271487 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3827416586.mount: Deactivated successfully. 
Feb 9 19:15:28.409089 env[1740]: time="2024-02-09T19:15:28.409009209Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.6-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:15:28.413717 env[1740]: time="2024-02-09T19:15:28.413656015Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ef245802824036d4a23ba6f8b3f04c055416f9dc73a54d546b1f98ad16f6b8cb,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:15:28.415875 env[1740]: time="2024-02-09T19:15:28.415832101Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.6-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:15:28.418959 env[1740]: time="2024-02-09T19:15:28.418900993Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:15:28.420988 env[1740]: time="2024-02-09T19:15:28.420924404Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.6-0\" returns image reference \"sha256:ef245802824036d4a23ba6f8b3f04c055416f9dc73a54d546b1f98ad16f6b8cb\"" Feb 9 19:15:28.437833 env[1740]: time="2024-02-09T19:15:28.437762383Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.9.3\"" Feb 9 19:15:29.014332 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2341041892.mount: Deactivated successfully. Feb 9 19:15:29.745181 env[1740]: time="2024-02-09T19:15:29.745122818Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.9.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:15:29.748130 env[1740]: time="2024-02-09T19:15:29.748082141Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b19406328e70dd2f6a36d6dbe4e867b0684ced2fdeb2f02ecb54ead39ec0bac0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:15:29.750759 env[1740]: time="2024-02-09T19:15:29.750711363Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.9.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:15:29.753041 env[1740]: time="2024-02-09T19:15:29.752995435Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:15:29.754287 env[1740]: time="2024-02-09T19:15:29.754235311Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.9.3\" returns image reference \"sha256:b19406328e70dd2f6a36d6dbe4e867b0684ced2fdeb2f02ecb54ead39ec0bac0\"" Feb 9 19:15:32.593217 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Feb 9 19:15:34.421899 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Feb 9 19:15:34.422241 systemd[1]: Stopped kubelet.service. Feb 9 19:15:34.431253 systemd[1]: Started kubelet.service. 
Feb 9 19:15:34.532880 kubelet[2281]: E0209 19:15:34.532809 2281 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set" Feb 9 19:15:34.537086 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 9 19:15:34.537413 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 9 19:15:37.223834 amazon-ssm-agent[1713]: 2024-02-09 19:15:37 INFO [MessagingDeliveryService] [Association] Schedule manager refreshed with 0 associations, 0 new associations associated Feb 9 19:15:38.381742 systemd[1]: Stopped kubelet.service. Feb 9 19:15:38.415102 systemd[1]: Reloading. Feb 9 19:15:38.538368 /usr/lib/systemd/system-generators/torcx-generator[2313]: time="2024-02-09T19:15:38Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 19:15:38.538432 /usr/lib/systemd/system-generators/torcx-generator[2313]: time="2024-02-09T19:15:38Z" level=info msg="torcx already run" Feb 9 19:15:38.694855 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 19:15:38.694895 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 19:15:38.732831 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 19:15:38.953720 systemd[1]: Started kubelet.service. Feb 9 19:15:39.059976 kubelet[2366]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 9 19:15:39.060498 kubelet[2366]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 19:15:39.060785 kubelet[2366]: I0209 19:15:39.060733 2366 server.go:198] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 9 19:15:39.063183 kubelet[2366]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 9 19:15:39.063328 kubelet[2366]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 19:15:39.120665 amazon-ssm-agent[1713]: 2024-02-09 19:15:39 INFO [HealthCheck] HealthCheck reporting agent health. 
Feb 9 19:15:41.644370 kubelet[2366]: I0209 19:15:41.644331 2366 server.go:412] "Kubelet version" kubeletVersion="v1.26.5" Feb 9 19:15:41.645026 kubelet[2366]: I0209 19:15:41.645001 2366 server.go:414] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 9 19:15:41.645480 kubelet[2366]: I0209 19:15:41.645456 2366 server.go:836] "Client rotation is on, will bootstrap in background" Feb 9 19:15:41.651345 kubelet[2366]: E0209 19:15:41.651297 2366 certificate_manager.go:471] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.28.78:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.28.78:6443: connect: connection refused Feb 9 19:15:41.651511 kubelet[2366]: I0209 19:15:41.651373 2366 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 9 19:15:41.654795 kubelet[2366]: W0209 19:15:41.654768 2366 machine.go:65] Cannot read vendor id correctly, set empty. Feb 9 19:15:41.656138 kubelet[2366]: I0209 19:15:41.656115 2366 server.go:659] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Feb 9 19:15:41.656802 kubelet[2366]: I0209 19:15:41.656780 2366 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 9 19:15:41.657029 kubelet[2366]: I0209 19:15:41.656995 2366 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:} {Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]} Feb 9 19:15:41.657249 kubelet[2366]: I0209 19:15:41.657213 2366 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Feb 9 19:15:41.657406 kubelet[2366]: I0209 19:15:41.657385 2366 container_manager_linux.go:308] "Creating device plugin manager" Feb 9 19:15:41.657707 kubelet[2366]: I0209 19:15:41.657687 2366 state_mem.go:36] "Initialized new in-memory state store" Feb 9 19:15:41.664030 kubelet[2366]: I0209 19:15:41.663421 2366 kubelet.go:398] "Attempting to sync node with API server" Feb 9 19:15:41.664272 kubelet[2366]: I0209 19:15:41.664250 2366 kubelet.go:286] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 9 19:15:41.664476 kubelet[2366]: W0209 19:15:41.664135 
2366 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://172.31.28.78:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-28-78&limit=500&resourceVersion=0": dial tcp 172.31.28.78:6443: connect: connection refused Feb 9 19:15:41.664652 kubelet[2366]: E0209 19:15:41.664631 2366 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.28.78:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-28-78&limit=500&resourceVersion=0": dial tcp 172.31.28.78:6443: connect: connection refused Feb 9 19:15:41.664793 kubelet[2366]: I0209 19:15:41.664772 2366 kubelet.go:297] "Adding apiserver pod source" Feb 9 19:15:41.665652 kubelet[2366]: I0209 19:15:41.665614 2366 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 9 19:15:41.667168 kubelet[2366]: I0209 19:15:41.667121 2366 kuberuntime_manager.go:244] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 9 19:15:41.667818 kubelet[2366]: W0209 19:15:41.667781 2366 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Feb 9 19:15:41.668528 kubelet[2366]: I0209 19:15:41.668481 2366 server.go:1186] "Started kubelet" Feb 9 19:15:41.668741 kubelet[2366]: W0209 19:15:41.668674 2366 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://172.31.28.78:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.28.78:6443: connect: connection refused Feb 9 19:15:41.668834 kubelet[2366]: E0209 19:15:41.668752 2366 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.28.78:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.28.78:6443: connect: connection refused Feb 9 19:15:41.673670 kubelet[2366]: E0209 19:15:41.673516 2366 event.go:276] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ip-172-31-28-78.17b247ca01f20922", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ip-172-31-28-78", UID:"ip-172-31-28-78", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"ip-172-31-28-78"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 15, 41, 668444450, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 15, 41, 668444450, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Post "https://172.31.28.78:6443/api/v1/namespaces/default/events": dial tcp 172.31.28.78:6443: connect: connection refused'(may retry after sleeping) Feb 9 19:15:41.674529 kubelet[2366]: I0209 19:15:41.674497 2366 server.go:161] "Starting to listen" address="0.0.0.0" port=10250 Feb 9 
19:15:41.674723 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). Feb 9 19:15:41.675660 kubelet[2366]: I0209 19:15:41.675613 2366 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 9 19:15:41.676218 kubelet[2366]: I0209 19:15:41.676191 2366 server.go:451] "Adding debug handlers to kubelet server" Feb 9 19:15:41.678038 kubelet[2366]: E0209 19:15:41.677999 2366 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 9 19:15:41.678261 kubelet[2366]: E0209 19:15:41.678239 2366 kubelet.go:1386] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 9 19:15:41.681274 kubelet[2366]: I0209 19:15:41.681212 2366 volume_manager.go:293] "Starting Kubelet Volume Manager" Feb 9 19:15:41.683178 kubelet[2366]: E0209 19:15:41.683133 2366 controller.go:146] failed to ensure lease exists, will retry in 200ms, error: Get "https://172.31.28.78:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-28-78?timeout=10s": dial tcp 172.31.28.78:6443: connect: connection refused Feb 9 19:15:41.683430 kubelet[2366]: I0209 19:15:41.683405 2366 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 9 19:15:41.684408 kubelet[2366]: W0209 19:15:41.684346 2366 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://172.31.28.78:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.28.78:6443: connect: connection refused Feb 9 19:15:41.684629 kubelet[2366]: E0209 19:15:41.684606 2366 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.28.78:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.28.78:6443: connect: connection refused Feb 9 19:15:41.750471 kubelet[2366]: I0209 19:15:41.750436 2366 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 9 19:15:41.750736 kubelet[2366]: I0209 19:15:41.750714 2366 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 9 19:15:41.750871 kubelet[2366]: I0209 19:15:41.750850 2366 state_mem.go:36] "Initialized new in-memory state store" Feb 9 19:15:41.753957 kubelet[2366]: I0209 19:15:41.753921 2366 policy_none.go:49] "None policy: Start" Feb 9 19:15:41.755145 kubelet[2366]: I0209 19:15:41.755114 2366 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 9 19:15:41.755362 kubelet[2366]: I0209 19:15:41.755341 2366 state_mem.go:35] "Initializing new in-memory state store" Feb 9 19:15:41.764345 systemd[1]: Created slice kubepods.slice. Feb 9 19:15:41.775418 systemd[1]: Created slice kubepods-burstable.slice. Feb 9 19:15:41.782336 systemd[1]: Created slice kubepods-besteffort.slice. 
Feb 9 19:15:41.786758 kubelet[2366]: I0209 19:15:41.786723 2366 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-28-78" Feb 9 19:15:41.787747 kubelet[2366]: E0209 19:15:41.787693 2366 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://172.31.28.78:6443/api/v1/nodes\": dial tcp 172.31.28.78:6443: connect: connection refused" node="ip-172-31-28-78" Feb 9 19:15:41.792256 kubelet[2366]: I0209 19:15:41.792217 2366 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 9 19:15:41.793753 kubelet[2366]: I0209 19:15:41.793580 2366 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 9 19:15:41.794379 kubelet[2366]: E0209 19:15:41.794265 2366 eviction_manager.go:261] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-28-78\" not found" Feb 9 19:15:41.808473 kubelet[2366]: I0209 19:15:41.808439 2366 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4 Feb 9 19:15:41.850822 kubelet[2366]: I0209 19:15:41.850786 2366 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv6 Feb 9 19:15:41.851047 kubelet[2366]: I0209 19:15:41.851025 2366 status_manager.go:176] "Starting to sync pod status with apiserver" Feb 9 19:15:41.851178 kubelet[2366]: I0209 19:15:41.851156 2366 kubelet.go:2113] "Starting kubelet main sync loop" Feb 9 19:15:41.851348 kubelet[2366]: E0209 19:15:41.851328 2366 kubelet.go:2137] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Feb 9 19:15:41.852168 kubelet[2366]: W0209 19:15:41.852129 2366 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://172.31.28.78:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.28.78:6443: connect: connection refused Feb 9 19:15:41.852431 kubelet[2366]: E0209 19:15:41.852408 2366 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.28.78:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.28.78:6443: connect: connection refused Feb 9 19:15:41.883903 kubelet[2366]: E0209 19:15:41.883855 2366 controller.go:146] failed to ensure lease exists, will retry in 400ms, error: Get "https://172.31.28.78:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-28-78?timeout=10s": dial tcp 172.31.28.78:6443: connect: connection refused Feb 9 19:15:41.952237 kubelet[2366]: I0209 19:15:41.952206 2366 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:15:41.954444 kubelet[2366]: I0209 19:15:41.954413 2366 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:15:41.956654 kubelet[2366]: I0209 19:15:41.956623 2366 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:15:41.957255 kubelet[2366]: I0209 19:15:41.957212 2366 status_manager.go:698] "Failed to get status for pod" podUID=fc41a96ae119d771e18e9ab5a3bf11c9 pod="kube-system/kube-apiserver-ip-172-31-28-78" err="Get \"https://172.31.28.78:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-ip-172-31-28-78\": dial tcp 172.31.28.78:6443: connect: connection refused" Feb 9 19:15:41.964457 kubelet[2366]: I0209 19:15:41.964423 2366 status_manager.go:698] "Failed to get status for pod" podUID=a137becf45409eeddff2c6a42eed9ab1 pod="kube-system/kube-scheduler-ip-172-31-28-78" err="Get 
\"https://172.31.28.78:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-ip-172-31-28-78\": dial tcp 172.31.28.78:6443: connect: connection refused" Feb 9 19:15:41.966212 kubelet[2366]: I0209 19:15:41.965448 2366 status_manager.go:698] "Failed to get status for pod" podUID=d480c9116c569199628d5f58b0e661d5 pod="kube-system/kube-controller-manager-ip-172-31-28-78" err="Get \"https://172.31.28.78:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ip-172-31-28-78\": dial tcp 172.31.28.78:6443: connect: connection refused" Feb 9 19:15:41.968464 systemd[1]: Created slice kubepods-burstable-podfc41a96ae119d771e18e9ab5a3bf11c9.slice. Feb 9 19:15:41.982623 systemd[1]: Created slice kubepods-burstable-poda137becf45409eeddff2c6a42eed9ab1.slice. Feb 9 19:15:41.988498 kubelet[2366]: I0209 19:15:41.988449 2366 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d480c9116c569199628d5f58b0e661d5-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-28-78\" (UID: \"d480c9116c569199628d5f58b0e661d5\") " pod="kube-system/kube-controller-manager-ip-172-31-28-78" Feb 9 19:15:41.988668 kubelet[2366]: I0209 19:15:41.988532 2366 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a137becf45409eeddff2c6a42eed9ab1-kubeconfig\") pod \"kube-scheduler-ip-172-31-28-78\" (UID: \"a137becf45409eeddff2c6a42eed9ab1\") " pod="kube-system/kube-scheduler-ip-172-31-28-78" Feb 9 19:15:41.988668 kubelet[2366]: I0209 19:15:41.988610 2366 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fc41a96ae119d771e18e9ab5a3bf11c9-ca-certs\") pod \"kube-apiserver-ip-172-31-28-78\" (UID: \"fc41a96ae119d771e18e9ab5a3bf11c9\") " pod="kube-system/kube-apiserver-ip-172-31-28-78" Feb 9 19:15:41.988668 kubelet[2366]: I0209 19:15:41.988658 2366 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d480c9116c569199628d5f58b0e661d5-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-28-78\" (UID: \"d480c9116c569199628d5f58b0e661d5\") " pod="kube-system/kube-controller-manager-ip-172-31-28-78" Feb 9 19:15:41.988882 kubelet[2366]: I0209 19:15:41.988708 2366 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d480c9116c569199628d5f58b0e661d5-kubeconfig\") pod \"kube-controller-manager-ip-172-31-28-78\" (UID: \"d480c9116c569199628d5f58b0e661d5\") " pod="kube-system/kube-controller-manager-ip-172-31-28-78" Feb 9 19:15:41.988882 kubelet[2366]: I0209 19:15:41.988753 2366 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d480c9116c569199628d5f58b0e661d5-k8s-certs\") pod \"kube-controller-manager-ip-172-31-28-78\" (UID: \"d480c9116c569199628d5f58b0e661d5\") " pod="kube-system/kube-controller-manager-ip-172-31-28-78" Feb 9 19:15:41.988882 kubelet[2366]: I0209 19:15:41.988814 2366 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fc41a96ae119d771e18e9ab5a3bf11c9-k8s-certs\") pod \"kube-apiserver-ip-172-31-28-78\" (UID: 
\"fc41a96ae119d771e18e9ab5a3bf11c9\") " pod="kube-system/kube-apiserver-ip-172-31-28-78" Feb 9 19:15:41.988882 kubelet[2366]: I0209 19:15:41.988861 2366 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fc41a96ae119d771e18e9ab5a3bf11c9-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-28-78\" (UID: \"fc41a96ae119d771e18e9ab5a3bf11c9\") " pod="kube-system/kube-apiserver-ip-172-31-28-78" Feb 9 19:15:41.989108 kubelet[2366]: I0209 19:15:41.988913 2366 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d480c9116c569199628d5f58b0e661d5-ca-certs\") pod \"kube-controller-manager-ip-172-31-28-78\" (UID: \"d480c9116c569199628d5f58b0e661d5\") " pod="kube-system/kube-controller-manager-ip-172-31-28-78" Feb 9 19:15:41.991596 kubelet[2366]: I0209 19:15:41.990639 2366 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-28-78" Feb 9 19:15:41.991835 kubelet[2366]: E0209 19:15:41.991802 2366 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://172.31.28.78:6443/api/v1/nodes\": dial tcp 172.31.28.78:6443: connect: connection refused" node="ip-172-31-28-78" Feb 9 19:15:41.994222 systemd[1]: Created slice kubepods-burstable-podd480c9116c569199628d5f58b0e661d5.slice. Feb 9 19:15:42.280123 env[1740]: time="2024-02-09T19:15:42.278587032Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-28-78,Uid:fc41a96ae119d771e18e9ab5a3bf11c9,Namespace:kube-system,Attempt:0,}" Feb 9 19:15:42.285081 kubelet[2366]: E0209 19:15:42.285017 2366 controller.go:146] failed to ensure lease exists, will retry in 800ms, error: Get "https://172.31.28.78:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-28-78?timeout=10s": dial tcp 172.31.28.78:6443: connect: connection refused Feb 9 19:15:42.291396 env[1740]: time="2024-02-09T19:15:42.291084928Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-28-78,Uid:a137becf45409eeddff2c6a42eed9ab1,Namespace:kube-system,Attempt:0,}" Feb 9 19:15:42.300239 env[1740]: time="2024-02-09T19:15:42.299830199Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-28-78,Uid:d480c9116c569199628d5f58b0e661d5,Namespace:kube-system,Attempt:0,}" Feb 9 19:15:42.395588 kubelet[2366]: I0209 19:15:42.395513 2366 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-28-78" Feb 9 19:15:42.396227 kubelet[2366]: E0209 19:15:42.396195 2366 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://172.31.28.78:6443/api/v1/nodes\": dial tcp 172.31.28.78:6443: connect: connection refused" node="ip-172-31-28-78" Feb 9 19:15:42.688927 kubelet[2366]: W0209 19:15:42.688845 2366 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://172.31.28.78:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.28.78:6443: connect: connection refused Feb 9 19:15:42.688927 kubelet[2366]: E0209 19:15:42.688931 2366 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.28.78:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.28.78:6443: connect: connection refused Feb 9 19:15:42.821054 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount3435420085.mount: Deactivated successfully. Feb 9 19:15:42.829661 env[1740]: time="2024-02-09T19:15:42.829609237Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:15:42.836973 env[1740]: time="2024-02-09T19:15:42.836926843Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:15:42.840947 env[1740]: time="2024-02-09T19:15:42.840885172Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:15:42.843937 env[1740]: time="2024-02-09T19:15:42.843877738Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:15:42.845795 env[1740]: time="2024-02-09T19:15:42.845734374Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:15:42.851185 env[1740]: time="2024-02-09T19:15:42.851131354Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:15:42.853080 env[1740]: time="2024-02-09T19:15:42.853021101Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:15:42.861565 env[1740]: time="2024-02-09T19:15:42.861479139Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:15:42.867317 env[1740]: time="2024-02-09T19:15:42.867243212Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:15:42.869571 env[1740]: time="2024-02-09T19:15:42.869516681Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:15:42.871617 env[1740]: time="2024-02-09T19:15:42.871571181Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:15:42.903994 env[1740]: time="2024-02-09T19:15:42.895516380Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:15:42.903994 env[1740]: time="2024-02-09T19:15:42.895616194Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:15:42.903994 env[1740]: time="2024-02-09T19:15:42.895642006Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:15:42.903994 env[1740]: time="2024-02-09T19:15:42.896052440Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/4d11b9807cebe2af9c2c05d861e63423dbaf8b2fc81ed8815caf8c9c5150295f pid=2443 runtime=io.containerd.runc.v2 Feb 9 19:15:42.904772 env[1740]: time="2024-02-09T19:15:42.904716265Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:15:42.952734 env[1740]: time="2024-02-09T19:15:42.951149871Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:15:42.952734 env[1740]: time="2024-02-09T19:15:42.951248893Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:15:42.952734 env[1740]: time="2024-02-09T19:15:42.951275642Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:15:42.952734 env[1740]: time="2024-02-09T19:15:42.951604810Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f9ee6085be2df6fa7c4d7799045d94664bc8de8826fa462d8515575ecbc303e1 pid=2480 runtime=io.containerd.runc.v2 Feb 9 19:15:42.956233 systemd[1]: Started cri-containerd-4d11b9807cebe2af9c2c05d861e63423dbaf8b2fc81ed8815caf8c9c5150295f.scope. Feb 9 19:15:42.970991 env[1740]: time="2024-02-09T19:15:42.970885584Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:15:42.971244 env[1740]: time="2024-02-09T19:15:42.971193899Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:15:42.971440 env[1740]: time="2024-02-09T19:15:42.971392183Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:15:42.971946 env[1740]: time="2024-02-09T19:15:42.971821274Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/edba829086c47ab8a351dcc2867c1ad6ee84bd4f5eea149437125242e9b89c37 pid=2464 runtime=io.containerd.runc.v2 Feb 9 19:15:43.012273 kubelet[2366]: W0209 19:15:43.012109 2366 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://172.31.28.78:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.28.78:6443: connect: connection refused Feb 9 19:15:43.012273 kubelet[2366]: E0209 19:15:43.012215 2366 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.28.78:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.28.78:6443: connect: connection refused Feb 9 19:15:43.015230 kubelet[2366]: W0209 19:15:43.015070 2366 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://172.31.28.78:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.28.78:6443: connect: connection refused Feb 9 19:15:43.015230 kubelet[2366]: E0209 19:15:43.015151 2366 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.28.78:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.28.78:6443: connect: connection refused Feb 9 19:15:43.025571 systemd[1]: Started cri-containerd-f9ee6085be2df6fa7c4d7799045d94664bc8de8826fa462d8515575ecbc303e1.scope. Feb 9 19:15:43.039888 systemd[1]: Started cri-containerd-edba829086c47ab8a351dcc2867c1ad6ee84bd4f5eea149437125242e9b89c37.scope. 
Feb 9 19:15:43.086321 kubelet[2366]: E0209 19:15:43.085873 2366 controller.go:146] failed to ensure lease exists, will retry in 1.6s, error: Get "https://172.31.28.78:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-28-78?timeout=10s": dial tcp 172.31.28.78:6443: connect: connection refused Feb 9 19:15:43.117798 env[1740]: time="2024-02-09T19:15:43.117740064Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-28-78,Uid:fc41a96ae119d771e18e9ab5a3bf11c9,Namespace:kube-system,Attempt:0,} returns sandbox id \"4d11b9807cebe2af9c2c05d861e63423dbaf8b2fc81ed8815caf8c9c5150295f\"" Feb 9 19:15:43.127129 env[1740]: time="2024-02-09T19:15:43.127043865Z" level=info msg="CreateContainer within sandbox \"4d11b9807cebe2af9c2c05d861e63423dbaf8b2fc81ed8815caf8c9c5150295f\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 9 19:15:43.167004 env[1740]: time="2024-02-09T19:15:43.166945647Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-28-78,Uid:a137becf45409eeddff2c6a42eed9ab1,Namespace:kube-system,Attempt:0,} returns sandbox id \"f9ee6085be2df6fa7c4d7799045d94664bc8de8826fa462d8515575ecbc303e1\"" Feb 9 19:15:43.173592 env[1740]: time="2024-02-09T19:15:43.173473780Z" level=info msg="CreateContainer within sandbox \"f9ee6085be2df6fa7c4d7799045d94664bc8de8826fa462d8515575ecbc303e1\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 9 19:15:43.176821 env[1740]: time="2024-02-09T19:15:43.176756971Z" level=info msg="CreateContainer within sandbox \"4d11b9807cebe2af9c2c05d861e63423dbaf8b2fc81ed8815caf8c9c5150295f\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"4982de6cf5604fd53da394cd3554fbc02644132026753ba49a4eef6b10269714\"" Feb 9 19:15:43.178309 env[1740]: time="2024-02-09T19:15:43.178259251Z" level=info msg="StartContainer for \"4982de6cf5604fd53da394cd3554fbc02644132026753ba49a4eef6b10269714\"" Feb 9 19:15:43.189860 env[1740]: time="2024-02-09T19:15:43.189784989Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-28-78,Uid:d480c9116c569199628d5f58b0e661d5,Namespace:kube-system,Attempt:0,} returns sandbox id \"edba829086c47ab8a351dcc2867c1ad6ee84bd4f5eea149437125242e9b89c37\"" Feb 9 19:15:43.199299 kubelet[2366]: W0209 19:15:43.199218 2366 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://172.31.28.78:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-28-78&limit=500&resourceVersion=0": dial tcp 172.31.28.78:6443: connect: connection refused Feb 9 19:15:43.199470 kubelet[2366]: E0209 19:15:43.199306 2366 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.28.78:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-28-78&limit=500&resourceVersion=0": dial tcp 172.31.28.78:6443: connect: connection refused Feb 9 19:15:43.200152 env[1740]: time="2024-02-09T19:15:43.200046559Z" level=info msg="CreateContainer within sandbox \"edba829086c47ab8a351dcc2867c1ad6ee84bd4f5eea149437125242e9b89c37\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 9 19:15:43.204260 kubelet[2366]: I0209 19:15:43.203667 2366 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-28-78" Feb 9 19:15:43.204260 kubelet[2366]: E0209 19:15:43.204110 2366 kubelet_node_status.go:92] "Unable to register node with API server" err="Post 
\"https://172.31.28.78:6443/api/v1/nodes\": dial tcp 172.31.28.78:6443: connect: connection refused" node="ip-172-31-28-78" Feb 9 19:15:43.208130 env[1740]: time="2024-02-09T19:15:43.208065688Z" level=info msg="CreateContainer within sandbox \"f9ee6085be2df6fa7c4d7799045d94664bc8de8826fa462d8515575ecbc303e1\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"7d42077011819c3138fa9a3e55f23b45def919926946fad566d668a3a4deae60\"" Feb 9 19:15:43.209245 env[1740]: time="2024-02-09T19:15:43.209180594Z" level=info msg="StartContainer for \"7d42077011819c3138fa9a3e55f23b45def919926946fad566d668a3a4deae60\"" Feb 9 19:15:43.222691 env[1740]: time="2024-02-09T19:15:43.222623598Z" level=info msg="CreateContainer within sandbox \"edba829086c47ab8a351dcc2867c1ad6ee84bd4f5eea149437125242e9b89c37\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"a610a52bb15f5bcf06f91b8af90260a3376d820ee45c2109e06c8be0e81fd54f\"" Feb 9 19:15:43.223534 env[1740]: time="2024-02-09T19:15:43.223485109Z" level=info msg="StartContainer for \"a610a52bb15f5bcf06f91b8af90260a3376d820ee45c2109e06c8be0e81fd54f\"" Feb 9 19:15:43.243750 systemd[1]: Started cri-containerd-4982de6cf5604fd53da394cd3554fbc02644132026753ba49a4eef6b10269714.scope. Feb 9 19:15:43.274925 systemd[1]: Started cri-containerd-7d42077011819c3138fa9a3e55f23b45def919926946fad566d668a3a4deae60.scope. Feb 9 19:15:43.298208 systemd[1]: Started cri-containerd-a610a52bb15f5bcf06f91b8af90260a3376d820ee45c2109e06c8be0e81fd54f.scope. Feb 9 19:15:43.402733 env[1740]: time="2024-02-09T19:15:43.402628746Z" level=info msg="StartContainer for \"4982de6cf5604fd53da394cd3554fbc02644132026753ba49a4eef6b10269714\" returns successfully" Feb 9 19:15:43.415407 env[1740]: time="2024-02-09T19:15:43.415315022Z" level=info msg="StartContainer for \"a610a52bb15f5bcf06f91b8af90260a3376d820ee45c2109e06c8be0e81fd54f\" returns successfully" Feb 9 19:15:43.485966 env[1740]: time="2024-02-09T19:15:43.485801926Z" level=info msg="StartContainer for \"7d42077011819c3138fa9a3e55f23b45def919926946fad566d668a3a4deae60\" returns successfully" Feb 9 19:15:44.806127 kubelet[2366]: I0209 19:15:44.806083 2366 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-28-78" Feb 9 19:15:47.461672 update_engine[1729]: I0209 19:15:47.461611 1729 update_attempter.cc:509] Updating boot flags... Feb 9 19:15:47.668875 kubelet[2366]: I0209 19:15:47.668821 2366 apiserver.go:52] "Watching apiserver" Feb 9 19:15:47.758879 kubelet[2366]: E0209 19:15:47.756001 2366 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-28-78\" not found" node="ip-172-31-28-78" Feb 9 19:15:47.786176 kubelet[2366]: I0209 19:15:47.784370 2366 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 9 19:15:47.836578 kubelet[2366]: I0209 19:15:47.834130 2366 reconciler.go:41] "Reconciler: start to sync state" Feb 9 19:15:47.881104 kubelet[2366]: I0209 19:15:47.881064 2366 kubelet_node_status.go:73] "Successfully registered node" node="ip-172-31-28-78" Feb 9 19:15:50.621753 systemd[1]: Reloading. 
Feb 9 19:15:50.761378 /usr/lib/systemd/system-generators/torcx-generator[2798]: time="2024-02-09T19:15:50Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 19:15:50.767699 /usr/lib/systemd/system-generators/torcx-generator[2798]: time="2024-02-09T19:15:50Z" level=info msg="torcx already run" Feb 9 19:15:50.928230 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 19:15:50.928262 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 19:15:50.980098 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 19:15:51.257451 kubelet[2366]: I0209 19:15:51.257154 2366 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 9 19:15:51.262989 systemd[1]: Stopping kubelet.service... Feb 9 19:15:51.282194 systemd[1]: kubelet.service: Deactivated successfully. Feb 9 19:15:51.282628 systemd[1]: Stopped kubelet.service. Feb 9 19:15:51.282700 systemd[1]: kubelet.service: Consumed 3.317s CPU time. Feb 9 19:15:51.287255 systemd[1]: Started kubelet.service. Feb 9 19:15:51.427718 sudo[2861]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Feb 9 19:15:51.428797 sudo[2861]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Feb 9 19:15:51.435259 kubelet[2851]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 9 19:15:51.435811 kubelet[2851]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 19:15:51.436146 kubelet[2851]: I0209 19:15:51.436090 2851 server.go:198] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 9 19:15:51.443568 kubelet[2851]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 9 19:15:51.443801 kubelet[2851]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 19:15:51.450149 kubelet[2851]: I0209 19:15:51.450091 2851 server.go:412] "Kubelet version" kubeletVersion="v1.26.5" Feb 9 19:15:51.450149 kubelet[2851]: I0209 19:15:51.450136 2851 server.go:414] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 9 19:15:51.450473 kubelet[2851]: I0209 19:15:51.450446 2851 server.go:836] "Client rotation is on, will bootstrap in background" Feb 9 19:15:51.455988 kubelet[2851]: I0209 19:15:51.453196 2851 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Feb 9 19:15:51.456465 kubelet[2851]: I0209 19:15:51.456434 2851 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 9 19:15:51.461017 kubelet[2851]: W0209 19:15:51.459008 2851 machine.go:65] Cannot read vendor id correctly, set empty. Feb 9 19:15:51.461017 kubelet[2851]: I0209 19:15:51.460224 2851 server.go:659] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Feb 9 19:15:51.461017 kubelet[2851]: I0209 19:15:51.460601 2851 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 9 19:15:51.461017 kubelet[2851]: I0209 19:15:51.460705 2851 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]} Feb 9 19:15:51.461017 kubelet[2851]: I0209 19:15:51.460742 2851 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Feb 9 19:15:51.461017 kubelet[2851]: I0209 19:15:51.460765 2851 container_manager_linux.go:308] "Creating device plugin manager" Feb 9 19:15:51.461587 kubelet[2851]: I0209 19:15:51.460814 2851 state_mem.go:36] "Initialized new in-memory state store" Feb 9 19:15:51.467204 kubelet[2851]: I0209 19:15:51.467161 2851 kubelet.go:398] "Attempting to sync node with API server" Feb 9 19:15:51.467204 kubelet[2851]: I0209 19:15:51.467208 2851 kubelet.go:286] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 9 19:15:51.467430 kubelet[2851]: I0209 19:15:51.467257 2851 kubelet.go:297] "Adding apiserver pod source" Feb 9 19:15:51.467430 kubelet[2851]: I0209 19:15:51.467283 2851 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 9 19:15:51.479012 kubelet[2851]: I0209 19:15:51.469210 2851 kuberuntime_manager.go:244] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 9 19:15:51.479012 kubelet[2851]: I0209 19:15:51.469956 2851 server.go:1186] "Started kubelet" Feb 9 19:15:51.479012 kubelet[2851]: I0209 19:15:51.475485 2851 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 9 19:15:51.485937 kubelet[2851]: E0209 19:15:51.485887 2851 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory 
cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 9 19:15:51.485937 kubelet[2851]: E0209 19:15:51.485942 2851 kubelet.go:1386] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 9 19:15:51.489592 kubelet[2851]: I0209 19:15:51.489532 2851 server.go:161] "Starting to listen" address="0.0.0.0" port=10250 Feb 9 19:15:51.509956 kubelet[2851]: I0209 19:15:51.509828 2851 server.go:451] "Adding debug handlers to kubelet server" Feb 9 19:15:51.532778 kubelet[2851]: I0209 19:15:51.499814 2851 volume_manager.go:293] "Starting Kubelet Volume Manager" Feb 9 19:15:51.532955 kubelet[2851]: I0209 19:15:51.499861 2851 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 9 19:15:51.664279 kubelet[2851]: I0209 19:15:51.658370 2851 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-28-78" Feb 9 19:15:51.706583 kubelet[2851]: I0209 19:15:51.706522 2851 kubelet_node_status.go:108] "Node was previously registered" node="ip-172-31-28-78" Feb 9 19:15:51.706733 kubelet[2851]: I0209 19:15:51.706688 2851 kubelet_node_status.go:73] "Successfully registered node" node="ip-172-31-28-78" Feb 9 19:15:51.815421 kubelet[2851]: I0209 19:15:51.815311 2851 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4 Feb 9 19:15:51.834898 kubelet[2851]: I0209 19:15:51.834850 2851 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 9 19:15:51.834898 kubelet[2851]: I0209 19:15:51.834887 2851 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 9 19:15:51.835118 kubelet[2851]: I0209 19:15:51.834920 2851 state_mem.go:36] "Initialized new in-memory state store" Feb 9 19:15:51.835180 kubelet[2851]: I0209 19:15:51.835137 2851 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 9 19:15:51.835180 kubelet[2851]: I0209 19:15:51.835160 2851 state_mem.go:96] "Updated CPUSet assignments" assignments=map[] Feb 9 19:15:51.835180 kubelet[2851]: I0209 19:15:51.835175 2851 policy_none.go:49] "None policy: Start" Feb 9 19:15:51.836365 kubelet[2851]: I0209 19:15:51.836323 2851 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 9 19:15:51.836489 kubelet[2851]: I0209 19:15:51.836373 2851 state_mem.go:35] "Initializing new in-memory state store" Feb 9 19:15:51.836708 kubelet[2851]: I0209 19:15:51.836673 2851 state_mem.go:75] "Updated machine memory state" Feb 9 19:15:51.848901 kubelet[2851]: I0209 19:15:51.848819 2851 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 9 19:15:51.851969 kubelet[2851]: I0209 19:15:51.851831 2851 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 9 19:15:51.879643 kubelet[2851]: I0209 19:15:51.879572 2851 kubelet_network_linux.go:63] "Initialized iptables rules." 
protocol=IPv6 Feb 9 19:15:51.879643 kubelet[2851]: I0209 19:15:51.879636 2851 status_manager.go:176] "Starting to sync pod status with apiserver" Feb 9 19:15:51.879883 kubelet[2851]: I0209 19:15:51.879667 2851 kubelet.go:2113] "Starting kubelet main sync loop" Feb 9 19:15:51.879883 kubelet[2851]: E0209 19:15:51.879746 2851 kubelet.go:2137] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Feb 9 19:15:51.980582 kubelet[2851]: I0209 19:15:51.980519 2851 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:15:51.980762 kubelet[2851]: I0209 19:15:51.980702 2851 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:15:51.980826 kubelet[2851]: I0209 19:15:51.980770 2851 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:15:52.006256 kubelet[2851]: E0209 19:15:52.006207 2851 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ip-172-31-28-78\" already exists" pod="kube-system/kube-apiserver-ip-172-31-28-78" Feb 9 19:15:52.008842 kubelet[2851]: E0209 19:15:52.008789 2851 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ip-172-31-28-78\" already exists" pod="kube-system/kube-controller-manager-ip-172-31-28-78" Feb 9 19:15:52.043762 kubelet[2851]: I0209 19:15:52.043706 2851 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fc41a96ae119d771e18e9ab5a3bf11c9-ca-certs\") pod \"kube-apiserver-ip-172-31-28-78\" (UID: \"fc41a96ae119d771e18e9ab5a3bf11c9\") " pod="kube-system/kube-apiserver-ip-172-31-28-78" Feb 9 19:15:52.043947 kubelet[2851]: I0209 19:15:52.043859 2851 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d480c9116c569199628d5f58b0e661d5-ca-certs\") pod \"kube-controller-manager-ip-172-31-28-78\" (UID: \"d480c9116c569199628d5f58b0e661d5\") " pod="kube-system/kube-controller-manager-ip-172-31-28-78" Feb 9 19:15:52.043947 kubelet[2851]: I0209 19:15:52.043937 2851 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d480c9116c569199628d5f58b0e661d5-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-28-78\" (UID: \"d480c9116c569199628d5f58b0e661d5\") " pod="kube-system/kube-controller-manager-ip-172-31-28-78" Feb 9 19:15:52.044074 kubelet[2851]: I0209 19:15:52.044009 2851 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d480c9116c569199628d5f58b0e661d5-k8s-certs\") pod \"kube-controller-manager-ip-172-31-28-78\" (UID: \"d480c9116c569199628d5f58b0e661d5\") " pod="kube-system/kube-controller-manager-ip-172-31-28-78" Feb 9 19:15:52.044074 kubelet[2851]: I0209 19:15:52.044060 2851 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fc41a96ae119d771e18e9ab5a3bf11c9-k8s-certs\") pod \"kube-apiserver-ip-172-31-28-78\" (UID: \"fc41a96ae119d771e18e9ab5a3bf11c9\") " pod="kube-system/kube-apiserver-ip-172-31-28-78" Feb 9 19:15:52.044196 kubelet[2851]: I0209 19:15:52.044140 2851 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fc41a96ae119d771e18e9ab5a3bf11c9-usr-share-ca-certificates\") 
pod \"kube-apiserver-ip-172-31-28-78\" (UID: \"fc41a96ae119d771e18e9ab5a3bf11c9\") " pod="kube-system/kube-apiserver-ip-172-31-28-78" Feb 9 19:15:52.044265 kubelet[2851]: I0209 19:15:52.044214 2851 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d480c9116c569199628d5f58b0e661d5-kubeconfig\") pod \"kube-controller-manager-ip-172-31-28-78\" (UID: \"d480c9116c569199628d5f58b0e661d5\") " pod="kube-system/kube-controller-manager-ip-172-31-28-78" Feb 9 19:15:52.044326 kubelet[2851]: I0209 19:15:52.044288 2851 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d480c9116c569199628d5f58b0e661d5-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-28-78\" (UID: \"d480c9116c569199628d5f58b0e661d5\") " pod="kube-system/kube-controller-manager-ip-172-31-28-78" Feb 9 19:15:52.044395 kubelet[2851]: I0209 19:15:52.044358 2851 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a137becf45409eeddff2c6a42eed9ab1-kubeconfig\") pod \"kube-scheduler-ip-172-31-28-78\" (UID: \"a137becf45409eeddff2c6a42eed9ab1\") " pod="kube-system/kube-scheduler-ip-172-31-28-78" Feb 9 19:15:52.482232 kubelet[2851]: I0209 19:15:52.482181 2851 apiserver.go:52] "Watching apiserver" Feb 9 19:15:52.533633 kubelet[2851]: I0209 19:15:52.533574 2851 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 9 19:15:52.547950 kubelet[2851]: I0209 19:15:52.547901 2851 reconciler.go:41] "Reconciler: start to sync state" Feb 9 19:15:52.554895 sudo[2861]: pam_unix(sudo:session): session closed for user root Feb 9 19:15:52.928827 kubelet[2851]: E0209 19:15:52.928769 2851 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ip-172-31-28-78\" already exists" pod="kube-system/kube-controller-manager-ip-172-31-28-78" Feb 9 19:15:52.929832 kubelet[2851]: E0209 19:15:52.929797 2851 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ip-172-31-28-78\" already exists" pod="kube-system/kube-scheduler-ip-172-31-28-78" Feb 9 19:15:53.077204 kubelet[2851]: E0209 19:15:53.077142 2851 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ip-172-31-28-78\" already exists" pod="kube-system/kube-apiserver-ip-172-31-28-78" Feb 9 19:15:53.876426 kubelet[2851]: I0209 19:15:53.876378 2851 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-28-78" podStartSLOduration=4.876297381 pod.CreationTimestamp="2024-02-09 19:15:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:15:53.494251844 +0000 UTC m=+2.197363735" watchObservedRunningTime="2024-02-09 19:15:53.876297381 +0000 UTC m=+2.579409236" Feb 9 19:15:54.280340 kubelet[2851]: I0209 19:15:54.280289 2851 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-28-78" podStartSLOduration=3.280236436 pod.CreationTimestamp="2024-02-09 19:15:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:15:53.87845362 +0000 UTC m=+2.581565487" 
watchObservedRunningTime="2024-02-09 19:15:54.280236436 +0000 UTC m=+2.983348279" Feb 9 19:15:54.280519 kubelet[2851]: I0209 19:15:54.280483 2851 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-28-78" podStartSLOduration=5.280450476 pod.CreationTimestamp="2024-02-09 19:15:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:15:54.279032056 +0000 UTC m=+2.982143923" watchObservedRunningTime="2024-02-09 19:15:54.280450476 +0000 UTC m=+2.983562319" Feb 9 19:15:55.682490 sudo[1964]: pam_unix(sudo:session): session closed for user root Feb 9 19:15:55.706905 sshd[1961]: pam_unix(sshd:session): session closed for user core Feb 9 19:15:55.711632 systemd[1]: sshd@4-172.31.28.78:22-147.75.109.163:43168.service: Deactivated successfully. Feb 9 19:15:55.713213 systemd[1]: session-5.scope: Deactivated successfully. Feb 9 19:15:55.713953 systemd[1]: session-5.scope: Consumed 12.756s CPU time. Feb 9 19:15:55.715076 systemd-logind[1727]: Session 5 logged out. Waiting for processes to exit. Feb 9 19:15:55.716996 systemd-logind[1727]: Removed session 5. Feb 9 19:16:03.793933 kubelet[2851]: I0209 19:16:03.793864 2851 kuberuntime_manager.go:1114] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 9 19:16:03.794666 env[1740]: time="2024-02-09T19:16:03.794453129Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Feb 9 19:16:03.795390 kubelet[2851]: I0209 19:16:03.795357 2851 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 9 19:16:04.431020 kubelet[2851]: I0209 19:16:04.430961 2851 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:16:04.443391 systemd[1]: Created slice kubepods-besteffort-pod0b951cdf_4000_4d92_ac4c_2bc5d44e9ac0.slice. Feb 9 19:16:04.448410 kubelet[2851]: W0209 19:16:04.448351 2851 reflector.go:424] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:ip-172-31-28-78" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-28-78' and this object Feb 9 19:16:04.448695 kubelet[2851]: E0209 19:16:04.448671 2851 reflector.go:140] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:ip-172-31-28-78" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-28-78' and this object Feb 9 19:16:04.477352 kubelet[2851]: I0209 19:16:04.477300 2851 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:16:04.488794 systemd[1]: Created slice kubepods-burstable-pod8bb3f0fd_9880_4fde_bc41_4bbdfd76da6d.slice. 
Feb 9 19:16:04.520520 kubelet[2851]: I0209 19:16:04.520473 2851 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/0b951cdf-4000-4d92-ac4c-2bc5d44e9ac0-kube-proxy\") pod \"kube-proxy-gnfhq\" (UID: \"0b951cdf-4000-4d92-ac4c-2bc5d44e9ac0\") " pod="kube-system/kube-proxy-gnfhq" Feb 9 19:16:04.520831 kubelet[2851]: I0209 19:16:04.520807 2851 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v2l6l\" (UniqueName: \"kubernetes.io/projected/0b951cdf-4000-4d92-ac4c-2bc5d44e9ac0-kube-api-access-v2l6l\") pod \"kube-proxy-gnfhq\" (UID: \"0b951cdf-4000-4d92-ac4c-2bc5d44e9ac0\") " pod="kube-system/kube-proxy-gnfhq" Feb 9 19:16:04.521077 kubelet[2851]: I0209 19:16:04.521041 2851 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8bb3f0fd-9880-4fde-bc41-4bbdfd76da6d-host-proc-sys-kernel\") pod \"cilium-h4r6h\" (UID: \"8bb3f0fd-9880-4fde-bc41-4bbdfd76da6d\") " pod="kube-system/cilium-h4r6h" Feb 9 19:16:04.521277 kubelet[2851]: I0209 19:16:04.521257 2851 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vl2bz\" (UniqueName: \"kubernetes.io/projected/8bb3f0fd-9880-4fde-bc41-4bbdfd76da6d-kube-api-access-vl2bz\") pod \"cilium-h4r6h\" (UID: \"8bb3f0fd-9880-4fde-bc41-4bbdfd76da6d\") " pod="kube-system/cilium-h4r6h" Feb 9 19:16:04.521498 kubelet[2851]: I0209 19:16:04.521469 2851 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8bb3f0fd-9880-4fde-bc41-4bbdfd76da6d-cilium-run\") pod \"cilium-h4r6h\" (UID: \"8bb3f0fd-9880-4fde-bc41-4bbdfd76da6d\") " pod="kube-system/cilium-h4r6h" Feb 9 19:16:04.521876 kubelet[2851]: I0209 19:16:04.521804 2851 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8bb3f0fd-9880-4fde-bc41-4bbdfd76da6d-host-proc-sys-net\") pod \"cilium-h4r6h\" (UID: \"8bb3f0fd-9880-4fde-bc41-4bbdfd76da6d\") " pod="kube-system/cilium-h4r6h" Feb 9 19:16:04.522257 kubelet[2851]: I0209 19:16:04.522232 2851 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8bb3f0fd-9880-4fde-bc41-4bbdfd76da6d-cilium-cgroup\") pod \"cilium-h4r6h\" (UID: \"8bb3f0fd-9880-4fde-bc41-4bbdfd76da6d\") " pod="kube-system/cilium-h4r6h" Feb 9 19:16:04.522987 kubelet[2851]: I0209 19:16:04.522946 2851 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8bb3f0fd-9880-4fde-bc41-4bbdfd76da6d-lib-modules\") pod \"cilium-h4r6h\" (UID: \"8bb3f0fd-9880-4fde-bc41-4bbdfd76da6d\") " pod="kube-system/cilium-h4r6h" Feb 9 19:16:04.523297 kubelet[2851]: I0209 19:16:04.523273 2851 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8bb3f0fd-9880-4fde-bc41-4bbdfd76da6d-cilium-config-path\") pod \"cilium-h4r6h\" (UID: \"8bb3f0fd-9880-4fde-bc41-4bbdfd76da6d\") " pod="kube-system/cilium-h4r6h" Feb 9 19:16:04.523668 kubelet[2851]: I0209 19:16:04.523616 2851 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8bb3f0fd-9880-4fde-bc41-4bbdfd76da6d-cni-path\") pod \"cilium-h4r6h\" (UID: \"8bb3f0fd-9880-4fde-bc41-4bbdfd76da6d\") " pod="kube-system/cilium-h4r6h" Feb 9 19:16:04.523841 kubelet[2851]: I0209 19:16:04.523704 2851 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8bb3f0fd-9880-4fde-bc41-4bbdfd76da6d-hubble-tls\") pod \"cilium-h4r6h\" (UID: \"8bb3f0fd-9880-4fde-bc41-4bbdfd76da6d\") " pod="kube-system/cilium-h4r6h" Feb 9 19:16:04.523841 kubelet[2851]: I0209 19:16:04.523790 2851 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8bb3f0fd-9880-4fde-bc41-4bbdfd76da6d-clustermesh-secrets\") pod \"cilium-h4r6h\" (UID: \"8bb3f0fd-9880-4fde-bc41-4bbdfd76da6d\") " pod="kube-system/cilium-h4r6h" Feb 9 19:16:04.523988 kubelet[2851]: I0209 19:16:04.523861 2851 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8bb3f0fd-9880-4fde-bc41-4bbdfd76da6d-hostproc\") pod \"cilium-h4r6h\" (UID: \"8bb3f0fd-9880-4fde-bc41-4bbdfd76da6d\") " pod="kube-system/cilium-h4r6h" Feb 9 19:16:04.523988 kubelet[2851]: I0209 19:16:04.523912 2851 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0b951cdf-4000-4d92-ac4c-2bc5d44e9ac0-xtables-lock\") pod \"kube-proxy-gnfhq\" (UID: \"0b951cdf-4000-4d92-ac4c-2bc5d44e9ac0\") " pod="kube-system/kube-proxy-gnfhq" Feb 9 19:16:04.523988 kubelet[2851]: I0209 19:16:04.524024 2851 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0b951cdf-4000-4d92-ac4c-2bc5d44e9ac0-lib-modules\") pod \"kube-proxy-gnfhq\" (UID: \"0b951cdf-4000-4d92-ac4c-2bc5d44e9ac0\") " pod="kube-system/kube-proxy-gnfhq" Feb 9 19:16:04.524250 kubelet[2851]: I0209 19:16:04.524072 2851 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8bb3f0fd-9880-4fde-bc41-4bbdfd76da6d-bpf-maps\") pod \"cilium-h4r6h\" (UID: \"8bb3f0fd-9880-4fde-bc41-4bbdfd76da6d\") " pod="kube-system/cilium-h4r6h" Feb 9 19:16:04.524250 kubelet[2851]: I0209 19:16:04.524144 2851 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8bb3f0fd-9880-4fde-bc41-4bbdfd76da6d-etc-cni-netd\") pod \"cilium-h4r6h\" (UID: \"8bb3f0fd-9880-4fde-bc41-4bbdfd76da6d\") " pod="kube-system/cilium-h4r6h" Feb 9 19:16:04.524250 kubelet[2851]: I0209 19:16:04.524214 2851 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8bb3f0fd-9880-4fde-bc41-4bbdfd76da6d-xtables-lock\") pod \"cilium-h4r6h\" (UID: \"8bb3f0fd-9880-4fde-bc41-4bbdfd76da6d\") " pod="kube-system/cilium-h4r6h" Feb 9 19:16:04.797666 env[1740]: time="2024-02-09T19:16:04.796831732Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-h4r6h,Uid:8bb3f0fd-9880-4fde-bc41-4bbdfd76da6d,Namespace:kube-system,Attempt:0,}" Feb 9 19:16:04.828002 env[1740]: time="2024-02-09T19:16:04.827588531Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:16:04.828002 env[1740]: time="2024-02-09T19:16:04.827680155Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:16:04.828002 env[1740]: time="2024-02-09T19:16:04.827707304Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:16:04.828398 env[1740]: time="2024-02-09T19:16:04.828294795Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2380140802b39dadb1c2ebf33d8b64c172bda5cd20aff99ccb90d00c551f8d6f pid=2959 runtime=io.containerd.runc.v2 Feb 9 19:16:04.882844 kubelet[2851]: I0209 19:16:04.881889 2851 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:16:04.886800 systemd[1]: Started cri-containerd-2380140802b39dadb1c2ebf33d8b64c172bda5cd20aff99ccb90d00c551f8d6f.scope. Feb 9 19:16:04.907495 systemd[1]: Created slice kubepods-besteffort-pod242f2d06_b76d_496c_bf36_838530801be6.slice. Feb 9 19:16:04.928008 kubelet[2851]: I0209 19:16:04.927951 2851 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/242f2d06-b76d-496c-bf36-838530801be6-cilium-config-path\") pod \"cilium-operator-f59cbd8c6-dxkk6\" (UID: \"242f2d06-b76d-496c-bf36-838530801be6\") " pod="kube-system/cilium-operator-f59cbd8c6-dxkk6" Feb 9 19:16:04.928401 kubelet[2851]: I0209 19:16:04.928378 2851 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tw9bx\" (UniqueName: \"kubernetes.io/projected/242f2d06-b76d-496c-bf36-838530801be6-kube-api-access-tw9bx\") pod \"cilium-operator-f59cbd8c6-dxkk6\" (UID: \"242f2d06-b76d-496c-bf36-838530801be6\") " pod="kube-system/cilium-operator-f59cbd8c6-dxkk6" Feb 9 19:16:05.000985 env[1740]: time="2024-02-09T19:16:05.000919250Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-h4r6h,Uid:8bb3f0fd-9880-4fde-bc41-4bbdfd76da6d,Namespace:kube-system,Attempt:0,} returns sandbox id \"2380140802b39dadb1c2ebf33d8b64c172bda5cd20aff99ccb90d00c551f8d6f\"" Feb 9 19:16:05.005244 env[1740]: time="2024-02-09T19:16:05.005125334Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Feb 9 19:16:05.515023 env[1740]: time="2024-02-09T19:16:05.514957353Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-f59cbd8c6-dxkk6,Uid:242f2d06-b76d-496c-bf36-838530801be6,Namespace:kube-system,Attempt:0,}" Feb 9 19:16:05.541388 env[1740]: time="2024-02-09T19:16:05.541252683Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:16:05.541779 env[1740]: time="2024-02-09T19:16:05.541712002Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:16:05.542020 env[1740]: time="2024-02-09T19:16:05.541953867Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:16:05.542861 env[1740]: time="2024-02-09T19:16:05.542659659Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/463296dc4706a6b3a607af051464639c2acb8f1c4669c00096637b317460b630 pid=3001 runtime=io.containerd.runc.v2 Feb 9 19:16:05.569861 systemd[1]: Started cri-containerd-463296dc4706a6b3a607af051464639c2acb8f1c4669c00096637b317460b630.scope. Feb 9 19:16:05.658006 env[1740]: time="2024-02-09T19:16:05.657931483Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-gnfhq,Uid:0b951cdf-4000-4d92-ac4c-2bc5d44e9ac0,Namespace:kube-system,Attempt:0,}" Feb 9 19:16:05.673754 env[1740]: time="2024-02-09T19:16:05.673693589Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-f59cbd8c6-dxkk6,Uid:242f2d06-b76d-496c-bf36-838530801be6,Namespace:kube-system,Attempt:0,} returns sandbox id \"463296dc4706a6b3a607af051464639c2acb8f1c4669c00096637b317460b630\"" Feb 9 19:16:05.711524 env[1740]: time="2024-02-09T19:16:05.711234644Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:16:05.711772 env[1740]: time="2024-02-09T19:16:05.711523197Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:16:05.711970 env[1740]: time="2024-02-09T19:16:05.711668662Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:16:05.712289 env[1740]: time="2024-02-09T19:16:05.712199621Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/9bfdfb4c3118c2bc7ddd73189574e3b9e57215df9ef7e64aa0329e45f092fab8 pid=3043 runtime=io.containerd.runc.v2 Feb 9 19:16:05.745834 systemd[1]: run-containerd-runc-k8s.io-9bfdfb4c3118c2bc7ddd73189574e3b9e57215df9ef7e64aa0329e45f092fab8-runc.OCGF4B.mount: Deactivated successfully. Feb 9 19:16:05.755924 systemd[1]: Started cri-containerd-9bfdfb4c3118c2bc7ddd73189574e3b9e57215df9ef7e64aa0329e45f092fab8.scope. Feb 9 19:16:05.808742 env[1740]: time="2024-02-09T19:16:05.808393124Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-gnfhq,Uid:0b951cdf-4000-4d92-ac4c-2bc5d44e9ac0,Namespace:kube-system,Attempt:0,} returns sandbox id \"9bfdfb4c3118c2bc7ddd73189574e3b9e57215df9ef7e64aa0329e45f092fab8\"" Feb 9 19:16:05.818285 env[1740]: time="2024-02-09T19:16:05.818198172Z" level=info msg="CreateContainer within sandbox \"9bfdfb4c3118c2bc7ddd73189574e3b9e57215df9ef7e64aa0329e45f092fab8\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 9 19:16:05.847969 env[1740]: time="2024-02-09T19:16:05.847879351Z" level=info msg="CreateContainer within sandbox \"9bfdfb4c3118c2bc7ddd73189574e3b9e57215df9ef7e64aa0329e45f092fab8\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"31eff728b00865fcb91bb71af736db7ff5dc4a34ff5efea5d6646ded55f258ad\"" Feb 9 19:16:05.851935 env[1740]: time="2024-02-09T19:16:05.849829191Z" level=info msg="StartContainer for \"31eff728b00865fcb91bb71af736db7ff5dc4a34ff5efea5d6646ded55f258ad\"" Feb 9 19:16:05.895519 systemd[1]: Started cri-containerd-31eff728b00865fcb91bb71af736db7ff5dc4a34ff5efea5d6646ded55f258ad.scope. 
Feb 9 19:16:05.994245 env[1740]: time="2024-02-09T19:16:05.994135992Z" level=info msg="StartContainer for \"31eff728b00865fcb91bb71af736db7ff5dc4a34ff5efea5d6646ded55f258ad\" returns successfully" Feb 9 19:16:11.899830 kubelet[2851]: I0209 19:16:11.899734 2851 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-gnfhq" podStartSLOduration=7.899678364 pod.CreationTimestamp="2024-02-09 19:16:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:16:07.035027437 +0000 UTC m=+15.738139316" watchObservedRunningTime="2024-02-09 19:16:11.899678364 +0000 UTC m=+20.602790255" Feb 9 19:16:12.961300 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2444582943.mount: Deactivated successfully. Feb 9 19:16:16.972618 env[1740]: time="2024-02-09T19:16:16.972362926Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:16:16.976285 env[1740]: time="2024-02-09T19:16:16.976211430Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:16:16.979922 env[1740]: time="2024-02-09T19:16:16.979864669Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:16:16.981994 env[1740]: time="2024-02-09T19:16:16.981889926Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Feb 9 19:16:16.984131 env[1740]: time="2024-02-09T19:16:16.984074359Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Feb 9 19:16:16.991155 env[1740]: time="2024-02-09T19:16:16.991074271Z" level=info msg="CreateContainer within sandbox \"2380140802b39dadb1c2ebf33d8b64c172bda5cd20aff99ccb90d00c551f8d6f\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 9 19:16:17.023717 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2039395468.mount: Deactivated successfully. Feb 9 19:16:17.035118 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount443056301.mount: Deactivated successfully. Feb 9 19:16:17.045109 env[1740]: time="2024-02-09T19:16:17.045041074Z" level=info msg="CreateContainer within sandbox \"2380140802b39dadb1c2ebf33d8b64c172bda5cd20aff99ccb90d00c551f8d6f\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"ace5c46eb6e8f8977dcc47b34f2e8ffa162b964b9de791463a9ab9c77e237b7c\"" Feb 9 19:16:17.046579 env[1740]: time="2024-02-09T19:16:17.046496063Z" level=info msg="StartContainer for \"ace5c46eb6e8f8977dcc47b34f2e8ffa162b964b9de791463a9ab9c77e237b7c\"" Feb 9 19:16:17.086906 systemd[1]: Started cri-containerd-ace5c46eb6e8f8977dcc47b34f2e8ffa162b964b9de791463a9ab9c77e237b7c.scope. 
Feb 9 19:16:17.161355 env[1740]: time="2024-02-09T19:16:17.161277525Z" level=info msg="StartContainer for \"ace5c46eb6e8f8977dcc47b34f2e8ffa162b964b9de791463a9ab9c77e237b7c\" returns successfully" Feb 9 19:16:17.182867 systemd[1]: cri-containerd-ace5c46eb6e8f8977dcc47b34f2e8ffa162b964b9de791463a9ab9c77e237b7c.scope: Deactivated successfully. Feb 9 19:16:18.012322 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ace5c46eb6e8f8977dcc47b34f2e8ffa162b964b9de791463a9ab9c77e237b7c-rootfs.mount: Deactivated successfully. Feb 9 19:16:18.400533 env[1740]: time="2024-02-09T19:16:18.400299441Z" level=info msg="shim disconnected" id=ace5c46eb6e8f8977dcc47b34f2e8ffa162b964b9de791463a9ab9c77e237b7c Feb 9 19:16:18.400533 env[1740]: time="2024-02-09T19:16:18.400368689Z" level=warning msg="cleaning up after shim disconnected" id=ace5c46eb6e8f8977dcc47b34f2e8ffa162b964b9de791463a9ab9c77e237b7c namespace=k8s.io Feb 9 19:16:18.400533 env[1740]: time="2024-02-09T19:16:18.400391420Z" level=info msg="cleaning up dead shim" Feb 9 19:16:18.414906 env[1740]: time="2024-02-09T19:16:18.414831938Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:16:18Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3297 runtime=io.containerd.runc.v2\n" Feb 9 19:16:18.913064 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2515886877.mount: Deactivated successfully. Feb 9 19:16:19.057646 env[1740]: time="2024-02-09T19:16:19.057535808Z" level=info msg="CreateContainer within sandbox \"2380140802b39dadb1c2ebf33d8b64c172bda5cd20aff99ccb90d00c551f8d6f\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 9 19:16:19.114369 env[1740]: time="2024-02-09T19:16:19.114294545Z" level=info msg="CreateContainer within sandbox \"2380140802b39dadb1c2ebf33d8b64c172bda5cd20aff99ccb90d00c551f8d6f\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"08ee1117449e56ddbddffe2ac1db68cfdabfa20754561a4c2ed668b0f48cc2a6\"" Feb 9 19:16:19.118396 env[1740]: time="2024-02-09T19:16:19.118330111Z" level=info msg="StartContainer for \"08ee1117449e56ddbddffe2ac1db68cfdabfa20754561a4c2ed668b0f48cc2a6\"" Feb 9 19:16:19.196661 systemd[1]: Started cri-containerd-08ee1117449e56ddbddffe2ac1db68cfdabfa20754561a4c2ed668b0f48cc2a6.scope. Feb 9 19:16:19.287832 env[1740]: time="2024-02-09T19:16:19.287767913Z" level=info msg="StartContainer for \"08ee1117449e56ddbddffe2ac1db68cfdabfa20754561a4c2ed668b0f48cc2a6\" returns successfully" Feb 9 19:16:19.318420 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 9 19:16:19.320578 systemd[1]: Stopped systemd-sysctl.service. Feb 9 19:16:19.321975 systemd[1]: Stopping systemd-sysctl.service... Feb 9 19:16:19.330673 systemd[1]: Starting systemd-sysctl.service... Feb 9 19:16:19.338882 systemd[1]: cri-containerd-08ee1117449e56ddbddffe2ac1db68cfdabfa20754561a4c2ed668b0f48cc2a6.scope: Deactivated successfully. Feb 9 19:16:19.358611 systemd[1]: Finished systemd-sysctl.service. 
Feb 9 19:16:19.461062 env[1740]: time="2024-02-09T19:16:19.460871878Z" level=info msg="shim disconnected" id=08ee1117449e56ddbddffe2ac1db68cfdabfa20754561a4c2ed668b0f48cc2a6 Feb 9 19:16:19.461062 env[1740]: time="2024-02-09T19:16:19.460955432Z" level=warning msg="cleaning up after shim disconnected" id=08ee1117449e56ddbddffe2ac1db68cfdabfa20754561a4c2ed668b0f48cc2a6 namespace=k8s.io Feb 9 19:16:19.461062 env[1740]: time="2024-02-09T19:16:19.460978918Z" level=info msg="cleaning up dead shim" Feb 9 19:16:19.487151 env[1740]: time="2024-02-09T19:16:19.487080295Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:16:19Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3361 runtime=io.containerd.runc.v2\n" Feb 9 19:16:20.073593 env[1740]: time="2024-02-09T19:16:20.073283626Z" level=info msg="CreateContainer within sandbox \"2380140802b39dadb1c2ebf33d8b64c172bda5cd20aff99ccb90d00c551f8d6f\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 9 19:16:20.082608 systemd[1]: run-containerd-runc-k8s.io-08ee1117449e56ddbddffe2ac1db68cfdabfa20754561a4c2ed668b0f48cc2a6-runc.ReNqHX.mount: Deactivated successfully. Feb 9 19:16:20.082805 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-08ee1117449e56ddbddffe2ac1db68cfdabfa20754561a4c2ed668b0f48cc2a6-rootfs.mount: Deactivated successfully. Feb 9 19:16:20.141163 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3326450489.mount: Deactivated successfully. Feb 9 19:16:20.167798 env[1740]: time="2024-02-09T19:16:20.167724101Z" level=info msg="CreateContainer within sandbox \"2380140802b39dadb1c2ebf33d8b64c172bda5cd20aff99ccb90d00c551f8d6f\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"401cfcdee34c4c486be22e0c0ea70728b5c950f8d1d337bb75fab3a375619d6a\"" Feb 9 19:16:20.169287 env[1740]: time="2024-02-09T19:16:20.169216788Z" level=info msg="StartContainer for \"401cfcdee34c4c486be22e0c0ea70728b5c950f8d1d337bb75fab3a375619d6a\"" Feb 9 19:16:20.207455 env[1740]: time="2024-02-09T19:16:20.207385354Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:16:20.211355 env[1740]: time="2024-02-09T19:16:20.211297333Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:16:20.214828 env[1740]: time="2024-02-09T19:16:20.214750305Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:16:20.216462 env[1740]: time="2024-02-09T19:16:20.216383253Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Feb 9 19:16:20.226632 env[1740]: time="2024-02-09T19:16:20.226520793Z" level=info msg="CreateContainer within sandbox \"463296dc4706a6b3a607af051464639c2acb8f1c4669c00096637b317460b630\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Feb 9 19:16:20.234672 systemd[1]: Started 
cri-containerd-401cfcdee34c4c486be22e0c0ea70728b5c950f8d1d337bb75fab3a375619d6a.scope. Feb 9 19:16:20.272273 env[1740]: time="2024-02-09T19:16:20.272155879Z" level=info msg="CreateContainer within sandbox \"463296dc4706a6b3a607af051464639c2acb8f1c4669c00096637b317460b630\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"ce635839d098212d89e6390bc373488820d1f40647bd086edc0abe41dba7497d\"" Feb 9 19:16:20.273520 env[1740]: time="2024-02-09T19:16:20.273417022Z" level=info msg="StartContainer for \"ce635839d098212d89e6390bc373488820d1f40647bd086edc0abe41dba7497d\"" Feb 9 19:16:20.325839 systemd[1]: Started cri-containerd-ce635839d098212d89e6390bc373488820d1f40647bd086edc0abe41dba7497d.scope. Feb 9 19:16:20.343349 systemd[1]: cri-containerd-401cfcdee34c4c486be22e0c0ea70728b5c950f8d1d337bb75fab3a375619d6a.scope: Deactivated successfully. Feb 9 19:16:20.349598 env[1740]: time="2024-02-09T19:16:20.349440498Z" level=info msg="StartContainer for \"401cfcdee34c4c486be22e0c0ea70728b5c950f8d1d337bb75fab3a375619d6a\" returns successfully" Feb 9 19:16:20.454035 env[1740]: time="2024-02-09T19:16:20.453353071Z" level=info msg="StartContainer for \"ce635839d098212d89e6390bc373488820d1f40647bd086edc0abe41dba7497d\" returns successfully" Feb 9 19:16:20.597994 env[1740]: time="2024-02-09T19:16:20.597769816Z" level=info msg="shim disconnected" id=401cfcdee34c4c486be22e0c0ea70728b5c950f8d1d337bb75fab3a375619d6a Feb 9 19:16:20.597994 env[1740]: time="2024-02-09T19:16:20.597878441Z" level=warning msg="cleaning up after shim disconnected" id=401cfcdee34c4c486be22e0c0ea70728b5c950f8d1d337bb75fab3a375619d6a namespace=k8s.io Feb 9 19:16:20.597994 env[1740]: time="2024-02-09T19:16:20.597907533Z" level=info msg="cleaning up dead shim" Feb 9 19:16:20.623345 env[1740]: time="2024-02-09T19:16:20.623264669Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:16:20Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3460 runtime=io.containerd.runc.v2\ntime=\"2024-02-09T19:16:20Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" Feb 9 19:16:21.074222 env[1740]: time="2024-02-09T19:16:21.074137070Z" level=info msg="CreateContainer within sandbox \"2380140802b39dadb1c2ebf33d8b64c172bda5cd20aff99ccb90d00c551f8d6f\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 9 19:16:21.084830 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-401cfcdee34c4c486be22e0c0ea70728b5c950f8d1d337bb75fab3a375619d6a-rootfs.mount: Deactivated successfully. Feb 9 19:16:21.108931 env[1740]: time="2024-02-09T19:16:21.108838576Z" level=info msg="CreateContainer within sandbox \"2380140802b39dadb1c2ebf33d8b64c172bda5cd20aff99ccb90d00c551f8d6f\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"1ff93a39668501992f32c474102aff1886114110e6db20688b4821b28fbb8063\"" Feb 9 19:16:21.110373 env[1740]: time="2024-02-09T19:16:21.110293227Z" level=info msg="StartContainer for \"1ff93a39668501992f32c474102aff1886114110e6db20688b4821b28fbb8063\"" Feb 9 19:16:21.211253 systemd[1]: Started cri-containerd-1ff93a39668501992f32c474102aff1886114110e6db20688b4821b28fbb8063.scope. 
Feb 9 19:16:21.360260 env[1740]: time="2024-02-09T19:16:21.360074303Z" level=info msg="StartContainer for \"1ff93a39668501992f32c474102aff1886114110e6db20688b4821b28fbb8063\" returns successfully" Feb 9 19:16:21.365990 systemd[1]: cri-containerd-1ff93a39668501992f32c474102aff1886114110e6db20688b4821b28fbb8063.scope: Deactivated successfully. Feb 9 19:16:21.373626 kubelet[2851]: I0209 19:16:21.373335 2851 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-f59cbd8c6-dxkk6" podStartSLOduration=-9.223372019481504e+09 pod.CreationTimestamp="2024-02-09 19:16:04 +0000 UTC" firstStartedPulling="2024-02-09 19:16:05.676203721 +0000 UTC m=+14.379315564" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:16:21.231035307 +0000 UTC m=+29.934147198" watchObservedRunningTime="2024-02-09 19:16:21.373271323 +0000 UTC m=+30.076383190" Feb 9 19:16:21.455085 env[1740]: time="2024-02-09T19:16:21.455018175Z" level=info msg="shim disconnected" id=1ff93a39668501992f32c474102aff1886114110e6db20688b4821b28fbb8063 Feb 9 19:16:21.456087 env[1740]: time="2024-02-09T19:16:21.456028955Z" level=warning msg="cleaning up after shim disconnected" id=1ff93a39668501992f32c474102aff1886114110e6db20688b4821b28fbb8063 namespace=k8s.io Feb 9 19:16:21.456422 env[1740]: time="2024-02-09T19:16:21.456387545Z" level=info msg="cleaning up dead shim" Feb 9 19:16:21.483587 env[1740]: time="2024-02-09T19:16:21.483492439Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:16:21Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3521 runtime=io.containerd.runc.v2\n" Feb 9 19:16:22.082783 systemd[1]: run-containerd-runc-k8s.io-1ff93a39668501992f32c474102aff1886114110e6db20688b4821b28fbb8063-runc.wTIhDR.mount: Deactivated successfully. Feb 9 19:16:22.082969 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1ff93a39668501992f32c474102aff1886114110e6db20688b4821b28fbb8063-rootfs.mount: Deactivated successfully. Feb 9 19:16:22.087771 env[1740]: time="2024-02-09T19:16:22.087490095Z" level=info msg="CreateContainer within sandbox \"2380140802b39dadb1c2ebf33d8b64c172bda5cd20aff99ccb90d00c551f8d6f\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 9 19:16:22.120869 env[1740]: time="2024-02-09T19:16:22.120783223Z" level=info msg="CreateContainer within sandbox \"2380140802b39dadb1c2ebf33d8b64c172bda5cd20aff99ccb90d00c551f8d6f\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"64f77334060744eb30dd56f0716a62bf536268d3ba2c1871a2ec3e7a6544f031\"" Feb 9 19:16:22.121724 env[1740]: time="2024-02-09T19:16:22.121670195Z" level=info msg="StartContainer for \"64f77334060744eb30dd56f0716a62bf536268d3ba2c1871a2ec3e7a6544f031\"" Feb 9 19:16:22.192737 systemd[1]: Started cri-containerd-64f77334060744eb30dd56f0716a62bf536268d3ba2c1871a2ec3e7a6544f031.scope. Feb 9 19:16:22.426164 env[1740]: time="2024-02-09T19:16:22.426080454Z" level=info msg="StartContainer for \"64f77334060744eb30dd56f0716a62bf536268d3ba2c1871a2ec3e7a6544f031\" returns successfully" Feb 9 19:16:22.708605 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! 
Feb 9 19:16:22.725115 kubelet[2851]: I0209 19:16:22.724852 2851 kubelet_node_status.go:493] "Fast updating node status as it just became ready" Feb 9 19:16:22.763890 kubelet[2851]: I0209 19:16:22.763096 2851 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:16:22.774292 systemd[1]: Created slice kubepods-burstable-podb4c4bdf2_8dba_4aa7_a324_aba965575f72.slice. Feb 9 19:16:22.781137 kubelet[2851]: I0209 19:16:22.781080 2851 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:16:22.795955 systemd[1]: Created slice kubepods-burstable-podaf54b5c7_58c9_425b_8480_4d18609fb7d3.slice. Feb 9 19:16:22.870795 kubelet[2851]: I0209 19:16:22.870722 2851 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b4c4bdf2-8dba-4aa7-a324-aba965575f72-config-volume\") pod \"coredns-787d4945fb-7tscq\" (UID: \"b4c4bdf2-8dba-4aa7-a324-aba965575f72\") " pod="kube-system/coredns-787d4945fb-7tscq" Feb 9 19:16:22.870979 kubelet[2851]: I0209 19:16:22.870929 2851 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-95699\" (UniqueName: \"kubernetes.io/projected/af54b5c7-58c9-425b-8480-4d18609fb7d3-kube-api-access-95699\") pod \"coredns-787d4945fb-mm2l4\" (UID: \"af54b5c7-58c9-425b-8480-4d18609fb7d3\") " pod="kube-system/coredns-787d4945fb-mm2l4" Feb 9 19:16:22.871087 kubelet[2851]: I0209 19:16:22.871052 2851 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qqnz2\" (UniqueName: \"kubernetes.io/projected/b4c4bdf2-8dba-4aa7-a324-aba965575f72-kube-api-access-qqnz2\") pod \"coredns-787d4945fb-7tscq\" (UID: \"b4c4bdf2-8dba-4aa7-a324-aba965575f72\") " pod="kube-system/coredns-787d4945fb-7tscq" Feb 9 19:16:22.871211 kubelet[2851]: I0209 19:16:22.871160 2851 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/af54b5c7-58c9-425b-8480-4d18609fb7d3-config-volume\") pod \"coredns-787d4945fb-mm2l4\" (UID: \"af54b5c7-58c9-425b-8480-4d18609fb7d3\") " pod="kube-system/coredns-787d4945fb-mm2l4" Feb 9 19:16:23.090201 env[1740]: time="2024-02-09T19:16:23.090025241Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-7tscq,Uid:b4c4bdf2-8dba-4aa7-a324-aba965575f72,Namespace:kube-system,Attempt:0,}" Feb 9 19:16:23.105295 env[1740]: time="2024-02-09T19:16:23.104617500Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-mm2l4,Uid:af54b5c7-58c9-425b-8480-4d18609fb7d3,Namespace:kube-system,Attempt:0,}" Feb 9 19:16:23.165760 kubelet[2851]: I0209 19:16:23.165121 2851 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-h4r6h" podStartSLOduration=-9.22337201768971e+09 pod.CreationTimestamp="2024-02-09 19:16:04 +0000 UTC" firstStartedPulling="2024-02-09 19:16:05.003831766 +0000 UTC m=+13.706943621" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:16:23.125241847 +0000 UTC m=+31.828353738" watchObservedRunningTime="2024-02-09 19:16:23.165064267 +0000 UTC m=+31.868176122" Feb 9 19:16:23.818599 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! Feb 9 19:16:25.638202 systemd-networkd[1537]: cilium_host: Link UP Feb 9 19:16:25.638937 (udev-worker)[3648]: Network interface NamePolicy= disabled on kernel command line. 
Feb 9 19:16:25.640037 (udev-worker)[3685]: Network interface NamePolicy= disabled on kernel command line. Feb 9 19:16:25.649741 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready Feb 9 19:16:25.649874 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Feb 9 19:16:25.642702 systemd-networkd[1537]: cilium_net: Link UP Feb 9 19:16:25.645344 systemd-networkd[1537]: cilium_net: Gained carrier Feb 9 19:16:25.648055 systemd-networkd[1537]: cilium_host: Gained carrier Feb 9 19:16:25.808129 (udev-worker)[3697]: Network interface NamePolicy= disabled on kernel command line. Feb 9 19:16:25.819105 systemd-networkd[1537]: cilium_vxlan: Link UP Feb 9 19:16:25.819124 systemd-networkd[1537]: cilium_vxlan: Gained carrier Feb 9 19:16:25.825946 systemd-networkd[1537]: cilium_host: Gained IPv6LL Feb 9 19:16:26.296588 kernel: NET: Registered PF_ALG protocol family Feb 9 19:16:26.344738 systemd-networkd[1537]: cilium_net: Gained IPv6LL Feb 9 19:16:27.560806 systemd-networkd[1537]: cilium_vxlan: Gained IPv6LL Feb 9 19:16:27.614119 systemd-networkd[1537]: lxc_health: Link UP Feb 9 19:16:27.646904 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Feb 9 19:16:27.646372 systemd-networkd[1537]: lxc_health: Gained carrier Feb 9 19:16:28.252402 systemd-networkd[1537]: lxcc427db7b063a: Link UP Feb 9 19:16:28.257444 (udev-worker)[3698]: Network interface NamePolicy= disabled on kernel command line. Feb 9 19:16:28.275628 kernel: eth0: renamed from tmp7ccf4 Feb 9 19:16:28.281845 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcc427db7b063a: link becomes ready Feb 9 19:16:28.279429 systemd-networkd[1537]: lxcc427db7b063a: Gained carrier Feb 9 19:16:28.286689 systemd-networkd[1537]: lxc63c9da02db3c: Link UP Feb 9 19:16:28.296680 kernel: eth0: renamed from tmp33ccb Feb 9 19:16:28.310713 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc63c9da02db3c: link becomes ready Feb 9 19:16:28.309349 systemd-networkd[1537]: lxc63c9da02db3c: Gained carrier Feb 9 19:16:29.096788 systemd-networkd[1537]: lxc_health: Gained IPv6LL Feb 9 19:16:29.800732 systemd-networkd[1537]: lxc63c9da02db3c: Gained IPv6LL Feb 9 19:16:29.801179 systemd-networkd[1537]: lxcc427db7b063a: Gained IPv6LL Feb 9 19:16:37.043417 env[1740]: time="2024-02-09T19:16:37.043270727Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:16:37.044130 env[1740]: time="2024-02-09T19:16:37.043418371Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:16:37.044130 env[1740]: time="2024-02-09T19:16:37.043486847Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:16:37.044130 env[1740]: time="2024-02-09T19:16:37.043936847Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7ccf4691ca36d2b9e47c9d2292cf3b0fd6e273debbc1aa75bdb30bb375012001 pid=4061 runtime=io.containerd.runc.v2 Feb 9 19:16:37.072388 env[1740]: time="2024-02-09T19:16:37.071542785Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:16:37.072388 env[1740]: time="2024-02-09T19:16:37.071805713Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:16:37.072388 env[1740]: time="2024-02-09T19:16:37.071987913Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:16:37.073660 env[1740]: time="2024-02-09T19:16:37.073261311Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/33ccba788ec7cb0941264ab439969b7903de2606411a41d7ff052f88f76b43e3 pid=4076 runtime=io.containerd.runc.v2 Feb 9 19:16:37.100713 systemd[1]: run-containerd-runc-k8s.io-7ccf4691ca36d2b9e47c9d2292cf3b0fd6e273debbc1aa75bdb30bb375012001-runc.mDAhQ9.mount: Deactivated successfully. Feb 9 19:16:37.108827 systemd[1]: Started cri-containerd-7ccf4691ca36d2b9e47c9d2292cf3b0fd6e273debbc1aa75bdb30bb375012001.scope. Feb 9 19:16:37.162570 systemd[1]: Started cri-containerd-33ccba788ec7cb0941264ab439969b7903de2606411a41d7ff052f88f76b43e3.scope. Feb 9 19:16:37.264182 env[1740]: time="2024-02-09T19:16:37.264110411Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-mm2l4,Uid:af54b5c7-58c9-425b-8480-4d18609fb7d3,Namespace:kube-system,Attempt:0,} returns sandbox id \"7ccf4691ca36d2b9e47c9d2292cf3b0fd6e273debbc1aa75bdb30bb375012001\"" Feb 9 19:16:37.273961 env[1740]: time="2024-02-09T19:16:37.273898579Z" level=info msg="CreateContainer within sandbox \"7ccf4691ca36d2b9e47c9d2292cf3b0fd6e273debbc1aa75bdb30bb375012001\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 9 19:16:37.312318 env[1740]: time="2024-02-09T19:16:37.308480526Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-7tscq,Uid:b4c4bdf2-8dba-4aa7-a324-aba965575f72,Namespace:kube-system,Attempt:0,} returns sandbox id \"33ccba788ec7cb0941264ab439969b7903de2606411a41d7ff052f88f76b43e3\"" Feb 9 19:16:37.312318 env[1740]: time="2024-02-09T19:16:37.312069493Z" level=info msg="CreateContainer within sandbox \"7ccf4691ca36d2b9e47c9d2292cf3b0fd6e273debbc1aa75bdb30bb375012001\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2428e5f9f19fc74c8d5bbcf6d043fd3eb16c82468a904d6a43c00f791e8ecc52\"" Feb 9 19:16:37.315769 env[1740]: time="2024-02-09T19:16:37.314037364Z" level=info msg="CreateContainer within sandbox \"33ccba788ec7cb0941264ab439969b7903de2606411a41d7ff052f88f76b43e3\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 9 19:16:37.315769 env[1740]: time="2024-02-09T19:16:37.314492235Z" level=info msg="StartContainer for \"2428e5f9f19fc74c8d5bbcf6d043fd3eb16c82468a904d6a43c00f791e8ecc52\"" Feb 9 19:16:37.350076 env[1740]: time="2024-02-09T19:16:37.349986323Z" level=info msg="CreateContainer within sandbox \"33ccba788ec7cb0941264ab439969b7903de2606411a41d7ff052f88f76b43e3\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f3431e9187fd3d97b19080e10317d833b3a6f432fd9b5446006fd8ce4c78772a\"" Feb 9 19:16:37.351054 env[1740]: time="2024-02-09T19:16:37.351002229Z" level=info msg="StartContainer for \"f3431e9187fd3d97b19080e10317d833b3a6f432fd9b5446006fd8ce4c78772a\"" Feb 9 19:16:37.357181 systemd[1]: Started cri-containerd-2428e5f9f19fc74c8d5bbcf6d043fd3eb16c82468a904d6a43c00f791e8ecc52.scope. Feb 9 19:16:37.420200 systemd[1]: Started cri-containerd-f3431e9187fd3d97b19080e10317d833b3a6f432fd9b5446006fd8ce4c78772a.scope. 
Feb 9 19:16:37.471760 env[1740]: time="2024-02-09T19:16:37.471691003Z" level=info msg="StartContainer for \"2428e5f9f19fc74c8d5bbcf6d043fd3eb16c82468a904d6a43c00f791e8ecc52\" returns successfully" Feb 9 19:16:37.533745 env[1740]: time="2024-02-09T19:16:37.533679554Z" level=info msg="StartContainer for \"f3431e9187fd3d97b19080e10317d833b3a6f432fd9b5446006fd8ce4c78772a\" returns successfully" Feb 9 19:16:38.177160 kubelet[2851]: I0209 19:16:38.177108 2851 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-787d4945fb-7tscq" podStartSLOduration=34.177053486 pod.CreationTimestamp="2024-02-09 19:16:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:16:38.157764391 +0000 UTC m=+46.860876258" watchObservedRunningTime="2024-02-09 19:16:38.177053486 +0000 UTC m=+46.880165341" Feb 9 19:16:38.223225 kubelet[2851]: I0209 19:16:38.223176 2851 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-787d4945fb-mm2l4" podStartSLOduration=34.223107757 pod.CreationTimestamp="2024-02-09 19:16:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:16:38.200529741 +0000 UTC m=+46.903641608" watchObservedRunningTime="2024-02-09 19:16:38.223107757 +0000 UTC m=+46.926219612" Feb 9 19:16:53.362858 systemd[1]: Started sshd@5-172.31.28.78:22-147.75.109.163:55254.service. Feb 9 19:16:53.539134 sshd[4265]: Accepted publickey for core from 147.75.109.163 port 55254 ssh2: RSA SHA256:vbbYXSA+vx4OxGE8RCTI42TSNHgOaZKYEuMHy2EWP78 Feb 9 19:16:53.542600 sshd[4265]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:16:53.551981 systemd[1]: Started session-6.scope. Feb 9 19:16:53.554075 systemd-logind[1727]: New session 6 of user core. Feb 9 19:16:53.847201 sshd[4265]: pam_unix(sshd:session): session closed for user core Feb 9 19:16:53.853026 systemd[1]: sshd@5-172.31.28.78:22-147.75.109.163:55254.service: Deactivated successfully. Feb 9 19:16:53.855915 systemd[1]: session-6.scope: Deactivated successfully. Feb 9 19:16:53.858654 systemd-logind[1727]: Session 6 logged out. Waiting for processes to exit. Feb 9 19:16:53.860821 systemd-logind[1727]: Removed session 6. Feb 9 19:16:58.878172 systemd[1]: Started sshd@6-172.31.28.78:22-147.75.109.163:43042.service. Feb 9 19:16:59.048239 sshd[4278]: Accepted publickey for core from 147.75.109.163 port 43042 ssh2: RSA SHA256:vbbYXSA+vx4OxGE8RCTI42TSNHgOaZKYEuMHy2EWP78 Feb 9 19:16:59.051457 sshd[4278]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:16:59.060405 systemd[1]: Started session-7.scope. Feb 9 19:16:59.061782 systemd-logind[1727]: New session 7 of user core. Feb 9 19:16:59.314040 sshd[4278]: pam_unix(sshd:session): session closed for user core Feb 9 19:16:59.319873 systemd[1]: sshd@6-172.31.28.78:22-147.75.109.163:43042.service: Deactivated successfully. Feb 9 19:16:59.321241 systemd[1]: session-7.scope: Deactivated successfully. Feb 9 19:16:59.323937 systemd-logind[1727]: Session 7 logged out. Waiting for processes to exit. Feb 9 19:16:59.326122 systemd-logind[1727]: Removed session 7. Feb 9 19:17:04.344017 systemd[1]: Started sshd@7-172.31.28.78:22-147.75.109.163:43050.service. 
Feb 9 19:17:04.515253 sshd[4293]: Accepted publickey for core from 147.75.109.163 port 43050 ssh2: RSA SHA256:vbbYXSA+vx4OxGE8RCTI42TSNHgOaZKYEuMHy2EWP78 Feb 9 19:17:04.518876 sshd[4293]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:17:04.527677 systemd-logind[1727]: New session 8 of user core. Feb 9 19:17:04.529750 systemd[1]: Started session-8.scope. Feb 9 19:17:04.791225 sshd[4293]: pam_unix(sshd:session): session closed for user core Feb 9 19:17:04.798041 systemd[1]: sshd@7-172.31.28.78:22-147.75.109.163:43050.service: Deactivated successfully. Feb 9 19:17:04.800403 systemd[1]: session-8.scope: Deactivated successfully. Feb 9 19:17:04.803412 systemd-logind[1727]: Session 8 logged out. Waiting for processes to exit. Feb 9 19:17:04.806343 systemd-logind[1727]: Removed session 8. Feb 9 19:17:09.821508 systemd[1]: Started sshd@8-172.31.28.78:22-147.75.109.163:39218.service. Feb 9 19:17:09.996603 sshd[4307]: Accepted publickey for core from 147.75.109.163 port 39218 ssh2: RSA SHA256:vbbYXSA+vx4OxGE8RCTI42TSNHgOaZKYEuMHy2EWP78 Feb 9 19:17:09.999103 sshd[4307]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:17:10.007720 systemd-logind[1727]: New session 9 of user core. Feb 9 19:17:10.008765 systemd[1]: Started session-9.scope. Feb 9 19:17:10.264634 sshd[4307]: pam_unix(sshd:session): session closed for user core Feb 9 19:17:10.269503 systemd[1]: session-9.scope: Deactivated successfully. Feb 9 19:17:10.270757 systemd[1]: sshd@8-172.31.28.78:22-147.75.109.163:39218.service: Deactivated successfully. Feb 9 19:17:10.272500 systemd-logind[1727]: Session 9 logged out. Waiting for processes to exit. Feb 9 19:17:10.274437 systemd-logind[1727]: Removed session 9. Feb 9 19:17:15.292783 systemd[1]: Started sshd@9-172.31.28.78:22-147.75.109.163:36950.service. Feb 9 19:17:15.464339 sshd[4320]: Accepted publickey for core from 147.75.109.163 port 36950 ssh2: RSA SHA256:vbbYXSA+vx4OxGE8RCTI42TSNHgOaZKYEuMHy2EWP78 Feb 9 19:17:15.467006 sshd[4320]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:17:15.475729 systemd-logind[1727]: New session 10 of user core. Feb 9 19:17:15.477188 systemd[1]: Started session-10.scope. Feb 9 19:17:15.715014 sshd[4320]: pam_unix(sshd:session): session closed for user core Feb 9 19:17:15.720677 systemd[1]: session-10.scope: Deactivated successfully. Feb 9 19:17:15.721878 systemd[1]: sshd@9-172.31.28.78:22-147.75.109.163:36950.service: Deactivated successfully. Feb 9 19:17:15.723875 systemd-logind[1727]: Session 10 logged out. Waiting for processes to exit. Feb 9 19:17:15.725880 systemd-logind[1727]: Removed session 10. Feb 9 19:17:15.745575 systemd[1]: Started sshd@10-172.31.28.78:22-147.75.109.163:36954.service. Feb 9 19:17:15.913578 sshd[4333]: Accepted publickey for core from 147.75.109.163 port 36954 ssh2: RSA SHA256:vbbYXSA+vx4OxGE8RCTI42TSNHgOaZKYEuMHy2EWP78 Feb 9 19:17:15.916090 sshd[4333]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:17:15.924153 systemd-logind[1727]: New session 11 of user core. Feb 9 19:17:15.925121 systemd[1]: Started session-11.scope. Feb 9 19:17:17.683157 sshd[4333]: pam_unix(sshd:session): session closed for user core Feb 9 19:17:17.689298 systemd[1]: sshd@10-172.31.28.78:22-147.75.109.163:36954.service: Deactivated successfully. Feb 9 19:17:17.690698 systemd[1]: session-11.scope: Deactivated successfully. Feb 9 19:17:17.692829 systemd-logind[1727]: Session 11 logged out. Waiting for processes to exit. 
Feb 9 19:17:17.695261 systemd-logind[1727]: Removed session 11. Feb 9 19:17:17.712187 systemd[1]: Started sshd@11-172.31.28.78:22-147.75.109.163:36958.service. Feb 9 19:17:17.888437 sshd[4343]: Accepted publickey for core from 147.75.109.163 port 36958 ssh2: RSA SHA256:vbbYXSA+vx4OxGE8RCTI42TSNHgOaZKYEuMHy2EWP78 Feb 9 19:17:17.891029 sshd[4343]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:17:17.900221 systemd[1]: Started session-12.scope. Feb 9 19:17:17.902674 systemd-logind[1727]: New session 12 of user core. Feb 9 19:17:18.148689 sshd[4343]: pam_unix(sshd:session): session closed for user core Feb 9 19:17:18.155538 systemd[1]: sshd@11-172.31.28.78:22-147.75.109.163:36958.service: Deactivated successfully. Feb 9 19:17:18.156916 systemd[1]: session-12.scope: Deactivated successfully. Feb 9 19:17:18.158852 systemd-logind[1727]: Session 12 logged out. Waiting for processes to exit. Feb 9 19:17:18.160513 systemd-logind[1727]: Removed session 12. Feb 9 19:17:23.179950 systemd[1]: Started sshd@12-172.31.28.78:22-147.75.109.163:36966.service. Feb 9 19:17:23.347029 sshd[4355]: Accepted publickey for core from 147.75.109.163 port 36966 ssh2: RSA SHA256:vbbYXSA+vx4OxGE8RCTI42TSNHgOaZKYEuMHy2EWP78 Feb 9 19:17:23.350254 sshd[4355]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:17:23.359500 systemd[1]: Started session-13.scope. Feb 9 19:17:23.360147 systemd-logind[1727]: New session 13 of user core. Feb 9 19:17:23.614010 sshd[4355]: pam_unix(sshd:session): session closed for user core Feb 9 19:17:23.621471 systemd-logind[1727]: Session 13 logged out. Waiting for processes to exit. Feb 9 19:17:23.622352 systemd[1]: sshd@12-172.31.28.78:22-147.75.109.163:36966.service: Deactivated successfully. Feb 9 19:17:23.623935 systemd[1]: session-13.scope: Deactivated successfully. Feb 9 19:17:23.627493 systemd-logind[1727]: Removed session 13. Feb 9 19:17:28.642103 systemd[1]: Started sshd@13-172.31.28.78:22-147.75.109.163:43920.service. Feb 9 19:17:28.809347 sshd[4367]: Accepted publickey for core from 147.75.109.163 port 43920 ssh2: RSA SHA256:vbbYXSA+vx4OxGE8RCTI42TSNHgOaZKYEuMHy2EWP78 Feb 9 19:17:28.811989 sshd[4367]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:17:28.820063 systemd-logind[1727]: New session 14 of user core. Feb 9 19:17:28.821033 systemd[1]: Started session-14.scope. Feb 9 19:17:29.066171 sshd[4367]: pam_unix(sshd:session): session closed for user core Feb 9 19:17:29.070994 systemd-logind[1727]: Session 14 logged out. Waiting for processes to exit. Feb 9 19:17:29.071585 systemd[1]: sshd@13-172.31.28.78:22-147.75.109.163:43920.service: Deactivated successfully. Feb 9 19:17:29.072953 systemd[1]: session-14.scope: Deactivated successfully. Feb 9 19:17:29.074718 systemd-logind[1727]: Removed session 14. Feb 9 19:17:34.103027 systemd[1]: Started sshd@14-172.31.28.78:22-147.75.109.163:43928.service. Feb 9 19:17:34.277805 sshd[4379]: Accepted publickey for core from 147.75.109.163 port 43928 ssh2: RSA SHA256:vbbYXSA+vx4OxGE8RCTI42TSNHgOaZKYEuMHy2EWP78 Feb 9 19:17:34.280389 sshd[4379]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:17:34.290176 systemd[1]: Started session-15.scope. Feb 9 19:17:34.291683 systemd-logind[1727]: New session 15 of user core. Feb 9 19:17:34.548455 sshd[4379]: pam_unix(sshd:session): session closed for user core Feb 9 19:17:34.554107 systemd[1]: sshd@14-172.31.28.78:22-147.75.109.163:43928.service: Deactivated successfully. 
Feb 9 19:17:34.555534 systemd[1]: session-15.scope: Deactivated successfully. Feb 9 19:17:34.558039 systemd-logind[1727]: Session 15 logged out. Waiting for processes to exit. Feb 9 19:17:34.560480 systemd-logind[1727]: Removed session 15. Feb 9 19:17:34.579872 systemd[1]: Started sshd@15-172.31.28.78:22-147.75.109.163:50810.service. Feb 9 19:17:34.747942 sshd[4391]: Accepted publickey for core from 147.75.109.163 port 50810 ssh2: RSA SHA256:vbbYXSA+vx4OxGE8RCTI42TSNHgOaZKYEuMHy2EWP78 Feb 9 19:17:34.751126 sshd[4391]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:17:34.759366 systemd-logind[1727]: New session 16 of user core. Feb 9 19:17:34.760435 systemd[1]: Started session-16.scope. Feb 9 19:17:35.061949 sshd[4391]: pam_unix(sshd:session): session closed for user core Feb 9 19:17:35.067502 systemd[1]: sshd@15-172.31.28.78:22-147.75.109.163:50810.service: Deactivated successfully. Feb 9 19:17:35.069062 systemd[1]: session-16.scope: Deactivated successfully. Feb 9 19:17:35.070534 systemd-logind[1727]: Session 16 logged out. Waiting for processes to exit. Feb 9 19:17:35.073377 systemd-logind[1727]: Removed session 16. Feb 9 19:17:35.094529 systemd[1]: Started sshd@16-172.31.28.78:22-147.75.109.163:50814.service. Feb 9 19:17:35.266608 sshd[4401]: Accepted publickey for core from 147.75.109.163 port 50814 ssh2: RSA SHA256:vbbYXSA+vx4OxGE8RCTI42TSNHgOaZKYEuMHy2EWP78 Feb 9 19:17:35.269603 sshd[4401]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:17:35.279762 systemd[1]: Started session-17.scope. Feb 9 19:17:35.281715 systemd-logind[1727]: New session 17 of user core. Feb 9 19:17:36.817647 sshd[4401]: pam_unix(sshd:session): session closed for user core Feb 9 19:17:36.825095 systemd[1]: sshd@16-172.31.28.78:22-147.75.109.163:50814.service: Deactivated successfully. Feb 9 19:17:36.826652 systemd[1]: session-17.scope: Deactivated successfully. Feb 9 19:17:36.829744 systemd-logind[1727]: Session 17 logged out. Waiting for processes to exit. Feb 9 19:17:36.832260 systemd-logind[1727]: Removed session 17. Feb 9 19:17:36.850782 systemd[1]: Started sshd@17-172.31.28.78:22-147.75.109.163:50822.service. Feb 9 19:17:37.020977 sshd[4423]: Accepted publickey for core from 147.75.109.163 port 50822 ssh2: RSA SHA256:vbbYXSA+vx4OxGE8RCTI42TSNHgOaZKYEuMHy2EWP78 Feb 9 19:17:37.023648 sshd[4423]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:17:37.031980 systemd-logind[1727]: New session 18 of user core. Feb 9 19:17:37.035522 systemd[1]: Started session-18.scope. Feb 9 19:17:37.488065 sshd[4423]: pam_unix(sshd:session): session closed for user core Feb 9 19:17:37.493933 systemd-logind[1727]: Session 18 logged out. Waiting for processes to exit. Feb 9 19:17:37.494576 systemd[1]: session-18.scope: Deactivated successfully. Feb 9 19:17:37.497185 systemd-logind[1727]: Removed session 18. Feb 9 19:17:37.498152 systemd[1]: sshd@17-172.31.28.78:22-147.75.109.163:50822.service: Deactivated successfully. Feb 9 19:17:37.517577 systemd[1]: Started sshd@18-172.31.28.78:22-147.75.109.163:50826.service. Feb 9 19:17:37.688132 sshd[4478]: Accepted publickey for core from 147.75.109.163 port 50826 ssh2: RSA SHA256:vbbYXSA+vx4OxGE8RCTI42TSNHgOaZKYEuMHy2EWP78 Feb 9 19:17:37.691498 sshd[4478]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:17:37.700480 systemd[1]: Started session-19.scope. Feb 9 19:17:37.701831 systemd-logind[1727]: New session 19 of user core. 
Feb 9 19:17:37.952846 sshd[4478]: pam_unix(sshd:session): session closed for user core Feb 9 19:17:37.958469 systemd-logind[1727]: Session 19 logged out. Waiting for processes to exit. Feb 9 19:17:37.958903 systemd[1]: sshd@18-172.31.28.78:22-147.75.109.163:50826.service: Deactivated successfully. Feb 9 19:17:37.960290 systemd[1]: session-19.scope: Deactivated successfully. Feb 9 19:17:37.962121 systemd-logind[1727]: Removed session 19. Feb 9 19:17:42.982069 systemd[1]: Started sshd@19-172.31.28.78:22-147.75.109.163:50832.service. Feb 9 19:17:43.151256 sshd[4490]: Accepted publickey for core from 147.75.109.163 port 50832 ssh2: RSA SHA256:vbbYXSA+vx4OxGE8RCTI42TSNHgOaZKYEuMHy2EWP78 Feb 9 19:17:43.153755 sshd[4490]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:17:43.162974 systemd[1]: Started session-20.scope. Feb 9 19:17:43.164434 systemd-logind[1727]: New session 20 of user core. Feb 9 19:17:43.408091 sshd[4490]: pam_unix(sshd:session): session closed for user core Feb 9 19:17:43.413499 systemd-logind[1727]: Session 20 logged out. Waiting for processes to exit. Feb 9 19:17:43.414114 systemd[1]: sshd@19-172.31.28.78:22-147.75.109.163:50832.service: Deactivated successfully. Feb 9 19:17:43.415470 systemd[1]: session-20.scope: Deactivated successfully. Feb 9 19:17:43.418283 systemd-logind[1727]: Removed session 20. Feb 9 19:17:48.440398 systemd[1]: Started sshd@20-172.31.28.78:22-147.75.109.163:56408.service. Feb 9 19:17:48.619339 sshd[4529]: Accepted publickey for core from 147.75.109.163 port 56408 ssh2: RSA SHA256:vbbYXSA+vx4OxGE8RCTI42TSNHgOaZKYEuMHy2EWP78 Feb 9 19:17:48.621188 sshd[4529]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:17:48.629386 systemd-logind[1727]: New session 21 of user core. Feb 9 19:17:48.630330 systemd[1]: Started session-21.scope. Feb 9 19:17:48.874225 sshd[4529]: pam_unix(sshd:session): session closed for user core Feb 9 19:17:48.879856 systemd-logind[1727]: Session 21 logged out. Waiting for processes to exit. Feb 9 19:17:48.880455 systemd[1]: sshd@20-172.31.28.78:22-147.75.109.163:56408.service: Deactivated successfully. Feb 9 19:17:48.881883 systemd[1]: session-21.scope: Deactivated successfully. Feb 9 19:17:48.883790 systemd-logind[1727]: Removed session 21. Feb 9 19:17:53.904664 systemd[1]: Started sshd@21-172.31.28.78:22-147.75.109.163:56422.service. Feb 9 19:17:54.083041 sshd[4543]: Accepted publickey for core from 147.75.109.163 port 56422 ssh2: RSA SHA256:vbbYXSA+vx4OxGE8RCTI42TSNHgOaZKYEuMHy2EWP78 Feb 9 19:17:54.086104 sshd[4543]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:17:54.095317 systemd[1]: Started session-22.scope. Feb 9 19:17:54.096706 systemd-logind[1727]: New session 22 of user core. Feb 9 19:17:54.345794 sshd[4543]: pam_unix(sshd:session): session closed for user core Feb 9 19:17:54.354181 systemd[1]: sshd@21-172.31.28.78:22-147.75.109.163:56422.service: Deactivated successfully. Feb 9 19:17:54.355516 systemd[1]: session-22.scope: Deactivated successfully. Feb 9 19:17:54.355945 systemd-logind[1727]: Session 22 logged out. Waiting for processes to exit. Feb 9 19:17:54.358185 systemd-logind[1727]: Removed session 22. Feb 9 19:17:59.373382 systemd[1]: Started sshd@22-172.31.28.78:22-147.75.109.163:42782.service. 
Feb 9 19:17:59.540724 sshd[4555]: Accepted publickey for core from 147.75.109.163 port 42782 ssh2: RSA SHA256:vbbYXSA+vx4OxGE8RCTI42TSNHgOaZKYEuMHy2EWP78 Feb 9 19:17:59.543961 sshd[4555]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:17:59.552825 systemd[1]: Started session-23.scope. Feb 9 19:17:59.552891 systemd-logind[1727]: New session 23 of user core. Feb 9 19:17:59.794892 sshd[4555]: pam_unix(sshd:session): session closed for user core Feb 9 19:17:59.799830 systemd-logind[1727]: Session 23 logged out. Waiting for processes to exit. Feb 9 19:17:59.800440 systemd[1]: sshd@22-172.31.28.78:22-147.75.109.163:42782.service: Deactivated successfully. Feb 9 19:17:59.801904 systemd[1]: session-23.scope: Deactivated successfully. Feb 9 19:17:59.804146 systemd-logind[1727]: Removed session 23. Feb 9 19:17:59.822321 systemd[1]: Started sshd@23-172.31.28.78:22-147.75.109.163:42786.service. Feb 9 19:17:59.998292 sshd[4567]: Accepted publickey for core from 147.75.109.163 port 42786 ssh2: RSA SHA256:vbbYXSA+vx4OxGE8RCTI42TSNHgOaZKYEuMHy2EWP78 Feb 9 19:18:00.001006 sshd[4567]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:18:00.009220 systemd-logind[1727]: New session 24 of user core. Feb 9 19:18:00.010189 systemd[1]: Started session-24.scope. Feb 9 19:18:02.397867 env[1740]: time="2024-02-09T19:18:02.397809527Z" level=info msg="StopContainer for \"ce635839d098212d89e6390bc373488820d1f40647bd086edc0abe41dba7497d\" with timeout 30 (s)" Feb 9 19:18:02.401857 env[1740]: time="2024-02-09T19:18:02.401798941Z" level=info msg="Stop container \"ce635839d098212d89e6390bc373488820d1f40647bd086edc0abe41dba7497d\" with signal terminated" Feb 9 19:18:02.435488 systemd[1]: cri-containerd-ce635839d098212d89e6390bc373488820d1f40647bd086edc0abe41dba7497d.scope: Deactivated successfully. Feb 9 19:18:02.469996 env[1740]: time="2024-02-09T19:18:02.469860745Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 9 19:18:02.505384 env[1740]: time="2024-02-09T19:18:02.505305452Z" level=info msg="StopContainer for \"64f77334060744eb30dd56f0716a62bf536268d3ba2c1871a2ec3e7a6544f031\" with timeout 1 (s)" Feb 9 19:18:02.508800 env[1740]: time="2024-02-09T19:18:02.508731473Z" level=info msg="Stop container \"64f77334060744eb30dd56f0716a62bf536268d3ba2c1871a2ec3e7a6544f031\" with signal terminated" Feb 9 19:18:02.528071 systemd-networkd[1537]: lxc_health: Link DOWN Feb 9 19:18:02.528806 systemd-networkd[1537]: lxc_health: Lost carrier Feb 9 19:18:02.548229 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ce635839d098212d89e6390bc373488820d1f40647bd086edc0abe41dba7497d-rootfs.mount: Deactivated successfully. Feb 9 19:18:02.567134 systemd[1]: cri-containerd-64f77334060744eb30dd56f0716a62bf536268d3ba2c1871a2ec3e7a6544f031.scope: Deactivated successfully. Feb 9 19:18:02.567717 systemd[1]: cri-containerd-64f77334060744eb30dd56f0716a62bf536268d3ba2c1871a2ec3e7a6544f031.scope: Consumed 14.846s CPU time. 
Feb 9 19:18:02.581875 env[1740]: time="2024-02-09T19:18:02.581637027Z" level=info msg="shim disconnected" id=ce635839d098212d89e6390bc373488820d1f40647bd086edc0abe41dba7497d Feb 9 19:18:02.582359 env[1740]: time="2024-02-09T19:18:02.582323354Z" level=warning msg="cleaning up after shim disconnected" id=ce635839d098212d89e6390bc373488820d1f40647bd086edc0abe41dba7497d namespace=k8s.io Feb 9 19:18:02.582657 env[1740]: time="2024-02-09T19:18:02.582614933Z" level=info msg="cleaning up dead shim" Feb 9 19:18:02.599177 env[1740]: time="2024-02-09T19:18:02.599098661Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:18:02Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4624 runtime=io.containerd.runc.v2\n" Feb 9 19:18:02.603663 env[1740]: time="2024-02-09T19:18:02.603594429Z" level=info msg="StopContainer for \"ce635839d098212d89e6390bc373488820d1f40647bd086edc0abe41dba7497d\" returns successfully" Feb 9 19:18:02.604687 env[1740]: time="2024-02-09T19:18:02.604638087Z" level=info msg="StopPodSandbox for \"463296dc4706a6b3a607af051464639c2acb8f1c4669c00096637b317460b630\"" Feb 9 19:18:02.605060 env[1740]: time="2024-02-09T19:18:02.605020006Z" level=info msg="Container to stop \"ce635839d098212d89e6390bc373488820d1f40647bd086edc0abe41dba7497d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 19:18:02.611739 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-463296dc4706a6b3a607af051464639c2acb8f1c4669c00096637b317460b630-shm.mount: Deactivated successfully. Feb 9 19:18:02.632384 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-64f77334060744eb30dd56f0716a62bf536268d3ba2c1871a2ec3e7a6544f031-rootfs.mount: Deactivated successfully. Feb 9 19:18:02.642074 systemd[1]: cri-containerd-463296dc4706a6b3a607af051464639c2acb8f1c4669c00096637b317460b630.scope: Deactivated successfully. 
Feb 9 19:18:02.646366 env[1740]: time="2024-02-09T19:18:02.646279866Z" level=info msg="shim disconnected" id=64f77334060744eb30dd56f0716a62bf536268d3ba2c1871a2ec3e7a6544f031 Feb 9 19:18:02.647114 env[1740]: time="2024-02-09T19:18:02.647067526Z" level=warning msg="cleaning up after shim disconnected" id=64f77334060744eb30dd56f0716a62bf536268d3ba2c1871a2ec3e7a6544f031 namespace=k8s.io Feb 9 19:18:02.647240 env[1740]: time="2024-02-09T19:18:02.647130590Z" level=info msg="cleaning up dead shim" Feb 9 19:18:02.667359 env[1740]: time="2024-02-09T19:18:02.665478613Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:18:02Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4653 runtime=io.containerd.runc.v2\n" Feb 9 19:18:02.669171 env[1740]: time="2024-02-09T19:18:02.669118738Z" level=info msg="StopContainer for \"64f77334060744eb30dd56f0716a62bf536268d3ba2c1871a2ec3e7a6544f031\" returns successfully" Feb 9 19:18:02.670789 env[1740]: time="2024-02-09T19:18:02.670702819Z" level=info msg="StopPodSandbox for \"2380140802b39dadb1c2ebf33d8b64c172bda5cd20aff99ccb90d00c551f8d6f\"" Feb 9 19:18:02.670950 env[1740]: time="2024-02-09T19:18:02.670815589Z" level=info msg="Container to stop \"ace5c46eb6e8f8977dcc47b34f2e8ffa162b964b9de791463a9ab9c77e237b7c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 19:18:02.670950 env[1740]: time="2024-02-09T19:18:02.670847067Z" level=info msg="Container to stop \"401cfcdee34c4c486be22e0c0ea70728b5c950f8d1d337bb75fab3a375619d6a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 19:18:02.670950 env[1740]: time="2024-02-09T19:18:02.670882109Z" level=info msg="Container to stop \"64f77334060744eb30dd56f0716a62bf536268d3ba2c1871a2ec3e7a6544f031\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 19:18:02.670950 env[1740]: time="2024-02-09T19:18:02.670913526Z" level=info msg="Container to stop \"08ee1117449e56ddbddffe2ac1db68cfdabfa20754561a4c2ed668b0f48cc2a6\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 19:18:02.670950 env[1740]: time="2024-02-09T19:18:02.670939916Z" level=info msg="Container to stop \"1ff93a39668501992f32c474102aff1886114110e6db20688b4821b28fbb8063\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 19:18:02.677007 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2380140802b39dadb1c2ebf33d8b64c172bda5cd20aff99ccb90d00c551f8d6f-shm.mount: Deactivated successfully. Feb 9 19:18:02.699438 systemd[1]: cri-containerd-2380140802b39dadb1c2ebf33d8b64c172bda5cd20aff99ccb90d00c551f8d6f.scope: Deactivated successfully. 
Feb 9 19:18:02.709326 env[1740]: time="2024-02-09T19:18:02.709236786Z" level=info msg="shim disconnected" id=463296dc4706a6b3a607af051464639c2acb8f1c4669c00096637b317460b630 Feb 9 19:18:02.709326 env[1740]: time="2024-02-09T19:18:02.709313878Z" level=warning msg="cleaning up after shim disconnected" id=463296dc4706a6b3a607af051464639c2acb8f1c4669c00096637b317460b630 namespace=k8s.io Feb 9 19:18:02.709742 env[1740]: time="2024-02-09T19:18:02.709336631Z" level=info msg="cleaning up dead shim" Feb 9 19:18:02.737476 env[1740]: time="2024-02-09T19:18:02.737408097Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:18:02Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4685 runtime=io.containerd.runc.v2\n" Feb 9 19:18:02.739033 env[1740]: time="2024-02-09T19:18:02.738953705Z" level=info msg="TearDown network for sandbox \"463296dc4706a6b3a607af051464639c2acb8f1c4669c00096637b317460b630\" successfully" Feb 9 19:18:02.739033 env[1740]: time="2024-02-09T19:18:02.739019312Z" level=info msg="StopPodSandbox for \"463296dc4706a6b3a607af051464639c2acb8f1c4669c00096637b317460b630\" returns successfully" Feb 9 19:18:02.749177 env[1740]: time="2024-02-09T19:18:02.749101117Z" level=info msg="shim disconnected" id=2380140802b39dadb1c2ebf33d8b64c172bda5cd20aff99ccb90d00c551f8d6f Feb 9 19:18:02.749453 env[1740]: time="2024-02-09T19:18:02.749186130Z" level=warning msg="cleaning up after shim disconnected" id=2380140802b39dadb1c2ebf33d8b64c172bda5cd20aff99ccb90d00c551f8d6f namespace=k8s.io Feb 9 19:18:02.749453 env[1740]: time="2024-02-09T19:18:02.749209183Z" level=info msg="cleaning up dead shim" Feb 9 19:18:02.765370 env[1740]: time="2024-02-09T19:18:02.765295018Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:18:02Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4709 runtime=io.containerd.runc.v2\n" Feb 9 19:18:02.765959 env[1740]: time="2024-02-09T19:18:02.765909749Z" level=info msg="TearDown network for sandbox \"2380140802b39dadb1c2ebf33d8b64c172bda5cd20aff99ccb90d00c551f8d6f\" successfully" Feb 9 19:18:02.766104 env[1740]: time="2024-02-09T19:18:02.765958916Z" level=info msg="StopPodSandbox for \"2380140802b39dadb1c2ebf33d8b64c172bda5cd20aff99ccb90d00c551f8d6f\" returns successfully" Feb 9 19:18:02.830504 kubelet[2851]: I0209 19:18:02.830465 2851 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8bb3f0fd-9880-4fde-bc41-4bbdfd76da6d-host-proc-sys-kernel\") pod \"8bb3f0fd-9880-4fde-bc41-4bbdfd76da6d\" (UID: \"8bb3f0fd-9880-4fde-bc41-4bbdfd76da6d\") " Feb 9 19:18:02.831411 kubelet[2851]: I0209 19:18:02.831239 2851 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8bb3f0fd-9880-4fde-bc41-4bbdfd76da6d-host-proc-sys-net\") pod \"8bb3f0fd-9880-4fde-bc41-4bbdfd76da6d\" (UID: \"8bb3f0fd-9880-4fde-bc41-4bbdfd76da6d\") " Feb 9 19:18:02.831411 kubelet[2851]: I0209 19:18:02.830498 2851 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8bb3f0fd-9880-4fde-bc41-4bbdfd76da6d-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "8bb3f0fd-9880-4fde-bc41-4bbdfd76da6d" (UID: "8bb3f0fd-9880-4fde-bc41-4bbdfd76da6d"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:18:02.831411 kubelet[2851]: I0209 19:18:02.831332 2851 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/242f2d06-b76d-496c-bf36-838530801be6-cilium-config-path\") pod \"242f2d06-b76d-496c-bf36-838530801be6\" (UID: \"242f2d06-b76d-496c-bf36-838530801be6\") " Feb 9 19:18:02.831411 kubelet[2851]: I0209 19:18:02.831382 2851 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8bb3f0fd-9880-4fde-bc41-4bbdfd76da6d-cilium-config-path\") pod \"8bb3f0fd-9880-4fde-bc41-4bbdfd76da6d\" (UID: \"8bb3f0fd-9880-4fde-bc41-4bbdfd76da6d\") " Feb 9 19:18:02.831730 kubelet[2851]: W0209 19:18:02.831612 2851 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/242f2d06-b76d-496c-bf36-838530801be6/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Feb 9 19:18:02.832162 kubelet[2851]: I0209 19:18:02.831859 2851 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8bb3f0fd-9880-4fde-bc41-4bbdfd76da6d-clustermesh-secrets\") pod \"8bb3f0fd-9880-4fde-bc41-4bbdfd76da6d\" (UID: \"8bb3f0fd-9880-4fde-bc41-4bbdfd76da6d\") " Feb 9 19:18:02.832162 kubelet[2851]: I0209 19:18:02.831934 2851 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8bb3f0fd-9880-4fde-bc41-4bbdfd76da6d-lib-modules\") pod \"8bb3f0fd-9880-4fde-bc41-4bbdfd76da6d\" (UID: \"8bb3f0fd-9880-4fde-bc41-4bbdfd76da6d\") " Feb 9 19:18:02.832162 kubelet[2851]: I0209 19:18:02.832004 2851 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8bb3f0fd-9880-4fde-bc41-4bbdfd76da6d-cilium-cgroup\") pod \"8bb3f0fd-9880-4fde-bc41-4bbdfd76da6d\" (UID: \"8bb3f0fd-9880-4fde-bc41-4bbdfd76da6d\") " Feb 9 19:18:02.832162 kubelet[2851]: I0209 19:18:02.832045 2851 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8bb3f0fd-9880-4fde-bc41-4bbdfd76da6d-hostproc\") pod \"8bb3f0fd-9880-4fde-bc41-4bbdfd76da6d\" (UID: \"8bb3f0fd-9880-4fde-bc41-4bbdfd76da6d\") " Feb 9 19:18:02.832162 kubelet[2851]: I0209 19:18:02.832115 2851 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vl2bz\" (UniqueName: \"kubernetes.io/projected/8bb3f0fd-9880-4fde-bc41-4bbdfd76da6d-kube-api-access-vl2bz\") pod \"8bb3f0fd-9880-4fde-bc41-4bbdfd76da6d\" (UID: \"8bb3f0fd-9880-4fde-bc41-4bbdfd76da6d\") " Feb 9 19:18:02.832917 kubelet[2851]: I0209 19:18:02.832571 2851 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8bb3f0fd-9880-4fde-bc41-4bbdfd76da6d-bpf-maps\") pod \"8bb3f0fd-9880-4fde-bc41-4bbdfd76da6d\" (UID: \"8bb3f0fd-9880-4fde-bc41-4bbdfd76da6d\") " Feb 9 19:18:02.832917 kubelet[2851]: I0209 19:18:02.832672 2851 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8bb3f0fd-9880-4fde-bc41-4bbdfd76da6d-hubble-tls\") pod \"8bb3f0fd-9880-4fde-bc41-4bbdfd76da6d\" (UID: \"8bb3f0fd-9880-4fde-bc41-4bbdfd76da6d\") " Feb 9 19:18:02.832917 kubelet[2851]: I0209 19:18:02.832744 2851 
reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tw9bx\" (UniqueName: \"kubernetes.io/projected/242f2d06-b76d-496c-bf36-838530801be6-kube-api-access-tw9bx\") pod \"242f2d06-b76d-496c-bf36-838530801be6\" (UID: \"242f2d06-b76d-496c-bf36-838530801be6\") " Feb 9 19:18:02.832917 kubelet[2851]: I0209 19:18:02.832793 2851 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8bb3f0fd-9880-4fde-bc41-4bbdfd76da6d-xtables-lock\") pod \"8bb3f0fd-9880-4fde-bc41-4bbdfd76da6d\" (UID: \"8bb3f0fd-9880-4fde-bc41-4bbdfd76da6d\") " Feb 9 19:18:02.832917 kubelet[2851]: I0209 19:18:02.832855 2851 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8bb3f0fd-9880-4fde-bc41-4bbdfd76da6d-cni-path\") pod \"8bb3f0fd-9880-4fde-bc41-4bbdfd76da6d\" (UID: \"8bb3f0fd-9880-4fde-bc41-4bbdfd76da6d\") " Feb 9 19:18:02.833514 kubelet[2851]: I0209 19:18:02.832896 2851 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8bb3f0fd-9880-4fde-bc41-4bbdfd76da6d-cilium-run\") pod \"8bb3f0fd-9880-4fde-bc41-4bbdfd76da6d\" (UID: \"8bb3f0fd-9880-4fde-bc41-4bbdfd76da6d\") " Feb 9 19:18:02.833514 kubelet[2851]: I0209 19:18:02.833230 2851 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8bb3f0fd-9880-4fde-bc41-4bbdfd76da6d-etc-cni-netd\") pod \"8bb3f0fd-9880-4fde-bc41-4bbdfd76da6d\" (UID: \"8bb3f0fd-9880-4fde-bc41-4bbdfd76da6d\") " Feb 9 19:18:02.833514 kubelet[2851]: I0209 19:18:02.833324 2851 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8bb3f0fd-9880-4fde-bc41-4bbdfd76da6d-host-proc-sys-kernel\") on node \"ip-172-31-28-78\" DevicePath \"\"" Feb 9 19:18:02.834234 kubelet[2851]: I0209 19:18:02.833785 2851 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8bb3f0fd-9880-4fde-bc41-4bbdfd76da6d-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "8bb3f0fd-9880-4fde-bc41-4bbdfd76da6d" (UID: "8bb3f0fd-9880-4fde-bc41-4bbdfd76da6d"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:18:02.834234 kubelet[2851]: I0209 19:18:02.831327 2851 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8bb3f0fd-9880-4fde-bc41-4bbdfd76da6d-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "8bb3f0fd-9880-4fde-bc41-4bbdfd76da6d" (UID: "8bb3f0fd-9880-4fde-bc41-4bbdfd76da6d"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:18:02.834234 kubelet[2851]: W0209 19:18:02.834128 2851 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/8bb3f0fd-9880-4fde-bc41-4bbdfd76da6d/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Feb 9 19:18:02.836819 kubelet[2851]: I0209 19:18:02.836760 2851 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/242f2d06-b76d-496c-bf36-838530801be6-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "242f2d06-b76d-496c-bf36-838530801be6" (UID: "242f2d06-b76d-496c-bf36-838530801be6"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 9 19:18:02.836984 kubelet[2851]: I0209 19:18:02.836859 2851 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8bb3f0fd-9880-4fde-bc41-4bbdfd76da6d-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "8bb3f0fd-9880-4fde-bc41-4bbdfd76da6d" (UID: "8bb3f0fd-9880-4fde-bc41-4bbdfd76da6d"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:18:02.840225 kubelet[2851]: I0209 19:18:02.840169 2851 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8bb3f0fd-9880-4fde-bc41-4bbdfd76da6d-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "8bb3f0fd-9880-4fde-bc41-4bbdfd76da6d" (UID: "8bb3f0fd-9880-4fde-bc41-4bbdfd76da6d"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 9 19:18:02.841892 kubelet[2851]: I0209 19:18:02.841824 2851 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8bb3f0fd-9880-4fde-bc41-4bbdfd76da6d-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "8bb3f0fd-9880-4fde-bc41-4bbdfd76da6d" (UID: "8bb3f0fd-9880-4fde-bc41-4bbdfd76da6d"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 9 19:18:02.842136 kubelet[2851]: I0209 19:18:02.841925 2851 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8bb3f0fd-9880-4fde-bc41-4bbdfd76da6d-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "8bb3f0fd-9880-4fde-bc41-4bbdfd76da6d" (UID: "8bb3f0fd-9880-4fde-bc41-4bbdfd76da6d"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:18:02.842136 kubelet[2851]: I0209 19:18:02.841970 2851 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8bb3f0fd-9880-4fde-bc41-4bbdfd76da6d-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "8bb3f0fd-9880-4fde-bc41-4bbdfd76da6d" (UID: "8bb3f0fd-9880-4fde-bc41-4bbdfd76da6d"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:18:02.842136 kubelet[2851]: I0209 19:18:02.842011 2851 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8bb3f0fd-9880-4fde-bc41-4bbdfd76da6d-hostproc" (OuterVolumeSpecName: "hostproc") pod "8bb3f0fd-9880-4fde-bc41-4bbdfd76da6d" (UID: "8bb3f0fd-9880-4fde-bc41-4bbdfd76da6d"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:18:02.846260 kubelet[2851]: I0209 19:18:02.846208 2851 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8bb3f0fd-9880-4fde-bc41-4bbdfd76da6d-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "8bb3f0fd-9880-4fde-bc41-4bbdfd76da6d" (UID: "8bb3f0fd-9880-4fde-bc41-4bbdfd76da6d"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 19:18:02.846490 kubelet[2851]: I0209 19:18:02.846331 2851 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8bb3f0fd-9880-4fde-bc41-4bbdfd76da6d-kube-api-access-vl2bz" (OuterVolumeSpecName: "kube-api-access-vl2bz") pod "8bb3f0fd-9880-4fde-bc41-4bbdfd76da6d" (UID: "8bb3f0fd-9880-4fde-bc41-4bbdfd76da6d"). InnerVolumeSpecName "kube-api-access-vl2bz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 19:18:02.846675 kubelet[2851]: I0209 19:18:02.846374 2851 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8bb3f0fd-9880-4fde-bc41-4bbdfd76da6d-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "8bb3f0fd-9880-4fde-bc41-4bbdfd76da6d" (UID: "8bb3f0fd-9880-4fde-bc41-4bbdfd76da6d"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:18:02.846844 kubelet[2851]: I0209 19:18:02.846813 2851 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8bb3f0fd-9880-4fde-bc41-4bbdfd76da6d-cni-path" (OuterVolumeSpecName: "cni-path") pod "8bb3f0fd-9880-4fde-bc41-4bbdfd76da6d" (UID: "8bb3f0fd-9880-4fde-bc41-4bbdfd76da6d"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:18:02.847002 kubelet[2851]: I0209 19:18:02.846975 2851 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8bb3f0fd-9880-4fde-bc41-4bbdfd76da6d-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "8bb3f0fd-9880-4fde-bc41-4bbdfd76da6d" (UID: "8bb3f0fd-9880-4fde-bc41-4bbdfd76da6d"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:18:02.851213 kubelet[2851]: I0209 19:18:02.851143 2851 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/242f2d06-b76d-496c-bf36-838530801be6-kube-api-access-tw9bx" (OuterVolumeSpecName: "kube-api-access-tw9bx") pod "242f2d06-b76d-496c-bf36-838530801be6" (UID: "242f2d06-b76d-496c-bf36-838530801be6"). InnerVolumeSpecName "kube-api-access-tw9bx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 19:18:02.933969 kubelet[2851]: I0209 19:18:02.933930 2851 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-tw9bx\" (UniqueName: \"kubernetes.io/projected/242f2d06-b76d-496c-bf36-838530801be6-kube-api-access-tw9bx\") on node \"ip-172-31-28-78\" DevicePath \"\"" Feb 9 19:18:02.934217 kubelet[2851]: I0209 19:18:02.934194 2851 reconciler_common.go:295] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8bb3f0fd-9880-4fde-bc41-4bbdfd76da6d-xtables-lock\") on node \"ip-172-31-28-78\" DevicePath \"\"" Feb 9 19:18:02.934347 kubelet[2851]: I0209 19:18:02.934327 2851 reconciler_common.go:295] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8bb3f0fd-9880-4fde-bc41-4bbdfd76da6d-cni-path\") on node \"ip-172-31-28-78\" DevicePath \"\"" Feb 9 19:18:02.934474 kubelet[2851]: I0209 19:18:02.934455 2851 reconciler_common.go:295] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8bb3f0fd-9880-4fde-bc41-4bbdfd76da6d-hubble-tls\") on node \"ip-172-31-28-78\" DevicePath \"\"" Feb 9 19:18:02.934660 kubelet[2851]: I0209 19:18:02.934639 2851 reconciler_common.go:295] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8bb3f0fd-9880-4fde-bc41-4bbdfd76da6d-cilium-run\") on node \"ip-172-31-28-78\" DevicePath \"\"" Feb 9 19:18:02.934780 kubelet[2851]: I0209 19:18:02.934760 2851 reconciler_common.go:295] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8bb3f0fd-9880-4fde-bc41-4bbdfd76da6d-etc-cni-netd\") on node \"ip-172-31-28-78\" DevicePath \"\"" Feb 9 19:18:02.934913 kubelet[2851]: I0209 19:18:02.934892 2851 reconciler_common.go:295] "Volume detached for volume 
\"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8bb3f0fd-9880-4fde-bc41-4bbdfd76da6d-host-proc-sys-net\") on node \"ip-172-31-28-78\" DevicePath \"\"" Feb 9 19:18:02.935036 kubelet[2851]: I0209 19:18:02.935017 2851 reconciler_common.go:295] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/242f2d06-b76d-496c-bf36-838530801be6-cilium-config-path\") on node \"ip-172-31-28-78\" DevicePath \"\"" Feb 9 19:18:02.935157 kubelet[2851]: I0209 19:18:02.935138 2851 reconciler_common.go:295] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8bb3f0fd-9880-4fde-bc41-4bbdfd76da6d-cilium-config-path\") on node \"ip-172-31-28-78\" DevicePath \"\"" Feb 9 19:18:02.935273 kubelet[2851]: I0209 19:18:02.935253 2851 reconciler_common.go:295] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8bb3f0fd-9880-4fde-bc41-4bbdfd76da6d-clustermesh-secrets\") on node \"ip-172-31-28-78\" DevicePath \"\"" Feb 9 19:18:02.935397 kubelet[2851]: I0209 19:18:02.935378 2851 reconciler_common.go:295] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8bb3f0fd-9880-4fde-bc41-4bbdfd76da6d-lib-modules\") on node \"ip-172-31-28-78\" DevicePath \"\"" Feb 9 19:18:02.935522 kubelet[2851]: I0209 19:18:02.935502 2851 reconciler_common.go:295] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8bb3f0fd-9880-4fde-bc41-4bbdfd76da6d-cilium-cgroup\") on node \"ip-172-31-28-78\" DevicePath \"\"" Feb 9 19:18:02.935664 kubelet[2851]: I0209 19:18:02.935645 2851 reconciler_common.go:295] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8bb3f0fd-9880-4fde-bc41-4bbdfd76da6d-hostproc\") on node \"ip-172-31-28-78\" DevicePath \"\"" Feb 9 19:18:02.935786 kubelet[2851]: I0209 19:18:02.935766 2851 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-vl2bz\" (UniqueName: \"kubernetes.io/projected/8bb3f0fd-9880-4fde-bc41-4bbdfd76da6d-kube-api-access-vl2bz\") on node \"ip-172-31-28-78\" DevicePath \"\"" Feb 9 19:18:02.935901 kubelet[2851]: I0209 19:18:02.935881 2851 reconciler_common.go:295] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8bb3f0fd-9880-4fde-bc41-4bbdfd76da6d-bpf-maps\") on node \"ip-172-31-28-78\" DevicePath \"\"" Feb 9 19:18:03.372914 kubelet[2851]: I0209 19:18:03.372786 2851 scope.go:115] "RemoveContainer" containerID="ce635839d098212d89e6390bc373488820d1f40647bd086edc0abe41dba7497d" Feb 9 19:18:03.376668 env[1740]: time="2024-02-09T19:18:03.376408108Z" level=info msg="RemoveContainer for \"ce635839d098212d89e6390bc373488820d1f40647bd086edc0abe41dba7497d\"" Feb 9 19:18:03.398905 env[1740]: time="2024-02-09T19:18:03.394303631Z" level=info msg="RemoveContainer for \"ce635839d098212d89e6390bc373488820d1f40647bd086edc0abe41dba7497d\" returns successfully" Feb 9 19:18:03.408018 systemd[1]: Removed slice kubepods-besteffort-pod242f2d06_b76d_496c_bf36_838530801be6.slice. 
Feb 9 19:18:03.412329 kubelet[2851]: I0209 19:18:03.412293 2851 scope.go:115] "RemoveContainer" containerID="ce635839d098212d89e6390bc373488820d1f40647bd086edc0abe41dba7497d" Feb 9 19:18:03.413333 env[1740]: time="2024-02-09T19:18:03.412940936Z" level=error msg="ContainerStatus for \"ce635839d098212d89e6390bc373488820d1f40647bd086edc0abe41dba7497d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ce635839d098212d89e6390bc373488820d1f40647bd086edc0abe41dba7497d\": not found" Feb 9 19:18:03.415713 systemd[1]: Removed slice kubepods-burstable-pod8bb3f0fd_9880_4fde_bc41_4bbdfd76da6d.slice. Feb 9 19:18:03.415897 systemd[1]: kubepods-burstable-pod8bb3f0fd_9880_4fde_bc41_4bbdfd76da6d.slice: Consumed 15.083s CPU time. Feb 9 19:18:03.423736 kubelet[2851]: E0209 19:18:03.419363 2851 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ce635839d098212d89e6390bc373488820d1f40647bd086edc0abe41dba7497d\": not found" containerID="ce635839d098212d89e6390bc373488820d1f40647bd086edc0abe41dba7497d" Feb 9 19:18:03.423736 kubelet[2851]: I0209 19:18:03.419437 2851 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:ce635839d098212d89e6390bc373488820d1f40647bd086edc0abe41dba7497d} err="failed to get container status \"ce635839d098212d89e6390bc373488820d1f40647bd086edc0abe41dba7497d\": rpc error: code = NotFound desc = an error occurred when try to find container \"ce635839d098212d89e6390bc373488820d1f40647bd086edc0abe41dba7497d\": not found" Feb 9 19:18:03.423736 kubelet[2851]: I0209 19:18:03.419462 2851 scope.go:115] "RemoveContainer" containerID="64f77334060744eb30dd56f0716a62bf536268d3ba2c1871a2ec3e7a6544f031" Feb 9 19:18:03.419689 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-463296dc4706a6b3a607af051464639c2acb8f1c4669c00096637b317460b630-rootfs.mount: Deactivated successfully. Feb 9 19:18:03.419887 systemd[1]: var-lib-kubelet-pods-242f2d06\x2db76d\x2d496c\x2dbf36\x2d838530801be6-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dtw9bx.mount: Deactivated successfully. Feb 9 19:18:03.420031 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2380140802b39dadb1c2ebf33d8b64c172bda5cd20aff99ccb90d00c551f8d6f-rootfs.mount: Deactivated successfully. Feb 9 19:18:03.420181 systemd[1]: var-lib-kubelet-pods-8bb3f0fd\x2d9880\x2d4fde\x2dbc41\x2d4bbdfd76da6d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dvl2bz.mount: Deactivated successfully. Feb 9 19:18:03.420328 systemd[1]: var-lib-kubelet-pods-8bb3f0fd\x2d9880\x2d4fde\x2dbc41\x2d4bbdfd76da6d-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 9 19:18:03.420487 systemd[1]: var-lib-kubelet-pods-8bb3f0fd\x2d9880\x2d4fde\x2dbc41\x2d4bbdfd76da6d-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Feb 9 19:18:03.425975 env[1740]: time="2024-02-09T19:18:03.425511026Z" level=info msg="RemoveContainer for \"64f77334060744eb30dd56f0716a62bf536268d3ba2c1871a2ec3e7a6544f031\"" Feb 9 19:18:03.439022 env[1740]: time="2024-02-09T19:18:03.438936461Z" level=info msg="RemoveContainer for \"64f77334060744eb30dd56f0716a62bf536268d3ba2c1871a2ec3e7a6544f031\" returns successfully" Feb 9 19:18:03.439417 kubelet[2851]: I0209 19:18:03.439362 2851 scope.go:115] "RemoveContainer" containerID="1ff93a39668501992f32c474102aff1886114110e6db20688b4821b28fbb8063" Feb 9 19:18:03.441691 env[1740]: time="2024-02-09T19:18:03.441620712Z" level=info msg="RemoveContainer for \"1ff93a39668501992f32c474102aff1886114110e6db20688b4821b28fbb8063\"" Feb 9 19:18:03.446878 env[1740]: time="2024-02-09T19:18:03.446794841Z" level=info msg="RemoveContainer for \"1ff93a39668501992f32c474102aff1886114110e6db20688b4821b28fbb8063\" returns successfully" Feb 9 19:18:03.447399 kubelet[2851]: I0209 19:18:03.447314 2851 scope.go:115] "RemoveContainer" containerID="401cfcdee34c4c486be22e0c0ea70728b5c950f8d1d337bb75fab3a375619d6a" Feb 9 19:18:03.453919 env[1740]: time="2024-02-09T19:18:03.453446619Z" level=info msg="RemoveContainer for \"401cfcdee34c4c486be22e0c0ea70728b5c950f8d1d337bb75fab3a375619d6a\"" Feb 9 19:18:03.458403 env[1740]: time="2024-02-09T19:18:03.458342946Z" level=info msg="RemoveContainer for \"401cfcdee34c4c486be22e0c0ea70728b5c950f8d1d337bb75fab3a375619d6a\" returns successfully" Feb 9 19:18:03.460129 kubelet[2851]: I0209 19:18:03.459024 2851 scope.go:115] "RemoveContainer" containerID="08ee1117449e56ddbddffe2ac1db68cfdabfa20754561a4c2ed668b0f48cc2a6" Feb 9 19:18:03.464270 env[1740]: time="2024-02-09T19:18:03.464188810Z" level=info msg="RemoveContainer for \"08ee1117449e56ddbddffe2ac1db68cfdabfa20754561a4c2ed668b0f48cc2a6\"" Feb 9 19:18:03.474050 env[1740]: time="2024-02-09T19:18:03.473964267Z" level=info msg="RemoveContainer for \"08ee1117449e56ddbddffe2ac1db68cfdabfa20754561a4c2ed668b0f48cc2a6\" returns successfully" Feb 9 19:18:03.474491 kubelet[2851]: I0209 19:18:03.474462 2851 scope.go:115] "RemoveContainer" containerID="ace5c46eb6e8f8977dcc47b34f2e8ffa162b964b9de791463a9ab9c77e237b7c" Feb 9 19:18:03.477342 env[1740]: time="2024-02-09T19:18:03.477239537Z" level=info msg="RemoveContainer for \"ace5c46eb6e8f8977dcc47b34f2e8ffa162b964b9de791463a9ab9c77e237b7c\"" Feb 9 19:18:03.484951 env[1740]: time="2024-02-09T19:18:03.484239405Z" level=info msg="RemoveContainer for \"ace5c46eb6e8f8977dcc47b34f2e8ffa162b964b9de791463a9ab9c77e237b7c\" returns successfully" Feb 9 19:18:03.485420 kubelet[2851]: I0209 19:18:03.485387 2851 scope.go:115] "RemoveContainer" containerID="64f77334060744eb30dd56f0716a62bf536268d3ba2c1871a2ec3e7a6544f031" Feb 9 19:18:03.486902 env[1740]: time="2024-02-09T19:18:03.486735019Z" level=error msg="ContainerStatus for \"64f77334060744eb30dd56f0716a62bf536268d3ba2c1871a2ec3e7a6544f031\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"64f77334060744eb30dd56f0716a62bf536268d3ba2c1871a2ec3e7a6544f031\": not found" Feb 9 19:18:03.487359 kubelet[2851]: E0209 19:18:03.487328 2851 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"64f77334060744eb30dd56f0716a62bf536268d3ba2c1871a2ec3e7a6544f031\": not found" containerID="64f77334060744eb30dd56f0716a62bf536268d3ba2c1871a2ec3e7a6544f031" Feb 9 19:18:03.487643 kubelet[2851]: I0209 19:18:03.487618 2851 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:64f77334060744eb30dd56f0716a62bf536268d3ba2c1871a2ec3e7a6544f031} err="failed to get container status \"64f77334060744eb30dd56f0716a62bf536268d3ba2c1871a2ec3e7a6544f031\": rpc error: code = NotFound desc = an error occurred when try to find container \"64f77334060744eb30dd56f0716a62bf536268d3ba2c1871a2ec3e7a6544f031\": not found" Feb 9 19:18:03.487803 kubelet[2851]: I0209 19:18:03.487781 2851 scope.go:115] "RemoveContainer" containerID="1ff93a39668501992f32c474102aff1886114110e6db20688b4821b28fbb8063" Feb 9 19:18:03.488508 env[1740]: time="2024-02-09T19:18:03.488323638Z" level=error msg="ContainerStatus for \"1ff93a39668501992f32c474102aff1886114110e6db20688b4821b28fbb8063\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1ff93a39668501992f32c474102aff1886114110e6db20688b4821b28fbb8063\": not found" Feb 9 19:18:03.488951 kubelet[2851]: E0209 19:18:03.488922 2851 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1ff93a39668501992f32c474102aff1886114110e6db20688b4821b28fbb8063\": not found" containerID="1ff93a39668501992f32c474102aff1886114110e6db20688b4821b28fbb8063" Feb 9 19:18:03.489223 kubelet[2851]: I0209 19:18:03.489175 2851 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:1ff93a39668501992f32c474102aff1886114110e6db20688b4821b28fbb8063} err="failed to get container status \"1ff93a39668501992f32c474102aff1886114110e6db20688b4821b28fbb8063\": rpc error: code = NotFound desc = an error occurred when try to find container \"1ff93a39668501992f32c474102aff1886114110e6db20688b4821b28fbb8063\": not found" Feb 9 19:18:03.489426 kubelet[2851]: I0209 19:18:03.489400 2851 scope.go:115] "RemoveContainer" containerID="401cfcdee34c4c486be22e0c0ea70728b5c950f8d1d337bb75fab3a375619d6a" Feb 9 19:18:03.490286 env[1740]: time="2024-02-09T19:18:03.490088221Z" level=error msg="ContainerStatus for \"401cfcdee34c4c486be22e0c0ea70728b5c950f8d1d337bb75fab3a375619d6a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"401cfcdee34c4c486be22e0c0ea70728b5c950f8d1d337bb75fab3a375619d6a\": not found" Feb 9 19:18:03.490886 kubelet[2851]: E0209 19:18:03.490838 2851 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"401cfcdee34c4c486be22e0c0ea70728b5c950f8d1d337bb75fab3a375619d6a\": not found" containerID="401cfcdee34c4c486be22e0c0ea70728b5c950f8d1d337bb75fab3a375619d6a" Feb 9 19:18:03.491166 kubelet[2851]: I0209 19:18:03.491117 2851 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:401cfcdee34c4c486be22e0c0ea70728b5c950f8d1d337bb75fab3a375619d6a} err="failed to get container status \"401cfcdee34c4c486be22e0c0ea70728b5c950f8d1d337bb75fab3a375619d6a\": rpc error: code = NotFound desc = an error occurred when try to find container \"401cfcdee34c4c486be22e0c0ea70728b5c950f8d1d337bb75fab3a375619d6a\": not found" Feb 9 19:18:03.491318 kubelet[2851]: I0209 19:18:03.491297 2851 scope.go:115] "RemoveContainer" containerID="08ee1117449e56ddbddffe2ac1db68cfdabfa20754561a4c2ed668b0f48cc2a6" Feb 9 19:18:03.492078 env[1740]: time="2024-02-09T19:18:03.491953646Z" level=error msg="ContainerStatus for 
\"08ee1117449e56ddbddffe2ac1db68cfdabfa20754561a4c2ed668b0f48cc2a6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"08ee1117449e56ddbddffe2ac1db68cfdabfa20754561a4c2ed668b0f48cc2a6\": not found" Feb 9 19:18:03.492543 kubelet[2851]: E0209 19:18:03.492488 2851 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"08ee1117449e56ddbddffe2ac1db68cfdabfa20754561a4c2ed668b0f48cc2a6\": not found" containerID="08ee1117449e56ddbddffe2ac1db68cfdabfa20754561a4c2ed668b0f48cc2a6" Feb 9 19:18:03.492830 kubelet[2851]: I0209 19:18:03.492794 2851 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:08ee1117449e56ddbddffe2ac1db68cfdabfa20754561a4c2ed668b0f48cc2a6} err="failed to get container status \"08ee1117449e56ddbddffe2ac1db68cfdabfa20754561a4c2ed668b0f48cc2a6\": rpc error: code = NotFound desc = an error occurred when try to find container \"08ee1117449e56ddbddffe2ac1db68cfdabfa20754561a4c2ed668b0f48cc2a6\": not found" Feb 9 19:18:03.492991 kubelet[2851]: I0209 19:18:03.492956 2851 scope.go:115] "RemoveContainer" containerID="ace5c46eb6e8f8977dcc47b34f2e8ffa162b964b9de791463a9ab9c77e237b7c" Feb 9 19:18:03.493957 env[1740]: time="2024-02-09T19:18:03.493503935Z" level=error msg="ContainerStatus for \"ace5c46eb6e8f8977dcc47b34f2e8ffa162b964b9de791463a9ab9c77e237b7c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ace5c46eb6e8f8977dcc47b34f2e8ffa162b964b9de791463a9ab9c77e237b7c\": not found" Feb 9 19:18:03.494450 kubelet[2851]: E0209 19:18:03.494394 2851 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ace5c46eb6e8f8977dcc47b34f2e8ffa162b964b9de791463a9ab9c77e237b7c\": not found" containerID="ace5c46eb6e8f8977dcc47b34f2e8ffa162b964b9de791463a9ab9c77e237b7c" Feb 9 19:18:03.494808 kubelet[2851]: I0209 19:18:03.494774 2851 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:ace5c46eb6e8f8977dcc47b34f2e8ffa162b964b9de791463a9ab9c77e237b7c} err="failed to get container status \"ace5c46eb6e8f8977dcc47b34f2e8ffa162b964b9de791463a9ab9c77e237b7c\": rpc error: code = NotFound desc = an error occurred when try to find container \"ace5c46eb6e8f8977dcc47b34f2e8ffa162b964b9de791463a9ab9c77e237b7c\": not found" Feb 9 19:18:03.885634 env[1740]: time="2024-02-09T19:18:03.883730129Z" level=info msg="StopContainer for \"64f77334060744eb30dd56f0716a62bf536268d3ba2c1871a2ec3e7a6544f031\" with timeout 1 (s)" Feb 9 19:18:03.885634 env[1740]: time="2024-02-09T19:18:03.883851132Z" level=error msg="StopContainer for \"64f77334060744eb30dd56f0716a62bf536268d3ba2c1871a2ec3e7a6544f031\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"64f77334060744eb30dd56f0716a62bf536268d3ba2c1871a2ec3e7a6544f031\": not found" Feb 9 19:18:03.885634 env[1740]: time="2024-02-09T19:18:03.884044666Z" level=info msg="StopContainer for \"ce635839d098212d89e6390bc373488820d1f40647bd086edc0abe41dba7497d\" with timeout 1 (s)" Feb 9 19:18:03.885634 env[1740]: time="2024-02-09T19:18:03.884088000Z" level=error msg="StopContainer for \"ce635839d098212d89e6390bc373488820d1f40647bd086edc0abe41dba7497d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"ce635839d098212d89e6390bc373488820d1f40647bd086edc0abe41dba7497d\": not found" Feb 9 19:18:03.886639 kubelet[2851]: E0209 19:18:03.886602 2851 remote_runtime.go:349] "StopContainer from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"64f77334060744eb30dd56f0716a62bf536268d3ba2c1871a2ec3e7a6544f031\": not found" containerID="64f77334060744eb30dd56f0716a62bf536268d3ba2c1871a2ec3e7a6544f031" Feb 9 19:18:03.889239 kubelet[2851]: E0209 19:18:03.889203 2851 remote_runtime.go:349] "StopContainer from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ce635839d098212d89e6390bc373488820d1f40647bd086edc0abe41dba7497d\": not found" containerID="ce635839d098212d89e6390bc373488820d1f40647bd086edc0abe41dba7497d" Feb 9 19:18:03.890195 env[1740]: time="2024-02-09T19:18:03.889719665Z" level=info msg="StopPodSandbox for \"2380140802b39dadb1c2ebf33d8b64c172bda5cd20aff99ccb90d00c551f8d6f\"" Feb 9 19:18:03.890195 env[1740]: time="2024-02-09T19:18:03.889953017Z" level=info msg="TearDown network for sandbox \"2380140802b39dadb1c2ebf33d8b64c172bda5cd20aff99ccb90d00c551f8d6f\" successfully" Feb 9 19:18:03.890195 env[1740]: time="2024-02-09T19:18:03.890029341Z" level=info msg="StopPodSandbox for \"2380140802b39dadb1c2ebf33d8b64c172bda5cd20aff99ccb90d00c551f8d6f\" returns successfully" Feb 9 19:18:03.890681 kubelet[2851]: I0209 19:18:03.888884 2851 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=242f2d06-b76d-496c-bf36-838530801be6 path="/var/lib/kubelet/pods/242f2d06-b76d-496c-bf36-838530801be6/volumes" Feb 9 19:18:03.891860 env[1740]: time="2024-02-09T19:18:03.891770032Z" level=info msg="StopPodSandbox for \"463296dc4706a6b3a607af051464639c2acb8f1c4669c00096637b317460b630\"" Feb 9 19:18:03.892033 env[1740]: time="2024-02-09T19:18:03.891948805Z" level=info msg="TearDown network for sandbox \"463296dc4706a6b3a607af051464639c2acb8f1c4669c00096637b317460b630\" successfully" Feb 9 19:18:03.892033 env[1740]: time="2024-02-09T19:18:03.892011712Z" level=info msg="StopPodSandbox for \"463296dc4706a6b3a607af051464639c2acb8f1c4669c00096637b317460b630\" returns successfully" Feb 9 19:18:03.894837 kubelet[2851]: I0209 19:18:03.894799 2851 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=8bb3f0fd-9880-4fde-bc41-4bbdfd76da6d path="/var/lib/kubelet/pods/8bb3f0fd-9880-4fde-bc41-4bbdfd76da6d/volumes" Feb 9 19:18:04.312454 sshd[4567]: pam_unix(sshd:session): session closed for user core Feb 9 19:18:04.317726 systemd[1]: sshd@23-172.31.28.78:22-147.75.109.163:42786.service: Deactivated successfully. Feb 9 19:18:04.319004 systemd[1]: session-24.scope: Deactivated successfully. Feb 9 19:18:04.319368 systemd[1]: session-24.scope: Consumed 1.596s CPU time. Feb 9 19:18:04.321051 systemd-logind[1727]: Session 24 logged out. Waiting for processes to exit. Feb 9 19:18:04.323369 systemd-logind[1727]: Removed session 24. Feb 9 19:18:04.339761 systemd[1]: Started sshd@24-172.31.28.78:22-147.75.109.163:42796.service. Feb 9 19:18:04.507302 sshd[4727]: Accepted publickey for core from 147.75.109.163 port 42796 ssh2: RSA SHA256:vbbYXSA+vx4OxGE8RCTI42TSNHgOaZKYEuMHy2EWP78 Feb 9 19:18:04.510269 sshd[4727]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:18:04.519516 systemd[1]: Started session-25.scope. Feb 9 19:18:04.521327 systemd-logind[1727]: New session 25 of user core. 
Feb 9 19:18:05.566435 sshd[4727]: pam_unix(sshd:session): session closed for user core Feb 9 19:18:05.573699 systemd[1]: sshd@24-172.31.28.78:22-147.75.109.163:42796.service: Deactivated successfully. Feb 9 19:18:05.575136 systemd[1]: session-25.scope: Deactivated successfully. Feb 9 19:18:05.577862 systemd-logind[1727]: Session 25 logged out. Waiting for processes to exit. Feb 9 19:18:05.581088 systemd-logind[1727]: Removed session 25. Feb 9 19:18:05.588453 kubelet[2851]: I0209 19:18:05.588399 2851 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:18:05.589163 kubelet[2851]: E0209 19:18:05.589129 2851 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8bb3f0fd-9880-4fde-bc41-4bbdfd76da6d" containerName="mount-cgroup" Feb 9 19:18:05.589297 kubelet[2851]: E0209 19:18:05.589274 2851 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8bb3f0fd-9880-4fde-bc41-4bbdfd76da6d" containerName="mount-bpf-fs" Feb 9 19:18:05.589411 kubelet[2851]: E0209 19:18:05.589391 2851 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8bb3f0fd-9880-4fde-bc41-4bbdfd76da6d" containerName="clean-cilium-state" Feb 9 19:18:05.590602 kubelet[2851]: E0209 19:18:05.590531 2851 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8bb3f0fd-9880-4fde-bc41-4bbdfd76da6d" containerName="cilium-agent" Feb 9 19:18:05.590858 kubelet[2851]: E0209 19:18:05.590830 2851 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8bb3f0fd-9880-4fde-bc41-4bbdfd76da6d" containerName="apply-sysctl-overwrites" Feb 9 19:18:05.590995 kubelet[2851]: E0209 19:18:05.590972 2851 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="242f2d06-b76d-496c-bf36-838530801be6" containerName="cilium-operator" Feb 9 19:18:05.591212 kubelet[2851]: I0209 19:18:05.591161 2851 memory_manager.go:346] "RemoveStaleState removing state" podUID="8bb3f0fd-9880-4fde-bc41-4bbdfd76da6d" containerName="cilium-agent" Feb 9 19:18:05.591362 kubelet[2851]: I0209 19:18:05.591335 2851 memory_manager.go:346] "RemoveStaleState removing state" podUID="242f2d06-b76d-496c-bf36-838530801be6" containerName="cilium-operator" Feb 9 19:18:05.600354 systemd[1]: Started sshd@25-172.31.28.78:22-147.75.109.163:47398.service. Feb 9 19:18:05.624311 systemd[1]: Created slice kubepods-burstable-pod3baf3be0_8ebe_410d_926c_e740cdbbdcc6.slice. 
Feb 9 19:18:05.633999 kubelet[2851]: W0209 19:18:05.633931 2851 reflector.go:424] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ip-172-31-28-78" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-28-78' and this object Feb 9 19:18:05.633999 kubelet[2851]: E0209 19:18:05.633998 2851 reflector.go:140] object-"kube-system"/"hubble-server-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ip-172-31-28-78" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-28-78' and this object Feb 9 19:18:05.634280 kubelet[2851]: W0209 19:18:05.634115 2851 reflector.go:424] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ip-172-31-28-78" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-28-78' and this object Feb 9 19:18:05.634280 kubelet[2851]: E0209 19:18:05.634141 2851 reflector.go:140] object-"kube-system"/"cilium-clustermesh": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ip-172-31-28-78" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-28-78' and this object Feb 9 19:18:05.634419 kubelet[2851]: W0209 19:18:05.634324 2851 reflector.go:424] object-"kube-system"/"cilium-ipsec-keys": failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:ip-172-31-28-78" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-28-78' and this object Feb 9 19:18:05.634419 kubelet[2851]: E0209 19:18:05.634356 2851 reflector.go:140] object-"kube-system"/"cilium-ipsec-keys": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:ip-172-31-28-78" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-28-78' and this object Feb 9 19:18:05.656020 kubelet[2851]: I0209 19:18:05.655922 2851 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3baf3be0-8ebe-410d-926c-e740cdbbdcc6-cilium-cgroup\") pod \"cilium-25s5n\" (UID: \"3baf3be0-8ebe-410d-926c-e740cdbbdcc6\") " pod="kube-system/cilium-25s5n" Feb 9 19:18:05.656185 kubelet[2851]: I0209 19:18:05.656065 2851 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3baf3be0-8ebe-410d-926c-e740cdbbdcc6-cni-path\") pod \"cilium-25s5n\" (UID: \"3baf3be0-8ebe-410d-926c-e740cdbbdcc6\") " pod="kube-system/cilium-25s5n" Feb 9 19:18:05.656185 kubelet[2851]: I0209 19:18:05.656149 2851 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/3baf3be0-8ebe-410d-926c-e740cdbbdcc6-cilium-ipsec-secrets\") pod \"cilium-25s5n\" (UID: \"3baf3be0-8ebe-410d-926c-e740cdbbdcc6\") " pod="kube-system/cilium-25s5n" Feb 9 19:18:05.656343 kubelet[2851]: I0209 19:18:05.656224 2851 
reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3baf3be0-8ebe-410d-926c-e740cdbbdcc6-cilium-config-path\") pod \"cilium-25s5n\" (UID: \"3baf3be0-8ebe-410d-926c-e740cdbbdcc6\") " pod="kube-system/cilium-25s5n" Feb 9 19:18:05.656343 kubelet[2851]: I0209 19:18:05.656279 2851 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3baf3be0-8ebe-410d-926c-e740cdbbdcc6-bpf-maps\") pod \"cilium-25s5n\" (UID: \"3baf3be0-8ebe-410d-926c-e740cdbbdcc6\") " pod="kube-system/cilium-25s5n" Feb 9 19:18:05.656459 kubelet[2851]: I0209 19:18:05.656349 2851 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3baf3be0-8ebe-410d-926c-e740cdbbdcc6-etc-cni-netd\") pod \"cilium-25s5n\" (UID: \"3baf3be0-8ebe-410d-926c-e740cdbbdcc6\") " pod="kube-system/cilium-25s5n" Feb 9 19:18:05.656459 kubelet[2851]: I0209 19:18:05.656417 2851 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3baf3be0-8ebe-410d-926c-e740cdbbdcc6-lib-modules\") pod \"cilium-25s5n\" (UID: \"3baf3be0-8ebe-410d-926c-e740cdbbdcc6\") " pod="kube-system/cilium-25s5n" Feb 9 19:18:05.656641 kubelet[2851]: I0209 19:18:05.656495 2851 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3baf3be0-8ebe-410d-926c-e740cdbbdcc6-hubble-tls\") pod \"cilium-25s5n\" (UID: \"3baf3be0-8ebe-410d-926c-e740cdbbdcc6\") " pod="kube-system/cilium-25s5n" Feb 9 19:18:05.656641 kubelet[2851]: I0209 19:18:05.656543 2851 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4w8qj\" (UniqueName: \"kubernetes.io/projected/3baf3be0-8ebe-410d-926c-e740cdbbdcc6-kube-api-access-4w8qj\") pod \"cilium-25s5n\" (UID: \"3baf3be0-8ebe-410d-926c-e740cdbbdcc6\") " pod="kube-system/cilium-25s5n" Feb 9 19:18:05.656776 kubelet[2851]: I0209 19:18:05.656664 2851 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3baf3be0-8ebe-410d-926c-e740cdbbdcc6-host-proc-sys-net\") pod \"cilium-25s5n\" (UID: \"3baf3be0-8ebe-410d-926c-e740cdbbdcc6\") " pod="kube-system/cilium-25s5n" Feb 9 19:18:05.656776 kubelet[2851]: I0209 19:18:05.656715 2851 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3baf3be0-8ebe-410d-926c-e740cdbbdcc6-clustermesh-secrets\") pod \"cilium-25s5n\" (UID: \"3baf3be0-8ebe-410d-926c-e740cdbbdcc6\") " pod="kube-system/cilium-25s5n" Feb 9 19:18:05.656901 kubelet[2851]: I0209 19:18:05.656790 2851 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3baf3be0-8ebe-410d-926c-e740cdbbdcc6-cilium-run\") pod \"cilium-25s5n\" (UID: \"3baf3be0-8ebe-410d-926c-e740cdbbdcc6\") " pod="kube-system/cilium-25s5n" Feb 9 19:18:05.656901 kubelet[2851]: I0209 19:18:05.656861 2851 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/3baf3be0-8ebe-410d-926c-e740cdbbdcc6-xtables-lock\") pod \"cilium-25s5n\" (UID: \"3baf3be0-8ebe-410d-926c-e740cdbbdcc6\") " pod="kube-system/cilium-25s5n" Feb 9 19:18:05.657036 kubelet[2851]: I0209 19:18:05.656935 2851 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3baf3be0-8ebe-410d-926c-e740cdbbdcc6-host-proc-sys-kernel\") pod \"cilium-25s5n\" (UID: \"3baf3be0-8ebe-410d-926c-e740cdbbdcc6\") " pod="kube-system/cilium-25s5n" Feb 9 19:18:05.657036 kubelet[2851]: I0209 19:18:05.656982 2851 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3baf3be0-8ebe-410d-926c-e740cdbbdcc6-hostproc\") pod \"cilium-25s5n\" (UID: \"3baf3be0-8ebe-410d-926c-e740cdbbdcc6\") " pod="kube-system/cilium-25s5n" Feb 9 19:18:05.809392 sshd[4737]: Accepted publickey for core from 147.75.109.163 port 47398 ssh2: RSA SHA256:vbbYXSA+vx4OxGE8RCTI42TSNHgOaZKYEuMHy2EWP78 Feb 9 19:18:05.813064 sshd[4737]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:18:05.823502 systemd[1]: Started session-26.scope. Feb 9 19:18:05.824374 systemd-logind[1727]: New session 26 of user core. Feb 9 19:18:06.097008 sshd[4737]: pam_unix(sshd:session): session closed for user core Feb 9 19:18:06.103223 systemd[1]: sshd@25-172.31.28.78:22-147.75.109.163:47398.service: Deactivated successfully. Feb 9 19:18:06.105488 systemd[1]: session-26.scope: Deactivated successfully. Feb 9 19:18:06.108232 systemd-logind[1727]: Session 26 logged out. Waiting for processes to exit. Feb 9 19:18:06.111265 systemd-logind[1727]: Removed session 26. Feb 9 19:18:06.129941 systemd[1]: Started sshd@26-172.31.28.78:22-147.75.109.163:47400.service. Feb 9 19:18:06.311295 sshd[4752]: Accepted publickey for core from 147.75.109.163 port 47400 ssh2: RSA SHA256:vbbYXSA+vx4OxGE8RCTI42TSNHgOaZKYEuMHy2EWP78 Feb 9 19:18:06.313898 sshd[4752]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:18:06.325162 systemd-logind[1727]: New session 27 of user core. Feb 9 19:18:06.325306 systemd[1]: Started session-27.scope. Feb 9 19:18:06.836850 env[1740]: time="2024-02-09T19:18:06.836259408Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-25s5n,Uid:3baf3be0-8ebe-410d-926c-e740cdbbdcc6,Namespace:kube-system,Attempt:0,}" Feb 9 19:18:06.868151 env[1740]: time="2024-02-09T19:18:06.868010548Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:18:06.868151 env[1740]: time="2024-02-09T19:18:06.868091216Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:18:06.868457 env[1740]: time="2024-02-09T19:18:06.868118470Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:18:06.873684 env[1740]: time="2024-02-09T19:18:06.868942625Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6aa47d2f7a898ee3afa289b19ad2247329ed37bac93862496b6bff0980c24ab1 pid=4769 runtime=io.containerd.runc.v2 Feb 9 19:18:06.888415 kubelet[2851]: E0209 19:18:06.888351 2851 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 9 19:18:06.907015 systemd[1]: run-containerd-runc-k8s.io-6aa47d2f7a898ee3afa289b19ad2247329ed37bac93862496b6bff0980c24ab1-runc.JeyKJ9.mount: Deactivated successfully. Feb 9 19:18:06.916645 systemd[1]: Started cri-containerd-6aa47d2f7a898ee3afa289b19ad2247329ed37bac93862496b6bff0980c24ab1.scope. Feb 9 19:18:06.978902 env[1740]: time="2024-02-09T19:18:06.978824605Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-25s5n,Uid:3baf3be0-8ebe-410d-926c-e740cdbbdcc6,Namespace:kube-system,Attempt:0,} returns sandbox id \"6aa47d2f7a898ee3afa289b19ad2247329ed37bac93862496b6bff0980c24ab1\"" Feb 9 19:18:06.987478 env[1740]: time="2024-02-09T19:18:06.987383915Z" level=info msg="CreateContainer within sandbox \"6aa47d2f7a898ee3afa289b19ad2247329ed37bac93862496b6bff0980c24ab1\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 9 19:18:07.018722 env[1740]: time="2024-02-09T19:18:07.018623905Z" level=info msg="CreateContainer within sandbox \"6aa47d2f7a898ee3afa289b19ad2247329ed37bac93862496b6bff0980c24ab1\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"a09a285ff4309edf15f2a2a15cd60c6d436db07837372e22d4b29e3a4b36181f\"" Feb 9 19:18:07.022636 env[1740]: time="2024-02-09T19:18:07.022530762Z" level=info msg="StartContainer for \"a09a285ff4309edf15f2a2a15cd60c6d436db07837372e22d4b29e3a4b36181f\"" Feb 9 19:18:07.059812 systemd[1]: Started cri-containerd-a09a285ff4309edf15f2a2a15cd60c6d436db07837372e22d4b29e3a4b36181f.scope. Feb 9 19:18:07.088601 systemd[1]: cri-containerd-a09a285ff4309edf15f2a2a15cd60c6d436db07837372e22d4b29e3a4b36181f.scope: Deactivated successfully. 
Feb 9 19:18:07.118875 env[1740]: time="2024-02-09T19:18:07.118803186Z" level=info msg="shim disconnected" id=a09a285ff4309edf15f2a2a15cd60c6d436db07837372e22d4b29e3a4b36181f Feb 9 19:18:07.119264 env[1740]: time="2024-02-09T19:18:07.119217868Z" level=warning msg="cleaning up after shim disconnected" id=a09a285ff4309edf15f2a2a15cd60c6d436db07837372e22d4b29e3a4b36181f namespace=k8s.io Feb 9 19:18:07.119437 env[1740]: time="2024-02-09T19:18:07.119404082Z" level=info msg="cleaning up dead shim" Feb 9 19:18:07.144337 env[1740]: time="2024-02-09T19:18:07.144269228Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:18:07Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4826 runtime=io.containerd.runc.v2\ntime=\"2024-02-09T19:18:07Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/a09a285ff4309edf15f2a2a15cd60c6d436db07837372e22d4b29e3a4b36181f/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Feb 9 19:18:07.145210 env[1740]: time="2024-02-09T19:18:07.145046510Z" level=error msg="copy shim log" error="read /proc/self/fd/30: file already closed" Feb 9 19:18:07.146772 env[1740]: time="2024-02-09T19:18:07.146699502Z" level=error msg="Failed to pipe stdout of container \"a09a285ff4309edf15f2a2a15cd60c6d436db07837372e22d4b29e3a4b36181f\"" error="reading from a closed fifo" Feb 9 19:18:07.147078 env[1740]: time="2024-02-09T19:18:07.147018227Z" level=error msg="Failed to pipe stderr of container \"a09a285ff4309edf15f2a2a15cd60c6d436db07837372e22d4b29e3a4b36181f\"" error="reading from a closed fifo" Feb 9 19:18:07.149614 env[1740]: time="2024-02-09T19:18:07.149439644Z" level=error msg="StartContainer for \"a09a285ff4309edf15f2a2a15cd60c6d436db07837372e22d4b29e3a4b36181f\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Feb 9 19:18:07.150626 kubelet[2851]: E0209 19:18:07.150010 2851 remote_runtime.go:326] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="a09a285ff4309edf15f2a2a15cd60c6d436db07837372e22d4b29e3a4b36181f" Feb 9 19:18:07.150626 kubelet[2851]: E0209 19:18:07.150168 2851 kuberuntime_manager.go:872] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Feb 9 19:18:07.150626 kubelet[2851]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Feb 9 19:18:07.150626 kubelet[2851]: rm /hostbin/cilium-mount Feb 9 19:18:07.151359 kubelet[2851]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-4w8qj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-25s5n_kube-system(3baf3be0-8ebe-410d-926c-e740cdbbdcc6): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Feb 9 19:18:07.151693 kubelet[2851]: E0209 19:18:07.150233 2851 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-25s5n" podUID=3baf3be0-8ebe-410d-926c-e740cdbbdcc6 Feb 9 19:18:07.415505 env[1740]: time="2024-02-09T19:18:07.414244776Z" level=info msg="StopPodSandbox for \"6aa47d2f7a898ee3afa289b19ad2247329ed37bac93862496b6bff0980c24ab1\"" Feb 9 19:18:07.415505 env[1740]: time="2024-02-09T19:18:07.414388171Z" level=info msg="Container to stop \"a09a285ff4309edf15f2a2a15cd60c6d436db07837372e22d4b29e3a4b36181f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 19:18:07.430224 systemd[1]: cri-containerd-6aa47d2f7a898ee3afa289b19ad2247329ed37bac93862496b6bff0980c24ab1.scope: Deactivated successfully. 
Feb 9 19:18:07.483208 env[1740]: time="2024-02-09T19:18:07.483130011Z" level=info msg="shim disconnected" id=6aa47d2f7a898ee3afa289b19ad2247329ed37bac93862496b6bff0980c24ab1 Feb 9 19:18:07.483208 env[1740]: time="2024-02-09T19:18:07.483203970Z" level=warning msg="cleaning up after shim disconnected" id=6aa47d2f7a898ee3afa289b19ad2247329ed37bac93862496b6bff0980c24ab1 namespace=k8s.io Feb 9 19:18:07.483599 env[1740]: time="2024-02-09T19:18:07.483226568Z" level=info msg="cleaning up dead shim" Feb 9 19:18:07.498116 env[1740]: time="2024-02-09T19:18:07.498030317Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:18:07Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4857 runtime=io.containerd.runc.v2\n" Feb 9 19:18:07.498740 env[1740]: time="2024-02-09T19:18:07.498679480Z" level=info msg="TearDown network for sandbox \"6aa47d2f7a898ee3afa289b19ad2247329ed37bac93862496b6bff0980c24ab1\" successfully" Feb 9 19:18:07.498924 env[1740]: time="2024-02-09T19:18:07.498737527Z" level=info msg="StopPodSandbox for \"6aa47d2f7a898ee3afa289b19ad2247329ed37bac93862496b6bff0980c24ab1\" returns successfully" Feb 9 19:18:07.571437 kubelet[2851]: I0209 19:18:07.571372 2851 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3baf3be0-8ebe-410d-926c-e740cdbbdcc6-bpf-maps\") pod \"3baf3be0-8ebe-410d-926c-e740cdbbdcc6\" (UID: \"3baf3be0-8ebe-410d-926c-e740cdbbdcc6\") " Feb 9 19:18:07.571437 kubelet[2851]: I0209 19:18:07.571443 2851 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3baf3be0-8ebe-410d-926c-e740cdbbdcc6-lib-modules\") pod \"3baf3be0-8ebe-410d-926c-e740cdbbdcc6\" (UID: \"3baf3be0-8ebe-410d-926c-e740cdbbdcc6\") " Feb 9 19:18:07.571760 kubelet[2851]: I0209 19:18:07.571483 2851 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3baf3be0-8ebe-410d-926c-e740cdbbdcc6-cilium-run\") pod \"3baf3be0-8ebe-410d-926c-e740cdbbdcc6\" (UID: \"3baf3be0-8ebe-410d-926c-e740cdbbdcc6\") " Feb 9 19:18:07.571760 kubelet[2851]: I0209 19:18:07.571525 2851 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3baf3be0-8ebe-410d-926c-e740cdbbdcc6-etc-cni-netd\") pod \"3baf3be0-8ebe-410d-926c-e740cdbbdcc6\" (UID: \"3baf3be0-8ebe-410d-926c-e740cdbbdcc6\") " Feb 9 19:18:07.571760 kubelet[2851]: I0209 19:18:07.571605 2851 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3baf3be0-8ebe-410d-926c-e740cdbbdcc6-hubble-tls\") pod \"3baf3be0-8ebe-410d-926c-e740cdbbdcc6\" (UID: \"3baf3be0-8ebe-410d-926c-e740cdbbdcc6\") " Feb 9 19:18:07.571760 kubelet[2851]: I0209 19:18:07.571647 2851 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3baf3be0-8ebe-410d-926c-e740cdbbdcc6-host-proc-sys-net\") pod \"3baf3be0-8ebe-410d-926c-e740cdbbdcc6\" (UID: \"3baf3be0-8ebe-410d-926c-e740cdbbdcc6\") " Feb 9 19:18:07.571760 kubelet[2851]: I0209 19:18:07.571689 2851 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3baf3be0-8ebe-410d-926c-e740cdbbdcc6-xtables-lock\") pod \"3baf3be0-8ebe-410d-926c-e740cdbbdcc6\" (UID: \"3baf3be0-8ebe-410d-926c-e740cdbbdcc6\") " Feb 9 
19:18:07.571760 kubelet[2851]: I0209 19:18:07.571733 2851 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3baf3be0-8ebe-410d-926c-e740cdbbdcc6-cilium-config-path\") pod \"3baf3be0-8ebe-410d-926c-e740cdbbdcc6\" (UID: \"3baf3be0-8ebe-410d-926c-e740cdbbdcc6\") " Feb 9 19:18:07.572110 kubelet[2851]: I0209 19:18:07.571797 2851 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4w8qj\" (UniqueName: \"kubernetes.io/projected/3baf3be0-8ebe-410d-926c-e740cdbbdcc6-kube-api-access-4w8qj\") pod \"3baf3be0-8ebe-410d-926c-e740cdbbdcc6\" (UID: \"3baf3be0-8ebe-410d-926c-e740cdbbdcc6\") " Feb 9 19:18:07.572110 kubelet[2851]: I0209 19:18:07.571838 2851 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3baf3be0-8ebe-410d-926c-e740cdbbdcc6-cni-path\") pod \"3baf3be0-8ebe-410d-926c-e740cdbbdcc6\" (UID: \"3baf3be0-8ebe-410d-926c-e740cdbbdcc6\") " Feb 9 19:18:07.572110 kubelet[2851]: I0209 19:18:07.571881 2851 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/3baf3be0-8ebe-410d-926c-e740cdbbdcc6-cilium-ipsec-secrets\") pod \"3baf3be0-8ebe-410d-926c-e740cdbbdcc6\" (UID: \"3baf3be0-8ebe-410d-926c-e740cdbbdcc6\") " Feb 9 19:18:07.572110 kubelet[2851]: I0209 19:18:07.571918 2851 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3baf3be0-8ebe-410d-926c-e740cdbbdcc6-hostproc\") pod \"3baf3be0-8ebe-410d-926c-e740cdbbdcc6\" (UID: \"3baf3be0-8ebe-410d-926c-e740cdbbdcc6\") " Feb 9 19:18:07.572110 kubelet[2851]: I0209 19:18:07.571955 2851 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3baf3be0-8ebe-410d-926c-e740cdbbdcc6-cilium-cgroup\") pod \"3baf3be0-8ebe-410d-926c-e740cdbbdcc6\" (UID: \"3baf3be0-8ebe-410d-926c-e740cdbbdcc6\") " Feb 9 19:18:07.572110 kubelet[2851]: I0209 19:18:07.572003 2851 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3baf3be0-8ebe-410d-926c-e740cdbbdcc6-clustermesh-secrets\") pod \"3baf3be0-8ebe-410d-926c-e740cdbbdcc6\" (UID: \"3baf3be0-8ebe-410d-926c-e740cdbbdcc6\") " Feb 9 19:18:07.572469 kubelet[2851]: I0209 19:18:07.572043 2851 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3baf3be0-8ebe-410d-926c-e740cdbbdcc6-host-proc-sys-kernel\") pod \"3baf3be0-8ebe-410d-926c-e740cdbbdcc6\" (UID: \"3baf3be0-8ebe-410d-926c-e740cdbbdcc6\") " Feb 9 19:18:07.572469 kubelet[2851]: I0209 19:18:07.572126 2851 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3baf3be0-8ebe-410d-926c-e740cdbbdcc6-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "3baf3be0-8ebe-410d-926c-e740cdbbdcc6" (UID: "3baf3be0-8ebe-410d-926c-e740cdbbdcc6"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:18:07.572469 kubelet[2851]: I0209 19:18:07.572175 2851 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3baf3be0-8ebe-410d-926c-e740cdbbdcc6-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "3baf3be0-8ebe-410d-926c-e740cdbbdcc6" (UID: "3baf3be0-8ebe-410d-926c-e740cdbbdcc6"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:18:07.572469 kubelet[2851]: I0209 19:18:07.572213 2851 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3baf3be0-8ebe-410d-926c-e740cdbbdcc6-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "3baf3be0-8ebe-410d-926c-e740cdbbdcc6" (UID: "3baf3be0-8ebe-410d-926c-e740cdbbdcc6"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:18:07.572469 kubelet[2851]: I0209 19:18:07.572253 2851 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3baf3be0-8ebe-410d-926c-e740cdbbdcc6-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "3baf3be0-8ebe-410d-926c-e740cdbbdcc6" (UID: "3baf3be0-8ebe-410d-926c-e740cdbbdcc6"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:18:07.572850 kubelet[2851]: I0209 19:18:07.572290 2851 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3baf3be0-8ebe-410d-926c-e740cdbbdcc6-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "3baf3be0-8ebe-410d-926c-e740cdbbdcc6" (UID: "3baf3be0-8ebe-410d-926c-e740cdbbdcc6"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:18:07.575337 kubelet[2851]: I0209 19:18:07.572975 2851 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3baf3be0-8ebe-410d-926c-e740cdbbdcc6-cni-path" (OuterVolumeSpecName: "cni-path") pod "3baf3be0-8ebe-410d-926c-e740cdbbdcc6" (UID: "3baf3be0-8ebe-410d-926c-e740cdbbdcc6"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:18:07.575337 kubelet[2851]: I0209 19:18:07.573058 2851 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3baf3be0-8ebe-410d-926c-e740cdbbdcc6-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "3baf3be0-8ebe-410d-926c-e740cdbbdcc6" (UID: "3baf3be0-8ebe-410d-926c-e740cdbbdcc6"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:18:07.575337 kubelet[2851]: I0209 19:18:07.573101 2851 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3baf3be0-8ebe-410d-926c-e740cdbbdcc6-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "3baf3be0-8ebe-410d-926c-e740cdbbdcc6" (UID: "3baf3be0-8ebe-410d-926c-e740cdbbdcc6"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:18:07.575337 kubelet[2851]: W0209 19:18:07.573357 2851 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/3baf3be0-8ebe-410d-926c-e740cdbbdcc6/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Feb 9 19:18:07.578250 kubelet[2851]: I0209 19:18:07.578189 2851 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3baf3be0-8ebe-410d-926c-e740cdbbdcc6-hostproc" (OuterVolumeSpecName: "hostproc") pod "3baf3be0-8ebe-410d-926c-e740cdbbdcc6" (UID: "3baf3be0-8ebe-410d-926c-e740cdbbdcc6"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:18:07.578428 kubelet[2851]: I0209 19:18:07.578267 2851 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3baf3be0-8ebe-410d-926c-e740cdbbdcc6-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "3baf3be0-8ebe-410d-926c-e740cdbbdcc6" (UID: "3baf3be0-8ebe-410d-926c-e740cdbbdcc6"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:18:07.580901 kubelet[2851]: I0209 19:18:07.580831 2851 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3baf3be0-8ebe-410d-926c-e740cdbbdcc6-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "3baf3be0-8ebe-410d-926c-e740cdbbdcc6" (UID: "3baf3be0-8ebe-410d-926c-e740cdbbdcc6"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 9 19:18:07.581114 kubelet[2851]: I0209 19:18:07.581000 2851 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3baf3be0-8ebe-410d-926c-e740cdbbdcc6-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "3baf3be0-8ebe-410d-926c-e740cdbbdcc6" (UID: "3baf3be0-8ebe-410d-926c-e740cdbbdcc6"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 19:18:07.585022 kubelet[2851]: I0209 19:18:07.584948 2851 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3baf3be0-8ebe-410d-926c-e740cdbbdcc6-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "3baf3be0-8ebe-410d-926c-e740cdbbdcc6" (UID: "3baf3be0-8ebe-410d-926c-e740cdbbdcc6"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 9 19:18:07.586645 kubelet[2851]: I0209 19:18:07.586599 2851 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3baf3be0-8ebe-410d-926c-e740cdbbdcc6-kube-api-access-4w8qj" (OuterVolumeSpecName: "kube-api-access-4w8qj") pod "3baf3be0-8ebe-410d-926c-e740cdbbdcc6" (UID: "3baf3be0-8ebe-410d-926c-e740cdbbdcc6"). InnerVolumeSpecName "kube-api-access-4w8qj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 19:18:07.589844 kubelet[2851]: I0209 19:18:07.589792 2851 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3baf3be0-8ebe-410d-926c-e740cdbbdcc6-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "3baf3be0-8ebe-410d-926c-e740cdbbdcc6" (UID: "3baf3be0-8ebe-410d-926c-e740cdbbdcc6"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 9 19:18:07.672376 kubelet[2851]: I0209 19:18:07.672332 2851 reconciler_common.go:295] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3baf3be0-8ebe-410d-926c-e740cdbbdcc6-cilium-config-path\") on node \"ip-172-31-28-78\" DevicePath \"\"" Feb 9 19:18:07.672750 kubelet[2851]: I0209 19:18:07.672716 2851 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-4w8qj\" (UniqueName: \"kubernetes.io/projected/3baf3be0-8ebe-410d-926c-e740cdbbdcc6-kube-api-access-4w8qj\") on node \"ip-172-31-28-78\" DevicePath \"\"" Feb 9 19:18:07.672992 kubelet[2851]: I0209 19:18:07.672949 2851 reconciler_common.go:295] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/3baf3be0-8ebe-410d-926c-e740cdbbdcc6-cilium-ipsec-secrets\") on node \"ip-172-31-28-78\" DevicePath \"\"" Feb 9 19:18:07.673185 kubelet[2851]: I0209 19:18:07.673160 2851 reconciler_common.go:295] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3baf3be0-8ebe-410d-926c-e740cdbbdcc6-cni-path\") on node \"ip-172-31-28-78\" DevicePath \"\"" Feb 9 19:18:07.673382 kubelet[2851]: I0209 19:18:07.673359 2851 reconciler_common.go:295] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3baf3be0-8ebe-410d-926c-e740cdbbdcc6-hostproc\") on node \"ip-172-31-28-78\" DevicePath \"\"" Feb 9 19:18:07.673600 kubelet[2851]: I0209 19:18:07.673539 2851 reconciler_common.go:295] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3baf3be0-8ebe-410d-926c-e740cdbbdcc6-cilium-cgroup\") on node \"ip-172-31-28-78\" DevicePath \"\"" Feb 9 19:18:07.673784 kubelet[2851]: I0209 19:18:07.673761 2851 reconciler_common.go:295] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3baf3be0-8ebe-410d-926c-e740cdbbdcc6-clustermesh-secrets\") on node \"ip-172-31-28-78\" DevicePath \"\"" Feb 9 19:18:07.673961 kubelet[2851]: I0209 19:18:07.673938 2851 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3baf3be0-8ebe-410d-926c-e740cdbbdcc6-host-proc-sys-kernel\") on node \"ip-172-31-28-78\" DevicePath \"\"" Feb 9 19:18:07.674134 kubelet[2851]: I0209 19:18:07.674111 2851 reconciler_common.go:295] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3baf3be0-8ebe-410d-926c-e740cdbbdcc6-bpf-maps\") on node \"ip-172-31-28-78\" DevicePath \"\"" Feb 9 19:18:07.674330 kubelet[2851]: I0209 19:18:07.674306 2851 reconciler_common.go:295] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3baf3be0-8ebe-410d-926c-e740cdbbdcc6-lib-modules\") on node \"ip-172-31-28-78\" DevicePath \"\"" Feb 9 19:18:07.674524 kubelet[2851]: I0209 19:18:07.674498 2851 reconciler_common.go:295] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3baf3be0-8ebe-410d-926c-e740cdbbdcc6-cilium-run\") on node \"ip-172-31-28-78\" DevicePath \"\"" Feb 9 19:18:07.674727 kubelet[2851]: I0209 19:18:07.674701 2851 reconciler_common.go:295] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3baf3be0-8ebe-410d-926c-e740cdbbdcc6-etc-cni-netd\") on node \"ip-172-31-28-78\" DevicePath \"\"" Feb 9 19:18:07.674922 kubelet[2851]: I0209 19:18:07.674896 2851 reconciler_common.go:295] "Volume detached for volume \"hubble-tls\" (UniqueName: 
\"kubernetes.io/projected/3baf3be0-8ebe-410d-926c-e740cdbbdcc6-hubble-tls\") on node \"ip-172-31-28-78\" DevicePath \"\"" Feb 9 19:18:07.675113 kubelet[2851]: I0209 19:18:07.675088 2851 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3baf3be0-8ebe-410d-926c-e740cdbbdcc6-host-proc-sys-net\") on node \"ip-172-31-28-78\" DevicePath \"\"" Feb 9 19:18:07.675315 kubelet[2851]: I0209 19:18:07.675290 2851 reconciler_common.go:295] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3baf3be0-8ebe-410d-926c-e740cdbbdcc6-xtables-lock\") on node \"ip-172-31-28-78\" DevicePath \"\"" Feb 9 19:18:07.851821 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount820361473.mount: Deactivated successfully. Feb 9 19:18:07.852025 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6aa47d2f7a898ee3afa289b19ad2247329ed37bac93862496b6bff0980c24ab1-rootfs.mount: Deactivated successfully. Feb 9 19:18:07.852174 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6aa47d2f7a898ee3afa289b19ad2247329ed37bac93862496b6bff0980c24ab1-shm.mount: Deactivated successfully. Feb 9 19:18:07.852318 systemd[1]: var-lib-kubelet-pods-3baf3be0\x2d8ebe\x2d410d\x2d926c\x2de740cdbbdcc6-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 9 19:18:07.852459 systemd[1]: var-lib-kubelet-pods-3baf3be0\x2d8ebe\x2d410d\x2d926c\x2de740cdbbdcc6-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 9 19:18:07.852665 systemd[1]: var-lib-kubelet-pods-3baf3be0\x2d8ebe\x2d410d\x2d926c\x2de740cdbbdcc6-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Feb 9 19:18:07.852852 systemd[1]: var-lib-kubelet-pods-3baf3be0\x2d8ebe\x2d410d\x2d926c\x2de740cdbbdcc6-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d4w8qj.mount: Deactivated successfully. Feb 9 19:18:07.892238 systemd[1]: Removed slice kubepods-burstable-pod3baf3be0_8ebe_410d_926c_e740cdbbdcc6.slice. Feb 9 19:18:08.417830 kubelet[2851]: I0209 19:18:08.417796 2851 scope.go:115] "RemoveContainer" containerID="a09a285ff4309edf15f2a2a15cd60c6d436db07837372e22d4b29e3a4b36181f" Feb 9 19:18:08.424367 env[1740]: time="2024-02-09T19:18:08.423271181Z" level=info msg="RemoveContainer for \"a09a285ff4309edf15f2a2a15cd60c6d436db07837372e22d4b29e3a4b36181f\"" Feb 9 19:18:08.428274 env[1740]: time="2024-02-09T19:18:08.428201405Z" level=info msg="RemoveContainer for \"a09a285ff4309edf15f2a2a15cd60c6d436db07837372e22d4b29e3a4b36181f\" returns successfully" Feb 9 19:18:08.481618 kubelet[2851]: I0209 19:18:08.481538 2851 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:18:08.481822 kubelet[2851]: E0209 19:18:08.481650 2851 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3baf3be0-8ebe-410d-926c-e740cdbbdcc6" containerName="mount-cgroup" Feb 9 19:18:08.481822 kubelet[2851]: I0209 19:18:08.481702 2851 memory_manager.go:346] "RemoveStaleState removing state" podUID="3baf3be0-8ebe-410d-926c-e740cdbbdcc6" containerName="mount-cgroup" Feb 9 19:18:08.492775 systemd[1]: Created slice kubepods-burstable-pod637183a1_f9e1_48ab_b2cc_6fb25eb67a27.slice. 
Feb 9 19:18:08.585209 kubelet[2851]: I0209 19:18:08.585161 2851 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/637183a1-f9e1-48ab-b2cc-6fb25eb67a27-hubble-tls\") pod \"cilium-7442m\" (UID: \"637183a1-f9e1-48ab-b2cc-6fb25eb67a27\") " pod="kube-system/cilium-7442m" Feb 9 19:18:08.585468 kubelet[2851]: I0209 19:18:08.585434 2851 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/637183a1-f9e1-48ab-b2cc-6fb25eb67a27-cilium-ipsec-secrets\") pod \"cilium-7442m\" (UID: \"637183a1-f9e1-48ab-b2cc-6fb25eb67a27\") " pod="kube-system/cilium-7442m" Feb 9 19:18:08.585757 kubelet[2851]: I0209 19:18:08.585721 2851 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/637183a1-f9e1-48ab-b2cc-6fb25eb67a27-cilium-config-path\") pod \"cilium-7442m\" (UID: \"637183a1-f9e1-48ab-b2cc-6fb25eb67a27\") " pod="kube-system/cilium-7442m" Feb 9 19:18:08.585865 kubelet[2851]: I0209 19:18:08.585790 2851 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/637183a1-f9e1-48ab-b2cc-6fb25eb67a27-cilium-run\") pod \"cilium-7442m\" (UID: \"637183a1-f9e1-48ab-b2cc-6fb25eb67a27\") " pod="kube-system/cilium-7442m" Feb 9 19:18:08.585865 kubelet[2851]: I0209 19:18:08.585836 2851 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/637183a1-f9e1-48ab-b2cc-6fb25eb67a27-cilium-cgroup\") pod \"cilium-7442m\" (UID: \"637183a1-f9e1-48ab-b2cc-6fb25eb67a27\") " pod="kube-system/cilium-7442m" Feb 9 19:18:08.586002 kubelet[2851]: I0209 19:18:08.585885 2851 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/637183a1-f9e1-48ab-b2cc-6fb25eb67a27-etc-cni-netd\") pod \"cilium-7442m\" (UID: \"637183a1-f9e1-48ab-b2cc-6fb25eb67a27\") " pod="kube-system/cilium-7442m" Feb 9 19:18:08.586002 kubelet[2851]: I0209 19:18:08.585928 2851 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/637183a1-f9e1-48ab-b2cc-6fb25eb67a27-xtables-lock\") pod \"cilium-7442m\" (UID: \"637183a1-f9e1-48ab-b2cc-6fb25eb67a27\") " pod="kube-system/cilium-7442m" Feb 9 19:18:08.586002 kubelet[2851]: I0209 19:18:08.585972 2851 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/637183a1-f9e1-48ab-b2cc-6fb25eb67a27-clustermesh-secrets\") pod \"cilium-7442m\" (UID: \"637183a1-f9e1-48ab-b2cc-6fb25eb67a27\") " pod="kube-system/cilium-7442m" Feb 9 19:18:08.586197 kubelet[2851]: I0209 19:18:08.586014 2851 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/637183a1-f9e1-48ab-b2cc-6fb25eb67a27-host-proc-sys-kernel\") pod \"cilium-7442m\" (UID: \"637183a1-f9e1-48ab-b2cc-6fb25eb67a27\") " pod="kube-system/cilium-7442m" Feb 9 19:18:08.586197 kubelet[2851]: I0209 19:18:08.586057 2851 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" 
(UniqueName: \"kubernetes.io/host-path/637183a1-f9e1-48ab-b2cc-6fb25eb67a27-hostproc\") pod \"cilium-7442m\" (UID: \"637183a1-f9e1-48ab-b2cc-6fb25eb67a27\") " pod="kube-system/cilium-7442m" Feb 9 19:18:08.586197 kubelet[2851]: I0209 19:18:08.586098 2851 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/637183a1-f9e1-48ab-b2cc-6fb25eb67a27-cni-path\") pod \"cilium-7442m\" (UID: \"637183a1-f9e1-48ab-b2cc-6fb25eb67a27\") " pod="kube-system/cilium-7442m" Feb 9 19:18:08.586197 kubelet[2851]: I0209 19:18:08.586143 2851 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/637183a1-f9e1-48ab-b2cc-6fb25eb67a27-host-proc-sys-net\") pod \"cilium-7442m\" (UID: \"637183a1-f9e1-48ab-b2cc-6fb25eb67a27\") " pod="kube-system/cilium-7442m" Feb 9 19:18:08.586197 kubelet[2851]: I0209 19:18:08.586185 2851 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vrr5t\" (UniqueName: \"kubernetes.io/projected/637183a1-f9e1-48ab-b2cc-6fb25eb67a27-kube-api-access-vrr5t\") pod \"cilium-7442m\" (UID: \"637183a1-f9e1-48ab-b2cc-6fb25eb67a27\") " pod="kube-system/cilium-7442m" Feb 9 19:18:08.586489 kubelet[2851]: I0209 19:18:08.586229 2851 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/637183a1-f9e1-48ab-b2cc-6fb25eb67a27-lib-modules\") pod \"cilium-7442m\" (UID: \"637183a1-f9e1-48ab-b2cc-6fb25eb67a27\") " pod="kube-system/cilium-7442m" Feb 9 19:18:08.586489 kubelet[2851]: I0209 19:18:08.586272 2851 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/637183a1-f9e1-48ab-b2cc-6fb25eb67a27-bpf-maps\") pod \"cilium-7442m\" (UID: \"637183a1-f9e1-48ab-b2cc-6fb25eb67a27\") " pod="kube-system/cilium-7442m" Feb 9 19:18:08.799635 env[1740]: time="2024-02-09T19:18:08.799536238Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-7442m,Uid:637183a1-f9e1-48ab-b2cc-6fb25eb67a27,Namespace:kube-system,Attempt:0,}" Feb 9 19:18:08.821901 env[1740]: time="2024-02-09T19:18:08.821744102Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:18:08.821901 env[1740]: time="2024-02-09T19:18:08.821842327Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:18:08.822238 env[1740]: time="2024-02-09T19:18:08.821870025Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:18:08.822796 env[1740]: time="2024-02-09T19:18:08.822707250Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3be3c7bd26cf3658169047719ccb2fc2e19df5fe59ee1cc78e105a5063866a62 pid=4886 runtime=io.containerd.runc.v2 Feb 9 19:18:08.843750 systemd[1]: Started cri-containerd-3be3c7bd26cf3658169047719ccb2fc2e19df5fe59ee1cc78e105a5063866a62.scope. 
Feb 9 19:18:08.901401 env[1740]: time="2024-02-09T19:18:08.901346473Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-7442m,Uid:637183a1-f9e1-48ab-b2cc-6fb25eb67a27,Namespace:kube-system,Attempt:0,} returns sandbox id \"3be3c7bd26cf3658169047719ccb2fc2e19df5fe59ee1cc78e105a5063866a62\"" Feb 9 19:18:08.905922 env[1740]: time="2024-02-09T19:18:08.905860563Z" level=info msg="CreateContainer within sandbox \"3be3c7bd26cf3658169047719ccb2fc2e19df5fe59ee1cc78e105a5063866a62\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 9 19:18:08.932517 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1530112808.mount: Deactivated successfully. Feb 9 19:18:08.948543 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2535518036.mount: Deactivated successfully. Feb 9 19:18:08.958338 env[1740]: time="2024-02-09T19:18:08.958271840Z" level=info msg="CreateContainer within sandbox \"3be3c7bd26cf3658169047719ccb2fc2e19df5fe59ee1cc78e105a5063866a62\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"43cec2375e3db9c635f79326ce78b1ca3610b450c3d4303b7cd7f88164d595a7\"" Feb 9 19:18:08.959746 env[1740]: time="2024-02-09T19:18:08.959692608Z" level=info msg="StartContainer for \"43cec2375e3db9c635f79326ce78b1ca3610b450c3d4303b7cd7f88164d595a7\"" Feb 9 19:18:08.990347 systemd[1]: Started cri-containerd-43cec2375e3db9c635f79326ce78b1ca3610b450c3d4303b7cd7f88164d595a7.scope. Feb 9 19:18:09.047911 env[1740]: time="2024-02-09T19:18:09.047834015Z" level=info msg="StartContainer for \"43cec2375e3db9c635f79326ce78b1ca3610b450c3d4303b7cd7f88164d595a7\" returns successfully" Feb 9 19:18:09.072809 systemd[1]: cri-containerd-43cec2375e3db9c635f79326ce78b1ca3610b450c3d4303b7cd7f88164d595a7.scope: Deactivated successfully. Feb 9 19:18:09.135439 env[1740]: time="2024-02-09T19:18:09.135361614Z" level=info msg="shim disconnected" id=43cec2375e3db9c635f79326ce78b1ca3610b450c3d4303b7cd7f88164d595a7 Feb 9 19:18:09.135439 env[1740]: time="2024-02-09T19:18:09.135431722Z" level=warning msg="cleaning up after shim disconnected" id=43cec2375e3db9c635f79326ce78b1ca3610b450c3d4303b7cd7f88164d595a7 namespace=k8s.io Feb 9 19:18:09.135823 env[1740]: time="2024-02-09T19:18:09.135454475Z" level=info msg="cleaning up dead shim" Feb 9 19:18:09.156647 env[1740]: time="2024-02-09T19:18:09.156575506Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:18:09Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4973 runtime=io.containerd.runc.v2\n" Feb 9 19:18:09.432884 env[1740]: time="2024-02-09T19:18:09.432823044Z" level=info msg="CreateContainer within sandbox \"3be3c7bd26cf3658169047719ccb2fc2e19df5fe59ee1cc78e105a5063866a62\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 9 19:18:09.453302 env[1740]: time="2024-02-09T19:18:09.453234333Z" level=info msg="CreateContainer within sandbox \"3be3c7bd26cf3658169047719ccb2fc2e19df5fe59ee1cc78e105a5063866a62\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"b2a5763e2f21767208b0d2e063e49653ae59b17926847fcdf971b878e0c3f3b0\"" Feb 9 19:18:09.454867 env[1740]: time="2024-02-09T19:18:09.454806250Z" level=info msg="StartContainer for \"b2a5763e2f21767208b0d2e063e49653ae59b17926847fcdf971b878e0c3f3b0\"" Feb 9 19:18:09.495785 systemd[1]: Started cri-containerd-b2a5763e2f21767208b0d2e063e49653ae59b17926847fcdf971b878e0c3f3b0.scope. 
Feb 9 19:18:09.556897 env[1740]: time="2024-02-09T19:18:09.556794556Z" level=info msg="StartContainer for \"b2a5763e2f21767208b0d2e063e49653ae59b17926847fcdf971b878e0c3f3b0\" returns successfully"
Feb 9 19:18:09.575859 systemd[1]: cri-containerd-b2a5763e2f21767208b0d2e063e49653ae59b17926847fcdf971b878e0c3f3b0.scope: Deactivated successfully.
Feb 9 19:18:09.619913 env[1740]: time="2024-02-09T19:18:09.619839982Z" level=info msg="shim disconnected" id=b2a5763e2f21767208b0d2e063e49653ae59b17926847fcdf971b878e0c3f3b0
Feb 9 19:18:09.619913 env[1740]: time="2024-02-09T19:18:09.619909682Z" level=warning msg="cleaning up after shim disconnected" id=b2a5763e2f21767208b0d2e063e49653ae59b17926847fcdf971b878e0c3f3b0 namespace=k8s.io
Feb 9 19:18:09.620277 env[1740]: time="2024-02-09T19:18:09.619932411Z" level=info msg="cleaning up dead shim"
Feb 9 19:18:09.634239 env[1740]: time="2024-02-09T19:18:09.634133485Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:18:09Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5039 runtime=io.containerd.runc.v2\n"
Feb 9 19:18:09.895946 kubelet[2851]: I0209 19:18:09.895830 2851 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=3baf3be0-8ebe-410d-926c-e740cdbbdcc6 path="/var/lib/kubelet/pods/3baf3be0-8ebe-410d-926c-e740cdbbdcc6/volumes"
Feb 9 19:18:10.224618 kubelet[2851]: W0209 19:18:10.224536 2851 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3baf3be0_8ebe_410d_926c_e740cdbbdcc6.slice/cri-containerd-a09a285ff4309edf15f2a2a15cd60c6d436db07837372e22d4b29e3a4b36181f.scope WatchSource:0}: container "a09a285ff4309edf15f2a2a15cd60c6d436db07837372e22d4b29e3a4b36181f" in namespace "k8s.io": not found
Feb 9 19:18:10.436135 env[1740]: time="2024-02-09T19:18:10.436058879Z" level=info msg="CreateContainer within sandbox \"3be3c7bd26cf3658169047719ccb2fc2e19df5fe59ee1cc78e105a5063866a62\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Feb 9 19:18:10.477836 env[1740]: time="2024-02-09T19:18:10.477658643Z" level=info msg="CreateContainer within sandbox \"3be3c7bd26cf3658169047719ccb2fc2e19df5fe59ee1cc78e105a5063866a62\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"2591a45b740b7508cc9a3623610e4043aef5c93761953457b7c5e8624f126752\""
Feb 9 19:18:10.482801 env[1740]: time="2024-02-09T19:18:10.479165289Z" level=info msg="StartContainer for \"2591a45b740b7508cc9a3623610e4043aef5c93761953457b7c5e8624f126752\""
Feb 9 19:18:10.517644 systemd[1]: Started cri-containerd-2591a45b740b7508cc9a3623610e4043aef5c93761953457b7c5e8624f126752.scope.
Feb 9 19:18:10.590220 env[1740]: time="2024-02-09T19:18:10.590160150Z" level=info msg="StartContainer for \"2591a45b740b7508cc9a3623610e4043aef5c93761953457b7c5e8624f126752\" returns successfully"
Feb 9 19:18:10.590625 systemd[1]: cri-containerd-2591a45b740b7508cc9a3623610e4043aef5c93761953457b7c5e8624f126752.scope: Deactivated successfully.
Feb 9 19:18:10.656027 env[1740]: time="2024-02-09T19:18:10.655944274Z" level=info msg="shim disconnected" id=2591a45b740b7508cc9a3623610e4043aef5c93761953457b7c5e8624f126752
Feb 9 19:18:10.656027 env[1740]: time="2024-02-09T19:18:10.656019050Z" level=warning msg="cleaning up after shim disconnected" id=2591a45b740b7508cc9a3623610e4043aef5c93761953457b7c5e8624f126752 namespace=k8s.io
Feb 9 19:18:10.656527 env[1740]: time="2024-02-09T19:18:10.656041780Z" level=info msg="cleaning up dead shim"
Feb 9 19:18:10.682708 env[1740]: time="2024-02-09T19:18:10.682626054Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:18:10Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5097 runtime=io.containerd.runc.v2\ntime=\"2024-02-09T19:18:10Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n"
Feb 9 19:18:10.857179 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2591a45b740b7508cc9a3623610e4043aef5c93761953457b7c5e8624f126752-rootfs.mount: Deactivated successfully.
Feb 9 19:18:11.441187 env[1740]: time="2024-02-09T19:18:11.441044323Z" level=info msg="CreateContainer within sandbox \"3be3c7bd26cf3658169047719ccb2fc2e19df5fe59ee1cc78e105a5063866a62\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Feb 9 19:18:11.471829 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount130352257.mount: Deactivated successfully.
Feb 9 19:18:11.480582 env[1740]: time="2024-02-09T19:18:11.477779850Z" level=info msg="CreateContainer within sandbox \"3be3c7bd26cf3658169047719ccb2fc2e19df5fe59ee1cc78e105a5063866a62\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"d1c4458cb538a728199ef1b1bcb97b1e12d84b1cf2a93e2539a5dc89bfeba7d9\""
Feb 9 19:18:11.481381 env[1740]: time="2024-02-09T19:18:11.481336856Z" level=info msg="StartContainer for \"d1c4458cb538a728199ef1b1bcb97b1e12d84b1cf2a93e2539a5dc89bfeba7d9\""
Feb 9 19:18:11.523036 systemd[1]: Started cri-containerd-d1c4458cb538a728199ef1b1bcb97b1e12d84b1cf2a93e2539a5dc89bfeba7d9.scope.
Feb 9 19:18:11.582736 systemd[1]: cri-containerd-d1c4458cb538a728199ef1b1bcb97b1e12d84b1cf2a93e2539a5dc89bfeba7d9.scope: Deactivated successfully.
Feb 9 19:18:11.586937 env[1740]: time="2024-02-09T19:18:11.586878183Z" level=info msg="StartContainer for \"d1c4458cb538a728199ef1b1bcb97b1e12d84b1cf2a93e2539a5dc89bfeba7d9\" returns successfully"
Feb 9 19:18:11.636392 env[1740]: time="2024-02-09T19:18:11.636329579Z" level=info msg="shim disconnected" id=d1c4458cb538a728199ef1b1bcb97b1e12d84b1cf2a93e2539a5dc89bfeba7d9
Feb 9 19:18:11.636866 env[1740]: time="2024-02-09T19:18:11.636834903Z" level=warning msg="cleaning up after shim disconnected" id=d1c4458cb538a728199ef1b1bcb97b1e12d84b1cf2a93e2539a5dc89bfeba7d9 namespace=k8s.io
Feb 9 19:18:11.637047 env[1740]: time="2024-02-09T19:18:11.637018405Z" level=info msg="cleaning up dead shim"
Feb 9 19:18:11.653540 env[1740]: time="2024-02-09T19:18:11.653483302Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:18:11Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5153 runtime=io.containerd.runc.v2\n"
Feb 9 19:18:11.857266 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d1c4458cb538a728199ef1b1bcb97b1e12d84b1cf2a93e2539a5dc89bfeba7d9-rootfs.mount: Deactivated successfully.
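Each init step above (mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs, clean-cilium-state) follows the same journal lifecycle: CreateContainer, StartContainer returns successfully, the systemd scope deactivates, and containerd logs "shim disconnected" followed by shim cleanup; the mount-bpf-fs step additionally logs a "failed to remove runc container ... exit status 255" warning during that cleanup. A small standard-library-only sketch (an assumption, not tooling used here) that pairs those start and exit events by container ID when fed journal text in exactly the format shown above:

import re
import sys

# Patterns match the literal journal text above, including the escaped quotes.
START = re.compile(r'msg="StartContainer for \\"([0-9a-f]{64})\\" returns successfully"')
EXIT = re.compile(r'msg="shim disconnected" id=([0-9a-f]{64})')

started, exited = set(), set()
for line in sys.stdin:
    if (m := START.search(line)):
        started.add(m.group(1))
    if (m := EXIT.search(line)):
        exited.add(m.group(1))

# Init containers run to completion, so their IDs appear in both sets;
# a long-running container such as cilium-agent only appears in `started`.
for cid in sorted(started):
    state = "exited" if cid in exited else "still running"
    print(f"{cid[:12]}: {state}")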
Feb 9 19:18:11.889673 kubelet[2851]: E0209 19:18:11.889622 2851 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 9 19:18:12.456819 env[1740]: time="2024-02-09T19:18:12.456715192Z" level=info msg="CreateContainer within sandbox \"3be3c7bd26cf3658169047719ccb2fc2e19df5fe59ee1cc78e105a5063866a62\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Feb 9 19:18:12.487275 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2897961452.mount: Deactivated successfully.
Feb 9 19:18:12.504999 env[1740]: time="2024-02-09T19:18:12.504910433Z" level=info msg="CreateContainer within sandbox \"3be3c7bd26cf3658169047719ccb2fc2e19df5fe59ee1cc78e105a5063866a62\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"6a574f440515ed7c96c05d9a5fa02dec4ee5d34c4bba70af2235b57f633954d4\""
Feb 9 19:18:12.506679 env[1740]: time="2024-02-09T19:18:12.506605742Z" level=info msg="StartContainer for \"6a574f440515ed7c96c05d9a5fa02dec4ee5d34c4bba70af2235b57f633954d4\""
Feb 9 19:18:12.560503 systemd[1]: Started cri-containerd-6a574f440515ed7c96c05d9a5fa02dec4ee5d34c4bba70af2235b57f633954d4.scope.
Feb 9 19:18:12.659523 env[1740]: time="2024-02-09T19:18:12.659444197Z" level=info msg="StartContainer for \"6a574f440515ed7c96c05d9a5fa02dec4ee5d34c4bba70af2235b57f633954d4\" returns successfully"
Feb 9 19:18:13.341329 kubelet[2851]: W0209 19:18:13.341280 2851 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod637183a1_f9e1_48ab_b2cc_6fb25eb67a27.slice/cri-containerd-43cec2375e3db9c635f79326ce78b1ca3610b450c3d4303b7cd7f88164d595a7.scope WatchSource:0}: task 43cec2375e3db9c635f79326ce78b1ca3610b450c3d4303b7cd7f88164d595a7 not found: not found
Feb 9 19:18:13.663613 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm-aes-ce)))
Feb 9 19:18:14.931789 systemd[1]: run-containerd-runc-k8s.io-6a574f440515ed7c96c05d9a5fa02dec4ee5d34c4bba70af2235b57f633954d4-runc.ZTbh0Q.mount: Deactivated successfully.
Feb 9 19:18:14.949065 kubelet[2851]: I0209 19:18:14.948424 2851 setters.go:548] "Node became not ready" node="ip-172-31-28-78" condition={Type:Ready Status:False LastHeartbeatTime:2024-02-09 19:18:14.948326779 +0000 UTC m=+143.651438622 LastTransitionTime:2024-02-09 19:18:14.948326779 +0000 UTC m=+143.651438622 Reason:KubeletNotReady Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized}
Feb 9 19:18:16.452622 kubelet[2851]: W0209 19:18:16.452457 2851 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod637183a1_f9e1_48ab_b2cc_6fb25eb67a27.slice/cri-containerd-b2a5763e2f21767208b0d2e063e49653ae59b17926847fcdf971b878e0c3f3b0.scope WatchSource:0}: task b2a5763e2f21767208b0d2e063e49653ae59b17926847fcdf971b878e0c3f3b0 not found: not found
Feb 9 19:18:17.194805 systemd[1]: run-containerd-runc-k8s.io-6a574f440515ed7c96c05d9a5fa02dec4ee5d34c4bba70af2235b57f633954d4-runc.buQv7M.mount: Deactivated successfully.
Feb 9 19:18:17.688197 (udev-worker)[5714]: Network interface NamePolicy= disabled on kernel command line.
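The "Container runtime network not ready ... cni plugin not initialized" error and the "Node became not ready" transition above are expected while the Cilium agent is still starting: the node's Ready condition stays False until the CNI plugin is installed. A hedged sketch (same Python kubernetes-client assumption as above; the node name ip-172-31-28-78 comes from the log) that reads that condition back:

from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

node = v1.read_node(name="ip-172-31-28-78")
for cond in node.status.conditions:
    if cond.type == "Ready":
        # While the CNI plugin is not initialized this reports
        # status=False with reason=KubeletNotReady, matching the log.
        print(cond.status, cond.reason, cond.message)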
Feb 9 19:18:17.692687 systemd-networkd[1537]: lxc_health: Link UP
Feb 9 19:18:17.700677 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Feb 9 19:18:17.700591 systemd-networkd[1537]: lxc_health: Gained carrier
Feb 9 19:18:17.701316 (udev-worker)[5715]: Network interface NamePolicy= disabled on kernel command line.
Feb 9 19:18:18.835480 kubelet[2851]: I0209 19:18:18.835383 2851 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-7442m" podStartSLOduration=10.835329045 pod.CreationTimestamp="2024-02-09 19:18:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:18:13.485146964 +0000 UTC m=+142.188258855" watchObservedRunningTime="2024-02-09 19:18:18.835329045 +0000 UTC m=+147.538440912"
Feb 9 19:18:19.553166 systemd[1]: run-containerd-runc-k8s.io-6a574f440515ed7c96c05d9a5fa02dec4ee5d34c4bba70af2235b57f633954d4-runc.p0P9R6.mount: Deactivated successfully.
Feb 9 19:18:19.562281 kubelet[2851]: W0209 19:18:19.562232 2851 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod637183a1_f9e1_48ab_b2cc_6fb25eb67a27.slice/cri-containerd-2591a45b740b7508cc9a3623610e4043aef5c93761953457b7c5e8624f126752.scope WatchSource:0}: task 2591a45b740b7508cc9a3623610e4043aef5c93761953457b7c5e8624f126752 not found: not found
Feb 9 19:18:19.688873 systemd-networkd[1537]: lxc_health: Gained IPv6LL
Feb 9 19:18:21.875117 systemd[1]: run-containerd-runc-k8s.io-6a574f440515ed7c96c05d9a5fa02dec4ee5d34c4bba70af2235b57f633954d4-runc.yVqFp7.mount: Deactivated successfully.
Feb 9 19:18:22.689174 kubelet[2851]: W0209 19:18:22.687964 2851 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod637183a1_f9e1_48ab_b2cc_6fb25eb67a27.slice/cri-containerd-d1c4458cb538a728199ef1b1bcb97b1e12d84b1cf2a93e2539a5dc89bfeba7d9.scope WatchSource:0}: task d1c4458cb538a728199ef1b1bcb97b1e12d84b1cf2a93e2539a5dc89bfeba7d9 not found: not found
Feb 9 19:18:24.177695 systemd[1]: run-containerd-runc-k8s.io-6a574f440515ed7c96c05d9a5fa02dec4ee5d34c4bba70af2235b57f633954d4-runc.oxdGMR.mount: Deactivated successfully.
Feb 9 19:18:24.335354 sshd[4752]: pam_unix(sshd:session): session closed for user core
Feb 9 19:18:24.346220 systemd[1]: sshd@26-172.31.28.78:22-147.75.109.163:47400.service: Deactivated successfully.
Feb 9 19:18:24.347665 systemd[1]: session-27.scope: Deactivated successfully.
Feb 9 19:18:24.349400 systemd-logind[1727]: Session 27 logged out. Waiting for processes to exit.
Feb 9 19:18:24.352036 systemd-logind[1727]: Removed session 27.
Feb 9 19:18:50.005845 systemd[1]: cri-containerd-a610a52bb15f5bcf06f91b8af90260a3376d820ee45c2109e06c8be0e81fd54f.scope: Deactivated successfully.
Feb 9 19:18:50.006362 systemd[1]: cri-containerd-a610a52bb15f5bcf06f91b8af90260a3376d820ee45c2109e06c8be0e81fd54f.scope: Consumed 5.294s CPU time.
Feb 9 19:18:50.045365 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a610a52bb15f5bcf06f91b8af90260a3376d820ee45c2109e06c8be0e81fd54f-rootfs.mount: Deactivated successfully.
Feb 9 19:18:50.058351 env[1740]: time="2024-02-09T19:18:50.058279781Z" level=info msg="shim disconnected" id=a610a52bb15f5bcf06f91b8af90260a3376d820ee45c2109e06c8be0e81fd54f
Feb 9 19:18:50.059095 env[1740]: time="2024-02-09T19:18:50.058352013Z" level=warning msg="cleaning up after shim disconnected" id=a610a52bb15f5bcf06f91b8af90260a3376d820ee45c2109e06c8be0e81fd54f namespace=k8s.io
Feb 9 19:18:50.059095 env[1740]: time="2024-02-09T19:18:50.058375531Z" level=info msg="cleaning up dead shim"
Feb 9 19:18:50.073124 env[1740]: time="2024-02-09T19:18:50.073041824Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:18:50Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5838 runtime=io.containerd.runc.v2\n"
Feb 9 19:18:50.558679 kubelet[2851]: I0209 19:18:50.555810 2851 scope.go:115] "RemoveContainer" containerID="a610a52bb15f5bcf06f91b8af90260a3376d820ee45c2109e06c8be0e81fd54f"
Feb 9 19:18:50.560839 env[1740]: time="2024-02-09T19:18:50.560762730Z" level=info msg="CreateContainer within sandbox \"edba829086c47ab8a351dcc2867c1ad6ee84bd4f5eea149437125242e9b89c37\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Feb 9 19:18:50.580678 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1651912006.mount: Deactivated successfully.
Feb 9 19:18:50.591645 env[1740]: time="2024-02-09T19:18:50.591539939Z" level=info msg="CreateContainer within sandbox \"edba829086c47ab8a351dcc2867c1ad6ee84bd4f5eea149437125242e9b89c37\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"3ec5423706d222606c1af720d3be546c8b0e190edf41a87b2907ec23d0b4608d\""
Feb 9 19:18:50.592726 env[1740]: time="2024-02-09T19:18:50.592664999Z" level=info msg="StartContainer for \"3ec5423706d222606c1af720d3be546c8b0e190edf41a87b2907ec23d0b4608d\""
Feb 9 19:18:50.633721 systemd[1]: Started cri-containerd-3ec5423706d222606c1af720d3be546c8b0e190edf41a87b2907ec23d0b4608d.scope.
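The sequence starting at 19:18:50 shows the kubelet's restart handling for a kubelet-managed container: the kube-controller-manager container (a610a52b...) exits after 5.294s of CPU time, its shim is cleaned up, the kubelet logs "RemoveContainer", and a replacement is created in the same sandbox with Attempt:1. A minimal sketch surfacing such restarts via restartCount; note the pod name below is an assumption (static-pod names are conventionally suffixed with the node name; the log only shows container IDs):

from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

# Pod name is hypothetical, following the <component>-<node> convention.
pod = v1.read_namespaced_pod(
    name="kube-controller-manager-ip-172-31-28-78", namespace="kube-system"
)
for cs in pod.status.container_statuses:
    # Attempt:1 in the CreateContainer entry corresponds to restart_count=1.
    last = cs.last_state.terminated
    print(cs.name, "restarts:", cs.restart_count,
          "last exit code:", last.exit_code if last else None)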
Feb 9 19:18:50.715664 env[1740]: time="2024-02-09T19:18:50.715534279Z" level=info msg="StartContainer for \"3ec5423706d222606c1af720d3be546c8b0e190edf41a87b2907ec23d0b4608d\" returns successfully"
Feb 9 19:18:51.526527 env[1740]: time="2024-02-09T19:18:51.526474939Z" level=info msg="StopPodSandbox for \"6aa47d2f7a898ee3afa289b19ad2247329ed37bac93862496b6bff0980c24ab1\""
Feb 9 19:18:51.527331 env[1740]: time="2024-02-09T19:18:51.527264109Z" level=info msg="TearDown network for sandbox \"6aa47d2f7a898ee3afa289b19ad2247329ed37bac93862496b6bff0980c24ab1\" successfully"
Feb 9 19:18:51.527521 env[1740]: time="2024-02-09T19:18:51.527487706Z" level=info msg="StopPodSandbox for \"6aa47d2f7a898ee3afa289b19ad2247329ed37bac93862496b6bff0980c24ab1\" returns successfully"
Feb 9 19:18:51.529078 env[1740]: time="2024-02-09T19:18:51.529026042Z" level=info msg="RemovePodSandbox for \"6aa47d2f7a898ee3afa289b19ad2247329ed37bac93862496b6bff0980c24ab1\""
Feb 9 19:18:51.529451 env[1740]: time="2024-02-09T19:18:51.529366399Z" level=info msg="Forcibly stopping sandbox \"6aa47d2f7a898ee3afa289b19ad2247329ed37bac93862496b6bff0980c24ab1\""
Feb 9 19:18:51.529960 env[1740]: time="2024-02-09T19:18:51.529912198Z" level=info msg="TearDown network for sandbox \"6aa47d2f7a898ee3afa289b19ad2247329ed37bac93862496b6bff0980c24ab1\" successfully"
Feb 9 19:18:51.535835 env[1740]: time="2024-02-09T19:18:51.535780848Z" level=info msg="RemovePodSandbox \"6aa47d2f7a898ee3afa289b19ad2247329ed37bac93862496b6bff0980c24ab1\" returns successfully"
Feb 9 19:18:51.536875 env[1740]: time="2024-02-09T19:18:51.536830979Z" level=info msg="StopPodSandbox for \"2380140802b39dadb1c2ebf33d8b64c172bda5cd20aff99ccb90d00c551f8d6f\""
Feb 9 19:18:51.537251 env[1740]: time="2024-02-09T19:18:51.537185686Z" level=info msg="TearDown network for sandbox \"2380140802b39dadb1c2ebf33d8b64c172bda5cd20aff99ccb90d00c551f8d6f\" successfully"
Feb 9 19:18:51.537389 env[1740]: time="2024-02-09T19:18:51.537356488Z" level=info msg="StopPodSandbox for \"2380140802b39dadb1c2ebf33d8b64c172bda5cd20aff99ccb90d00c551f8d6f\" returns successfully"
Feb 9 19:18:51.538074 env[1740]: time="2024-02-09T19:18:51.538020739Z" level=info msg="RemovePodSandbox for \"2380140802b39dadb1c2ebf33d8b64c172bda5cd20aff99ccb90d00c551f8d6f\""
Feb 9 19:18:51.538212 env[1740]: time="2024-02-09T19:18:51.538077313Z" level=info msg="Forcibly stopping sandbox \"2380140802b39dadb1c2ebf33d8b64c172bda5cd20aff99ccb90d00c551f8d6f\""
Feb 9 19:18:51.538295 env[1740]: time="2024-02-09T19:18:51.538214579Z" level=info msg="TearDown network for sandbox \"2380140802b39dadb1c2ebf33d8b64c172bda5cd20aff99ccb90d00c551f8d6f\" successfully"
Feb 9 19:18:51.543166 env[1740]: time="2024-02-09T19:18:51.543100975Z" level=info msg="RemovePodSandbox \"2380140802b39dadb1c2ebf33d8b64c172bda5cd20aff99ccb90d00c551f8d6f\" returns successfully"
Feb 9 19:18:51.544787 env[1740]: time="2024-02-09T19:18:51.544739885Z" level=info msg="StopPodSandbox for \"463296dc4706a6b3a607af051464639c2acb8f1c4669c00096637b317460b630\""
Feb 9 19:18:51.545351 env[1740]: time="2024-02-09T19:18:51.545287772Z" level=info msg="TearDown network for sandbox \"463296dc4706a6b3a607af051464639c2acb8f1c4669c00096637b317460b630\" successfully"
Feb 9 19:18:51.545488 env[1740]: time="2024-02-09T19:18:51.545455394Z" level=info msg="StopPodSandbox for \"463296dc4706a6b3a607af051464639c2acb8f1c4669c00096637b317460b630\" returns successfully"
Feb 9 19:18:51.546196 env[1740]: time="2024-02-09T19:18:51.546157861Z" level=info msg="RemovePodSandbox for \"463296dc4706a6b3a607af051464639c2acb8f1c4669c00096637b317460b630\""
Feb 9 19:18:51.546422 env[1740]: time="2024-02-09T19:18:51.546364360Z" level=info msg="Forcibly stopping sandbox \"463296dc4706a6b3a607af051464639c2acb8f1c4669c00096637b317460b630\""
Feb 9 19:18:51.546697 env[1740]: time="2024-02-09T19:18:51.546644519Z" level=info msg="TearDown network for sandbox \"463296dc4706a6b3a607af051464639c2acb8f1c4669c00096637b317460b630\" successfully"
Feb 9 19:18:51.551684 env[1740]: time="2024-02-09T19:18:51.551617426Z" level=info msg="RemovePodSandbox \"463296dc4706a6b3a607af051464639c2acb8f1c4669c00096637b317460b630\" returns successfully"
Feb 9 19:18:55.060134 kubelet[2851]: E0209 19:18:55.060079 2851 controller.go:189] failed to update lease, error: Put "https://172.31.28.78:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-28-78?timeout=10s": net/http: request canceled (Client.Timeout exceeded while awaiting headers)
Feb 9 19:18:55.481411 systemd[1]: cri-containerd-7d42077011819c3138fa9a3e55f23b45def919926946fad566d668a3a4deae60.scope: Deactivated successfully.
Feb 9 19:18:55.482020 systemd[1]: cri-containerd-7d42077011819c3138fa9a3e55f23b45def919926946fad566d668a3a4deae60.scope: Consumed 3.906s CPU time.
Feb 9 19:18:55.523216 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7d42077011819c3138fa9a3e55f23b45def919926946fad566d668a3a4deae60-rootfs.mount: Deactivated successfully.
Feb 9 19:18:55.549517 env[1740]: time="2024-02-09T19:18:55.549455670Z" level=info msg="shim disconnected" id=7d42077011819c3138fa9a3e55f23b45def919926946fad566d668a3a4deae60
Feb 9 19:18:55.550441 env[1740]: time="2024-02-09T19:18:55.550402548Z" level=warning msg="cleaning up after shim disconnected" id=7d42077011819c3138fa9a3e55f23b45def919926946fad566d668a3a4deae60 namespace=k8s.io
Feb 9 19:18:55.550601 env[1740]: time="2024-02-09T19:18:55.550571708Z" level=info msg="cleaning up dead shim"
Feb 9 19:18:55.566344 env[1740]: time="2024-02-09T19:18:55.566288164Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:18:55Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5900 runtime=io.containerd.runc.v2\n"
Feb 9 19:18:55.573517 kubelet[2851]: I0209 19:18:55.573458 2851 scope.go:115] "RemoveContainer" containerID="7d42077011819c3138fa9a3e55f23b45def919926946fad566d668a3a4deae60"
Feb 9 19:18:55.577428 env[1740]: time="2024-02-09T19:18:55.577374955Z" level=info msg="CreateContainer within sandbox \"f9ee6085be2df6fa7c4d7799045d94664bc8de8826fa462d8515575ecbc303e1\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Feb 9 19:18:55.599300 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1524693250.mount: Deactivated successfully.
Feb 9 19:18:55.610304 env[1740]: time="2024-02-09T19:18:55.610241646Z" level=info msg="CreateContainer within sandbox \"f9ee6085be2df6fa7c4d7799045d94664bc8de8826fa462d8515575ecbc303e1\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"5dc712a5861f2e7483c8da8e465eca85c01e2ec336bbb07392ea59902752ea27\""
Feb 9 19:18:55.611456 env[1740]: time="2024-02-09T19:18:55.611405859Z" level=info msg="StartContainer for \"5dc712a5861f2e7483c8da8e465eca85c01e2ec336bbb07392ea59902752ea27\""
Feb 9 19:18:55.649281 systemd[1]: Started cri-containerd-5dc712a5861f2e7483c8da8e465eca85c01e2ec336bbb07392ea59902752ea27.scope.
Feb 9 19:18:55.730476 env[1740]: time="2024-02-09T19:18:55.730414157Z" level=info msg="StartContainer for \"5dc712a5861f2e7483c8da8e465eca85c01e2ec336bbb07392ea59902752ea27\" returns successfully"
Feb 9 19:19:05.062403 kubelet[2851]: E0209 19:19:05.062337 2851 controller.go:189] failed to update lease, error: Put "https://172.31.28.78:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-28-78?timeout=10s": net/http: request canceled (Client.Timeout exceeded while awaiting headers)
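The repeated "failed to update lease" errors record the kubelet timing out while renewing its Lease object in the kube-node-lease namespace, the node-heartbeat mechanism (renewed roughly every 10 seconds by default); the Put URL in the log shows exactly that object. A final sketch (same Python kubernetes-client assumption as the earlier examples) reading the lease directly:

from kubernetes import client, config

config.load_kube_config()
coord = client.CoordinationV1Api()

# Each node owns one Lease in kube-node-lease, named after the node.
lease = coord.read_namespaced_lease(name="ip-172-31-28-78",
                                    namespace="kube-node-lease")
print("holder:", lease.spec.holder_identity)
print("last renewed:", lease.spec.renew_time)
print("duration:", lease.spec.lease_duration_seconds, "seconds")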