Jul 2 00:43:29.952908 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
Jul 2 00:43:29.952946 kernel: Linux version 5.15.161-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Mon Jul 1 23:37:37 -00 2024
Jul 2 00:43:29.952969 kernel: efi: EFI v2.70 by EDK II
Jul 2 00:43:29.952985 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7ac1aa98 MEMRESERVE=0x7173cf98
Jul 2 00:43:29.952998 kernel: ACPI: Early table checksum verification disabled
Jul 2 00:43:29.953012 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
Jul 2 00:43:29.953027 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
Jul 2 00:43:29.953041 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
Jul 2 00:43:29.953056 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527)
Jul 2 00:43:29.953069 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Jul 2 00:43:29.953089 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
Jul 2 00:43:29.953103 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
Jul 2 00:43:29.953117 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
Jul 2 00:43:29.953131 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Jul 2 00:43:29.953149 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
Jul 2 00:43:29.953168 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
Jul 2 00:43:29.953182 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
Jul 2 00:43:29.953197 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
Jul 2 00:43:29.953211 kernel: printk: bootconsole [uart0] enabled
Jul 2 00:43:29.953225 kernel: NUMA: Failed to initialise from firmware
Jul 2 00:43:29.953240 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
Jul 2 00:43:29.953255 kernel: NUMA: NODE_DATA [mem 0x4b5843900-0x4b5848fff]
Jul 2 00:43:29.953269 kernel: Zone ranges:
Jul 2 00:43:29.953284 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Jul 2 00:43:29.953299 kernel: DMA32 empty
Jul 2 00:43:29.953314 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff]
Jul 2 00:43:29.953333 kernel: Movable zone start for each node
Jul 2 00:43:29.953349 kernel: Early memory node ranges
Jul 2 00:43:29.953364 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff]
Jul 2 00:43:29.953379 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff]
Jul 2 00:43:29.957137 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff]
Jul 2 00:43:29.957191 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff]
Jul 2 00:43:29.957208 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff]
Jul 2 00:43:29.957223 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff]
Jul 2 00:43:29.957238 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff]
Jul 2 00:43:29.957253 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff]
Jul 2 00:43:29.957267 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
Jul 2 00:43:29.957282 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
Jul 2 00:43:29.957306 kernel: psci: probing for conduit method from ACPI.
Jul 2 00:43:29.957321 kernel: psci: PSCIv1.0 detected in firmware.
Jul 2 00:43:29.957342 kernel: psci: Using standard PSCI v0.2 function IDs
Jul 2 00:43:29.957357 kernel: psci: Trusted OS migration not required
Jul 2 00:43:29.957372 kernel: psci: SMC Calling Convention v1.1
Jul 2 00:43:29.957392 kernel: ACPI: SRAT not present
Jul 2 00:43:29.957462 kernel: percpu: Embedded 30 pages/cpu s83032 r8192 d31656 u122880
Jul 2 00:43:29.957478 kernel: pcpu-alloc: s83032 r8192 d31656 u122880 alloc=30*4096
Jul 2 00:43:29.957494 kernel: pcpu-alloc: [0] 0 [0] 1
Jul 2 00:43:29.957510 kernel: Detected PIPT I-cache on CPU0
Jul 2 00:43:29.957525 kernel: CPU features: detected: GIC system register CPU interface
Jul 2 00:43:29.957540 kernel: CPU features: detected: Spectre-v2
Jul 2 00:43:29.957555 kernel: CPU features: detected: Spectre-v3a
Jul 2 00:43:29.957571 kernel: CPU features: detected: Spectre-BHB
Jul 2 00:43:29.957586 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jul 2 00:43:29.957602 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jul 2 00:43:29.957622 kernel: CPU features: detected: ARM erratum 1742098
Jul 2 00:43:29.957638 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
Jul 2 00:43:29.957653 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872
Jul 2 00:43:29.957668 kernel: Policy zone: Normal
Jul 2 00:43:29.957686 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=7b86ecfcd4701bdf4668db795601b20c118ac0b117c34a9b3836e0a5236b73b0
Jul 2 00:43:29.957702 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 2 00:43:29.957717 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 2 00:43:29.957733 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 2 00:43:29.957748 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 2 00:43:29.957764 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB)
Jul 2 00:43:29.957784 kernel: Memory: 3824588K/4030464K available (9792K kernel code, 2092K rwdata, 7572K rodata, 36352K init, 777K bss, 205876K reserved, 0K cma-reserved)
Jul 2 00:43:29.957801 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jul 2 00:43:29.957816 kernel: trace event string verifier disabled
Jul 2 00:43:29.957831 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 2 00:43:29.957847 kernel: rcu: RCU event tracing is enabled.
Jul 2 00:43:29.957863 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jul 2 00:43:29.957879 kernel: Trampoline variant of Tasks RCU enabled.
Jul 2 00:43:29.957894 kernel: Tracing variant of Tasks RCU enabled.
Jul 2 00:43:29.957909 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 2 00:43:29.957925 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jul 2 00:43:29.957939 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jul 2 00:43:29.957955 kernel: GICv3: 96 SPIs implemented
Jul 2 00:43:29.957974 kernel: GICv3: 0 Extended SPIs implemented
Jul 2 00:43:29.957989 kernel: GICv3: Distributor has no Range Selector support
Jul 2 00:43:29.958004 kernel: Root IRQ handler: gic_handle_irq
Jul 2 00:43:29.958019 kernel: GICv3: 16 PPIs implemented
Jul 2 00:43:29.958034 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
Jul 2 00:43:29.958049 kernel: ACPI: SRAT not present
Jul 2 00:43:29.958063 kernel: ITS [mem 0x10080000-0x1009ffff]
Jul 2 00:43:29.958079 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000a0000 (indirect, esz 8, psz 64K, shr 1)
Jul 2 00:43:29.958116 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000b0000 (flat, esz 8, psz 64K, shr 1)
Jul 2 00:43:29.958132 kernel: GICv3: using LPI property table @0x00000004000c0000
Jul 2 00:43:29.958147 kernel: ITS: Using hypervisor restricted LPI range [128]
Jul 2 00:43:29.958167 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000d0000
Jul 2 00:43:29.958182 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
Jul 2 00:43:29.958198 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
Jul 2 00:43:29.958213 kernel: sched_clock: 56 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
Jul 2 00:43:29.958228 kernel: Console: colour dummy device 80x25
Jul 2 00:43:29.958244 kernel: printk: console [tty1] enabled
Jul 2 00:43:29.958260 kernel: ACPI: Core revision 20210730
Jul 2 00:43:29.958276 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
Jul 2 00:43:29.958292 kernel: pid_max: default: 32768 minimum: 301
Jul 2 00:43:29.958307 kernel: LSM: Security Framework initializing
Jul 2 00:43:29.958327 kernel: SELinux: Initializing.
Jul 2 00:43:29.958343 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 2 00:43:29.958359 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 2 00:43:29.958375 kernel: rcu: Hierarchical SRCU implementation.
Jul 2 00:43:29.958390 kernel: Platform MSI: ITS@0x10080000 domain created
Jul 2 00:43:29.958440 kernel: PCI/MSI: ITS@0x10080000 domain created
Jul 2 00:43:29.958457 kernel: Remapping and enabling EFI services.
Jul 2 00:43:29.958472 kernel: smp: Bringing up secondary CPUs ...
Jul 2 00:43:29.958488 kernel: Detected PIPT I-cache on CPU1
Jul 2 00:43:29.958503 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
Jul 2 00:43:29.958525 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000e0000
Jul 2 00:43:29.958541 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
Jul 2 00:43:29.966455 kernel: smp: Brought up 1 node, 2 CPUs
Jul 2 00:43:29.966487 kernel: SMP: Total of 2 processors activated.
Jul 2 00:43:29.966504 kernel: CPU features: detected: 32-bit EL0 Support
Jul 2 00:43:29.966520 kernel: CPU features: detected: 32-bit EL1 Support
Jul 2 00:43:29.966536 kernel: CPU features: detected: CRC32 instructions
Jul 2 00:43:29.966551 kernel: CPU: All CPU(s) started at EL1
Jul 2 00:43:29.966566 kernel: alternatives: patching kernel code
Jul 2 00:43:29.966591 kernel: devtmpfs: initialized
Jul 2 00:43:29.966606 kernel: KASLR disabled due to lack of seed
Jul 2 00:43:29.966633 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 2 00:43:29.966654 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jul 2 00:43:29.966670 kernel: pinctrl core: initialized pinctrl subsystem
Jul 2 00:43:29.966686 kernel: SMBIOS 3.0.0 present.
Jul 2 00:43:29.966702 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
Jul 2 00:43:29.966718 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 2 00:43:29.966734 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jul 2 00:43:29.966750 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jul 2 00:43:29.966767 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jul 2 00:43:29.966787 kernel: audit: initializing netlink subsys (disabled)
Jul 2 00:43:29.966803 kernel: audit: type=2000 audit(0.263:1): state=initialized audit_enabled=0 res=1
Jul 2 00:43:29.966820 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 2 00:43:29.966836 kernel: cpuidle: using governor menu
Jul 2 00:43:29.966852 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jul 2 00:43:29.966872 kernel: ASID allocator initialised with 32768 entries
Jul 2 00:43:29.966888 kernel: ACPI: bus type PCI registered
Jul 2 00:43:29.966904 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 2 00:43:29.966920 kernel: Serial: AMBA PL011 UART driver
Jul 2 00:43:29.966936 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Jul 2 00:43:29.966952 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages
Jul 2 00:43:29.966968 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Jul 2 00:43:29.966984 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages
Jul 2 00:43:29.967000 kernel: cryptd: max_cpu_qlen set to 1000
Jul 2 00:43:29.967020 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jul 2 00:43:29.967036 kernel: ACPI: Added _OSI(Module Device)
Jul 2 00:43:29.967052 kernel: ACPI: Added _OSI(Processor Device)
Jul 2 00:43:29.967068 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jul 2 00:43:29.967084 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 2 00:43:29.967100 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Jul 2 00:43:29.967116 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Jul 2 00:43:29.967131 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Jul 2 00:43:29.967147 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 2 00:43:29.967167 kernel: ACPI: Interpreter enabled
Jul 2 00:43:29.967184 kernel: ACPI: Using GIC for interrupt routing
Jul 2 00:43:29.967200 kernel: ACPI: MCFG table detected, 1 entries
Jul 2 00:43:29.967216 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f])
Jul 2 00:43:29.967535 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jul 2 00:43:29.967740 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jul 2 00:43:29.967937 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jul 2 00:43:29.968132 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00
Jul 2 00:43:29.968333 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f]
Jul 2 00:43:29.968356 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
Jul 2 00:43:29.968373 kernel: acpiphp: Slot [1] registered
Jul 2 00:43:29.968389 kernel: acpiphp: Slot [2] registered
Jul 2 00:43:29.968428 kernel: acpiphp: Slot [3] registered
Jul 2 00:43:29.968446 kernel: acpiphp: Slot [4] registered
Jul 2 00:43:29.968462 kernel: acpiphp: Slot [5] registered
Jul 2 00:43:29.968478 kernel: acpiphp: Slot [6] registered
Jul 2 00:43:29.968493 kernel: acpiphp: Slot [7] registered
Jul 2 00:43:29.968515 kernel: acpiphp: Slot [8] registered
Jul 2 00:43:29.968531 kernel: acpiphp: Slot [9] registered
Jul 2 00:43:29.968547 kernel: acpiphp: Slot [10] registered
Jul 2 00:43:29.968562 kernel: acpiphp: Slot [11] registered
Jul 2 00:43:29.968578 kernel: acpiphp: Slot [12] registered
Jul 2 00:43:29.968594 kernel: acpiphp: Slot [13] registered
Jul 2 00:43:29.968610 kernel: acpiphp: Slot [14] registered
Jul 2 00:43:29.968625 kernel: acpiphp: Slot [15] registered
Jul 2 00:43:29.968642 kernel: acpiphp: Slot [16] registered
Jul 2 00:43:29.968661 kernel: acpiphp: Slot [17] registered
Jul 2 00:43:29.968677 kernel: acpiphp: Slot [18] registered
Jul 2 00:43:29.968693 kernel: acpiphp: Slot [19] registered
Jul 2 00:43:29.968709 kernel: acpiphp: Slot [20] registered
Jul 2 00:43:29.968725 kernel: acpiphp: Slot [21] registered
Jul 2 00:43:29.968741 kernel: acpiphp: Slot [22] registered
Jul 2 00:43:29.968756 kernel: acpiphp: Slot [23] registered
Jul 2 00:43:29.968773 kernel: acpiphp: Slot [24] registered
Jul 2 00:43:29.968788 kernel: acpiphp: Slot [25] registered
Jul 2 00:43:29.968804 kernel: acpiphp: Slot [26] registered
Jul 2 00:43:29.968824 kernel: acpiphp: Slot [27] registered
Jul 2 00:43:29.968840 kernel: acpiphp: Slot [28] registered
Jul 2 00:43:29.968856 kernel: acpiphp: Slot [29] registered
Jul 2 00:43:29.968872 kernel: acpiphp: Slot [30] registered
Jul 2 00:43:29.968888 kernel: acpiphp: Slot [31] registered
Jul 2 00:43:29.968904 kernel: PCI host bridge to bus 0000:00
Jul 2 00:43:29.969106 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
Jul 2 00:43:29.969296 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jul 2 00:43:29.969525 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
Jul 2 00:43:29.969721 kernel: pci_bus 0000:00: root bus resource [bus 00-0f]
Jul 2 00:43:29.969959 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000
Jul 2 00:43:29.970209 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003
Jul 2 00:43:29.970435 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff]
Jul 2 00:43:29.970671 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Jul 2 00:43:29.970887 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff]
Jul 2 00:43:29.971094 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
Jul 2 00:43:29.971315 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Jul 2 00:43:29.978621 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff]
Jul 2 00:43:29.978850 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref]
Jul 2 00:43:29.979052 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff]
Jul 2 00:43:29.979252 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
Jul 2 00:43:29.979488 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref]
Jul 2 00:43:29.979694 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff]
Jul 2 00:43:29.979899 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff]
Jul 2 00:43:29.980100 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff]
Jul 2 00:43:29.984942 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff]
Jul 2 00:43:29.985157 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
Jul 2 00:43:29.985341 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Jul 2 00:43:29.985587 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
Jul 2 00:43:29.985614 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jul 2 00:43:29.985632 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jul 2 00:43:29.985650 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jul 2 00:43:29.985666 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jul 2 00:43:29.985683 kernel: iommu: Default domain type: Translated
Jul 2 00:43:29.985699 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jul 2 00:43:29.985716 kernel: vgaarb: loaded
Jul 2 00:43:29.985732 kernel: pps_core: LinuxPPS API ver. 1 registered
Jul 2 00:43:29.985755 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Jul 2 00:43:29.985771 kernel: PTP clock support registered
Jul 2 00:43:29.998476 kernel: Registered efivars operations
Jul 2 00:43:29.998502 kernel: clocksource: Switched to clocksource arch_sys_counter
Jul 2 00:43:29.998519 kernel: VFS: Disk quotas dquot_6.6.0
Jul 2 00:43:29.998536 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 2 00:43:29.998553 kernel: pnp: PnP ACPI init
Jul 2 00:43:29.998813 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
Jul 2 00:43:29.998848 kernel: pnp: PnP ACPI: found 1 devices
Jul 2 00:43:29.998866 kernel: NET: Registered PF_INET protocol family
Jul 2 00:43:29.998882 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 2 00:43:29.998899 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jul 2 00:43:29.998916 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 2 00:43:29.998932 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 2 00:43:29.998948 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
Jul 2 00:43:29.998965 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jul 2 00:43:29.998981 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 2 00:43:29.999001 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 2 00:43:29.999018 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 2 00:43:29.999034 kernel: PCI: CLS 0 bytes, default 64
Jul 2 00:43:29.999050 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available
Jul 2 00:43:29.999067 kernel: kvm [1]: HYP mode not available
Jul 2 00:43:29.999083 kernel: Initialise system trusted keyrings
Jul 2 00:43:29.999100 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jul 2 00:43:29.999116 kernel: Key type asymmetric registered
Jul 2 00:43:29.999132 kernel: Asymmetric key parser 'x509' registered
Jul 2 00:43:29.999152 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Jul 2 00:43:29.999169 kernel: io scheduler mq-deadline registered
Jul 2 00:43:29.999185 kernel: io scheduler kyber registered
Jul 2 00:43:29.999201 kernel: io scheduler bfq registered
Jul 2 00:43:29.999424 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
Jul 2 00:43:29.999452 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Jul 2 00:43:29.999469 kernel: ACPI: button: Power Button [PWRB]
Jul 2 00:43:29.999485 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1
Jul 2 00:43:29.999507 kernel: ACPI: button: Sleep Button [SLPB]
Jul 2 00:43:29.999524 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 2 00:43:29.999541 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Jul 2 00:43:29.999751 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
Jul 2 00:43:29.999775 kernel: printk: console [ttyS0] disabled
Jul 2 00:43:29.999793 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
Jul 2 00:43:29.999810 kernel: printk: console [ttyS0] enabled
Jul 2 00:43:29.999826 kernel: printk: bootconsole [uart0] disabled
Jul 2 00:43:29.999842 kernel: thunder_xcv, ver 1.0
Jul 2 00:43:29.999858 kernel: thunder_bgx, ver 1.0
Jul 2 00:43:29.999879 kernel: nicpf, ver 1.0
Jul 2 00:43:29.999895 kernel: nicvf, ver 1.0
Jul 2 00:43:30.000100 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jul 2 00:43:30.000290 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-07-02T00:43:29 UTC (1719881009)
Jul 2 00:43:30.000313 kernel: hid: raw HID events driver (C) Jiri Kosina
Jul 2 00:43:30.000330 kernel: NET: Registered PF_INET6 protocol family
Jul 2 00:43:30.000347 kernel: Segment Routing with IPv6
Jul 2 00:43:30.000363 kernel: In-situ OAM (IOAM) with IPv6
Jul 2 00:43:30.000383 kernel: NET: Registered PF_PACKET protocol family
Jul 2 00:43:30.000426 kernel: Key type dns_resolver registered
Jul 2 00:43:30.000445 kernel: registered taskstats version 1
Jul 2 00:43:30.000461 kernel: Loading compiled-in X.509 certificates
Jul 2 00:43:30.000478 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.161-flatcar: c418313b450e4055b23e41c11cb6dc415de0265d'
Jul 2 00:43:30.000494 kernel: Key type .fscrypt registered
Jul 2 00:43:30.000510 kernel: Key type fscrypt-provisioning registered
Jul 2 00:43:30.000526 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 2 00:43:30.000542 kernel: ima: Allocated hash algorithm: sha1
Jul 2 00:43:30.000564 kernel: ima: No architecture policies found
Jul 2 00:43:30.000579 kernel: clk: Disabling unused clocks
Jul 2 00:43:30.000595 kernel: Freeing unused kernel memory: 36352K
Jul 2 00:43:30.000611 kernel: Run /init as init process
Jul 2 00:43:30.000627 kernel: with arguments:
Jul 2 00:43:30.000643 kernel: /init
Jul 2 00:43:30.000658 kernel: with environment:
Jul 2 00:43:30.000674 kernel: HOME=/
Jul 2 00:43:30.000690 kernel: TERM=linux
Jul 2 00:43:30.000710 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 2 00:43:30.000732 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Jul 2 00:43:30.000753 systemd[1]: Detected virtualization amazon.
Jul 2 00:43:30.000772 systemd[1]: Detected architecture arm64.
Jul 2 00:43:30.000789 systemd[1]: Running in initrd.
Jul 2 00:43:30.000807 systemd[1]: No hostname configured, using default hostname.
Jul 2 00:43:30.000824 systemd[1]: Hostname set to .
Jul 2 00:43:30.000847 systemd[1]: Initializing machine ID from VM UUID.
Jul 2 00:43:30.000865 systemd[1]: Queued start job for default target initrd.target.
Jul 2 00:43:30.000883 systemd[1]: Started systemd-ask-password-console.path.
Jul 2 00:43:30.000900 systemd[1]: Reached target cryptsetup.target.
Jul 2 00:43:30.000917 systemd[1]: Reached target paths.target.
Jul 2 00:43:30.000935 systemd[1]: Reached target slices.target.
Jul 2 00:43:30.000952 systemd[1]: Reached target swap.target.
Jul 2 00:43:30.000969 systemd[1]: Reached target timers.target.
Jul 2 00:43:30.000992 systemd[1]: Listening on iscsid.socket.
Jul 2 00:43:30.001009 systemd[1]: Listening on iscsiuio.socket.
Jul 2 00:43:30.001027 systemd[1]: Listening on systemd-journald-audit.socket.
Jul 2 00:43:30.001045 systemd[1]: Listening on systemd-journald-dev-log.socket.
Jul 2 00:43:30.001062 systemd[1]: Listening on systemd-journald.socket.
Jul 2 00:43:30.001080 systemd[1]: Listening on systemd-networkd.socket.
Jul 2 00:43:30.001098 systemd[1]: Listening on systemd-udevd-control.socket.
Jul 2 00:43:30.001115 systemd[1]: Listening on systemd-udevd-kernel.socket.
Jul 2 00:43:30.001137 systemd[1]: Reached target sockets.target.
Jul 2 00:43:30.001154 systemd[1]: Starting kmod-static-nodes.service...
Jul 2 00:43:30.001172 systemd[1]: Finished network-cleanup.service.
Jul 2 00:43:30.001190 systemd[1]: Starting systemd-fsck-usr.service...
Jul 2 00:43:30.001207 systemd[1]: Starting systemd-journald.service...
Jul 2 00:43:30.001225 systemd[1]: Starting systemd-modules-load.service...
Jul 2 00:43:30.001242 systemd[1]: Starting systemd-resolved.service...
Jul 2 00:43:30.001260 systemd[1]: Starting systemd-vconsole-setup.service...
Jul 2 00:43:30.001277 systemd[1]: Finished kmod-static-nodes.service.
Jul 2 00:43:30.001299 systemd[1]: Finished systemd-fsck-usr.service.
Jul 2 00:43:30.001317 kernel: audit: type=1130 audit(1719881009.957:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:43:30.001335 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Jul 2 00:43:30.001353 systemd[1]: Finished systemd-vconsole-setup.service.
Jul 2 00:43:30.001371 kernel: audit: type=1130 audit(1719881009.984:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:43:30.001388 systemd[1]: Starting dracut-cmdline-ask.service...
Jul 2 00:43:30.001428 systemd-journald[309]: Journal started
Jul 2 00:43:30.001520 systemd-journald[309]: Runtime Journal (/run/log/journal/ec28fdfead55f2570c646ef8c690de41) is 8.0M, max 75.4M, 67.4M free.
Jul 2 00:43:29.957000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:43:29.984000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:43:29.953495 systemd-modules-load[310]: Inserted module 'overlay'
Jul 2 00:43:30.010607 systemd[1]: Started systemd-journald.service.
Jul 2 00:43:30.014000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:43:30.021596 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Jul 2 00:43:30.023451 kernel: audit: type=1130 audit(1719881010.014:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:43:30.024000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:43:30.041434 kernel: audit: type=1130 audit(1719881010.024:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:43:30.047431 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 2 00:43:30.057443 kernel: Bridge firewalling registered
Jul 2 00:43:30.057616 systemd-modules-load[310]: Inserted module 'br_netfilter'
Jul 2 00:43:30.060459 systemd[1]: Finished dracut-cmdline-ask.service.
Jul 2 00:43:30.061000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:43:30.064726 systemd[1]: Starting dracut-cmdline.service...
Jul 2 00:43:30.072184 kernel: audit: type=1130 audit(1719881010.061:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:43:30.081642 systemd-resolved[311]: Positive Trust Anchors:
Jul 2 00:43:30.081672 systemd-resolved[311]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 2 00:43:30.081727 systemd-resolved[311]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Jul 2 00:43:30.098432 kernel: SCSI subsystem initialized
Jul 2 00:43:30.106382 dracut-cmdline[326]: dracut-dracut-053
Jul 2 00:43:30.116424 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jul 2 00:43:30.122648 kernel: device-mapper: uevent: version 1.0.3
Jul 2 00:43:30.122716 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Jul 2 00:43:30.122740 dracut-cmdline[326]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=7b86ecfcd4701bdf4668db795601b20c118ac0b117c34a9b3836e0a5236b73b0
Jul 2 00:43:30.142683 systemd-modules-load[310]: Inserted module 'dm_multipath'
Jul 2 00:43:30.146428 systemd[1]: Finished systemd-modules-load.service.
Jul 2 00:43:30.160000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:43:30.163562 systemd[1]: Starting systemd-sysctl.service...
Jul 2 00:43:30.172976 kernel: audit: type=1130 audit(1719881010.160:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:43:30.189228 systemd[1]: Finished systemd-sysctl.service.
Jul 2 00:43:30.198416 kernel: audit: type=1130 audit(1719881010.189:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:43:30.189000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:43:30.298442 kernel: Loading iSCSI transport class v2.0-870.
Jul 2 00:43:30.317434 kernel: iscsi: registered transport (tcp) Jul 2 00:43:30.344705 kernel: iscsi: registered transport (qla4xxx) Jul 2 00:43:30.344778 kernel: QLogic iSCSI HBA Driver Jul 2 00:43:30.510071 systemd-resolved[311]: Defaulting to hostname 'linux'. Jul 2 00:43:30.511765 kernel: random: crng init done Jul 2 00:43:30.512376 systemd[1]: Started systemd-resolved.service. Jul 2 00:43:30.523058 kernel: audit: type=1130 audit(1719881010.512:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:30.512000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:30.514134 systemd[1]: Reached target nss-lookup.target. Jul 2 00:43:30.537857 systemd[1]: Finished dracut-cmdline.service. Jul 2 00:43:30.537000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:30.547496 systemd[1]: Starting dracut-pre-udev.service... Jul 2 00:43:30.556444 kernel: audit: type=1130 audit(1719881010.537:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 00:43:30.616450 kernel: raid6: neonx8 gen() 6418 MB/s Jul 2 00:43:30.634441 kernel: raid6: neonx8 xor() 4760 MB/s Jul 2 00:43:30.652445 kernel: raid6: neonx4 gen() 6565 MB/s Jul 2 00:43:30.670442 kernel: raid6: neonx4 xor() 4987 MB/s Jul 2 00:43:30.688443 kernel: raid6: neonx2 gen() 5805 MB/s Jul 2 00:43:30.706442 kernel: raid6: neonx2 xor() 4560 MB/s Jul 2 00:43:30.724441 kernel: raid6: neonx1 gen() 4485 MB/s Jul 2 00:43:30.742449 kernel: raid6: neonx1 xor() 3686 MB/s Jul 2 00:43:30.760449 kernel: raid6: int64x8 gen() 3442 MB/s Jul 2 00:43:30.778442 kernel: raid6: int64x8 xor() 2085 MB/s Jul 2 00:43:30.796453 kernel: raid6: int64x4 gen() 3836 MB/s Jul 2 00:43:30.814450 kernel: raid6: int64x4 xor() 2191 MB/s Jul 2 00:43:30.832459 kernel: raid6: int64x2 gen() 3594 MB/s Jul 2 00:43:30.850457 kernel: raid6: int64x2 xor() 1935 MB/s Jul 2 00:43:30.868461 kernel: raid6: int64x1 gen() 2734 MB/s Jul 2 00:43:30.887488 kernel: raid6: int64x1 xor() 1437 MB/s Jul 2 00:43:30.887562 kernel: raid6: using algorithm neonx4 gen() 6565 MB/s Jul 2 00:43:30.887587 kernel: raid6: .... xor() 4987 MB/s, rmw enabled Jul 2 00:43:30.889013 kernel: raid6: using neon recovery algorithm Jul 2 00:43:30.908464 kernel: xor: measuring software checksum speed Jul 2 00:43:30.911457 kernel: 8regs : 9408 MB/sec Jul 2 00:43:30.913448 kernel: 32regs : 11151 MB/sec Jul 2 00:43:30.917036 kernel: arm64_neon : 9292 MB/sec Jul 2 00:43:30.917104 kernel: xor: using function: 32regs (11151 MB/sec) Jul 2 00:43:31.009456 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no Jul 2 00:43:31.028319 systemd[1]: Finished dracut-pre-udev.service. Jul 2 00:43:31.028000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 00:43:31.030000 audit: BPF prog-id=7 op=LOAD Jul 2 00:43:31.030000 audit: BPF prog-id=8 op=LOAD Jul 2 00:43:31.032945 systemd[1]: Starting systemd-udevd.service... Jul 2 00:43:31.062148 systemd-udevd[508]: Using default interface naming scheme 'v252'. Jul 2 00:43:31.074557 systemd[1]: Started systemd-udevd.service. Jul 2 00:43:31.073000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:31.078617 systemd[1]: Starting dracut-pre-trigger.service... Jul 2 00:43:31.117484 dracut-pre-trigger[510]: rd.md=0: removing MD RAID activation Jul 2 00:43:31.187560 systemd[1]: Finished dracut-pre-trigger.service. Jul 2 00:43:31.188000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:31.192689 systemd[1]: Starting systemd-udev-trigger.service... Jul 2 00:43:31.308712 systemd[1]: Finished systemd-udev-trigger.service. Jul 2 00:43:31.309000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 00:43:31.461320 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Jul 2 00:43:31.461387 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012) Jul 2 00:43:31.469914 kernel: ena 0000:00:05.0: ENA device version: 0.10 Jul 2 00:43:31.470342 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Jul 2 00:43:31.473915 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Jul 2 00:43:31.473969 kernel: nvme nvme0: pci function 0000:00:04.0 Jul 2 00:43:31.479434 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:d5:22:a0:ff:0f Jul 2 00:43:31.484431 kernel: nvme nvme0: 2/0/0 default/read/poll queues Jul 2 00:43:31.490476 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jul 2 00:43:31.490529 kernel: GPT:9289727 != 16777215 Jul 2 00:43:31.492349 kernel: GPT:Alternate GPT header not at the end of the disk. Jul 2 00:43:31.495007 kernel: GPT:9289727 != 16777215 Jul 2 00:43:31.495037 kernel: GPT: Use GNU Parted to correct GPT errors. Jul 2 00:43:31.496324 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jul 2 00:43:31.500208 (udev-worker)[552]: Network interface NamePolicy= disabled on kernel command line. Jul 2 00:43:31.572460 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by (udev-worker) (571) Jul 2 00:43:31.592134 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Jul 2 00:43:31.681134 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Jul 2 00:43:31.685784 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Jul 2 00:43:31.699749 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Jul 2 00:43:31.713213 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Jul 2 00:43:31.726255 systemd[1]: Starting disk-uuid.service... Jul 2 00:43:31.736376 disk-uuid[667]: Primary Header is updated. Jul 2 00:43:31.736376 disk-uuid[667]: Secondary Entries is updated. 
Jul 2 00:43:31.736376 disk-uuid[667]: Secondary Header is updated. Jul 2 00:43:31.745452 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jul 2 00:43:31.754418 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jul 2 00:43:32.761471 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jul 2 00:43:32.762379 disk-uuid[668]: The operation has completed successfully. Jul 2 00:43:32.919025 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 2 00:43:32.921173 systemd[1]: Finished disk-uuid.service. Jul 2 00:43:32.922000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:32.922000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:32.952943 systemd[1]: Starting verity-setup.service... Jul 2 00:43:32.985742 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Jul 2 00:43:33.072136 systemd[1]: Found device dev-mapper-usr.device. Jul 2 00:43:33.076694 systemd[1]: Mounting sysusr-usr.mount... Jul 2 00:43:33.080000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:33.078433 systemd[1]: Finished verity-setup.service. Jul 2 00:43:33.174428 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Jul 2 00:43:33.175863 systemd[1]: Mounted sysusr-usr.mount. Jul 2 00:43:33.179032 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Jul 2 00:43:33.184759 systemd[1]: Starting ignition-setup.service... Jul 2 00:43:33.189816 systemd[1]: Starting parse-ip-for-networkd.service... 
Jul 2 00:43:33.216875 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Jul 2 00:43:33.216943 kernel: BTRFS info (device nvme0n1p6): using free space tree Jul 2 00:43:33.216968 kernel: BTRFS info (device nvme0n1p6): has skinny extents Jul 2 00:43:33.232093 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jul 2 00:43:33.246522 systemd[1]: mnt-oem.mount: Deactivated successfully. Jul 2 00:43:33.277256 systemd[1]: Finished ignition-setup.service. Jul 2 00:43:33.278000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:33.281539 systemd[1]: Starting ignition-fetch-offline.service... Jul 2 00:43:33.345994 systemd[1]: Finished parse-ip-for-networkd.service. Jul 2 00:43:33.347000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:33.348000 audit: BPF prog-id=9 op=LOAD Jul 2 00:43:33.351178 systemd[1]: Starting systemd-networkd.service... Jul 2 00:43:33.398876 systemd-networkd[1108]: lo: Link UP Jul 2 00:43:33.398902 systemd-networkd[1108]: lo: Gained carrier Jul 2 00:43:33.403054 systemd-networkd[1108]: Enumeration completed Jul 2 00:43:33.404000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:33.403663 systemd-networkd[1108]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 2 00:43:33.403757 systemd[1]: Started systemd-networkd.service. Jul 2 00:43:33.405475 systemd[1]: Reached target network.target. 
Jul 2 00:43:33.410182 systemd-networkd[1108]: eth0: Link UP Jul 2 00:43:33.410190 systemd-networkd[1108]: eth0: Gained carrier Jul 2 00:43:33.412066 systemd[1]: Starting iscsiuio.service... Jul 2 00:43:33.430255 systemd[1]: Started iscsiuio.service. Jul 2 00:43:33.430000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:33.433202 systemd[1]: Starting iscsid.service... Jul 2 00:43:33.435912 systemd-networkd[1108]: eth0: DHCPv4 address 172.31.19.36/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jul 2 00:43:33.445328 iscsid[1113]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Jul 2 00:43:33.445328 iscsid[1113]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier]. Jul 2 00:43:33.445328 iscsid[1113]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Jul 2 00:43:33.445328 iscsid[1113]: If using hardware iscsi like qla4xxx this message can be ignored. Jul 2 00:43:33.445328 iscsid[1113]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Jul 2 00:43:33.463109 iscsid[1113]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Jul 2 00:43:33.468863 systemd[1]: Started iscsid.service. Jul 2 00:43:33.469000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:33.472957 systemd[1]: Starting dracut-initqueue.service... 
Jul 2 00:43:33.500028 systemd[1]: Finished dracut-initqueue.service. Jul 2 00:43:33.501000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:33.503160 systemd[1]: Reached target remote-fs-pre.target. Jul 2 00:43:33.506288 systemd[1]: Reached target remote-cryptsetup.target. Jul 2 00:43:33.508210 systemd[1]: Reached target remote-fs.target. Jul 2 00:43:33.514301 systemd[1]: Starting dracut-pre-mount.service... Jul 2 00:43:33.534020 systemd[1]: Finished dracut-pre-mount.service. Jul 2 00:43:33.535000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:34.188095 ignition[1060]: Ignition 2.14.0 Jul 2 00:43:34.189233 ignition[1060]: Stage: fetch-offline Jul 2 00:43:34.191323 ignition[1060]: reading system config file "/usr/lib/ignition/base.d/base.ign" Jul 2 00:43:34.193036 ignition[1060]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Jul 2 00:43:34.207800 ignition[1060]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jul 2 00:43:34.208907 ignition[1060]: Ignition finished successfully Jul 2 00:43:34.213878 systemd[1]: Finished ignition-fetch-offline.service. Jul 2 00:43:34.225433 kernel: kauditd_printk_skb: 17 callbacks suppressed Jul 2 00:43:34.225543 kernel: audit: type=1130 audit(1719881014.214:28): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 00:43:34.214000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:34.217212 systemd[1]: Starting ignition-fetch.service... Jul 2 00:43:34.236332 ignition[1132]: Ignition 2.14.0 Jul 2 00:43:34.236364 ignition[1132]: Stage: fetch Jul 2 00:43:34.236771 ignition[1132]: reading system config file "/usr/lib/ignition/base.d/base.ign" Jul 2 00:43:34.236835 ignition[1132]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Jul 2 00:43:34.252323 ignition[1132]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jul 2 00:43:34.256010 ignition[1132]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jul 2 00:43:34.261470 ignition[1132]: INFO : PUT result: OK Jul 2 00:43:34.264712 ignition[1132]: DEBUG : parsed url from cmdline: "" Jul 2 00:43:34.264712 ignition[1132]: INFO : no config URL provided Jul 2 00:43:34.264712 ignition[1132]: INFO : reading system config file "/usr/lib/ignition/user.ign" Jul 2 00:43:34.269965 ignition[1132]: INFO : no config at "/usr/lib/ignition/user.ign" Jul 2 00:43:34.269965 ignition[1132]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jul 2 00:43:34.274280 ignition[1132]: INFO : PUT result: OK Jul 2 00:43:34.274280 ignition[1132]: INFO : GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Jul 2 00:43:34.278442 ignition[1132]: INFO : GET result: OK Jul 2 00:43:34.278442 ignition[1132]: DEBUG : parsing config with SHA512: e633fc6e1677f6a713a28b1ff9138e7c4aa7be39ad348509ca98c8e4aabc754c06513ea2c6bb46d91f5b4243d74eaff303868ad990034f8a9509a80e79edcf77 Jul 2 00:43:34.289050 unknown[1132]: fetched base config from "system" Jul 2 00:43:34.289088 unknown[1132]: fetched base config from "system" Jul 2 00:43:34.289103 unknown[1132]: 
fetched user config from "aws" Jul 2 00:43:34.292287 ignition[1132]: fetch: fetch complete Jul 2 00:43:34.292302 ignition[1132]: fetch: fetch passed Jul 2 00:43:34.292474 ignition[1132]: Ignition finished successfully Jul 2 00:43:34.298315 systemd[1]: Finished ignition-fetch.service. Jul 2 00:43:34.311565 kernel: audit: type=1130 audit(1719881014.299:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:34.299000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:34.303889 systemd[1]: Starting ignition-kargs.service... Jul 2 00:43:34.325848 ignition[1138]: Ignition 2.14.0 Jul 2 00:43:34.325877 ignition[1138]: Stage: kargs Jul 2 00:43:34.326231 ignition[1138]: reading system config file "/usr/lib/ignition/base.d/base.ign" Jul 2 00:43:34.326296 ignition[1138]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Jul 2 00:43:34.339770 ignition[1138]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jul 2 00:43:34.342215 ignition[1138]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jul 2 00:43:34.349434 ignition[1138]: INFO : PUT result: OK Jul 2 00:43:34.354556 ignition[1138]: kargs: kargs passed Jul 2 00:43:34.355986 ignition[1138]: Ignition finished successfully Jul 2 00:43:34.359136 systemd[1]: Finished ignition-kargs.service. Jul 2 00:43:34.369022 kernel: audit: type=1130 audit(1719881014.359:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 00:43:34.359000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:34.362116 systemd[1]: Starting ignition-disks.service... Jul 2 00:43:34.379200 ignition[1144]: Ignition 2.14.0 Jul 2 00:43:34.379230 ignition[1144]: Stage: disks Jul 2 00:43:34.379616 ignition[1144]: reading system config file "/usr/lib/ignition/base.d/base.ign" Jul 2 00:43:34.379679 ignition[1144]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Jul 2 00:43:34.396047 ignition[1144]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jul 2 00:43:34.398252 ignition[1144]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jul 2 00:43:34.400611 ignition[1144]: INFO : PUT result: OK Jul 2 00:43:34.406079 ignition[1144]: disks: disks passed Jul 2 00:43:34.406997 ignition[1144]: Ignition finished successfully Jul 2 00:43:34.409972 systemd[1]: Finished ignition-disks.service. Jul 2 00:43:34.411000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:34.412884 systemd[1]: Reached target initrd-root-device.target. Jul 2 00:43:34.421994 kernel: audit: type=1130 audit(1719881014.411:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:34.420862 systemd[1]: Reached target local-fs-pre.target. Jul 2 00:43:34.423470 systemd[1]: Reached target local-fs.target. Jul 2 00:43:34.426099 systemd[1]: Reached target sysinit.target. Jul 2 00:43:34.428570 systemd[1]: Reached target basic.target. 
Jul 2 00:43:34.431613 systemd[1]: Starting systemd-fsck-root.service... Jul 2 00:43:34.469887 systemd-fsck[1152]: ROOT: clean, 614/553520 files, 56019/553472 blocks Jul 2 00:43:34.481271 systemd[1]: Finished systemd-fsck-root.service. Jul 2 00:43:34.482000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:34.484905 systemd[1]: Mounting sysroot.mount... Jul 2 00:43:34.493451 kernel: audit: type=1130 audit(1719881014.482:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:34.509438 kernel: EXT4-fs (nvme0n1p9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Jul 2 00:43:34.511858 systemd[1]: Mounted sysroot.mount. Jul 2 00:43:34.512385 systemd[1]: Reached target initrd-root-fs.target. Jul 2 00:43:34.525862 systemd[1]: Mounting sysroot-usr.mount... Jul 2 00:43:34.528049 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Jul 2 00:43:34.528129 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 2 00:43:34.528181 systemd[1]: Reached target ignition-diskful.target. Jul 2 00:43:34.544626 systemd[1]: Mounted sysroot-usr.mount. Jul 2 00:43:34.560561 systemd[1]: Mounting sysroot-usr-share-oem.mount... Jul 2 00:43:34.565465 systemd[1]: Starting initrd-setup-root.service... 
Jul 2 00:43:34.583452 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1169) Jul 2 00:43:34.588322 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Jul 2 00:43:34.588389 kernel: BTRFS info (device nvme0n1p6): using free space tree Jul 2 00:43:34.588649 initrd-setup-root[1174]: cut: /sysroot/etc/passwd: No such file or directory Jul 2 00:43:34.593433 kernel: BTRFS info (device nvme0n1p6): has skinny extents Jul 2 00:43:34.601432 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jul 2 00:43:34.606253 systemd[1]: Mounted sysroot-usr-share-oem.mount. Jul 2 00:43:34.611116 initrd-setup-root[1200]: cut: /sysroot/etc/group: No such file or directory Jul 2 00:43:34.620612 initrd-setup-root[1208]: cut: /sysroot/etc/shadow: No such file or directory Jul 2 00:43:34.629110 initrd-setup-root[1216]: cut: /sysroot/etc/gshadow: No such file or directory Jul 2 00:43:34.872224 systemd[1]: Finished initrd-setup-root.service. Jul 2 00:43:34.874000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:34.877655 systemd[1]: Starting ignition-mount.service... Jul 2 00:43:34.886004 kernel: audit: type=1130 audit(1719881014.874:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:34.887635 systemd[1]: Starting sysroot-boot.service... Jul 2 00:43:34.903053 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. Jul 2 00:43:34.903222 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. Jul 2 00:43:34.934915 systemd[1]: Finished sysroot-boot.service. 
Jul 2 00:43:34.935000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:34.944436 kernel: audit: type=1130 audit(1719881014.935:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:34.944911 ignition[1236]: INFO : Ignition 2.14.0 Jul 2 00:43:34.946622 ignition[1236]: INFO : Stage: mount Jul 2 00:43:34.948199 ignition[1236]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Jul 2 00:43:34.950503 ignition[1236]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Jul 2 00:43:34.969052 ignition[1236]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jul 2 00:43:34.971312 ignition[1236]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jul 2 00:43:34.974230 ignition[1236]: INFO : PUT result: OK Jul 2 00:43:34.979108 ignition[1236]: INFO : mount: mount passed Jul 2 00:43:34.981371 ignition[1236]: INFO : Ignition finished successfully Jul 2 00:43:34.981956 systemd[1]: Finished ignition-mount.service. Jul 2 00:43:34.985000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:34.989385 systemd[1]: Starting ignition-files.service... Jul 2 00:43:34.997579 kernel: audit: type=1130 audit(1719881014.985:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:35.004329 systemd[1]: Mounting sysroot-usr-share-oem.mount... 
Jul 2 00:43:35.021438 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/nvme0n1p6 scanned by mount (1245) Jul 2 00:43:35.026452 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Jul 2 00:43:35.026494 kernel: BTRFS info (device nvme0n1p6): using free space tree Jul 2 00:43:35.026518 kernel: BTRFS info (device nvme0n1p6): has skinny extents Jul 2 00:43:35.034433 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jul 2 00:43:35.039157 systemd[1]: Mounted sysroot-usr-share-oem.mount. Jul 2 00:43:35.057480 ignition[1264]: INFO : Ignition 2.14.0 Jul 2 00:43:35.057480 ignition[1264]: INFO : Stage: files Jul 2 00:43:35.061544 ignition[1264]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Jul 2 00:43:35.061544 ignition[1264]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Jul 2 00:43:35.073718 ignition[1264]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jul 2 00:43:35.073718 ignition[1264]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jul 2 00:43:35.078763 ignition[1264]: INFO : PUT result: OK Jul 2 00:43:35.087463 ignition[1264]: DEBUG : files: compiled without relabeling support, skipping Jul 2 00:43:35.091316 ignition[1264]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 2 00:43:35.093737 ignition[1264]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 2 00:43:35.125437 ignition[1264]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 2 00:43:35.127879 ignition[1264]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 2 00:43:35.130139 ignition[1264]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 2 00:43:35.128985 unknown[1264]: wrote ssh authorized keys file for user: core 
Jul 2 00:43:35.134569 ignition[1264]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Jul 2 00:43:35.134569 ignition[1264]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Jul 2 00:43:35.134569 ignition[1264]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Jul 2 00:43:35.134569 ignition[1264]: INFO : GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Jul 2 00:43:35.192719 ignition[1264]: INFO : GET result: OK Jul 2 00:43:35.284917 ignition[1264]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Jul 2 00:43:35.288952 ignition[1264]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 2 00:43:35.288952 ignition[1264]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 2 00:43:35.288952 ignition[1264]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.28.7-arm64.raw" Jul 2 00:43:35.288952 ignition[1264]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.28.7-arm64.raw" Jul 2 00:43:35.288952 ignition[1264]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/etc/eks/bootstrap.sh" Jul 2 00:43:35.288952 ignition[1264]: INFO : oem config not found in "/usr/share/oem", looking on oem partition Jul 2 00:43:35.316163 ignition[1264]: INFO : op(1): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1819415248" Jul 2 00:43:35.316163 ignition[1264]: CRITICAL : op(1): 
[failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1819415248": device or resource busy
Jul 2 00:43:35.316163 ignition[1264]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem1819415248", trying btrfs: device or resource busy
Jul 2 00:43:35.316163 ignition[1264]: INFO : op(2): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1819415248"
Jul 2 00:43:35.330280 kernel: BTRFS info: devid 1 device path /dev/nvme0n1p6 changed to /dev/disk/by-label/OEM scanned by ignition (1267)
Jul 2 00:43:35.330317 ignition[1264]: INFO : op(2): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1819415248"
Jul 2 00:43:35.332738 ignition[1264]: INFO : op(3): [started] unmounting "/mnt/oem1819415248"
Jul 2 00:43:35.334778 ignition[1264]: INFO : op(3): [finished] unmounting "/mnt/oem1819415248"
Jul 2 00:43:35.336729 ignition[1264]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/etc/eks/bootstrap.sh"
Jul 2 00:43:35.336729 ignition[1264]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jul 2 00:43:35.336729 ignition[1264]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jul 2 00:43:35.336729 ignition[1264]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 2 00:43:35.336729 ignition[1264]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 2 00:43:35.336729 ignition[1264]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jul 2 00:43:35.336729 ignition[1264]: INFO : GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Jul 2 00:43:35.476631 systemd-networkd[1108]: eth0: Gained IPv6LL
Jul 2 00:43:35.800103 ignition[1264]: INFO : GET result: OK
Jul 2 00:43:35.944135 ignition[1264]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jul 2 00:43:35.947236 ignition[1264]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/home/core/install.sh"
Jul 2 00:43:35.947236 ignition[1264]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/home/core/install.sh"
Jul 2 00:43:35.947236 ignition[1264]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/home/core/nginx.yaml"
Jul 2 00:43:35.957759 ignition[1264]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jul 2 00:43:35.957759 ignition[1264]: INFO : files: createFilesystemsFiles: createFiles: op(d): [started] writing file "/sysroot/etc/systemd/system/nvidia.service"
Jul 2 00:43:35.957759 ignition[1264]: INFO : oem config not found in "/usr/share/oem", looking on oem partition
Jul 2 00:43:35.973302 ignition[1264]: INFO : op(4): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3320992881"
Jul 2 00:43:35.973302 ignition[1264]: CRITICAL : op(4): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3320992881": device or resource busy
Jul 2 00:43:35.973302 ignition[1264]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem3320992881", trying btrfs: device or resource busy
Jul 2 00:43:35.973302 ignition[1264]: INFO : op(5): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3320992881"
Jul 2 00:43:35.985041 ignition[1264]: INFO : op(5): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3320992881"
Jul 2 00:43:35.985041 ignition[1264]: INFO : op(6): [started] unmounting "/mnt/oem3320992881"
Jul 2 00:43:35.985041 ignition[1264]: INFO : op(6): [finished] unmounting "/mnt/oem3320992881"
Jul 2 00:43:35.985041 ignition[1264]: INFO : files: createFilesystemsFiles: createFiles: op(d): [finished] writing file "/sysroot/etc/systemd/system/nvidia.service"
Jul 2 00:43:35.985041 ignition[1264]: INFO : files: createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.28.7-arm64.raw"
Jul 2 00:43:35.985041 ignition[1264]: INFO : GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.28.7-arm64.raw: attempt #1
Jul 2 00:43:36.351460 ignition[1264]: INFO : GET result: OK
Jul 2 00:43:36.860856 ignition[1264]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.28.7-arm64.raw"
Jul 2 00:43:36.864685 ignition[1264]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/etc/amazon/ssm/amazon-ssm-agent.json"
Jul 2 00:43:36.867998 ignition[1264]: INFO : oem config not found in "/usr/share/oem", looking on oem partition
Jul 2 00:43:36.880009 ignition[1264]: INFO : op(7): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem932441001"
Jul 2 00:43:36.882537 ignition[1264]: CRITICAL : op(7): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem932441001": device or resource busy
Jul 2 00:43:36.882537 ignition[1264]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem932441001", trying btrfs: device or resource busy
Jul 2 00:43:36.882537 ignition[1264]: INFO : op(8): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem932441001"
Jul 2 00:43:36.890854 ignition[1264]: INFO : op(8): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem932441001"
Jul 2 00:43:36.890854 ignition[1264]: INFO : op(9): [started] unmounting "/mnt/oem932441001"
Jul 2 00:43:36.890854 ignition[1264]: INFO : op(9): [finished] unmounting "/mnt/oem932441001"
Jul 2 00:43:36.890854 ignition[1264]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/etc/amazon/ssm/amazon-ssm-agent.json"
Jul 2 00:43:36.890854 ignition[1264]: INFO : files: createFilesystemsFiles: createFiles: op(10): [started] writing file "/sysroot/etc/amazon/ssm/seelog.xml"
Jul 2 00:43:36.890854 ignition[1264]: INFO : oem config not found in "/usr/share/oem", looking on oem partition
Jul 2 00:43:36.918954 ignition[1264]: INFO : op(a): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem230520206"
Jul 2 00:43:36.921516 ignition[1264]: CRITICAL : op(a): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem230520206": device or resource busy
Jul 2 00:43:36.921516 ignition[1264]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem230520206", trying btrfs: device or resource busy
Jul 2 00:43:36.921516 ignition[1264]: INFO : op(b): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem230520206"
Jul 2 00:43:36.938244 ignition[1264]: INFO : op(b): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem230520206"
Jul 2 00:43:36.938244 ignition[1264]: INFO : op(c): [started] unmounting "/mnt/oem230520206"
Jul 2 00:43:36.938244 ignition[1264]: INFO : op(c): [finished] unmounting "/mnt/oem230520206"
Jul 2 00:43:36.938244 ignition[1264]: INFO : files: createFilesystemsFiles: createFiles: op(10): [finished] writing file "/sysroot/etc/amazon/ssm/seelog.xml"
Jul 2 00:43:36.938244 ignition[1264]: INFO : files: op(11): [started] processing unit "amazon-ssm-agent.service"
Jul 2 00:43:36.938244 ignition[1264]: INFO : files: op(11): op(12): [started] writing unit "amazon-ssm-agent.service" at "/sysroot/etc/systemd/system/amazon-ssm-agent.service"
Jul 2 00:43:36.938244 ignition[1264]: INFO : files: op(11): op(12): [finished] writing unit "amazon-ssm-agent.service" at "/sysroot/etc/systemd/system/amazon-ssm-agent.service"
Jul 2 00:43:36.938244 ignition[1264]: INFO : files: op(11): [finished] processing unit "amazon-ssm-agent.service"
Jul 2 00:43:36.938244 ignition[1264]: INFO : files: op(13): [started] processing unit "nvidia.service"
Jul 2 00:43:36.938244 ignition[1264]: INFO : files: op(13): [finished] processing unit "nvidia.service"
Jul 2 00:43:36.938244 ignition[1264]: INFO : files: op(14): [started] processing unit "coreos-metadata-sshkeys@.service"
Jul 2 00:43:36.938244 ignition[1264]: INFO : files: op(14): [finished] processing unit "coreos-metadata-sshkeys@.service"
Jul 2 00:43:36.938244 ignition[1264]: INFO : files: op(15): [started] processing unit "containerd.service"
Jul 2 00:43:36.938244 ignition[1264]: INFO : files: op(15): op(16): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Jul 2 00:43:36.938244 ignition[1264]: INFO : files: op(15): op(16): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Jul 2 00:43:36.938244 ignition[1264]: INFO : files: op(15): [finished] processing unit "containerd.service"
Jul 2 00:43:36.938244 ignition[1264]: INFO : files: op(17): [started] processing unit "prepare-helm.service"
Jul 2 00:43:36.938244 ignition[1264]: INFO : files: op(17): op(18): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 2 00:43:36.938244 ignition[1264]: INFO : files: op(17): op(18): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 2 00:43:36.938244 ignition[1264]: INFO : files: op(17): [finished] processing unit "prepare-helm.service"
Jul 2 00:43:36.934458 systemd[1]: mnt-oem230520206.mount: Deactivated successfully.
Jul 2 00:43:37.008254 ignition[1264]: INFO : files: op(19): [started] setting preset to enabled for "amazon-ssm-agent.service"
Jul 2 00:43:37.008254 ignition[1264]: INFO : files: op(19): [finished] setting preset to enabled for "amazon-ssm-agent.service"
Jul 2 00:43:37.008254 ignition[1264]: INFO : files: op(1a): [started] setting preset to enabled for "nvidia.service"
Jul 2 00:43:37.008254 ignition[1264]: INFO : files: op(1a): [finished] setting preset to enabled for "nvidia.service"
Jul 2 00:43:37.008254 ignition[1264]: INFO : files: op(1b): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service "
Jul 2 00:43:37.008254 ignition[1264]: INFO : files: op(1b): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service "
Jul 2 00:43:37.008254 ignition[1264]: INFO : files: op(1c): [started] setting preset to enabled for "prepare-helm.service"
Jul 2 00:43:37.008254 ignition[1264]: INFO : files: op(1c): [finished] setting preset to enabled for "prepare-helm.service"
Jul 2 00:43:37.008254 ignition[1264]: INFO : files: createResultFile: createFiles: op(1d): [started] writing file "/sysroot/etc/.ignition-result.json"
Jul 2 00:43:37.008254 ignition[1264]: INFO : files: createResultFile: createFiles: op(1d): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jul 2 00:43:37.008254 ignition[1264]: INFO : files: files passed
Jul 2 00:43:37.008254 ignition[1264]: INFO : Ignition finished successfully
Jul 2 00:43:37.063860 kernel: audit: type=1130 audit(1719881017.018:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:43:37.018000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:43:37.016273 systemd[1]: Finished ignition-files.service.
Jul 2 00:43:37.030543 systemd[1]: Starting initrd-setup-root-after-ignition.service...
Jul 2 00:43:37.041117 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
Jul 2 00:43:37.045566 systemd[1]: Starting ignition-quench.service...
Jul 2 00:43:37.059817 systemd[1]: ignition-quench.service: Deactivated successfully.
Jul 2 00:43:37.060022 systemd[1]: Finished ignition-quench.service.
Jul 2 00:43:37.075000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:43:37.078394 initrd-setup-root-after-ignition[1289]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 2 00:43:37.085884 kernel: audit: type=1130 audit(1719881017.075:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:43:37.075000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:43:37.086563 systemd[1]: Finished initrd-setup-root-after-ignition.service.
Jul 2 00:43:37.088000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:43:37.090244 systemd[1]: Reached target ignition-complete.target.
Jul 2 00:43:37.094504 systemd[1]: Starting initrd-parse-etc.service...
Jul 2 00:43:37.127006 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jul 2 00:43:37.127379 systemd[1]: Finished initrd-parse-etc.service.
Jul 2 00:43:37.130000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:43:37.130000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:43:37.132157 systemd[1]: Reached target initrd-fs.target.
Jul 2 00:43:37.134783 systemd[1]: Reached target initrd.target.
Jul 2 00:43:37.142058 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
Jul 2 00:43:37.145761 systemd[1]: Starting dracut-pre-pivot.service...
Jul 2 00:43:37.170222 systemd[1]: Finished dracut-pre-pivot.service.
Jul 2 00:43:37.171000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:43:37.174502 systemd[1]: Starting initrd-cleanup.service...
Jul 2 00:43:37.194947 systemd[1]: Stopped target nss-lookup.target.
Jul 2 00:43:37.198135 systemd[1]: Stopped target remote-cryptsetup.target.
Jul 2 00:43:37.201416 systemd[1]: Stopped target timers.target.
Jul 2 00:43:37.204307 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jul 2 00:43:37.206371 systemd[1]: Stopped dracut-pre-pivot.service.
Jul 2 00:43:37.208000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:43:37.209590 systemd[1]: Stopped target initrd.target.
Jul 2 00:43:37.212350 systemd[1]: Stopped target basic.target.
Jul 2 00:43:37.215050 systemd[1]: Stopped target ignition-complete.target.
Jul 2 00:43:37.218205 systemd[1]: Stopped target ignition-diskful.target.
Jul 2 00:43:37.221312 systemd[1]: Stopped target initrd-root-device.target.
Jul 2 00:43:37.224618 systemd[1]: Stopped target remote-fs.target.
Jul 2 00:43:37.227460 systemd[1]: Stopped target remote-fs-pre.target.
Jul 2 00:43:37.230530 systemd[1]: Stopped target sysinit.target.
Jul 2 00:43:37.233246 systemd[1]: Stopped target local-fs.target.
Jul 2 00:43:37.236182 systemd[1]: Stopped target local-fs-pre.target.
Jul 2 00:43:37.239259 systemd[1]: Stopped target swap.target.
Jul 2 00:43:37.241834 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jul 2 00:43:37.243730 systemd[1]: Stopped dracut-pre-mount.service.
Jul 2 00:43:37.245000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:43:37.246876 systemd[1]: Stopped target cryptsetup.target.
Jul 2 00:43:37.249772 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jul 2 00:43:37.251660 systemd[1]: Stopped dracut-initqueue.service.
Jul 2 00:43:37.253000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:43:37.254704 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jul 2 00:43:37.256736 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
Jul 2 00:43:37.258000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:43:37.260128 systemd[1]: ignition-files.service: Deactivated successfully.
Jul 2 00:43:37.261855 systemd[1]: Stopped ignition-files.service.
Jul 2 00:43:37.262000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:43:37.266197 systemd[1]: Stopping ignition-mount.service...
Jul 2 00:43:37.290375 ignition[1302]: INFO : Ignition 2.14.0
Jul 2 00:43:37.290375 ignition[1302]: INFO : Stage: umount
Jul 2 00:43:37.290375 ignition[1302]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Jul 2 00:43:37.290375 ignition[1302]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Jul 2 00:43:37.313000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:43:37.321357 iscsid[1113]: iscsid shutting down.
Jul 2 00:43:37.284926 systemd[1]: Stopping iscsid.service...
Jul 2 00:43:37.323000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:43:37.328638 ignition[1302]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jul 2 00:43:37.328638 ignition[1302]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Jul 2 00:43:37.328638 ignition[1302]: INFO : PUT result: OK
Jul 2 00:43:37.300265 systemd[1]: Stopping sysroot-boot.service...
Jul 2 00:43:37.337179 ignition[1302]: INFO : umount: umount passed
Jul 2 00:43:37.337179 ignition[1302]: INFO : Ignition finished successfully
Jul 2 00:43:37.310020 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jul 2 00:43:37.310358 systemd[1]: Stopped systemd-udev-trigger.service.
Jul 2 00:43:37.343000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:43:37.314424 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jul 2 00:43:37.314671 systemd[1]: Stopped dracut-pre-trigger.service.
Jul 2 00:43:37.334628 systemd[1]: iscsid.service: Deactivated successfully.
Jul 2 00:43:37.354000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:43:37.354000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:43:37.339944 systemd[1]: Stopped iscsid.service.
Jul 2 00:43:37.350721 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jul 2 00:43:37.353512 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jul 2 00:43:37.353725 systemd[1]: Finished initrd-cleanup.service.
Jul 2 00:43:37.365153 systemd[1]: ignition-mount.service: Deactivated successfully.
Jul 2 00:43:37.365000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:43:37.367000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:43:37.371000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:43:37.365355 systemd[1]: Stopped ignition-mount.service.
Jul 2 00:43:37.374000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:43:37.368256 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jul 2 00:43:37.368483 systemd[1]: Stopped sysroot-boot.service.
Jul 2 00:43:37.376000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:43:37.380000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:43:37.371272 systemd[1]: ignition-disks.service: Deactivated successfully.
Jul 2 00:43:37.371370 systemd[1]: Stopped ignition-disks.service.
Jul 2 00:43:37.372968 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jul 2 00:43:37.373068 systemd[1]: Stopped ignition-kargs.service.
Jul 2 00:43:37.375785 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jul 2 00:43:37.375945 systemd[1]: Stopped ignition-fetch.service.
Jul 2 00:43:37.379992 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jul 2 00:43:37.380074 systemd[1]: Stopped ignition-fetch-offline.service.
Jul 2 00:43:37.381692 systemd[1]: Stopped target paths.target.
Jul 2 00:43:37.399000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:43:37.399000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:43:37.383015 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jul 2 00:43:37.388531 systemd[1]: Stopped systemd-ask-password-console.path.
Jul 2 00:43:37.388996 systemd[1]: Stopped target slices.target.
Jul 2 00:43:37.392022 systemd[1]: Stopped target sockets.target.
Jul 2 00:43:37.394619 systemd[1]: iscsid.socket: Deactivated successfully.
Jul 2 00:43:37.394700 systemd[1]: Closed iscsid.socket.
Jul 2 00:43:37.397725 systemd[1]: ignition-setup.service: Deactivated successfully.
Jul 2 00:43:37.397816 systemd[1]: Stopped ignition-setup.service.
Jul 2 00:43:37.401765 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jul 2 00:43:37.401857 systemd[1]: Stopped initrd-setup-root.service.
Jul 2 00:43:37.404502 systemd[1]: Stopping iscsiuio.service...
Jul 2 00:43:37.424930 systemd[1]: iscsiuio.service: Deactivated successfully.
Jul 2 00:43:37.426337 systemd[1]: Stopped iscsiuio.service.
Jul 2 00:43:37.428000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:43:37.430457 systemd[1]: Stopped target network.target.
Jul 2 00:43:37.433125 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jul 2 00:43:37.433232 systemd[1]: Closed iscsiuio.socket.
Jul 2 00:43:37.437551 systemd[1]: Stopping systemd-networkd.service...
Jul 2 00:43:37.437873 systemd[1]: Stopping systemd-resolved.service...
Jul 2 00:43:37.444479 systemd-networkd[1108]: eth0: DHCPv6 lease lost
Jul 2 00:43:37.448857 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jul 2 00:43:37.450719 systemd[1]: Stopped systemd-resolved.service.
Jul 2 00:43:37.452000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:43:37.454305 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jul 2 00:43:37.456118 systemd[1]: Stopped systemd-networkd.service.
Jul 2 00:43:37.457000 audit: BPF prog-id=9 op=UNLOAD
Jul 2 00:43:37.457000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:43:37.457000 audit: BPF prog-id=6 op=UNLOAD
Jul 2 00:43:37.459543 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jul 2 00:43:37.459630 systemd[1]: Closed systemd-networkd.socket.
Jul 2 00:43:37.465427 systemd[1]: Stopping network-cleanup.service...
Jul 2 00:43:37.469521 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jul 2 00:43:37.471000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:43:37.474000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:43:37.470795 systemd[1]: Stopped parse-ip-for-networkd.service.
Jul 2 00:43:37.476000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:43:37.472983 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jul 2 00:43:37.473205 systemd[1]: Stopped systemd-sysctl.service.
Jul 2 00:43:37.476011 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jul 2 00:43:37.476112 systemd[1]: Stopped systemd-modules-load.service.
Jul 2 00:43:37.484626 systemd[1]: Stopping systemd-udevd.service...
Jul 2 00:43:37.489169 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jul 2 00:43:37.502384 systemd[1]: network-cleanup.service: Deactivated successfully.
Jul 2 00:43:37.502735 systemd[1]: Stopped network-cleanup.service.
Jul 2 00:43:37.511000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:43:37.512924 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jul 2 00:43:37.514837 systemd[1]: Stopped systemd-udevd.service.
Jul 2 00:43:37.516000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:43:37.518143 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jul 2 00:43:37.518267 systemd[1]: Closed systemd-udevd-control.socket.
Jul 2 00:43:37.523248 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jul 2 00:43:37.522000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:43:37.522000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:43:37.522000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:43:37.523372 systemd[1]: Closed systemd-udevd-kernel.socket.
Jul 2 00:43:37.524786 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jul 2 00:43:37.524908 systemd[1]: Stopped dracut-pre-udev.service.
Jul 2 00:43:37.525450 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jul 2 00:43:37.525566 systemd[1]: Stopped dracut-cmdline.service.
Jul 2 00:43:37.542000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:43:37.544000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:43:37.526305 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 2 00:43:37.526449 systemd[1]: Stopped dracut-cmdline-ask.service.
Jul 2 00:43:37.528925 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
Jul 2 00:43:37.541690 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jul 2 00:43:37.541846 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service.
Jul 2 00:43:37.544028 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jul 2 00:43:37.544149 systemd[1]: Stopped kmod-static-nodes.service.
Jul 2 00:43:37.562000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:43:37.548003 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 2 00:43:37.548712 systemd[1]: Stopped systemd-vconsole-setup.service.
Jul 2 00:43:37.571689 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jul 2 00:43:37.572000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:43:37.572000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:43:37.571935 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
Jul 2 00:43:37.573987 systemd[1]: Reached target initrd-switch-root.target.
Jul 2 00:43:37.585020 systemd[1]: Starting initrd-switch-root.service...
Jul 2 00:43:37.620271 systemd[1]: Switching root.
Jul 2 00:43:37.621000 audit: BPF prog-id=8 op=UNLOAD
Jul 2 00:43:37.621000 audit: BPF prog-id=7 op=UNLOAD
Jul 2 00:43:37.628000 audit: BPF prog-id=5 op=UNLOAD
Jul 2 00:43:37.628000 audit: BPF prog-id=4 op=UNLOAD
Jul 2 00:43:37.628000 audit: BPF prog-id=3 op=UNLOAD
Jul 2 00:43:37.653166 systemd-journald[309]: Journal stopped
Jul 2 00:43:43.393196 systemd-journald[309]: Received SIGTERM from PID 1 (systemd).
Jul 2 00:43:43.393345 kernel: SELinux: Class mctp_socket not defined in policy.
Jul 2 00:43:43.393393 kernel: SELinux: Class anon_inode not defined in policy.
Jul 2 00:43:43.398537 kernel: SELinux: the above unknown classes and permissions will be allowed
Jul 2 00:43:43.398574 kernel: SELinux: policy capability network_peer_controls=1
Jul 2 00:43:43.398608 kernel: SELinux: policy capability open_perms=1
Jul 2 00:43:43.398641 kernel: SELinux: policy capability extended_socket_class=1
Jul 2 00:43:43.398673 kernel: SELinux: policy capability always_check_network=0
Jul 2 00:43:43.398707 kernel: SELinux: policy capability cgroup_seclabel=1
Jul 2 00:43:43.398749 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jul 2 00:43:43.398781 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jul 2 00:43:43.398811 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jul 2 00:43:43.398846 systemd[1]: Successfully loaded SELinux policy in 114.375ms.
Jul 2 00:43:43.398906 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 20.065ms.
Jul 2 00:43:43.398947 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Jul 2 00:43:43.398982 systemd[1]: Detected virtualization amazon.
Jul 2 00:43:43.399018 systemd[1]: Detected architecture arm64.
Jul 2 00:43:43.399051 systemd[1]: Detected first boot.
Jul 2 00:43:43.399087 systemd[1]: Initializing machine ID from VM UUID.
Jul 2 00:43:43.399129 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
Jul 2 00:43:43.399164 systemd[1]: Populated /etc with preset unit settings.
Jul 2 00:43:43.399197 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Jul 2 00:43:43.399233 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Jul 2 00:43:43.399269 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 2 00:43:43.399306 systemd[1]: Queued start job for default target multi-user.target.
Jul 2 00:43:43.399337 systemd[1]: Created slice system-addon\x2dconfig.slice.
Jul 2 00:43:43.399372 systemd[1]: Created slice system-addon\x2drun.slice.
Jul 2 00:43:43.399442 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice.
Jul 2 00:43:43.399477 systemd[1]: Created slice system-getty.slice.
Jul 2 00:43:43.399514 systemd[1]: Created slice system-modprobe.slice.
Jul 2 00:43:43.399548 systemd[1]: Created slice system-serial\x2dgetty.slice.
Jul 2 00:43:43.399581 systemd[1]: Created slice system-system\x2dcloudinit.slice.
Jul 2 00:43:43.399615 systemd[1]: Created slice system-systemd\x2dfsck.slice.
Jul 2 00:43:43.399654 systemd[1]: Created slice user.slice.
Jul 2 00:43:43.399685 systemd[1]: Started systemd-ask-password-console.path.
Jul 2 00:43:43.399717 systemd[1]: Started systemd-ask-password-wall.path.
Jul 2 00:43:43.399749 systemd[1]: Set up automount boot.automount.
Jul 2 00:43:43.399782 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
Jul 2 00:43:43.399817 systemd[1]: Reached target integritysetup.target.
Jul 2 00:43:43.399849 systemd[1]: Reached target remote-cryptsetup.target.
Jul 2 00:43:43.399892 systemd[1]: Reached target remote-fs.target.
Jul 2 00:43:43.399927 systemd[1]: Reached target slices.target.
Jul 2 00:43:43.399958 systemd[1]: Reached target swap.target.
Jul 2 00:43:43.399989 systemd[1]: Reached target torcx.target.
Jul 2 00:43:43.400023 systemd[1]: Reached target veritysetup.target.
Jul 2 00:43:43.400054 systemd[1]: Listening on systemd-coredump.socket.
Jul 2 00:43:43.400084 systemd[1]: Listening on systemd-initctl.socket.
Jul 2 00:43:43.400125 kernel: kauditd_printk_skb: 57 callbacks suppressed
Jul 2 00:43:43.400157 kernel: audit: type=1400 audit(1719881023.023:88): avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Jul 2 00:43:43.400190 systemd[1]: Listening on systemd-journald-audit.socket.
Jul 2 00:43:43.400227 kernel: audit: type=1335 audit(1719881023.023:89): pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1
Jul 2 00:43:43.400258 systemd[1]: Listening on systemd-journald-dev-log.socket.
Jul 2 00:43:43.400288 systemd[1]: Listening on systemd-journald.socket.
Jul 2 00:43:43.400319 systemd[1]: Listening on systemd-networkd.socket.
Jul 2 00:43:43.400350 systemd[1]: Listening on systemd-udevd-control.socket. Jul 2 00:43:43.400380 systemd[1]: Listening on systemd-udevd-kernel.socket. Jul 2 00:43:43.400446 systemd[1]: Listening on systemd-userdbd.socket. Jul 2 00:43:43.400485 systemd[1]: Mounting dev-hugepages.mount... Jul 2 00:43:43.400524 systemd[1]: Mounting dev-mqueue.mount... Jul 2 00:43:43.400560 systemd[1]: Mounting media.mount... Jul 2 00:43:43.400593 systemd[1]: Mounting sys-kernel-debug.mount... Jul 2 00:43:43.400633 systemd[1]: Mounting sys-kernel-tracing.mount... Jul 2 00:43:43.400669 systemd[1]: Mounting tmp.mount... Jul 2 00:43:43.400702 systemd[1]: Starting flatcar-tmpfiles.service... Jul 2 00:43:43.400733 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 2 00:43:43.400764 systemd[1]: Starting kmod-static-nodes.service... Jul 2 00:43:43.400797 systemd[1]: Starting modprobe@configfs.service... Jul 2 00:43:43.400832 systemd[1]: Starting modprobe@dm_mod.service... Jul 2 00:43:43.400865 systemd[1]: Starting modprobe@drm.service... Jul 2 00:43:43.400896 systemd[1]: Starting modprobe@efi_pstore.service... Jul 2 00:43:43.400929 systemd[1]: Starting modprobe@fuse.service... Jul 2 00:43:43.400959 systemd[1]: Starting modprobe@loop.service... Jul 2 00:43:43.400991 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 2 00:43:43.401022 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Jul 2 00:43:43.401054 systemd[1]: (This warning is only shown for the first unit using IP firewalling.) Jul 2 00:43:43.401093 systemd[1]: Starting systemd-journald.service... Jul 2 00:43:43.401135 systemd[1]: Starting systemd-modules-load.service... Jul 2 00:43:43.401166 systemd[1]: Starting systemd-network-generator.service... Jul 2 00:43:43.401199 systemd[1]: Starting systemd-remount-fs.service... 
Jul 2 00:43:43.401232 systemd[1]: Starting systemd-udev-trigger.service... Jul 2 00:43:43.401267 kernel: loop: module loaded Jul 2 00:43:43.401301 systemd[1]: Mounted dev-hugepages.mount. Jul 2 00:43:43.401336 kernel: fuse: init (API version 7.34) Jul 2 00:43:43.401366 systemd[1]: Mounted dev-mqueue.mount. Jul 2 00:43:43.401427 systemd[1]: Mounted media.mount. Jul 2 00:43:43.401477 systemd[1]: Mounted sys-kernel-debug.mount. Jul 2 00:43:43.401511 systemd[1]: Mounted sys-kernel-tracing.mount. Jul 2 00:43:43.401544 systemd[1]: Mounted tmp.mount. Jul 2 00:43:43.401578 systemd[1]: Finished kmod-static-nodes.service. Jul 2 00:43:43.401610 kernel: audit: type=1130 audit(1719881023.326:90): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:43.401642 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 2 00:43:43.401673 systemd[1]: Finished modprobe@configfs.service. Jul 2 00:43:43.401707 kernel: audit: type=1130 audit(1719881023.342:91): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:43.401742 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 2 00:43:43.401774 systemd[1]: Finished modprobe@dm_mod.service. Jul 2 00:43:43.401809 kernel: audit: type=1131 audit(1719881023.349:92): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:43.401839 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 2 00:43:43.401875 systemd[1]: Finished modprobe@drm.service. 
Jul 2 00:43:43.401905 kernel: audit: type=1130 audit(1719881023.365:93): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:43.401935 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 2 00:43:43.401968 systemd[1]: Finished modprobe@efi_pstore.service. Jul 2 00:43:43.402034 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 2 00:43:43.402071 systemd[1]: Finished modprobe@loop.service. Jul 2 00:43:43.402108 kernel: audit: type=1131 audit(1719881023.365:94): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:43.402154 systemd-journald[1453]: Journal started Jul 2 00:43:43.402283 systemd-journald[1453]: Runtime Journal (/run/log/journal/ec28fdfead55f2570c646ef8c690de41) is 8.0M, max 75.4M, 67.4M free. Jul 2 00:43:43.023000 audit[1]: AVC avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Jul 2 00:43:43.411633 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 2 00:43:43.411697 kernel: audit: type=1305 audit(1719881023.369:95): op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jul 2 00:43:43.023000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Jul 2 00:43:43.326000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 00:43:43.342000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:43.349000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:43.365000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:43.365000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:43.369000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jul 2 00:43:43.422147 systemd[1]: Finished modprobe@fuse.service. Jul 2 00:43:43.422240 systemd[1]: Started systemd-journald.service. 
Jul 2 00:43:43.369000 audit[1453]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=6 a1=ffffc1177840 a2=4000 a3=1 items=0 ppid=1 pid=1453 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 00:43:43.438503 kernel: audit: type=1300 audit(1719881023.369:95): arch=c00000b7 syscall=211 success=yes exit=60 a0=6 a1=ffffc1177840 a2=4000 a3=1 items=0 ppid=1 pid=1453 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 00:43:43.443539 kernel: audit: type=1327 audit(1719881023.369:95): proctitle="/usr/lib/systemd/systemd-journald" Jul 2 00:43:43.369000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Jul 2 00:43:43.380000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:43.380000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:43.390000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:43.390000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 00:43:43.405000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:43.405000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:43.419000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:43.419000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:43.435000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:43.451153 systemd[1]: Finished systemd-modules-load.service. Jul 2 00:43:43.451000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:43.455129 systemd[1]: Finished systemd-network-generator.service. Jul 2 00:43:43.455000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:43.459744 systemd[1]: Finished systemd-remount-fs.service. Jul 2 00:43:43.462454 systemd[1]: Reached target network-pre.target. 
Jul 2 00:43:43.460000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:43.468498 systemd[1]: Mounting sys-fs-fuse-connections.mount... Jul 2 00:43:43.479687 systemd[1]: Mounting sys-kernel-config.mount... Jul 2 00:43:43.481753 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 2 00:43:43.495880 systemd[1]: Starting systemd-hwdb-update.service... Jul 2 00:43:43.505993 systemd[1]: Starting systemd-journal-flush.service... Jul 2 00:43:43.507859 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 2 00:43:43.511351 systemd[1]: Starting systemd-random-seed.service... Jul 2 00:43:43.513746 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 2 00:43:43.517178 systemd[1]: Starting systemd-sysctl.service... Jul 2 00:43:43.523660 systemd[1]: Mounted sys-fs-fuse-connections.mount. Jul 2 00:43:43.527860 systemd[1]: Mounted sys-kernel-config.mount. Jul 2 00:43:43.558443 systemd-journald[1453]: Time spent on flushing to /var/log/journal/ec28fdfead55f2570c646ef8c690de41 is 102.727ms for 1079 entries. Jul 2 00:43:43.558443 systemd-journald[1453]: System Journal (/var/log/journal/ec28fdfead55f2570c646ef8c690de41) is 8.0M, max 195.6M, 187.6M free. Jul 2 00:43:43.699032 systemd-journald[1453]: Received client request to flush runtime journal. Jul 2 00:43:43.580000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 00:43:43.587000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:43.655000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:43.578590 systemd[1]: Finished systemd-random-seed.service. Jul 2 00:43:43.582580 systemd[1]: Reached target first-boot-complete.target. Jul 2 00:43:43.586646 systemd[1]: Finished systemd-sysctl.service. Jul 2 00:43:43.654889 systemd[1]: Finished flatcar-tmpfiles.service. Jul 2 00:43:43.659502 systemd[1]: Starting systemd-sysusers.service... Jul 2 00:43:43.702730 systemd[1]: Finished systemd-journal-flush.service. Jul 2 00:43:43.703000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:43.712717 systemd[1]: Finished systemd-udev-trigger.service. Jul 2 00:43:43.713000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:43.717655 systemd[1]: Starting systemd-udev-settle.service... Jul 2 00:43:43.735566 udevadm[1506]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jul 2 00:43:43.818000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 00:43:43.817443 systemd[1]: Finished systemd-sysusers.service. Jul 2 00:43:43.822356 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Jul 2 00:43:43.916654 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Jul 2 00:43:43.917000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:44.549000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:44.548768 systemd[1]: Finished systemd-hwdb-update.service. Jul 2 00:43:44.553204 systemd[1]: Starting systemd-udevd.service... Jul 2 00:43:44.600108 systemd-udevd[1512]: Using default interface naming scheme 'v252'. Jul 2 00:43:44.646857 systemd[1]: Started systemd-udevd.service. Jul 2 00:43:44.647000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:44.652202 systemd[1]: Starting systemd-networkd.service... Jul 2 00:43:44.670151 systemd[1]: Starting systemd-userdbd.service... Jul 2 00:43:44.742000 systemd[1]: Found device dev-ttyS0.device. Jul 2 00:43:44.798154 (udev-worker)[1529]: Network interface NamePolicy= disabled on kernel command line. Jul 2 00:43:44.810000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:44.809382 systemd[1]: Started systemd-userdbd.service. 
Jul 2 00:43:44.992713 systemd-networkd[1518]: lo: Link UP Jul 2 00:43:44.992739 systemd-networkd[1518]: lo: Gained carrier Jul 2 00:43:44.993703 systemd-networkd[1518]: Enumeration completed Jul 2 00:43:44.994000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:44.993927 systemd[1]: Started systemd-networkd.service. Jul 2 00:43:44.993928 systemd-networkd[1518]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 2 00:43:44.998271 systemd[1]: Starting systemd-networkd-wait-online.service... Jul 2 00:43:45.003449 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jul 2 00:43:45.003604 systemd-networkd[1518]: eth0: Link UP Jul 2 00:43:45.003943 systemd-networkd[1518]: eth0: Gained carrier Jul 2 00:43:45.023730 systemd-networkd[1518]: eth0: DHCPv4 address 172.31.19.36/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jul 2 00:43:45.057441 kernel: BTRFS info: devid 1 device path /dev/disk/by-label/OEM changed to /dev/nvme0n1p6 scanned by (udev-worker) (1537) Jul 2 00:43:45.211584 systemd[1]: Finished systemd-udev-settle.service. Jul 2 00:43:45.212000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:45.225272 systemd[1]: dev-disk-by\x2dlabel-OEM.device was skipped because of an unmet condition check (ConditionPathExists=!/usr/.noupdate). Jul 2 00:43:45.228346 systemd[1]: Starting lvm2-activation-early.service... Jul 2 00:43:45.261435 lvm[1632]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 2 00:43:45.303276 systemd[1]: Finished lvm2-activation-early.service. 
Jul 2 00:43:45.304000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:45.306067 systemd[1]: Reached target cryptsetup.target. Jul 2 00:43:45.310967 systemd[1]: Starting lvm2-activation.service... Jul 2 00:43:45.321230 lvm[1634]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 2 00:43:45.360471 systemd[1]: Finished lvm2-activation.service. Jul 2 00:43:45.361000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:45.362355 systemd[1]: Reached target local-fs-pre.target. Jul 2 00:43:45.364097 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 2 00:43:45.364163 systemd[1]: Reached target local-fs.target. Jul 2 00:43:45.365770 systemd[1]: Reached target machines.target. Jul 2 00:43:45.370325 systemd[1]: Starting ldconfig.service... Jul 2 00:43:45.373205 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 2 00:43:45.373447 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 00:43:45.377750 systemd[1]: Starting systemd-boot-update.service... Jul 2 00:43:45.382099 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Jul 2 00:43:45.393040 systemd[1]: Starting systemd-machine-id-commit.service... Jul 2 00:43:45.399545 systemd[1]: Starting systemd-sysext.service... 
Jul 2 00:43:45.403311 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1637 (bootctl) Jul 2 00:43:45.406229 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Jul 2 00:43:45.439169 systemd[1]: Unmounting usr-share-oem.mount... Jul 2 00:43:45.451017 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Jul 2 00:43:45.449000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:45.455997 systemd[1]: usr-share-oem.mount: Deactivated successfully. Jul 2 00:43:45.456655 systemd[1]: Unmounted usr-share-oem.mount. Jul 2 00:43:45.486481 kernel: loop0: detected capacity change from 0 to 193208 Jul 2 00:43:45.551481 systemd-fsck[1649]: fsck.fat 4.2 (2021-01-31) Jul 2 00:43:45.551481 systemd-fsck[1649]: /dev/nvme0n1p1: 236 files, 117047/258078 clusters Jul 2 00:43:45.554155 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Jul 2 00:43:45.555000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:45.559289 systemd[1]: Mounting boot.mount... Jul 2 00:43:45.588457 systemd[1]: Mounted boot.mount. Jul 2 00:43:45.615525 systemd[1]: Finished systemd-boot-update.service. Jul 2 00:43:45.616000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 00:43:45.764450 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 2 00:43:45.789494 kernel: loop1: detected capacity change from 0 to 193208 Jul 2 00:43:45.806310 (sd-sysext)[1669]: Using extensions 'kubernetes'. Jul 2 00:43:45.809792 (sd-sysext)[1669]: Merged extensions into '/usr'. Jul 2 00:43:45.855094 systemd[1]: Mounting usr-share-oem.mount... Jul 2 00:43:45.857713 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 2 00:43:45.861329 systemd[1]: Starting modprobe@dm_mod.service... Jul 2 00:43:45.866536 systemd[1]: Starting modprobe@efi_pstore.service... Jul 2 00:43:45.871587 systemd[1]: Starting modprobe@loop.service... Jul 2 00:43:45.876042 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 2 00:43:45.876387 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 00:43:45.888120 systemd[1]: Mounted usr-share-oem.mount. Jul 2 00:43:45.891263 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 2 00:43:45.898000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:45.898000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:45.901000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 00:43:45.907000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:45.907000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:45.897881 systemd[1]: Finished modprobe@dm_mod.service. Jul 2 00:43:45.900769 systemd[1]: Finished systemd-sysext.service. Jul 2 00:43:45.903642 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 2 00:43:45.904170 systemd[1]: Finished modprobe@loop.service. Jul 2 00:43:45.918971 systemd[1]: Starting ensure-sysext.service... Jul 2 00:43:45.920794 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 2 00:43:45.925647 systemd[1]: Starting systemd-tmpfiles-setup.service... Jul 2 00:43:45.928822 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 2 00:43:45.931977 systemd[1]: Finished modprobe@efi_pstore.service. Jul 2 00:43:45.932000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:45.932000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:45.944013 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 2 00:43:45.958166 systemd[1]: Reloading. Jul 2 00:43:45.975772 systemd-tmpfiles[1682]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. 
Jul 2 00:43:45.978759 systemd-tmpfiles[1682]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 2 00:43:45.996180 systemd-tmpfiles[1682]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 2 00:43:46.131447 /usr/lib/systemd/system-generators/torcx-generator[1705]: time="2024-07-02T00:43:46Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.5 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.5 /var/lib/torcx/store]" Jul 2 00:43:46.131513 /usr/lib/systemd/system-generators/torcx-generator[1705]: time="2024-07-02T00:43:46Z" level=info msg="torcx already run" Jul 2 00:43:46.423558 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 2 00:43:46.423604 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Jul 2 00:43:46.477889 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 2 00:43:46.500506 ldconfig[1636]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 2 00:43:46.662024 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 2 00:43:46.665698 systemd[1]: Finished ldconfig.service. Jul 2 00:43:46.665000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:46.668072 systemd[1]: Finished systemd-machine-id-commit.service. 
Jul 2 00:43:46.668000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:46.672602 systemd[1]: Finished systemd-tmpfiles-setup.service. Jul 2 00:43:46.672000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:46.680679 systemd[1]: Starting audit-rules.service... Jul 2 00:43:46.688687 systemd[1]: Starting clean-ca-certificates.service... Jul 2 00:43:46.694049 systemd[1]: Starting systemd-journal-catalog-update.service... Jul 2 00:43:46.700146 systemd[1]: Starting systemd-resolved.service... Jul 2 00:43:46.707048 systemd[1]: Starting systemd-timesyncd.service... Jul 2 00:43:46.711626 systemd[1]: Starting systemd-update-utmp.service... Jul 2 00:43:46.715000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:46.715050 systemd[1]: Finished clean-ca-certificates.service. Jul 2 00:43:46.730195 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 2 00:43:46.737621 systemd[1]: Starting modprobe@dm_mod.service... Jul 2 00:43:46.740639 systemd-networkd[1518]: eth0: Gained IPv6LL Jul 2 00:43:46.743433 systemd[1]: Starting modprobe@efi_pstore.service... Jul 2 00:43:46.753748 systemd[1]: Starting modprobe@loop.service... Jul 2 00:43:46.755494 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. 
Jul 2 00:43:46.755841 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 00:43:46.756093 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 2 00:43:46.758785 systemd[1]: Finished systemd-networkd-wait-online.service. Jul 2 00:43:46.759000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:46.763000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:46.763000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:46.761834 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 2 00:43:46.762295 systemd[1]: Finished modprobe@dm_mod.service. Jul 2 00:43:46.776128 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 2 00:43:46.775000 audit[1775]: SYSTEM_BOOT pid=1775 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Jul 2 00:43:46.785045 systemd[1]: Starting modprobe@dm_mod.service... Jul 2 00:43:46.788168 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. 
Jul 2 00:43:46.788556 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 00:43:46.788834 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 2 00:43:46.793052 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 2 00:43:46.793535 systemd[1]: Finished modprobe@efi_pstore.service. Jul 2 00:43:46.794000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:46.794000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:46.798969 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 2 00:43:46.799762 systemd[1]: Finished modprobe@loop.service. Jul 2 00:43:46.800000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:46.800000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:46.806645 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 2 00:43:46.817237 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. 
Jul 2 00:43:46.820873 systemd[1]: Starting modprobe@drm.service... Jul 2 00:43:46.835799 systemd[1]: Starting modprobe@efi_pstore.service... Jul 2 00:43:46.840712 systemd[1]: Starting modprobe@loop.service... Jul 2 00:43:46.842956 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 2 00:43:46.844045 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 00:43:46.844655 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 2 00:43:46.849286 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 2 00:43:46.849792 systemd[1]: Finished modprobe@dm_mod.service. Jul 2 00:43:46.852000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:46.853000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:46.857459 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 2 00:43:46.857885 systemd[1]: Finished modprobe@drm.service. Jul 2 00:43:46.858000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:46.858000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 00:43:46.861095 systemd[1]: Finished ensure-sysext.service. Jul 2 00:43:46.861000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:46.872974 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 2 00:43:46.873454 systemd[1]: Finished modprobe@loop.service. Jul 2 00:43:46.875854 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 2 00:43:46.876254 systemd[1]: Finished modprobe@efi_pstore.service. Jul 2 00:43:46.873000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:46.873000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:46.879060 systemd[1]: Finished systemd-update-utmp.service. Jul 2 00:43:46.876000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:46.876000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:46.879000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 00:43:46.881044 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 2 00:43:46.881150 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 2 00:43:46.917051 systemd[1]: Finished systemd-journal-catalog-update.service. Jul 2 00:43:46.917000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:46.922511 systemd[1]: Starting systemd-update-done.service... Jul 2 00:43:46.951164 systemd[1]: Finished systemd-update-done.service. Jul 2 00:43:46.951000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:43:47.018000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Jul 2 00:43:47.018000 audit[1811]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffe8714570 a2=420 a3=0 items=0 ppid=1768 pid=1811 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 00:43:47.018000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jul 2 00:43:47.020514 augenrules[1811]: No rules Jul 2 00:43:47.022323 systemd[1]: Finished audit-rules.service. Jul 2 00:43:47.055612 systemd-resolved[1771]: Positive Trust Anchors: Jul 2 00:43:47.055650 systemd-resolved[1771]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 2 00:43:47.055705 systemd-resolved[1771]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Jul 2 00:43:47.069766 systemd[1]: Started systemd-timesyncd.service. Jul 2 00:43:47.072029 systemd[1]: Reached target time-set.target. Jul 2 00:43:47.086188 systemd-resolved[1771]: Defaulting to hostname 'linux'. Jul 2 00:43:47.089562 systemd[1]: Started systemd-resolved.service. Jul 2 00:43:47.091297 systemd[1]: Reached target network.target. Jul 2 00:43:47.092859 systemd[1]: Reached target network-online.target. Jul 2 00:43:47.094622 systemd[1]: Reached target nss-lookup.target. Jul 2 00:43:47.096257 systemd[1]: Reached target sysinit.target. Jul 2 00:43:47.097991 systemd[1]: Started motdgen.path. Jul 2 00:43:47.099515 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Jul 2 00:43:47.101956 systemd[1]: Started logrotate.timer. Jul 2 00:43:47.103650 systemd[1]: Started mdadm.timer. Jul 2 00:43:47.105022 systemd[1]: Started systemd-tmpfiles-clean.timer. Jul 2 00:43:47.106728 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 2 00:43:47.106791 systemd[1]: Reached target paths.target. Jul 2 00:43:47.108171 systemd[1]: Reached target timers.target. Jul 2 00:43:47.110755 systemd[1]: Listening on dbus.socket. Jul 2 00:43:47.115009 systemd[1]: Starting docker.socket... Jul 2 00:43:47.118941 systemd[1]: Listening on sshd.socket. 
Jul 2 00:43:47.120649 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 00:43:47.121290 systemd[1]: Listening on docker.socket. Jul 2 00:43:47.122865 systemd[1]: Reached target sockets.target. Jul 2 00:43:47.124540 systemd[1]: Reached target basic.target. Jul 2 00:43:47.126467 systemd[1]: System is tainted: cgroupsv1 Jul 2 00:43:47.126581 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Jul 2 00:43:47.126638 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Jul 2 00:43:47.129515 systemd[1]: Started amazon-ssm-agent.service. Jul 2 00:43:47.134236 systemd[1]: Starting containerd.service... Jul 2 00:43:47.139340 systemd[1]: Starting coreos-metadata-sshkeys@core.service... Jul 2 00:43:47.144026 systemd[1]: Starting dbus.service... Jul 2 00:43:47.149939 systemd[1]: Starting enable-oem-cloudinit.service... Jul 2 00:43:47.156374 systemd[1]: Starting extend-filesystems.service... Jul 2 00:43:47.158094 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Jul 2 00:43:47.164601 systemd[1]: Starting kubelet.service... Jul 2 00:43:47.179151 systemd[1]: Starting motdgen.service... Jul 2 00:43:47.201347 systemd[1]: Started nvidia.service. Jul 2 00:43:47.220744 systemd[1]: Starting prepare-helm.service... Jul 2 00:43:47.225362 systemd[1]: Starting ssh-key-proc-cmdline.service... Jul 2 00:43:47.240683 systemd[1]: Starting sshd-keygen.service... Jul 2 00:43:47.349541 jq[1825]: false Jul 2 00:43:47.261281 systemd[1]: Starting systemd-logind.service... 
Jul 2 00:43:47.263836 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 00:43:47.263994 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 2 00:43:47.279935 systemd[1]: Starting update-engine.service... Jul 2 00:43:47.386699 jq[1840]: true Jul 2 00:43:47.287379 systemd[1]: Starting update-ssh-keys-after-ignition.service... Jul 2 00:43:47.327230 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 2 00:43:47.327949 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Jul 2 00:43:47.413706 tar[1843]: linux-arm64/helm Jul 2 00:43:47.416074 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 2 00:43:47.416668 systemd[1]: Finished ssh-key-proc-cmdline.service. Jul 2 00:43:47.461188 extend-filesystems[1826]: Found loop1 Jul 2 00:43:47.461188 extend-filesystems[1826]: Found nvme0n1 Jul 2 00:43:47.461188 extend-filesystems[1826]: Found nvme0n1p1 Jul 2 00:43:47.461188 extend-filesystems[1826]: Found nvme0n1p2 Jul 2 00:43:47.461188 extend-filesystems[1826]: Found nvme0n1p3 Jul 2 00:43:47.461188 extend-filesystems[1826]: Found usr Jul 2 00:43:47.461188 extend-filesystems[1826]: Found nvme0n1p4 Jul 2 00:43:47.461188 extend-filesystems[1826]: Found nvme0n1p6 Jul 2 00:43:47.461188 extend-filesystems[1826]: Found nvme0n1p7 Jul 2 00:43:47.461188 extend-filesystems[1826]: Found nvme0n1p9 Jul 2 00:43:47.461188 extend-filesystems[1826]: Checking size of /dev/nvme0n1p9 Jul 2 00:43:47.532855 extend-filesystems[1826]: Resized partition /dev/nvme0n1p9 Jul 2 00:43:47.534698 amazon-ssm-agent[1820]: 2024/07/02 00:43:47 Failed to load instance info from vault. RegistrationKey does not exist. Jul 2 00:43:47.493383 systemd-timesyncd[1773]: Contacted time server 44.4.53.4:123 (0.flatcar.pool.ntp.org). 
Jul 2 00:43:47.547295 jq[1856]: true Jul 2 00:43:47.554018 amazon-ssm-agent[1820]: Initializing new seelog logger Jul 2 00:43:47.554018 amazon-ssm-agent[1820]: New Seelog Logger Creation Complete Jul 2 00:43:47.554018 amazon-ssm-agent[1820]: 2024/07/02 00:43:47 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jul 2 00:43:47.554018 amazon-ssm-agent[1820]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jul 2 00:43:47.554018 amazon-ssm-agent[1820]: 2024/07/02 00:43:47 processing appconfig overrides Jul 2 00:43:47.562552 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Jul 2 00:43:47.562645 extend-filesystems[1879]: resize2fs 1.46.5 (30-Dec-2021) Jul 2 00:43:47.493636 systemd-timesyncd[1773]: Initial clock synchronization to Tue 2024-07-02 00:43:47.739297 UTC. Jul 2 00:43:47.593387 systemd[1]: motdgen.service: Deactivated successfully. Jul 2 00:43:47.594003 systemd[1]: Finished motdgen.service. Jul 2 00:43:47.634440 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Jul 2 00:43:47.642244 dbus-daemon[1823]: [system] SELinux support is enabled Jul 2 00:43:47.642635 systemd[1]: Started dbus.service. Jul 2 00:43:47.647646 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 2 00:43:47.647697 systemd[1]: Reached target system-config.target. Jul 2 00:43:47.649416 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 2 00:43:47.649483 systemd[1]: Reached target user-config.target. 
Jul 2 00:43:47.662543 extend-filesystems[1879]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Jul 2 00:43:47.662543 extend-filesystems[1879]: old_desc_blocks = 1, new_desc_blocks = 1 Jul 2 00:43:47.662543 extend-filesystems[1879]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Jul 2 00:43:47.669581 extend-filesystems[1826]: Resized filesystem in /dev/nvme0n1p9 Jul 2 00:43:47.682832 bash[1900]: Updated "/home/core/.ssh/authorized_keys" Jul 2 00:43:47.684513 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 2 00:43:47.685089 systemd[1]: Finished extend-filesystems.service. Jul 2 00:43:47.687650 systemd[1]: Finished update-ssh-keys-after-ignition.service. Jul 2 00:43:47.707254 dbus-daemon[1823]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1518 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jul 2 00:43:47.714575 systemd[1]: Starting systemd-hostnamed.service... Jul 2 00:43:47.762112 update_engine[1839]: I0702 00:43:47.761721 1839 main.cc:92] Flatcar Update Engine starting Jul 2 00:43:47.770440 systemd[1]: Started update-engine.service. Jul 2 00:43:47.775475 systemd[1]: Started locksmithd.service. Jul 2 00:43:47.780706 update_engine[1839]: I0702 00:43:47.777938 1839 update_check_scheduler.cc:74] Next update check in 8m55s Jul 2 00:43:47.865135 env[1854]: time="2024-07-02T00:43:47.865049967Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Jul 2 00:43:47.939867 systemd[1]: nvidia.service: Deactivated successfully. Jul 2 00:43:48.050578 systemd-logind[1838]: Watching system buttons on /dev/input/event0 (Power Button) Jul 2 00:43:48.055628 systemd-logind[1838]: Watching system buttons on /dev/input/event1 (Sleep Button) Jul 2 00:43:48.056203 systemd-logind[1838]: New seat seat0. 
Jul 2 00:43:48.061672 env[1854]: time="2024-07-02T00:43:48.061526003Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jul 2 00:43:48.062306 env[1854]: time="2024-07-02T00:43:48.062253088Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jul 2 00:43:48.063566 systemd[1]: Started systemd-logind.service. Jul 2 00:43:48.073981 env[1854]: time="2024-07-02T00:43:48.073908444Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.161-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jul 2 00:43:48.074191 env[1854]: time="2024-07-02T00:43:48.074154541Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jul 2 00:43:48.074852 env[1854]: time="2024-07-02T00:43:48.074800138Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 2 00:43:48.075564 env[1854]: time="2024-07-02T00:43:48.075505738Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jul 2 00:43:48.075789 env[1854]: time="2024-07-02T00:43:48.075739565Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Jul 2 00:43:48.075934 env[1854]: time="2024-07-02T00:43:48.075901329Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jul 2 00:43:48.079828 env[1854]: time="2024-07-02T00:43:48.079771386Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." 
type=io.containerd.snapshotter.v1 Jul 2 00:43:48.082305 env[1854]: time="2024-07-02T00:43:48.082249824Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jul 2 00:43:48.083841 env[1854]: time="2024-07-02T00:43:48.083777368Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 2 00:43:48.084042 env[1854]: time="2024-07-02T00:43:48.084007855Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jul 2 00:43:48.084327 env[1854]: time="2024-07-02T00:43:48.084286409Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Jul 2 00:43:48.084546 env[1854]: time="2024-07-02T00:43:48.084511838Z" level=info msg="metadata content store policy set" policy=shared Jul 2 00:43:48.094041 env[1854]: time="2024-07-02T00:43:48.093929621Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jul 2 00:43:48.094310 env[1854]: time="2024-07-02T00:43:48.094259198Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jul 2 00:43:48.094530 env[1854]: time="2024-07-02T00:43:48.094491565Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jul 2 00:43:48.094881 env[1854]: time="2024-07-02T00:43:48.094842380Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jul 2 00:43:48.095056 env[1854]: time="2024-07-02T00:43:48.095025703Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." 
type=io.containerd.service.v1 Jul 2 00:43:48.095210 env[1854]: time="2024-07-02T00:43:48.095178475Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jul 2 00:43:48.095368 env[1854]: time="2024-07-02T00:43:48.095336640Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jul 2 00:43:48.096290 env[1854]: time="2024-07-02T00:43:48.096214740Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jul 2 00:43:48.096570 env[1854]: time="2024-07-02T00:43:48.096521150Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Jul 2 00:43:48.096755 env[1854]: time="2024-07-02T00:43:48.096724091Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jul 2 00:43:48.096931 env[1854]: time="2024-07-02T00:43:48.096901193Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jul 2 00:43:48.097079 env[1854]: time="2024-07-02T00:43:48.097049536Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jul 2 00:43:48.097536 env[1854]: time="2024-07-02T00:43:48.097479304Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jul 2 00:43:48.097952 env[1854]: time="2024-07-02T00:43:48.097896392Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jul 2 00:43:48.099164 env[1854]: time="2024-07-02T00:43:48.099088831Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jul 2 00:43:48.103661 env[1854]: time="2024-07-02T00:43:48.103589914Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." 
type=io.containerd.grpc.v1 Jul 2 00:43:48.103919 env[1854]: time="2024-07-02T00:43:48.103866180Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jul 2 00:43:48.104370 env[1854]: time="2024-07-02T00:43:48.104174234Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jul 2 00:43:48.104629 env[1854]: time="2024-07-02T00:43:48.104593451Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jul 2 00:43:48.104785 env[1854]: time="2024-07-02T00:43:48.104754559Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jul 2 00:43:48.104940 env[1854]: time="2024-07-02T00:43:48.104906849Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jul 2 00:43:48.105128 env[1854]: time="2024-07-02T00:43:48.105079188Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jul 2 00:43:48.105313 env[1854]: time="2024-07-02T00:43:48.105280472Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jul 2 00:43:48.105478 env[1854]: time="2024-07-02T00:43:48.105447740Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jul 2 00:43:48.105632 env[1854]: time="2024-07-02T00:43:48.105601081Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jul 2 00:43:48.105835 env[1854]: time="2024-07-02T00:43:48.105793026Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jul 2 00:43:48.106364 env[1854]: time="2024-07-02T00:43:48.106323862Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." 
type=io.containerd.grpc.v1 Jul 2 00:43:48.106568 env[1854]: time="2024-07-02T00:43:48.106536909Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jul 2 00:43:48.107438 env[1854]: time="2024-07-02T00:43:48.107330095Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jul 2 00:43:48.107694 env[1854]: time="2024-07-02T00:43:48.107636517Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jul 2 00:43:48.113648 env[1854]: time="2024-07-02T00:43:48.113546239Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Jul 2 00:43:48.113860 env[1854]: time="2024-07-02T00:43:48.113828034Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jul 2 00:43:48.114073 env[1854]: time="2024-07-02T00:43:48.114040895Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Jul 2 00:43:48.114324 env[1854]: time="2024-07-02T00:43:48.114291371Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jul 2 00:43:48.120376 env[1854]: time="2024-07-02T00:43:48.120175712Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd 
ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jul 2 00:43:48.121622 env[1854]: time="2024-07-02T00:43:48.120919669Z" level=info msg="Connect containerd service" Jul 2 00:43:48.121622 env[1854]: time="2024-07-02T00:43:48.121092343Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jul 2 00:43:48.124517 env[1854]: time="2024-07-02T00:43:48.124454187Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 2 00:43:48.127170 env[1854]: time="2024-07-02T00:43:48.127091507Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 2 00:43:48.131006 env[1854]: time="2024-07-02T00:43:48.130920671Z" level=info msg="Start subscribing containerd event" Jul 2 00:43:48.131169 env[1854]: time="2024-07-02T00:43:48.131027294Z" level=info msg="Start recovering state" Jul 2 00:43:48.131169 env[1854]: time="2024-07-02T00:43:48.131152816Z" level=info msg="Start event monitor" Jul 2 00:43:48.131288 env[1854]: time="2024-07-02T00:43:48.131198174Z" level=info msg="Start snapshots syncer" Jul 2 00:43:48.131288 env[1854]: time="2024-07-02T00:43:48.131223840Z" level=info msg="Start cni network conf syncer for default" Jul 2 00:43:48.131288 env[1854]: time="2024-07-02T00:43:48.131244546Z" level=info msg="Start streaming server" Jul 2 00:43:48.132841 env[1854]: time="2024-07-02T00:43:48.130962566Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 2 00:43:48.132841 env[1854]: time="2024-07-02T00:43:48.131556954Z" level=info msg="containerd successfully booted in 0.293114s" Jul 2 00:43:48.131743 systemd[1]: Started containerd.service. 
Jul 2 00:43:48.265502 dbus-daemon[1823]: [system] Successfully activated service 'org.freedesktop.hostname1' Jul 2 00:43:48.265778 systemd[1]: Started systemd-hostnamed.service. Jul 2 00:43:48.270763 dbus-daemon[1823]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1915 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jul 2 00:43:48.276100 systemd[1]: Starting polkit.service... Jul 2 00:43:48.313307 polkitd[1943]: Started polkitd version 121 Jul 2 00:43:48.345267 coreos-metadata[1822]: Jul 02 00:43:48.344 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jul 2 00:43:48.349640 coreos-metadata[1822]: Jul 02 00:43:48.349 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-keys: Attempt #1 Jul 2 00:43:48.351480 polkitd[1943]: Loading rules from directory /etc/polkit-1/rules.d Jul 2 00:43:48.351594 polkitd[1943]: Loading rules from directory /usr/share/polkit-1/rules.d Jul 2 00:43:48.351767 coreos-metadata[1822]: Jul 02 00:43:48.351 INFO Fetch successful Jul 2 00:43:48.352039 coreos-metadata[1822]: Jul 02 00:43:48.351 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-keys/0/openssh-key: Attempt #1 Jul 2 00:43:48.353404 coreos-metadata[1822]: Jul 02 00:43:48.353 INFO Fetch successful Jul 2 00:43:48.360238 polkitd[1943]: Finished loading, compiling and executing 2 rules Jul 2 00:43:48.361171 dbus-daemon[1823]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jul 2 00:43:48.363894 polkitd[1943]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jul 2 00:43:48.361526 systemd[1]: Started polkit.service. Jul 2 00:43:48.368174 unknown[1822]: wrote ssh authorized keys file for user: core Jul 2 00:43:48.413522 update-ssh-keys[1962]: Updated "/home/core/.ssh/authorized_keys" Jul 2 00:43:48.415133 systemd[1]: Finished coreos-metadata-sshkeys@core.service. 
Jul 2 00:43:48.461075 systemd-resolved[1771]: System hostname changed to 'ip-172-31-19-36'. Jul 2 00:43:48.461084 systemd-hostnamed[1915]: Hostname set to (transient) Jul 2 00:43:48.525095 amazon-ssm-agent[1820]: 2024-07-02 00:43:48 INFO Create new startup processor Jul 2 00:43:48.527281 amazon-ssm-agent[1820]: 2024-07-02 00:43:48 INFO [LongRunningPluginsManager] registered plugins: {} Jul 2 00:43:48.527281 amazon-ssm-agent[1820]: 2024-07-02 00:43:48 INFO Initializing bookkeeping folders Jul 2 00:43:48.527505 amazon-ssm-agent[1820]: 2024-07-02 00:43:48 INFO removing the completed state files Jul 2 00:43:48.527505 amazon-ssm-agent[1820]: 2024-07-02 00:43:48 INFO Initializing bookkeeping folders for long running plugins Jul 2 00:43:48.527505 amazon-ssm-agent[1820]: 2024-07-02 00:43:48 INFO Initializing replies folder for MDS reply requests that couldn't reach the service Jul 2 00:43:48.542492 amazon-ssm-agent[1820]: 2024-07-02 00:43:48 INFO Initializing healthcheck folders for long running plugins Jul 2 00:43:48.543114 amazon-ssm-agent[1820]: 2024-07-02 00:43:48 INFO Initializing locations for inventory plugin Jul 2 00:43:48.543114 amazon-ssm-agent[1820]: 2024-07-02 00:43:48 INFO Initializing default location for custom inventory Jul 2 00:43:48.543114 amazon-ssm-agent[1820]: 2024-07-02 00:43:48 INFO Initializing default location for file inventory Jul 2 00:43:48.543114 amazon-ssm-agent[1820]: 2024-07-02 00:43:48 INFO Initializing default location for role inventory Jul 2 00:43:48.543114 amazon-ssm-agent[1820]: 2024-07-02 00:43:48 INFO Init the cloudwatchlogs publisher Jul 2 00:43:48.543114 amazon-ssm-agent[1820]: 2024-07-02 00:43:48 INFO [instanceID=i-0a45af23dc34dccda] Successfully loaded platform independent plugin aws:softwareInventory Jul 2 00:43:48.543114 amazon-ssm-agent[1820]: 2024-07-02 00:43:48 INFO [instanceID=i-0a45af23dc34dccda] Successfully loaded platform independent plugin aws:updateSsmAgent Jul 2 00:43:48.543114 amazon-ssm-agent[1820]: 2024-07-02 
00:43:48 INFO [instanceID=i-0a45af23dc34dccda] Successfully loaded platform independent plugin aws:configureDocker Jul 2 00:43:48.543114 amazon-ssm-agent[1820]: 2024-07-02 00:43:48 INFO [instanceID=i-0a45af23dc34dccda] Successfully loaded platform independent plugin aws:runDockerAction Jul 2 00:43:48.543114 amazon-ssm-agent[1820]: 2024-07-02 00:43:48 INFO [instanceID=i-0a45af23dc34dccda] Successfully loaded platform independent plugin aws:configurePackage Jul 2 00:43:48.543114 amazon-ssm-agent[1820]: 2024-07-02 00:43:48 INFO [instanceID=i-0a45af23dc34dccda] Successfully loaded platform independent plugin aws:downloadContent Jul 2 00:43:48.543114 amazon-ssm-agent[1820]: 2024-07-02 00:43:48 INFO [instanceID=i-0a45af23dc34dccda] Successfully loaded platform independent plugin aws:runPowerShellScript Jul 2 00:43:48.543114 amazon-ssm-agent[1820]: 2024-07-02 00:43:48 INFO [instanceID=i-0a45af23dc34dccda] Successfully loaded platform independent plugin aws:refreshAssociation Jul 2 00:43:48.543114 amazon-ssm-agent[1820]: 2024-07-02 00:43:48 INFO [instanceID=i-0a45af23dc34dccda] Successfully loaded platform independent plugin aws:runDocument Jul 2 00:43:48.543114 amazon-ssm-agent[1820]: 2024-07-02 00:43:48 INFO [instanceID=i-0a45af23dc34dccda] Successfully loaded platform dependent plugin aws:runShellScript Jul 2 00:43:48.543114 amazon-ssm-agent[1820]: 2024-07-02 00:43:48 INFO Starting Agent: amazon-ssm-agent - v2.3.1319.0 Jul 2 00:43:48.544017 amazon-ssm-agent[1820]: 2024-07-02 00:43:48 INFO OS: linux, Arch: arm64 Jul 2 00:43:48.549046 amazon-ssm-agent[1820]: datastore file /var/lib/amazon/ssm/i-0a45af23dc34dccda/longrunningplugins/datastore/store doesn't exist - no long running plugins to execute Jul 2 00:43:48.626673 amazon-ssm-agent[1820]: 2024-07-02 00:43:48 INFO [MessageGatewayService] Starting session document processing engine... 
Jul 2 00:43:48.720928 amazon-ssm-agent[1820]: 2024-07-02 00:43:48 INFO [MessageGatewayService] [EngineProcessor] Starting
Jul 2 00:43:48.815283 amazon-ssm-agent[1820]: 2024-07-02 00:43:48 INFO [MessageGatewayService] SSM Agent is trying to setup control channel for Session Manager module.
Jul 2 00:43:48.914480 amazon-ssm-agent[1820]: 2024-07-02 00:43:48 INFO [MessageGatewayService] Setting up websocket for controlchannel for instance: i-0a45af23dc34dccda, requestId: ab67257a-e019-4aa9-bfc1-8b04f0195675
Jul 2 00:43:49.009131 amazon-ssm-agent[1820]: 2024-07-02 00:43:48 INFO [OfflineService] Starting document processing engine...
Jul 2 00:43:49.104154 amazon-ssm-agent[1820]: 2024-07-02 00:43:48 INFO [OfflineService] [EngineProcessor] Starting
Jul 2 00:43:49.199380 amazon-ssm-agent[1820]: 2024-07-02 00:43:48 INFO [OfflineService] [EngineProcessor] Initial processing
Jul 2 00:43:49.267168 tar[1843]: linux-arm64/LICENSE
Jul 2 00:43:49.268018 tar[1843]: linux-arm64/README.md
Jul 2 00:43:49.285667 systemd[1]: Finished prepare-helm.service.
Jul 2 00:43:49.294686 amazon-ssm-agent[1820]: 2024-07-02 00:43:48 INFO [OfflineService] Starting message polling
Jul 2 00:43:49.390128 amazon-ssm-agent[1820]: 2024-07-02 00:43:48 INFO [OfflineService] Starting send replies to MDS
Jul 2 00:43:49.485967 amazon-ssm-agent[1820]: 2024-07-02 00:43:48 INFO [LongRunningPluginsManager] starting long running plugin manager
Jul 2 00:43:49.514467 locksmithd[1916]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jul 2 00:43:49.581842 amazon-ssm-agent[1820]: 2024-07-02 00:43:48 INFO [LongRunningPluginsManager] there aren't any long running plugin to execute
Jul 2 00:43:49.677908 amazon-ssm-agent[1820]: 2024-07-02 00:43:48 INFO [MessagingDeliveryService] Starting document processing engine...
Jul 2 00:43:49.732149 systemd[1]: Started kubelet.service.
Jul 2 00:43:49.774357 amazon-ssm-agent[1820]: 2024-07-02 00:43:48 INFO [MessagingDeliveryService] [EngineProcessor] Starting
Jul 2 00:43:49.870828 amazon-ssm-agent[1820]: 2024-07-02 00:43:48 INFO [MessagingDeliveryService] [EngineProcessor] Initial processing
Jul 2 00:43:49.967654 amazon-ssm-agent[1820]: 2024-07-02 00:43:48 INFO [MessagingDeliveryService] Starting message polling
Jul 2 00:43:50.064578 amazon-ssm-agent[1820]: 2024-07-02 00:43:48 INFO [MessagingDeliveryService] Starting send replies to MDS
Jul 2 00:43:50.161638 amazon-ssm-agent[1820]: 2024-07-02 00:43:48 INFO [instanceID=i-0a45af23dc34dccda] Starting association polling
Jul 2 00:43:50.258863 amazon-ssm-agent[1820]: 2024-07-02 00:43:48 INFO [MessagingDeliveryService] [Association] [EngineProcessor] Starting
Jul 2 00:43:50.356484 amazon-ssm-agent[1820]: 2024-07-02 00:43:48 INFO [MessagingDeliveryService] [Association] Launching response handler
Jul 2 00:43:50.454100 amazon-ssm-agent[1820]: 2024-07-02 00:43:48 INFO [MessagingDeliveryService] [Association] [EngineProcessor] Initial processing
Jul 2 00:43:50.552055 amazon-ssm-agent[1820]: 2024-07-02 00:43:48 INFO [MessagingDeliveryService] [Association] Initializing association scheduling service
Jul 2 00:43:50.586284 kubelet[2054]: E0702 00:43:50.586161 2054 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 2 00:43:50.590156 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 2 00:43:50.590582 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 2 00:43:50.650930 amazon-ssm-agent[1820]: 2024-07-02 00:43:48 INFO [MessagingDeliveryService] [Association] Association scheduling service initialized
Jul 2 00:43:50.751294 amazon-ssm-agent[1820]: 2024-07-02 00:43:48 INFO [HealthCheck] HealthCheck reporting agent health.
Jul 2 00:43:50.850141 amazon-ssm-agent[1820]: 2024-07-02 00:43:48 INFO [MessageGatewayService] listening reply.
Jul 2 00:43:50.951122 amazon-ssm-agent[1820]: 2024-07-02 00:43:48 INFO [LongRunningPluginsManager] There are no long running plugins currently getting executed - skipping their healthcheck
Jul 2 00:43:51.052122 amazon-ssm-agent[1820]: 2024-07-02 00:43:48 INFO [StartupProcessor] Executing startup processor tasks
Jul 2 00:43:51.151327 amazon-ssm-agent[1820]: 2024-07-02 00:43:48 INFO [StartupProcessor] Write to serial port: Amazon SSM Agent v2.3.1319.0 is running
Jul 2 00:43:51.252667 amazon-ssm-agent[1820]: 2024-07-02 00:43:48 INFO [StartupProcessor] Write to serial port: OsProductName: Flatcar Container Linux by Kinvolk
Jul 2 00:43:51.352680 amazon-ssm-agent[1820]: 2024-07-02 00:43:48 INFO [StartupProcessor] Write to serial port: OsVersion: 3510.3.5
Jul 2 00:43:51.453204 amazon-ssm-agent[1820]: 2024-07-02 00:43:48 INFO [MessageGatewayService] Opening websocket connection to: wss://ssmmessages.us-west-2.amazonaws.com/v1/control-channel/i-0a45af23dc34dccda?role=subscribe&stream=input
Jul 2 00:43:51.555167 amazon-ssm-agent[1820]: 2024-07-02 00:43:48 INFO [MessageGatewayService] Successfully opened websocket connection to: wss://ssmmessages.us-west-2.amazonaws.com/v1/control-channel/i-0a45af23dc34dccda?role=subscribe&stream=input
Jul 2 00:43:51.656188 amazon-ssm-agent[1820]: 2024-07-02 00:43:48 INFO [MessageGatewayService] Starting receiving message from control channel
Jul 2 00:43:51.757440 amazon-ssm-agent[1820]: 2024-07-02 00:43:48 INFO [MessageGatewayService] [EngineProcessor] Initial processing
Jul 2 00:43:53.563019 sshd_keygen[1871]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jul 2 00:43:53.602661 systemd[1]: Finished sshd-keygen.service.
Jul 2 00:43:53.608622 systemd[1]: Starting issuegen.service...
Jul 2 00:43:53.622202 systemd[1]: issuegen.service: Deactivated successfully.
Jul 2 00:43:53.622821 systemd[1]: Finished issuegen.service.
Jul 2 00:43:53.628033 systemd[1]: Starting systemd-user-sessions.service...
Jul 2 00:43:53.643303 systemd[1]: Finished systemd-user-sessions.service.
Jul 2 00:43:53.648871 systemd[1]: Started getty@tty1.service.
Jul 2 00:43:53.653723 systemd[1]: Started serial-getty@ttyS0.service.
Jul 2 00:43:53.656174 systemd[1]: Reached target getty.target.
Jul 2 00:43:53.658320 systemd[1]: Reached target multi-user.target.
Jul 2 00:43:53.663289 systemd[1]: Starting systemd-update-utmp-runlevel.service...
Jul 2 00:43:53.679656 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Jul 2 00:43:53.680504 systemd[1]: Finished systemd-update-utmp-runlevel.service.
Jul 2 00:43:53.686047 systemd[1]: Startup finished in 10.143s (kernel) + 15.137s (userspace) = 25.280s.
Jul 2 00:43:55.413813 systemd[1]: Created slice system-sshd.slice.
Jul 2 00:43:55.416647 systemd[1]: Started sshd@0-172.31.19.36:22-139.178.89.65:41118.service.
Jul 2 00:43:55.623840 sshd[2080]: Accepted publickey for core from 139.178.89.65 port 41118 ssh2: RSA SHA256:8y6JErBds/WgSuzw1b/2wKJnltsiajeNUW/adFCuF/s
Jul 2 00:43:55.628586 sshd[2080]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:43:55.649817 systemd[1]: Created slice user-500.slice.
Jul 2 00:43:55.652489 systemd[1]: Starting user-runtime-dir@500.service...
Jul 2 00:43:55.658646 systemd-logind[1838]: New session 1 of user core.
Jul 2 00:43:55.672332 systemd[1]: Finished user-runtime-dir@500.service.
Jul 2 00:43:55.678112 systemd[1]: Starting user@500.service...
Jul 2 00:43:55.685488 (systemd)[2085]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:43:55.886809 systemd[2085]: Queued start job for default target default.target.
Jul 2 00:43:55.887957 systemd[2085]: Reached target paths.target.
Jul 2 00:43:55.888198 systemd[2085]: Reached target sockets.target.
Jul 2 00:43:55.888565 systemd[2085]: Reached target timers.target.
Jul 2 00:43:55.888810 systemd[2085]: Reached target basic.target.
Jul 2 00:43:55.889081 systemd[2085]: Reached target default.target.
Jul 2 00:43:55.889231 systemd[1]: Started user@500.service.
Jul 2 00:43:55.889479 systemd[2085]: Startup finished in 190ms.
Jul 2 00:43:55.891658 systemd[1]: Started session-1.scope.
Jul 2 00:43:56.039681 systemd[1]: Started sshd@1-172.31.19.36:22-139.178.89.65:41120.service.
Jul 2 00:43:56.215638 sshd[2094]: Accepted publickey for core from 139.178.89.65 port 41120 ssh2: RSA SHA256:8y6JErBds/WgSuzw1b/2wKJnltsiajeNUW/adFCuF/s
Jul 2 00:43:56.218181 sshd[2094]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:43:56.226117 systemd-logind[1838]: New session 2 of user core.
Jul 2 00:43:56.228094 systemd[1]: Started session-2.scope.
Jul 2 00:43:56.362489 sshd[2094]: pam_unix(sshd:session): session closed for user core
Jul 2 00:43:56.369626 systemd[1]: sshd@1-172.31.19.36:22-139.178.89.65:41120.service: Deactivated successfully.
Jul 2 00:43:56.371320 systemd[1]: session-2.scope: Deactivated successfully.
Jul 2 00:43:56.372817 systemd-logind[1838]: Session 2 logged out. Waiting for processes to exit.
Jul 2 00:43:56.376762 systemd-logind[1838]: Removed session 2.
Jul 2 00:43:56.388797 systemd[1]: Started sshd@2-172.31.19.36:22-139.178.89.65:41132.service.
Jul 2 00:43:56.565029 sshd[2101]: Accepted publickey for core from 139.178.89.65 port 41132 ssh2: RSA SHA256:8y6JErBds/WgSuzw1b/2wKJnltsiajeNUW/adFCuF/s
Jul 2 00:43:56.568542 sshd[2101]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:43:56.577842 systemd[1]: Started session-3.scope.
Jul 2 00:43:56.578933 systemd-logind[1838]: New session 3 of user core.
Jul 2 00:43:56.710773 sshd[2101]: pam_unix(sshd:session): session closed for user core
Jul 2 00:43:56.717034 systemd-logind[1838]: Session 3 logged out. Waiting for processes to exit.
Jul 2 00:43:56.719710 systemd[1]: sshd@2-172.31.19.36:22-139.178.89.65:41132.service: Deactivated successfully.
Jul 2 00:43:56.721333 systemd[1]: session-3.scope: Deactivated successfully.
Jul 2 00:43:56.723297 systemd-logind[1838]: Removed session 3.
Jul 2 00:43:56.735488 systemd[1]: Started sshd@3-172.31.19.36:22-139.178.89.65:41140.service.
Jul 2 00:43:56.906905 sshd[2108]: Accepted publickey for core from 139.178.89.65 port 41140 ssh2: RSA SHA256:8y6JErBds/WgSuzw1b/2wKJnltsiajeNUW/adFCuF/s
Jul 2 00:43:56.909528 sshd[2108]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:43:56.918589 systemd-logind[1838]: New session 4 of user core.
Jul 2 00:43:56.918957 systemd[1]: Started session-4.scope.
Jul 2 00:43:57.053109 sshd[2108]: pam_unix(sshd:session): session closed for user core
Jul 2 00:43:57.059518 systemd[1]: sshd@3-172.31.19.36:22-139.178.89.65:41140.service: Deactivated successfully.
Jul 2 00:43:57.061892 systemd[1]: session-4.scope: Deactivated successfully.
Jul 2 00:43:57.062827 systemd-logind[1838]: Session 4 logged out. Waiting for processes to exit.
Jul 2 00:43:57.065089 systemd-logind[1838]: Removed session 4.
Jul 2 00:43:57.079263 systemd[1]: Started sshd@4-172.31.19.36:22-139.178.89.65:41152.service.
Jul 2 00:43:57.249896 sshd[2115]: Accepted publickey for core from 139.178.89.65 port 41152 ssh2: RSA SHA256:8y6JErBds/WgSuzw1b/2wKJnltsiajeNUW/adFCuF/s
Jul 2 00:43:57.253339 sshd[2115]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:43:57.262382 systemd-logind[1838]: New session 5 of user core.
Jul 2 00:43:57.263653 systemd[1]: Started session-5.scope.
Jul 2 00:43:57.399973 sudo[2119]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jul 2 00:43:57.401155 sudo[2119]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Jul 2 00:43:57.450780 systemd[1]: Starting docker.service...
Jul 2 00:43:57.531562 env[2129]: time="2024-07-02T00:43:57.531493015Z" level=info msg="Starting up"
Jul 2 00:43:57.534673 env[2129]: time="2024-07-02T00:43:57.534604246Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Jul 2 00:43:57.534673 env[2129]: time="2024-07-02T00:43:57.534658866Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Jul 2 00:43:57.534915 env[2129]: time="2024-07-02T00:43:57.534706717Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
Jul 2 00:43:57.534915 env[2129]: time="2024-07-02T00:43:57.534731714Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Jul 2 00:43:57.538203 env[2129]: time="2024-07-02T00:43:57.538160507Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Jul 2 00:43:57.538438 env[2129]: time="2024-07-02T00:43:57.538389732Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Jul 2 00:43:57.538577 env[2129]: time="2024-07-02T00:43:57.538539702Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
Jul 2 00:43:57.538719 env[2129]: time="2024-07-02T00:43:57.538691078Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Jul 2 00:43:58.247109 env[2129]: time="2024-07-02T00:43:58.247059839Z" level=warning msg="Your kernel does not support cgroup blkio weight"
Jul 2 00:43:58.247445 env[2129]: time="2024-07-02T00:43:58.247382578Z" level=warning msg="Your kernel does not support cgroup blkio weight_device"
Jul 2 00:43:58.248259 env[2129]: time="2024-07-02T00:43:58.247918456Z" level=info msg="Loading containers: start."
Jul 2 00:43:58.446454 kernel: Initializing XFRM netlink socket
Jul 2 00:43:58.491392 env[2129]: time="2024-07-02T00:43:58.491345830Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Jul 2 00:43:58.493321 (udev-worker)[2140]: Network interface NamePolicy= disabled on kernel command line.
Jul 2 00:43:58.598364 systemd-networkd[1518]: docker0: Link UP
Jul 2 00:43:58.620144 env[2129]: time="2024-07-02T00:43:58.620093691Z" level=info msg="Loading containers: done."
Jul 2 00:43:58.645499 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1870955343-merged.mount: Deactivated successfully.
Jul 2 00:43:58.659922 env[2129]: time="2024-07-02T00:43:58.659808060Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jul 2 00:43:58.660460 env[2129]: time="2024-07-02T00:43:58.660383690Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23
Jul 2 00:43:58.660758 env[2129]: time="2024-07-02T00:43:58.660711147Z" level=info msg="Daemon has completed initialization"
Jul 2 00:43:58.688473 systemd[1]: Started docker.service.
Jul 2 00:43:58.702387 env[2129]: time="2024-07-02T00:43:58.702277368Z" level=info msg="API listen on /run/docker.sock"
Jul 2 00:44:00.025956 env[1854]: time="2024-07-02T00:44:00.025872656Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.11\""
Jul 2 00:44:00.620351 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jul 2 00:44:00.620671 systemd[1]: Stopped kubelet.service.
Jul 2 00:44:00.624559 systemd[1]: Starting kubelet.service...
Jul 2 00:44:00.671610 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount772093558.mount: Deactivated successfully.
Jul 2 00:44:01.122289 systemd[1]: Started kubelet.service.
Jul 2 00:44:01.341194 kubelet[2267]: E0702 00:44:01.341121 2267 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 2 00:44:01.351146 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 2 00:44:01.351608 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 2 00:44:03.184756 env[1854]: time="2024-07-02T00:44:03.184668528Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.28.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 00:44:03.188552 env[1854]: time="2024-07-02T00:44:03.188489946Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:d2b5500cdb8d455434ebcaa569918eb0c5e68e82d75d4c85c509519786f24a8d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 00:44:03.192087 env[1854]: time="2024-07-02T00:44:03.192029796Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.28.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 00:44:03.196177 env[1854]: time="2024-07-02T00:44:03.196115744Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:aec9d1701c304eee8607d728a39baaa511d65bef6dd9861010618f63fbadeb10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 00:44:03.197931 env[1854]: time="2024-07-02T00:44:03.197878427Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.11\" returns image reference \"sha256:d2b5500cdb8d455434ebcaa569918eb0c5e68e82d75d4c85c509519786f24a8d\""
Jul 2 00:44:03.214785 env[1854]: time="2024-07-02T00:44:03.214723767Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.11\""
Jul 2 00:44:06.254225 env[1854]: time="2024-07-02T00:44:06.254144005Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.28.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 00:44:06.258478 env[1854]: time="2024-07-02T00:44:06.258428451Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:24cd2c3bd254238005fcc2fcc15e9e56347b218c10b8399a28d1bf813800266a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 00:44:06.264190 env[1854]: time="2024-07-02T00:44:06.264140633Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.28.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 00:44:06.267606 env[1854]: time="2024-07-02T00:44:06.267504959Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:6014c3572ec683841bbb16f87b94da28ee0254b95e2dba2d1850d62bd0111f09,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 00:44:06.269836 env[1854]: time="2024-07-02T00:44:06.269776126Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.11\" returns image reference \"sha256:24cd2c3bd254238005fcc2fcc15e9e56347b218c10b8399a28d1bf813800266a\""
Jul 2 00:44:06.288008 env[1854]: time="2024-07-02T00:44:06.287952186Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.11\""
Jul 2 00:44:07.174995 amazon-ssm-agent[1820]: 2024-07-02 00:44:07 INFO [MessagingDeliveryService] [Association] No associations on boot. Requerying for associations after 30 seconds.
Jul 2 00:44:08.436448 env[1854]: time="2024-07-02T00:44:08.436360130Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.28.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 00:44:08.440509 env[1854]: time="2024-07-02T00:44:08.440435878Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fdf13db9a96001adee7d1c69fd6849d6cd45fc3c138c95c8240d353eb79acf50,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 00:44:08.444139 env[1854]: time="2024-07-02T00:44:08.444071875Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.28.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 00:44:08.446049 env[1854]: time="2024-07-02T00:44:08.445974257Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.11\" returns image reference \"sha256:fdf13db9a96001adee7d1c69fd6849d6cd45fc3c138c95c8240d353eb79acf50\""
Jul 2 00:44:08.451918 env[1854]: time="2024-07-02T00:44:08.449773514Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:46cf7475c8daffb743c856a1aea0ddea35e5acd2418be18b1e22cf98d9c9b445,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 00:44:08.464538 env[1854]: time="2024-07-02T00:44:08.464485753Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.11\""
Jul 2 00:44:10.249129 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3304170381.mount: Deactivated successfully.
Jul 2 00:44:11.077237 env[1854]: time="2024-07-02T00:44:11.077173220Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.28.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 00:44:11.080493 env[1854]: time="2024-07-02T00:44:11.080438499Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e195d3cf134bc9d64104f5e82e95fce811d55b1cdc9cb26fb8f52c8d107d1661,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 00:44:11.083587 env[1854]: time="2024-07-02T00:44:11.083506012Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.28.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 00:44:11.086285 env[1854]: time="2024-07-02T00:44:11.086231939Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:ae4b671d4cfc23dd75030bb4490207cd939b3b11a799bcb4119698cd712eb5b4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 00:44:11.087352 env[1854]: time="2024-07-02T00:44:11.087276983Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.11\" returns image reference \"sha256:e195d3cf134bc9d64104f5e82e95fce811d55b1cdc9cb26fb8f52c8d107d1661\""
Jul 2 00:44:11.103791 env[1854]: time="2024-07-02T00:44:11.103740878Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Jul 2 00:44:11.585485 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jul 2 00:44:11.585753 systemd[1]: Stopped kubelet.service.
Jul 2 00:44:11.588516 systemd[1]: Starting kubelet.service...
Jul 2 00:44:11.609872 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3904224645.mount: Deactivated successfully.
Jul 2 00:44:11.646571 env[1854]: time="2024-07-02T00:44:11.646495515Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 00:44:11.653027 env[1854]: time="2024-07-02T00:44:11.652947769Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 00:44:11.657822 env[1854]: time="2024-07-02T00:44:11.657750434Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 00:44:11.677919 env[1854]: time="2024-07-02T00:44:11.677852607Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 00:44:11.680356 env[1854]: time="2024-07-02T00:44:11.680261835Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\""
Jul 2 00:44:11.700902 env[1854]: time="2024-07-02T00:44:11.700839189Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\""
Jul 2 00:44:11.976693 systemd[1]: Started kubelet.service.
Jul 2 00:44:12.089334 kubelet[2308]: E0702 00:44:12.089239 2308 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 2 00:44:12.097028 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 2 00:44:12.097472 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 2 00:44:12.320785 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1898719645.mount: Deactivated successfully.
Jul 2 00:44:17.388315 env[1854]: time="2024-07-02T00:44:17.388243787Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.10-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 00:44:17.393571 env[1854]: time="2024-07-02T00:44:17.393500666Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 00:44:17.399237 env[1854]: time="2024-07-02T00:44:17.399151709Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.10-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 00:44:17.404307 env[1854]: time="2024-07-02T00:44:17.404234861Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 00:44:17.406419 env[1854]: time="2024-07-02T00:44:17.406323225Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\""
Jul 2 00:44:17.425036 env[1854]: time="2024-07-02T00:44:17.424968886Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\""
Jul 2 00:44:17.985358 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4254824299.mount: Deactivated successfully.
Jul 2 00:44:18.480307 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jul 2 00:44:18.936634 env[1854]: time="2024-07-02T00:44:18.936574700Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.10.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 00:44:18.940875 env[1854]: time="2024-07-02T00:44:18.940823387Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 00:44:18.943696 env[1854]: time="2024-07-02T00:44:18.943627064Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.10.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 00:44:18.946686 env[1854]: time="2024-07-02T00:44:18.946620748Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 00:44:18.948042 env[1854]: time="2024-07-02T00:44:18.947977712Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\" returns image reference \"sha256:97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108\""
Jul 2 00:44:22.345259 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Jul 2 00:44:22.345648 systemd[1]: Stopped kubelet.service.
Jul 2 00:44:22.349252 systemd[1]: Starting kubelet.service...
Jul 2 00:44:22.761039 systemd[1]: Started kubelet.service.
Jul 2 00:44:22.876455 kubelet[2392]: E0702 00:44:22.875741 2392 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 2 00:44:22.880340 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 2 00:44:22.880772 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 2 00:44:25.495863 systemd[1]: Stopped kubelet.service.
Jul 2 00:44:25.502720 systemd[1]: Starting kubelet.service...
Jul 2 00:44:25.541957 systemd[1]: Reloading.
Jul 2 00:44:25.679449 /usr/lib/systemd/system-generators/torcx-generator[2426]: time="2024-07-02T00:44:25Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.5 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.5 /var/lib/torcx/store]"
Jul 2 00:44:25.680156 /usr/lib/systemd/system-generators/torcx-generator[2426]: time="2024-07-02T00:44:25Z" level=info msg="torcx already run"
Jul 2 00:44:25.930649 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Jul 2 00:44:25.930913 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Jul 2 00:44:25.975709 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 2 00:44:26.175435 systemd[1]: Started kubelet.service.
Jul 2 00:44:26.179765 systemd[1]: Stopping kubelet.service...
Jul 2 00:44:26.183151 systemd[1]: kubelet.service: Deactivated successfully.
Jul 2 00:44:26.184076 systemd[1]: Stopped kubelet.service.
Jul 2 00:44:26.187856 systemd[1]: Starting kubelet.service...
Jul 2 00:44:26.645847 systemd[1]: Started kubelet.service.
Jul 2 00:44:26.741452 kubelet[2504]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 2 00:44:26.742093 kubelet[2504]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jul 2 00:44:26.742214 kubelet[2504]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 2 00:44:26.742512 kubelet[2504]: I0702 00:44:26.742394 2504 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jul 2 00:44:28.762636 kubelet[2504]: I0702 00:44:28.762577 2504 server.go:467] "Kubelet version" kubeletVersion="v1.28.7"
Jul 2 00:44:28.762636 kubelet[2504]: I0702 00:44:28.762627 2504 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jul 2 00:44:28.763325 kubelet[2504]: I0702 00:44:28.762980 2504 server.go:895] "Client rotation is on, will bootstrap in background"
Jul 2 00:44:28.807155 kubelet[2504]: I0702 00:44:28.807116 2504 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jul 2 00:44:28.809075 kubelet[2504]: E0702 00:44:28.809021 2504 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.19.36:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.19.36:6443: connect: connection refused
Jul 2 00:44:28.837866 kubelet[2504]: W0702 00:44:28.837787 2504 machine.go:65] Cannot read vendor id correctly, set empty.
Jul 2 00:44:28.839273 kubelet[2504]: I0702 00:44:28.839225 2504 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jul 2 00:44:28.840188 kubelet[2504]: I0702 00:44:28.840138 2504 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jul 2 00:44:28.840598 kubelet[2504]: I0702 00:44:28.840556 2504 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Jul 2 00:44:28.840830 kubelet[2504]: I0702 00:44:28.840623 2504 topology_manager.go:138] "Creating topology manager with none policy"
Jul 2 00:44:28.840830 kubelet[2504]: I0702 00:44:28.840647 2504 container_manager_linux.go:301] "Creating device plugin manager"
Jul 2 00:44:28.840998 kubelet[2504]: I0702 00:44:28.840896 2504 state_mem.go:36] "Initialized new in-memory state store"
Jul 2 00:44:28.845773 kubelet[2504]: I0702 00:44:28.845704 2504 kubelet.go:393] "Attempting to sync node with API server"
Jul 2 00:44:28.845773 kubelet[2504]: I0702 00:44:28.845768 2504 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests"
Jul 2 00:44:28.846026 kubelet[2504]: I0702 00:44:28.845851 2504 kubelet.go:309] "Adding apiserver pod source"
Jul 2 00:44:28.846026 kubelet[2504]: I0702 00:44:28.845882 2504 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jul 2 00:44:28.848801 kubelet[2504]: I0702 00:44:28.848726 2504 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
Jul 2 00:44:28.851390 kubelet[2504]: W0702 00:44:28.851326 2504 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jul 2 00:44:28.852374 kubelet[2504]: I0702 00:44:28.852316 2504 server.go:1232] "Started kubelet"
Jul 2 00:44:28.852628 kubelet[2504]: W0702 00:44:28.852544 2504 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://172.31.19.36:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.19.36:6443: connect: connection refused
Jul 2 00:44:28.852710 kubelet[2504]: E0702 00:44:28.852644 2504 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.19.36:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.19.36:6443: connect: connection refused
Jul 2 00:44:28.852843 kubelet[2504]: W0702 00:44:28.852764 2504 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://172.31.19.36:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-19-36&limit=500&resourceVersion=0": dial tcp 172.31.19.36:6443: connect: connection
refused Jul 2 00:44:28.852929 kubelet[2504]: E0702 00:44:28.852862 2504 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.19.36:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-19-36&limit=500&resourceVersion=0": dial tcp 172.31.19.36:6443: connect: connection refused Jul 2 00:44:28.867314 kubelet[2504]: E0702 00:44:28.867065 2504 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ip-172-31-19-36.17de3eb844dbfd35", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ip-172-31-19-36", UID:"ip-172-31-19-36", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"ip-172-31-19-36"}, FirstTimestamp:time.Date(2024, time.July, 2, 0, 44, 28, 852280629, time.Local), LastTimestamp:time.Date(2024, time.July, 2, 0, 44, 28, 852280629, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"ip-172-31-19-36"}': 'Post "https://172.31.19.36:6443/api/v1/namespaces/default/events": dial tcp 172.31.19.36:6443: connect: connection refused'(may retry after sleeping) Jul 2 00:44:28.867861 kubelet[2504]: E0702 00:44:28.867825 2504 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in 
memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Jul 2 00:44:28.868039 kubelet[2504]: E0702 00:44:28.868014 2504 kubelet.go:1431] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 2 00:44:28.870334 kubelet[2504]: I0702 00:44:28.870289 2504 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jul 2 00:44:28.871971 kubelet[2504]: I0702 00:44:28.871931 2504 server.go:462] "Adding debug handlers to kubelet server" Jul 2 00:44:28.872139 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). Jul 2 00:44:28.872887 kubelet[2504]: I0702 00:44:28.872832 2504 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 2 00:44:28.874686 kubelet[2504]: I0702 00:44:28.874645 2504 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Jul 2 00:44:28.875207 kubelet[2504]: I0702 00:44:28.875177 2504 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 2 00:44:28.880555 kubelet[2504]: I0702 00:44:28.880492 2504 volume_manager.go:291] "Starting Kubelet Volume Manager" Jul 2 00:44:28.881148 kubelet[2504]: I0702 00:44:28.881104 2504 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jul 2 00:44:28.881538 kubelet[2504]: I0702 00:44:28.881501 2504 reconciler_new.go:29] "Reconciler: start to sync state" Jul 2 00:44:28.882451 kubelet[2504]: W0702 00:44:28.882335 2504 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://172.31.19.36:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.19.36:6443: connect: connection refused Jul 2 00:44:28.882720 kubelet[2504]: E0702 00:44:28.882688 2504 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list 
*v1.CSIDriver: Get "https://172.31.19.36:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.19.36:6443: connect: connection refused Jul 2 00:44:28.884842 kubelet[2504]: E0702 00:44:28.884765 2504 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.19.36:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-19-36?timeout=10s\": dial tcp 172.31.19.36:6443: connect: connection refused" interval="200ms" Jul 2 00:44:28.940218 kubelet[2504]: I0702 00:44:28.940175 2504 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 2 00:44:28.942421 kubelet[2504]: I0702 00:44:28.942367 2504 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 2 00:44:28.942630 kubelet[2504]: I0702 00:44:28.942607 2504 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 2 00:44:28.942754 kubelet[2504]: I0702 00:44:28.942734 2504 kubelet.go:2303] "Starting kubelet main sync loop" Jul 2 00:44:28.942956 kubelet[2504]: E0702 00:44:28.942936 2504 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 2 00:44:28.956348 kubelet[2504]: W0702 00:44:28.956253 2504 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://172.31.19.36:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.19.36:6443: connect: connection refused Jul 2 00:44:28.956348 kubelet[2504]: E0702 00:44:28.956351 2504 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.19.36:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.19.36:6443: connect: connection refused Jul 2 00:44:28.984493 kubelet[2504]: I0702 00:44:28.984458 2504 
kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-19-36" Jul 2 00:44:28.985359 kubelet[2504]: E0702 00:44:28.985333 2504 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://172.31.19.36:6443/api/v1/nodes\": dial tcp 172.31.19.36:6443: connect: connection refused" node="ip-172-31-19-36" Jul 2 00:44:28.990306 kubelet[2504]: I0702 00:44:28.990267 2504 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 2 00:44:28.990306 kubelet[2504]: I0702 00:44:28.990302 2504 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 2 00:44:28.990542 kubelet[2504]: I0702 00:44:28.990337 2504 state_mem.go:36] "Initialized new in-memory state store" Jul 2 00:44:28.997568 kubelet[2504]: I0702 00:44:28.997512 2504 policy_none.go:49] "None policy: Start" Jul 2 00:44:28.999057 kubelet[2504]: I0702 00:44:28.998986 2504 memory_manager.go:169] "Starting memorymanager" policy="None" Jul 2 00:44:28.999183 kubelet[2504]: I0702 00:44:28.999065 2504 state_mem.go:35] "Initializing new in-memory state store" Jul 2 00:44:29.008702 kubelet[2504]: I0702 00:44:29.008650 2504 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 2 00:44:29.009061 kubelet[2504]: I0702 00:44:29.009023 2504 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 2 00:44:29.018745 kubelet[2504]: E0702 00:44:29.017964 2504 eviction_manager.go:258] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-19-36\" not found" Jul 2 00:44:29.044040 kubelet[2504]: I0702 00:44:29.043981 2504 topology_manager.go:215] "Topology Admit Handler" podUID="09df41fd2ed7a44935e12cd902b03ee9" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-19-36" Jul 2 00:44:29.046717 kubelet[2504]: I0702 00:44:29.046684 2504 topology_manager.go:215] "Topology Admit Handler" podUID="f835d5d21d004311bf93e329aaec5e8c" podNamespace="kube-system" 
podName="kube-scheduler-ip-172-31-19-36" Jul 2 00:44:29.049945 kubelet[2504]: I0702 00:44:29.049897 2504 topology_manager.go:215] "Topology Admit Handler" podUID="2dc5b5dd38c3481a48b5770b5b298637" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-19-36" Jul 2 00:44:29.083882 kubelet[2504]: I0702 00:44:29.083843 2504 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/09df41fd2ed7a44935e12cd902b03ee9-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-19-36\" (UID: \"09df41fd2ed7a44935e12cd902b03ee9\") " pod="kube-system/kube-controller-manager-ip-172-31-19-36" Jul 2 00:44:29.084107 kubelet[2504]: I0702 00:44:29.084084 2504 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/09df41fd2ed7a44935e12cd902b03ee9-k8s-certs\") pod \"kube-controller-manager-ip-172-31-19-36\" (UID: \"09df41fd2ed7a44935e12cd902b03ee9\") " pod="kube-system/kube-controller-manager-ip-172-31-19-36" Jul 2 00:44:29.084252 kubelet[2504]: I0702 00:44:29.084230 2504 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/09df41fd2ed7a44935e12cd902b03ee9-kubeconfig\") pod \"kube-controller-manager-ip-172-31-19-36\" (UID: \"09df41fd2ed7a44935e12cd902b03ee9\") " pod="kube-system/kube-controller-manager-ip-172-31-19-36" Jul 2 00:44:29.084423 kubelet[2504]: I0702 00:44:29.084374 2504 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/09df41fd2ed7a44935e12cd902b03ee9-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-19-36\" (UID: \"09df41fd2ed7a44935e12cd902b03ee9\") " pod="kube-system/kube-controller-manager-ip-172-31-19-36" Jul 2 00:44:29.084593 kubelet[2504]: I0702 
00:44:29.084572 2504 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f835d5d21d004311bf93e329aaec5e8c-kubeconfig\") pod \"kube-scheduler-ip-172-31-19-36\" (UID: \"f835d5d21d004311bf93e329aaec5e8c\") " pod="kube-system/kube-scheduler-ip-172-31-19-36" Jul 2 00:44:29.084741 kubelet[2504]: I0702 00:44:29.084720 2504 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2dc5b5dd38c3481a48b5770b5b298637-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-19-36\" (UID: \"2dc5b5dd38c3481a48b5770b5b298637\") " pod="kube-system/kube-apiserver-ip-172-31-19-36" Jul 2 00:44:29.084900 kubelet[2504]: I0702 00:44:29.084879 2504 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/09df41fd2ed7a44935e12cd902b03ee9-ca-certs\") pod \"kube-controller-manager-ip-172-31-19-36\" (UID: \"09df41fd2ed7a44935e12cd902b03ee9\") " pod="kube-system/kube-controller-manager-ip-172-31-19-36" Jul 2 00:44:29.085043 kubelet[2504]: I0702 00:44:29.085022 2504 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2dc5b5dd38c3481a48b5770b5b298637-ca-certs\") pod \"kube-apiserver-ip-172-31-19-36\" (UID: \"2dc5b5dd38c3481a48b5770b5b298637\") " pod="kube-system/kube-apiserver-ip-172-31-19-36" Jul 2 00:44:29.085188 kubelet[2504]: I0702 00:44:29.085166 2504 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2dc5b5dd38c3481a48b5770b5b298637-k8s-certs\") pod \"kube-apiserver-ip-172-31-19-36\" (UID: \"2dc5b5dd38c3481a48b5770b5b298637\") " pod="kube-system/kube-apiserver-ip-172-31-19-36" Jul 2 00:44:29.085810 
kubelet[2504]: E0702 00:44:29.085784 2504 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.19.36:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-19-36?timeout=10s\": dial tcp 172.31.19.36:6443: connect: connection refused" interval="400ms" Jul 2 00:44:29.187949 kubelet[2504]: I0702 00:44:29.187905 2504 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-19-36" Jul 2 00:44:29.188450 kubelet[2504]: E0702 00:44:29.188379 2504 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://172.31.19.36:6443/api/v1/nodes\": dial tcp 172.31.19.36:6443: connect: connection refused" node="ip-172-31-19-36" Jul 2 00:44:29.360089 env[1854]: time="2024-07-02T00:44:29.359614075Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-19-36,Uid:09df41fd2ed7a44935e12cd902b03ee9,Namespace:kube-system,Attempt:0,}" Jul 2 00:44:29.363531 env[1854]: time="2024-07-02T00:44:29.363468953Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-19-36,Uid:f835d5d21d004311bf93e329aaec5e8c,Namespace:kube-system,Attempt:0,}" Jul 2 00:44:29.367307 env[1854]: time="2024-07-02T00:44:29.367222550Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-19-36,Uid:2dc5b5dd38c3481a48b5770b5b298637,Namespace:kube-system,Attempt:0,}" Jul 2 00:44:29.487163 kubelet[2504]: E0702 00:44:29.487102 2504 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.19.36:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-19-36?timeout=10s\": dial tcp 172.31.19.36:6443: connect: connection refused" interval="800ms" Jul 2 00:44:29.591127 kubelet[2504]: I0702 00:44:29.591095 2504 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-19-36" Jul 2 00:44:29.591792 kubelet[2504]: E0702 00:44:29.591766 2504 kubelet_node_status.go:92] 
"Unable to register node with API server" err="Post \"https://172.31.19.36:6443/api/v1/nodes\": dial tcp 172.31.19.36:6443: connect: connection refused" node="ip-172-31-19-36" Jul 2 00:44:29.871207 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1382722421.mount: Deactivated successfully. Jul 2 00:44:29.883941 env[1854]: time="2024-07-02T00:44:29.883852404Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 00:44:29.885853 env[1854]: time="2024-07-02T00:44:29.885785539Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 00:44:29.892861 env[1854]: time="2024-07-02T00:44:29.892798861Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 00:44:29.894050 kubelet[2504]: W0702 00:44:29.893943 2504 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://172.31.19.36:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.19.36:6443: connect: connection refused Jul 2 00:44:29.894712 kubelet[2504]: E0702 00:44:29.894058 2504 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.19.36:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.19.36:6443: connect: connection refused Jul 2 00:44:29.896151 env[1854]: time="2024-07-02T00:44:29.896090555Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 00:44:29.898137 env[1854]: 
time="2024-07-02T00:44:29.898076605Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 00:44:29.900686 env[1854]: time="2024-07-02T00:44:29.900614100Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 00:44:29.907033 env[1854]: time="2024-07-02T00:44:29.906973629Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 00:44:29.913757 env[1854]: time="2024-07-02T00:44:29.913665427Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 00:44:29.916158 env[1854]: time="2024-07-02T00:44:29.916088040Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 00:44:29.917898 env[1854]: time="2024-07-02T00:44:29.917833451Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 00:44:29.920247 env[1854]: time="2024-07-02T00:44:29.920184506Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 00:44:29.924899 env[1854]: time="2024-07-02T00:44:29.924842284Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 00:44:29.999091 env[1854]: time="2024-07-02T00:44:29.998971910Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:44:29.999441 env[1854]: time="2024-07-02T00:44:29.999351644Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:44:29.999684 env[1854]: time="2024-07-02T00:44:29.999621994Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:44:30.000640 env[1854]: time="2024-07-02T00:44:30.000546652Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6d6faf1265742dd60c16cfbe356d0220fb9ac81f958c06adf90438fa165b8cfc pid=2550 runtime=io.containerd.runc.v2 Jul 2 00:44:30.004498 env[1854]: time="2024-07-02T00:44:30.004298127Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:44:30.004498 env[1854]: time="2024-07-02T00:44:30.004437711Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:44:30.004777 env[1854]: time="2024-07-02T00:44:30.004482739Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:44:30.007424 env[1854]: time="2024-07-02T00:44:30.007274614Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c2d6357645e664b97feed9465e28022ce6d709c994787c91cc3c15e8a8a1066c pid=2546 runtime=io.containerd.runc.v2 Jul 2 00:44:30.021885 env[1854]: time="2024-07-02T00:44:30.021765067Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:44:30.022197 env[1854]: time="2024-07-02T00:44:30.022103584Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:44:30.022478 env[1854]: time="2024-07-02T00:44:30.022377567Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:44:30.023654 env[1854]: time="2024-07-02T00:44:30.023553313Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b1292ff693c3fa3931deabb175ab936932bc6b66759124e52fda5e6a2a802eaa pid=2581 runtime=io.containerd.runc.v2 Jul 2 00:44:30.194995 env[1854]: time="2024-07-02T00:44:30.193024900Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-19-36,Uid:f835d5d21d004311bf93e329aaec5e8c,Namespace:kube-system,Attempt:0,} returns sandbox id \"c2d6357645e664b97feed9465e28022ce6d709c994787c91cc3c15e8a8a1066c\"" Jul 2 00:44:30.202708 env[1854]: time="2024-07-02T00:44:30.202652571Z" level=info msg="CreateContainer within sandbox \"c2d6357645e664b97feed9465e28022ce6d709c994787c91cc3c15e8a8a1066c\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 2 00:44:30.208497 env[1854]: time="2024-07-02T00:44:30.208441317Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-19-36,Uid:09df41fd2ed7a44935e12cd902b03ee9,Namespace:kube-system,Attempt:0,} returns sandbox id \"b1292ff693c3fa3931deabb175ab936932bc6b66759124e52fda5e6a2a802eaa\"" Jul 2 00:44:30.215971 env[1854]: time="2024-07-02T00:44:30.215904737Z" level=info msg="CreateContainer within sandbox \"b1292ff693c3fa3931deabb175ab936932bc6b66759124e52fda5e6a2a802eaa\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 2 00:44:30.227969 env[1854]: time="2024-07-02T00:44:30.226392372Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-19-36,Uid:2dc5b5dd38c3481a48b5770b5b298637,Namespace:kube-system,Attempt:0,} returns sandbox id \"6d6faf1265742dd60c16cfbe356d0220fb9ac81f958c06adf90438fa165b8cfc\"" Jul 2 00:44:30.238070 env[1854]: time="2024-07-02T00:44:30.238014220Z" level=info msg="CreateContainer within sandbox \"6d6faf1265742dd60c16cfbe356d0220fb9ac81f958c06adf90438fa165b8cfc\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 2 00:44:30.264581 env[1854]: time="2024-07-02T00:44:30.264490630Z" level=info msg="CreateContainer within sandbox \"c2d6357645e664b97feed9465e28022ce6d709c994787c91cc3c15e8a8a1066c\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"12865c08b60a2a00a4904722b7383289fe4980399a23017922d588f18f9321bc\"" Jul 2 00:44:30.265732 env[1854]: time="2024-07-02T00:44:30.265669533Z" level=info msg="StartContainer for \"12865c08b60a2a00a4904722b7383289fe4980399a23017922d588f18f9321bc\"" Jul 2 00:44:30.271865 env[1854]: time="2024-07-02T00:44:30.271799412Z" level=info msg="CreateContainer within sandbox \"b1292ff693c3fa3931deabb175ab936932bc6b66759124e52fda5e6a2a802eaa\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"cacf382201c8b93719399682859c401bc3d776a08afe267f574a37e89382c13e\"" Jul 2 00:44:30.272996 env[1854]: time="2024-07-02T00:44:30.272920923Z" level=info 
msg="StartContainer for \"cacf382201c8b93719399682859c401bc3d776a08afe267f574a37e89382c13e\"" Jul 2 00:44:30.278510 kubelet[2504]: W0702 00:44:30.278429 2504 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://172.31.19.36:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.19.36:6443: connect: connection refused Jul 2 00:44:30.278674 kubelet[2504]: E0702 00:44:30.278552 2504 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.19.36:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.19.36:6443: connect: connection refused Jul 2 00:44:30.288289 kubelet[2504]: E0702 00:44:30.288228 2504 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.19.36:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-19-36?timeout=10s\": dial tcp 172.31.19.36:6443: connect: connection refused" interval="1.6s" Jul 2 00:44:30.297603 env[1854]: time="2024-07-02T00:44:30.297534542Z" level=info msg="CreateContainer within sandbox \"6d6faf1265742dd60c16cfbe356d0220fb9ac81f958c06adf90438fa165b8cfc\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"3a5c303bb698f88a8a18408db2112d2e44be56d1a310118b3a389acfe9fab340\"" Jul 2 00:44:30.298640 env[1854]: time="2024-07-02T00:44:30.298590811Z" level=info msg="StartContainer for \"3a5c303bb698f88a8a18408db2112d2e44be56d1a310118b3a389acfe9fab340\"" Jul 2 00:44:30.396665 kubelet[2504]: W0702 00:44:30.396551 2504 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://172.31.19.36:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.19.36:6443: connect: connection refused Jul 2 00:44:30.396815 kubelet[2504]: E0702 00:44:30.396701 2504 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed 
to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.19.36:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.19.36:6443: connect: connection refused Jul 2 00:44:30.402456 kubelet[2504]: I0702 00:44:30.398172 2504 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-19-36" Jul 2 00:44:30.402456 kubelet[2504]: E0702 00:44:30.398951 2504 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://172.31.19.36:6443/api/v1/nodes\": dial tcp 172.31.19.36:6443: connect: connection refused" node="ip-172-31-19-36" Jul 2 00:44:30.438465 kubelet[2504]: W0702 00:44:30.438361 2504 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://172.31.19.36:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-19-36&limit=500&resourceVersion=0": dial tcp 172.31.19.36:6443: connect: connection refused Jul 2 00:44:30.438615 kubelet[2504]: E0702 00:44:30.438478 2504 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.19.36:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-19-36&limit=500&resourceVersion=0": dial tcp 172.31.19.36:6443: connect: connection refused Jul 2 00:44:30.512172 env[1854]: time="2024-07-02T00:44:30.512041125Z" level=info msg="StartContainer for \"cacf382201c8b93719399682859c401bc3d776a08afe267f574a37e89382c13e\" returns successfully" Jul 2 00:44:30.522340 env[1854]: time="2024-07-02T00:44:30.522264610Z" level=info msg="StartContainer for \"12865c08b60a2a00a4904722b7383289fe4980399a23017922d588f18f9321bc\" returns successfully" Jul 2 00:44:30.571744 env[1854]: time="2024-07-02T00:44:30.571671550Z" level=info msg="StartContainer for \"3a5c303bb698f88a8a18408db2112d2e44be56d1a310118b3a389acfe9fab340\" returns successfully" Jul 2 00:44:32.002106 kubelet[2504]: I0702 00:44:32.002044 2504 kubelet_node_status.go:70] 
"Attempting to register node" node="ip-172-31-19-36" Jul 2 00:44:33.421290 update_engine[1839]: I0702 00:44:33.420462 1839 update_attempter.cc:509] Updating boot flags... Jul 2 00:44:35.180033 kubelet[2504]: I0702 00:44:35.179968 2504 kubelet_node_status.go:73] "Successfully registered node" node="ip-172-31-19-36" Jul 2 00:44:35.341706 kubelet[2504]: E0702 00:44:35.341654 2504 controller.go:146] "Failed to ensure lease exists, will retry" err="namespaces \"kube-node-lease\" not found" interval="3.2s" Jul 2 00:44:35.346925 kubelet[2504]: E0702 00:44:35.346865 2504 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ip-172-31-19-36\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-19-36" Jul 2 00:44:35.852736 kubelet[2504]: I0702 00:44:35.852680 2504 apiserver.go:52] "Watching apiserver" Jul 2 00:44:35.882108 kubelet[2504]: I0702 00:44:35.882066 2504 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jul 2 00:44:37.202336 amazon-ssm-agent[1820]: 2024-07-02 00:44:37 INFO [MessagingDeliveryService] [Association] Schedule manager refreshed with 0 associations, 0 new associations associated Jul 2 00:44:37.826975 systemd[1]: Reloading. Jul 2 00:44:37.948721 /usr/lib/systemd/system-generators/torcx-generator[2888]: time="2024-07-02T00:44:37Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.5 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.5 /var/lib/torcx/store]" Jul 2 00:44:37.959972 /usr/lib/systemd/system-generators/torcx-generator[2888]: time="2024-07-02T00:44:37Z" level=info msg="torcx already run" Jul 2 00:44:38.192842 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. 
Jul 2 00:44:38.192896 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Jul 2 00:44:38.271643 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 2 00:44:38.573248 systemd[1]: Stopping kubelet.service... Jul 2 00:44:38.574200 kubelet[2504]: I0702 00:44:38.573763 2504 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 2 00:44:38.591918 systemd[1]: kubelet.service: Deactivated successfully. Jul 2 00:44:38.592723 systemd[1]: Stopped kubelet.service. Jul 2 00:44:38.596774 systemd[1]: Starting kubelet.service... Jul 2 00:44:38.930898 systemd[1]: Started kubelet.service. Jul 2 00:44:39.096872 sudo[2969]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jul 2 00:44:39.098473 sudo[2969]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Jul 2 00:44:39.140322 kubelet[2958]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 2 00:44:39.140322 kubelet[2958]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 2 00:44:39.140322 kubelet[2958]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jul 2 00:44:39.141088 kubelet[2958]: I0702 00:44:39.140475 2958 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 2 00:44:39.158373 kubelet[2958]: I0702 00:44:39.158283 2958 server.go:467] "Kubelet version" kubeletVersion="v1.28.7" Jul 2 00:44:39.158663 kubelet[2958]: I0702 00:44:39.158628 2958 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 2 00:44:39.170193 kubelet[2958]: I0702 00:44:39.170127 2958 server.go:895] "Client rotation is on, will bootstrap in background" Jul 2 00:44:39.190072 kubelet[2958]: I0702 00:44:39.187221 2958 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jul 2 00:44:39.192295 kubelet[2958]: I0702 00:44:39.192232 2958 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 2 00:44:39.209367 kubelet[2958]: W0702 00:44:39.209314 2958 machine.go:65] Cannot read vendor id correctly, set empty. Jul 2 00:44:39.215018 kubelet[2958]: I0702 00:44:39.214954 2958 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 2 00:44:39.219153 kubelet[2958]: I0702 00:44:39.219104 2958 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 2 00:44:39.219823 kubelet[2958]: I0702 00:44:39.219740 2958 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jul 2 00:44:39.220181 kubelet[2958]: I0702 00:44:39.220143 2958 topology_manager.go:138] "Creating topology manager with none policy" Jul 2 00:44:39.220369 kubelet[2958]: I0702 00:44:39.220338 2958 container_manager_linux.go:301] "Creating device plugin manager" Jul 2 00:44:39.220649 kubelet[2958]: I0702 
00:44:39.220622 2958 state_mem.go:36] "Initialized new in-memory state store" Jul 2 00:44:39.220973 kubelet[2958]: I0702 00:44:39.220943 2958 kubelet.go:393] "Attempting to sync node with API server" Jul 2 00:44:39.221189 kubelet[2958]: I0702 00:44:39.221158 2958 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 2 00:44:39.221444 kubelet[2958]: I0702 00:44:39.221372 2958 kubelet.go:309] "Adding apiserver pod source" Jul 2 00:44:39.221808 kubelet[2958]: I0702 00:44:39.221772 2958 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 2 00:44:39.253912 kubelet[2958]: I0702 00:44:39.247391 2958 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Jul 2 00:44:39.253912 kubelet[2958]: I0702 00:44:39.248378 2958 server.go:1232] "Started kubelet" Jul 2 00:44:39.264004 kubelet[2958]: I0702 00:44:39.263952 2958 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 2 00:44:39.268649 kubelet[2958]: I0702 00:44:39.268576 2958 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jul 2 00:44:39.270105 kubelet[2958]: I0702 00:44:39.270045 2958 server.go:462] "Adding debug handlers to kubelet server" Jul 2 00:44:39.271983 kubelet[2958]: I0702 00:44:39.271937 2958 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Jul 2 00:44:39.272294 kubelet[2958]: I0702 00:44:39.272259 2958 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 2 00:44:39.290265 kubelet[2958]: I0702 00:44:39.288646 2958 volume_manager.go:291] "Starting Kubelet Volume Manager" Jul 2 00:44:39.290265 kubelet[2958]: I0702 00:44:39.289243 2958 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jul 2 00:44:39.291742 kubelet[2958]: I0702 00:44:39.291704 2958 reconciler_new.go:29] "Reconciler: start to sync state" Jul 2 00:44:39.293253 kubelet[2958]: 
E0702 00:44:39.293212 2958 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Jul 2 00:44:39.293642 kubelet[2958]: E0702 00:44:39.293603 2958 kubelet.go:1431] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 2 00:44:39.382598 kubelet[2958]: I0702 00:44:39.382543 2958 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 2 00:44:39.395932 kubelet[2958]: I0702 00:44:39.395894 2958 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-19-36" Jul 2 00:44:39.420275 kubelet[2958]: I0702 00:44:39.417001 2958 kubelet_node_status.go:108] "Node was previously registered" node="ip-172-31-19-36" Jul 2 00:44:39.420275 kubelet[2958]: I0702 00:44:39.417200 2958 kubelet_node_status.go:73] "Successfully registered node" node="ip-172-31-19-36" Jul 2 00:44:39.421076 kubelet[2958]: I0702 00:44:39.421027 2958 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 2 00:44:39.421076 kubelet[2958]: I0702 00:44:39.421074 2958 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 2 00:44:39.421340 kubelet[2958]: I0702 00:44:39.421106 2958 kubelet.go:2303] "Starting kubelet main sync loop" Jul 2 00:44:39.421340 kubelet[2958]: E0702 00:44:39.421200 2958 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 2 00:44:39.531891 kubelet[2958]: E0702 00:44:39.530177 2958 kubelet.go:2327] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 2 00:44:39.633082 kubelet[2958]: I0702 00:44:39.633043 2958 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 2 00:44:39.633317 kubelet[2958]: I0702 00:44:39.633292 2958 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 2 00:44:39.633472 kubelet[2958]: I0702 00:44:39.633452 2958 state_mem.go:36] "Initialized new in-memory state store" Jul 2 00:44:39.633831 kubelet[2958]: I0702 00:44:39.633803 2958 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 2 00:44:39.634013 kubelet[2958]: I0702 00:44:39.633991 2958 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 2 00:44:39.634129 kubelet[2958]: I0702 00:44:39.634108 2958 policy_none.go:49] "None policy: Start" Jul 2 00:44:39.635892 kubelet[2958]: I0702 00:44:39.635855 2958 memory_manager.go:169] "Starting memorymanager" policy="None" Jul 2 00:44:39.636109 kubelet[2958]: I0702 00:44:39.636084 2958 state_mem.go:35] "Initializing new in-memory state store" Jul 2 00:44:39.636685 kubelet[2958]: I0702 00:44:39.636650 2958 state_mem.go:75] "Updated machine memory state" Jul 2 00:44:39.640156 kubelet[2958]: I0702 00:44:39.640113 2958 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 2 00:44:39.651172 kubelet[2958]: I0702 00:44:39.651128 2958 
plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 2 00:44:39.733196 kubelet[2958]: I0702 00:44:39.733152 2958 topology_manager.go:215] "Topology Admit Handler" podUID="2dc5b5dd38c3481a48b5770b5b298637" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-19-36" Jul 2 00:44:39.733592 kubelet[2958]: I0702 00:44:39.733558 2958 topology_manager.go:215] "Topology Admit Handler" podUID="09df41fd2ed7a44935e12cd902b03ee9" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-19-36" Jul 2 00:44:39.733855 kubelet[2958]: I0702 00:44:39.733825 2958 topology_manager.go:215] "Topology Admit Handler" podUID="f835d5d21d004311bf93e329aaec5e8c" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-19-36" Jul 2 00:44:39.753500 kubelet[2958]: E0702 00:44:39.753449 2958 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ip-172-31-19-36\" already exists" pod="kube-system/kube-scheduler-ip-172-31-19-36" Jul 2 00:44:39.803556 kubelet[2958]: I0702 00:44:39.802615 2958 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2dc5b5dd38c3481a48b5770b5b298637-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-19-36\" (UID: \"2dc5b5dd38c3481a48b5770b5b298637\") " pod="kube-system/kube-apiserver-ip-172-31-19-36" Jul 2 00:44:39.803922 kubelet[2958]: I0702 00:44:39.803856 2958 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/09df41fd2ed7a44935e12cd902b03ee9-ca-certs\") pod \"kube-controller-manager-ip-172-31-19-36\" (UID: \"09df41fd2ed7a44935e12cd902b03ee9\") " pod="kube-system/kube-controller-manager-ip-172-31-19-36" Jul 2 00:44:39.804269 kubelet[2958]: I0702 00:44:39.804220 2958 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/09df41fd2ed7a44935e12cd902b03ee9-k8s-certs\") pod \"kube-controller-manager-ip-172-31-19-36\" (UID: \"09df41fd2ed7a44935e12cd902b03ee9\") " pod="kube-system/kube-controller-manager-ip-172-31-19-36" Jul 2 00:44:39.804600 kubelet[2958]: I0702 00:44:39.804555 2958 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/09df41fd2ed7a44935e12cd902b03ee9-kubeconfig\") pod \"kube-controller-manager-ip-172-31-19-36\" (UID: \"09df41fd2ed7a44935e12cd902b03ee9\") " pod="kube-system/kube-controller-manager-ip-172-31-19-36" Jul 2 00:44:39.804829 kubelet[2958]: I0702 00:44:39.804785 2958 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2dc5b5dd38c3481a48b5770b5b298637-ca-certs\") pod \"kube-apiserver-ip-172-31-19-36\" (UID: \"2dc5b5dd38c3481a48b5770b5b298637\") " pod="kube-system/kube-apiserver-ip-172-31-19-36" Jul 2 00:44:39.805169 kubelet[2958]: I0702 00:44:39.805140 2958 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2dc5b5dd38c3481a48b5770b5b298637-k8s-certs\") pod \"kube-apiserver-ip-172-31-19-36\" (UID: \"2dc5b5dd38c3481a48b5770b5b298637\") " pod="kube-system/kube-apiserver-ip-172-31-19-36" Jul 2 00:44:39.805440 kubelet[2958]: I0702 00:44:39.805366 2958 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/09df41fd2ed7a44935e12cd902b03ee9-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-19-36\" (UID: \"09df41fd2ed7a44935e12cd902b03ee9\") " pod="kube-system/kube-controller-manager-ip-172-31-19-36" Jul 2 00:44:39.805761 kubelet[2958]: I0702 00:44:39.805706 2958 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/09df41fd2ed7a44935e12cd902b03ee9-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-19-36\" (UID: \"09df41fd2ed7a44935e12cd902b03ee9\") " pod="kube-system/kube-controller-manager-ip-172-31-19-36" Jul 2 00:44:39.808366 kubelet[2958]: I0702 00:44:39.806032 2958 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f835d5d21d004311bf93e329aaec5e8c-kubeconfig\") pod \"kube-scheduler-ip-172-31-19-36\" (UID: \"f835d5d21d004311bf93e329aaec5e8c\") " pod="kube-system/kube-scheduler-ip-172-31-19-36" Jul 2 00:44:40.223899 kubelet[2958]: I0702 00:44:40.223829 2958 apiserver.go:52] "Watching apiserver" Jul 2 00:44:40.290430 kubelet[2958]: I0702 00:44:40.290317 2958 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jul 2 00:44:40.298386 sudo[2969]: pam_unix(sudo:session): session closed for user root Jul 2 00:44:40.604516 kubelet[2958]: I0702 00:44:40.604470 2958 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-19-36" podStartSLOduration=2.60435163 podCreationTimestamp="2024-07-02 00:44:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 00:44:40.573652739 +0000 UTC m=+1.607494813" watchObservedRunningTime="2024-07-02 00:44:40.60435163 +0000 UTC m=+1.638193680" Jul 2 00:44:40.638432 kubelet[2958]: I0702 00:44:40.638352 2958 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-19-36" podStartSLOduration=1.638292663 podCreationTimestamp="2024-07-02 00:44:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 00:44:40.605624321 +0000 UTC m=+1.639466419" 
watchObservedRunningTime="2024-07-02 00:44:40.638292663 +0000 UTC m=+1.672134725" Jul 2 00:44:40.668036 kubelet[2958]: I0702 00:44:40.667988 2958 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-19-36" podStartSLOduration=1.667932654 podCreationTimestamp="2024-07-02 00:44:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 00:44:40.640185574 +0000 UTC m=+1.674027612" watchObservedRunningTime="2024-07-02 00:44:40.667932654 +0000 UTC m=+1.701774704" Jul 2 00:44:43.436760 sudo[2119]: pam_unix(sudo:session): session closed for user root Jul 2 00:44:43.462000 sshd[2115]: pam_unix(sshd:session): session closed for user core Jul 2 00:44:43.467325 systemd-logind[1838]: Session 5 logged out. Waiting for processes to exit. Jul 2 00:44:43.468101 systemd[1]: sshd@4-172.31.19.36:22-139.178.89.65:41152.service: Deactivated successfully. Jul 2 00:44:43.470615 systemd[1]: session-5.scope: Deactivated successfully. Jul 2 00:44:43.472563 systemd-logind[1838]: Removed session 5. Jul 2 00:44:51.401768 kubelet[2958]: I0702 00:44:51.401713 2958 kuberuntime_manager.go:1528] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 2 00:44:51.403648 env[1854]: time="2024-07-02T00:44:51.403455381Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Jul 2 00:44:51.404551 kubelet[2958]: I0702 00:44:51.404513 2958 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 2 00:44:51.851869 kubelet[2958]: I0702 00:44:51.851817 2958 topology_manager.go:215] "Topology Admit Handler" podUID="5a4ffd9c-a3fa-436a-9919-64ad1fbe5f7f" podNamespace="kube-system" podName="kube-proxy-895qs" Jul 2 00:44:51.872472 kubelet[2958]: I0702 00:44:51.869109 2958 topology_manager.go:215] "Topology Admit Handler" podUID="201a4794-0cd7-490b-a16f-9b5860bb7a3f" podNamespace="kube-system" podName="cilium-xmx8s" Jul 2 00:44:51.886490 kubelet[2958]: I0702 00:44:51.886432 2958 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/5a4ffd9c-a3fa-436a-9919-64ad1fbe5f7f-kube-proxy\") pod \"kube-proxy-895qs\" (UID: \"5a4ffd9c-a3fa-436a-9919-64ad1fbe5f7f\") " pod="kube-system/kube-proxy-895qs" Jul 2 00:44:51.889032 kubelet[2958]: I0702 00:44:51.888935 2958 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5a4ffd9c-a3fa-436a-9919-64ad1fbe5f7f-xtables-lock\") pod \"kube-proxy-895qs\" (UID: \"5a4ffd9c-a3fa-436a-9919-64ad1fbe5f7f\") " pod="kube-system/kube-proxy-895qs" Jul 2 00:44:51.889266 kubelet[2958]: I0702 00:44:51.889069 2958 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5a4ffd9c-a3fa-436a-9919-64ad1fbe5f7f-lib-modules\") pod \"kube-proxy-895qs\" (UID: \"5a4ffd9c-a3fa-436a-9919-64ad1fbe5f7f\") " pod="kube-system/kube-proxy-895qs" Jul 2 00:44:51.889266 kubelet[2958]: I0702 00:44:51.889163 2958 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-prjd7\" (UniqueName: \"kubernetes.io/projected/5a4ffd9c-a3fa-436a-9919-64ad1fbe5f7f-kube-api-access-prjd7\") pod 
\"kube-proxy-895qs\" (UID: \"5a4ffd9c-a3fa-436a-9919-64ad1fbe5f7f\") " pod="kube-system/kube-proxy-895qs" Jul 2 00:44:51.889954 kubelet[2958]: W0702 00:44:51.887244 2958 reflector.go:535] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ip-172-31-19-36" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-19-36' and this object Jul 2 00:44:51.890243 kubelet[2958]: E0702 00:44:51.890207 2958 reflector.go:147] object-"kube-system"/"hubble-server-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ip-172-31-19-36" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-19-36' and this object Jul 2 00:44:51.890508 kubelet[2958]: W0702 00:44:51.888725 2958 reflector.go:535] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ip-172-31-19-36" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-19-36' and this object Jul 2 00:44:51.895959 kubelet[2958]: W0702 00:44:51.888823 2958 reflector.go:535] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ip-172-31-19-36" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-19-36' and this object Jul 2 00:44:51.896301 kubelet[2958]: E0702 00:44:51.896267 2958 reflector.go:147] object-"kube-system"/"cilium-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ip-172-31-19-36" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no 
relationship found between node 'ip-172-31-19-36' and this object Jul 2 00:44:51.896559 kubelet[2958]: E0702 00:44:51.896215 2958 reflector.go:147] object-"kube-system"/"cilium-clustermesh": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ip-172-31-19-36" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-19-36' and this object Jul 2 00:44:51.989832 kubelet[2958]: I0702 00:44:51.989772 2958 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/201a4794-0cd7-490b-a16f-9b5860bb7a3f-hostproc\") pod \"cilium-xmx8s\" (UID: \"201a4794-0cd7-490b-a16f-9b5860bb7a3f\") " pod="kube-system/cilium-xmx8s" Jul 2 00:44:51.990064 kubelet[2958]: I0702 00:44:51.989865 2958 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/201a4794-0cd7-490b-a16f-9b5860bb7a3f-cilium-cgroup\") pod \"cilium-xmx8s\" (UID: \"201a4794-0cd7-490b-a16f-9b5860bb7a3f\") " pod="kube-system/cilium-xmx8s" Jul 2 00:44:51.990064 kubelet[2958]: I0702 00:44:51.989939 2958 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bvqkl\" (UniqueName: \"kubernetes.io/projected/201a4794-0cd7-490b-a16f-9b5860bb7a3f-kube-api-access-bvqkl\") pod \"cilium-xmx8s\" (UID: \"201a4794-0cd7-490b-a16f-9b5860bb7a3f\") " pod="kube-system/cilium-xmx8s" Jul 2 00:44:51.990064 kubelet[2958]: I0702 00:44:51.989997 2958 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/201a4794-0cd7-490b-a16f-9b5860bb7a3f-xtables-lock\") pod \"cilium-xmx8s\" (UID: \"201a4794-0cd7-490b-a16f-9b5860bb7a3f\") " pod="kube-system/cilium-xmx8s" Jul 2 00:44:51.990280 
kubelet[2958]: I0702 00:44:51.990073 2958 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/201a4794-0cd7-490b-a16f-9b5860bb7a3f-bpf-maps\") pod \"cilium-xmx8s\" (UID: \"201a4794-0cd7-490b-a16f-9b5860bb7a3f\") " pod="kube-system/cilium-xmx8s" Jul 2 00:44:51.990280 kubelet[2958]: I0702 00:44:51.990122 2958 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/201a4794-0cd7-490b-a16f-9b5860bb7a3f-cni-path\") pod \"cilium-xmx8s\" (UID: \"201a4794-0cd7-490b-a16f-9b5860bb7a3f\") " pod="kube-system/cilium-xmx8s" Jul 2 00:44:51.990280 kubelet[2958]: I0702 00:44:51.990167 2958 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/201a4794-0cd7-490b-a16f-9b5860bb7a3f-host-proc-sys-net\") pod \"cilium-xmx8s\" (UID: \"201a4794-0cd7-490b-a16f-9b5860bb7a3f\") " pod="kube-system/cilium-xmx8s" Jul 2 00:44:51.990280 kubelet[2958]: I0702 00:44:51.990239 2958 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/201a4794-0cd7-490b-a16f-9b5860bb7a3f-cilium-run\") pod \"cilium-xmx8s\" (UID: \"201a4794-0cd7-490b-a16f-9b5860bb7a3f\") " pod="kube-system/cilium-xmx8s" Jul 2 00:44:51.990595 kubelet[2958]: I0702 00:44:51.990325 2958 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/201a4794-0cd7-490b-a16f-9b5860bb7a3f-lib-modules\") pod \"cilium-xmx8s\" (UID: \"201a4794-0cd7-490b-a16f-9b5860bb7a3f\") " pod="kube-system/cilium-xmx8s" Jul 2 00:44:51.990595 kubelet[2958]: I0702 00:44:51.990370 2958 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" 
(UniqueName: \"kubernetes.io/secret/201a4794-0cd7-490b-a16f-9b5860bb7a3f-clustermesh-secrets\") pod \"cilium-xmx8s\" (UID: \"201a4794-0cd7-490b-a16f-9b5860bb7a3f\") " pod="kube-system/cilium-xmx8s" Jul 2 00:44:51.990595 kubelet[2958]: I0702 00:44:51.990469 2958 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/201a4794-0cd7-490b-a16f-9b5860bb7a3f-cilium-config-path\") pod \"cilium-xmx8s\" (UID: \"201a4794-0cd7-490b-a16f-9b5860bb7a3f\") " pod="kube-system/cilium-xmx8s" Jul 2 00:44:51.990595 kubelet[2958]: I0702 00:44:51.990521 2958 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/201a4794-0cd7-490b-a16f-9b5860bb7a3f-hubble-tls\") pod \"cilium-xmx8s\" (UID: \"201a4794-0cd7-490b-a16f-9b5860bb7a3f\") " pod="kube-system/cilium-xmx8s" Jul 2 00:44:51.990595 kubelet[2958]: I0702 00:44:51.990569 2958 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/201a4794-0cd7-490b-a16f-9b5860bb7a3f-etc-cni-netd\") pod \"cilium-xmx8s\" (UID: \"201a4794-0cd7-490b-a16f-9b5860bb7a3f\") " pod="kube-system/cilium-xmx8s" Jul 2 00:44:51.990943 kubelet[2958]: I0702 00:44:51.990618 2958 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/201a4794-0cd7-490b-a16f-9b5860bb7a3f-host-proc-sys-kernel\") pod \"cilium-xmx8s\" (UID: \"201a4794-0cd7-490b-a16f-9b5860bb7a3f\") " pod="kube-system/cilium-xmx8s" Jul 2 00:44:52.165483 env[1854]: time="2024-07-02T00:44:52.165271615Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-895qs,Uid:5a4ffd9c-a3fa-436a-9919-64ad1fbe5f7f,Namespace:kube-system,Attempt:0,}" Jul 2 00:44:52.175696 kubelet[2958]: I0702 00:44:52.175621 2958 
topology_manager.go:215] "Topology Admit Handler" podUID="8d5b8788-11f1-488b-b8f0-997f2899d6f4" podNamespace="kube-system" podName="cilium-operator-6bc8ccdb58-8n8g2" Jul 2 00:44:52.244450 env[1854]: time="2024-07-02T00:44:52.244228586Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:44:52.244661 env[1854]: time="2024-07-02T00:44:52.244430021Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:44:52.244661 env[1854]: time="2024-07-02T00:44:52.244514284Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:44:52.246952 env[1854]: time="2024-07-02T00:44:52.244892995Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ab3c0d8eb4bd26154e1a7045ea142fb801bc7f47dc9a90c81f8797e87541d07b pid=3041 runtime=io.containerd.runc.v2 Jul 2 00:44:52.308963 kubelet[2958]: I0702 00:44:52.308360 2958 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8d5b8788-11f1-488b-b8f0-997f2899d6f4-cilium-config-path\") pod \"cilium-operator-6bc8ccdb58-8n8g2\" (UID: \"8d5b8788-11f1-488b-b8f0-997f2899d6f4\") " pod="kube-system/cilium-operator-6bc8ccdb58-8n8g2" Jul 2 00:44:52.308963 kubelet[2958]: I0702 00:44:52.308692 2958 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lws5w\" (UniqueName: \"kubernetes.io/projected/8d5b8788-11f1-488b-b8f0-997f2899d6f4-kube-api-access-lws5w\") pod \"cilium-operator-6bc8ccdb58-8n8g2\" (UID: \"8d5b8788-11f1-488b-b8f0-997f2899d6f4\") " pod="kube-system/cilium-operator-6bc8ccdb58-8n8g2" Jul 2 00:44:52.389860 env[1854]: time="2024-07-02T00:44:52.389796045Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-895qs,Uid:5a4ffd9c-a3fa-436a-9919-64ad1fbe5f7f,Namespace:kube-system,Attempt:0,} returns sandbox id \"ab3c0d8eb4bd26154e1a7045ea142fb801bc7f47dc9a90c81f8797e87541d07b\"" Jul 2 00:44:52.399243 env[1854]: time="2024-07-02T00:44:52.399150565Z" level=info msg="CreateContainer within sandbox \"ab3c0d8eb4bd26154e1a7045ea142fb801bc7f47dc9a90c81f8797e87541d07b\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 2 00:44:52.436581 env[1854]: time="2024-07-02T00:44:52.435850072Z" level=info msg="CreateContainer within sandbox \"ab3c0d8eb4bd26154e1a7045ea142fb801bc7f47dc9a90c81f8797e87541d07b\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"16b4caa619d02d690c8c1fd6a0b3159fb7daea98049c636e5d0cbbcd73850091\"" Jul 2 00:44:52.438053 env[1854]: time="2024-07-02T00:44:52.437994964Z" level=info msg="StartContainer for \"16b4caa619d02d690c8c1fd6a0b3159fb7daea98049c636e5d0cbbcd73850091\"" Jul 2 00:44:52.573776 env[1854]: time="2024-07-02T00:44:52.573687368Z" level=info msg="StartContainer for \"16b4caa619d02d690c8c1fd6a0b3159fb7daea98049c636e5d0cbbcd73850091\" returns successfully" Jul 2 00:44:53.053944 systemd[1]: run-containerd-runc-k8s.io-ab3c0d8eb4bd26154e1a7045ea142fb801bc7f47dc9a90c81f8797e87541d07b-runc.M906u0.mount: Deactivated successfully. 
Jul 2 00:44:53.092686 kubelet[2958]: E0702 00:44:53.092624 2958 projected.go:267] Couldn't get secret kube-system/hubble-server-certs: failed to sync secret cache: timed out waiting for the condition Jul 2 00:44:53.092686 kubelet[2958]: E0702 00:44:53.092677 2958 projected.go:198] Error preparing data for projected volume hubble-tls for pod kube-system/cilium-xmx8s: failed to sync secret cache: timed out waiting for the condition Jul 2 00:44:53.093513 kubelet[2958]: E0702 00:44:53.092787 2958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/201a4794-0cd7-490b-a16f-9b5860bb7a3f-hubble-tls podName:201a4794-0cd7-490b-a16f-9b5860bb7a3f nodeName:}" failed. No retries permitted until 2024-07-02 00:44:53.592752946 +0000 UTC m=+14.626594984 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "hubble-tls" (UniqueName: "kubernetes.io/projected/201a4794-0cd7-490b-a16f-9b5860bb7a3f-hubble-tls") pod "cilium-xmx8s" (UID: "201a4794-0cd7-490b-a16f-9b5860bb7a3f") : failed to sync secret cache: timed out waiting for the condition Jul 2 00:44:53.093513 kubelet[2958]: E0702 00:44:53.092839 2958 configmap.go:199] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition Jul 2 00:44:53.093513 kubelet[2958]: E0702 00:44:53.092901 2958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/201a4794-0cd7-490b-a16f-9b5860bb7a3f-cilium-config-path podName:201a4794-0cd7-490b-a16f-9b5860bb7a3f nodeName:}" failed. No retries permitted until 2024-07-02 00:44:53.592883811 +0000 UTC m=+14.626725849 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/201a4794-0cd7-490b-a16f-9b5860bb7a3f-cilium-config-path") pod "cilium-xmx8s" (UID: "201a4794-0cd7-490b-a16f-9b5860bb7a3f") : failed to sync configmap cache: timed out waiting for the condition Jul 2 00:44:53.093513 kubelet[2958]: E0702 00:44:53.093262 2958 secret.go:194] Couldn't get secret kube-system/cilium-clustermesh: failed to sync secret cache: timed out waiting for the condition Jul 2 00:44:53.093858 kubelet[2958]: E0702 00:44:53.093350 2958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/201a4794-0cd7-490b-a16f-9b5860bb7a3f-clustermesh-secrets podName:201a4794-0cd7-490b-a16f-9b5860bb7a3f nodeName:}" failed. No retries permitted until 2024-07-02 00:44:53.593325769 +0000 UTC m=+14.627167807 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "clustermesh-secrets" (UniqueName: "kubernetes.io/secret/201a4794-0cd7-490b-a16f-9b5860bb7a3f-clustermesh-secrets") pod "cilium-xmx8s" (UID: "201a4794-0cd7-490b-a16f-9b5860bb7a3f") : failed to sync secret cache: timed out waiting for the condition Jul 2 00:44:53.414352 env[1854]: time="2024-07-02T00:44:53.414207967Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6bc8ccdb58-8n8g2,Uid:8d5b8788-11f1-488b-b8f0-997f2899d6f4,Namespace:kube-system,Attempt:0,}" Jul 2 00:44:53.453536 env[1854]: time="2024-07-02T00:44:53.453346649Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:44:53.454177 env[1854]: time="2024-07-02T00:44:53.453575651Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:44:53.454177 env[1854]: time="2024-07-02T00:44:53.453663958Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:44:53.454683 env[1854]: time="2024-07-02T00:44:53.454579929Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/cec40469f468bf0512e80e335d1b2e4681e031cb72e95b1a5361340052a89d6a pid=3240 runtime=io.containerd.runc.v2 Jul 2 00:44:53.507196 systemd[1]: run-containerd-runc-k8s.io-cec40469f468bf0512e80e335d1b2e4681e031cb72e95b1a5361340052a89d6a-runc.WFmeMz.mount: Deactivated successfully. Jul 2 00:44:53.545788 kubelet[2958]: I0702 00:44:53.545369 2958 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-895qs" podStartSLOduration=2.545314054 podCreationTimestamp="2024-07-02 00:44:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 00:44:53.545292344 +0000 UTC m=+14.579134406" watchObservedRunningTime="2024-07-02 00:44:53.545314054 +0000 UTC m=+14.579156104" Jul 2 00:44:53.593480 env[1854]: time="2024-07-02T00:44:53.593362619Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6bc8ccdb58-8n8g2,Uid:8d5b8788-11f1-488b-b8f0-997f2899d6f4,Namespace:kube-system,Attempt:0,} returns sandbox id \"cec40469f468bf0512e80e335d1b2e4681e031cb72e95b1a5361340052a89d6a\"" Jul 2 00:44:53.600653 env[1854]: time="2024-07-02T00:44:53.600560876Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jul 2 00:44:53.688066 env[1854]: time="2024-07-02T00:44:53.687899149Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-xmx8s,Uid:201a4794-0cd7-490b-a16f-9b5860bb7a3f,Namespace:kube-system,Attempt:0,}" Jul 2 00:44:53.718630 env[1854]: time="2024-07-02T00:44:53.718482414Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:44:53.718844 env[1854]: time="2024-07-02T00:44:53.718652788Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:44:53.718844 env[1854]: time="2024-07-02T00:44:53.718718640Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:44:53.719252 env[1854]: time="2024-07-02T00:44:53.719157417Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/fa769556c53da4120e31ae31b6056439bd1c9ae54ecb44b6b086e39bf9ead58d pid=3284 runtime=io.containerd.runc.v2 Jul 2 00:44:53.810091 env[1854]: time="2024-07-02T00:44:53.810026260Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-xmx8s,Uid:201a4794-0cd7-490b-a16f-9b5860bb7a3f,Namespace:kube-system,Attempt:0,} returns sandbox id \"fa769556c53da4120e31ae31b6056439bd1c9ae54ecb44b6b086e39bf9ead58d\"" Jul 2 00:44:54.845204 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2693763642.mount: Deactivated successfully. 
Jul 2 00:44:55.782013 env[1854]: time="2024-07-02T00:44:55.781950579Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 00:44:55.787477 env[1854]: time="2024-07-02T00:44:55.787387974Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 00:44:55.790566 env[1854]: time="2024-07-02T00:44:55.790507850Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 00:44:55.791842 env[1854]: time="2024-07-02T00:44:55.791784266Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Jul 2 00:44:55.793786 env[1854]: time="2024-07-02T00:44:55.793723715Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jul 2 00:44:55.801884 env[1854]: time="2024-07-02T00:44:55.801819362Z" level=info msg="CreateContainer within sandbox \"cec40469f468bf0512e80e335d1b2e4681e031cb72e95b1a5361340052a89d6a\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jul 2 00:44:55.823626 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount874984069.mount: Deactivated successfully. 
Jul 2 00:44:55.837383 env[1854]: time="2024-07-02T00:44:55.837321133Z" level=info msg="CreateContainer within sandbox \"cec40469f468bf0512e80e335d1b2e4681e031cb72e95b1a5361340052a89d6a\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"40cfd4af997e12a50ef3cf71bd9b9edf3fa7ca2d1ee99a06f26f5999c254a7ce\"" Jul 2 00:44:55.838768 env[1854]: time="2024-07-02T00:44:55.838712234Z" level=info msg="StartContainer for \"40cfd4af997e12a50ef3cf71bd9b9edf3fa7ca2d1ee99a06f26f5999c254a7ce\"" Jul 2 00:44:55.961750 env[1854]: time="2024-07-02T00:44:55.961684730Z" level=info msg="StartContainer for \"40cfd4af997e12a50ef3cf71bd9b9edf3fa7ca2d1ee99a06f26f5999c254a7ce\" returns successfully" Jul 2 00:44:56.814940 systemd[1]: run-containerd-runc-k8s.io-40cfd4af997e12a50ef3cf71bd9b9edf3fa7ca2d1ee99a06f26f5999c254a7ce-runc.Jl7lFl.mount: Deactivated successfully. Jul 2 00:45:03.114461 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount687681428.mount: Deactivated successfully. 
Jul 2 00:45:07.171027 env[1854]: time="2024-07-02T00:45:07.170834685Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 00:45:07.175569 env[1854]: time="2024-07-02T00:45:07.175497642Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 00:45:07.178714 env[1854]: time="2024-07-02T00:45:07.178658205Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 00:45:07.179992 env[1854]: time="2024-07-02T00:45:07.179939011Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Jul 2 00:45:07.185543 env[1854]: time="2024-07-02T00:45:07.185411106Z" level=info msg="CreateContainer within sandbox \"fa769556c53da4120e31ae31b6056439bd1c9ae54ecb44b6b086e39bf9ead58d\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 2 00:45:07.209065 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount962528491.mount: Deactivated successfully. 
Jul 2 00:45:07.227601 env[1854]: time="2024-07-02T00:45:07.227301166Z" level=info msg="CreateContainer within sandbox \"fa769556c53da4120e31ae31b6056439bd1c9ae54ecb44b6b086e39bf9ead58d\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"91bceddc18cf01d42f8d091cf4f2c30ac739490522d1869c7cd143712e7eef4a\"" Jul 2 00:45:07.231245 env[1854]: time="2024-07-02T00:45:07.231186560Z" level=info msg="StartContainer for \"91bceddc18cf01d42f8d091cf4f2c30ac739490522d1869c7cd143712e7eef4a\"" Jul 2 00:45:07.347969 env[1854]: time="2024-07-02T00:45:07.347898935Z" level=info msg="StartContainer for \"91bceddc18cf01d42f8d091cf4f2c30ac739490522d1869c7cd143712e7eef4a\" returns successfully" Jul 2 00:45:07.612450 kubelet[2958]: I0702 00:45:07.612351 2958 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-6bc8ccdb58-8n8g2" podStartSLOduration=13.415669242 podCreationTimestamp="2024-07-02 00:44:52 +0000 UTC" firstStartedPulling="2024-07-02 00:44:53.595837015 +0000 UTC m=+14.629679053" lastFinishedPulling="2024-07-02 00:44:55.79243807 +0000 UTC m=+16.826280108" observedRunningTime="2024-07-02 00:44:56.61565396 +0000 UTC m=+17.649495998" watchObservedRunningTime="2024-07-02 00:45:07.612270297 +0000 UTC m=+28.646112335" Jul 2 00:45:08.199760 systemd[1]: run-containerd-runc-k8s.io-91bceddc18cf01d42f8d091cf4f2c30ac739490522d1869c7cd143712e7eef4a-runc.8zOtF2.mount: Deactivated successfully. Jul 2 00:45:08.200049 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-91bceddc18cf01d42f8d091cf4f2c30ac739490522d1869c7cd143712e7eef4a-rootfs.mount: Deactivated successfully. 
Jul 2 00:45:08.336949 env[1854]: time="2024-07-02T00:45:08.336720476Z" level=info msg="shim disconnected" id=91bceddc18cf01d42f8d091cf4f2c30ac739490522d1869c7cd143712e7eef4a Jul 2 00:45:08.337746 env[1854]: time="2024-07-02T00:45:08.336952157Z" level=warning msg="cleaning up after shim disconnected" id=91bceddc18cf01d42f8d091cf4f2c30ac739490522d1869c7cd143712e7eef4a namespace=k8s.io Jul 2 00:45:08.337746 env[1854]: time="2024-07-02T00:45:08.336991868Z" level=info msg="cleaning up dead shim" Jul 2 00:45:08.351071 env[1854]: time="2024-07-02T00:45:08.351006040Z" level=warning msg="cleanup warnings time=\"2024-07-02T00:45:08Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3404 runtime=io.containerd.runc.v2\n" Jul 2 00:45:08.604971 env[1854]: time="2024-07-02T00:45:08.597432140Z" level=info msg="CreateContainer within sandbox \"fa769556c53da4120e31ae31b6056439bd1c9ae54ecb44b6b086e39bf9ead58d\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 2 00:45:08.632563 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1266440784.mount: Deactivated successfully. Jul 2 00:45:08.654525 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4173604589.mount: Deactivated successfully. 
Jul 2 00:45:08.665454 env[1854]: time="2024-07-02T00:45:08.663842619Z" level=info msg="CreateContainer within sandbox \"fa769556c53da4120e31ae31b6056439bd1c9ae54ecb44b6b086e39bf9ead58d\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"f2a36f6dfa66dfe0e07a2d7bb482498c56311b044c202a02e724c51431a69dc4\"" Jul 2 00:45:08.665454 env[1854]: time="2024-07-02T00:45:08.664838517Z" level=info msg="StartContainer for \"f2a36f6dfa66dfe0e07a2d7bb482498c56311b044c202a02e724c51431a69dc4\"" Jul 2 00:45:08.774739 env[1854]: time="2024-07-02T00:45:08.774647213Z" level=info msg="StartContainer for \"f2a36f6dfa66dfe0e07a2d7bb482498c56311b044c202a02e724c51431a69dc4\" returns successfully" Jul 2 00:45:08.796508 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 2 00:45:08.797097 systemd[1]: Stopped systemd-sysctl.service. Jul 2 00:45:08.797453 systemd[1]: Stopping systemd-sysctl.service... Jul 2 00:45:08.803290 systemd[1]: Starting systemd-sysctl.service... Jul 2 00:45:08.831732 systemd[1]: Finished systemd-sysctl.service. 
Jul 2 00:45:08.857509 env[1854]: time="2024-07-02T00:45:08.856759780Z" level=info msg="shim disconnected" id=f2a36f6dfa66dfe0e07a2d7bb482498c56311b044c202a02e724c51431a69dc4 Jul 2 00:45:08.857936 env[1854]: time="2024-07-02T00:45:08.857893054Z" level=warning msg="cleaning up after shim disconnected" id=f2a36f6dfa66dfe0e07a2d7bb482498c56311b044c202a02e724c51431a69dc4 namespace=k8s.io Jul 2 00:45:08.858057 env[1854]: time="2024-07-02T00:45:08.858029663Z" level=info msg="cleaning up dead shim" Jul 2 00:45:08.873790 env[1854]: time="2024-07-02T00:45:08.873721814Z" level=warning msg="cleanup warnings time=\"2024-07-02T00:45:08Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3469 runtime=io.containerd.runc.v2\n" Jul 2 00:45:09.607458 env[1854]: time="2024-07-02T00:45:09.601736725Z" level=info msg="CreateContainer within sandbox \"fa769556c53da4120e31ae31b6056439bd1c9ae54ecb44b6b086e39bf9ead58d\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 2 00:45:09.647275 env[1854]: time="2024-07-02T00:45:09.647173642Z" level=info msg="CreateContainer within sandbox \"fa769556c53da4120e31ae31b6056439bd1c9ae54ecb44b6b086e39bf9ead58d\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"9e8602bfe5aa72ed685ce228800465b36793b164e817fb7dad04832bbdd9ae28\"" Jul 2 00:45:09.652986 env[1854]: time="2024-07-02T00:45:09.651702053Z" level=info msg="StartContainer for \"9e8602bfe5aa72ed685ce228800465b36793b164e817fb7dad04832bbdd9ae28\"" Jul 2 00:45:09.829226 env[1854]: time="2024-07-02T00:45:09.829148840Z" level=info msg="StartContainer for \"9e8602bfe5aa72ed685ce228800465b36793b164e817fb7dad04832bbdd9ae28\" returns successfully" Jul 2 00:45:09.875134 env[1854]: time="2024-07-02T00:45:09.874738927Z" level=info msg="shim disconnected" id=9e8602bfe5aa72ed685ce228800465b36793b164e817fb7dad04832bbdd9ae28 Jul 2 00:45:09.875134 env[1854]: time="2024-07-02T00:45:09.874808593Z" level=warning msg="cleaning up after shim disconnected" 
id=9e8602bfe5aa72ed685ce228800465b36793b164e817fb7dad04832bbdd9ae28 namespace=k8s.io Jul 2 00:45:09.875134 env[1854]: time="2024-07-02T00:45:09.874831251Z" level=info msg="cleaning up dead shim" Jul 2 00:45:09.893059 env[1854]: time="2024-07-02T00:45:09.892984858Z" level=warning msg="cleanup warnings time=\"2024-07-02T00:45:09Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3530 runtime=io.containerd.runc.v2\n" Jul 2 00:45:10.200051 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9e8602bfe5aa72ed685ce228800465b36793b164e817fb7dad04832bbdd9ae28-rootfs.mount: Deactivated successfully. Jul 2 00:45:10.617435 env[1854]: time="2024-07-02T00:45:10.612495445Z" level=info msg="CreateContainer within sandbox \"fa769556c53da4120e31ae31b6056439bd1c9ae54ecb44b6b086e39bf9ead58d\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 2 00:45:10.659911 env[1854]: time="2024-07-02T00:45:10.659814340Z" level=info msg="CreateContainer within sandbox \"fa769556c53da4120e31ae31b6056439bd1c9ae54ecb44b6b086e39bf9ead58d\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"b1e46b3241ec819304b56c888e6bd88043acb2d562e20828dc0dcf228e537738\"" Jul 2 00:45:10.666461 env[1854]: time="2024-07-02T00:45:10.665813737Z" level=info msg="StartContainer for \"b1e46b3241ec819304b56c888e6bd88043acb2d562e20828dc0dcf228e537738\"" Jul 2 00:45:10.811146 env[1854]: time="2024-07-02T00:45:10.811066700Z" level=info msg="StartContainer for \"b1e46b3241ec819304b56c888e6bd88043acb2d562e20828dc0dcf228e537738\" returns successfully" Jul 2 00:45:10.855770 env[1854]: time="2024-07-02T00:45:10.855679166Z" level=info msg="shim disconnected" id=b1e46b3241ec819304b56c888e6bd88043acb2d562e20828dc0dcf228e537738 Jul 2 00:45:10.855770 env[1854]: time="2024-07-02T00:45:10.855765417Z" level=warning msg="cleaning up after shim disconnected" id=b1e46b3241ec819304b56c888e6bd88043acb2d562e20828dc0dcf228e537738 namespace=k8s.io Jul 2 00:45:10.856286 
env[1854]: time="2024-07-02T00:45:10.855791808Z" level=info msg="cleaning up dead shim" Jul 2 00:45:10.872609 env[1854]: time="2024-07-02T00:45:10.872010147Z" level=warning msg="cleanup warnings time=\"2024-07-02T00:45:10Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3585 runtime=io.containerd.runc.v2\n" Jul 2 00:45:11.199266 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b1e46b3241ec819304b56c888e6bd88043acb2d562e20828dc0dcf228e537738-rootfs.mount: Deactivated successfully. Jul 2 00:45:11.622474 env[1854]: time="2024-07-02T00:45:11.619076987Z" level=info msg="CreateContainer within sandbox \"fa769556c53da4120e31ae31b6056439bd1c9ae54ecb44b6b086e39bf9ead58d\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 2 00:45:11.658716 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1704218655.mount: Deactivated successfully. Jul 2 00:45:11.685000 env[1854]: time="2024-07-02T00:45:11.684925081Z" level=info msg="CreateContainer within sandbox \"fa769556c53da4120e31ae31b6056439bd1c9ae54ecb44b6b086e39bf9ead58d\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"d10674ea11f01b46dadf6cf63819dade37e9c0047a4ada00a7bc33ece447bbe5\"" Jul 2 00:45:11.685731 env[1854]: time="2024-07-02T00:45:11.685666205Z" level=info msg="StartContainer for \"d10674ea11f01b46dadf6cf63819dade37e9c0047a4ada00a7bc33ece447bbe5\"" Jul 2 00:45:11.816432 env[1854]: time="2024-07-02T00:45:11.811251189Z" level=info msg="StartContainer for \"d10674ea11f01b46dadf6cf63819dade37e9c0047a4ada00a7bc33ece447bbe5\" returns successfully" Jul 2 00:45:12.035463 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! 
Jul 2 00:45:12.097938 kubelet[2958]: I0702 00:45:12.097851 2958 kubelet_node_status.go:493] "Fast updating node status as it just became ready" Jul 2 00:45:12.140437 kubelet[2958]: I0702 00:45:12.138834 2958 topology_manager.go:215] "Topology Admit Handler" podUID="6e8a7475-f64e-4979-9008-f91d8aa496fc" podNamespace="kube-system" podName="coredns-5dd5756b68-hmwds" Jul 2 00:45:12.151261 kubelet[2958]: I0702 00:45:12.151195 2958 topology_manager.go:215] "Topology Admit Handler" podUID="031de8b5-164b-48ff-ab0b-0a90fbfd5ceb" podNamespace="kube-system" podName="coredns-5dd5756b68-nxmt7" Jul 2 00:45:12.164628 kubelet[2958]: W0702 00:45:12.164573 2958 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:ip-172-31-19-36" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-19-36' and this object Jul 2 00:45:12.164835 kubelet[2958]: E0702 00:45:12.164655 2958 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:ip-172-31-19-36" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-19-36' and this object Jul 2 00:45:12.173366 kubelet[2958]: I0702 00:45:12.173269 2958 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2qbl7\" (UniqueName: \"kubernetes.io/projected/6e8a7475-f64e-4979-9008-f91d8aa496fc-kube-api-access-2qbl7\") pod \"coredns-5dd5756b68-hmwds\" (UID: \"6e8a7475-f64e-4979-9008-f91d8aa496fc\") " pod="kube-system/coredns-5dd5756b68-hmwds" Jul 2 00:45:12.173960 kubelet[2958]: I0702 00:45:12.173913 2958 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/6e8a7475-f64e-4979-9008-f91d8aa496fc-config-volume\") pod \"coredns-5dd5756b68-hmwds\" (UID: \"6e8a7475-f64e-4979-9008-f91d8aa496fc\") " pod="kube-system/coredns-5dd5756b68-hmwds" Jul 2 00:45:12.274543 kubelet[2958]: I0702 00:45:12.274499 2958 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vkhfs\" (UniqueName: \"kubernetes.io/projected/031de8b5-164b-48ff-ab0b-0a90fbfd5ceb-kube-api-access-vkhfs\") pod \"coredns-5dd5756b68-nxmt7\" (UID: \"031de8b5-164b-48ff-ab0b-0a90fbfd5ceb\") " pod="kube-system/coredns-5dd5756b68-nxmt7" Jul 2 00:45:12.274854 kubelet[2958]: I0702 00:45:12.274830 2958 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/031de8b5-164b-48ff-ab0b-0a90fbfd5ceb-config-volume\") pod \"coredns-5dd5756b68-nxmt7\" (UID: \"031de8b5-164b-48ff-ab0b-0a90fbfd5ceb\") " pod="kube-system/coredns-5dd5756b68-nxmt7" Jul 2 00:45:12.643627 kubelet[2958]: I0702 00:45:12.643582 2958 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-xmx8s" podStartSLOduration=8.275427709 podCreationTimestamp="2024-07-02 00:44:51 +0000 UTC" firstStartedPulling="2024-07-02 00:44:53.812369168 +0000 UTC m=+14.846211206" lastFinishedPulling="2024-07-02 00:45:07.180457122 +0000 UTC m=+28.214299160" observedRunningTime="2024-07-02 00:45:12.641288357 +0000 UTC m=+33.675130431" watchObservedRunningTime="2024-07-02 00:45:12.643515663 +0000 UTC m=+33.677357701" Jul 2 00:45:12.928444 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! 
Jul 2 00:45:13.053088 env[1854]: time="2024-07-02T00:45:13.052577447Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-hmwds,Uid:6e8a7475-f64e-4979-9008-f91d8aa496fc,Namespace:kube-system,Attempt:0,}" Jul 2 00:45:13.067181 env[1854]: time="2024-07-02T00:45:13.067091968Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-nxmt7,Uid:031de8b5-164b-48ff-ab0b-0a90fbfd5ceb,Namespace:kube-system,Attempt:0,}" Jul 2 00:45:14.781467 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Jul 2 00:45:14.775323 systemd-networkd[1518]: cilium_host: Link UP Jul 2 00:45:14.775816 systemd-networkd[1518]: cilium_net: Link UP Jul 2 00:45:14.775825 systemd-networkd[1518]: cilium_net: Gained carrier Jul 2 00:45:14.778918 systemd-networkd[1518]: cilium_host: Gained carrier Jul 2 00:45:14.779938 (udev-worker)[3684]: Network interface NamePolicy= disabled on kernel command line. Jul 2 00:45:14.781772 (udev-worker)[3748]: Network interface NamePolicy= disabled on kernel command line. Jul 2 00:45:14.964425 (udev-worker)[3763]: Network interface NamePolicy= disabled on kernel command line. Jul 2 00:45:14.978023 systemd-networkd[1518]: cilium_vxlan: Link UP Jul 2 00:45:14.978047 systemd-networkd[1518]: cilium_vxlan: Gained carrier Jul 2 00:45:15.180697 systemd-networkd[1518]: cilium_net: Gained IPv6LL Jul 2 00:45:15.212672 systemd-networkd[1518]: cilium_host: Gained IPv6LL Jul 2 00:45:15.485444 kernel: NET: Registered PF_ALG protocol family Jul 2 00:45:16.148676 systemd-networkd[1518]: cilium_vxlan: Gained IPv6LL Jul 2 00:45:16.904478 systemd-networkd[1518]: lxc_health: Link UP Jul 2 00:45:16.922023 systemd-networkd[1518]: lxc_health: Gained carrier Jul 2 00:45:16.922485 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Jul 2 00:45:17.181387 systemd-networkd[1518]: lxc1ecfe4c2dd40: Link UP Jul 2 00:45:17.190820 (udev-worker)[4077]: Network interface NamePolicy= disabled on kernel command line. 
Jul 2 00:45:17.194545 kernel: eth0: renamed from tmp8d9df Jul 2 00:45:17.201547 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc1ecfe4c2dd40: link becomes ready Jul 2 00:45:17.201695 systemd-networkd[1518]: lxc1ecfe4c2dd40: Gained carrier Jul 2 00:45:17.681012 systemd-networkd[1518]: lxc301cabbf3cfc: Link UP Jul 2 00:45:17.721459 kernel: eth0: renamed from tmp00b27 Jul 2 00:45:17.743464 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc301cabbf3cfc: link becomes ready Jul 2 00:45:17.743139 systemd-networkd[1518]: lxc301cabbf3cfc: Gained carrier Jul 2 00:45:18.580686 systemd-networkd[1518]: lxc1ecfe4c2dd40: Gained IPv6LL Jul 2 00:45:18.772676 systemd-networkd[1518]: lxc_health: Gained IPv6LL Jul 2 00:45:19.668936 systemd-networkd[1518]: lxc301cabbf3cfc: Gained IPv6LL Jul 2 00:45:26.066464 env[1854]: time="2024-07-02T00:45:26.062871657Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:45:26.066464 env[1854]: time="2024-07-02T00:45:26.062940919Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:45:26.066464 env[1854]: time="2024-07-02T00:45:26.062966899Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:45:26.066464 env[1854]: time="2024-07-02T00:45:26.063236434Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/00b2776e63156a005fe6959f4e10aabe248b6f17f29b3a77b9101405776e7c96 pid=4122 runtime=io.containerd.runc.v2 Jul 2 00:45:26.210568 env[1854]: time="2024-07-02T00:45:26.210420450Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:45:26.210763 env[1854]: time="2024-07-02T00:45:26.210606925Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:45:26.210763 env[1854]: time="2024-07-02T00:45:26.210691414Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:45:26.211514 env[1854]: time="2024-07-02T00:45:26.211154096Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8d9dfff1fc367fd236f6e29f4e57bbc3c08accb448ee6a35b825a5412df89471 pid=4156 runtime=io.containerd.runc.v2 Jul 2 00:45:26.327706 env[1854]: time="2024-07-02T00:45:26.327514559Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-hmwds,Uid:6e8a7475-f64e-4979-9008-f91d8aa496fc,Namespace:kube-system,Attempt:0,} returns sandbox id \"00b2776e63156a005fe6959f4e10aabe248b6f17f29b3a77b9101405776e7c96\"" Jul 2 00:45:26.335857 env[1854]: time="2024-07-02T00:45:26.335766610Z" level=info msg="CreateContainer within sandbox \"00b2776e63156a005fe6959f4e10aabe248b6f17f29b3a77b9101405776e7c96\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 2 00:45:26.392052 env[1854]: time="2024-07-02T00:45:26.391939244Z" level=info msg="CreateContainer within sandbox \"00b2776e63156a005fe6959f4e10aabe248b6f17f29b3a77b9101405776e7c96\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1648f13eed284a9df0635eb635965f8ba36e96c067d75ec7502da44cc293dd3f\"" Jul 2 00:45:26.397324 env[1854]: time="2024-07-02T00:45:26.397263169Z" level=info msg="StartContainer for \"1648f13eed284a9df0635eb635965f8ba36e96c067d75ec7502da44cc293dd3f\"" Jul 2 00:45:26.410552 env[1854]: time="2024-07-02T00:45:26.410477848Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-nxmt7,Uid:031de8b5-164b-48ff-ab0b-0a90fbfd5ceb,Namespace:kube-system,Attempt:0,} returns sandbox id \"8d9dfff1fc367fd236f6e29f4e57bbc3c08accb448ee6a35b825a5412df89471\"" Jul 2 00:45:26.421511 env[1854]: 
time="2024-07-02T00:45:26.421300317Z" level=info msg="CreateContainer within sandbox \"8d9dfff1fc367fd236f6e29f4e57bbc3c08accb448ee6a35b825a5412df89471\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 2 00:45:26.470355 env[1854]: time="2024-07-02T00:45:26.470282590Z" level=info msg="CreateContainer within sandbox \"8d9dfff1fc367fd236f6e29f4e57bbc3c08accb448ee6a35b825a5412df89471\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"86337d6a108cd68e2686b7beff39ed26f7d0ea674debca3a8d06ef915239d4aa\"" Jul 2 00:45:26.471912 env[1854]: time="2024-07-02T00:45:26.471846754Z" level=info msg="StartContainer for \"86337d6a108cd68e2686b7beff39ed26f7d0ea674debca3a8d06ef915239d4aa\"" Jul 2 00:45:26.584849 env[1854]: time="2024-07-02T00:45:26.584695609Z" level=info msg="StartContainer for \"1648f13eed284a9df0635eb635965f8ba36e96c067d75ec7502da44cc293dd3f\" returns successfully" Jul 2 00:45:26.749002 env[1854]: time="2024-07-02T00:45:26.748938495Z" level=info msg="StartContainer for \"86337d6a108cd68e2686b7beff39ed26f7d0ea674debca3a8d06ef915239d4aa\" returns successfully" Jul 2 00:45:27.688434 kubelet[2958]: I0702 00:45:27.688348 2958 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-hmwds" podStartSLOduration=35.688267688 podCreationTimestamp="2024-07-02 00:44:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 00:45:26.734039583 +0000 UTC m=+47.767881633" watchObservedRunningTime="2024-07-02 00:45:27.688267688 +0000 UTC m=+48.722109738" Jul 2 00:45:27.689097 kubelet[2958]: I0702 00:45:27.688605 2958 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-nxmt7" podStartSLOduration=35.688529593 podCreationTimestamp="2024-07-02 00:44:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2024-07-02 00:45:27.686647302 +0000 UTC m=+48.720489364" watchObservedRunningTime="2024-07-02 00:45:27.688529593 +0000 UTC m=+48.722371643" Jul 2 00:45:27.966281 systemd[1]: Started sshd@5-172.31.19.36:22-139.178.89.65:50344.service. Jul 2 00:45:28.158436 sshd[4274]: Accepted publickey for core from 139.178.89.65 port 50344 ssh2: RSA SHA256:8y6JErBds/WgSuzw1b/2wKJnltsiajeNUW/adFCuF/s Jul 2 00:45:28.161218 sshd[4274]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:45:28.170869 systemd[1]: Started session-6.scope. Jul 2 00:45:28.171358 systemd-logind[1838]: New session 6 of user core. Jul 2 00:45:28.429519 sshd[4274]: pam_unix(sshd:session): session closed for user core Jul 2 00:45:28.435234 systemd-logind[1838]: Session 6 logged out. Waiting for processes to exit. Jul 2 00:45:28.437576 systemd[1]: sshd@5-172.31.19.36:22-139.178.89.65:50344.service: Deactivated successfully. Jul 2 00:45:28.439231 systemd[1]: session-6.scope: Deactivated successfully. Jul 2 00:45:28.443079 systemd-logind[1838]: Removed session 6. Jul 2 00:45:33.456242 systemd[1]: Started sshd@6-172.31.19.36:22-139.178.89.65:50400.service. Jul 2 00:45:33.629729 sshd[4293]: Accepted publickey for core from 139.178.89.65 port 50400 ssh2: RSA SHA256:8y6JErBds/WgSuzw1b/2wKJnltsiajeNUW/adFCuF/s Jul 2 00:45:33.632940 sshd[4293]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:45:33.641500 systemd-logind[1838]: New session 7 of user core. Jul 2 00:45:33.642564 systemd[1]: Started session-7.scope. Jul 2 00:45:33.888925 sshd[4293]: pam_unix(sshd:session): session closed for user core Jul 2 00:45:33.894342 systemd-logind[1838]: Session 7 logged out. Waiting for processes to exit. Jul 2 00:45:33.894737 systemd[1]: sshd@6-172.31.19.36:22-139.178.89.65:50400.service: Deactivated successfully. Jul 2 00:45:33.896441 systemd[1]: session-7.scope: Deactivated successfully. 
Jul 2 00:45:33.898495 systemd-logind[1838]: Removed session 7.
Jul 2 00:45:38.916160 systemd[1]: Started sshd@7-172.31.19.36:22-139.178.89.65:36768.service.
Jul 2 00:45:39.088513 sshd[4308]: Accepted publickey for core from 139.178.89.65 port 36768 ssh2: RSA SHA256:8y6JErBds/WgSuzw1b/2wKJnltsiajeNUW/adFCuF/s
Jul 2 00:45:39.091630 sshd[4308]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:45:39.099489 systemd-logind[1838]: New session 8 of user core.
Jul 2 00:45:39.101377 systemd[1]: Started session-8.scope.
Jul 2 00:45:39.353829 sshd[4308]: pam_unix(sshd:session): session closed for user core
Jul 2 00:45:39.359194 systemd-logind[1838]: Session 8 logged out. Waiting for processes to exit.
Jul 2 00:45:39.361653 systemd[1]: sshd@7-172.31.19.36:22-139.178.89.65:36768.service: Deactivated successfully.
Jul 2 00:45:39.363217 systemd[1]: session-8.scope: Deactivated successfully.
Jul 2 00:45:39.365963 systemd-logind[1838]: Removed session 8.
Jul 2 00:45:44.379804 systemd[1]: Started sshd@8-172.31.19.36:22-139.178.89.65:36770.service.
Jul 2 00:45:44.551143 sshd[4323]: Accepted publickey for core from 139.178.89.65 port 36770 ssh2: RSA SHA256:8y6JErBds/WgSuzw1b/2wKJnltsiajeNUW/adFCuF/s
Jul 2 00:45:44.553788 sshd[4323]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:45:44.568033 systemd[1]: Started session-9.scope.
Jul 2 00:45:44.568766 systemd-logind[1838]: New session 9 of user core.
Jul 2 00:45:44.821745 sshd[4323]: pam_unix(sshd:session): session closed for user core
Jul 2 00:45:44.827171 systemd[1]: sshd@8-172.31.19.36:22-139.178.89.65:36770.service: Deactivated successfully.
Jul 2 00:45:44.829260 systemd-logind[1838]: Session 9 logged out. Waiting for processes to exit.
Jul 2 00:45:44.832598 systemd[1]: session-9.scope: Deactivated successfully.
Jul 2 00:45:44.834743 systemd-logind[1838]: Removed session 9.
Jul 2 00:45:49.848611 systemd[1]: Started sshd@9-172.31.19.36:22-139.178.89.65:52186.service.
Jul 2 00:45:50.028360 sshd[4336]: Accepted publickey for core from 139.178.89.65 port 52186 ssh2: RSA SHA256:8y6JErBds/WgSuzw1b/2wKJnltsiajeNUW/adFCuF/s
Jul 2 00:45:50.030744 sshd[4336]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:45:50.040813 systemd[1]: Started session-10.scope.
Jul 2 00:45:50.041501 systemd-logind[1838]: New session 10 of user core.
Jul 2 00:45:50.321221 sshd[4336]: pam_unix(sshd:session): session closed for user core
Jul 2 00:45:50.327021 systemd[1]: sshd@9-172.31.19.36:22-139.178.89.65:52186.service: Deactivated successfully.
Jul 2 00:45:50.329357 systemd-logind[1838]: Session 10 logged out. Waiting for processes to exit.
Jul 2 00:45:50.329509 systemd[1]: session-10.scope: Deactivated successfully.
Jul 2 00:45:50.331688 systemd-logind[1838]: Removed session 10.
Jul 2 00:45:50.340008 systemd[1]: Started sshd@10-172.31.19.36:22-139.178.89.65:52194.service.
Jul 2 00:45:50.517558 sshd[4349]: Accepted publickey for core from 139.178.89.65 port 52194 ssh2: RSA SHA256:8y6JErBds/WgSuzw1b/2wKJnltsiajeNUW/adFCuF/s
Jul 2 00:45:50.519883 sshd[4349]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:45:50.527874 systemd-logind[1838]: New session 11 of user core.
Jul 2 00:45:50.529599 systemd[1]: Started session-11.scope.
Jul 2 00:45:52.236926 sshd[4349]: pam_unix(sshd:session): session closed for user core
Jul 2 00:45:52.244767 systemd[1]: sshd@10-172.31.19.36:22-139.178.89.65:52194.service: Deactivated successfully.
Jul 2 00:45:52.246320 systemd[1]: session-11.scope: Deactivated successfully.
Jul 2 00:45:52.247701 systemd-logind[1838]: Session 11 logged out. Waiting for processes to exit.
Jul 2 00:45:52.252620 systemd-logind[1838]: Removed session 11.
Jul 2 00:45:52.266644 systemd[1]: Started sshd@11-172.31.19.36:22-139.178.89.65:52198.service.
Jul 2 00:45:52.442905 sshd[4360]: Accepted publickey for core from 139.178.89.65 port 52198 ssh2: RSA SHA256:8y6JErBds/WgSuzw1b/2wKJnltsiajeNUW/adFCuF/s
Jul 2 00:45:52.445545 sshd[4360]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:45:52.454568 systemd-logind[1838]: New session 12 of user core.
Jul 2 00:45:52.455603 systemd[1]: Started session-12.scope.
Jul 2 00:45:52.747206 sshd[4360]: pam_unix(sshd:session): session closed for user core
Jul 2 00:45:52.756172 systemd-logind[1838]: Session 12 logged out. Waiting for processes to exit.
Jul 2 00:45:52.758507 systemd[1]: sshd@11-172.31.19.36:22-139.178.89.65:52198.service: Deactivated successfully.
Jul 2 00:45:52.760186 systemd[1]: session-12.scope: Deactivated successfully.
Jul 2 00:45:52.763352 systemd-logind[1838]: Removed session 12.
Jul 2 00:45:57.774847 systemd[1]: Started sshd@12-172.31.19.36:22-139.178.89.65:52210.service.
Jul 2 00:45:57.951919 sshd[4376]: Accepted publickey for core from 139.178.89.65 port 52210 ssh2: RSA SHA256:8y6JErBds/WgSuzw1b/2wKJnltsiajeNUW/adFCuF/s
Jul 2 00:45:57.955091 sshd[4376]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:45:57.964559 systemd-logind[1838]: New session 13 of user core.
Jul 2 00:45:57.964958 systemd[1]: Started session-13.scope.
Jul 2 00:45:58.229227 sshd[4376]: pam_unix(sshd:session): session closed for user core
Jul 2 00:45:58.235027 systemd-logind[1838]: Session 13 logged out. Waiting for processes to exit.
Jul 2 00:45:58.236569 systemd[1]: sshd@12-172.31.19.36:22-139.178.89.65:52210.service: Deactivated successfully.
Jul 2 00:45:58.238949 systemd[1]: session-13.scope: Deactivated successfully.
Jul 2 00:45:58.240639 systemd-logind[1838]: Removed session 13.
Jul 2 00:46:03.255376 systemd[1]: Started sshd@13-172.31.19.36:22-139.178.89.65:60394.service.
Jul 2 00:46:03.429039 sshd[4389]: Accepted publickey for core from 139.178.89.65 port 60394 ssh2: RSA SHA256:8y6JErBds/WgSuzw1b/2wKJnltsiajeNUW/adFCuF/s
Jul 2 00:46:03.432275 sshd[4389]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:46:03.441260 systemd-logind[1838]: New session 14 of user core.
Jul 2 00:46:03.442375 systemd[1]: Started session-14.scope.
Jul 2 00:46:03.714763 sshd[4389]: pam_unix(sshd:session): session closed for user core
Jul 2 00:46:03.719990 systemd-logind[1838]: Session 14 logged out. Waiting for processes to exit.
Jul 2 00:46:03.720466 systemd[1]: sshd@13-172.31.19.36:22-139.178.89.65:60394.service: Deactivated successfully.
Jul 2 00:46:03.722806 systemd[1]: session-14.scope: Deactivated successfully.
Jul 2 00:46:03.724835 systemd-logind[1838]: Removed session 14.
Jul 2 00:46:08.740574 systemd[1]: Started sshd@14-172.31.19.36:22-139.178.89.65:49728.service.
Jul 2 00:46:08.915175 sshd[4402]: Accepted publickey for core from 139.178.89.65 port 49728 ssh2: RSA SHA256:8y6JErBds/WgSuzw1b/2wKJnltsiajeNUW/adFCuF/s
Jul 2 00:46:08.917046 sshd[4402]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:46:08.925081 systemd-logind[1838]: New session 15 of user core.
Jul 2 00:46:08.927109 systemd[1]: Started session-15.scope.
Jul 2 00:46:09.179732 sshd[4402]: pam_unix(sshd:session): session closed for user core
Jul 2 00:46:09.185034 systemd-logind[1838]: Session 15 logged out. Waiting for processes to exit.
Jul 2 00:46:09.186484 systemd[1]: sshd@14-172.31.19.36:22-139.178.89.65:49728.service: Deactivated successfully.
Jul 2 00:46:09.188097 systemd[1]: session-15.scope: Deactivated successfully.
Jul 2 00:46:09.189756 systemd-logind[1838]: Removed session 15.
Jul 2 00:46:09.206146 systemd[1]: Started sshd@15-172.31.19.36:22-139.178.89.65:49738.service.
Jul 2 00:46:09.375773 sshd[4415]: Accepted publickey for core from 139.178.89.65 port 49738 ssh2: RSA SHA256:8y6JErBds/WgSuzw1b/2wKJnltsiajeNUW/adFCuF/s
Jul 2 00:46:09.379004 sshd[4415]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:46:09.389036 systemd[1]: Started session-16.scope.
Jul 2 00:46:09.389728 systemd-logind[1838]: New session 16 of user core.
Jul 2 00:46:09.692747 sshd[4415]: pam_unix(sshd:session): session closed for user core
Jul 2 00:46:09.697589 systemd[1]: sshd@15-172.31.19.36:22-139.178.89.65:49738.service: Deactivated successfully.
Jul 2 00:46:09.700453 systemd[1]: session-16.scope: Deactivated successfully.
Jul 2 00:46:09.701002 systemd-logind[1838]: Session 16 logged out. Waiting for processes to exit.
Jul 2 00:46:09.704966 systemd-logind[1838]: Removed session 16.
Jul 2 00:46:09.720806 systemd[1]: Started sshd@16-172.31.19.36:22-139.178.89.65:49752.service.
Jul 2 00:46:09.906374 sshd[4426]: Accepted publickey for core from 139.178.89.65 port 49752 ssh2: RSA SHA256:8y6JErBds/WgSuzw1b/2wKJnltsiajeNUW/adFCuF/s
Jul 2 00:46:09.909872 sshd[4426]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:46:09.919175 systemd-logind[1838]: New session 17 of user core.
Jul 2 00:46:09.919349 systemd[1]: Started session-17.scope.
Jul 2 00:46:11.365262 sshd[4426]: pam_unix(sshd:session): session closed for user core
Jul 2 00:46:11.370605 systemd-logind[1838]: Session 17 logged out. Waiting for processes to exit.
Jul 2 00:46:11.370938 systemd[1]: sshd@16-172.31.19.36:22-139.178.89.65:49752.service: Deactivated successfully.
Jul 2 00:46:11.373892 systemd[1]: session-17.scope: Deactivated successfully.
Jul 2 00:46:11.375123 systemd-logind[1838]: Removed session 17.
Jul 2 00:46:11.389351 systemd[1]: Started sshd@17-172.31.19.36:22-139.178.89.65:49764.service.
Jul 2 00:46:11.571586 sshd[4444]: Accepted publickey for core from 139.178.89.65 port 49764 ssh2: RSA SHA256:8y6JErBds/WgSuzw1b/2wKJnltsiajeNUW/adFCuF/s
Jul 2 00:46:11.574979 sshd[4444]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:46:11.584227 systemd[1]: Started session-18.scope.
Jul 2 00:46:11.584727 systemd-logind[1838]: New session 18 of user core.
Jul 2 00:46:12.215635 sshd[4444]: pam_unix(sshd:session): session closed for user core
Jul 2 00:46:12.220864 systemd[1]: sshd@17-172.31.19.36:22-139.178.89.65:49764.service: Deactivated successfully.
Jul 2 00:46:12.224136 systemd[1]: session-18.scope: Deactivated successfully.
Jul 2 00:46:12.225023 systemd-logind[1838]: Session 18 logged out. Waiting for processes to exit.
Jul 2 00:46:12.228550 systemd-logind[1838]: Removed session 18.
Jul 2 00:46:12.239948 systemd[1]: Started sshd@18-172.31.19.36:22-139.178.89.65:49772.service.
Jul 2 00:46:12.411018 sshd[4455]: Accepted publickey for core from 139.178.89.65 port 49772 ssh2: RSA SHA256:8y6JErBds/WgSuzw1b/2wKJnltsiajeNUW/adFCuF/s
Jul 2 00:46:12.414193 sshd[4455]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:46:12.423665 systemd[1]: Started session-19.scope.
Jul 2 00:46:12.424722 systemd-logind[1838]: New session 19 of user core.
Jul 2 00:46:12.669538 sshd[4455]: pam_unix(sshd:session): session closed for user core
Jul 2 00:46:12.676366 systemd[1]: sshd@18-172.31.19.36:22-139.178.89.65:49772.service: Deactivated successfully.
Jul 2 00:46:12.681647 systemd-logind[1838]: Session 19 logged out. Waiting for processes to exit.
Jul 2 00:46:12.683037 systemd[1]: session-19.scope: Deactivated successfully.
Jul 2 00:46:12.687057 systemd-logind[1838]: Removed session 19.
Jul 2 00:46:17.694164 systemd[1]: Started sshd@19-172.31.19.36:22-139.178.89.65:49780.service.
Jul 2 00:46:17.863498 sshd[4468]: Accepted publickey for core from 139.178.89.65 port 49780 ssh2: RSA SHA256:8y6JErBds/WgSuzw1b/2wKJnltsiajeNUW/adFCuF/s
Jul 2 00:46:17.866142 sshd[4468]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:46:17.875668 systemd[1]: Started session-20.scope.
Jul 2 00:46:17.876294 systemd-logind[1838]: New session 20 of user core.
Jul 2 00:46:18.116554 sshd[4468]: pam_unix(sshd:session): session closed for user core
Jul 2 00:46:18.121923 systemd[1]: sshd@19-172.31.19.36:22-139.178.89.65:49780.service: Deactivated successfully.
Jul 2 00:46:18.123889 systemd-logind[1838]: Session 20 logged out. Waiting for processes to exit.
Jul 2 00:46:18.124064 systemd[1]: session-20.scope: Deactivated successfully.
Jul 2 00:46:18.127175 systemd-logind[1838]: Removed session 20.
Jul 2 00:46:23.141687 systemd[1]: Started sshd@20-172.31.19.36:22-139.178.89.65:33334.service.
Jul 2 00:46:23.311870 sshd[4485]: Accepted publickey for core from 139.178.89.65 port 33334 ssh2: RSA SHA256:8y6JErBds/WgSuzw1b/2wKJnltsiajeNUW/adFCuF/s
Jul 2 00:46:23.315058 sshd[4485]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:46:23.324792 systemd[1]: Started session-21.scope.
Jul 2 00:46:23.326668 systemd-logind[1838]: New session 21 of user core.
Jul 2 00:46:23.575036 sshd[4485]: pam_unix(sshd:session): session closed for user core
Jul 2 00:46:23.580663 systemd-logind[1838]: Session 21 logged out. Waiting for processes to exit.
Jul 2 00:46:23.581372 systemd[1]: sshd@20-172.31.19.36:22-139.178.89.65:33334.service: Deactivated successfully.
Jul 2 00:46:23.584026 systemd[1]: session-21.scope: Deactivated successfully.
Jul 2 00:46:23.585935 systemd-logind[1838]: Removed session 21.
Jul 2 00:46:28.601493 systemd[1]: Started sshd@21-172.31.19.36:22-139.178.89.65:56312.service.
Jul 2 00:46:28.778906 sshd[4498]: Accepted publickey for core from 139.178.89.65 port 56312 ssh2: RSA SHA256:8y6JErBds/WgSuzw1b/2wKJnltsiajeNUW/adFCuF/s
Jul 2 00:46:28.782056 sshd[4498]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:46:28.790996 systemd-logind[1838]: New session 22 of user core.
Jul 2 00:46:28.792598 systemd[1]: Started session-22.scope.
Jul 2 00:46:29.043683 sshd[4498]: pam_unix(sshd:session): session closed for user core
Jul 2 00:46:29.049387 systemd-logind[1838]: Session 22 logged out. Waiting for processes to exit.
Jul 2 00:46:29.050075 systemd[1]: sshd@21-172.31.19.36:22-139.178.89.65:56312.service: Deactivated successfully.
Jul 2 00:46:29.052305 systemd[1]: session-22.scope: Deactivated successfully.
Jul 2 00:46:29.054617 systemd-logind[1838]: Removed session 22.
Jul 2 00:46:34.069180 systemd[1]: Started sshd@22-172.31.19.36:22-139.178.89.65:56326.service.
Jul 2 00:46:34.239690 sshd[4512]: Accepted publickey for core from 139.178.89.65 port 56326 ssh2: RSA SHA256:8y6JErBds/WgSuzw1b/2wKJnltsiajeNUW/adFCuF/s
Jul 2 00:46:34.243042 sshd[4512]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:46:34.250537 systemd-logind[1838]: New session 23 of user core.
Jul 2 00:46:34.252562 systemd[1]: Started session-23.scope.
Jul 2 00:46:34.495095 sshd[4512]: pam_unix(sshd:session): session closed for user core
Jul 2 00:46:34.500681 systemd[1]: sshd@22-172.31.19.36:22-139.178.89.65:56326.service: Deactivated successfully.
Jul 2 00:46:34.503677 systemd-logind[1838]: Session 23 logged out. Waiting for processes to exit.
Jul 2 00:46:34.505489 systemd[1]: session-23.scope: Deactivated successfully.
Jul 2 00:46:34.507948 systemd-logind[1838]: Removed session 23.
Jul 2 00:46:34.523100 systemd[1]: Started sshd@23-172.31.19.36:22-139.178.89.65:56334.service.
Jul 2 00:46:34.699498 sshd[4525]: Accepted publickey for core from 139.178.89.65 port 56334 ssh2: RSA SHA256:8y6JErBds/WgSuzw1b/2wKJnltsiajeNUW/adFCuF/s
Jul 2 00:46:34.701163 sshd[4525]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:46:34.710990 systemd[1]: Started session-24.scope.
Jul 2 00:46:34.711825 systemd-logind[1838]: New session 24 of user core.
Jul 2 00:46:38.630750 systemd[1]: run-containerd-runc-k8s.io-d10674ea11f01b46dadf6cf63819dade37e9c0047a4ada00a7bc33ece447bbe5-runc.rl4zW5.mount: Deactivated successfully.
Jul 2 00:46:38.636175 env[1854]: time="2024-07-02T00:46:38.634465414Z" level=info msg="StopContainer for \"40cfd4af997e12a50ef3cf71bd9b9edf3fa7ca2d1ee99a06f26f5999c254a7ce\" with timeout 30 (s)"
Jul 2 00:46:38.636175 env[1854]: time="2024-07-02T00:46:38.635334458Z" level=info msg="Stop container \"40cfd4af997e12a50ef3cf71bd9b9edf3fa7ca2d1ee99a06f26f5999c254a7ce\" with signal terminated"
Jul 2 00:46:38.670446 env[1854]: time="2024-07-02T00:46:38.670150159Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jul 2 00:46:38.685678 env[1854]: time="2024-07-02T00:46:38.685613920Z" level=info msg="StopContainer for \"d10674ea11f01b46dadf6cf63819dade37e9c0047a4ada00a7bc33ece447bbe5\" with timeout 2 (s)"
Jul 2 00:46:38.686520 env[1854]: time="2024-07-02T00:46:38.686379255Z" level=info msg="Stop container \"d10674ea11f01b46dadf6cf63819dade37e9c0047a4ada00a7bc33ece447bbe5\" with signal terminated"
Jul 2 00:46:38.702661 systemd-networkd[1518]: lxc_health: Link DOWN
Jul 2 00:46:38.702674 systemd-networkd[1518]: lxc_health: Lost carrier
Jul 2 00:46:38.765366 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-40cfd4af997e12a50ef3cf71bd9b9edf3fa7ca2d1ee99a06f26f5999c254a7ce-rootfs.mount: Deactivated successfully.
Jul 2 00:46:38.786259 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d10674ea11f01b46dadf6cf63819dade37e9c0047a4ada00a7bc33ece447bbe5-rootfs.mount: Deactivated successfully.
Jul 2 00:46:38.800089 env[1854]: time="2024-07-02T00:46:38.800001024Z" level=info msg="shim disconnected" id=40cfd4af997e12a50ef3cf71bd9b9edf3fa7ca2d1ee99a06f26f5999c254a7ce
Jul 2 00:46:38.800359 env[1854]: time="2024-07-02T00:46:38.800087356Z" level=warning msg="cleaning up after shim disconnected" id=40cfd4af997e12a50ef3cf71bd9b9edf3fa7ca2d1ee99a06f26f5999c254a7ce namespace=k8s.io
Jul 2 00:46:38.800359 env[1854]: time="2024-07-02T00:46:38.800111333Z" level=info msg="cleaning up dead shim"
Jul 2 00:46:38.801311 env[1854]: time="2024-07-02T00:46:38.801245721Z" level=info msg="shim disconnected" id=d10674ea11f01b46dadf6cf63819dade37e9c0047a4ada00a7bc33ece447bbe5
Jul 2 00:46:38.801585 env[1854]: time="2024-07-02T00:46:38.801546167Z" level=warning msg="cleaning up after shim disconnected" id=d10674ea11f01b46dadf6cf63819dade37e9c0047a4ada00a7bc33ece447bbe5 namespace=k8s.io
Jul 2 00:46:38.801760 env[1854]: time="2024-07-02T00:46:38.801728132Z" level=info msg="cleaning up dead shim"
Jul 2 00:46:38.822222 env[1854]: time="2024-07-02T00:46:38.822167371Z" level=warning msg="cleanup warnings time=\"2024-07-02T00:46:38Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4595 runtime=io.containerd.runc.v2\n"
Jul 2 00:46:38.825545 env[1854]: time="2024-07-02T00:46:38.825482357Z" level=warning msg="cleanup warnings time=\"2024-07-02T00:46:38Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4594 runtime=io.containerd.runc.v2\n"
Jul 2 00:46:38.826033 env[1854]: time="2024-07-02T00:46:38.825988300Z" level=info msg="StopContainer for \"d10674ea11f01b46dadf6cf63819dade37e9c0047a4ada00a7bc33ece447bbe5\" returns successfully"
Jul 2 00:46:38.827109 env[1854]: time="2024-07-02T00:46:38.827052029Z" level=info msg="StopPodSandbox for \"fa769556c53da4120e31ae31b6056439bd1c9ae54ecb44b6b086e39bf9ead58d\""
Jul 2 00:46:38.827288 env[1854]: time="2024-07-02T00:46:38.827157958Z" level=info msg="Container to stop \"f2a36f6dfa66dfe0e07a2d7bb482498c56311b044c202a02e724c51431a69dc4\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 2 00:46:38.827288 env[1854]: time="2024-07-02T00:46:38.827196924Z" level=info msg="Container to stop \"b1e46b3241ec819304b56c888e6bd88043acb2d562e20828dc0dcf228e537738\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 2 00:46:38.827288 env[1854]: time="2024-07-02T00:46:38.827224825Z" level=info msg="Container to stop \"91bceddc18cf01d42f8d091cf4f2c30ac739490522d1869c7cd143712e7eef4a\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 2 00:46:38.827288 env[1854]: time="2024-07-02T00:46:38.827253291Z" level=info msg="Container to stop \"9e8602bfe5aa72ed685ce228800465b36793b164e817fb7dad04832bbdd9ae28\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 2 00:46:38.827288 env[1854]: time="2024-07-02T00:46:38.827279728Z" level=info msg="Container to stop \"d10674ea11f01b46dadf6cf63819dade37e9c0047a4ada00a7bc33ece447bbe5\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 2 00:46:38.831439 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-fa769556c53da4120e31ae31b6056439bd1c9ae54ecb44b6b086e39bf9ead58d-shm.mount: Deactivated successfully.
Jul 2 00:46:38.836826 env[1854]: time="2024-07-02T00:46:38.836695448Z" level=info msg="StopContainer for \"40cfd4af997e12a50ef3cf71bd9b9edf3fa7ca2d1ee99a06f26f5999c254a7ce\" returns successfully"
Jul 2 00:46:38.837445 env[1854]: time="2024-07-02T00:46:38.837370203Z" level=info msg="StopPodSandbox for \"cec40469f468bf0512e80e335d1b2e4681e031cb72e95b1a5361340052a89d6a\""
Jul 2 00:46:38.837571 env[1854]: time="2024-07-02T00:46:38.837486441Z" level=info msg="Container to stop \"40cfd4af997e12a50ef3cf71bd9b9edf3fa7ca2d1ee99a06f26f5999c254a7ce\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 2 00:46:38.917326 env[1854]: time="2024-07-02T00:46:38.915839859Z" level=info msg="shim disconnected" id=fa769556c53da4120e31ae31b6056439bd1c9ae54ecb44b6b086e39bf9ead58d
Jul 2 00:46:38.917629 env[1854]: time="2024-07-02T00:46:38.917486372Z" level=warning msg="cleaning up after shim disconnected" id=fa769556c53da4120e31ae31b6056439bd1c9ae54ecb44b6b086e39bf9ead58d namespace=k8s.io
Jul 2 00:46:38.917629 env[1854]: time="2024-07-02T00:46:38.917536186Z" level=info msg="cleaning up dead shim"
Jul 2 00:46:38.930104 env[1854]: time="2024-07-02T00:46:38.930025765Z" level=info msg="shim disconnected" id=cec40469f468bf0512e80e335d1b2e4681e031cb72e95b1a5361340052a89d6a
Jul 2 00:46:38.930372 env[1854]: time="2024-07-02T00:46:38.930104104Z" level=warning msg="cleaning up after shim disconnected" id=cec40469f468bf0512e80e335d1b2e4681e031cb72e95b1a5361340052a89d6a namespace=k8s.io
Jul 2 00:46:38.930372 env[1854]: time="2024-07-02T00:46:38.930126881Z" level=info msg="cleaning up dead shim"
Jul 2 00:46:38.944254 env[1854]: time="2024-07-02T00:46:38.944167064Z" level=warning msg="cleanup warnings time=\"2024-07-02T00:46:38Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4660 runtime=io.containerd.runc.v2\n"
Jul 2 00:46:38.944844 env[1854]: time="2024-07-02T00:46:38.944781312Z" level=info msg="TearDown network for sandbox \"fa769556c53da4120e31ae31b6056439bd1c9ae54ecb44b6b086e39bf9ead58d\" successfully"
Jul 2 00:46:38.944946 env[1854]: time="2024-07-02T00:46:38.944837943Z" level=info msg="StopPodSandbox for \"fa769556c53da4120e31ae31b6056439bd1c9ae54ecb44b6b086e39bf9ead58d\" returns successfully"
Jul 2 00:46:38.974562 env[1854]: time="2024-07-02T00:46:38.974505106Z" level=warning msg="cleanup warnings time=\"2024-07-02T00:46:38Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4668 runtime=io.containerd.runc.v2\n"
Jul 2 00:46:38.975567 env[1854]: time="2024-07-02T00:46:38.975496532Z" level=info msg="TearDown network for sandbox \"cec40469f468bf0512e80e335d1b2e4681e031cb72e95b1a5361340052a89d6a\" successfully"
Jul 2 00:46:38.976014 env[1854]: time="2024-07-02T00:46:38.975914259Z" level=info msg="StopPodSandbox for \"cec40469f468bf0512e80e335d1b2e4681e031cb72e95b1a5361340052a89d6a\" returns successfully"
Jul 2 00:46:39.072670 kubelet[2958]: I0702 00:46:39.072612 2958 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/201a4794-0cd7-490b-a16f-9b5860bb7a3f-lib-modules\") pod \"201a4794-0cd7-490b-a16f-9b5860bb7a3f\" (UID: \"201a4794-0cd7-490b-a16f-9b5860bb7a3f\") "
Jul 2 00:46:39.073352 kubelet[2958]: I0702 00:46:39.072698 2958 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lws5w\" (UniqueName: \"kubernetes.io/projected/8d5b8788-11f1-488b-b8f0-997f2899d6f4-kube-api-access-lws5w\") pod \"8d5b8788-11f1-488b-b8f0-997f2899d6f4\" (UID: \"8d5b8788-11f1-488b-b8f0-997f2899d6f4\") "
Jul 2 00:46:39.073352 kubelet[2958]: I0702 00:46:39.072747 2958 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/201a4794-0cd7-490b-a16f-9b5860bb7a3f-hostproc\") pod \"201a4794-0cd7-490b-a16f-9b5860bb7a3f\" (UID: \"201a4794-0cd7-490b-a16f-9b5860bb7a3f\") "
Jul 2 00:46:39.073352 kubelet[2958]: I0702 00:46:39.072789 2958 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/201a4794-0cd7-490b-a16f-9b5860bb7a3f-cilium-cgroup\") pod \"201a4794-0cd7-490b-a16f-9b5860bb7a3f\" (UID: \"201a4794-0cd7-490b-a16f-9b5860bb7a3f\") "
Jul 2 00:46:39.073352 kubelet[2958]: I0702 00:46:39.072832 2958 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/201a4794-0cd7-490b-a16f-9b5860bb7a3f-xtables-lock\") pod \"201a4794-0cd7-490b-a16f-9b5860bb7a3f\" (UID: \"201a4794-0cd7-490b-a16f-9b5860bb7a3f\") "
Jul 2 00:46:39.073352 kubelet[2958]: I0702 00:46:39.072869 2958 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/201a4794-0cd7-490b-a16f-9b5860bb7a3f-cni-path\") pod \"201a4794-0cd7-490b-a16f-9b5860bb7a3f\" (UID: \"201a4794-0cd7-490b-a16f-9b5860bb7a3f\") "
Jul 2 00:46:39.073352 kubelet[2958]: I0702 00:46:39.072909 2958 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/201a4794-0cd7-490b-a16f-9b5860bb7a3f-host-proc-sys-net\") pod \"201a4794-0cd7-490b-a16f-9b5860bb7a3f\" (UID: \"201a4794-0cd7-490b-a16f-9b5860bb7a3f\") "
Jul 2 00:46:39.073897 kubelet[2958]: I0702 00:46:39.072958 2958 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/201a4794-0cd7-490b-a16f-9b5860bb7a3f-hubble-tls\") pod \"201a4794-0cd7-490b-a16f-9b5860bb7a3f\" (UID: \"201a4794-0cd7-490b-a16f-9b5860bb7a3f\") "
Jul 2 00:46:39.073897 kubelet[2958]: I0702 00:46:39.073000 2958 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/201a4794-0cd7-490b-a16f-9b5860bb7a3f-etc-cni-netd\") pod \"201a4794-0cd7-490b-a16f-9b5860bb7a3f\" (UID: \"201a4794-0cd7-490b-a16f-9b5860bb7a3f\") "
Jul 2 00:46:39.073897 kubelet[2958]: I0702 00:46:39.073045 2958 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/201a4794-0cd7-490b-a16f-9b5860bb7a3f-cilium-config-path\") pod \"201a4794-0cd7-490b-a16f-9b5860bb7a3f\" (UID: \"201a4794-0cd7-490b-a16f-9b5860bb7a3f\") "
Jul 2 00:46:39.073897 kubelet[2958]: I0702 00:46:39.073087 2958 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/201a4794-0cd7-490b-a16f-9b5860bb7a3f-cilium-run\") pod \"201a4794-0cd7-490b-a16f-9b5860bb7a3f\" (UID: \"201a4794-0cd7-490b-a16f-9b5860bb7a3f\") "
Jul 2 00:46:39.073897 kubelet[2958]: I0702 00:46:39.073132 2958 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8d5b8788-11f1-488b-b8f0-997f2899d6f4-cilium-config-path\") pod \"8d5b8788-11f1-488b-b8f0-997f2899d6f4\" (UID: \"8d5b8788-11f1-488b-b8f0-997f2899d6f4\") "
Jul 2 00:46:39.073897 kubelet[2958]: I0702 00:46:39.073171 2958 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/201a4794-0cd7-490b-a16f-9b5860bb7a3f-bpf-maps\") pod \"201a4794-0cd7-490b-a16f-9b5860bb7a3f\" (UID: \"201a4794-0cd7-490b-a16f-9b5860bb7a3f\") "
Jul 2 00:46:39.074303 kubelet[2958]: I0702 00:46:39.073251 2958 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/201a4794-0cd7-490b-a16f-9b5860bb7a3f-clustermesh-secrets\") pod \"201a4794-0cd7-490b-a16f-9b5860bb7a3f\" (UID: \"201a4794-0cd7-490b-a16f-9b5860bb7a3f\") "
Jul 2 00:46:39.074303 kubelet[2958]: I0702 00:46:39.073298 2958 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/201a4794-0cd7-490b-a16f-9b5860bb7a3f-host-proc-sys-kernel\") pod \"201a4794-0cd7-490b-a16f-9b5860bb7a3f\" (UID: \"201a4794-0cd7-490b-a16f-9b5860bb7a3f\") "
Jul 2 00:46:39.074303 kubelet[2958]: I0702 00:46:39.073345 2958 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bvqkl\" (UniqueName: \"kubernetes.io/projected/201a4794-0cd7-490b-a16f-9b5860bb7a3f-kube-api-access-bvqkl\") pod \"201a4794-0cd7-490b-a16f-9b5860bb7a3f\" (UID: \"201a4794-0cd7-490b-a16f-9b5860bb7a3f\") "
Jul 2 00:46:39.075435 kubelet[2958]: I0702 00:46:39.074622 2958 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/201a4794-0cd7-490b-a16f-9b5860bb7a3f-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "201a4794-0cd7-490b-a16f-9b5860bb7a3f" (UID: "201a4794-0cd7-490b-a16f-9b5860bb7a3f"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 00:46:39.075833 kubelet[2958]: I0702 00:46:39.075763 2958 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/201a4794-0cd7-490b-a16f-9b5860bb7a3f-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "201a4794-0cd7-490b-a16f-9b5860bb7a3f" (UID: "201a4794-0cd7-490b-a16f-9b5860bb7a3f"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 00:46:39.075969 kubelet[2958]: I0702 00:46:39.075846 2958 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/201a4794-0cd7-490b-a16f-9b5860bb7a3f-hostproc" (OuterVolumeSpecName: "hostproc") pod "201a4794-0cd7-490b-a16f-9b5860bb7a3f" (UID: "201a4794-0cd7-490b-a16f-9b5860bb7a3f"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 00:46:39.075969 kubelet[2958]: I0702 00:46:39.075893 2958 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/201a4794-0cd7-490b-a16f-9b5860bb7a3f-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "201a4794-0cd7-490b-a16f-9b5860bb7a3f" (UID: "201a4794-0cd7-490b-a16f-9b5860bb7a3f"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 00:46:39.075969 kubelet[2958]: I0702 00:46:39.075938 2958 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/201a4794-0cd7-490b-a16f-9b5860bb7a3f-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "201a4794-0cd7-490b-a16f-9b5860bb7a3f" (UID: "201a4794-0cd7-490b-a16f-9b5860bb7a3f"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 00:46:39.076181 kubelet[2958]: I0702 00:46:39.076006 2958 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/201a4794-0cd7-490b-a16f-9b5860bb7a3f-cni-path" (OuterVolumeSpecName: "cni-path") pod "201a4794-0cd7-490b-a16f-9b5860bb7a3f" (UID: "201a4794-0cd7-490b-a16f-9b5860bb7a3f"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 00:46:39.076181 kubelet[2958]: I0702 00:46:39.076047 2958 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/201a4794-0cd7-490b-a16f-9b5860bb7a3f-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "201a4794-0cd7-490b-a16f-9b5860bb7a3f" (UID: "201a4794-0cd7-490b-a16f-9b5860bb7a3f"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 00:46:39.079961 kubelet[2958]: I0702 00:46:39.079891 2958 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/201a4794-0cd7-490b-a16f-9b5860bb7a3f-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "201a4794-0cd7-490b-a16f-9b5860bb7a3f" (UID: "201a4794-0cd7-490b-a16f-9b5860bb7a3f"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 00:46:39.080523 kubelet[2958]: I0702 00:46:39.080469 2958 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/201a4794-0cd7-490b-a16f-9b5860bb7a3f-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "201a4794-0cd7-490b-a16f-9b5860bb7a3f" (UID: "201a4794-0cd7-490b-a16f-9b5860bb7a3f"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 00:46:39.086457 kubelet[2958]: I0702 00:46:39.086353 2958 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/201a4794-0cd7-490b-a16f-9b5860bb7a3f-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "201a4794-0cd7-490b-a16f-9b5860bb7a3f" (UID: "201a4794-0cd7-490b-a16f-9b5860bb7a3f"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 00:46:39.086843 kubelet[2958]: I0702 00:46:39.086802 2958 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/201a4794-0cd7-490b-a16f-9b5860bb7a3f-kube-api-access-bvqkl" (OuterVolumeSpecName: "kube-api-access-bvqkl") pod "201a4794-0cd7-490b-a16f-9b5860bb7a3f" (UID: "201a4794-0cd7-490b-a16f-9b5860bb7a3f"). InnerVolumeSpecName "kube-api-access-bvqkl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jul 2 00:46:39.087590 kubelet[2958]: I0702 00:46:39.087529 2958 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/201a4794-0cd7-490b-a16f-9b5860bb7a3f-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "201a4794-0cd7-490b-a16f-9b5860bb7a3f" (UID: "201a4794-0cd7-490b-a16f-9b5860bb7a3f"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jul 2 00:46:39.093144 kubelet[2958]: I0702 00:46:39.093076 2958 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/201a4794-0cd7-490b-a16f-9b5860bb7a3f-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "201a4794-0cd7-490b-a16f-9b5860bb7a3f" (UID: "201a4794-0cd7-490b-a16f-9b5860bb7a3f"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jul 2 00:46:39.094006 kubelet[2958]: I0702 00:46:39.093942 2958 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/201a4794-0cd7-490b-a16f-9b5860bb7a3f-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "201a4794-0cd7-490b-a16f-9b5860bb7a3f" (UID: "201a4794-0cd7-490b-a16f-9b5860bb7a3f"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jul 2 00:46:39.094855 kubelet[2958]: I0702 00:46:39.094800 2958 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8d5b8788-11f1-488b-b8f0-997f2899d6f4-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "8d5b8788-11f1-488b-b8f0-997f2899d6f4" (UID: "8d5b8788-11f1-488b-b8f0-997f2899d6f4"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jul 2 00:46:39.098081 kubelet[2958]: I0702 00:46:39.098023 2958 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d5b8788-11f1-488b-b8f0-997f2899d6f4-kube-api-access-lws5w" (OuterVolumeSpecName: "kube-api-access-lws5w") pod "8d5b8788-11f1-488b-b8f0-997f2899d6f4" (UID: "8d5b8788-11f1-488b-b8f0-997f2899d6f4"). InnerVolumeSpecName "kube-api-access-lws5w". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jul 2 00:46:39.176294 kubelet[2958]: I0702 00:46:39.174587 2958 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-bvqkl\" (UniqueName: \"kubernetes.io/projected/201a4794-0cd7-490b-a16f-9b5860bb7a3f-kube-api-access-bvqkl\") on node \"ip-172-31-19-36\" DevicePath \"\""
Jul 2 00:46:39.176705 kubelet[2958]: I0702 00:46:39.176668 2958 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/201a4794-0cd7-490b-a16f-9b5860bb7a3f-bpf-maps\") on node \"ip-172-31-19-36\" DevicePath \"\""
Jul 2 00:46:39.176966 kubelet[2958]: I0702 00:46:39.176926 2958 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/201a4794-0cd7-490b-a16f-9b5860bb7a3f-clustermesh-secrets\") on node \"ip-172-31-19-36\" DevicePath \"\""
Jul 2 00:46:39.177127 kubelet[2958]: I0702 00:46:39.177100 2958 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/201a4794-0cd7-490b-a16f-9b5860bb7a3f-host-proc-sys-kernel\") on node \"ip-172-31-19-36\" DevicePath \"\""
Jul 2 00:46:39.177286 kubelet[2958]: I0702 00:46:39.177265 2958 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/201a4794-0cd7-490b-a16f-9b5860bb7a3f-hostproc\") on node \"ip-172-31-19-36\" DevicePath \"\""
Jul 2 00:46:39.177494 kubelet[2958]: I0702 00:46:39.177455 2958 reconciler_common.go:300]
"Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/201a4794-0cd7-490b-a16f-9b5860bb7a3f-lib-modules\") on node \"ip-172-31-19-36\" DevicePath \"\"" Jul 2 00:46:39.177667 kubelet[2958]: I0702 00:46:39.177644 2958 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-lws5w\" (UniqueName: \"kubernetes.io/projected/8d5b8788-11f1-488b-b8f0-997f2899d6f4-kube-api-access-lws5w\") on node \"ip-172-31-19-36\" DevicePath \"\"" Jul 2 00:46:39.177858 kubelet[2958]: I0702 00:46:39.177835 2958 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/201a4794-0cd7-490b-a16f-9b5860bb7a3f-cilium-cgroup\") on node \"ip-172-31-19-36\" DevicePath \"\"" Jul 2 00:46:39.178019 kubelet[2958]: I0702 00:46:39.177997 2958 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/201a4794-0cd7-490b-a16f-9b5860bb7a3f-xtables-lock\") on node \"ip-172-31-19-36\" DevicePath \"\"" Jul 2 00:46:39.178176 kubelet[2958]: I0702 00:46:39.178156 2958 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/201a4794-0cd7-490b-a16f-9b5860bb7a3f-cni-path\") on node \"ip-172-31-19-36\" DevicePath \"\"" Jul 2 00:46:39.178319 kubelet[2958]: I0702 00:46:39.178299 2958 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/201a4794-0cd7-490b-a16f-9b5860bb7a3f-host-proc-sys-net\") on node \"ip-172-31-19-36\" DevicePath \"\"" Jul 2 00:46:39.178521 kubelet[2958]: I0702 00:46:39.178482 2958 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/201a4794-0cd7-490b-a16f-9b5860bb7a3f-hubble-tls\") on node \"ip-172-31-19-36\" DevicePath \"\"" Jul 2 00:46:39.178688 kubelet[2958]: I0702 00:46:39.178668 2958 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/201a4794-0cd7-490b-a16f-9b5860bb7a3f-etc-cni-netd\") on node \"ip-172-31-19-36\" DevicePath \"\"" Jul 2 00:46:39.178842 kubelet[2958]: I0702 00:46:39.178822 2958 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/201a4794-0cd7-490b-a16f-9b5860bb7a3f-cilium-config-path\") on node \"ip-172-31-19-36\" DevicePath \"\"" Jul 2 00:46:39.178990 kubelet[2958]: I0702 00:46:39.178971 2958 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/201a4794-0cd7-490b-a16f-9b5860bb7a3f-cilium-run\") on node \"ip-172-31-19-36\" DevicePath \"\"" Jul 2 00:46:39.179132 kubelet[2958]: I0702 00:46:39.179112 2958 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8d5b8788-11f1-488b-b8f0-997f2899d6f4-cilium-config-path\") on node \"ip-172-31-19-36\" DevicePath \"\"" Jul 2 00:46:39.355177 kubelet[2958]: I0702 00:46:39.355141 2958 scope.go:117] "RemoveContainer" containerID="91bceddc18cf01d42f8d091cf4f2c30ac739490522d1869c7cd143712e7eef4a" Jul 2 00:46:39.357639 env[1854]: time="2024-07-02T00:46:39.357572265Z" level=info msg="RemoveContainer for \"91bceddc18cf01d42f8d091cf4f2c30ac739490522d1869c7cd143712e7eef4a\"" Jul 2 00:46:39.367644 env[1854]: time="2024-07-02T00:46:39.367567231Z" level=info msg="RemoveContainer for \"91bceddc18cf01d42f8d091cf4f2c30ac739490522d1869c7cd143712e7eef4a\" returns successfully" Jul 2 00:46:39.368104 kubelet[2958]: I0702 00:46:39.368071 2958 scope.go:117] "RemoveContainer" containerID="f2a36f6dfa66dfe0e07a2d7bb482498c56311b044c202a02e724c51431a69dc4" Jul 2 00:46:39.370360 env[1854]: time="2024-07-02T00:46:39.370290254Z" level=info msg="RemoveContainer for \"f2a36f6dfa66dfe0e07a2d7bb482498c56311b044c202a02e724c51431a69dc4\"" Jul 2 00:46:39.375091 env[1854]: time="2024-07-02T00:46:39.375014958Z" level=info msg="RemoveContainer for 
\"f2a36f6dfa66dfe0e07a2d7bb482498c56311b044c202a02e724c51431a69dc4\" returns successfully" Jul 2 00:46:39.375536 kubelet[2958]: I0702 00:46:39.375507 2958 scope.go:117] "RemoveContainer" containerID="b1e46b3241ec819304b56c888e6bd88043acb2d562e20828dc0dcf228e537738" Jul 2 00:46:39.377533 env[1854]: time="2024-07-02T00:46:39.377477365Z" level=info msg="RemoveContainer for \"b1e46b3241ec819304b56c888e6bd88043acb2d562e20828dc0dcf228e537738\"" Jul 2 00:46:39.382596 env[1854]: time="2024-07-02T00:46:39.382523120Z" level=info msg="RemoveContainer for \"b1e46b3241ec819304b56c888e6bd88043acb2d562e20828dc0dcf228e537738\" returns successfully" Jul 2 00:46:39.383018 kubelet[2958]: I0702 00:46:39.382985 2958 scope.go:117] "RemoveContainer" containerID="40cfd4af997e12a50ef3cf71bd9b9edf3fa7ca2d1ee99a06f26f5999c254a7ce" Jul 2 00:46:39.385647 env[1854]: time="2024-07-02T00:46:39.385232330Z" level=info msg="RemoveContainer for \"40cfd4af997e12a50ef3cf71bd9b9edf3fa7ca2d1ee99a06f26f5999c254a7ce\"" Jul 2 00:46:39.389987 env[1854]: time="2024-07-02T00:46:39.389924969Z" level=info msg="RemoveContainer for \"40cfd4af997e12a50ef3cf71bd9b9edf3fa7ca2d1ee99a06f26f5999c254a7ce\" returns successfully" Jul 2 00:46:39.390537 kubelet[2958]: I0702 00:46:39.390507 2958 scope.go:117] "RemoveContainer" containerID="9e8602bfe5aa72ed685ce228800465b36793b164e817fb7dad04832bbdd9ae28" Jul 2 00:46:39.392829 env[1854]: time="2024-07-02T00:46:39.392739988Z" level=info msg="RemoveContainer for \"9e8602bfe5aa72ed685ce228800465b36793b164e817fb7dad04832bbdd9ae28\"" Jul 2 00:46:39.398048 env[1854]: time="2024-07-02T00:46:39.397978736Z" level=info msg="RemoveContainer for \"9e8602bfe5aa72ed685ce228800465b36793b164e817fb7dad04832bbdd9ae28\" returns successfully" Jul 2 00:46:39.398484 kubelet[2958]: I0702 00:46:39.398453 2958 scope.go:117] "RemoveContainer" containerID="d10674ea11f01b46dadf6cf63819dade37e9c0047a4ada00a7bc33ece447bbe5" Jul 2 00:46:39.400715 env[1854]: time="2024-07-02T00:46:39.400656097Z" level=info 
msg="RemoveContainer for \"d10674ea11f01b46dadf6cf63819dade37e9c0047a4ada00a7bc33ece447bbe5\"" Jul 2 00:46:39.405601 env[1854]: time="2024-07-02T00:46:39.405528552Z" level=info msg="RemoveContainer for \"d10674ea11f01b46dadf6cf63819dade37e9c0047a4ada00a7bc33ece447bbe5\" returns successfully" Jul 2 00:46:39.407865 env[1854]: time="2024-07-02T00:46:39.407809846Z" level=info msg="StopPodSandbox for \"fa769556c53da4120e31ae31b6056439bd1c9ae54ecb44b6b086e39bf9ead58d\"" Jul 2 00:46:39.408020 env[1854]: time="2024-07-02T00:46:39.407956577Z" level=info msg="TearDown network for sandbox \"fa769556c53da4120e31ae31b6056439bd1c9ae54ecb44b6b086e39bf9ead58d\" successfully" Jul 2 00:46:39.408109 env[1854]: time="2024-07-02T00:46:39.408014588Z" level=info msg="StopPodSandbox for \"fa769556c53da4120e31ae31b6056439bd1c9ae54ecb44b6b086e39bf9ead58d\" returns successfully" Jul 2 00:46:39.408816 env[1854]: time="2024-07-02T00:46:39.408762823Z" level=info msg="RemovePodSandbox for \"fa769556c53da4120e31ae31b6056439bd1c9ae54ecb44b6b086e39bf9ead58d\"" Jul 2 00:46:39.408971 env[1854]: time="2024-07-02T00:46:39.408821481Z" level=info msg="Forcibly stopping sandbox \"fa769556c53da4120e31ae31b6056439bd1c9ae54ecb44b6b086e39bf9ead58d\"" Jul 2 00:46:39.408971 env[1854]: time="2024-07-02T00:46:39.408949419Z" level=info msg="TearDown network for sandbox \"fa769556c53da4120e31ae31b6056439bd1c9ae54ecb44b6b086e39bf9ead58d\" successfully" Jul 2 00:46:39.414241 env[1854]: time="2024-07-02T00:46:39.414123221Z" level=info msg="RemovePodSandbox \"fa769556c53da4120e31ae31b6056439bd1c9ae54ecb44b6b086e39bf9ead58d\" returns successfully" Jul 2 00:46:39.415157 env[1854]: time="2024-07-02T00:46:39.415091306Z" level=info msg="StopPodSandbox for \"cec40469f468bf0512e80e335d1b2e4681e031cb72e95b1a5361340052a89d6a\"" Jul 2 00:46:39.415320 env[1854]: time="2024-07-02T00:46:39.415248693Z" level=info msg="TearDown network for sandbox \"cec40469f468bf0512e80e335d1b2e4681e031cb72e95b1a5361340052a89d6a\" successfully" Jul 
2 00:46:39.415320 env[1854]: time="2024-07-02T00:46:39.415307820Z" level=info msg="StopPodSandbox for \"cec40469f468bf0512e80e335d1b2e4681e031cb72e95b1a5361340052a89d6a\" returns successfully" Jul 2 00:46:39.416003 env[1854]: time="2024-07-02T00:46:39.415949634Z" level=info msg="RemovePodSandbox for \"cec40469f468bf0512e80e335d1b2e4681e031cb72e95b1a5361340052a89d6a\"" Jul 2 00:46:39.416154 env[1854]: time="2024-07-02T00:46:39.416010717Z" level=info msg="Forcibly stopping sandbox \"cec40469f468bf0512e80e335d1b2e4681e031cb72e95b1a5361340052a89d6a\"" Jul 2 00:46:39.416229 env[1854]: time="2024-07-02T00:46:39.416142891Z" level=info msg="TearDown network for sandbox \"cec40469f468bf0512e80e335d1b2e4681e031cb72e95b1a5361340052a89d6a\" successfully" Jul 2 00:46:39.421594 env[1854]: time="2024-07-02T00:46:39.421491672Z" level=info msg="RemovePodSandbox \"cec40469f468bf0512e80e335d1b2e4681e031cb72e95b1a5361340052a89d6a\" returns successfully" Jul 2 00:46:39.615540 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fa769556c53da4120e31ae31b6056439bd1c9ae54ecb44b6b086e39bf9ead58d-rootfs.mount: Deactivated successfully. Jul 2 00:46:39.615821 systemd[1]: var-lib-kubelet-pods-201a4794\x2d0cd7\x2d490b\x2da16f\x2d9b5860bb7a3f-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jul 2 00:46:39.616059 systemd[1]: var-lib-kubelet-pods-201a4794\x2d0cd7\x2d490b\x2da16f\x2d9b5860bb7a3f-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jul 2 00:46:39.616289 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cec40469f468bf0512e80e335d1b2e4681e031cb72e95b1a5361340052a89d6a-rootfs.mount: Deactivated successfully. Jul 2 00:46:39.616576 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-cec40469f468bf0512e80e335d1b2e4681e031cb72e95b1a5361340052a89d6a-shm.mount: Deactivated successfully. 
Jul 2 00:46:39.616808 systemd[1]: var-lib-kubelet-pods-8d5b8788\x2d11f1\x2d488b\x2db8f0\x2d997f2899d6f4-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dlws5w.mount: Deactivated successfully. Jul 2 00:46:39.617041 systemd[1]: var-lib-kubelet-pods-201a4794\x2d0cd7\x2d490b\x2da16f\x2d9b5860bb7a3f-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dbvqkl.mount: Deactivated successfully. Jul 2 00:46:39.681604 kubelet[2958]: E0702 00:46:39.681538 2958 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 2 00:46:40.559290 sshd[4525]: pam_unix(sshd:session): session closed for user core Jul 2 00:46:40.564552 systemd[1]: sshd@23-172.31.19.36:22-139.178.89.65:56334.service: Deactivated successfully. Jul 2 00:46:40.567147 systemd[1]: session-24.scope: Deactivated successfully. Jul 2 00:46:40.567183 systemd-logind[1838]: Session 24 logged out. Waiting for processes to exit. Jul 2 00:46:40.569959 systemd-logind[1838]: Removed session 24. Jul 2 00:46:40.585200 systemd[1]: Started sshd@24-172.31.19.36:22-139.178.89.65:34284.service. Jul 2 00:46:40.756263 sshd[4696]: Accepted publickey for core from 139.178.89.65 port 34284 ssh2: RSA SHA256:8y6JErBds/WgSuzw1b/2wKJnltsiajeNUW/adFCuF/s Jul 2 00:46:40.759016 sshd[4696]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:46:40.768909 systemd[1]: Started session-25.scope. Jul 2 00:46:40.770065 systemd-logind[1838]: New session 25 of user core. 
Jul 2 00:46:41.426511 kubelet[2958]: I0702 00:46:41.426462 2958 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="201a4794-0cd7-490b-a16f-9b5860bb7a3f" path="/var/lib/kubelet/pods/201a4794-0cd7-490b-a16f-9b5860bb7a3f/volumes" Jul 2 00:46:41.428107 kubelet[2958]: I0702 00:46:41.428065 2958 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="8d5b8788-11f1-488b-b8f0-997f2899d6f4" path="/var/lib/kubelet/pods/8d5b8788-11f1-488b-b8f0-997f2899d6f4/volumes" Jul 2 00:46:42.010315 kubelet[2958]: I0702 00:46:42.010272 2958 topology_manager.go:215] "Topology Admit Handler" podUID="7cb4c2c9-91a5-4fcd-98d9-08d9e4dba26f" podNamespace="kube-system" podName="cilium-lptws" Jul 2 00:46:42.010635 kubelet[2958]: E0702 00:46:42.010605 2958 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="201a4794-0cd7-490b-a16f-9b5860bb7a3f" containerName="mount-cgroup" Jul 2 00:46:42.010805 kubelet[2958]: E0702 00:46:42.010782 2958 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="201a4794-0cd7-490b-a16f-9b5860bb7a3f" containerName="apply-sysctl-overwrites" Jul 2 00:46:42.010930 kubelet[2958]: E0702 00:46:42.010910 2958 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="201a4794-0cd7-490b-a16f-9b5860bb7a3f" containerName="cilium-agent" Jul 2 00:46:42.011056 kubelet[2958]: E0702 00:46:42.011035 2958 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8d5b8788-11f1-488b-b8f0-997f2899d6f4" containerName="cilium-operator" Jul 2 00:46:42.011199 kubelet[2958]: E0702 00:46:42.011176 2958 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="201a4794-0cd7-490b-a16f-9b5860bb7a3f" containerName="mount-bpf-fs" Jul 2 00:46:42.013576 kubelet[2958]: E0702 00:46:42.011310 2958 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="201a4794-0cd7-490b-a16f-9b5860bb7a3f" containerName="clean-cilium-state" Jul 2 00:46:42.012037 sshd[4696]: pam_unix(sshd:session): session closed for user core 
Jul 2 00:46:42.014499 kubelet[2958]: I0702 00:46:42.014464 2958 memory_manager.go:346] "RemoveStaleState removing state" podUID="8d5b8788-11f1-488b-b8f0-997f2899d6f4" containerName="cilium-operator" Jul 2 00:46:42.014661 kubelet[2958]: I0702 00:46:42.014640 2958 memory_manager.go:346] "RemoveStaleState removing state" podUID="201a4794-0cd7-490b-a16f-9b5860bb7a3f" containerName="cilium-agent" Jul 2 00:46:42.019644 systemd[1]: sshd@24-172.31.19.36:22-139.178.89.65:34284.service: Deactivated successfully. Jul 2 00:46:42.021669 systemd[1]: session-25.scope: Deactivated successfully. Jul 2 00:46:42.021807 systemd-logind[1838]: Session 25 logged out. Waiting for processes to exit. Jul 2 00:46:42.024948 systemd-logind[1838]: Removed session 25. Jul 2 00:46:42.035830 systemd[1]: Started sshd@25-172.31.19.36:22-139.178.89.65:34292.service. Jul 2 00:46:42.100279 kubelet[2958]: I0702 00:46:42.100241 2958 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7cb4c2c9-91a5-4fcd-98d9-08d9e4dba26f-cilium-cgroup\") pod \"cilium-lptws\" (UID: \"7cb4c2c9-91a5-4fcd-98d9-08d9e4dba26f\") " pod="kube-system/cilium-lptws" Jul 2 00:46:42.100596 kubelet[2958]: I0702 00:46:42.100538 2958 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/7cb4c2c9-91a5-4fcd-98d9-08d9e4dba26f-cilium-ipsec-secrets\") pod \"cilium-lptws\" (UID: \"7cb4c2c9-91a5-4fcd-98d9-08d9e4dba26f\") " pod="kube-system/cilium-lptws" Jul 2 00:46:42.100790 kubelet[2958]: I0702 00:46:42.100756 2958 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tntzl\" (UniqueName: \"kubernetes.io/projected/7cb4c2c9-91a5-4fcd-98d9-08d9e4dba26f-kube-api-access-tntzl\") pod \"cilium-lptws\" (UID: \"7cb4c2c9-91a5-4fcd-98d9-08d9e4dba26f\") " pod="kube-system/cilium-lptws" Jul 2 
00:46:42.101068 kubelet[2958]: I0702 00:46:42.101045 2958 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7cb4c2c9-91a5-4fcd-98d9-08d9e4dba26f-bpf-maps\") pod \"cilium-lptws\" (UID: \"7cb4c2c9-91a5-4fcd-98d9-08d9e4dba26f\") " pod="kube-system/cilium-lptws" Jul 2 00:46:42.101277 kubelet[2958]: I0702 00:46:42.101238 2958 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7cb4c2c9-91a5-4fcd-98d9-08d9e4dba26f-lib-modules\") pod \"cilium-lptws\" (UID: \"7cb4c2c9-91a5-4fcd-98d9-08d9e4dba26f\") " pod="kube-system/cilium-lptws" Jul 2 00:46:42.101474 kubelet[2958]: I0702 00:46:42.101453 2958 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7cb4c2c9-91a5-4fcd-98d9-08d9e4dba26f-hubble-tls\") pod \"cilium-lptws\" (UID: \"7cb4c2c9-91a5-4fcd-98d9-08d9e4dba26f\") " pod="kube-system/cilium-lptws" Jul 2 00:46:42.101689 kubelet[2958]: I0702 00:46:42.101649 2958 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7cb4c2c9-91a5-4fcd-98d9-08d9e4dba26f-xtables-lock\") pod \"cilium-lptws\" (UID: \"7cb4c2c9-91a5-4fcd-98d9-08d9e4dba26f\") " pod="kube-system/cilium-lptws" Jul 2 00:46:42.101935 kubelet[2958]: I0702 00:46:42.101910 2958 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7cb4c2c9-91a5-4fcd-98d9-08d9e4dba26f-cni-path\") pod \"cilium-lptws\" (UID: \"7cb4c2c9-91a5-4fcd-98d9-08d9e4dba26f\") " pod="kube-system/cilium-lptws" Jul 2 00:46:42.102141 kubelet[2958]: I0702 00:46:42.102103 2958 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7cb4c2c9-91a5-4fcd-98d9-08d9e4dba26f-cilium-config-path\") pod \"cilium-lptws\" (UID: \"7cb4c2c9-91a5-4fcd-98d9-08d9e4dba26f\") " pod="kube-system/cilium-lptws" Jul 2 00:46:42.102342 kubelet[2958]: I0702 00:46:42.102300 2958 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7cb4c2c9-91a5-4fcd-98d9-08d9e4dba26f-host-proc-sys-kernel\") pod \"cilium-lptws\" (UID: \"7cb4c2c9-91a5-4fcd-98d9-08d9e4dba26f\") " pod="kube-system/cilium-lptws" Jul 2 00:46:42.102578 kubelet[2958]: I0702 00:46:42.102555 2958 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7cb4c2c9-91a5-4fcd-98d9-08d9e4dba26f-clustermesh-secrets\") pod \"cilium-lptws\" (UID: \"7cb4c2c9-91a5-4fcd-98d9-08d9e4dba26f\") " pod="kube-system/cilium-lptws" Jul 2 00:46:42.102797 kubelet[2958]: I0702 00:46:42.102762 2958 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7cb4c2c9-91a5-4fcd-98d9-08d9e4dba26f-cilium-run\") pod \"cilium-lptws\" (UID: \"7cb4c2c9-91a5-4fcd-98d9-08d9e4dba26f\") " pod="kube-system/cilium-lptws" Jul 2 00:46:42.102987 kubelet[2958]: I0702 00:46:42.102952 2958 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7cb4c2c9-91a5-4fcd-98d9-08d9e4dba26f-hostproc\") pod \"cilium-lptws\" (UID: \"7cb4c2c9-91a5-4fcd-98d9-08d9e4dba26f\") " pod="kube-system/cilium-lptws" Jul 2 00:46:42.103177 kubelet[2958]: I0702 00:46:42.103140 2958 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7cb4c2c9-91a5-4fcd-98d9-08d9e4dba26f-etc-cni-netd\") pod 
\"cilium-lptws\" (UID: \"7cb4c2c9-91a5-4fcd-98d9-08d9e4dba26f\") " pod="kube-system/cilium-lptws" Jul 2 00:46:42.103368 kubelet[2958]: I0702 00:46:42.103330 2958 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7cb4c2c9-91a5-4fcd-98d9-08d9e4dba26f-host-proc-sys-net\") pod \"cilium-lptws\" (UID: \"7cb4c2c9-91a5-4fcd-98d9-08d9e4dba26f\") " pod="kube-system/cilium-lptws" Jul 2 00:46:42.282651 sshd[4707]: Accepted publickey for core from 139.178.89.65 port 34292 ssh2: RSA SHA256:8y6JErBds/WgSuzw1b/2wKJnltsiajeNUW/adFCuF/s Jul 2 00:46:42.284264 sshd[4707]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:46:42.292523 systemd-logind[1838]: New session 26 of user core. Jul 2 00:46:42.293753 systemd[1]: Started session-26.scope. Jul 2 00:46:42.330870 env[1854]: time="2024-07-02T00:46:42.330351581Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-lptws,Uid:7cb4c2c9-91a5-4fcd-98d9-08d9e4dba26f,Namespace:kube-system,Attempt:0,}" Jul 2 00:46:42.366695 env[1854]: time="2024-07-02T00:46:42.366545644Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:46:42.366695 env[1854]: time="2024-07-02T00:46:42.366641624Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:46:42.367482 env[1854]: time="2024-07-02T00:46:42.366963095Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:46:42.368796 env[1854]: time="2024-07-02T00:46:42.368614554Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/800e58de2007c4e00cd81175954412e0064a3c927aa1f2e0392c4dda9cb985df pid=4723 runtime=io.containerd.runc.v2 Jul 2 00:46:42.467307 env[1854]: time="2024-07-02T00:46:42.467236090Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-lptws,Uid:7cb4c2c9-91a5-4fcd-98d9-08d9e4dba26f,Namespace:kube-system,Attempt:0,} returns sandbox id \"800e58de2007c4e00cd81175954412e0064a3c927aa1f2e0392c4dda9cb985df\"" Jul 2 00:46:42.489422 env[1854]: time="2024-07-02T00:46:42.486701648Z" level=info msg="CreateContainer within sandbox \"800e58de2007c4e00cd81175954412e0064a3c927aa1f2e0392c4dda9cb985df\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 2 00:46:42.519060 env[1854]: time="2024-07-02T00:46:42.518972749Z" level=info msg="CreateContainer within sandbox \"800e58de2007c4e00cd81175954412e0064a3c927aa1f2e0392c4dda9cb985df\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"98776073536716d7983ad8b6813e7fe47b45df5d35e02598eef1edb4d35ec8df\"" Jul 2 00:46:42.522602 env[1854]: time="2024-07-02T00:46:42.522531433Z" level=info msg="StartContainer for \"98776073536716d7983ad8b6813e7fe47b45df5d35e02598eef1edb4d35ec8df\"" Jul 2 00:46:42.627933 sshd[4707]: pam_unix(sshd:session): session closed for user core Jul 2 00:46:42.632569 systemd[1]: sshd@25-172.31.19.36:22-139.178.89.65:34292.service: Deactivated successfully. Jul 2 00:46:42.635377 systemd[1]: session-26.scope: Deactivated successfully. Jul 2 00:46:42.636135 systemd-logind[1838]: Session 26 logged out. Waiting for processes to exit. Jul 2 00:46:42.640886 systemd-logind[1838]: Removed session 26. Jul 2 00:46:42.654542 systemd[1]: Started sshd@26-172.31.19.36:22-139.178.89.65:34308.service. 
Jul 2 00:46:42.693730 kubelet[2958]: I0702 00:46:42.693676 2958 setters.go:552] "Node became not ready" node="ip-172-31-19-36" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-07-02T00:46:42Z","lastTransitionTime":"2024-07-02T00:46:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Jul 2 00:46:42.727199 env[1854]: time="2024-07-02T00:46:42.727127842Z" level=info msg="StartContainer for \"98776073536716d7983ad8b6813e7fe47b45df5d35e02598eef1edb4d35ec8df\" returns successfully" Jul 2 00:46:42.825653 env[1854]: time="2024-07-02T00:46:42.825562769Z" level=info msg="shim disconnected" id=98776073536716d7983ad8b6813e7fe47b45df5d35e02598eef1edb4d35ec8df Jul 2 00:46:42.825653 env[1854]: time="2024-07-02T00:46:42.825635877Z" level=warning msg="cleaning up after shim disconnected" id=98776073536716d7983ad8b6813e7fe47b45df5d35e02598eef1edb4d35ec8df namespace=k8s.io Jul 2 00:46:42.826030 env[1854]: time="2024-07-02T00:46:42.825660418Z" level=info msg="cleaning up dead shim" Jul 2 00:46:42.846383 env[1854]: time="2024-07-02T00:46:42.846327281Z" level=warning msg="cleanup warnings time=\"2024-07-02T00:46:42Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4817 runtime=io.containerd.runc.v2\n" Jul 2 00:46:42.896850 sshd[4791]: Accepted publickey for core from 139.178.89.65 port 34308 ssh2: RSA SHA256:8y6JErBds/WgSuzw1b/2wKJnltsiajeNUW/adFCuF/s Jul 2 00:46:42.901647 sshd[4791]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:46:42.905233 env[1854]: time="2024-07-02T00:46:42.905175717Z" level=info msg="StopPodSandbox for \"800e58de2007c4e00cd81175954412e0064a3c927aa1f2e0392c4dda9cb985df\"" Jul 2 00:46:42.905565 env[1854]: time="2024-07-02T00:46:42.905523818Z" level=info msg="Container to stop \"98776073536716d7983ad8b6813e7fe47b45df5d35e02598eef1edb4d35ec8df\" must be in 
running or unknown state, current state \"CONTAINER_EXITED\"" Jul 2 00:46:42.917557 systemd[1]: Started session-27.scope. Jul 2 00:46:42.919026 systemd-logind[1838]: New session 27 of user core. Jul 2 00:46:43.001081 env[1854]: time="2024-07-02T00:46:43.000992997Z" level=info msg="shim disconnected" id=800e58de2007c4e00cd81175954412e0064a3c927aa1f2e0392c4dda9cb985df Jul 2 00:46:43.001531 env[1854]: time="2024-07-02T00:46:43.001490984Z" level=warning msg="cleaning up after shim disconnected" id=800e58de2007c4e00cd81175954412e0064a3c927aa1f2e0392c4dda9cb985df namespace=k8s.io Jul 2 00:46:43.001701 env[1854]: time="2024-07-02T00:46:43.001667801Z" level=info msg="cleaning up dead shim" Jul 2 00:46:43.016779 env[1854]: time="2024-07-02T00:46:43.016716205Z" level=warning msg="cleanup warnings time=\"2024-07-02T00:46:43Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4853 runtime=io.containerd.runc.v2\n" Jul 2 00:46:43.017699 env[1854]: time="2024-07-02T00:46:43.017650774Z" level=info msg="TearDown network for sandbox \"800e58de2007c4e00cd81175954412e0064a3c927aa1f2e0392c4dda9cb985df\" successfully" Jul 2 00:46:43.017927 env[1854]: time="2024-07-02T00:46:43.017838175Z" level=info msg="StopPodSandbox for \"800e58de2007c4e00cd81175954412e0064a3c927aa1f2e0392c4dda9cb985df\" returns successfully" Jul 2 00:46:43.126188 kubelet[2958]: I0702 00:46:43.126135 2958 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7cb4c2c9-91a5-4fcd-98d9-08d9e4dba26f-cilium-config-path\") pod \"7cb4c2c9-91a5-4fcd-98d9-08d9e4dba26f\" (UID: \"7cb4c2c9-91a5-4fcd-98d9-08d9e4dba26f\") " Jul 2 00:46:43.126473 kubelet[2958]: I0702 00:46:43.126210 2958 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7cb4c2c9-91a5-4fcd-98d9-08d9e4dba26f-cilium-run\") pod \"7cb4c2c9-91a5-4fcd-98d9-08d9e4dba26f\" (UID: 
\"7cb4c2c9-91a5-4fcd-98d9-08d9e4dba26f\") " Jul 2 00:46:43.126473 kubelet[2958]: I0702 00:46:43.126255 2958 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7cb4c2c9-91a5-4fcd-98d9-08d9e4dba26f-host-proc-sys-net\") pod \"7cb4c2c9-91a5-4fcd-98d9-08d9e4dba26f\" (UID: \"7cb4c2c9-91a5-4fcd-98d9-08d9e4dba26f\") " Jul 2 00:46:43.126473 kubelet[2958]: I0702 00:46:43.126304 2958 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tntzl\" (UniqueName: \"kubernetes.io/projected/7cb4c2c9-91a5-4fcd-98d9-08d9e4dba26f-kube-api-access-tntzl\") pod \"7cb4c2c9-91a5-4fcd-98d9-08d9e4dba26f\" (UID: \"7cb4c2c9-91a5-4fcd-98d9-08d9e4dba26f\") " Jul 2 00:46:43.126473 kubelet[2958]: I0702 00:46:43.126355 2958 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7cb4c2c9-91a5-4fcd-98d9-08d9e4dba26f-clustermesh-secrets\") pod \"7cb4c2c9-91a5-4fcd-98d9-08d9e4dba26f\" (UID: \"7cb4c2c9-91a5-4fcd-98d9-08d9e4dba26f\") " Jul 2 00:46:43.126473 kubelet[2958]: I0702 00:46:43.126416 2958 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7cb4c2c9-91a5-4fcd-98d9-08d9e4dba26f-etc-cni-netd\") pod \"7cb4c2c9-91a5-4fcd-98d9-08d9e4dba26f\" (UID: \"7cb4c2c9-91a5-4fcd-98d9-08d9e4dba26f\") " Jul 2 00:46:43.126473 kubelet[2958]: I0702 00:46:43.126466 2958 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/7cb4c2c9-91a5-4fcd-98d9-08d9e4dba26f-cilium-ipsec-secrets\") pod \"7cb4c2c9-91a5-4fcd-98d9-08d9e4dba26f\" (UID: \"7cb4c2c9-91a5-4fcd-98d9-08d9e4dba26f\") " Jul 2 00:46:43.126841 kubelet[2958]: I0702 00:46:43.126505 2958 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/7cb4c2c9-91a5-4fcd-98d9-08d9e4dba26f-lib-modules\") pod \"7cb4c2c9-91a5-4fcd-98d9-08d9e4dba26f\" (UID: \"7cb4c2c9-91a5-4fcd-98d9-08d9e4dba26f\") " Jul 2 00:46:43.126841 kubelet[2958]: I0702 00:46:43.126546 2958 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7cb4c2c9-91a5-4fcd-98d9-08d9e4dba26f-hubble-tls\") pod \"7cb4c2c9-91a5-4fcd-98d9-08d9e4dba26f\" (UID: \"7cb4c2c9-91a5-4fcd-98d9-08d9e4dba26f\") " Jul 2 00:46:43.126841 kubelet[2958]: I0702 00:46:43.126590 2958 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7cb4c2c9-91a5-4fcd-98d9-08d9e4dba26f-hostproc\") pod \"7cb4c2c9-91a5-4fcd-98d9-08d9e4dba26f\" (UID: \"7cb4c2c9-91a5-4fcd-98d9-08d9e4dba26f\") " Jul 2 00:46:43.126841 kubelet[2958]: I0702 00:46:43.126631 2958 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7cb4c2c9-91a5-4fcd-98d9-08d9e4dba26f-cilium-cgroup\") pod \"7cb4c2c9-91a5-4fcd-98d9-08d9e4dba26f\" (UID: \"7cb4c2c9-91a5-4fcd-98d9-08d9e4dba26f\") " Jul 2 00:46:43.126841 kubelet[2958]: I0702 00:46:43.126670 2958 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7cb4c2c9-91a5-4fcd-98d9-08d9e4dba26f-bpf-maps\") pod \"7cb4c2c9-91a5-4fcd-98d9-08d9e4dba26f\" (UID: \"7cb4c2c9-91a5-4fcd-98d9-08d9e4dba26f\") " Jul 2 00:46:43.126841 kubelet[2958]: I0702 00:46:43.126721 2958 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7cb4c2c9-91a5-4fcd-98d9-08d9e4dba26f-xtables-lock\") pod \"7cb4c2c9-91a5-4fcd-98d9-08d9e4dba26f\" (UID: \"7cb4c2c9-91a5-4fcd-98d9-08d9e4dba26f\") " Jul 2 00:46:43.127221 kubelet[2958]: I0702 00:46:43.126767 2958 reconciler_common.go:172] 
"operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7cb4c2c9-91a5-4fcd-98d9-08d9e4dba26f-host-proc-sys-kernel\") pod \"7cb4c2c9-91a5-4fcd-98d9-08d9e4dba26f\" (UID: \"7cb4c2c9-91a5-4fcd-98d9-08d9e4dba26f\") " Jul 2 00:46:43.127221 kubelet[2958]: I0702 00:46:43.126809 2958 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7cb4c2c9-91a5-4fcd-98d9-08d9e4dba26f-cni-path\") pod \"7cb4c2c9-91a5-4fcd-98d9-08d9e4dba26f\" (UID: \"7cb4c2c9-91a5-4fcd-98d9-08d9e4dba26f\") " Jul 2 00:46:43.127221 kubelet[2958]: I0702 00:46:43.126899 2958 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7cb4c2c9-91a5-4fcd-98d9-08d9e4dba26f-cni-path" (OuterVolumeSpecName: "cni-path") pod "7cb4c2c9-91a5-4fcd-98d9-08d9e4dba26f" (UID: "7cb4c2c9-91a5-4fcd-98d9-08d9e4dba26f"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 00:46:43.127221 kubelet[2958]: I0702 00:46:43.126950 2958 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7cb4c2c9-91a5-4fcd-98d9-08d9e4dba26f-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "7cb4c2c9-91a5-4fcd-98d9-08d9e4dba26f" (UID: "7cb4c2c9-91a5-4fcd-98d9-08d9e4dba26f"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 00:46:43.127221 kubelet[2958]: I0702 00:46:43.126988 2958 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7cb4c2c9-91a5-4fcd-98d9-08d9e4dba26f-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "7cb4c2c9-91a5-4fcd-98d9-08d9e4dba26f" (UID: "7cb4c2c9-91a5-4fcd-98d9-08d9e4dba26f"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 00:46:43.133872 kubelet[2958]: I0702 00:46:43.133796 2958 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7cb4c2c9-91a5-4fcd-98d9-08d9e4dba26f-hostproc" (OuterVolumeSpecName: "hostproc") pod "7cb4c2c9-91a5-4fcd-98d9-08d9e4dba26f" (UID: "7cb4c2c9-91a5-4fcd-98d9-08d9e4dba26f"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 00:46:43.136393 kubelet[2958]: I0702 00:46:43.136341 2958 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7cb4c2c9-91a5-4fcd-98d9-08d9e4dba26f-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "7cb4c2c9-91a5-4fcd-98d9-08d9e4dba26f" (UID: "7cb4c2c9-91a5-4fcd-98d9-08d9e4dba26f"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 00:46:43.138327 kubelet[2958]: I0702 00:46:43.137744 2958 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7cb4c2c9-91a5-4fcd-98d9-08d9e4dba26f-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "7cb4c2c9-91a5-4fcd-98d9-08d9e4dba26f" (UID: "7cb4c2c9-91a5-4fcd-98d9-08d9e4dba26f"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 00:46:43.138327 kubelet[2958]: I0702 00:46:43.137829 2958 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7cb4c2c9-91a5-4fcd-98d9-08d9e4dba26f-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "7cb4c2c9-91a5-4fcd-98d9-08d9e4dba26f" (UID: "7cb4c2c9-91a5-4fcd-98d9-08d9e4dba26f"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 00:46:43.138327 kubelet[2958]: I0702 00:46:43.137902 2958 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7cb4c2c9-91a5-4fcd-98d9-08d9e4dba26f-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "7cb4c2c9-91a5-4fcd-98d9-08d9e4dba26f" (UID: "7cb4c2c9-91a5-4fcd-98d9-08d9e4dba26f"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 00:46:43.138327 kubelet[2958]: I0702 00:46:43.137945 2958 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7cb4c2c9-91a5-4fcd-98d9-08d9e4dba26f-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "7cb4c2c9-91a5-4fcd-98d9-08d9e4dba26f" (UID: "7cb4c2c9-91a5-4fcd-98d9-08d9e4dba26f"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 00:46:43.138327 kubelet[2958]: I0702 00:46:43.137994 2958 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7cb4c2c9-91a5-4fcd-98d9-08d9e4dba26f-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "7cb4c2c9-91a5-4fcd-98d9-08d9e4dba26f" (UID: "7cb4c2c9-91a5-4fcd-98d9-08d9e4dba26f"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 00:46:43.152198 kubelet[2958]: I0702 00:46:43.150393 2958 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7cb4c2c9-91a5-4fcd-98d9-08d9e4dba26f-kube-api-access-tntzl" (OuterVolumeSpecName: "kube-api-access-tntzl") pod "7cb4c2c9-91a5-4fcd-98d9-08d9e4dba26f" (UID: "7cb4c2c9-91a5-4fcd-98d9-08d9e4dba26f"). InnerVolumeSpecName "kube-api-access-tntzl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 2 00:46:43.152198 kubelet[2958]: I0702 00:46:43.150942 2958 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7cb4c2c9-91a5-4fcd-98d9-08d9e4dba26f-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "7cb4c2c9-91a5-4fcd-98d9-08d9e4dba26f" (UID: "7cb4c2c9-91a5-4fcd-98d9-08d9e4dba26f"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 2 00:46:43.159225 kubelet[2958]: I0702 00:46:43.159086 2958 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7cb4c2c9-91a5-4fcd-98d9-08d9e4dba26f-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "7cb4c2c9-91a5-4fcd-98d9-08d9e4dba26f" (UID: "7cb4c2c9-91a5-4fcd-98d9-08d9e4dba26f"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 2 00:46:43.160842 kubelet[2958]: I0702 00:46:43.160760 2958 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7cb4c2c9-91a5-4fcd-98d9-08d9e4dba26f-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "7cb4c2c9-91a5-4fcd-98d9-08d9e4dba26f" (UID: "7cb4c2c9-91a5-4fcd-98d9-08d9e4dba26f"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 2 00:46:43.164729 kubelet[2958]: I0702 00:46:43.164644 2958 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7cb4c2c9-91a5-4fcd-98d9-08d9e4dba26f-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "7cb4c2c9-91a5-4fcd-98d9-08d9e4dba26f" (UID: "7cb4c2c9-91a5-4fcd-98d9-08d9e4dba26f"). InnerVolumeSpecName "cilium-ipsec-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 2 00:46:43.224487 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-800e58de2007c4e00cd81175954412e0064a3c927aa1f2e0392c4dda9cb985df-rootfs.mount: Deactivated successfully. Jul 2 00:46:43.225350 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-800e58de2007c4e00cd81175954412e0064a3c927aa1f2e0392c4dda9cb985df-shm.mount: Deactivated successfully. Jul 2 00:46:43.225778 systemd[1]: var-lib-kubelet-pods-7cb4c2c9\x2d91a5\x2d4fcd\x2d98d9\x2d08d9e4dba26f-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dtntzl.mount: Deactivated successfully. Jul 2 00:46:43.226158 systemd[1]: var-lib-kubelet-pods-7cb4c2c9\x2d91a5\x2d4fcd\x2d98d9\x2d08d9e4dba26f-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Jul 2 00:46:43.226622 systemd[1]: var-lib-kubelet-pods-7cb4c2c9\x2d91a5\x2d4fcd\x2d98d9\x2d08d9e4dba26f-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Jul 2 00:46:43.227102 kubelet[2958]: I0702 00:46:43.227069 2958 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7cb4c2c9-91a5-4fcd-98d9-08d9e4dba26f-cilium-run\") on node \"ip-172-31-19-36\" DevicePath \"\"" Jul 2 00:46:43.227291 kubelet[2958]: I0702 00:46:43.227270 2958 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7cb4c2c9-91a5-4fcd-98d9-08d9e4dba26f-host-proc-sys-net\") on node \"ip-172-31-19-36\" DevicePath \"\"" Jul 2 00:46:43.227471 kubelet[2958]: I0702 00:46:43.227449 2958 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7cb4c2c9-91a5-4fcd-98d9-08d9e4dba26f-cilium-config-path\") on node \"ip-172-31-19-36\" DevicePath \"\"" Jul 2 00:46:43.227640 kubelet[2958]: I0702 00:46:43.227621 2958 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7cb4c2c9-91a5-4fcd-98d9-08d9e4dba26f-clustermesh-secrets\") on node \"ip-172-31-19-36\" DevicePath \"\"" Jul 2 00:46:43.227808 kubelet[2958]: I0702 00:46:43.227787 2958 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-tntzl\" (UniqueName: \"kubernetes.io/projected/7cb4c2c9-91a5-4fcd-98d9-08d9e4dba26f-kube-api-access-tntzl\") on node \"ip-172-31-19-36\" DevicePath \"\"" Jul 2 00:46:43.227956 kubelet[2958]: I0702 00:46:43.227936 2958 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7cb4c2c9-91a5-4fcd-98d9-08d9e4dba26f-etc-cni-netd\") on node \"ip-172-31-19-36\" DevicePath \"\"" Jul 2 00:46:43.228101 kubelet[2958]: I0702 00:46:43.228082 2958 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7cb4c2c9-91a5-4fcd-98d9-08d9e4dba26f-hubble-tls\") on node \"ip-172-31-19-36\" DevicePath \"\"" Jul 2 00:46:43.228257 kubelet[2958]: I0702 
00:46:43.228236 2958 reconciler_common.go:300] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/7cb4c2c9-91a5-4fcd-98d9-08d9e4dba26f-cilium-ipsec-secrets\") on node \"ip-172-31-19-36\" DevicePath \"\"" Jul 2 00:46:43.228357 systemd[1]: var-lib-kubelet-pods-7cb4c2c9\x2d91a5\x2d4fcd\x2d98d9\x2d08d9e4dba26f-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jul 2 00:46:43.228600 kubelet[2958]: I0702 00:46:43.228577 2958 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7cb4c2c9-91a5-4fcd-98d9-08d9e4dba26f-lib-modules\") on node \"ip-172-31-19-36\" DevicePath \"\"" Jul 2 00:46:43.228738 kubelet[2958]: I0702 00:46:43.228718 2958 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7cb4c2c9-91a5-4fcd-98d9-08d9e4dba26f-hostproc\") on node \"ip-172-31-19-36\" DevicePath \"\"" Jul 2 00:46:43.229535 kubelet[2958]: I0702 00:46:43.229491 2958 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7cb4c2c9-91a5-4fcd-98d9-08d9e4dba26f-host-proc-sys-kernel\") on node \"ip-172-31-19-36\" DevicePath \"\"" Jul 2 00:46:43.231041 kubelet[2958]: I0702 00:46:43.231007 2958 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7cb4c2c9-91a5-4fcd-98d9-08d9e4dba26f-cilium-cgroup\") on node \"ip-172-31-19-36\" DevicePath \"\"" Jul 2 00:46:43.232174 kubelet[2958]: I0702 00:46:43.232141 2958 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7cb4c2c9-91a5-4fcd-98d9-08d9e4dba26f-bpf-maps\") on node \"ip-172-31-19-36\" DevicePath \"\"" Jul 2 00:46:43.233180 kubelet[2958]: I0702 00:46:43.232904 2958 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/7cb4c2c9-91a5-4fcd-98d9-08d9e4dba26f-xtables-lock\") on node \"ip-172-31-19-36\" DevicePath \"\"" Jul 2 00:46:43.233180 kubelet[2958]: I0702 00:46:43.232952 2958 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7cb4c2c9-91a5-4fcd-98d9-08d9e4dba26f-cni-path\") on node \"ip-172-31-19-36\" DevicePath \"\"" Jul 2 00:46:43.909304 kubelet[2958]: I0702 00:46:43.909262 2958 scope.go:117] "RemoveContainer" containerID="98776073536716d7983ad8b6813e7fe47b45df5d35e02598eef1edb4d35ec8df" Jul 2 00:46:43.914504 env[1854]: time="2024-07-02T00:46:43.914051191Z" level=info msg="RemoveContainer for \"98776073536716d7983ad8b6813e7fe47b45df5d35e02598eef1edb4d35ec8df\"" Jul 2 00:46:43.919459 env[1854]: time="2024-07-02T00:46:43.919381797Z" level=info msg="RemoveContainer for \"98776073536716d7983ad8b6813e7fe47b45df5d35e02598eef1edb4d35ec8df\" returns successfully" Jul 2 00:46:43.970095 kubelet[2958]: I0702 00:46:43.970011 2958 topology_manager.go:215] "Topology Admit Handler" podUID="cb053315-ecd3-40c7-a8d2-85a9620b8bf6" podNamespace="kube-system" podName="cilium-jcngl" Jul 2 00:46:43.970300 kubelet[2958]: E0702 00:46:43.970165 2958 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7cb4c2c9-91a5-4fcd-98d9-08d9e4dba26f" containerName="mount-cgroup" Jul 2 00:46:43.970300 kubelet[2958]: I0702 00:46:43.970245 2958 memory_manager.go:346] "RemoveStaleState removing state" podUID="7cb4c2c9-91a5-4fcd-98d9-08d9e4dba26f" containerName="mount-cgroup" Jul 2 00:46:44.037863 kubelet[2958]: I0702 00:46:44.037764 2958 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/cb053315-ecd3-40c7-a8d2-85a9620b8bf6-hubble-tls\") pod \"cilium-jcngl\" (UID: \"cb053315-ecd3-40c7-a8d2-85a9620b8bf6\") " pod="kube-system/cilium-jcngl" Jul 2 00:46:44.037863 kubelet[2958]: I0702 00:46:44.037859 2958 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cb053315-ecd3-40c7-a8d2-85a9620b8bf6-etc-cni-netd\") pod \"cilium-jcngl\" (UID: \"cb053315-ecd3-40c7-a8d2-85a9620b8bf6\") " pod="kube-system/cilium-jcngl" Jul 2 00:46:44.038219 kubelet[2958]: I0702 00:46:44.037925 2958 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cb053315-ecd3-40c7-a8d2-85a9620b8bf6-xtables-lock\") pod \"cilium-jcngl\" (UID: \"cb053315-ecd3-40c7-a8d2-85a9620b8bf6\") " pod="kube-system/cilium-jcngl" Jul 2 00:46:44.038219 kubelet[2958]: I0702 00:46:44.038003 2958 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x44fz\" (UniqueName: \"kubernetes.io/projected/cb053315-ecd3-40c7-a8d2-85a9620b8bf6-kube-api-access-x44fz\") pod \"cilium-jcngl\" (UID: \"cb053315-ecd3-40c7-a8d2-85a9620b8bf6\") " pod="kube-system/cilium-jcngl" Jul 2 00:46:44.038219 kubelet[2958]: I0702 00:46:44.038062 2958 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/cb053315-ecd3-40c7-a8d2-85a9620b8bf6-cilium-run\") pod \"cilium-jcngl\" (UID: \"cb053315-ecd3-40c7-a8d2-85a9620b8bf6\") " pod="kube-system/cilium-jcngl" Jul 2 00:46:44.038219 kubelet[2958]: I0702 00:46:44.038108 2958 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/cb053315-ecd3-40c7-a8d2-85a9620b8bf6-hostproc\") pod \"cilium-jcngl\" (UID: \"cb053315-ecd3-40c7-a8d2-85a9620b8bf6\") " pod="kube-system/cilium-jcngl" Jul 2 00:46:44.038219 kubelet[2958]: I0702 00:46:44.038154 2958 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: 
\"kubernetes.io/secret/cb053315-ecd3-40c7-a8d2-85a9620b8bf6-cilium-ipsec-secrets\") pod \"cilium-jcngl\" (UID: \"cb053315-ecd3-40c7-a8d2-85a9620b8bf6\") " pod="kube-system/cilium-jcngl" Jul 2 00:46:44.038219 kubelet[2958]: I0702 00:46:44.038201 2958 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/cb053315-ecd3-40c7-a8d2-85a9620b8bf6-cilium-config-path\") pod \"cilium-jcngl\" (UID: \"cb053315-ecd3-40c7-a8d2-85a9620b8bf6\") " pod="kube-system/cilium-jcngl" Jul 2 00:46:44.038795 kubelet[2958]: I0702 00:46:44.038249 2958 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cb053315-ecd3-40c7-a8d2-85a9620b8bf6-lib-modules\") pod \"cilium-jcngl\" (UID: \"cb053315-ecd3-40c7-a8d2-85a9620b8bf6\") " pod="kube-system/cilium-jcngl" Jul 2 00:46:44.038795 kubelet[2958]: I0702 00:46:44.038297 2958 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/cb053315-ecd3-40c7-a8d2-85a9620b8bf6-clustermesh-secrets\") pod \"cilium-jcngl\" (UID: \"cb053315-ecd3-40c7-a8d2-85a9620b8bf6\") " pod="kube-system/cilium-jcngl" Jul 2 00:46:44.038795 kubelet[2958]: I0702 00:46:44.038370 2958 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/cb053315-ecd3-40c7-a8d2-85a9620b8bf6-host-proc-sys-kernel\") pod \"cilium-jcngl\" (UID: \"cb053315-ecd3-40c7-a8d2-85a9620b8bf6\") " pod="kube-system/cilium-jcngl" Jul 2 00:46:44.038795 kubelet[2958]: I0702 00:46:44.038476 2958 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/cb053315-ecd3-40c7-a8d2-85a9620b8bf6-bpf-maps\") pod \"cilium-jcngl\" (UID: 
\"cb053315-ecd3-40c7-a8d2-85a9620b8bf6\") " pod="kube-system/cilium-jcngl" Jul 2 00:46:44.038795 kubelet[2958]: I0702 00:46:44.038527 2958 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/cb053315-ecd3-40c7-a8d2-85a9620b8bf6-cilium-cgroup\") pod \"cilium-jcngl\" (UID: \"cb053315-ecd3-40c7-a8d2-85a9620b8bf6\") " pod="kube-system/cilium-jcngl" Jul 2 00:46:44.038795 kubelet[2958]: I0702 00:46:44.038570 2958 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/cb053315-ecd3-40c7-a8d2-85a9620b8bf6-cni-path\") pod \"cilium-jcngl\" (UID: \"cb053315-ecd3-40c7-a8d2-85a9620b8bf6\") " pod="kube-system/cilium-jcngl" Jul 2 00:46:44.039264 kubelet[2958]: I0702 00:46:44.038614 2958 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/cb053315-ecd3-40c7-a8d2-85a9620b8bf6-host-proc-sys-net\") pod \"cilium-jcngl\" (UID: \"cb053315-ecd3-40c7-a8d2-85a9620b8bf6\") " pod="kube-system/cilium-jcngl" Jul 2 00:46:44.291363 env[1854]: time="2024-07-02T00:46:44.290641142Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-jcngl,Uid:cb053315-ecd3-40c7-a8d2-85a9620b8bf6,Namespace:kube-system,Attempt:0,}" Jul 2 00:46:44.322163 env[1854]: time="2024-07-02T00:46:44.321995534Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:46:44.322526 env[1854]: time="2024-07-02T00:46:44.322126520Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:46:44.322526 env[1854]: time="2024-07-02T00:46:44.322154698Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:46:44.322903 env[1854]: time="2024-07-02T00:46:44.322843279Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/284aee7d01ad8d7bebe53721459fca4ffa350ad7a7e25df1296275cc639e7fc1 pid=4887 runtime=io.containerd.runc.v2 Jul 2 00:46:44.406470 env[1854]: time="2024-07-02T00:46:44.406414509Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-jcngl,Uid:cb053315-ecd3-40c7-a8d2-85a9620b8bf6,Namespace:kube-system,Attempt:0,} returns sandbox id \"284aee7d01ad8d7bebe53721459fca4ffa350ad7a7e25df1296275cc639e7fc1\"" Jul 2 00:46:44.412782 env[1854]: time="2024-07-02T00:46:44.411344789Z" level=info msg="CreateContainer within sandbox \"284aee7d01ad8d7bebe53721459fca4ffa350ad7a7e25df1296275cc639e7fc1\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 2 00:46:44.439485 env[1854]: time="2024-07-02T00:46:44.439379162Z" level=info msg="CreateContainer within sandbox \"284aee7d01ad8d7bebe53721459fca4ffa350ad7a7e25df1296275cc639e7fc1\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"2616c9783a65349d7cad7743a1e6b197d758fc91a92e87fe04f43c214b5ff9ba\"" Jul 2 00:46:44.440514 env[1854]: time="2024-07-02T00:46:44.440461470Z" level=info msg="StartContainer for \"2616c9783a65349d7cad7743a1e6b197d758fc91a92e87fe04f43c214b5ff9ba\"" Jul 2 00:46:44.539704 env[1854]: time="2024-07-02T00:46:44.539637456Z" level=info msg="StartContainer for \"2616c9783a65349d7cad7743a1e6b197d758fc91a92e87fe04f43c214b5ff9ba\" returns successfully" Jul 2 00:46:44.629169 env[1854]: time="2024-07-02T00:46:44.629106968Z" level=info msg="shim disconnected" id=2616c9783a65349d7cad7743a1e6b197d758fc91a92e87fe04f43c214b5ff9ba Jul 2 00:46:44.629566 env[1854]: time="2024-07-02T00:46:44.629527780Z" level=warning msg="cleaning up after shim disconnected" id=2616c9783a65349d7cad7743a1e6b197d758fc91a92e87fe04f43c214b5ff9ba namespace=k8s.io Jul 2 
00:46:44.629708 env[1854]: time="2024-07-02T00:46:44.629679624Z" level=info msg="cleaning up dead shim" Jul 2 00:46:44.671294 env[1854]: time="2024-07-02T00:46:44.671227112Z" level=warning msg="cleanup warnings time=\"2024-07-02T00:46:44Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4970 runtime=io.containerd.runc.v2\n" Jul 2 00:46:44.682683 kubelet[2958]: E0702 00:46:44.682634 2958 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 2 00:46:44.918966 env[1854]: time="2024-07-02T00:46:44.918812566Z" level=info msg="CreateContainer within sandbox \"284aee7d01ad8d7bebe53721459fca4ffa350ad7a7e25df1296275cc639e7fc1\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 2 00:46:44.945534 env[1854]: time="2024-07-02T00:46:44.945442872Z" level=info msg="CreateContainer within sandbox \"284aee7d01ad8d7bebe53721459fca4ffa350ad7a7e25df1296275cc639e7fc1\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"76e71a1794533e3cdfb119ed8b98096b11e52284076307bb539104eefa10c896\"" Jul 2 00:46:44.946838 env[1854]: time="2024-07-02T00:46:44.946759539Z" level=info msg="StartContainer for \"76e71a1794533e3cdfb119ed8b98096b11e52284076307bb539104eefa10c896\"" Jul 2 00:46:45.044725 env[1854]: time="2024-07-02T00:46:45.044650332Z" level=info msg="StartContainer for \"76e71a1794533e3cdfb119ed8b98096b11e52284076307bb539104eefa10c896\" returns successfully" Jul 2 00:46:45.091898 env[1854]: time="2024-07-02T00:46:45.091807355Z" level=info msg="shim disconnected" id=76e71a1794533e3cdfb119ed8b98096b11e52284076307bb539104eefa10c896 Jul 2 00:46:45.092172 env[1854]: time="2024-07-02T00:46:45.091952982Z" level=warning msg="cleaning up after shim disconnected" id=76e71a1794533e3cdfb119ed8b98096b11e52284076307bb539104eefa10c896 namespace=k8s.io Jul 2 00:46:45.092172 env[1854]: 
time="2024-07-02T00:46:45.091988636Z" level=info msg="cleaning up dead shim" Jul 2 00:46:45.106175 env[1854]: time="2024-07-02T00:46:45.106095962Z" level=warning msg="cleanup warnings time=\"2024-07-02T00:46:45Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5033 runtime=io.containerd.runc.v2\n" Jul 2 00:46:45.426481 kubelet[2958]: I0702 00:46:45.426386 2958 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="7cb4c2c9-91a5-4fcd-98d9-08d9e4dba26f" path="/var/lib/kubelet/pods/7cb4c2c9-91a5-4fcd-98d9-08d9e4dba26f/volumes" Jul 2 00:46:45.928815 env[1854]: time="2024-07-02T00:46:45.928369334Z" level=info msg="CreateContainer within sandbox \"284aee7d01ad8d7bebe53721459fca4ffa350ad7a7e25df1296275cc639e7fc1\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 2 00:46:45.966044 env[1854]: time="2024-07-02T00:46:45.965952477Z" level=info msg="CreateContainer within sandbox \"284aee7d01ad8d7bebe53721459fca4ffa350ad7a7e25df1296275cc639e7fc1\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"a2c5aa27f4092872c9255f5eefbd8ac07ad8541fe76fbb8b9af7fd4adcc868d4\"" Jul 2 00:46:45.967329 env[1854]: time="2024-07-02T00:46:45.967271844Z" level=info msg="StartContainer for \"a2c5aa27f4092872c9255f5eefbd8ac07ad8541fe76fbb8b9af7fd4adcc868d4\"" Jul 2 00:46:46.096557 env[1854]: time="2024-07-02T00:46:46.096484308Z" level=info msg="StartContainer for \"a2c5aa27f4092872c9255f5eefbd8ac07ad8541fe76fbb8b9af7fd4adcc868d4\" returns successfully" Jul 2 00:46:46.130071 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a2c5aa27f4092872c9255f5eefbd8ac07ad8541fe76fbb8b9af7fd4adcc868d4-rootfs.mount: Deactivated successfully. 
Jul 2 00:46:46.137455 env[1854]: time="2024-07-02T00:46:46.137364247Z" level=info msg="shim disconnected" id=a2c5aa27f4092872c9255f5eefbd8ac07ad8541fe76fbb8b9af7fd4adcc868d4 Jul 2 00:46:46.137784 env[1854]: time="2024-07-02T00:46:46.137751925Z" level=warning msg="cleaning up after shim disconnected" id=a2c5aa27f4092872c9255f5eefbd8ac07ad8541fe76fbb8b9af7fd4adcc868d4 namespace=k8s.io Jul 2 00:46:46.137946 env[1854]: time="2024-07-02T00:46:46.137916861Z" level=info msg="cleaning up dead shim" Jul 2 00:46:46.152731 env[1854]: time="2024-07-02T00:46:46.152672042Z" level=warning msg="cleanup warnings time=\"2024-07-02T00:46:46Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5093 runtime=io.containerd.runc.v2\n" Jul 2 00:46:46.939918 env[1854]: time="2024-07-02T00:46:46.938858914Z" level=info msg="CreateContainer within sandbox \"284aee7d01ad8d7bebe53721459fca4ffa350ad7a7e25df1296275cc639e7fc1\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 2 00:46:46.983145 env[1854]: time="2024-07-02T00:46:46.982990601Z" level=info msg="CreateContainer within sandbox \"284aee7d01ad8d7bebe53721459fca4ffa350ad7a7e25df1296275cc639e7fc1\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"99d547be741532df313521f6df091c4cb22d71555fc6d996af091a21b7b3cafb\"" Jul 2 00:46:46.985958 env[1854]: time="2024-07-02T00:46:46.984245466Z" level=info msg="StartContainer for \"99d547be741532df313521f6df091c4cb22d71555fc6d996af091a21b7b3cafb\"" Jul 2 00:46:47.084545 env[1854]: time="2024-07-02T00:46:47.084455267Z" level=info msg="StartContainer for \"99d547be741532df313521f6df091c4cb22d71555fc6d996af091a21b7b3cafb\" returns successfully" Jul 2 00:46:47.124181 env[1854]: time="2024-07-02T00:46:47.124099417Z" level=info msg="shim disconnected" id=99d547be741532df313521f6df091c4cb22d71555fc6d996af091a21b7b3cafb Jul 2 00:46:47.124181 env[1854]: time="2024-07-02T00:46:47.124175861Z" level=warning msg="cleaning up after shim disconnected" 
id=99d547be741532df313521f6df091c4cb22d71555fc6d996af091a21b7b3cafb namespace=k8s.io Jul 2 00:46:47.124681 env[1854]: time="2024-07-02T00:46:47.124199730Z" level=info msg="cleaning up dead shim" Jul 2 00:46:47.139263 env[1854]: time="2024-07-02T00:46:47.139188313Z" level=warning msg="cleanup warnings time=\"2024-07-02T00:46:47Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5146 runtime=io.containerd.runc.v2\n" Jul 2 00:46:47.422599 kubelet[2958]: E0702 00:46:47.422229 2958 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-5dd5756b68-hmwds" podUID="6e8a7475-f64e-4979-9008-f91d8aa496fc" Jul 2 00:46:47.952008 env[1854]: time="2024-07-02T00:46:47.951799985Z" level=info msg="CreateContainer within sandbox \"284aee7d01ad8d7bebe53721459fca4ffa350ad7a7e25df1296275cc639e7fc1\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 2 00:46:47.956509 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-99d547be741532df313521f6df091c4cb22d71555fc6d996af091a21b7b3cafb-rootfs.mount: Deactivated successfully. Jul 2 00:46:47.985276 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount401428123.mount: Deactivated successfully. 
Jul 2 00:46:48.005928 env[1854]: time="2024-07-02T00:46:48.005856572Z" level=info msg="CreateContainer within sandbox \"284aee7d01ad8d7bebe53721459fca4ffa350ad7a7e25df1296275cc639e7fc1\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"4b55ce3648e13065df16a0dd02f167f5699980d84141b9b486d53a498c4b1c18\"" Jul 2 00:46:48.007312 env[1854]: time="2024-07-02T00:46:48.007249216Z" level=info msg="StartContainer for \"4b55ce3648e13065df16a0dd02f167f5699980d84141b9b486d53a498c4b1c18\"" Jul 2 00:46:48.127442 env[1854]: time="2024-07-02T00:46:48.126532974Z" level=info msg="StartContainer for \"4b55ce3648e13065df16a0dd02f167f5699980d84141b9b486d53a498c4b1c18\" returns successfully" Jul 2 00:46:48.899670 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm-aes-ce))) Jul 2 00:46:49.421370 systemd[1]: run-containerd-runc-k8s.io-4b55ce3648e13065df16a0dd02f167f5699980d84141b9b486d53a498c4b1c18-runc.crDB2F.mount: Deactivated successfully. Jul 2 00:46:49.425964 kubelet[2958]: E0702 00:46:49.425903 2958 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-5dd5756b68-hmwds" podUID="6e8a7475-f64e-4979-9008-f91d8aa496fc" Jul 2 00:46:51.688121 systemd[1]: run-containerd-runc-k8s.io-4b55ce3648e13065df16a0dd02f167f5699980d84141b9b486d53a498c4b1c18-runc.17zXhi.mount: Deactivated successfully. Jul 2 00:46:52.945375 systemd-networkd[1518]: lxc_health: Link UP Jul 2 00:46:52.956447 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Jul 2 00:46:52.956811 systemd-networkd[1518]: lxc_health: Gained carrier Jul 2 00:46:52.960389 (udev-worker)[5705]: Network interface NamePolicy= disabled on kernel command line. 
Jul 2 00:46:54.013870 systemd[1]: run-containerd-runc-k8s.io-4b55ce3648e13065df16a0dd02f167f5699980d84141b9b486d53a498c4b1c18-runc.98VkId.mount: Deactivated successfully.
Jul 2 00:46:54.321990 kubelet[2958]: I0702 00:46:54.321842 2958 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-jcngl" podStartSLOduration=11.321787121 podCreationTimestamp="2024-07-02 00:46:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 00:46:48.978174973 +0000 UTC m=+130.012017023" watchObservedRunningTime="2024-07-02 00:46:54.321787121 +0000 UTC m=+135.355629171"
Jul 2 00:46:54.581134 systemd-networkd[1518]: lxc_health: Gained IPv6LL
Jul 2 00:46:56.348497 systemd[1]: run-containerd-runc-k8s.io-4b55ce3648e13065df16a0dd02f167f5699980d84141b9b486d53a498c4b1c18-runc.UCdkSH.mount: Deactivated successfully.
Jul 2 00:46:58.727120 systemd[1]: run-containerd-runc-k8s.io-4b55ce3648e13065df16a0dd02f167f5699980d84141b9b486d53a498c4b1c18-runc.4AC0Ue.mount: Deactivated successfully.
Jul 2 00:47:01.151944 systemd[1]: run-containerd-runc-k8s.io-4b55ce3648e13065df16a0dd02f167f5699980d84141b9b486d53a498c4b1c18-runc.CKMoMw.mount: Deactivated successfully.
Jul 2 00:47:01.316826 sshd[4791]: pam_unix(sshd:session): session closed for user core
Jul 2 00:47:01.323153 systemd-logind[1838]: Session 27 logged out. Waiting for processes to exit.
Jul 2 00:47:01.326246 systemd[1]: sshd@26-172.31.19.36:22-139.178.89.65:34308.service: Deactivated successfully.
Jul 2 00:47:01.327966 systemd[1]: session-27.scope: Deactivated successfully.
Jul 2 00:47:01.330997 systemd-logind[1838]: Removed session 27.
Jul 2 00:47:39.426818 env[1854]: time="2024-07-02T00:47:39.426731017Z" level=info msg="StopPodSandbox for \"800e58de2007c4e00cd81175954412e0064a3c927aa1f2e0392c4dda9cb985df\""
Jul 2 00:47:39.427679 env[1854]: time="2024-07-02T00:47:39.426892490Z" level=info msg="TearDown network for sandbox \"800e58de2007c4e00cd81175954412e0064a3c927aa1f2e0392c4dda9cb985df\" successfully"
Jul 2 00:47:39.427679 env[1854]: time="2024-07-02T00:47:39.426951405Z" level=info msg="StopPodSandbox for \"800e58de2007c4e00cd81175954412e0064a3c927aa1f2e0392c4dda9cb985df\" returns successfully"
Jul 2 00:47:39.429253 env[1854]: time="2024-07-02T00:47:39.429014274Z" level=info msg="RemovePodSandbox for \"800e58de2007c4e00cd81175954412e0064a3c927aa1f2e0392c4dda9cb985df\""
Jul 2 00:47:39.429749 env[1854]: time="2024-07-02T00:47:39.429110603Z" level=info msg="Forcibly stopping sandbox \"800e58de2007c4e00cd81175954412e0064a3c927aa1f2e0392c4dda9cb985df\""
Jul 2 00:47:39.429749 env[1854]: time="2024-07-02T00:47:39.429693917Z" level=info msg="TearDown network for sandbox \"800e58de2007c4e00cd81175954412e0064a3c927aa1f2e0392c4dda9cb985df\" successfully"
Jul 2 00:47:39.436087 env[1854]: time="2024-07-02T00:47:39.436028407Z" level=info msg="RemovePodSandbox \"800e58de2007c4e00cd81175954412e0064a3c927aa1f2e0392c4dda9cb985df\" returns successfully"
Jul 2 00:47:47.111515 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cacf382201c8b93719399682859c401bc3d776a08afe267f574a37e89382c13e-rootfs.mount: Deactivated successfully.
Jul 2 00:47:47.121017 env[1854]: time="2024-07-02T00:47:47.120926309Z" level=info msg="shim disconnected" id=cacf382201c8b93719399682859c401bc3d776a08afe267f574a37e89382c13e
Jul 2 00:47:47.121017 env[1854]: time="2024-07-02T00:47:47.120999997Z" level=warning msg="cleaning up after shim disconnected" id=cacf382201c8b93719399682859c401bc3d776a08afe267f574a37e89382c13e namespace=k8s.io
Jul 2 00:47:47.121796 env[1854]: time="2024-07-02T00:47:47.121025015Z" level=info msg="cleaning up dead shim"
Jul 2 00:47:47.137297 env[1854]: time="2024-07-02T00:47:47.137224043Z" level=warning msg="cleanup warnings time=\"2024-07-02T00:47:47Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5844 runtime=io.containerd.runc.v2\n"
Jul 2 00:47:48.124321 kubelet[2958]: I0702 00:47:48.124286 2958 scope.go:117] "RemoveContainer" containerID="cacf382201c8b93719399682859c401bc3d776a08afe267f574a37e89382c13e"
Jul 2 00:47:48.130806 env[1854]: time="2024-07-02T00:47:48.130750422Z" level=info msg="CreateContainer within sandbox \"b1292ff693c3fa3931deabb175ab936932bc6b66759124e52fda5e6a2a802eaa\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Jul 2 00:47:48.161458 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2902797269.mount: Deactivated successfully.
Jul 2 00:47:48.175596 env[1854]: time="2024-07-02T00:47:48.175490533Z" level=info msg="CreateContainer within sandbox \"b1292ff693c3fa3931deabb175ab936932bc6b66759124e52fda5e6a2a802eaa\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"65d64d1d1afca482b1cc9938dd6938f122a0eafe727de162cbe62009e1c01089\""
Jul 2 00:47:48.176696 env[1854]: time="2024-07-02T00:47:48.176637679Z" level=info msg="StartContainer for \"65d64d1d1afca482b1cc9938dd6938f122a0eafe727de162cbe62009e1c01089\""
Jul 2 00:47:48.312553 env[1854]: time="2024-07-02T00:47:48.312485899Z" level=info msg="StartContainer for \"65d64d1d1afca482b1cc9938dd6938f122a0eafe727de162cbe62009e1c01089\" returns successfully"
Jul 2 00:47:49.148986 systemd[1]: run-containerd-runc-k8s.io-65d64d1d1afca482b1cc9938dd6938f122a0eafe727de162cbe62009e1c01089-runc.ZAxuuE.mount: Deactivated successfully.
Jul 2 00:47:53.016811 kubelet[2958]: E0702 00:47:53.016768 2958 controller.go:193] "Failed to update lease" err="Put \"https://172.31.19.36:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-19-36?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jul 2 00:47:53.162829 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-12865c08b60a2a00a4904722b7383289fe4980399a23017922d588f18f9321bc-rootfs.mount: Deactivated successfully.
Jul 2 00:47:53.179038 env[1854]: time="2024-07-02T00:47:53.178975193Z" level=info msg="shim disconnected" id=12865c08b60a2a00a4904722b7383289fe4980399a23017922d588f18f9321bc
Jul 2 00:47:53.179856 env[1854]: time="2024-07-02T00:47:53.179817167Z" level=warning msg="cleaning up after shim disconnected" id=12865c08b60a2a00a4904722b7383289fe4980399a23017922d588f18f9321bc namespace=k8s.io
Jul 2 00:47:53.179982 env[1854]: time="2024-07-02T00:47:53.179954380Z" level=info msg="cleaning up dead shim"
Jul 2 00:47:53.194327 env[1854]: time="2024-07-02T00:47:53.194270317Z" level=warning msg="cleanup warnings time=\"2024-07-02T00:47:53Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5907 runtime=io.containerd.runc.v2\n"
Jul 2 00:47:54.144999 kubelet[2958]: I0702 00:47:54.144961 2958 scope.go:117] "RemoveContainer" containerID="12865c08b60a2a00a4904722b7383289fe4980399a23017922d588f18f9321bc"
Jul 2 00:47:54.149349 env[1854]: time="2024-07-02T00:47:54.149287347Z" level=info msg="CreateContainer within sandbox \"c2d6357645e664b97feed9465e28022ce6d709c994787c91cc3c15e8a8a1066c\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Jul 2 00:47:54.172997 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1261263050.mount: Deactivated successfully.
Jul 2 00:47:54.188572 env[1854]: time="2024-07-02T00:47:54.188511378Z" level=info msg="CreateContainer within sandbox \"c2d6357645e664b97feed9465e28022ce6d709c994787c91cc3c15e8a8a1066c\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"37c8c6dad2ff1ebea3fb1064e50aa63b929a1ffb69aaa27b538f2e4049a6b724\""
Jul 2 00:47:54.189988 env[1854]: time="2024-07-02T00:47:54.189940233Z" level=info msg="StartContainer for \"37c8c6dad2ff1ebea3fb1064e50aa63b929a1ffb69aaa27b538f2e4049a6b724\""
Jul 2 00:47:54.328669 env[1854]: time="2024-07-02T00:47:54.328595353Z" level=info msg="StartContainer for \"37c8c6dad2ff1ebea3fb1064e50aa63b929a1ffb69aaa27b538f2e4049a6b724\" returns successfully"
Jul 2 00:48:03.018124 kubelet[2958]: E0702 00:48:03.018063 2958 controller.go:193] "Failed to update lease" err="Put \"https://172.31.19.36:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-19-36?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"