Jun 25 18:20:23.212303 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083] Jun 25 18:20:23.212351 kernel: Linux version 6.6.35-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.2.1_p20240210 p14) 13.2.1 20240210, GNU ld (Gentoo 2.41 p5) 2.41.0) #1 SMP PREEMPT Tue Jun 25 17:19:03 -00 2024 Jun 25 18:20:23.212377 kernel: KASLR disabled due to lack of seed Jun 25 18:20:23.212394 kernel: efi: EFI v2.7 by EDK II Jun 25 18:20:23.212410 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7ac1aa98 MEMRESERVE=0x7852ee18 Jun 25 18:20:23.212426 kernel: ACPI: Early table checksum verification disabled Jun 25 18:20:23.212443 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON) Jun 25 18:20:23.212459 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013) Jun 25 18:20:23.212475 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001) Jun 25 18:20:23.212491 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527) Jun 25 18:20:23.212512 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001) Jun 25 18:20:23.212528 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001) Jun 25 18:20:23.212543 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001) Jun 25 18:20:23.212560 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001) Jun 25 18:20:23.212578 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001) Jun 25 18:20:23.212613 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001) Jun 25 18:20:23.212637 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001) Jun 25 18:20:23.212654 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200 Jun 25 18:20:23.212671 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200') Jun 25 18:20:23.212688 kernel: printk: bootconsole [uart0] enabled Jun 25 18:20:23.212705 kernel: NUMA: Failed to initialise from firmware Jun 25 18:20:23.212722 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff] Jun 25 18:20:23.212739 kernel: NUMA: NODE_DATA [mem 0x4b583f800-0x4b5844fff] Jun 25 18:20:23.212755 kernel: Zone ranges: Jun 25 18:20:23.212772 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff] Jun 25 18:20:23.212789 kernel: DMA32 empty Jun 25 18:20:23.212811 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff] Jun 25 18:20:23.212828 kernel: Movable zone start for each node Jun 25 18:20:23.212844 kernel: Early memory node ranges Jun 25 18:20:23.212861 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff] Jun 25 18:20:23.212877 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff] Jun 25 18:20:23.212893 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff] Jun 25 18:20:23.212910 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff] Jun 25 18:20:23.212926 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff] Jun 25 18:20:23.212943 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff] Jun 25 18:20:23.212959 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff] Jun 25 18:20:23.212976 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff] Jun 25 18:20:23.212992 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff] Jun 25 18:20:23.213013 kernel: On 
node 0, zone Normal: 8192 pages in unavailable ranges Jun 25 18:20:23.213030 kernel: psci: probing for conduit method from ACPI. Jun 25 18:20:23.213054 kernel: psci: PSCIv1.0 detected in firmware. Jun 25 18:20:23.213072 kernel: psci: Using standard PSCI v0.2 function IDs Jun 25 18:20:23.213089 kernel: psci: Trusted OS migration not required Jun 25 18:20:23.213111 kernel: psci: SMC Calling Convention v1.1 Jun 25 18:20:23.213129 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976 Jun 25 18:20:23.213177 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096 Jun 25 18:20:23.213200 kernel: pcpu-alloc: [0] 0 [0] 1 Jun 25 18:20:23.213218 kernel: Detected PIPT I-cache on CPU0 Jun 25 18:20:23.213236 kernel: CPU features: detected: GIC system register CPU interface Jun 25 18:20:23.213253 kernel: CPU features: detected: Spectre-v2 Jun 25 18:20:23.213271 kernel: CPU features: detected: Spectre-v3a Jun 25 18:20:23.213289 kernel: CPU features: detected: Spectre-BHB Jun 25 18:20:23.213306 kernel: CPU features: detected: ARM erratum 1742098 Jun 25 18:20:23.213324 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923 Jun 25 18:20:23.213347 kernel: alternatives: applying boot alternatives Jun 25 18:20:23.213367 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=e6069a8408a0ca7e7bc40a0bde7fe3ef89df2f98c4bdd2e7e7f9f8f3f8ad207f Jun 25 18:20:23.213386 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jun 25 18:20:23.213404 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jun 25 18:20:23.213422 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jun 25 18:20:23.213440 kernel: Fallback order for Node 0: 0 Jun 25 18:20:23.213457 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872 Jun 25 18:20:23.213474 kernel: Policy zone: Normal Jun 25 18:20:23.213492 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jun 25 18:20:23.213509 kernel: software IO TLB: area num 2. Jun 25 18:20:23.213527 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB) Jun 25 18:20:23.213550 kernel: Memory: 3820536K/4030464K available (10240K kernel code, 2182K rwdata, 8072K rodata, 39040K init, 897K bss, 209928K reserved, 0K cma-reserved) Jun 25 18:20:23.213568 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jun 25 18:20:23.213585 kernel: trace event string verifier disabled Jun 25 18:20:23.213603 kernel: rcu: Preemptible hierarchical RCU implementation. Jun 25 18:20:23.213622 kernel: rcu: RCU event tracing is enabled. Jun 25 18:20:23.213640 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jun 25 18:20:23.213659 kernel: Trampoline variant of Tasks RCU enabled. Jun 25 18:20:23.213676 kernel: Tracing variant of Tasks RCU enabled. Jun 25 18:20:23.213694 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
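
Aside (illustrative, not part of the log): the kernel command line logged above carries Flatcar's root and dm-verity parameters as a mix of bare flags and key=value pairs. A minimal Python sketch for splitting such a string, using an abbreviated sample copied from the entries above:

import shlex

# Abbreviated copy of the cmdline logged above; the real one carries more options.
cmdline = ("BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr "
           "verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 "
           "rootflags=rw mount.usrflags=ro root=LABEL=ROOT "
           "console=ttyS0,115200n8 earlycon flatcar.first_boot=detected")

flags, params = [], {}
for token in shlex.split(cmdline):
    if "=" in token:
        key, value = token.split("=", 1)  # split on the first '=' only
        params[key] = value               # so params["verity.usr"] keeps its own '='
    else:
        flags.append(token)

print(flags)            # ['earlycon']
print(params["root"])   # 'LABEL=ROOT'
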
Jun 25 18:20:23.213712 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jun 25 18:20:23.213730 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Jun 25 18:20:23.213752 kernel: GICv3: 96 SPIs implemented Jun 25 18:20:23.213770 kernel: GICv3: 0 Extended SPIs implemented Jun 25 18:20:23.213787 kernel: Root IRQ handler: gic_handle_irq Jun 25 18:20:23.213804 kernel: GICv3: GICv3 features: 16 PPIs Jun 25 18:20:23.213822 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000 Jun 25 18:20:23.213839 kernel: ITS [mem 0x10080000-0x1009ffff] Jun 25 18:20:23.213857 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000c0000 (indirect, esz 8, psz 64K, shr 1) Jun 25 18:20:23.213875 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000d0000 (flat, esz 8, psz 64K, shr 1) Jun 25 18:20:23.213892 kernel: GICv3: using LPI property table @0x00000004000e0000 Jun 25 18:20:23.213910 kernel: ITS: Using hypervisor restricted LPI range [128] Jun 25 18:20:23.213928 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000f0000 Jun 25 18:20:23.213945 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jun 25 18:20:23.213967 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt). Jun 25 18:20:23.213985 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns Jun 25 18:20:23.214003 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns Jun 25 18:20:23.214021 kernel: Console: colour dummy device 80x25 Jun 25 18:20:23.214039 kernel: printk: console [tty1] enabled Jun 25 18:20:23.214057 kernel: ACPI: Core revision 20230628 Jun 25 18:20:23.214075 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333) Jun 25 18:20:23.214093 kernel: pid_max: default: 32768 minimum: 301 Jun 25 18:20:23.214111 kernel: LSM: initializing lsm=lockdown,capability,selinux,integrity Jun 25 18:20:23.214133 kernel: SELinux: Initializing. Jun 25 18:20:23.217200 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jun 25 18:20:23.217232 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jun 25 18:20:23.217252 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1. Jun 25 18:20:23.217271 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1. Jun 25 18:20:23.217289 kernel: rcu: Hierarchical SRCU implementation. Jun 25 18:20:23.217308 kernel: rcu: Max phase no-delay instances is 400. Jun 25 18:20:23.217327 kernel: Platform MSI: ITS@0x10080000 domain created Jun 25 18:20:23.217345 kernel: PCI/MSI: ITS@0x10080000 domain created Jun 25 18:20:23.217372 kernel: Remapping and enabling EFI services. Jun 25 18:20:23.217391 kernel: smp: Bringing up secondary CPUs ... Jun 25 18:20:23.217409 kernel: Detected PIPT I-cache on CPU1 Jun 25 18:20:23.217427 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000 Jun 25 18:20:23.217445 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000400100000 Jun 25 18:20:23.217463 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083] Jun 25 18:20:23.217481 kernel: smp: Brought up 1 node, 2 CPUs Jun 25 18:20:23.217499 kernel: SMP: Total of 2 processors activated. 
Jun 25 18:20:23.217530 kernel: CPU features: detected: 32-bit EL0 Support Jun 25 18:20:23.217552 kernel: CPU features: detected: 32-bit EL1 Support Jun 25 18:20:23.217577 kernel: CPU features: detected: CRC32 instructions Jun 25 18:20:23.217596 kernel: CPU: All CPU(s) started at EL1 Jun 25 18:20:23.217625 kernel: alternatives: applying system-wide alternatives Jun 25 18:20:23.217649 kernel: devtmpfs: initialized Jun 25 18:20:23.217668 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jun 25 18:20:23.217687 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jun 25 18:20:23.217708 kernel: pinctrl core: initialized pinctrl subsystem Jun 25 18:20:23.217727 kernel: SMBIOS 3.0.0 present. Jun 25 18:20:23.217746 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018 Jun 25 18:20:23.217769 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jun 25 18:20:23.217789 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Jun 25 18:20:23.217808 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Jun 25 18:20:23.217827 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Jun 25 18:20:23.217846 kernel: audit: initializing netlink subsys (disabled) Jun 25 18:20:23.217865 kernel: audit: type=2000 audit(0.304:1): state=initialized audit_enabled=0 res=1 Jun 25 18:20:23.217884 kernel: thermal_sys: Registered thermal governor 'step_wise' Jun 25 18:20:23.217907 kernel: cpuidle: using governor menu Jun 25 18:20:23.217926 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Jun 25 18:20:23.217945 kernel: ASID allocator initialised with 65536 entries Jun 25 18:20:23.217964 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jun 25 18:20:23.217983 kernel: Serial: AMBA PL011 UART driver Jun 25 18:20:23.218002 kernel: Modules: 17600 pages in range for non-PLT usage Jun 25 18:20:23.218021 kernel: Modules: 509120 pages in range for PLT usage Jun 25 18:20:23.218040 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jun 25 18:20:23.218059 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Jun 25 18:20:23.218083 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Jun 25 18:20:23.218102 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Jun 25 18:20:23.218122 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jun 25 18:20:23.218140 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Jun 25 18:20:23.218181 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Jun 25 18:20:23.218236 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Jun 25 18:20:23.218256 kernel: ACPI: Added _OSI(Module Device) Jun 25 18:20:23.218275 kernel: ACPI: Added _OSI(Processor Device) Jun 25 18:20:23.218294 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jun 25 18:20:23.218321 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jun 25 18:20:23.218342 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jun 25 18:20:23.220291 kernel: ACPI: Interpreter enabled Jun 25 18:20:23.220315 kernel: ACPI: Using GIC for interrupt routing Jun 25 18:20:23.220334 kernel: ACPI: MCFG table detected, 1 entries Jun 25 18:20:23.220354 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f]) Jun 25 18:20:23.220699 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jun 25 18:20:23.220920 kernel: acpi 
PNP0A08:00: _OSC: platform does not support [LTR] Jun 25 18:20:23.221132 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Jun 25 18:20:23.223497 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00 Jun 25 18:20:23.223723 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f] Jun 25 18:20:23.223751 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window] Jun 25 18:20:23.223771 kernel: acpiphp: Slot [1] registered Jun 25 18:20:23.223792 kernel: acpiphp: Slot [2] registered Jun 25 18:20:23.223811 kernel: acpiphp: Slot [3] registered Jun 25 18:20:23.223830 kernel: acpiphp: Slot [4] registered Jun 25 18:20:23.223859 kernel: acpiphp: Slot [5] registered Jun 25 18:20:23.223878 kernel: acpiphp: Slot [6] registered Jun 25 18:20:23.223898 kernel: acpiphp: Slot [7] registered Jun 25 18:20:23.223916 kernel: acpiphp: Slot [8] registered Jun 25 18:20:23.223936 kernel: acpiphp: Slot [9] registered Jun 25 18:20:23.223954 kernel: acpiphp: Slot [10] registered Jun 25 18:20:23.223973 kernel: acpiphp: Slot [11] registered Jun 25 18:20:23.223991 kernel: acpiphp: Slot [12] registered Jun 25 18:20:23.224010 kernel: acpiphp: Slot [13] registered Jun 25 18:20:23.224029 kernel: acpiphp: Slot [14] registered Jun 25 18:20:23.224053 kernel: acpiphp: Slot [15] registered Jun 25 18:20:23.224072 kernel: acpiphp: Slot [16] registered Jun 25 18:20:23.224091 kernel: acpiphp: Slot [17] registered Jun 25 18:20:23.224111 kernel: acpiphp: Slot [18] registered Jun 25 18:20:23.224130 kernel: acpiphp: Slot [19] registered Jun 25 18:20:23.224190 kernel: acpiphp: Slot [20] registered Jun 25 18:20:23.224214 kernel: acpiphp: Slot [21] registered Jun 25 18:20:23.224233 kernel: acpiphp: Slot [22] registered Jun 25 18:20:23.224252 kernel: acpiphp: Slot [23] registered Jun 25 18:20:23.224278 kernel: acpiphp: Slot [24] registered Jun 25 18:20:23.224298 kernel: acpiphp: Slot [25] registered Jun 25 18:20:23.224317 kernel: acpiphp: Slot [26] registered Jun 25 18:20:23.224336 kernel: acpiphp: Slot [27] registered Jun 25 18:20:23.224355 kernel: acpiphp: Slot [28] registered Jun 25 18:20:23.224373 kernel: acpiphp: Slot [29] registered Jun 25 18:20:23.224392 kernel: acpiphp: Slot [30] registered Jun 25 18:20:23.224411 kernel: acpiphp: Slot [31] registered Jun 25 18:20:23.224430 kernel: PCI host bridge to bus 0000:00 Jun 25 18:20:23.224700 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window] Jun 25 18:20:23.224907 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Jun 25 18:20:23.225109 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window] Jun 25 18:20:23.227441 kernel: pci_bus 0000:00: root bus resource [bus 00-0f] Jun 25 18:20:23.227709 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000 Jun 25 18:20:23.227945 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003 Jun 25 18:20:23.228232 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff] Jun 25 18:20:23.228492 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 Jun 25 18:20:23.228733 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff] Jun 25 18:20:23.228942 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold Jun 25 18:20:23.231219 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 Jun 25 18:20:23.231480 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff] Jun 25 18:20:23.231693 kernel: pci 0000:00:05.0: reg 0x18: [mem 
0x80000000-0x800fffff pref] Jun 25 18:20:23.231911 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff] Jun 25 18:20:23.232120 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold Jun 25 18:20:23.232366 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref] Jun 25 18:20:23.232582 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff] Jun 25 18:20:23.232843 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff] Jun 25 18:20:23.233086 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff] Jun 25 18:20:23.238115 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff] Jun 25 18:20:23.238369 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window] Jun 25 18:20:23.238607 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Jun 25 18:20:23.238794 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window] Jun 25 18:20:23.238821 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Jun 25 18:20:23.238841 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Jun 25 18:20:23.238861 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Jun 25 18:20:23.238880 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Jun 25 18:20:23.238899 kernel: iommu: Default domain type: Translated Jun 25 18:20:23.238918 kernel: iommu: DMA domain TLB invalidation policy: strict mode Jun 25 18:20:23.238944 kernel: efivars: Registered efivars operations Jun 25 18:20:23.238963 kernel: vgaarb: loaded Jun 25 18:20:23.238982 kernel: clocksource: Switched to clocksource arch_sys_counter Jun 25 18:20:23.239001 kernel: VFS: Disk quotas dquot_6.6.0 Jun 25 18:20:23.239020 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jun 25 18:20:23.239039 kernel: pnp: PnP ACPI init Jun 25 18:20:23.239292 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved Jun 25 18:20:23.239324 kernel: pnp: PnP ACPI: found 1 devices Jun 25 18:20:23.239351 kernel: NET: Registered PF_INET protocol family Jun 25 18:20:23.239372 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jun 25 18:20:23.239391 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jun 25 18:20:23.239410 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jun 25 18:20:23.239430 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jun 25 18:20:23.239450 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jun 25 18:20:23.239469 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jun 25 18:20:23.239488 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jun 25 18:20:23.239507 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jun 25 18:20:23.239531 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jun 25 18:20:23.239551 kernel: PCI: CLS 0 bytes, default 64 Jun 25 18:20:23.239570 kernel: kvm [1]: HYP mode not available Jun 25 18:20:23.239589 kernel: Initialise system trusted keyrings Jun 25 18:20:23.239608 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jun 25 18:20:23.239627 kernel: Key type asymmetric registered Jun 25 18:20:23.239645 kernel: Asymmetric key parser 'x509' registered Jun 25 18:20:23.239664 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Jun 25 18:20:23.239683 kernel: io scheduler mq-deadline registered Jun 25 
18:20:23.239707 kernel: io scheduler kyber registered Jun 25 18:20:23.239726 kernel: io scheduler bfq registered Jun 25 18:20:23.239950 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered Jun 25 18:20:23.239981 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Jun 25 18:20:23.240000 kernel: ACPI: button: Power Button [PWRB] Jun 25 18:20:23.240019 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1 Jun 25 18:20:23.240039 kernel: ACPI: button: Sleep Button [SLPB] Jun 25 18:20:23.240057 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jun 25 18:20:23.240083 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37 Jun 25 18:20:23.242035 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012) Jun 25 18:20:23.242072 kernel: printk: console [ttyS0] disabled Jun 25 18:20:23.242093 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A Jun 25 18:20:23.242112 kernel: printk: console [ttyS0] enabled Jun 25 18:20:23.242131 kernel: printk: bootconsole [uart0] disabled Jun 25 18:20:23.242176 kernel: thunder_xcv, ver 1.0 Jun 25 18:20:23.242198 kernel: thunder_bgx, ver 1.0 Jun 25 18:20:23.242218 kernel: nicpf, ver 1.0 Jun 25 18:20:23.242245 kernel: nicvf, ver 1.0 Jun 25 18:20:23.242478 kernel: rtc-efi rtc-efi.0: registered as rtc0 Jun 25 18:20:23.247485 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-06-25T18:20:22 UTC (1719339622) Jun 25 18:20:23.247531 kernel: hid: raw HID events driver (C) Jiri Kosina Jun 25 18:20:23.247551 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available Jun 25 18:20:23.247570 kernel: watchdog: Delayed init of the lockup detector failed: -19 Jun 25 18:20:23.247589 kernel: watchdog: Hard watchdog permanently disabled Jun 25 18:20:23.247608 kernel: NET: Registered PF_INET6 protocol family Jun 25 18:20:23.247637 kernel: Segment Routing with IPv6 Jun 25 18:20:23.247657 kernel: In-situ OAM (IOAM) with IPv6 Jun 25 18:20:23.247675 kernel: NET: Registered PF_PACKET protocol family Jun 25 18:20:23.247694 kernel: Key type dns_resolver registered Jun 25 18:20:23.247713 kernel: registered taskstats version 1 Jun 25 18:20:23.247732 kernel: Loading compiled-in X.509 certificates Jun 25 18:20:23.247751 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.35-flatcar: 751918e575d02f96b0daadd44b8f442a8c39ecd3' Jun 25 18:20:23.247769 kernel: Key type .fscrypt registered Jun 25 18:20:23.247787 kernel: Key type fscrypt-provisioning registered Jun 25 18:20:23.247806 kernel: ima: No TPM chip found, activating TPM-bypass! 
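
Aside (illustrative, not part of the log): the rtc-efi entries above record the same instant twice, once as 2024-06-25T18:20:22 UTC and once as the epoch value 1719339622. A quick Python check that the two representations agree:

import datetime as dt

# Convert the logged UTC wall-clock time to epoch seconds.
wall = dt.datetime(2024, 6, 25, 18, 20, 22, tzinfo=dt.timezone.utc)
print(int(wall.timestamp()))  # 1719339622, matching the value in parentheses above
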
Jun 25 18:20:23.247829 kernel: ima: Allocated hash algorithm: sha1 Jun 25 18:20:23.247848 kernel: ima: No architecture policies found Jun 25 18:20:23.247867 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Jun 25 18:20:23.247885 kernel: clk: Disabling unused clocks Jun 25 18:20:23.247904 kernel: Freeing unused kernel memory: 39040K Jun 25 18:20:23.247922 kernel: Run /init as init process Jun 25 18:20:23.247941 kernel: with arguments: Jun 25 18:20:23.247959 kernel: /init Jun 25 18:20:23.247977 kernel: with environment: Jun 25 18:20:23.248001 kernel: HOME=/ Jun 25 18:20:23.248020 kernel: TERM=linux Jun 25 18:20:23.248038 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jun 25 18:20:23.248061 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jun 25 18:20:23.248084 systemd[1]: Detected virtualization amazon. Jun 25 18:20:23.248105 systemd[1]: Detected architecture arm64. Jun 25 18:20:23.248125 systemd[1]: Running in initrd. Jun 25 18:20:23.248227 systemd[1]: No hostname configured, using default hostname. Jun 25 18:20:23.248302 systemd[1]: Hostname set to . Jun 25 18:20:23.248325 systemd[1]: Initializing machine ID from VM UUID. Jun 25 18:20:23.248347 systemd[1]: Queued start job for default target initrd.target. Jun 25 18:20:23.248368 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jun 25 18:20:23.248389 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 25 18:20:23.248411 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jun 25 18:20:23.248432 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jun 25 18:20:23.248460 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jun 25 18:20:23.248482 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jun 25 18:20:23.248506 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jun 25 18:20:23.248527 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jun 25 18:20:23.248548 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jun 25 18:20:23.248569 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jun 25 18:20:23.248590 systemd[1]: Reached target paths.target - Path Units. Jun 25 18:20:23.248633 systemd[1]: Reached target slices.target - Slice Units. Jun 25 18:20:23.248656 systemd[1]: Reached target swap.target - Swaps. Jun 25 18:20:23.248677 systemd[1]: Reached target timers.target - Timer Units. Jun 25 18:20:23.248697 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jun 25 18:20:23.248718 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jun 25 18:20:23.248739 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jun 25 18:20:23.248759 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jun 25 18:20:23.248780 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
Jun 25 18:20:23.248800 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jun 25 18:20:23.248826 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jun 25 18:20:23.248846 systemd[1]: Reached target sockets.target - Socket Units. Jun 25 18:20:23.248867 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jun 25 18:20:23.248887 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jun 25 18:20:23.248908 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jun 25 18:20:23.248928 systemd[1]: Starting systemd-fsck-usr.service... Jun 25 18:20:23.248949 systemd[1]: Starting systemd-journald.service - Journal Service... Jun 25 18:20:23.248969 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jun 25 18:20:23.248994 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jun 25 18:20:23.249015 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jun 25 18:20:23.249035 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jun 25 18:20:23.249055 systemd[1]: Finished systemd-fsck-usr.service. Jun 25 18:20:23.249116 systemd-journald[250]: Collecting audit messages is disabled. Jun 25 18:20:23.250015 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jun 25 18:20:23.250040 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jun 25 18:20:23.250062 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jun 25 18:20:23.250097 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jun 25 18:20:23.250131 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jun 25 18:20:23.250173 kernel: Bridge firewalling registered Jun 25 18:20:23.250197 systemd-journald[250]: Journal started Jun 25 18:20:23.250237 systemd-journald[250]: Runtime Journal (/run/log/journal/ec2b76347227575f4d1c427cb291aaf2) is 8.0M, max 75.3M, 67.3M free. Jun 25 18:20:23.197351 systemd-modules-load[251]: Inserted module 'overlay' Jun 25 18:20:23.246368 systemd-modules-load[251]: Inserted module 'br_netfilter' Jun 25 18:20:23.256479 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jun 25 18:20:23.259166 systemd[1]: Started systemd-journald.service - Journal Service. Jun 25 18:20:23.260472 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jun 25 18:20:23.279551 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jun 25 18:20:23.288427 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories... Jun 25 18:20:23.310486 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jun 25 18:20:23.326953 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jun 25 18:20:23.333637 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Jun 25 18:20:23.348623 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jun 25 18:20:23.353087 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jun 25 18:20:23.362429 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... 
Jun 25 18:20:23.405714 dracut-cmdline[290]: dracut-dracut-053 Jun 25 18:20:23.414372 dracut-cmdline[290]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=e6069a8408a0ca7e7bc40a0bde7fe3ef89df2f98c4bdd2e7e7f9f8f3f8ad207f Jun 25 18:20:23.441250 systemd-resolved[286]: Positive Trust Anchors: Jun 25 18:20:23.442351 systemd-resolved[286]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jun 25 18:20:23.442416 systemd-resolved[286]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test Jun 25 18:20:23.608190 kernel: SCSI subsystem initialized Jun 25 18:20:23.614195 kernel: Loading iSCSI transport class v2.0-870. Jun 25 18:20:23.628200 kernel: iscsi: registered transport (tcp) Jun 25 18:20:23.651264 kernel: iscsi: registered transport (qla4xxx) Jun 25 18:20:23.651339 kernel: QLogic iSCSI HBA Driver Jun 25 18:20:23.679198 kernel: random: crng init done Jun 25 18:20:23.679502 systemd-resolved[286]: Defaulting to hostname 'linux'. Jun 25 18:20:23.682992 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jun 25 18:20:23.687319 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jun 25 18:20:23.736699 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jun 25 18:20:23.746485 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jun 25 18:20:23.792543 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jun 25 18:20:23.792670 kernel: device-mapper: uevent: version 1.0.3 Jun 25 18:20:23.794203 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jun 25 18:20:23.863229 kernel: raid6: neonx8 gen() 6615 MB/s Jun 25 18:20:23.880199 kernel: raid6: neonx4 gen() 6429 MB/s Jun 25 18:20:23.897194 kernel: raid6: neonx2 gen() 5352 MB/s Jun 25 18:20:23.914199 kernel: raid6: neonx1 gen() 3938 MB/s Jun 25 18:20:23.931180 kernel: raid6: int64x8 gen() 3777 MB/s Jun 25 18:20:23.948181 kernel: raid6: int64x4 gen() 3684 MB/s Jun 25 18:20:23.965181 kernel: raid6: int64x2 gen() 3558 MB/s Jun 25 18:20:23.982884 kernel: raid6: int64x1 gen() 2762 MB/s Jun 25 18:20:23.982919 kernel: raid6: using algorithm neonx8 gen() 6615 MB/s Jun 25 18:20:24.000862 kernel: raid6: .... 
xor() 4809 MB/s, rmw enabled Jun 25 18:20:24.000904 kernel: raid6: using neon recovery algorithm Jun 25 18:20:24.009186 kernel: xor: measuring software checksum speed Jun 25 18:20:24.010183 kernel: 8regs : 10814 MB/sec Jun 25 18:20:24.012177 kernel: 32regs : 11922 MB/sec Jun 25 18:20:24.014492 kernel: arm64_neon : 9565 MB/sec Jun 25 18:20:24.014527 kernel: xor: using function: 32regs (11922 MB/sec) Jun 25 18:20:24.103206 kernel: Btrfs loaded, zoned=no, fsverity=no Jun 25 18:20:24.126320 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jun 25 18:20:24.136511 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 25 18:20:24.180988 systemd-udevd[472]: Using default interface naming scheme 'v255'. Jun 25 18:20:24.191350 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 25 18:20:24.203725 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jun 25 18:20:24.260506 dracut-pre-trigger[476]: rd.md=0: removing MD RAID activation Jun 25 18:20:24.323896 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jun 25 18:20:24.335489 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jun 25 18:20:24.465284 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jun 25 18:20:24.482608 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jun 25 18:20:24.519707 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jun 25 18:20:24.530687 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jun 25 18:20:24.534310 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 25 18:20:24.536556 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jun 25 18:20:24.549734 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jun 25 18:20:24.601091 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jun 25 18:20:24.694497 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Jun 25 18:20:24.694607 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012) Jun 25 18:20:24.719430 kernel: ena 0000:00:05.0: ENA device version: 0.10 Jun 25 18:20:24.719736 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Jun 25 18:20:24.720004 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:33:ce:57:16:57 Jun 25 18:20:24.716496 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jun 25 18:20:24.716766 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jun 25 18:20:24.720894 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jun 25 18:20:24.723988 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jun 25 18:20:24.724977 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jun 25 18:20:24.730504 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jun 25 18:20:24.732172 (udev-worker)[530]: Network interface NamePolicy= disabled on kernel command line. Jun 25 18:20:24.757518 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Jun 25 18:20:24.757562 kernel: nvme nvme0: pci function 0000:00:04.0 Jun 25 18:20:24.751911 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Jun 25 18:20:24.767572 kernel: nvme nvme0: 2/0/0 default/read/poll queues Jun 25 18:20:24.774678 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jun 25 18:20:24.774748 kernel: GPT:9289727 != 16777215 Jun 25 18:20:24.774776 kernel: GPT:Alternate GPT header not at the end of the disk. Jun 25 18:20:24.777374 kernel: GPT:9289727 != 16777215 Jun 25 18:20:24.777445 kernel: GPT: Use GNU Parted to correct GPT errors. Jun 25 18:20:24.778641 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jun 25 18:20:24.791192 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jun 25 18:20:24.801564 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jun 25 18:20:24.847839 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jun 25 18:20:24.902343 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by (udev-worker) (519) Jun 25 18:20:24.912214 kernel: BTRFS: device fsid c80091a6-4bf3-4ad3-8e1c-e6eb918765f9 devid 1 transid 36 /dev/nvme0n1p3 scanned by (udev-worker) (523) Jun 25 18:20:24.923323 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM. Jun 25 18:20:25.028483 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Jun 25 18:20:25.056928 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT. Jun 25 18:20:25.070626 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A. Jun 25 18:20:25.073105 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A. Jun 25 18:20:25.098573 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jun 25 18:20:25.115779 disk-uuid[665]: Primary Header is updated. Jun 25 18:20:25.115779 disk-uuid[665]: Secondary Entries is updated. Jun 25 18:20:25.115779 disk-uuid[665]: Secondary Header is updated. Jun 25 18:20:25.127209 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jun 25 18:20:25.134629 kernel: GPT:disk_guids don't match. Jun 25 18:20:25.134691 kernel: GPT: Use GNU Parted to correct GPT errors. Jun 25 18:20:25.134719 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jun 25 18:20:25.144194 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jun 25 18:20:26.149790 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jun 25 18:20:26.149865 disk-uuid[666]: The operation has completed successfully. Jun 25 18:20:26.334881 systemd[1]: disk-uuid.service: Deactivated successfully. Jun 25 18:20:26.335095 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jun 25 18:20:26.395449 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jun 25 18:20:26.411001 sh[1010]: Success Jun 25 18:20:26.439262 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Jun 25 18:20:26.533021 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jun 25 18:20:26.546335 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jun 25 18:20:26.554974 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
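
Aside (illustrative, not part of the log): the GPT warnings above ("GPT:9289727 != 16777215") mean the image's backup GPT header sits well before the end of the EBS volume, which the disk-uuid step right after appears to address when it reports the primary and secondary headers updated. Assuming the conventional 512-byte logical sector size (the log does not state it), the two LBAs translate into sizes as follows:

SECTOR = 512                 # assumed logical sector size in bytes

alt_header_lba = 9289727     # LBA where the image's backup GPT header currently sits
last_lba = 16777215          # last addressable LBA reported for the volume

print((alt_header_lba + 1) * SECTOR / 2**30)  # ~4.43 GiB: size the image was built for
print((last_lba + 1) * SECTOR / 2**30)        # 8.0 GiB: actual size of the volume
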
Jun 25 18:20:26.586674 kernel: BTRFS info (device dm-0): first mount of filesystem c80091a6-4bf3-4ad3-8e1c-e6eb918765f9 Jun 25 18:20:26.586736 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Jun 25 18:20:26.586764 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jun 25 18:20:26.588256 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jun 25 18:20:26.589418 kernel: BTRFS info (device dm-0): using free space tree Jun 25 18:20:26.691182 kernel: BTRFS info (device dm-0): enabling ssd optimizations Jun 25 18:20:26.712411 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jun 25 18:20:26.716235 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jun 25 18:20:26.729416 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jun 25 18:20:26.736052 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jun 25 18:20:26.763243 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 0ee4f8d8-9b37-4f6c-84aa-681a87076704 Jun 25 18:20:26.763321 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Jun 25 18:20:26.763349 kernel: BTRFS info (device nvme0n1p6): using free space tree Jun 25 18:20:26.771188 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jun 25 18:20:26.788207 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 0ee4f8d8-9b37-4f6c-84aa-681a87076704 Jun 25 18:20:26.788810 systemd[1]: mnt-oem.mount: Deactivated successfully. Jun 25 18:20:26.813234 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jun 25 18:20:26.823584 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jun 25 18:20:26.935238 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jun 25 18:20:26.956342 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jun 25 18:20:27.003572 systemd-networkd[1203]: lo: Link UP Jun 25 18:20:27.003595 systemd-networkd[1203]: lo: Gained carrier Jun 25 18:20:27.007657 systemd-networkd[1203]: Enumeration completed Jun 25 18:20:27.007851 systemd[1]: Started systemd-networkd.service - Network Configuration. Jun 25 18:20:27.010022 systemd-networkd[1203]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 25 18:20:27.010031 systemd-networkd[1203]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jun 25 18:20:27.010833 systemd[1]: Reached target network.target - Network. Jun 25 18:20:27.016734 systemd-networkd[1203]: eth0: Link UP Jun 25 18:20:27.016744 systemd-networkd[1203]: eth0: Gained carrier Jun 25 18:20:27.016765 systemd-networkd[1203]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Jun 25 18:20:27.039308 systemd-networkd[1203]: eth0: DHCPv4 address 172.31.30.218/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jun 25 18:20:27.176705 ignition[1123]: Ignition 2.19.0 Jun 25 18:20:27.176737 ignition[1123]: Stage: fetch-offline Jun 25 18:20:27.178333 ignition[1123]: no configs at "/usr/lib/ignition/base.d" Jun 25 18:20:27.178373 ignition[1123]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jun 25 18:20:27.181877 ignition[1123]: Ignition finished successfully Jun 25 18:20:27.187806 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jun 25 18:20:27.197500 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jun 25 18:20:27.232690 ignition[1212]: Ignition 2.19.0 Jun 25 18:20:27.232725 ignition[1212]: Stage: fetch Jun 25 18:20:27.233933 ignition[1212]: no configs at "/usr/lib/ignition/base.d" Jun 25 18:20:27.233960 ignition[1212]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jun 25 18:20:27.234122 ignition[1212]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jun 25 18:20:27.242343 ignition[1212]: PUT result: OK Jun 25 18:20:27.245680 ignition[1212]: parsed url from cmdline: "" Jun 25 18:20:27.245703 ignition[1212]: no config URL provided Jun 25 18:20:27.245721 ignition[1212]: reading system config file "/usr/lib/ignition/user.ign" Jun 25 18:20:27.245750 ignition[1212]: no config at "/usr/lib/ignition/user.ign" Jun 25 18:20:27.245799 ignition[1212]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jun 25 18:20:27.247585 ignition[1212]: PUT result: OK Jun 25 18:20:27.247687 ignition[1212]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Jun 25 18:20:27.248905 ignition[1212]: GET result: OK Jun 25 18:20:27.249139 ignition[1212]: parsing config with SHA512: e9c5f337106f62644981dcac2f57bcd622055d470371b39ed304fa248f16e3c9553e10f70433e6acc8e23a24e182d6ca61b0d41246d8fc456534af4c790afd4f Jun 25 18:20:27.266651 unknown[1212]: fetched base config from "system" Jun 25 18:20:27.266676 unknown[1212]: fetched base config from "system" Jun 25 18:20:27.268020 ignition[1212]: fetch: fetch complete Jun 25 18:20:27.266691 unknown[1212]: fetched user config from "aws" Jun 25 18:20:27.268034 ignition[1212]: fetch: fetch passed Jun 25 18:20:27.274942 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jun 25 18:20:27.268139 ignition[1212]: Ignition finished successfully Jun 25 18:20:27.294463 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jun 25 18:20:27.320495 ignition[1219]: Ignition 2.19.0 Jun 25 18:20:27.320528 ignition[1219]: Stage: kargs Jun 25 18:20:27.321775 ignition[1219]: no configs at "/usr/lib/ignition/base.d" Jun 25 18:20:27.321805 ignition[1219]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jun 25 18:20:27.321972 ignition[1219]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jun 25 18:20:27.330179 ignition[1219]: PUT result: OK Jun 25 18:20:27.335528 ignition[1219]: kargs: kargs passed Jun 25 18:20:27.337076 ignition[1219]: Ignition finished successfully Jun 25 18:20:27.341347 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jun 25 18:20:27.356620 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
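
Aside (illustrative, not part of the log): the Ignition fetch stage above follows the EC2 IMDSv2 pattern, a PUT to the token endpoint followed by a GET for the user data with the token attached. A minimal Python sketch of those two requests (the header names are the standard IMDSv2 ones; the TTL and timeout values are arbitrary choices, not taken from the log):

import urllib.request

# Step 1: request a session token, mirroring the logged
# "PUT http://169.254.169.254/latest/api/token".
token_req = urllib.request.Request(
    "http://169.254.169.254/latest/api/token",
    method="PUT",
    headers={"X-aws-ec2-metadata-token-ttl-seconds": "21600"},
)
token = urllib.request.urlopen(token_req, timeout=2).read().decode()

# Step 2: fetch the user data with the token, mirroring the logged
# "GET http://169.254.169.254/2019-10-01/user-data".
data_req = urllib.request.Request(
    "http://169.254.169.254/2019-10-01/user-data",
    headers={"X-aws-ec2-metadata-token": token},
)
user_data = urllib.request.urlopen(data_req, timeout=2).read()
print(len(user_data), "bytes of user data fetched")
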
Jun 25 18:20:27.381984 ignition[1226]: Ignition 2.19.0 Jun 25 18:20:27.382612 ignition[1226]: Stage: disks Jun 25 18:20:27.383410 ignition[1226]: no configs at "/usr/lib/ignition/base.d" Jun 25 18:20:27.383441 ignition[1226]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jun 25 18:20:27.383592 ignition[1226]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jun 25 18:20:27.388019 ignition[1226]: PUT result: OK Jun 25 18:20:27.396110 ignition[1226]: disks: disks passed Jun 25 18:20:27.396369 ignition[1226]: Ignition finished successfully Jun 25 18:20:27.398861 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jun 25 18:20:27.405099 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jun 25 18:20:27.408366 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jun 25 18:20:27.412465 systemd[1]: Reached target local-fs.target - Local File Systems. Jun 25 18:20:27.414476 systemd[1]: Reached target sysinit.target - System Initialization. Jun 25 18:20:27.421954 systemd[1]: Reached target basic.target - Basic System. Jun 25 18:20:27.435426 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jun 25 18:20:27.480025 systemd-fsck[1236]: ROOT: clean, 14/553520 files, 52654/553472 blocks Jun 25 18:20:27.490540 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jun 25 18:20:27.498605 systemd[1]: Mounting sysroot.mount - /sysroot... Jun 25 18:20:27.599199 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 91548e21-ce72-437e-94b9-d3fed380163a r/w with ordered data mode. Quota mode: none. Jun 25 18:20:27.600472 systemd[1]: Mounted sysroot.mount - /sysroot. Jun 25 18:20:27.604786 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jun 25 18:20:27.623402 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jun 25 18:20:27.636698 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jun 25 18:20:27.638545 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jun 25 18:20:27.639076 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jun 25 18:20:27.639658 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jun 25 18:20:27.657905 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jun 25 18:20:27.670189 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/nvme0n1p6 scanned by mount (1255) Jun 25 18:20:27.674917 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jun 25 18:20:27.682287 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 0ee4f8d8-9b37-4f6c-84aa-681a87076704 Jun 25 18:20:27.682334 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Jun 25 18:20:27.682361 kernel: BTRFS info (device nvme0n1p6): using free space tree Jun 25 18:20:27.689213 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jun 25 18:20:27.693442 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jun 25 18:20:28.037420 initrd-setup-root[1279]: cut: /sysroot/etc/passwd: No such file or directory Jun 25 18:20:28.054488 initrd-setup-root[1286]: cut: /sysroot/etc/group: No such file or directory Jun 25 18:20:28.063001 initrd-setup-root[1293]: cut: /sysroot/etc/shadow: No such file or directory Jun 25 18:20:28.071929 initrd-setup-root[1300]: cut: /sysroot/etc/gshadow: No such file or directory Jun 25 18:20:28.322383 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jun 25 18:20:28.332362 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jun 25 18:20:28.346647 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jun 25 18:20:28.363094 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jun 25 18:20:28.365266 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 0ee4f8d8-9b37-4f6c-84aa-681a87076704 Jun 25 18:20:28.365440 systemd-networkd[1203]: eth0: Gained IPv6LL Jun 25 18:20:28.391711 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jun 25 18:20:28.414585 ignition[1368]: INFO : Ignition 2.19.0 Jun 25 18:20:28.417177 ignition[1368]: INFO : Stage: mount Jun 25 18:20:28.417177 ignition[1368]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 25 18:20:28.417177 ignition[1368]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jun 25 18:20:28.417177 ignition[1368]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jun 25 18:20:28.424574 ignition[1368]: INFO : PUT result: OK Jun 25 18:20:28.429354 ignition[1368]: INFO : mount: mount passed Jun 25 18:20:28.429354 ignition[1368]: INFO : Ignition finished successfully Jun 25 18:20:28.433860 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jun 25 18:20:28.447258 systemd[1]: Starting ignition-files.service - Ignition (files)... Jun 25 18:20:28.607625 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jun 25 18:20:28.638194 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1380) Jun 25 18:20:28.638261 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 0ee4f8d8-9b37-4f6c-84aa-681a87076704 Jun 25 18:20:28.641629 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Jun 25 18:20:28.641696 kernel: BTRFS info (device nvme0n1p6): using free space tree Jun 25 18:20:28.647184 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jun 25 18:20:28.650172 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jun 25 18:20:28.682777 ignition[1397]: INFO : Ignition 2.19.0 Jun 25 18:20:28.682777 ignition[1397]: INFO : Stage: files Jun 25 18:20:28.685954 ignition[1397]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 25 18:20:28.685954 ignition[1397]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jun 25 18:20:28.685954 ignition[1397]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jun 25 18:20:28.692325 ignition[1397]: INFO : PUT result: OK Jun 25 18:20:28.696861 ignition[1397]: DEBUG : files: compiled without relabeling support, skipping Jun 25 18:20:28.699802 ignition[1397]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jun 25 18:20:28.702286 ignition[1397]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jun 25 18:20:28.717618 ignition[1397]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jun 25 18:20:28.720372 ignition[1397]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jun 25 18:20:28.722728 ignition[1397]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jun 25 18:20:28.721512 unknown[1397]: wrote ssh authorized keys file for user: core Jun 25 18:20:28.727427 ignition[1397]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Jun 25 18:20:28.727427 ignition[1397]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Jun 25 18:20:28.727427 ignition[1397]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Jun 25 18:20:28.727427 ignition[1397]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Jun 25 18:20:28.790508 ignition[1397]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jun 25 18:20:28.895290 ignition[1397]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Jun 25 18:20:28.895290 ignition[1397]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jun 25 18:20:28.902271 ignition[1397]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jun 25 18:20:28.902271 ignition[1397]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jun 25 18:20:28.902271 ignition[1397]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jun 25 18:20:28.902271 ignition[1397]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jun 25 18:20:28.902271 ignition[1397]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jun 25 18:20:28.902271 ignition[1397]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jun 25 18:20:28.902271 ignition[1397]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jun 25 18:20:28.902271 ignition[1397]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jun 25 
18:20:28.902271 ignition[1397]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jun 25 18:20:28.902271 ignition[1397]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.28.7-arm64.raw" Jun 25 18:20:28.902271 ignition[1397]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.28.7-arm64.raw" Jun 25 18:20:28.902271 ignition[1397]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.28.7-arm64.raw" Jun 25 18:20:28.902271 ignition[1397]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.28.7-arm64.raw: attempt #1 Jun 25 18:20:29.400928 ignition[1397]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jun 25 18:20:29.787802 ignition[1397]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.28.7-arm64.raw" Jun 25 18:20:29.787802 ignition[1397]: INFO : files: op(c): [started] processing unit "containerd.service" Jun 25 18:20:29.794504 ignition[1397]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jun 25 18:20:29.794504 ignition[1397]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jun 25 18:20:29.794504 ignition[1397]: INFO : files: op(c): [finished] processing unit "containerd.service" Jun 25 18:20:29.794504 ignition[1397]: INFO : files: op(e): [started] processing unit "prepare-helm.service" Jun 25 18:20:29.794504 ignition[1397]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jun 25 18:20:29.794504 ignition[1397]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jun 25 18:20:29.794504 ignition[1397]: INFO : files: op(e): [finished] processing unit "prepare-helm.service" Jun 25 18:20:29.794504 ignition[1397]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service" Jun 25 18:20:29.794504 ignition[1397]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service" Jun 25 18:20:29.794504 ignition[1397]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json" Jun 25 18:20:29.794504 ignition[1397]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json" Jun 25 18:20:29.794504 ignition[1397]: INFO : files: files passed Jun 25 18:20:29.794504 ignition[1397]: INFO : Ignition finished successfully Jun 25 18:20:29.800298 systemd[1]: Finished ignition-files.service - Ignition (files). Jun 25 18:20:29.832615 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jun 25 18:20:29.842449 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jun 25 18:20:29.845108 systemd[1]: ignition-quench.service: Deactivated successfully. 
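
Aside (illustrative, not part of the log): the files stage above is driven by a user-supplied Ignition config declaring files, links, and systemd units; the config itself is not shown in the log. As a rough sketch of the general shape of such a config, with field names following the Ignition spec 3.x schema as I understand it and placeholder contents throughout, a Python snippet that assembles a comparable structure:

import json

# Placeholder Ignition-style config: one file, one containerd drop-in, and one
# enabled unit, echoing the kinds of operations the files stage logged above.
config = {
    "ignition": {"version": "3.3.0"},   # assumed spec version, not from the log
    "storage": {
        "files": [
            {"path": "/etc/flatcar-cgroupv1", "contents": {"source": "data:,"}},
        ],
    },
    "systemd": {
        "units": [
            {"name": "containerd.service",
             "dropins": [{"name": "10-use-cgroupfs.conf",
                          "contents": "[Service]\n# placeholder drop-in body\n"}]},
            {"name": "prepare-helm.service", "enabled": True,
             "contents": "[Unit]\nDescription=placeholder\n"},
        ],
    },
}
print(json.dumps(config, indent=2))
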
Jun 25 18:20:29.845343 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jun 25 18:20:29.884813 initrd-setup-root-after-ignition[1426]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jun 25 18:20:29.884813 initrd-setup-root-after-ignition[1426]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jun 25 18:20:29.893134 initrd-setup-root-after-ignition[1430]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jun 25 18:20:29.899280 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jun 25 18:20:29.902072 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jun 25 18:20:29.915467 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jun 25 18:20:29.974880 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jun 25 18:20:29.976963 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jun 25 18:20:29.982417 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jun 25 18:20:29.986642 systemd[1]: Reached target initrd.target - Initrd Default Target. Jun 25 18:20:29.989758 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jun 25 18:20:30.003568 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jun 25 18:20:30.029958 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jun 25 18:20:30.041547 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jun 25 18:20:30.073867 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jun 25 18:20:30.077007 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 25 18:20:30.083566 systemd[1]: Stopped target timers.target - Timer Units. Jun 25 18:20:30.085462 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jun 25 18:20:30.085706 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jun 25 18:20:30.093806 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jun 25 18:20:30.095908 systemd[1]: Stopped target basic.target - Basic System. Jun 25 18:20:30.098626 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jun 25 18:20:30.100998 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jun 25 18:20:30.111115 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jun 25 18:20:30.113848 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jun 25 18:20:30.116915 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jun 25 18:20:30.120783 systemd[1]: Stopped target sysinit.target - System Initialization. Jun 25 18:20:30.123658 systemd[1]: Stopped target local-fs.target - Local File Systems. Jun 25 18:20:30.126796 systemd[1]: Stopped target swap.target - Swaps. Jun 25 18:20:30.129085 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jun 25 18:20:30.129799 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jun 25 18:20:30.141560 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jun 25 18:20:30.145511 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jun 25 18:20:30.149894 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. 
Jun 25 18:20:30.150856 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jun 25 18:20:30.154712 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jun 25 18:20:30.154952 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jun 25 18:20:30.158808 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jun 25 18:20:30.159065 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jun 25 18:20:30.162183 systemd[1]: ignition-files.service: Deactivated successfully. Jun 25 18:20:30.162528 systemd[1]: Stopped ignition-files.service - Ignition (files). Jun 25 18:20:30.189778 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jun 25 18:20:30.199834 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jun 25 18:20:30.205963 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jun 25 18:20:30.206347 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jun 25 18:20:30.214923 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jun 25 18:20:30.215229 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jun 25 18:20:30.245690 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jun 25 18:20:30.248169 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jun 25 18:20:30.251524 ignition[1450]: INFO : Ignition 2.19.0 Jun 25 18:20:30.256523 ignition[1450]: INFO : Stage: umount Jun 25 18:20:30.258986 ignition[1450]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 25 18:20:30.261251 ignition[1450]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jun 25 18:20:30.263559 ignition[1450]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jun 25 18:20:30.267284 ignition[1450]: INFO : PUT result: OK Jun 25 18:20:30.273559 ignition[1450]: INFO : umount: umount passed Jun 25 18:20:30.275624 ignition[1450]: INFO : Ignition finished successfully Jun 25 18:20:30.280475 systemd[1]: ignition-mount.service: Deactivated successfully. Jun 25 18:20:30.280790 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jun 25 18:20:30.286897 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jun 25 18:20:30.287724 systemd[1]: ignition-disks.service: Deactivated successfully. Jun 25 18:20:30.287853 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jun 25 18:20:30.293321 systemd[1]: ignition-kargs.service: Deactivated successfully. Jun 25 18:20:30.293428 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jun 25 18:20:30.306184 systemd[1]: ignition-fetch.service: Deactivated successfully. Jun 25 18:20:30.306299 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jun 25 18:20:30.308309 systemd[1]: Stopped target network.target - Network. Jun 25 18:20:30.311646 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jun 25 18:20:30.311782 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jun 25 18:20:30.316242 systemd[1]: Stopped target paths.target - Path Units. Jun 25 18:20:30.327926 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jun 25 18:20:30.330941 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 25 18:20:30.338469 systemd[1]: Stopped target slices.target - Slice Units. 
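Annotation: each Ignition stage (files above, umount here) starts with "PUT http://169.254.169.254/latest/api/token" followed by "PUT result: OK" — that is the IMDSv2 exchange: obtain a session token with an HTTP PUT, then send it as a header on the metadata/user-data requests that follow. A minimal stand-alone sketch of the same token dance (illustrative only; Ignition does this internally in Go, and the TTL value below is an arbitrary choice):

    # Stand-alone illustration of the IMDSv2 token exchange seen in the log.
    import urllib.request

    IMDS = "http://169.254.169.254"

    def imds_token(ttl_seconds: int = 21600) -> str:
        # PUT /latest/api/token with a TTL header returns a session token.
        req = urllib.request.Request(
            f"{IMDS}/latest/api/token",
            method="PUT",
            headers={"X-aws-ec2-metadata-token-ttl-seconds": str(ttl_seconds)},
        )
        with urllib.request.urlopen(req, timeout=2) as resp:
            return resp.read().decode()

    def imds_get(path: str, token: str) -> str:
        # Subsequent metadata requests carry the token as a header.
        req = urllib.request.Request(
            f"{IMDS}/{path}",
            headers={"X-aws-ec2-metadata-token": token},
        )
        with urllib.request.urlopen(req, timeout=2) as resp:
            return resp.read().decode()

    if __name__ == "__main__":
        tok = imds_token()
        print(imds_get("latest/meta-data/instance-id", tok))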
Jun 25 18:20:30.340618 systemd[1]: Stopped target sockets.target - Socket Units. Jun 25 18:20:30.344169 systemd[1]: iscsid.socket: Deactivated successfully. Jun 25 18:20:30.344264 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jun 25 18:20:30.346393 systemd[1]: iscsiuio.socket: Deactivated successfully. Jun 25 18:20:30.346468 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jun 25 18:20:30.352909 systemd[1]: ignition-setup.service: Deactivated successfully. Jun 25 18:20:30.353035 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jun 25 18:20:30.355014 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jun 25 18:20:30.355113 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jun 25 18:20:30.357723 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jun 25 18:20:30.363523 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jun 25 18:20:30.370794 systemd-networkd[1203]: eth0: DHCPv6 lease lost Jun 25 18:20:30.382053 systemd[1]: systemd-resolved.service: Deactivated successfully. Jun 25 18:20:30.383398 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jun 25 18:20:30.389735 systemd[1]: systemd-networkd.service: Deactivated successfully. Jun 25 18:20:30.391943 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jun 25 18:20:30.399711 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jun 25 18:20:30.399843 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jun 25 18:20:30.415387 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jun 25 18:20:30.419428 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jun 25 18:20:30.419576 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jun 25 18:20:30.422454 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jun 25 18:20:30.422571 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jun 25 18:20:30.425318 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jun 25 18:20:30.425430 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jun 25 18:20:30.427553 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jun 25 18:20:30.427654 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Jun 25 18:20:30.432054 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 25 18:20:30.483040 systemd[1]: systemd-udevd.service: Deactivated successfully. Jun 25 18:20:30.487278 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 25 18:20:30.494094 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jun 25 18:20:30.494262 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jun 25 18:20:30.496428 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jun 25 18:20:30.496513 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jun 25 18:20:30.498770 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jun 25 18:20:30.498863 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jun 25 18:20:30.501645 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jun 25 18:20:30.501746 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. 
Jun 25 18:20:30.511742 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jun 25 18:20:30.511889 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jun 25 18:20:30.529654 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jun 25 18:20:30.537327 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jun 25 18:20:30.537469 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jun 25 18:20:30.540081 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jun 25 18:20:30.540251 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jun 25 18:20:30.556961 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jun 25 18:20:30.557091 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jun 25 18:20:30.563276 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jun 25 18:20:30.563445 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jun 25 18:20:30.580131 systemd[1]: network-cleanup.service: Deactivated successfully. Jun 25 18:20:30.582238 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jun 25 18:20:30.602040 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jun 25 18:20:30.604264 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jun 25 18:20:30.615888 systemd[1]: sysroot-boot.service: Deactivated successfully. Jun 25 18:20:30.618387 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jun 25 18:20:30.623504 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jun 25 18:20:30.625463 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jun 25 18:20:30.625588 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jun 25 18:20:30.642654 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jun 25 18:20:30.689761 systemd[1]: Switching root. Jun 25 18:20:30.730612 systemd-journald[250]: Journal stopped Jun 25 18:20:33.261459 systemd-journald[250]: Received SIGTERM from PID 1 (systemd). Jun 25 18:20:33.261600 kernel: SELinux: policy capability network_peer_controls=1 Jun 25 18:20:33.261660 kernel: SELinux: policy capability open_perms=1 Jun 25 18:20:33.261697 kernel: SELinux: policy capability extended_socket_class=1 Jun 25 18:20:33.261740 kernel: SELinux: policy capability always_check_network=0 Jun 25 18:20:33.261771 kernel: SELinux: policy capability cgroup_seclabel=1 Jun 25 18:20:33.261805 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jun 25 18:20:33.261839 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jun 25 18:20:33.261876 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jun 25 18:20:33.261905 kernel: audit: type=1403 audit(1719339631.497:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jun 25 18:20:33.261947 systemd[1]: Successfully loaded SELinux policy in 67.412ms. Jun 25 18:20:33.261992 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 26.660ms. 
Jun 25 18:20:33.262028 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jun 25 18:20:33.262069 systemd[1]: Detected virtualization amazon. Jun 25 18:20:33.262102 systemd[1]: Detected architecture arm64. Jun 25 18:20:33.262133 systemd[1]: Detected first boot. Jun 25 18:20:33.265273 systemd[1]: Initializing machine ID from VM UUID. Jun 25 18:20:33.265340 zram_generator::config[1510]: No configuration found. Jun 25 18:20:33.265383 systemd[1]: Populated /etc with preset unit settings. Jun 25 18:20:33.265422 systemd[1]: Queued start job for default target multi-user.target. Jun 25 18:20:33.265456 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Jun 25 18:20:33.265506 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jun 25 18:20:33.265551 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jun 25 18:20:33.265584 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jun 25 18:20:33.265620 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jun 25 18:20:33.265655 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jun 25 18:20:33.265690 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jun 25 18:20:33.265730 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jun 25 18:20:33.265771 systemd[1]: Created slice user.slice - User and Session Slice. Jun 25 18:20:33.265810 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jun 25 18:20:33.265847 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 25 18:20:33.265891 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jun 25 18:20:33.265925 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jun 25 18:20:33.265959 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jun 25 18:20:33.265992 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jun 25 18:20:33.266026 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jun 25 18:20:33.266060 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jun 25 18:20:33.266094 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jun 25 18:20:33.266132 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 25 18:20:33.266209 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jun 25 18:20:33.266252 systemd[1]: Reached target slices.target - Slice Units. Jun 25 18:20:33.266286 systemd[1]: Reached target swap.target - Swaps. Jun 25 18:20:33.266320 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jun 25 18:20:33.266355 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jun 25 18:20:33.266388 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). 
Jun 25 18:20:33.266419 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jun 25 18:20:33.266453 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jun 25 18:20:33.266498 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jun 25 18:20:33.266533 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jun 25 18:20:33.266564 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jun 25 18:20:33.266597 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jun 25 18:20:33.266629 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jun 25 18:20:33.266664 systemd[1]: Mounting media.mount - External Media Directory... Jun 25 18:20:33.266697 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jun 25 18:20:33.266730 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jun 25 18:20:33.266767 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jun 25 18:20:33.266808 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jun 25 18:20:33.266841 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 25 18:20:33.266873 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jun 25 18:20:33.266904 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jun 25 18:20:33.266937 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 25 18:20:33.266972 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jun 25 18:20:33.267005 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jun 25 18:20:33.267035 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jun 25 18:20:33.267073 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jun 25 18:20:33.267109 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jun 25 18:20:33.267141 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Jun 25 18:20:33.274285 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) Jun 25 18:20:33.274322 systemd[1]: Starting systemd-journald.service - Journal Service... Jun 25 18:20:33.274360 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jun 25 18:20:33.274391 kernel: fuse: init (API version 7.39) Jun 25 18:20:33.274423 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jun 25 18:20:33.274462 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jun 25 18:20:33.274497 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jun 25 18:20:33.274529 kernel: ACPI: bus type drm_connector registered Jun 25 18:20:33.274557 kernel: loop: module loaded Jun 25 18:20:33.274589 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jun 25 18:20:33.274619 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jun 25 18:20:33.274651 systemd[1]: Mounted media.mount - External Media Directory. Jun 25 18:20:33.274694 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. 
Jun 25 18:20:33.274726 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jun 25 18:20:33.274761 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jun 25 18:20:33.274796 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jun 25 18:20:33.274826 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jun 25 18:20:33.274857 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jun 25 18:20:33.274890 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 25 18:20:33.274982 systemd-journald[1613]: Collecting audit messages is disabled. Jun 25 18:20:33.275045 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 25 18:20:33.275085 systemd[1]: modprobe@drm.service: Deactivated successfully. Jun 25 18:20:33.275118 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jun 25 18:20:33.275195 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jun 25 18:20:33.275238 systemd-journald[1613]: Journal started Jun 25 18:20:33.275292 systemd-journald[1613]: Runtime Journal (/run/log/journal/ec2b76347227575f4d1c427cb291aaf2) is 8.0M, max 75.3M, 67.3M free. Jun 25 18:20:33.279316 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 25 18:20:33.286260 systemd[1]: Started systemd-journald.service - Journal Service. Jun 25 18:20:33.292376 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jun 25 18:20:33.292796 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jun 25 18:20:33.295560 systemd[1]: modprobe@loop.service: Deactivated successfully. Jun 25 18:20:33.296662 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jun 25 18:20:33.299721 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jun 25 18:20:33.303095 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jun 25 18:20:33.306414 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jun 25 18:20:33.319842 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jun 25 18:20:33.346465 systemd[1]: Reached target network-pre.target - Preparation for Network. Jun 25 18:20:33.357527 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jun 25 18:20:33.371357 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jun 25 18:20:33.374563 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jun 25 18:20:33.385542 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jun 25 18:20:33.401867 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jun 25 18:20:33.408384 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jun 25 18:20:33.422618 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jun 25 18:20:33.424773 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jun 25 18:20:33.435521 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jun 25 18:20:33.447592 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... 
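Annotation: journald starts here with a volatile runtime journal under /run/log/journal (8.0M used, 75.3M max per the size report above) and is flushed to the persistent journal under /var/log/journal once "Flush Journal to Persistent Storage" runs further down. A small stdlib sketch that just totals the bytes in those two standard directories (the same figure `journalctl --disk-usage` reports authoritatively):

    # Sum the bytes currently used by the volatile and persistent journals.
    import os

    def dir_bytes(root: str) -> int:
        total = 0
        for dirpath, _dirnames, filenames in os.walk(root):
            for name in filenames:
                try:
                    total += os.path.getsize(os.path.join(dirpath, name))
                except OSError:
                    pass  # journal files can rotate away while we walk
        return total

    for path in ("/run/log/journal", "/var/log/journal"):
        print(path, f"{dir_bytes(path) / 2**20:.1f} MiB")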
Jun 25 18:20:33.458934 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jun 25 18:20:33.461557 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jun 25 18:20:33.492981 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jun 25 18:20:33.495717 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jun 25 18:20:33.504372 systemd-journald[1613]: Time spent on flushing to /var/log/journal/ec2b76347227575f4d1c427cb291aaf2 is 61.655ms for 900 entries. Jun 25 18:20:33.504372 systemd-journald[1613]: System Journal (/var/log/journal/ec2b76347227575f4d1c427cb291aaf2) is 8.0M, max 195.6M, 187.6M free. Jun 25 18:20:33.583365 systemd-journald[1613]: Received client request to flush runtime journal. Jun 25 18:20:33.578789 systemd-tmpfiles[1661]: ACLs are not supported, ignoring. Jun 25 18:20:33.578815 systemd-tmpfiles[1661]: ACLs are not supported, ignoring. Jun 25 18:20:33.594096 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jun 25 18:20:33.597288 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jun 25 18:20:33.622625 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jun 25 18:20:33.628588 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jun 25 18:20:33.632220 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jun 25 18:20:33.644533 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jun 25 18:20:33.670670 udevadm[1677]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jun 25 18:20:33.729976 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jun 25 18:20:33.748520 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jun 25 18:20:33.793430 systemd-tmpfiles[1685]: ACLs are not supported, ignoring. Jun 25 18:20:33.793478 systemd-tmpfiles[1685]: ACLs are not supported, ignoring. Jun 25 18:20:33.806097 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jun 25 18:20:34.592008 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jun 25 18:20:34.606540 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 25 18:20:34.670234 systemd-udevd[1691]: Using default interface naming scheme 'v255'. Jun 25 18:20:34.720233 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 25 18:20:34.736275 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jun 25 18:20:34.774404 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jun 25 18:20:34.903661 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1701) Jun 25 18:20:34.903261 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0. Jun 25 18:20:34.924657 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jun 25 18:20:34.935588 (udev-worker)[1707]: Network interface NamePolicy= disabled on kernel command line. 
Jun 25 18:20:35.138367 systemd-networkd[1695]: lo: Link UP Jun 25 18:20:35.139567 systemd-networkd[1695]: lo: Gained carrier Jun 25 18:20:35.144250 systemd-networkd[1695]: Enumeration completed Jun 25 18:20:35.144693 systemd[1]: Started systemd-networkd.service - Network Configuration. Jun 25 18:20:35.148116 systemd-networkd[1695]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 25 18:20:35.148126 systemd-networkd[1695]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jun 25 18:20:35.154480 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jun 25 18:20:35.155309 systemd-networkd[1695]: eth0: Link UP Jun 25 18:20:35.156849 systemd-networkd[1695]: eth0: Gained carrier Jun 25 18:20:35.157027 systemd-networkd[1695]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 25 18:20:35.167657 systemd-networkd[1695]: eth0: DHCPv4 address 172.31.30.218/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jun 25 18:20:35.231227 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 36 scanned by (udev-worker) (1704) Jun 25 18:20:35.289108 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jun 25 18:20:35.497275 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jun 25 18:20:35.500507 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jun 25 18:20:35.531037 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Jun 25 18:20:35.539473 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jun 25 18:20:35.584427 lvm[1820]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jun 25 18:20:35.626090 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jun 25 18:20:35.629698 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jun 25 18:20:35.641512 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jun 25 18:20:35.654209 lvm[1823]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jun 25 18:20:35.694395 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jun 25 18:20:35.697137 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jun 25 18:20:35.699592 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jun 25 18:20:35.699643 systemd[1]: Reached target local-fs.target - Local File Systems. Jun 25 18:20:35.701673 systemd[1]: Reached target machines.target - Containers. Jun 25 18:20:35.709385 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jun 25 18:20:35.719421 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jun 25 18:20:35.724533 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jun 25 18:20:35.726734 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 25 18:20:35.730955 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... 
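Annotation: the DHCPv4 lease above places eth0 at 172.31.30.218/20 with gateway 172.31.16.1. A quick check that the address and the advertised gateway sit in the same /20, and that the gateway is the first usable address of that subnet (which is where AWS puts the VPC router):

    # Verify the leased address and gateway from the log share a subnet.
    import ipaddress

    iface = ipaddress.ip_interface("172.31.30.218/20")
    gateway = ipaddress.ip_address("172.31.16.1")

    print(iface.network)                                  # 172.31.16.0/20
    print(gateway in iface.network)                       # True
    print(iface.network.network_address + 1 == gateway)   # True: first host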
Jun 25 18:20:35.747487 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jun 25 18:20:35.759381 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jun 25 18:20:35.766860 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jun 25 18:20:35.799221 kernel: loop0: detected capacity change from 0 to 193208 Jun 25 18:20:35.802469 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jun 25 18:20:35.809471 kernel: block loop0: the capability attribute has been deprecated. Jun 25 18:20:35.832208 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jun 25 18:20:35.852759 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jun 25 18:20:35.854461 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jun 25 18:20:35.872201 kernel: loop1: detected capacity change from 0 to 59688 Jun 25 18:20:35.976194 kernel: loop2: detected capacity change from 0 to 113712 Jun 25 18:20:36.053224 kernel: loop3: detected capacity change from 0 to 51896 Jun 25 18:20:36.165349 kernel: loop4: detected capacity change from 0 to 193208 Jun 25 18:20:36.183205 kernel: loop5: detected capacity change from 0 to 59688 Jun 25 18:20:36.196195 kernel: loop6: detected capacity change from 0 to 113712 Jun 25 18:20:36.211211 kernel: loop7: detected capacity change from 0 to 51896 Jun 25 18:20:36.222753 (sd-merge)[1844]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Jun 25 18:20:36.223794 (sd-merge)[1844]: Merged extensions into '/usr'. Jun 25 18:20:36.232688 systemd[1]: Reloading requested from client PID 1831 ('systemd-sysext') (unit systemd-sysext.service)... Jun 25 18:20:36.232725 systemd[1]: Reloading... Jun 25 18:20:36.378252 zram_generator::config[1873]: No configuration found. Jun 25 18:20:36.620363 systemd-networkd[1695]: eth0: Gained IPv6LL Jun 25 18:20:36.663730 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 25 18:20:36.812446 systemd[1]: Reloading finished in 578 ms. Jun 25 18:20:36.843436 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jun 25 18:20:36.846638 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jun 25 18:20:36.865836 systemd[1]: Starting ensure-sysext.service... Jun 25 18:20:36.871464 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories... Jun 25 18:20:36.890881 systemd[1]: Reloading requested from client PID 1929 ('systemctl') (unit ensure-sysext.service)... Jun 25 18:20:36.890917 systemd[1]: Reloading... Jun 25 18:20:36.953387 systemd-tmpfiles[1930]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jun 25 18:20:36.954108 systemd-tmpfiles[1930]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jun 25 18:20:36.958203 systemd-tmpfiles[1930]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jun 25 18:20:36.959033 systemd-tmpfiles[1930]: ACLs are not supported, ignoring. Jun 25 18:20:36.959344 systemd-tmpfiles[1930]: ACLs are not supported, ignoring. 
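Annotation: the loop0..loop7 capacity changes above are sysext images being attached, and (sd-merge) then overlays the 'containerd-flatcar', 'docker-flatcar', 'kubernetes', and 'oem-ami' extensions onto /usr. The kubernetes.raw symlink Ignition wrote under /etc/extensions is one of those images. A small sketch that lists candidate extensions, under the assumption that the standard systemd-sysext search paths (/etc/extensions, /run/extensions, /var/lib/extensions) are in use; `systemd-sysext status` is the authoritative view:

    # List candidate system extension images/directories in the assumed
    # sysext search paths.
    from pathlib import Path

    SEARCH_PATHS = ["/etc/extensions", "/run/extensions", "/var/lib/extensions"]

    for base in map(Path, SEARCH_PATHS):
        if not base.is_dir():
            continue
        for entry in sorted(base.iterdir()):
            kind = "image" if entry.suffix == ".raw" else "dir"
            print(f"{base}: {entry.name} ({kind})")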
Jun 25 18:20:36.967826 systemd-tmpfiles[1930]: Detected autofs mount point /boot during canonicalization of boot. Jun 25 18:20:36.968076 systemd-tmpfiles[1930]: Skipping /boot Jun 25 18:20:37.004265 ldconfig[1827]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jun 25 18:20:37.016726 systemd-tmpfiles[1930]: Detected autofs mount point /boot during canonicalization of boot. Jun 25 18:20:37.017621 systemd-tmpfiles[1930]: Skipping /boot Jun 25 18:20:37.054190 zram_generator::config[1960]: No configuration found. Jun 25 18:20:37.332699 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 25 18:20:37.481255 systemd[1]: Reloading finished in 589 ms. Jun 25 18:20:37.512941 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jun 25 18:20:37.526461 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Jun 25 18:20:37.546471 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jun 25 18:20:37.569645 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jun 25 18:20:37.580287 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jun 25 18:20:37.602507 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jun 25 18:20:37.618698 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jun 25 18:20:37.647739 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 25 18:20:37.652383 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 25 18:20:37.662036 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jun 25 18:20:37.682671 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jun 25 18:20:37.684923 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 25 18:20:37.715726 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jun 25 18:20:37.730373 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 25 18:20:37.730939 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 25 18:20:37.760355 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jun 25 18:20:37.763809 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jun 25 18:20:37.769833 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 25 18:20:37.771199 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 25 18:20:37.774514 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jun 25 18:20:37.774910 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 25 18:20:37.779409 systemd[1]: modprobe@loop.service: Deactivated successfully. Jun 25 18:20:37.783351 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jun 25 18:20:37.790650 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. 
Jun 25 18:20:37.821974 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 25 18:20:37.829295 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 25 18:20:37.832394 augenrules[2055]: No rules Jun 25 18:20:37.848874 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jun 25 18:20:37.858097 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jun 25 18:20:37.877721 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jun 25 18:20:37.880573 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 25 18:20:37.880985 systemd[1]: Reached target time-set.target - System Time Set. Jun 25 18:20:37.885550 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jun 25 18:20:37.889383 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jun 25 18:20:37.894811 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jun 25 18:20:37.899596 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 25 18:20:37.907723 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 25 18:20:37.911715 systemd[1]: modprobe@drm.service: Deactivated successfully. Jun 25 18:20:37.912119 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jun 25 18:20:37.926873 systemd[1]: Finished ensure-sysext.service. Jun 25 18:20:37.938618 systemd[1]: modprobe@loop.service: Deactivated successfully. Jun 25 18:20:37.939037 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jun 25 18:20:37.942303 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jun 25 18:20:37.942741 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 25 18:20:37.954700 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jun 25 18:20:37.954939 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jun 25 18:20:38.003074 systemd-resolved[2029]: Positive Trust Anchors: Jun 25 18:20:38.003124 systemd-resolved[2029]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jun 25 18:20:38.003215 systemd-resolved[2029]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test Jun 25 18:20:38.012273 systemd-resolved[2029]: Defaulting to hostname 'linux'. Jun 25 18:20:38.015952 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jun 25 18:20:38.018430 systemd[1]: Reached target network.target - Network. Jun 25 18:20:38.020429 systemd[1]: Reached target network-online.target - Network is Online. 
Jun 25 18:20:38.022533 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jun 25 18:20:38.024793 systemd[1]: Reached target sysinit.target - System Initialization. Jun 25 18:20:38.026999 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jun 25 18:20:38.029406 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jun 25 18:20:38.032088 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jun 25 18:20:38.034512 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jun 25 18:20:38.037598 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jun 25 18:20:38.040057 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jun 25 18:20:38.040123 systemd[1]: Reached target paths.target - Path Units. Jun 25 18:20:38.041913 systemd[1]: Reached target timers.target - Timer Units. Jun 25 18:20:38.045942 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jun 25 18:20:38.051219 systemd[1]: Starting docker.socket - Docker Socket for the API... Jun 25 18:20:38.056067 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jun 25 18:20:38.061506 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jun 25 18:20:38.063710 systemd[1]: Reached target sockets.target - Socket Units. Jun 25 18:20:38.065875 systemd[1]: Reached target basic.target - Basic System. Jun 25 18:20:38.068294 systemd[1]: System is tainted: cgroupsv1 Jun 25 18:20:38.068402 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jun 25 18:20:38.068479 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jun 25 18:20:38.077562 systemd[1]: Starting containerd.service - containerd container runtime... Jun 25 18:20:38.088112 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jun 25 18:20:38.103497 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jun 25 18:20:38.120314 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jun 25 18:20:38.128564 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jun 25 18:20:38.131396 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jun 25 18:20:38.149470 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 18:20:38.159188 jq[2089]: false Jun 25 18:20:38.173250 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jun 25 18:20:38.196672 systemd[1]: Started ntpd.service - Network Time Service. Jun 25 18:20:38.212524 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jun 25 18:20:38.235639 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jun 25 18:20:38.257219 extend-filesystems[2090]: Found loop4 Jun 25 18:20:38.257219 extend-filesystems[2090]: Found loop5 Jun 25 18:20:38.257219 extend-filesystems[2090]: Found loop6 Jun 25 18:20:38.257219 extend-filesystems[2090]: Found loop7 Jun 25 18:20:38.257219 extend-filesystems[2090]: Found nvme0n1 Jun 25 18:20:38.253581 systemd[1]: Starting setup-oem.service - Setup OEM... 
Jun 25 18:20:38.286672 extend-filesystems[2090]: Found nvme0n1p1 Jun 25 18:20:38.286672 extend-filesystems[2090]: Found nvme0n1p2 Jun 25 18:20:38.286672 extend-filesystems[2090]: Found nvme0n1p3 Jun 25 18:20:38.286672 extend-filesystems[2090]: Found usr Jun 25 18:20:38.286672 extend-filesystems[2090]: Found nvme0n1p4 Jun 25 18:20:38.286672 extend-filesystems[2090]: Found nvme0n1p6 Jun 25 18:20:38.286672 extend-filesystems[2090]: Found nvme0n1p7 Jun 25 18:20:38.286672 extend-filesystems[2090]: Found nvme0n1p9 Jun 25 18:20:38.286672 extend-filesystems[2090]: Checking size of /dev/nvme0n1p9 Jun 25 18:20:38.289484 dbus-daemon[2088]: [system] SELinux support is enabled Jun 25 18:20:38.297021 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jun 25 18:20:38.316966 dbus-daemon[2088]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1695 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jun 25 18:20:38.322435 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jun 25 18:20:38.351491 systemd[1]: Starting systemd-logind.service - User Login Management... Jun 25 18:20:38.357347 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jun 25 18:20:38.363907 systemd[1]: Starting update-engine.service - Update Engine... Jun 25 18:20:38.391711 ntpd[2097]: ntpd 4.2.8p17@1.4004-o Tue Jun 25 16:48:48 UTC 2024 (1): Starting Jun 25 18:20:38.411294 ntpd[2097]: 25 Jun 18:20:38 ntpd[2097]: ntpd 4.2.8p17@1.4004-o Tue Jun 25 16:48:48 UTC 2024 (1): Starting Jun 25 18:20:38.411294 ntpd[2097]: 25 Jun 18:20:38 ntpd[2097]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jun 25 18:20:38.411294 ntpd[2097]: 25 Jun 18:20:38 ntpd[2097]: ---------------------------------------------------- Jun 25 18:20:38.411294 ntpd[2097]: 25 Jun 18:20:38 ntpd[2097]: ntp-4 is maintained by Network Time Foundation, Jun 25 18:20:38.411294 ntpd[2097]: 25 Jun 18:20:38 ntpd[2097]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jun 25 18:20:38.411294 ntpd[2097]: 25 Jun 18:20:38 ntpd[2097]: corporation. 
Support and training for ntp-4 are Jun 25 18:20:38.411294 ntpd[2097]: 25 Jun 18:20:38 ntpd[2097]: available at https://www.nwtime.org/support Jun 25 18:20:38.411294 ntpd[2097]: 25 Jun 18:20:38 ntpd[2097]: ---------------------------------------------------- Jun 25 18:20:38.411294 ntpd[2097]: 25 Jun 18:20:38 ntpd[2097]: proto: precision = 0.108 usec (-23) Jun 25 18:20:38.411294 ntpd[2097]: 25 Jun 18:20:38 ntpd[2097]: basedate set to 2024-06-13 Jun 25 18:20:38.411294 ntpd[2097]: 25 Jun 18:20:38 ntpd[2097]: gps base set to 2024-06-16 (week 2319) Jun 25 18:20:38.411294 ntpd[2097]: 25 Jun 18:20:38 ntpd[2097]: Listen and drop on 0 v6wildcard [::]:123 Jun 25 18:20:38.411294 ntpd[2097]: 25 Jun 18:20:38 ntpd[2097]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jun 25 18:20:38.411294 ntpd[2097]: 25 Jun 18:20:38 ntpd[2097]: Listen normally on 2 lo 127.0.0.1:123 Jun 25 18:20:38.411294 ntpd[2097]: 25 Jun 18:20:38 ntpd[2097]: Listen normally on 3 eth0 172.31.30.218:123 Jun 25 18:20:38.391768 ntpd[2097]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jun 25 18:20:38.412911 extend-filesystems[2090]: Resized partition /dev/nvme0n1p9 Jun 25 18:20:38.391789 ntpd[2097]: ---------------------------------------------------- Jun 25 18:20:38.420710 ntpd[2097]: 25 Jun 18:20:38 ntpd[2097]: Listen normally on 4 lo [::1]:123 Jun 25 18:20:38.420710 ntpd[2097]: 25 Jun 18:20:38 ntpd[2097]: Listen normally on 5 eth0 [fe80::433:ceff:fe57:1657%2]:123 Jun 25 18:20:38.420710 ntpd[2097]: 25 Jun 18:20:38 ntpd[2097]: Listening on routing socket on fd #22 for interface updates Jun 25 18:20:38.414762 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jun 25 18:20:38.391808 ntpd[2097]: ntp-4 is maintained by Network Time Foundation, Jun 25 18:20:38.391827 ntpd[2097]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jun 25 18:20:38.391846 ntpd[2097]: corporation. Support and training for ntp-4 are Jun 25 18:20:38.391865 ntpd[2097]: available at https://www.nwtime.org/support Jun 25 18:20:38.391884 ntpd[2097]: ---------------------------------------------------- Jun 25 18:20:38.399469 ntpd[2097]: proto: precision = 0.108 usec (-23) Jun 25 18:20:38.400734 ntpd[2097]: basedate set to 2024-06-13 Jun 25 18:20:38.400765 ntpd[2097]: gps base set to 2024-06-16 (week 2319) Jun 25 18:20:38.407654 ntpd[2097]: Listen and drop on 0 v6wildcard [::]:123 Jun 25 18:20:38.407747 ntpd[2097]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jun 25 18:20:38.408118 ntpd[2097]: Listen normally on 2 lo 127.0.0.1:123 Jun 25 18:20:38.408238 ntpd[2097]: Listen normally on 3 eth0 172.31.30.218:123 Jun 25 18:20:38.408313 ntpd[2097]: Listen normally on 4 lo [::1]:123 Jun 25 18:20:38.416030 ntpd[2097]: Listen normally on 5 eth0 [fe80::433:ceff:fe57:1657%2]:123 Jun 25 18:20:38.416127 ntpd[2097]: Listening on routing socket on fd #22 for interface updates Jun 25 18:20:38.423750 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
Jun 25 18:20:38.436123 extend-filesystems[2124]: resize2fs 1.47.0 (5-Feb-2023) Jun 25 18:20:38.445508 ntpd[2097]: 25 Jun 18:20:38 ntpd[2097]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jun 25 18:20:38.445508 ntpd[2097]: 25 Jun 18:20:38 ntpd[2097]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jun 25 18:20:38.444280 ntpd[2097]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jun 25 18:20:38.444338 ntpd[2097]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jun 25 18:20:38.463373 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Jun 25 18:20:38.453102 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jun 25 18:20:38.453707 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jun 25 18:20:38.459678 systemd[1]: motdgen.service: Deactivated successfully. Jun 25 18:20:38.460254 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jun 25 18:20:38.508321 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jun 25 18:20:38.508916 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jun 25 18:20:38.534214 update_engine[2118]: I0625 18:20:38.532932 2118 main.cc:92] Flatcar Update Engine starting Jun 25 18:20:38.539701 update_engine[2118]: I0625 18:20:38.535728 2118 update_check_scheduler.cc:74] Next update check in 7m21s Jun 25 18:20:38.566861 (ntainerd)[2137]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jun 25 18:20:38.593824 coreos-metadata[2086]: Jun 25 18:20:38.586 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jun 25 18:20:38.567896 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jun 25 18:20:38.616978 coreos-metadata[2086]: Jun 25 18:20:38.607 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Jun 25 18:20:38.617045 jq[2126]: true Jun 25 18:20:38.628142 dbus-daemon[2088]: [system] Successfully activated service 'org.freedesktop.systemd1' Jun 25 18:20:38.647783 coreos-metadata[2086]: Jun 25 18:20:38.625 INFO Fetch successful Jun 25 18:20:38.647783 coreos-metadata[2086]: Jun 25 18:20:38.625 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Jun 25 18:20:38.647783 coreos-metadata[2086]: Jun 25 18:20:38.629 INFO Fetch successful Jun 25 18:20:38.647783 coreos-metadata[2086]: Jun 25 18:20:38.629 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Jun 25 18:20:38.647783 coreos-metadata[2086]: Jun 25 18:20:38.643 INFO Fetch successful Jun 25 18:20:38.647783 coreos-metadata[2086]: Jun 25 18:20:38.643 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Jun 25 18:20:38.660233 coreos-metadata[2086]: Jun 25 18:20:38.651 INFO Fetch successful Jun 25 18:20:38.660233 coreos-metadata[2086]: Jun 25 18:20:38.652 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Jun 25 18:20:38.666186 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Jun 25 18:20:38.672723 systemd[1]: Started update-engine.service - Update Engine. Jun 25 18:20:38.683711 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). 
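Annotation: extend-filesystems grows the root filesystem online via resize2fs; the kernel message above ("resizing filesystem from 553472 to 1489915 blocks"), taken with the 4 KiB block size resize2fs reports just below, corresponds to roughly a 2.1 GiB to 5.7 GiB resize:

    # Convert the block counts from the EXT4 resize messages into sizes.
    BLOCK = 4096  # bytes, per the "(4k) blocks" note in the resize2fs output

    for label, blocks in (("before", 553472), ("after", 1489915)):
        print(f"{label}: {blocks * BLOCK / 2**30:.2f} GiB")
    # before: 2.11 GiB
    # after:  5.68 GiB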
Jun 25 18:20:38.683796 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jun 25 18:20:38.718081 jq[2154]: true Jun 25 18:20:38.718483 coreos-metadata[2086]: Jun 25 18:20:38.691 INFO Fetch failed with 404: resource not found Jun 25 18:20:38.718483 coreos-metadata[2086]: Jun 25 18:20:38.691 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Jun 25 18:20:38.718483 coreos-metadata[2086]: Jun 25 18:20:38.693 INFO Fetch successful Jun 25 18:20:38.718483 coreos-metadata[2086]: Jun 25 18:20:38.693 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Jun 25 18:20:38.718483 coreos-metadata[2086]: Jun 25 18:20:38.699 INFO Fetch successful Jun 25 18:20:38.718483 coreos-metadata[2086]: Jun 25 18:20:38.699 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Jun 25 18:20:38.718483 coreos-metadata[2086]: Jun 25 18:20:38.702 INFO Fetch successful Jun 25 18:20:38.718483 coreos-metadata[2086]: Jun 25 18:20:38.702 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Jun 25 18:20:38.718483 coreos-metadata[2086]: Jun 25 18:20:38.704 INFO Fetch successful Jun 25 18:20:38.718483 coreos-metadata[2086]: Jun 25 18:20:38.704 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Jun 25 18:20:38.718483 coreos-metadata[2086]: Jun 25 18:20:38.715 INFO Fetch successful Jun 25 18:20:38.719020 tar[2133]: linux-arm64/helm Jun 25 18:20:38.729040 extend-filesystems[2124]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Jun 25 18:20:38.729040 extend-filesystems[2124]: old_desc_blocks = 1, new_desc_blocks = 1 Jun 25 18:20:38.729040 extend-filesystems[2124]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Jun 25 18:20:38.741613 extend-filesystems[2090]: Resized filesystem in /dev/nvme0n1p9 Jun 25 18:20:38.763461 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Jun 25 18:20:38.765408 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jun 25 18:20:38.765452 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jun 25 18:20:38.785249 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jun 25 18:20:38.787796 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jun 25 18:20:38.798954 systemd[1]: extend-filesystems.service: Deactivated successfully. Jun 25 18:20:38.799508 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jun 25 18:20:38.814179 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 36 scanned by (udev-worker) (2168) Jun 25 18:20:38.814311 systemd[1]: Finished setup-oem.service - Setup OEM. Jun 25 18:20:38.871346 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Jun 25 18:20:38.989325 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jun 25 18:20:38.992031 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jun 25 18:20:39.046101 bash[2225]: Updated "/home/core/.ssh/authorized_keys" Jun 25 18:20:39.055319 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. 
Jun 25 18:20:39.131786 amazon-ssm-agent[2193]: Initializing new seelog logger Jun 25 18:20:39.131786 amazon-ssm-agent[2193]: New Seelog Logger Creation Complete Jun 25 18:20:39.131786 amazon-ssm-agent[2193]: 2024/06/25 18:20:39 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jun 25 18:20:39.131786 amazon-ssm-agent[2193]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jun 25 18:20:39.131786 amazon-ssm-agent[2193]: 2024/06/25 18:20:39 processing appconfig overrides Jun 25 18:20:39.131786 amazon-ssm-agent[2193]: 2024/06/25 18:20:39 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jun 25 18:20:39.131786 amazon-ssm-agent[2193]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jun 25 18:20:39.131786 amazon-ssm-agent[2193]: 2024/06/25 18:20:39 processing appconfig overrides Jun 25 18:20:39.131786 amazon-ssm-agent[2193]: 2024/06/25 18:20:39 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jun 25 18:20:39.131786 amazon-ssm-agent[2193]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jun 25 18:20:39.131786 amazon-ssm-agent[2193]: 2024/06/25 18:20:39 processing appconfig overrides Jun 25 18:20:39.168421 amazon-ssm-agent[2193]: 2024-06-25 18:20:39 INFO Proxy environment variables: Jun 25 18:20:39.168421 amazon-ssm-agent[2193]: 2024/06/25 18:20:39 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jun 25 18:20:39.168421 amazon-ssm-agent[2193]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jun 25 18:20:39.168421 amazon-ssm-agent[2193]: 2024/06/25 18:20:39 processing appconfig overrides Jun 25 18:20:39.163835 systemd[1]: Starting sshkeys.service... Jun 25 18:20:39.175772 systemd-logind[2117]: Watching system buttons on /dev/input/event0 (Power Button) Jun 25 18:20:39.175808 systemd-logind[2117]: Watching system buttons on /dev/input/event1 (Sleep Button) Jun 25 18:20:39.178273 systemd-logind[2117]: New seat seat0. Jun 25 18:20:39.182068 systemd[1]: Started systemd-logind.service - User Login Management. Jun 25 18:20:39.246732 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jun 25 18:20:39.250647 amazon-ssm-agent[2193]: 2024-06-25 18:20:39 INFO https_proxy: Jun 25 18:20:39.336312 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jun 25 18:20:39.352464 amazon-ssm-agent[2193]: 2024-06-25 18:20:39 INFO http_proxy: Jun 25 18:20:39.402263 locksmithd[2179]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jun 25 18:20:39.453480 amazon-ssm-agent[2193]: 2024-06-25 18:20:39 INFO no_proxy: Jun 25 18:20:39.484512 containerd[2137]: time="2024-06-25T18:20:39.484343040Z" level=info msg="starting containerd" revision=cd7148ac666309abf41fd4a49a8a5895b905e7f3 version=v1.7.18 Jun 25 18:20:39.556656 amazon-ssm-agent[2193]: 2024-06-25 18:20:39 INFO Checking if agent identity type OnPrem can be assumed Jun 25 18:20:39.670189 amazon-ssm-agent[2193]: 2024-06-25 18:20:39 INFO Checking if agent identity type EC2 can be assumed Jun 25 18:20:39.764711 amazon-ssm-agent[2193]: 2024-06-25 18:20:39 INFO Agent will take identity from EC2 Jun 25 18:20:39.801137 containerd[2137]: time="2024-06-25T18:20:39.795922537Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jun 25 18:20:39.814736 containerd[2137]: time="2024-06-25T18:20:39.814359194Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
type=io.containerd.snapshotter.v1 Jun 25 18:20:39.829176 containerd[2137]: time="2024-06-25T18:20:39.827237114Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.35-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jun 25 18:20:39.829176 containerd[2137]: time="2024-06-25T18:20:39.827374874Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jun 25 18:20:39.829176 containerd[2137]: time="2024-06-25T18:20:39.827787350Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jun 25 18:20:39.829176 containerd[2137]: time="2024-06-25T18:20:39.827821046Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jun 25 18:20:39.829176 containerd[2137]: time="2024-06-25T18:20:39.827978390Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jun 25 18:20:39.829176 containerd[2137]: time="2024-06-25T18:20:39.828089486Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jun 25 18:20:39.829176 containerd[2137]: time="2024-06-25T18:20:39.828116126Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jun 25 18:20:39.832636 containerd[2137]: time="2024-06-25T18:20:39.832549166Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jun 25 18:20:39.841206 containerd[2137]: time="2024-06-25T18:20:39.837884678Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jun 25 18:20:39.841206 containerd[2137]: time="2024-06-25T18:20:39.837949022Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Jun 25 18:20:39.841206 containerd[2137]: time="2024-06-25T18:20:39.837976898Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jun 25 18:20:39.841206 containerd[2137]: time="2024-06-25T18:20:39.840889622Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jun 25 18:20:39.841206 containerd[2137]: time="2024-06-25T18:20:39.840935378Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jun 25 18:20:39.846412 containerd[2137]: time="2024-06-25T18:20:39.846306842Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Jun 25 18:20:39.846412 containerd[2137]: time="2024-06-25T18:20:39.846390950Z" level=info msg="metadata content store policy set" policy=shared Jun 25 18:20:39.865100 containerd[2137]: time="2024-06-25T18:20:39.861464390Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." 
type=io.containerd.differ.v1 Jun 25 18:20:39.865100 containerd[2137]: time="2024-06-25T18:20:39.861546578Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jun 25 18:20:39.865100 containerd[2137]: time="2024-06-25T18:20:39.861581198Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jun 25 18:20:39.865100 containerd[2137]: time="2024-06-25T18:20:39.861654110Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jun 25 18:20:39.865100 containerd[2137]: time="2024-06-25T18:20:39.861695618Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jun 25 18:20:39.865100 containerd[2137]: time="2024-06-25T18:20:39.861721802Z" level=info msg="NRI interface is disabled by configuration." Jun 25 18:20:39.865100 containerd[2137]: time="2024-06-25T18:20:39.861751046Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jun 25 18:20:39.865100 containerd[2137]: time="2024-06-25T18:20:39.862045718Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jun 25 18:20:39.865100 containerd[2137]: time="2024-06-25T18:20:39.862089974Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jun 25 18:20:39.865100 containerd[2137]: time="2024-06-25T18:20:39.862121366Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jun 25 18:20:39.865100 containerd[2137]: time="2024-06-25T18:20:39.862180406Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jun 25 18:20:39.865100 containerd[2137]: time="2024-06-25T18:20:39.862222730Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jun 25 18:20:39.865100 containerd[2137]: time="2024-06-25T18:20:39.862262114Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jun 25 18:20:39.865100 containerd[2137]: time="2024-06-25T18:20:39.862293242Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jun 25 18:20:39.865842 amazon-ssm-agent[2193]: 2024-06-25 18:20:39 INFO [amazon-ssm-agent] using named pipe channel for IPC Jun 25 18:20:39.865905 coreos-metadata[2274]: Jun 25 18:20:39.863 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jun 25 18:20:39.868048 containerd[2137]: time="2024-06-25T18:20:39.862322426Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jun 25 18:20:39.868048 containerd[2137]: time="2024-06-25T18:20:39.862356722Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jun 25 18:20:39.868048 containerd[2137]: time="2024-06-25T18:20:39.862392842Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jun 25 18:20:39.868048 containerd[2137]: time="2024-06-25T18:20:39.862422038Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jun 25 18:20:39.868048 containerd[2137]: time="2024-06-25T18:20:39.862448798Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." 
type=io.containerd.runtime.v1 Jun 25 18:20:39.868048 containerd[2137]: time="2024-06-25T18:20:39.862701086Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jun 25 18:20:39.872605 containerd[2137]: time="2024-06-25T18:20:39.868939106Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jun 25 18:20:39.872605 containerd[2137]: time="2024-06-25T18:20:39.869029862Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jun 25 18:20:39.872605 containerd[2137]: time="2024-06-25T18:20:39.869067986Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jun 25 18:20:39.872605 containerd[2137]: time="2024-06-25T18:20:39.869117654Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jun 25 18:20:39.872840 coreos-metadata[2274]: Jun 25 18:20:39.872 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Jun 25 18:20:39.872840 coreos-metadata[2274]: Jun 25 18:20:39.872 INFO Fetch successful Jun 25 18:20:39.872840 coreos-metadata[2274]: Jun 25 18:20:39.872 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Jun 25 18:20:39.879085 coreos-metadata[2274]: Jun 25 18:20:39.873 INFO Fetch successful Jun 25 18:20:39.879259 containerd[2137]: time="2024-06-25T18:20:39.873659474Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jun 25 18:20:39.879259 containerd[2137]: time="2024-06-25T18:20:39.873721346Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jun 25 18:20:39.879259 containerd[2137]: time="2024-06-25T18:20:39.875523854Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jun 25 18:20:39.879259 containerd[2137]: time="2024-06-25T18:20:39.875576378Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jun 25 18:20:39.879259 containerd[2137]: time="2024-06-25T18:20:39.875607350Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jun 25 18:20:39.879259 containerd[2137]: time="2024-06-25T18:20:39.875640158Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jun 25 18:20:39.879259 containerd[2137]: time="2024-06-25T18:20:39.875668802Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jun 25 18:20:39.879259 containerd[2137]: time="2024-06-25T18:20:39.875700698Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jun 25 18:20:39.879259 containerd[2137]: time="2024-06-25T18:20:39.875733278Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jun 25 18:20:39.879713 containerd[2137]: time="2024-06-25T18:20:39.879547310Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jun 25 18:20:39.879713 containerd[2137]: time="2024-06-25T18:20:39.879603386Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jun 25 18:20:39.879713 containerd[2137]: time="2024-06-25T18:20:39.879644870Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." 
type=io.containerd.grpc.v1 Jun 25 18:20:39.879713 containerd[2137]: time="2024-06-25T18:20:39.879679226Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jun 25 18:20:39.879713 containerd[2137]: time="2024-06-25T18:20:39.879709178Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jun 25 18:20:39.879948 containerd[2137]: time="2024-06-25T18:20:39.879742358Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jun 25 18:20:39.879948 containerd[2137]: time="2024-06-25T18:20:39.879773606Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jun 25 18:20:39.879948 containerd[2137]: time="2024-06-25T18:20:39.879801302Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jun 25 18:20:39.886556 containerd[2137]: time="2024-06-25T18:20:39.883346582Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jun 25 18:20:39.886556 containerd[2137]: 
time="2024-06-25T18:20:39.883490390Z" level=info msg="Connect containerd service" Jun 25 18:20:39.886556 containerd[2137]: time="2024-06-25T18:20:39.883563758Z" level=info msg="using legacy CRI server" Jun 25 18:20:39.886556 containerd[2137]: time="2024-06-25T18:20:39.883581974Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jun 25 18:20:39.886556 containerd[2137]: time="2024-06-25T18:20:39.883732382Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jun 25 18:20:39.890759 unknown[2274]: wrote ssh authorized keys file for user: core Jun 25 18:20:39.894236 systemd[1]: Started containerd.service - containerd container runtime. Jun 25 18:20:39.904298 containerd[2137]: time="2024-06-25T18:20:39.892777658Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jun 25 18:20:39.904298 containerd[2137]: time="2024-06-25T18:20:39.892876970Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jun 25 18:20:39.904298 containerd[2137]: time="2024-06-25T18:20:39.892917170Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jun 25 18:20:39.904298 containerd[2137]: time="2024-06-25T18:20:39.892943822Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jun 25 18:20:39.904298 containerd[2137]: time="2024-06-25T18:20:39.892973258Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jun 25 18:20:39.904298 containerd[2137]: time="2024-06-25T18:20:39.893432450Z" level=info msg="Start subscribing containerd event" Jun 25 18:20:39.904298 containerd[2137]: time="2024-06-25T18:20:39.893530058Z" level=info msg="Start recovering state" Jun 25 18:20:39.904298 containerd[2137]: time="2024-06-25T18:20:39.893661446Z" level=info msg="Start event monitor" Jun 25 18:20:39.904298 containerd[2137]: time="2024-06-25T18:20:39.893685806Z" level=info msg="Start snapshots syncer" Jun 25 18:20:39.904298 containerd[2137]: time="2024-06-25T18:20:39.893707922Z" level=info msg="Start cni network conf syncer for default" Jun 25 18:20:39.904298 containerd[2137]: time="2024-06-25T18:20:39.893727254Z" level=info msg="Start streaming server" Jun 25 18:20:39.904298 containerd[2137]: time="2024-06-25T18:20:39.893488874Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jun 25 18:20:39.904298 containerd[2137]: time="2024-06-25T18:20:39.893967158Z" level=info msg=serving... address=/run/containerd/containerd.sock Jun 25 18:20:39.910455 containerd[2137]: time="2024-06-25T18:20:39.904980878Z" level=info msg="containerd successfully booted in 0.446239s" Jun 25 18:20:39.963578 amazon-ssm-agent[2193]: 2024-06-25 18:20:39 INFO [amazon-ssm-agent] using named pipe channel for IPC Jun 25 18:20:39.993228 update-ssh-keys[2325]: Updated "/home/core/.ssh/authorized_keys" Jun 25 18:20:39.995629 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jun 25 18:20:40.021110 systemd[1]: Finished sshkeys.service. 
Jun 25 18:20:40.062577 dbus-daemon[2088]: [system] Successfully activated service 'org.freedesktop.hostname1' Jun 25 18:20:40.062851 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Jun 25 18:20:40.071043 amazon-ssm-agent[2193]: 2024-06-25 18:20:39 INFO [amazon-ssm-agent] using named pipe channel for IPC Jun 25 18:20:40.067930 dbus-daemon[2088]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=2173 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jun 25 18:20:40.080622 systemd[1]: Starting polkit.service - Authorization Manager... Jun 25 18:20:40.154441 polkitd[2337]: Started polkitd version 121 Jun 25 18:20:40.168283 amazon-ssm-agent[2193]: 2024-06-25 18:20:39 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Jun 25 18:20:40.183433 polkitd[2337]: Loading rules from directory /etc/polkit-1/rules.d Jun 25 18:20:40.183580 polkitd[2337]: Loading rules from directory /usr/share/polkit-1/rules.d Jun 25 18:20:40.186196 polkitd[2337]: Finished loading, compiling and executing 2 rules Jun 25 18:20:40.188510 dbus-daemon[2088]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jun 25 18:20:40.190578 systemd[1]: Started polkit.service - Authorization Manager. Jun 25 18:20:40.193782 polkitd[2337]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jun 25 18:20:40.234451 systemd-resolved[2029]: System hostname changed to 'ip-172-31-30-218'. Jun 25 18:20:40.234468 systemd-hostnamed[2173]: Hostname set to (transient) Jun 25 18:20:40.276171 amazon-ssm-agent[2193]: 2024-06-25 18:20:39 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Jun 25 18:20:40.371712 amazon-ssm-agent[2193]: 2024-06-25 18:20:39 INFO [amazon-ssm-agent] Starting Core Agent Jun 25 18:20:40.473209 amazon-ssm-agent[2193]: 2024-06-25 18:20:39 INFO [amazon-ssm-agent] registrar detected. Attempting registration Jun 25 18:20:40.573538 amazon-ssm-agent[2193]: 2024-06-25 18:20:39 INFO [Registrar] Starting registrar module Jun 25 18:20:40.677359 amazon-ssm-agent[2193]: 2024-06-25 18:20:39 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Jun 25 18:20:40.843354 amazon-ssm-agent[2193]: 2024-06-25 18:20:40 INFO [EC2Identity] EC2 registration was successful. Jun 25 18:20:40.881715 amazon-ssm-agent[2193]: 2024-06-25 18:20:40 INFO [CredentialRefresher] credentialRefresher has started Jun 25 18:20:40.881715 amazon-ssm-agent[2193]: 2024-06-25 18:20:40 INFO [CredentialRefresher] Starting credentials refresher loop Jun 25 18:20:40.881872 amazon-ssm-agent[2193]: 2024-06-25 18:20:40 INFO EC2RoleProvider Successfully connected with instance profile role credentials Jun 25 18:20:40.943383 amazon-ssm-agent[2193]: 2024-06-25 18:20:40 INFO [CredentialRefresher] Next credential rotation will be in 32.04165914756667 minutes Jun 25 18:20:41.100495 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 18:20:41.112767 tar[2133]: linux-arm64/LICENSE Jun 25 18:20:41.112767 tar[2133]: linux-arm64/README.md Jun 25 18:20:41.113974 (kubelet)[2355]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 25 18:20:41.155128 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jun 25 18:20:41.385409 sshd_keygen[2130]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jun 25 18:20:41.431352 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. 
Jun 25 18:20:41.445731 systemd[1]: Starting issuegen.service - Generate /run/issue... Jun 25 18:20:41.476217 systemd[1]: issuegen.service: Deactivated successfully. Jun 25 18:20:41.476822 systemd[1]: Finished issuegen.service - Generate /run/issue. Jun 25 18:20:41.491943 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jun 25 18:20:41.517121 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jun 25 18:20:41.528717 systemd[1]: Started getty@tty1.service - Getty on tty1. Jun 25 18:20:41.539608 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jun 25 18:20:41.542051 systemd[1]: Reached target getty.target - Login Prompts. Jun 25 18:20:41.545094 systemd[1]: Reached target multi-user.target - Multi-User System. Jun 25 18:20:41.547722 systemd[1]: Startup finished in 9.948s (kernel) + 10.115s (userspace) = 20.063s. Jun 25 18:20:41.910019 amazon-ssm-agent[2193]: 2024-06-25 18:20:41 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Jun 25 18:20:41.971075 kubelet[2355]: E0625 18:20:41.970896 2355 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 25 18:20:41.981496 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 25 18:20:41.981917 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 25 18:20:42.010411 amazon-ssm-agent[2193]: 2024-06-25 18:20:41 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2391) started Jun 25 18:20:42.111085 amazon-ssm-agent[2193]: 2024-06-25 18:20:41 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Jun 25 18:20:45.549862 systemd-resolved[2029]: Clock change detected. Flushing caches. Jun 25 18:20:45.621626 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jun 25 18:20:45.627959 systemd[1]: Started sshd@0-172.31.30.218:22-139.178.89.65:41216.service - OpenSSH per-connection server daemon (139.178.89.65:41216). Jun 25 18:20:45.815599 sshd[2404]: Accepted publickey for core from 139.178.89.65 port 41216 ssh2: RSA SHA256:PKLEBy2HMUOnZIR0iwjDMXjr5aDYRcW0FQA7rLNVs1M Jun 25 18:20:45.818608 sshd[2404]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:20:45.833850 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jun 25 18:20:45.838881 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jun 25 18:20:45.844431 systemd-logind[2117]: New session 1 of user core. Jun 25 18:20:45.869697 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jun 25 18:20:45.887101 systemd[1]: Starting user@500.service - User Manager for UID 500... Jun 25 18:20:45.893165 (systemd)[2410]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:20:46.106295 systemd[2410]: Queued start job for default target default.target. Jun 25 18:20:46.107013 systemd[2410]: Created slice app.slice - User Application Slice. Jun 25 18:20:46.107067 systemd[2410]: Reached target paths.target - Paths. Jun 25 18:20:46.107099 systemd[2410]: Reached target timers.target - Timers. 
Jun 25 18:20:46.116652 systemd[2410]: Starting dbus.socket - D-Bus User Message Bus Socket... Jun 25 18:20:46.131777 systemd[2410]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jun 25 18:20:46.132078 systemd[2410]: Reached target sockets.target - Sockets. Jun 25 18:20:46.132216 systemd[2410]: Reached target basic.target - Basic System. Jun 25 18:20:46.132403 systemd[2410]: Reached target default.target - Main User Target. Jun 25 18:20:46.132632 systemd[2410]: Startup finished in 227ms. Jun 25 18:20:46.133016 systemd[1]: Started user@500.service - User Manager for UID 500. Jun 25 18:20:46.142159 systemd[1]: Started session-1.scope - Session 1 of User core. Jun 25 18:20:46.293022 systemd[1]: Started sshd@1-172.31.30.218:22-139.178.89.65:52790.service - OpenSSH per-connection server daemon (139.178.89.65:52790). Jun 25 18:20:46.471267 sshd[2422]: Accepted publickey for core from 139.178.89.65 port 52790 ssh2: RSA SHA256:PKLEBy2HMUOnZIR0iwjDMXjr5aDYRcW0FQA7rLNVs1M Jun 25 18:20:46.473815 sshd[2422]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:20:46.481930 systemd-logind[2117]: New session 2 of user core. Jun 25 18:20:46.491143 systemd[1]: Started session-2.scope - Session 2 of User core. Jun 25 18:20:46.620841 sshd[2422]: pam_unix(sshd:session): session closed for user core Jun 25 18:20:46.628737 systemd[1]: sshd@1-172.31.30.218:22-139.178.89.65:52790.service: Deactivated successfully. Jun 25 18:20:46.631566 systemd-logind[2117]: Session 2 logged out. Waiting for processes to exit. Jun 25 18:20:46.634448 systemd[1]: session-2.scope: Deactivated successfully. Jun 25 18:20:46.637230 systemd-logind[2117]: Removed session 2. Jun 25 18:20:46.651986 systemd[1]: Started sshd@2-172.31.30.218:22-139.178.89.65:52798.service - OpenSSH per-connection server daemon (139.178.89.65:52798). Jun 25 18:20:46.828671 sshd[2430]: Accepted publickey for core from 139.178.89.65 port 52798 ssh2: RSA SHA256:PKLEBy2HMUOnZIR0iwjDMXjr5aDYRcW0FQA7rLNVs1M Jun 25 18:20:46.832102 sshd[2430]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:20:46.839918 systemd-logind[2117]: New session 3 of user core. Jun 25 18:20:46.853153 systemd[1]: Started session-3.scope - Session 3 of User core. Jun 25 18:20:46.976406 sshd[2430]: pam_unix(sshd:session): session closed for user core Jun 25 18:20:46.984097 systemd[1]: sshd@2-172.31.30.218:22-139.178.89.65:52798.service: Deactivated successfully. Jun 25 18:20:46.989144 systemd[1]: session-3.scope: Deactivated successfully. Jun 25 18:20:46.991411 systemd-logind[2117]: Session 3 logged out. Waiting for processes to exit. Jun 25 18:20:46.993577 systemd-logind[2117]: Removed session 3. Jun 25 18:20:47.009011 systemd[1]: Started sshd@3-172.31.30.218:22-139.178.89.65:52808.service - OpenSSH per-connection server daemon (139.178.89.65:52808). Jun 25 18:20:47.183430 sshd[2438]: Accepted publickey for core from 139.178.89.65 port 52808 ssh2: RSA SHA256:PKLEBy2HMUOnZIR0iwjDMXjr5aDYRcW0FQA7rLNVs1M Jun 25 18:20:47.185401 sshd[2438]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:20:47.195083 systemd-logind[2117]: New session 4 of user core. Jun 25 18:20:47.202177 systemd[1]: Started session-4.scope - Session 4 of User core. Jun 25 18:20:47.334784 sshd[2438]: pam_unix(sshd:session): session closed for user core Jun 25 18:20:47.343192 systemd[1]: sshd@3-172.31.30.218:22-139.178.89.65:52808.service: Deactivated successfully. 
Jun 25 18:20:47.349094 systemd[1]: session-4.scope: Deactivated successfully. Jun 25 18:20:47.350836 systemd-logind[2117]: Session 4 logged out. Waiting for processes to exit. Jun 25 18:20:47.352966 systemd-logind[2117]: Removed session 4. Jun 25 18:20:47.365004 systemd[1]: Started sshd@4-172.31.30.218:22-139.178.89.65:52812.service - OpenSSH per-connection server daemon (139.178.89.65:52812). Jun 25 18:20:47.543613 sshd[2446]: Accepted publickey for core from 139.178.89.65 port 52812 ssh2: RSA SHA256:PKLEBy2HMUOnZIR0iwjDMXjr5aDYRcW0FQA7rLNVs1M Jun 25 18:20:47.545992 sshd[2446]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:20:47.553526 systemd-logind[2117]: New session 5 of user core. Jun 25 18:20:47.566109 systemd[1]: Started session-5.scope - Session 5 of User core. Jun 25 18:20:47.699068 sudo[2450]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jun 25 18:20:47.700213 sudo[2450]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jun 25 18:20:47.715963 sudo[2450]: pam_unix(sudo:session): session closed for user root Jun 25 18:20:47.739891 sshd[2446]: pam_unix(sshd:session): session closed for user core Jun 25 18:20:47.745674 systemd-logind[2117]: Session 5 logged out. Waiting for processes to exit. Jun 25 18:20:47.746346 systemd[1]: sshd@4-172.31.30.218:22-139.178.89.65:52812.service: Deactivated successfully. Jun 25 18:20:47.755504 systemd[1]: session-5.scope: Deactivated successfully. Jun 25 18:20:47.757121 systemd-logind[2117]: Removed session 5. Jun 25 18:20:47.773928 systemd[1]: Started sshd@5-172.31.30.218:22-139.178.89.65:52822.service - OpenSSH per-connection server daemon (139.178.89.65:52822). Jun 25 18:20:47.936194 sshd[2455]: Accepted publickey for core from 139.178.89.65 port 52822 ssh2: RSA SHA256:PKLEBy2HMUOnZIR0iwjDMXjr5aDYRcW0FQA7rLNVs1M Jun 25 18:20:47.938806 sshd[2455]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:20:47.947369 systemd-logind[2117]: New session 6 of user core. Jun 25 18:20:47.954027 systemd[1]: Started session-6.scope - Session 6 of User core. Jun 25 18:20:48.060321 sudo[2460]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jun 25 18:20:48.061431 sudo[2460]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jun 25 18:20:48.067668 sudo[2460]: pam_unix(sudo:session): session closed for user root Jun 25 18:20:48.077720 sudo[2459]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jun 25 18:20:48.078241 sudo[2459]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jun 25 18:20:48.103992 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jun 25 18:20:48.109224 auditctl[2463]: No rules Jun 25 18:20:48.110268 systemd[1]: audit-rules.service: Deactivated successfully. Jun 25 18:20:48.110900 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jun 25 18:20:48.134409 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jun 25 18:20:48.174713 augenrules[2482]: No rules Jun 25 18:20:48.178136 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jun 25 18:20:48.182838 sudo[2459]: pam_unix(sudo:session): session closed for user root Jun 25 18:20:48.206018 sshd[2455]: pam_unix(sshd:session): session closed for user core Jun 25 18:20:48.212735 systemd-logind[2117]: Session 6 logged out. 
Waiting for processes to exit. Jun 25 18:20:48.215066 systemd[1]: sshd@5-172.31.30.218:22-139.178.89.65:52822.service: Deactivated successfully. Jun 25 18:20:48.219962 systemd[1]: session-6.scope: Deactivated successfully. Jun 25 18:20:48.221178 systemd-logind[2117]: Removed session 6. Jun 25 18:20:48.238972 systemd[1]: Started sshd@6-172.31.30.218:22-139.178.89.65:52832.service - OpenSSH per-connection server daemon (139.178.89.65:52832). Jun 25 18:20:48.414505 sshd[2491]: Accepted publickey for core from 139.178.89.65 port 52832 ssh2: RSA SHA256:PKLEBy2HMUOnZIR0iwjDMXjr5aDYRcW0FQA7rLNVs1M Jun 25 18:20:48.416961 sshd[2491]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:20:48.426116 systemd-logind[2117]: New session 7 of user core. Jun 25 18:20:48.436117 systemd[1]: Started session-7.scope - Session 7 of User core. Jun 25 18:20:48.541959 sudo[2495]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jun 25 18:20:48.542534 sudo[2495]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jun 25 18:20:48.770953 systemd[1]: Starting docker.service - Docker Application Container Engine... Jun 25 18:20:48.774862 (dockerd)[2505]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jun 25 18:20:49.163741 dockerd[2505]: time="2024-06-25T18:20:49.163660897Z" level=info msg="Starting up" Jun 25 18:20:50.002863 dockerd[2505]: time="2024-06-25T18:20:50.002779910Z" level=info msg="Loading containers: start." Jun 25 18:20:50.180819 kernel: Initializing XFRM netlink socket Jun 25 18:20:50.244887 (udev-worker)[2517]: Network interface NamePolicy= disabled on kernel command line. Jun 25 18:20:50.335561 systemd-networkd[1695]: docker0: Link UP Jun 25 18:20:50.356570 dockerd[2505]: time="2024-06-25T18:20:50.356512683Z" level=info msg="Loading containers: done." Jun 25 18:20:50.472623 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2285377026-merged.mount: Deactivated successfully. Jun 25 18:20:50.479418 dockerd[2505]: time="2024-06-25T18:20:50.478346932Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jun 25 18:20:50.479418 dockerd[2505]: time="2024-06-25T18:20:50.478720720Z" level=info msg="Docker daemon" commit=fca702de7f71362c8d103073c7e4a1d0a467fadd graphdriver=overlay2 version=24.0.9 Jun 25 18:20:50.479418 dockerd[2505]: time="2024-06-25T18:20:50.478961332Z" level=info msg="Daemon has completed initialization" Jun 25 18:20:50.542163 dockerd[2505]: time="2024-06-25T18:20:50.542035336Z" level=info msg="API listen on /run/docker.sock" Jun 25 18:20:50.548841 systemd[1]: Started docker.service - Docker Application Container Engine. Jun 25 18:20:51.687390 containerd[2137]: time="2024-06-25T18:20:51.687159042Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.11\"" Jun 25 18:20:52.225552 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jun 25 18:20:52.232809 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 18:20:52.399218 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1763862173.mount: Deactivated successfully. Jun 25 18:20:52.849158 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
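dockerd has finished initializing above and is serving its API on /run/docker.sock with the overlay2 storage driver (the "Not using native diff" warning is only a performance note tied to CONFIG_OVERLAY_FS_REDIRECT_DIR). Purely as an illustration of the API that is now available, and not part of this boot, a short Docker Engine SDK sketch:

    // dockerping.go - illustrative: query the daemon whose "API listen on
    // /run/docker.sock" message appears above and print its reported version.
    package main

    import (
        "context"
        "fmt"

        "github.com/docker/docker/client"
    )

    func main() {
        // FromEnv falls back to the default unix socket when DOCKER_HOST is unset.
        cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
        if err != nil {
            panic(err)
        }
        defer cli.Close()

        v, err := cli.ServerVersion(context.Background())
        if err != nil {
            panic(err)
        }
        fmt.Printf("docker %s (API %s) on %s/%s\n", v.Version, v.APIVersion, v.Os, v.Arch)
    }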
Jun 25 18:20:52.868787 (kubelet)[2665]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 25 18:20:52.984292 kubelet[2665]: E0625 18:20:52.984182 2665 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 25 18:20:52.998726 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 25 18:20:52.999124 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 25 18:20:54.556057 containerd[2137]: time="2024-06-25T18:20:54.555985196Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:20:54.558205 containerd[2137]: time="2024-06-25T18:20:54.558142688Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.28.11: active requests=0, bytes read=31671538" Jun 25 18:20:54.559196 containerd[2137]: time="2024-06-25T18:20:54.559112828Z" level=info msg="ImageCreate event name:\"sha256:d2b5500cdb8d455434ebcaa569918eb0c5e68e82d75d4c85c509519786f24a8d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:20:54.564895 containerd[2137]: time="2024-06-25T18:20:54.564800492Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:aec9d1701c304eee8607d728a39baaa511d65bef6dd9861010618f63fbadeb10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:20:54.567445 containerd[2137]: time="2024-06-25T18:20:54.567207296Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.28.11\" with image id \"sha256:d2b5500cdb8d455434ebcaa569918eb0c5e68e82d75d4c85c509519786f24a8d\", repo tag \"registry.k8s.io/kube-apiserver:v1.28.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:aec9d1701c304eee8607d728a39baaa511d65bef6dd9861010618f63fbadeb10\", size \"31668338\" in 2.87999177s" Jun 25 18:20:54.567445 containerd[2137]: time="2024-06-25T18:20:54.567265460Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.11\" returns image reference \"sha256:d2b5500cdb8d455434ebcaa569918eb0c5e68e82d75d4c85c509519786f24a8d\"" Jun 25 18:20:54.607207 containerd[2137]: time="2024-06-25T18:20:54.607150736Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.11\"" Jun 25 18:20:56.476050 containerd[2137]: time="2024-06-25T18:20:56.475972954Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:20:56.482502 containerd[2137]: time="2024-06-25T18:20:56.481321666Z" level=info msg="ImageCreate event name:\"sha256:24cd2c3bd254238005fcc2fcc15e9e56347b218c10b8399a28d1bf813800266a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:20:56.482502 containerd[2137]: time="2024-06-25T18:20:56.481609402Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.28.11: active requests=0, bytes read=28893118" Jun 25 18:20:56.491967 containerd[2137]: time="2024-06-25T18:20:56.491906326Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:6014c3572ec683841bbb16f87b94da28ee0254b95e2dba2d1850d62bd0111f09\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 
18:20:56.494432 containerd[2137]: time="2024-06-25T18:20:56.494356210Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.28.11\" with image id \"sha256:24cd2c3bd254238005fcc2fcc15e9e56347b218c10b8399a28d1bf813800266a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.28.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:6014c3572ec683841bbb16f87b94da28ee0254b95e2dba2d1850d62bd0111f09\", size \"30445463\" in 1.887141118s" Jun 25 18:20:56.494432 containerd[2137]: time="2024-06-25T18:20:56.494423782Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.11\" returns image reference \"sha256:24cd2c3bd254238005fcc2fcc15e9e56347b218c10b8399a28d1bf813800266a\"" Jun 25 18:20:56.537124 containerd[2137]: time="2024-06-25T18:20:56.537066226Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.11\"" Jun 25 18:20:57.738615 containerd[2137]: time="2024-06-25T18:20:57.738537852Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:20:57.740763 containerd[2137]: time="2024-06-25T18:20:57.740684304Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.28.11: active requests=0, bytes read=15358438" Jun 25 18:20:57.742164 containerd[2137]: time="2024-06-25T18:20:57.742039200Z" level=info msg="ImageCreate event name:\"sha256:fdf13db9a96001adee7d1c69fd6849d6cd45fc3c138c95c8240d353eb79acf50\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:20:57.748529 containerd[2137]: time="2024-06-25T18:20:57.748397904Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:46cf7475c8daffb743c856a1aea0ddea35e5acd2418be18b1e22cf98d9c9b445\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:20:57.751115 containerd[2137]: time="2024-06-25T18:20:57.750907836Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.28.11\" with image id \"sha256:fdf13db9a96001adee7d1c69fd6849d6cd45fc3c138c95c8240d353eb79acf50\", repo tag \"registry.k8s.io/kube-scheduler:v1.28.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:46cf7475c8daffb743c856a1aea0ddea35e5acd2418be18b1e22cf98d9c9b445\", size \"16910801\" in 1.213775466s" Jun 25 18:20:57.751115 containerd[2137]: time="2024-06-25T18:20:57.750977484Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.11\" returns image reference \"sha256:fdf13db9a96001adee7d1c69fd6849d6cd45fc3c138c95c8240d353eb79acf50\"" Jun 25 18:20:57.791589 containerd[2137]: time="2024-06-25T18:20:57.791526468Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.11\"" Jun 25 18:20:59.217036 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2967937111.mount: Deactivated successfully. 
Jun 25 18:20:59.884112 containerd[2137]: time="2024-06-25T18:20:59.884020515Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:20:59.886647 containerd[2137]: time="2024-06-25T18:20:59.886566351Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.28.11: active requests=0, bytes read=24772461" Jun 25 18:20:59.889510 containerd[2137]: time="2024-06-25T18:20:59.888860955Z" level=info msg="ImageCreate event name:\"sha256:e195d3cf134bc9d64104f5e82e95fce811d55b1cdc9cb26fb8f52c8d107d1661\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:20:59.895844 containerd[2137]: time="2024-06-25T18:20:59.895742151Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ae4b671d4cfc23dd75030bb4490207cd939b3b11a799bcb4119698cd712eb5b4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:20:59.897392 containerd[2137]: time="2024-06-25T18:20:59.897316695Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.28.11\" with image id \"sha256:e195d3cf134bc9d64104f5e82e95fce811d55b1cdc9cb26fb8f52c8d107d1661\", repo tag \"registry.k8s.io/kube-proxy:v1.28.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:ae4b671d4cfc23dd75030bb4490207cd939b3b11a799bcb4119698cd712eb5b4\", size \"24771480\" in 2.105724947s" Jun 25 18:20:59.897392 containerd[2137]: time="2024-06-25T18:20:59.897386751Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.11\" returns image reference \"sha256:e195d3cf134bc9d64104f5e82e95fce811d55b1cdc9cb26fb8f52c8d107d1661\"" Jun 25 18:20:59.940503 containerd[2137]: time="2024-06-25T18:20:59.940406439Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jun 25 18:21:00.488949 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2767068001.mount: Deactivated successfully. 
Jun 25 18:21:00.497257 containerd[2137]: time="2024-06-25T18:21:00.496825958Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:21:00.498524 containerd[2137]: time="2024-06-25T18:21:00.498443618Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268821" Jun 25 18:21:00.499955 containerd[2137]: time="2024-06-25T18:21:00.499867370Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:21:00.504729 containerd[2137]: time="2024-06-25T18:21:00.504634454Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:21:00.506549 containerd[2137]: time="2024-06-25T18:21:00.506285930Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 565.821363ms" Jun 25 18:21:00.506549 containerd[2137]: time="2024-06-25T18:21:00.506344322Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Jun 25 18:21:00.547654 containerd[2137]: time="2024-06-25T18:21:00.547597502Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Jun 25 18:21:01.136773 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2701011591.mount: Deactivated successfully. Jun 25 18:21:03.225726 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jun 25 18:21:03.235860 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 18:21:04.191054 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 18:21:04.207697 (kubelet)[2812]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 25 18:21:04.318161 kubelet[2812]: E0625 18:21:04.317552 2812 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 25 18:21:04.328969 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 25 18:21:04.329370 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
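The kubelet failures at 18:20:41, 18:20:52 and 18:21:04 above are all the same condition: the unit is configured to read /var/lib/kubelet/config.yaml, which does not exist until kubeadm (or whatever provisions this node) writes it, so the process exits with status 1 and systemd keeps scheduling restart jobs. The sketch below reproduces just that check-and-exit path for illustration; it is not the kubelet's code:

    // kubeletcfgcheck.go - illustrative: the check behind the repeated
    // "failed to load kubelet config file" failures above.
    package main

    import (
        "fmt"
        "os"
    )

    func main() {
        const path = "/var/lib/kubelet/config.yaml"
        if _, err := os.Stat(path); err != nil {
            fmt.Fprintf(os.Stderr, "failed to load kubelet config file %s: %v\n", path, err)
            os.Exit(1) // systemd records status=1/FAILURE and schedules the next restart
        }
        fmt.Println("config present; the real kubelet would now parse and apply it")
    }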
Jun 25 18:21:05.282179 containerd[2137]: time="2024-06-25T18:21:05.282067722Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:21:05.284678 containerd[2137]: time="2024-06-25T18:21:05.284586450Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=65200786" Jun 25 18:21:05.285768 containerd[2137]: time="2024-06-25T18:21:05.285658674Z" level=info msg="ImageCreate event name:\"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:21:05.292397 containerd[2137]: time="2024-06-25T18:21:05.292284114Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:21:05.295149 containerd[2137]: time="2024-06-25T18:21:05.295074114Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"65198393\" in 4.747129956s" Jun 25 18:21:05.295835 containerd[2137]: time="2024-06-25T18:21:05.295363206Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\"" Jun 25 18:21:05.339545 containerd[2137]: time="2024-06-25T18:21:05.339477258Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\"" Jun 25 18:21:05.911361 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1409736392.mount: Deactivated successfully. 
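As a rough cross-check of the pull statistics containerd reports: the etcd image above is 65,198,393 bytes pulled in about 4.75 s, roughly 13.7 MB/s, and the earlier kube-apiserver pull (31,668,338 bytes in about 2.88 s) works out to about 11 MB/s, so the registry pulls proceed at a broadly similar rate throughout this stretch of the boot.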
Jun 25 18:21:06.426752 containerd[2137]: time="2024-06-25T18:21:06.426673267Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:21:06.428310 containerd[2137]: time="2024-06-25T18:21:06.428253943Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.10.1: active requests=0, bytes read=14558462" Jun 25 18:21:06.429824 containerd[2137]: time="2024-06-25T18:21:06.429732199Z" level=info msg="ImageCreate event name:\"sha256:97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:21:06.436385 containerd[2137]: time="2024-06-25T18:21:06.436286203Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:21:06.438215 containerd[2137]: time="2024-06-25T18:21:06.438047959Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.10.1\" with image id \"sha256:97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108\", repo tag \"registry.k8s.io/coredns/coredns:v1.10.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e\", size \"14557471\" in 1.098501581s" Jun 25 18:21:06.438215 containerd[2137]: time="2024-06-25T18:21:06.438102991Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\" returns image reference \"sha256:97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108\"" Jun 25 18:21:10.430782 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Jun 25 18:21:13.802884 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 18:21:13.814940 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 18:21:13.855201 systemd[1]: Reloading requested from client PID 2906 ('systemctl') (unit session-7.scope)... Jun 25 18:21:13.855242 systemd[1]: Reloading... Jun 25 18:21:14.065522 zram_generator::config[2946]: No configuration found. Jun 25 18:21:14.338756 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 25 18:21:14.508887 systemd[1]: Reloading finished in 652 ms. Jun 25 18:21:14.606160 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jun 25 18:21:14.606378 systemd[1]: kubelet.service: Failed with result 'signal'. Jun 25 18:21:14.607388 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 18:21:14.615321 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 18:21:14.924835 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 18:21:14.937210 (kubelet)[3019]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jun 25 18:21:15.029487 kubelet[3019]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 25 18:21:15.029487 kubelet[3019]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. 
Image garbage collector will get sandbox image information from CRI. Jun 25 18:21:15.029487 kubelet[3019]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 25 18:21:15.030066 kubelet[3019]: I0625 18:21:15.029610 3019 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jun 25 18:21:16.620187 kubelet[3019]: I0625 18:21:16.620126 3019 server.go:467] "Kubelet version" kubeletVersion="v1.28.7" Jun 25 18:21:16.620187 kubelet[3019]: I0625 18:21:16.620177 3019 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jun 25 18:21:16.620894 kubelet[3019]: I0625 18:21:16.620668 3019 server.go:895] "Client rotation is on, will bootstrap in background" Jun 25 18:21:16.648872 kubelet[3019]: I0625 18:21:16.648822 3019 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jun 25 18:21:16.653526 kubelet[3019]: E0625 18:21:16.652989 3019 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.30.218:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.30.218:6443: connect: connection refused Jun 25 18:21:16.665474 kubelet[3019]: W0625 18:21:16.665416 3019 machine.go:65] Cannot read vendor id correctly, set empty. Jun 25 18:21:16.666752 kubelet[3019]: I0625 18:21:16.666703 3019 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jun 25 18:21:16.667427 kubelet[3019]: I0625 18:21:16.667399 3019 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jun 25 18:21:16.667756 kubelet[3019]: I0625 18:21:16.667720 3019 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jun 25 18:21:16.668024 kubelet[3019]: I0625 18:21:16.667782 3019 topology_manager.go:138] "Creating topology manager with none policy" Jun 25 18:21:16.668024 kubelet[3019]: I0625 18:21:16.667804 3019 container_manager_linux.go:301] "Creating device plugin manager" Jun 25 18:21:16.668024 kubelet[3019]: I0625 18:21:16.668008 3019 state_mem.go:36] "Initialized new in-memory state store" Jun 25 18:21:16.671106 kubelet[3019]: I0625 18:21:16.671058 3019 kubelet.go:393] "Attempting to sync node with API server" Jun 25 18:21:16.672048 kubelet[3019]: I0625 18:21:16.671625 3019 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Jun 25 18:21:16.672048 kubelet[3019]: I0625 18:21:16.671710 3019 kubelet.go:309] "Adding apiserver pod source" Jun 25 18:21:16.672048 kubelet[3019]: I0625 18:21:16.671739 3019 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jun 25 18:21:16.672048 kubelet[3019]: W0625 18:21:16.671745 3019 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://172.31.30.218:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-30-218&limit=500&resourceVersion=0": dial tcp 172.31.30.218:6443: connect: connection refused Jun 25 18:21:16.672048 kubelet[3019]: E0625 18:21:16.671821 3019 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.30.218:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-30-218&limit=500&resourceVersion=0": dial tcp 172.31.30.218:6443: connect: connection refused Jun 25 18:21:16.673907 kubelet[3019]: W0625 18:21:16.673794 3019 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://172.31.30.218:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 
172.31.30.218:6443: connect: connection refused Jun 25 18:21:16.673907 kubelet[3019]: E0625 18:21:16.673869 3019 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.30.218:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.30.218:6443: connect: connection refused Jun 25 18:21:16.674786 kubelet[3019]: I0625 18:21:16.674273 3019 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="v1.7.18" apiVersion="v1" Jun 25 18:21:16.676836 kubelet[3019]: W0625 18:21:16.676795 3019 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jun 25 18:21:16.677981 kubelet[3019]: I0625 18:21:16.677950 3019 server.go:1232] "Started kubelet" Jun 25 18:21:16.682058 kubelet[3019]: I0625 18:21:16.682001 3019 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jun 25 18:21:16.683421 kubelet[3019]: I0625 18:21:16.683374 3019 server.go:462] "Adding debug handlers to kubelet server" Jun 25 18:21:16.691523 kubelet[3019]: I0625 18:21:16.690844 3019 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Jun 25 18:21:16.691523 kubelet[3019]: I0625 18:21:16.691336 3019 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jun 25 18:21:16.692503 kubelet[3019]: E0625 18:21:16.692298 3019 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ip-172-31-30-218.17dc5253949f3366", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ip-172-31-30-218", UID:"ip-172-31-30-218", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"ip-172-31-30-218"}, FirstTimestamp:time.Date(2024, time.June, 25, 18, 21, 16, 677911398, time.Local), LastTimestamp:time.Date(2024, time.June, 25, 18, 21, 16, 677911398, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"ip-172-31-30-218"}': 'Post "https://172.31.30.218:6443/api/v1/namespaces/default/events": dial tcp 172.31.30.218:6443: connect: connection refused'(may retry after sleeping) Jun 25 18:21:16.693501 kubelet[3019]: I0625 18:21:16.693428 3019 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jun 25 18:21:16.704181 kubelet[3019]: I0625 18:21:16.704113 3019 volume_manager.go:291] "Starting Kubelet Volume Manager" Jun 25 18:21:16.704932 kubelet[3019]: I0625 18:21:16.704878 3019 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jun 25 18:21:16.705061 kubelet[3019]: I0625 18:21:16.705023 3019 reconciler_new.go:29] "Reconciler: start to sync state" Jun 25 18:21:16.709556 kubelet[3019]: W0625 18:21:16.708934 3019 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed 
to list *v1.CSIDriver: Get "https://172.31.30.218:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.30.218:6443: connect: connection refused Jun 25 18:21:16.709556 kubelet[3019]: E0625 18:21:16.709040 3019 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.30.218:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.30.218:6443: connect: connection refused Jun 25 18:21:16.710415 kubelet[3019]: E0625 18:21:16.710360 3019 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.30.218:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-30-218?timeout=10s\": dial tcp 172.31.30.218:6443: connect: connection refused" interval="200ms" Jun 25 18:21:16.711424 kubelet[3019]: E0625 18:21:16.711364 3019 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Jun 25 18:21:16.711424 kubelet[3019]: E0625 18:21:16.711427 3019 kubelet.go:1431] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jun 25 18:21:16.738951 kubelet[3019]: I0625 18:21:16.738745 3019 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jun 25 18:21:16.741117 kubelet[3019]: I0625 18:21:16.741083 3019 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jun 25 18:21:16.741471 kubelet[3019]: I0625 18:21:16.741286 3019 status_manager.go:217] "Starting to sync pod status with apiserver" Jun 25 18:21:16.741471 kubelet[3019]: I0625 18:21:16.741360 3019 kubelet.go:2303] "Starting kubelet main sync loop" Jun 25 18:21:16.742037 kubelet[3019]: E0625 18:21:16.741447 3019 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jun 25 18:21:16.746863 kubelet[3019]: W0625 18:21:16.746816 3019 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://172.31.30.218:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.30.218:6443: connect: connection refused Jun 25 18:21:16.747071 kubelet[3019]: E0625 18:21:16.747050 3019 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.30.218:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.30.218:6443: connect: connection refused Jun 25 18:21:16.809550 kubelet[3019]: I0625 18:21:16.809509 3019 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-30-218" Jun 25 18:21:16.810747 kubelet[3019]: E0625 18:21:16.810716 3019 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://172.31.30.218:6443/api/v1/nodes\": dial tcp 172.31.30.218:6443: connect: connection refused" node="ip-172-31-30-218" Jun 25 18:21:16.812948 kubelet[3019]: I0625 18:21:16.812909 3019 cpu_manager.go:214] "Starting CPU manager" policy="none" Jun 25 18:21:16.812948 kubelet[3019]: I0625 18:21:16.812947 3019 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jun 25 18:21:16.813150 kubelet[3019]: I0625 18:21:16.812983 3019 state_mem.go:36] "Initialized new in-memory state store" 
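The reflector lists, the lease request, and the node-registration attempt above all fail with the same "dial tcp 172.31.30.218:6443: connect: connection refused", since this kubelet comes up before the static kube-apiserver pod it is about to create (the sandbox creation follows below). As a minimal, hypothetical illustration — not part of the kubelet or Flatcar — the Go sketch below performs the same TCP dial and retries until the endpoint answers; the address is taken from the log, everything else is assumed.

```go
// probe.go: hypothetical connectivity probe for the API server endpoint seen in
// the log (172.31.30.218:6443). It reproduces the "connect: connection refused"
// failure mode until the static kube-apiserver pod starts listening.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	addr := "172.31.30.218:6443"
	for attempt := 1; attempt <= 5; attempt++ {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err != nil {
			// Same error the kubelet's reflectors and lease controller report.
			fmt.Printf("attempt %d: %v\n", attempt, err)
			time.Sleep(time.Second)
			continue
		}
		conn.Close()
		fmt.Println("API server is reachable")
		return
	}
}
```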
Jun 25 18:21:16.819735 kubelet[3019]: I0625 18:21:16.819681 3019 policy_none.go:49] "None policy: Start" Jun 25 18:21:16.821239 kubelet[3019]: I0625 18:21:16.820716 3019 memory_manager.go:169] "Starting memorymanager" policy="None" Jun 25 18:21:16.821239 kubelet[3019]: I0625 18:21:16.820761 3019 state_mem.go:35] "Initializing new in-memory state store" Jun 25 18:21:16.829779 kubelet[3019]: I0625 18:21:16.829736 3019 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jun 25 18:21:16.830341 kubelet[3019]: I0625 18:21:16.830307 3019 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jun 25 18:21:16.839327 kubelet[3019]: E0625 18:21:16.839191 3019 eviction_manager.go:258] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-30-218\" not found" Jun 25 18:21:16.842499 kubelet[3019]: I0625 18:21:16.842324 3019 topology_manager.go:215] "Topology Admit Handler" podUID="b7aacfd02749f4f162a2c6fd595f75c5" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-30-218" Jun 25 18:21:16.845033 kubelet[3019]: I0625 18:21:16.844864 3019 topology_manager.go:215] "Topology Admit Handler" podUID="14ef605c96148bd87aa44b5bba04bdc8" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-30-218" Jun 25 18:21:16.847370 kubelet[3019]: I0625 18:21:16.847327 3019 topology_manager.go:215] "Topology Admit Handler" podUID="b435c6c1626511aa8a50ea34546a2295" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-30-218" Jun 25 18:21:16.911270 kubelet[3019]: E0625 18:21:16.911140 3019 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.30.218:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-30-218?timeout=10s\": dial tcp 172.31.30.218:6443: connect: connection refused" interval="400ms" Jun 25 18:21:17.006217 kubelet[3019]: I0625 18:21:17.006075 3019 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b435c6c1626511aa8a50ea34546a2295-kubeconfig\") pod \"kube-scheduler-ip-172-31-30-218\" (UID: \"b435c6c1626511aa8a50ea34546a2295\") " pod="kube-system/kube-scheduler-ip-172-31-30-218" Jun 25 18:21:17.006217 kubelet[3019]: I0625 18:21:17.006171 3019 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b7aacfd02749f4f162a2c6fd595f75c5-ca-certs\") pod \"kube-apiserver-ip-172-31-30-218\" (UID: \"b7aacfd02749f4f162a2c6fd595f75c5\") " pod="kube-system/kube-apiserver-ip-172-31-30-218" Jun 25 18:21:17.006217 kubelet[3019]: I0625 18:21:17.006220 3019 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b7aacfd02749f4f162a2c6fd595f75c5-k8s-certs\") pod \"kube-apiserver-ip-172-31-30-218\" (UID: \"b7aacfd02749f4f162a2c6fd595f75c5\") " pod="kube-system/kube-apiserver-ip-172-31-30-218" Jun 25 18:21:17.006613 kubelet[3019]: I0625 18:21:17.006278 3019 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b7aacfd02749f4f162a2c6fd595f75c5-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-30-218\" (UID: \"b7aacfd02749f4f162a2c6fd595f75c5\") " pod="kube-system/kube-apiserver-ip-172-31-30-218" Jun 25 18:21:17.006613 kubelet[3019]: I0625 
18:21:17.006329 3019 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/14ef605c96148bd87aa44b5bba04bdc8-ca-certs\") pod \"kube-controller-manager-ip-172-31-30-218\" (UID: \"14ef605c96148bd87aa44b5bba04bdc8\") " pod="kube-system/kube-controller-manager-ip-172-31-30-218" Jun 25 18:21:17.006613 kubelet[3019]: I0625 18:21:17.006381 3019 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/14ef605c96148bd87aa44b5bba04bdc8-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-30-218\" (UID: \"14ef605c96148bd87aa44b5bba04bdc8\") " pod="kube-system/kube-controller-manager-ip-172-31-30-218" Jun 25 18:21:17.006613 kubelet[3019]: I0625 18:21:17.006427 3019 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/14ef605c96148bd87aa44b5bba04bdc8-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-30-218\" (UID: \"14ef605c96148bd87aa44b5bba04bdc8\") " pod="kube-system/kube-controller-manager-ip-172-31-30-218" Jun 25 18:21:17.006613 kubelet[3019]: I0625 18:21:17.006513 3019 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/14ef605c96148bd87aa44b5bba04bdc8-k8s-certs\") pod \"kube-controller-manager-ip-172-31-30-218\" (UID: \"14ef605c96148bd87aa44b5bba04bdc8\") " pod="kube-system/kube-controller-manager-ip-172-31-30-218" Jun 25 18:21:17.006903 kubelet[3019]: I0625 18:21:17.006570 3019 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/14ef605c96148bd87aa44b5bba04bdc8-kubeconfig\") pod \"kube-controller-manager-ip-172-31-30-218\" (UID: \"14ef605c96148bd87aa44b5bba04bdc8\") " pod="kube-system/kube-controller-manager-ip-172-31-30-218" Jun 25 18:21:17.014261 kubelet[3019]: I0625 18:21:17.014054 3019 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-30-218" Jun 25 18:21:17.014924 kubelet[3019]: E0625 18:21:17.014889 3019 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://172.31.30.218:6443/api/v1/nodes\": dial tcp 172.31.30.218:6443: connect: connection refused" node="ip-172-31-30-218" Jun 25 18:21:17.156521 containerd[2137]: time="2024-06-25T18:21:17.156422440Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-30-218,Uid:b7aacfd02749f4f162a2c6fd595f75c5,Namespace:kube-system,Attempt:0,}" Jun 25 18:21:17.161805 containerd[2137]: time="2024-06-25T18:21:17.161551169Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-30-218,Uid:14ef605c96148bd87aa44b5bba04bdc8,Namespace:kube-system,Attempt:0,}" Jun 25 18:21:17.176201 containerd[2137]: time="2024-06-25T18:21:17.176089409Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-30-218,Uid:b435c6c1626511aa8a50ea34546a2295,Namespace:kube-system,Attempt:0,}" Jun 25 18:21:17.313025 kubelet[3019]: E0625 18:21:17.312966 3019 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.30.218:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-30-218?timeout=10s\": dial tcp 172.31.30.218:6443: connect: connection refused" interval="800ms" Jun 
25 18:21:17.418014 kubelet[3019]: I0625 18:21:17.417321 3019 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-30-218" Jun 25 18:21:17.418014 kubelet[3019]: E0625 18:21:17.417943 3019 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://172.31.30.218:6443/api/v1/nodes\": dial tcp 172.31.30.218:6443: connect: connection refused" node="ip-172-31-30-218" Jun 25 18:21:17.699368 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount627465787.mount: Deactivated successfully. Jun 25 18:21:17.707486 containerd[2137]: time="2024-06-25T18:21:17.707337751Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 18:21:17.712969 containerd[2137]: time="2024-06-25T18:21:17.712894147Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Jun 25 18:21:17.721526 containerd[2137]: time="2024-06-25T18:21:17.721392919Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 18:21:17.726725 containerd[2137]: time="2024-06-25T18:21:17.726616123Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 18:21:17.728855 containerd[2137]: time="2024-06-25T18:21:17.728721715Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 18:21:17.730641 containerd[2137]: time="2024-06-25T18:21:17.730415863Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jun 25 18:21:17.732771 containerd[2137]: time="2024-06-25T18:21:17.732690007Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jun 25 18:21:17.740572 containerd[2137]: time="2024-06-25T18:21:17.740415463Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 18:21:17.745438 containerd[2137]: time="2024-06-25T18:21:17.744918667Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 583.211858ms" Jun 25 18:21:17.751171 containerd[2137]: time="2024-06-25T18:21:17.751084207Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 594.466263ms" Jun 25 18:21:17.753318 containerd[2137]: time="2024-06-25T18:21:17.752826235Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id 
\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 576.09149ms" Jun 25 18:21:17.949506 kubelet[3019]: W0625 18:21:17.949398 3019 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://172.31.30.218:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.30.218:6443: connect: connection refused Jun 25 18:21:17.950222 kubelet[3019]: E0625 18:21:17.949521 3019 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.30.218:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.30.218:6443: connect: connection refused Jun 25 18:21:18.009124 containerd[2137]: time="2024-06-25T18:21:18.008443661Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:21:18.009124 containerd[2137]: time="2024-06-25T18:21:18.008695781Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:21:18.009124 containerd[2137]: time="2024-06-25T18:21:18.008774237Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:21:18.009124 containerd[2137]: time="2024-06-25T18:21:18.008823233Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:21:18.022482 containerd[2137]: time="2024-06-25T18:21:18.022252457Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:21:18.023841 containerd[2137]: time="2024-06-25T18:21:18.023428397Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:21:18.023841 containerd[2137]: time="2024-06-25T18:21:18.023490893Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:21:18.023841 containerd[2137]: time="2024-06-25T18:21:18.023591381Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:21:18.023841 containerd[2137]: time="2024-06-25T18:21:18.023623769Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:21:18.023841 containerd[2137]: time="2024-06-25T18:21:18.023649101Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:21:18.024679 containerd[2137]: time="2024-06-25T18:21:18.023818193Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:21:18.026325 containerd[2137]: time="2024-06-25T18:21:18.026015705Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:21:18.114383 kubelet[3019]: E0625 18:21:18.114324 3019 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.30.218:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-30-218?timeout=10s\": dial tcp 172.31.30.218:6443: connect: connection refused" interval="1.6s" Jun 25 18:21:18.157252 kubelet[3019]: W0625 18:21:18.156988 3019 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://172.31.30.218:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.30.218:6443: connect: connection refused Jun 25 18:21:18.157252 kubelet[3019]: E0625 18:21:18.157085 3019 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.30.218:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.30.218:6443: connect: connection refused Jun 25 18:21:18.171526 kubelet[3019]: W0625 18:21:18.171370 3019 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://172.31.30.218:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-30-218&limit=500&resourceVersion=0": dial tcp 172.31.30.218:6443: connect: connection refused Jun 25 18:21:18.171526 kubelet[3019]: E0625 18:21:18.171487 3019 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.30.218:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-30-218&limit=500&resourceVersion=0": dial tcp 172.31.30.218:6443: connect: connection refused Jun 25 18:21:18.193491 containerd[2137]: time="2024-06-25T18:21:18.193082742Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-30-218,Uid:b435c6c1626511aa8a50ea34546a2295,Namespace:kube-system,Attempt:0,} returns sandbox id \"cc367f281e2cb6239834933ad2f6ba8ee769955eb6b80b784311725cb237b59d\"" Jun 25 18:21:18.205335 containerd[2137]: time="2024-06-25T18:21:18.204829350Z" level=info msg="CreateContainer within sandbox \"cc367f281e2cb6239834933ad2f6ba8ee769955eb6b80b784311725cb237b59d\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jun 25 18:21:18.207300 containerd[2137]: time="2024-06-25T18:21:18.206836974Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-30-218,Uid:14ef605c96148bd87aa44b5bba04bdc8,Namespace:kube-system,Attempt:0,} returns sandbox id \"c809031336c8fbb8d6f95f4aabb894d95f85c3a25e551f1462511cc090181811\"" Jun 25 18:21:18.215170 containerd[2137]: time="2024-06-25T18:21:18.215099862Z" level=info msg="CreateContainer within sandbox \"c809031336c8fbb8d6f95f4aabb894d95f85c3a25e551f1462511cc090181811\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jun 25 18:21:18.222179 kubelet[3019]: I0625 18:21:18.221623 3019 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-30-218" Jun 25 18:21:18.222179 kubelet[3019]: E0625 18:21:18.222113 3019 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://172.31.30.218:6443/api/v1/nodes\": dial tcp 172.31.30.218:6443: connect: connection refused" node="ip-172-31-30-218" Jun 25 18:21:18.230573 containerd[2137]: time="2024-06-25T18:21:18.224644746Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-apiserver-ip-172-31-30-218,Uid:b7aacfd02749f4f162a2c6fd595f75c5,Namespace:kube-system,Attempt:0,} returns sandbox id \"f594fe18ee9525b410334056db807ef5f0c80f645591e04ad02af3a5b9121f83\"" Jun 25 18:21:18.235114 containerd[2137]: time="2024-06-25T18:21:18.234888246Z" level=info msg="CreateContainer within sandbox \"f594fe18ee9525b410334056db807ef5f0c80f645591e04ad02af3a5b9121f83\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jun 25 18:21:18.250441 containerd[2137]: time="2024-06-25T18:21:18.250348686Z" level=info msg="CreateContainer within sandbox \"cc367f281e2cb6239834933ad2f6ba8ee769955eb6b80b784311725cb237b59d\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"9804bec19c95056dd107242248f1a67db94f381bbb7147cacfb25fccca160f8e\"" Jun 25 18:21:18.251984 containerd[2137]: time="2024-06-25T18:21:18.251526798Z" level=info msg="StartContainer for \"9804bec19c95056dd107242248f1a67db94f381bbb7147cacfb25fccca160f8e\"" Jun 25 18:21:18.252931 kubelet[3019]: W0625 18:21:18.252801 3019 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://172.31.30.218:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.30.218:6443: connect: connection refused Jun 25 18:21:18.252931 kubelet[3019]: E0625 18:21:18.252894 3019 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.30.218:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.30.218:6443: connect: connection refused Jun 25 18:21:18.255190 containerd[2137]: time="2024-06-25T18:21:18.255130686Z" level=info msg="CreateContainer within sandbox \"c809031336c8fbb8d6f95f4aabb894d95f85c3a25e551f1462511cc090181811\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"4531f26531ce047d111eccbbc4a77d378ead54c8ac1246aff79d7c4a08ece385\"" Jun 25 18:21:18.257565 containerd[2137]: time="2024-06-25T18:21:18.256105242Z" level=info msg="StartContainer for \"4531f26531ce047d111eccbbc4a77d378ead54c8ac1246aff79d7c4a08ece385\"" Jun 25 18:21:18.269941 containerd[2137]: time="2024-06-25T18:21:18.269835558Z" level=info msg="CreateContainer within sandbox \"f594fe18ee9525b410334056db807ef5f0c80f645591e04ad02af3a5b9121f83\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"0845b4ec4ff260a2fdb14ab0e7c688617cba369082e2174930593d1e0c6df707\"" Jun 25 18:21:18.270710 containerd[2137]: time="2024-06-25T18:21:18.270659430Z" level=info msg="StartContainer for \"0845b4ec4ff260a2fdb14ab0e7c688617cba369082e2174930593d1e0c6df707\"" Jun 25 18:21:18.473867 containerd[2137]: time="2024-06-25T18:21:18.473124463Z" level=info msg="StartContainer for \"4531f26531ce047d111eccbbc4a77d378ead54c8ac1246aff79d7c4a08ece385\" returns successfully" Jun 25 18:21:18.491339 containerd[2137]: time="2024-06-25T18:21:18.490477639Z" level=info msg="StartContainer for \"0845b4ec4ff260a2fdb14ab0e7c688617cba369082e2174930593d1e0c6df707\" returns successfully" Jun 25 18:21:18.508352 containerd[2137]: time="2024-06-25T18:21:18.508285003Z" level=info msg="StartContainer for \"9804bec19c95056dd107242248f1a67db94f381bbb7147cacfb25fccca160f8e\" returns successfully" Jun 25 18:21:19.827535 kubelet[3019]: I0625 18:21:19.825371 3019 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-30-218" Jun 25 18:21:22.476917 kubelet[3019]: E0625 
18:21:22.476854 3019 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-30-218\" not found" node="ip-172-31-30-218" Jun 25 18:21:22.530070 kubelet[3019]: I0625 18:21:22.530002 3019 kubelet_node_status.go:73] "Successfully registered node" node="ip-172-31-30-218" Jun 25 18:21:22.589020 kubelet[3019]: E0625 18:21:22.588867 3019 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ip-172-31-30-218.17dc5253949f3366", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ip-172-31-30-218", UID:"ip-172-31-30-218", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"ip-172-31-30-218"}, FirstTimestamp:time.Date(2024, time.June, 25, 18, 21, 16, 677911398, time.Local), LastTimestamp:time.Date(2024, time.June, 25, 18, 21, 16, 677911398, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"ip-172-31-30-218"}': 'namespaces "default" not found' (will not retry!) Jun 25 18:21:22.664862 kubelet[3019]: E0625 18:21:22.664418 3019 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ip-172-31-30-218.17dc5253969e474e", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ip-172-31-30-218", UID:"ip-172-31-30-218", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"InvalidDiskCapacity", Message:"invalid capacity 0 on image filesystem", Source:v1.EventSource{Component:"kubelet", Host:"ip-172-31-30-218"}, FirstTimestamp:time.Date(2024, time.June, 25, 18, 21, 16, 711405390, time.Local), LastTimestamp:time.Date(2024, time.June, 25, 18, 21, 16, 711405390, time.Local), Count:1, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"ip-172-31-30-218"}': 'namespaces "default" not found' (will not retry!) Jun 25 18:21:22.677034 kubelet[3019]: I0625 18:21:22.676956 3019 apiserver.go:52] "Watching apiserver" Jun 25 18:21:22.707485 kubelet[3019]: I0625 18:21:22.705503 3019 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jun 25 18:21:23.654108 update_engine[2118]: I0625 18:21:23.653640 2118 update_attempter.cc:509] Updating boot flags... 
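By this point the registration has succeeded ("Successfully registered node"), and the rejected Starting/InvalidDiskCapacity events are only a symptom of the "default" namespace not existing yet. A hedged client-go sketch for confirming what the log reports is shown below: it fetches the Node object and the kube-node-lease Lease that the kubelet was previously failing to ensure. The node name comes from the log; the kubeconfig path is an assumption.

```go
// inspect.go: hypothetical read-only check of the node and lease objects named
// in the log above; assumes an admin kubeconfig at the given path.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed location; any kubeconfig with read access to the cluster works.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.TODO()

	// The Node object the kubelet registered (name taken from the log).
	node, err := cs.CoreV1().Nodes().Get(ctx, "ip-172-31-30-218", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("node registered:", node.Name)

	// The Lease the kubelet's lease controller retried earlier.
	lease, err := cs.CoordinationV1().Leases("kube-node-lease").Get(ctx, "ip-172-31-30-218", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("lease holder:", *lease.Spec.HolderIdentity)
}
```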
Jun 25 18:21:23.893597 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 36 scanned by (udev-worker) (3309) Jun 25 18:21:24.398556 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 36 scanned by (udev-worker) (3311) Jun 25 18:21:25.612949 systemd[1]: Reloading requested from client PID 3478 ('systemctl') (unit session-7.scope)... Jun 25 18:21:25.612984 systemd[1]: Reloading... Jun 25 18:21:25.788523 zram_generator::config[3519]: No configuration found. Jun 25 18:21:26.080166 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 25 18:21:26.276349 systemd[1]: Reloading finished in 662 ms. Jun 25 18:21:26.343117 kubelet[3019]: I0625 18:21:26.342941 3019 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jun 25 18:21:26.343966 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 18:21:26.363127 systemd[1]: kubelet.service: Deactivated successfully. Jun 25 18:21:26.365067 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 18:21:26.374162 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 18:21:26.798766 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 18:21:26.811970 (kubelet)[3586]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jun 25 18:21:26.949509 kubelet[3586]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 25 18:21:26.949509 kubelet[3586]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jun 25 18:21:26.949509 kubelet[3586]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 25 18:21:26.949509 kubelet[3586]: I0625 18:21:26.948637 3586 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jun 25 18:21:26.959440 kubelet[3586]: I0625 18:21:26.959382 3586 server.go:467] "Kubelet version" kubeletVersion="v1.28.7" Jun 25 18:21:26.959440 kubelet[3586]: I0625 18:21:26.959433 3586 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jun 25 18:21:26.959977 kubelet[3586]: I0625 18:21:26.959801 3586 server.go:895] "Client rotation is on, will bootstrap in background" Jun 25 18:21:26.965163 kubelet[3586]: I0625 18:21:26.965034 3586 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jun 25 18:21:26.969056 kubelet[3586]: I0625 18:21:26.968004 3586 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jun 25 18:21:26.987101 kubelet[3586]: W0625 18:21:26.986625 3586 machine.go:65] Cannot read vendor id correctly, set empty. 
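The second kubelet instance (PID 3586) starts with client certificate rotation enabled and loads its rotated pair from /var/lib/kubelet/pki/kubelet-client-current.pem, as logged above. A small, hypothetical Go sketch for inspecting that bundle follows; only the file path is taken from the log, the rest is assumed and is not part of the kubelet itself.

```go
// certinfo.go: hypothetical inspection of the rotated kubelet client
// certificate bundle; prints subject and validity for each CERTIFICATE block.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	data, err := os.ReadFile("/var/lib/kubelet/pki/kubelet-client-current.pem")
	if err != nil {
		panic(err)
	}
	// The file bundles the client certificate and private key; walk the PEM
	// blocks and report only the certificates.
	for {
		var block *pem.Block
		block, data = pem.Decode(data)
		if block == nil {
			break
		}
		if block.Type != "CERTIFICATE" {
			continue
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		fmt.Printf("subject=%s notBefore=%s notAfter=%s\n",
			cert.Subject, cert.NotBefore, cert.NotAfter)
	}
}
```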
Jun 25 18:21:26.990342 kubelet[3586]: I0625 18:21:26.990271 3586 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jun 25 18:21:26.991708 kubelet[3586]: I0625 18:21:26.991655 3586 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jun 25 18:21:26.993314 kubelet[3586]: I0625 18:21:26.993229 3586 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jun 25 18:21:26.993541 kubelet[3586]: I0625 18:21:26.993335 3586 topology_manager.go:138] "Creating topology manager with none policy" Jun 25 18:21:26.993541 kubelet[3586]: I0625 18:21:26.993359 3586 container_manager_linux.go:301] "Creating device plugin manager" Jun 25 18:21:26.993541 kubelet[3586]: I0625 18:21:26.993432 3586 state_mem.go:36] "Initialized new in-memory state store" Jun 25 18:21:26.994329 kubelet[3586]: I0625 18:21:26.994280 3586 kubelet.go:393] "Attempting to sync node with API server" Jun 25 18:21:26.996502 kubelet[3586]: I0625 18:21:26.995220 3586 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Jun 25 18:21:26.996502 kubelet[3586]: I0625 18:21:26.995306 3586 kubelet.go:309] "Adding apiserver pod source" Jun 25 18:21:26.996502 kubelet[3586]: I0625 18:21:26.995334 3586 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jun 25 18:21:27.005824 kubelet[3586]: I0625 18:21:27.005759 3586 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="v1.7.18" apiVersion="v1" Jun 25 18:21:27.010482 kubelet[3586]: I0625 18:21:27.009978 3586 server.go:1232] "Started kubelet" Jun 25 18:21:27.015523 kubelet[3586]: I0625 18:21:27.015479 3586 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jun 25 18:21:27.017117 kubelet[3586]: I0625 18:21:27.017080 3586 server.go:462] "Adding debug handlers to kubelet server" Jun 25 18:21:27.019488 kubelet[3586]: I0625 18:21:27.019394 3586 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Jun 25 18:21:27.020072 kubelet[3586]: I0625 18:21:27.019803 3586 server.go:233] "Starting to serve the podresources API" 
endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jun 25 18:21:27.021981 kubelet[3586]: I0625 18:21:27.021940 3586 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jun 25 18:21:27.039542 kubelet[3586]: I0625 18:21:27.039407 3586 volume_manager.go:291] "Starting Kubelet Volume Manager" Jun 25 18:21:27.042105 kubelet[3586]: I0625 18:21:27.041872 3586 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jun 25 18:21:27.056622 kubelet[3586]: I0625 18:21:27.051550 3586 reconciler_new.go:29] "Reconciler: start to sync state" Jun 25 18:21:27.059501 kubelet[3586]: E0625 18:21:27.059270 3586 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Jun 25 18:21:27.059501 kubelet[3586]: E0625 18:21:27.059326 3586 kubelet.go:1431] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jun 25 18:21:27.092802 kubelet[3586]: I0625 18:21:27.092762 3586 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jun 25 18:21:27.095985 kubelet[3586]: I0625 18:21:27.095378 3586 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jun 25 18:21:27.095985 kubelet[3586]: I0625 18:21:27.095427 3586 status_manager.go:217] "Starting to sync pod status with apiserver" Jun 25 18:21:27.095985 kubelet[3586]: I0625 18:21:27.095488 3586 kubelet.go:2303] "Starting kubelet main sync loop" Jun 25 18:21:27.095985 kubelet[3586]: E0625 18:21:27.095595 3586 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jun 25 18:21:27.156855 kubelet[3586]: I0625 18:21:27.156800 3586 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-30-218" Jun 25 18:21:27.183755 kubelet[3586]: I0625 18:21:27.183640 3586 kubelet_node_status.go:108] "Node was previously registered" node="ip-172-31-30-218" Jun 25 18:21:27.183932 kubelet[3586]: I0625 18:21:27.183784 3586 kubelet_node_status.go:73] "Successfully registered node" node="ip-172-31-30-218" Jun 25 18:21:27.197072 kubelet[3586]: E0625 18:21:27.195792 3586 kubelet.go:2327] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jun 25 18:21:27.358824 kubelet[3586]: I0625 18:21:27.358065 3586 cpu_manager.go:214] "Starting CPU manager" policy="none" Jun 25 18:21:27.358824 kubelet[3586]: I0625 18:21:27.358115 3586 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jun 25 18:21:27.358824 kubelet[3586]: I0625 18:21:27.358152 3586 state_mem.go:36] "Initialized new in-memory state store" Jun 25 18:21:27.358824 kubelet[3586]: I0625 18:21:27.358439 3586 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jun 25 18:21:27.358824 kubelet[3586]: I0625 18:21:27.358566 3586 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jun 25 18:21:27.358824 kubelet[3586]: I0625 18:21:27.358588 3586 policy_none.go:49] "None policy: Start" Jun 25 18:21:27.363774 kubelet[3586]: I0625 18:21:27.363723 3586 memory_manager.go:169] "Starting memorymanager" policy="None" Jun 25 18:21:27.363952 kubelet[3586]: I0625 18:21:27.363786 3586 state_mem.go:35] "Initializing new in-memory state store" Jun 25 18:21:27.366677 kubelet[3586]: I0625 18:21:27.366622 3586 state_mem.go:75] "Updated machine memory state" Jun 25 18:21:27.372185 kubelet[3586]: I0625 
18:21:27.371522 3586 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jun 25 18:21:27.376100 kubelet[3586]: I0625 18:21:27.375225 3586 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jun 25 18:21:27.396566 kubelet[3586]: I0625 18:21:27.396514 3586 topology_manager.go:215] "Topology Admit Handler" podUID="b435c6c1626511aa8a50ea34546a2295" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-30-218" Jun 25 18:21:27.396983 kubelet[3586]: I0625 18:21:27.396939 3586 topology_manager.go:215] "Topology Admit Handler" podUID="b7aacfd02749f4f162a2c6fd595f75c5" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-30-218" Jun 25 18:21:27.400726 kubelet[3586]: I0625 18:21:27.400680 3586 topology_manager.go:215] "Topology Admit Handler" podUID="14ef605c96148bd87aa44b5bba04bdc8" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-30-218" Jun 25 18:21:27.424601 kubelet[3586]: E0625 18:21:27.423913 3586 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ip-172-31-30-218\" already exists" pod="kube-system/kube-controller-manager-ip-172-31-30-218" Jun 25 18:21:27.459351 kubelet[3586]: I0625 18:21:27.458797 3586 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b7aacfd02749f4f162a2c6fd595f75c5-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-30-218\" (UID: \"b7aacfd02749f4f162a2c6fd595f75c5\") " pod="kube-system/kube-apiserver-ip-172-31-30-218" Jun 25 18:21:27.459351 kubelet[3586]: I0625 18:21:27.458872 3586 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/14ef605c96148bd87aa44b5bba04bdc8-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-30-218\" (UID: \"14ef605c96148bd87aa44b5bba04bdc8\") " pod="kube-system/kube-controller-manager-ip-172-31-30-218" Jun 25 18:21:27.459351 kubelet[3586]: I0625 18:21:27.458920 3586 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/14ef605c96148bd87aa44b5bba04bdc8-kubeconfig\") pod \"kube-controller-manager-ip-172-31-30-218\" (UID: \"14ef605c96148bd87aa44b5bba04bdc8\") " pod="kube-system/kube-controller-manager-ip-172-31-30-218" Jun 25 18:21:27.459351 kubelet[3586]: I0625 18:21:27.458965 3586 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b7aacfd02749f4f162a2c6fd595f75c5-ca-certs\") pod \"kube-apiserver-ip-172-31-30-218\" (UID: \"b7aacfd02749f4f162a2c6fd595f75c5\") " pod="kube-system/kube-apiserver-ip-172-31-30-218" Jun 25 18:21:27.459351 kubelet[3586]: I0625 18:21:27.459011 3586 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b7aacfd02749f4f162a2c6fd595f75c5-k8s-certs\") pod \"kube-apiserver-ip-172-31-30-218\" (UID: \"b7aacfd02749f4f162a2c6fd595f75c5\") " pod="kube-system/kube-apiserver-ip-172-31-30-218" Jun 25 18:21:27.459811 kubelet[3586]: I0625 18:21:27.459054 3586 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/14ef605c96148bd87aa44b5bba04bdc8-ca-certs\") pod 
\"kube-controller-manager-ip-172-31-30-218\" (UID: \"14ef605c96148bd87aa44b5bba04bdc8\") " pod="kube-system/kube-controller-manager-ip-172-31-30-218" Jun 25 18:21:27.459811 kubelet[3586]: I0625 18:21:27.459101 3586 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/14ef605c96148bd87aa44b5bba04bdc8-k8s-certs\") pod \"kube-controller-manager-ip-172-31-30-218\" (UID: \"14ef605c96148bd87aa44b5bba04bdc8\") " pod="kube-system/kube-controller-manager-ip-172-31-30-218" Jun 25 18:21:27.459811 kubelet[3586]: I0625 18:21:27.459155 3586 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/14ef605c96148bd87aa44b5bba04bdc8-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-30-218\" (UID: \"14ef605c96148bd87aa44b5bba04bdc8\") " pod="kube-system/kube-controller-manager-ip-172-31-30-218" Jun 25 18:21:27.459811 kubelet[3586]: I0625 18:21:27.459205 3586 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b435c6c1626511aa8a50ea34546a2295-kubeconfig\") pod \"kube-scheduler-ip-172-31-30-218\" (UID: \"b435c6c1626511aa8a50ea34546a2295\") " pod="kube-system/kube-scheduler-ip-172-31-30-218" Jun 25 18:21:27.999134 kubelet[3586]: I0625 18:21:27.999059 3586 apiserver.go:52] "Watching apiserver" Jun 25 18:21:28.052386 kubelet[3586]: I0625 18:21:28.052268 3586 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jun 25 18:21:28.183028 kubelet[3586]: I0625 18:21:28.182354 3586 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-30-218" podStartSLOduration=1.182276739 podCreationTimestamp="2024-06-25 18:21:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 18:21:28.173388339 +0000 UTC m=+1.350014431" watchObservedRunningTime="2024-06-25 18:21:28.182276739 +0000 UTC m=+1.358902795" Jun 25 18:21:28.206821 kubelet[3586]: E0625 18:21:28.206204 3586 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ip-172-31-30-218\" already exists" pod="kube-system/kube-controller-manager-ip-172-31-30-218" Jun 25 18:21:28.218155 kubelet[3586]: E0625 18:21:28.217410 3586 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ip-172-31-30-218\" already exists" pod="kube-system/kube-scheduler-ip-172-31-30-218" Jun 25 18:21:28.244054 kubelet[3586]: I0625 18:21:28.243079 3586 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-30-218" podStartSLOduration=1.243025072 podCreationTimestamp="2024-06-25 18:21:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 18:21:28.241874404 +0000 UTC m=+1.418500556" watchObservedRunningTime="2024-06-25 18:21:28.243025072 +0000 UTC m=+1.419651140" Jun 25 18:21:28.244054 kubelet[3586]: I0625 18:21:28.243230 3586 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-30-218" podStartSLOduration=4.243194812 podCreationTimestamp="2024-06-25 18:21:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 18:21:28.212930703 +0000 UTC m=+1.389556759" watchObservedRunningTime="2024-06-25 18:21:28.243194812 +0000 UTC m=+1.419820904" Jun 25 18:21:34.180359 sudo[2495]: pam_unix(sudo:session): session closed for user root Jun 25 18:21:34.204129 sshd[2491]: pam_unix(sshd:session): session closed for user core Jun 25 18:21:34.210651 systemd[1]: sshd@6-172.31.30.218:22-139.178.89.65:52832.service: Deactivated successfully. Jun 25 18:21:34.220911 systemd-logind[2117]: Session 7 logged out. Waiting for processes to exit. Jun 25 18:21:34.223101 systemd[1]: session-7.scope: Deactivated successfully. Jun 25 18:21:34.226276 systemd-logind[2117]: Removed session 7. Jun 25 18:21:39.669546 kubelet[3586]: I0625 18:21:39.669198 3586 kuberuntime_manager.go:1528] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jun 25 18:21:39.671353 containerd[2137]: time="2024-06-25T18:21:39.671180788Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jun 25 18:21:39.673110 kubelet[3586]: I0625 18:21:39.671956 3586 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jun 25 18:21:40.351899 kubelet[3586]: I0625 18:21:40.351018 3586 topology_manager.go:215] "Topology Admit Handler" podUID="002fb07d-4878-468b-8ca7-6fb050cb4f25" podNamespace="kube-system" podName="kube-proxy-kbnxh" Jun 25 18:21:40.443638 kubelet[3586]: I0625 18:21:40.443280 3586 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/002fb07d-4878-468b-8ca7-6fb050cb4f25-xtables-lock\") pod \"kube-proxy-kbnxh\" (UID: \"002fb07d-4878-468b-8ca7-6fb050cb4f25\") " pod="kube-system/kube-proxy-kbnxh" Jun 25 18:21:40.443638 kubelet[3586]: I0625 18:21:40.443368 3586 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/002fb07d-4878-468b-8ca7-6fb050cb4f25-lib-modules\") pod \"kube-proxy-kbnxh\" (UID: \"002fb07d-4878-468b-8ca7-6fb050cb4f25\") " pod="kube-system/kube-proxy-kbnxh" Jun 25 18:21:40.443638 kubelet[3586]: I0625 18:21:40.443417 3586 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/002fb07d-4878-468b-8ca7-6fb050cb4f25-kube-proxy\") pod \"kube-proxy-kbnxh\" (UID: \"002fb07d-4878-468b-8ca7-6fb050cb4f25\") " pod="kube-system/kube-proxy-kbnxh" Jun 25 18:21:40.443638 kubelet[3586]: I0625 18:21:40.443495 3586 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6mhtf\" (UniqueName: \"kubernetes.io/projected/002fb07d-4878-468b-8ca7-6fb050cb4f25-kube-api-access-6mhtf\") pod \"kube-proxy-kbnxh\" (UID: \"002fb07d-4878-468b-8ca7-6fb050cb4f25\") " pod="kube-system/kube-proxy-kbnxh" Jun 25 18:21:40.637381 kubelet[3586]: I0625 18:21:40.636333 3586 topology_manager.go:215] "Topology Admit Handler" podUID="5e67870b-6921-4018-8cd9-626a6ef18bdf" podNamespace="tigera-operator" podName="tigera-operator-76c4974c85-4vs2l" Jun 25 18:21:40.646532 kubelet[3586]: I0625 18:21:40.646345 3586 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j4h48\" (UniqueName: \"kubernetes.io/projected/5e67870b-6921-4018-8cd9-626a6ef18bdf-kube-api-access-j4h48\") pod \"tigera-operator-76c4974c85-4vs2l\" 
(UID: \"5e67870b-6921-4018-8cd9-626a6ef18bdf\") " pod="tigera-operator/tigera-operator-76c4974c85-4vs2l" Jun 25 18:21:40.646532 kubelet[3586]: I0625 18:21:40.646424 3586 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/5e67870b-6921-4018-8cd9-626a6ef18bdf-var-lib-calico\") pod \"tigera-operator-76c4974c85-4vs2l\" (UID: \"5e67870b-6921-4018-8cd9-626a6ef18bdf\") " pod="tigera-operator/tigera-operator-76c4974c85-4vs2l" Jun 25 18:21:40.669947 containerd[2137]: time="2024-06-25T18:21:40.669842705Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-kbnxh,Uid:002fb07d-4878-468b-8ca7-6fb050cb4f25,Namespace:kube-system,Attempt:0,}" Jun 25 18:21:40.743805 containerd[2137]: time="2024-06-25T18:21:40.742414338Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:21:40.743805 containerd[2137]: time="2024-06-25T18:21:40.743422830Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:21:40.744500 containerd[2137]: time="2024-06-25T18:21:40.743641626Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:21:40.744500 containerd[2137]: time="2024-06-25T18:21:40.743898882Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:21:40.829404 containerd[2137]: time="2024-06-25T18:21:40.829338306Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-kbnxh,Uid:002fb07d-4878-468b-8ca7-6fb050cb4f25,Namespace:kube-system,Attempt:0,} returns sandbox id \"9a80576d33263dc382f1549a266612c2a01505fe2ffe2d027e764d24392fa8fc\"" Jun 25 18:21:40.837954 containerd[2137]: time="2024-06-25T18:21:40.837888870Z" level=info msg="CreateContainer within sandbox \"9a80576d33263dc382f1549a266612c2a01505fe2ffe2d027e764d24392fa8fc\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jun 25 18:21:40.866448 containerd[2137]: time="2024-06-25T18:21:40.866389782Z" level=info msg="CreateContainer within sandbox \"9a80576d33263dc382f1549a266612c2a01505fe2ffe2d027e764d24392fa8fc\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"08c955a38eb415485beb97a42738830df72f7268c1dbceabe6723a7869eb1e6d\"" Jun 25 18:21:40.868138 containerd[2137]: time="2024-06-25T18:21:40.867833430Z" level=info msg="StartContainer for \"08c955a38eb415485beb97a42738830df72f7268c1dbceabe6723a7869eb1e6d\"" Jun 25 18:21:40.949756 containerd[2137]: time="2024-06-25T18:21:40.949353859Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4974c85-4vs2l,Uid:5e67870b-6921-4018-8cd9-626a6ef18bdf,Namespace:tigera-operator,Attempt:0,}" Jun 25 18:21:40.974253 containerd[2137]: time="2024-06-25T18:21:40.974147515Z" level=info msg="StartContainer for \"08c955a38eb415485beb97a42738830df72f7268c1dbceabe6723a7869eb1e6d\" returns successfully" Jun 25 18:21:41.008927 containerd[2137]: time="2024-06-25T18:21:41.008487231Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:21:41.008927 containerd[2137]: time="2024-06-25T18:21:41.008722095Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:21:41.009801 containerd[2137]: time="2024-06-25T18:21:41.009144783Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:21:41.009801 containerd[2137]: time="2024-06-25T18:21:41.009221475Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:21:41.151126 containerd[2137]: time="2024-06-25T18:21:41.150873412Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4974c85-4vs2l,Uid:5e67870b-6921-4018-8cd9-626a6ef18bdf,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"bc83a5ee8e3d7a7814040eaa6d1ea6bde280551dd72aba83c5b50eed9ce4053d\"" Jun 25 18:21:41.159164 containerd[2137]: time="2024-06-25T18:21:41.158830528Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.0\"" Jun 25 18:21:42.551149 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1811536109.mount: Deactivated successfully. Jun 25 18:21:43.232834 containerd[2137]: time="2024-06-25T18:21:43.232742898Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.34.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:21:43.234538 containerd[2137]: time="2024-06-25T18:21:43.234433410Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.34.0: active requests=0, bytes read=19473626" Jun 25 18:21:43.236182 containerd[2137]: time="2024-06-25T18:21:43.236106858Z" level=info msg="ImageCreate event name:\"sha256:5886f48e233edcb89c0e8e3cdbdc40101f3c2dfbe67d7717f01d19c27cd78f92\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:21:43.242745 containerd[2137]: time="2024-06-25T18:21:43.242656302Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:479ddc7ff9ab095058b96f6710bbf070abada86332e267d6e5dcc1df36ba2cc5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:21:43.244410 containerd[2137]: time="2024-06-25T18:21:43.244282206Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.34.0\" with image id \"sha256:5886f48e233edcb89c0e8e3cdbdc40101f3c2dfbe67d7717f01d19c27cd78f92\", repo tag \"quay.io/tigera/operator:v1.34.0\", repo digest \"quay.io/tigera/operator@sha256:479ddc7ff9ab095058b96f6710bbf070abada86332e267d6e5dcc1df36ba2cc5\", size \"19467821\" in 2.08523569s" Jun 25 18:21:43.244410 containerd[2137]: time="2024-06-25T18:21:43.244343706Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.0\" returns image reference \"sha256:5886f48e233edcb89c0e8e3cdbdc40101f3c2dfbe67d7717f01d19c27cd78f92\"" Jun 25 18:21:43.247656 containerd[2137]: time="2024-06-25T18:21:43.247409730Z" level=info msg="CreateContainer within sandbox \"bc83a5ee8e3d7a7814040eaa6d1ea6bde280551dd72aba83c5b50eed9ce4053d\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jun 25 18:21:43.278519 containerd[2137]: time="2024-06-25T18:21:43.278385726Z" level=info msg="CreateContainer within sandbox \"bc83a5ee8e3d7a7814040eaa6d1ea6bde280551dd72aba83c5b50eed9ce4053d\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"bd552d35048123e4bd4c2e4650929ff2f2c31f6b5e8bf5d0def2b0a75e6fe36c\"" Jun 25 18:21:43.279385 containerd[2137]: time="2024-06-25T18:21:43.279076338Z" level=info msg="StartContainer for \"bd552d35048123e4bd4c2e4650929ff2f2c31f6b5e8bf5d0def2b0a75e6fe36c\"" Jun 25 18:21:43.382093 containerd[2137]: time="2024-06-25T18:21:43.381900271Z" 
level=info msg="StartContainer for \"bd552d35048123e4bd4c2e4650929ff2f2c31f6b5e8bf5d0def2b0a75e6fe36c\" returns successfully" Jun 25 18:21:44.243021 kubelet[3586]: I0625 18:21:44.242492 3586 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-76c4974c85-4vs2l" podStartSLOduration=2.151875353 podCreationTimestamp="2024-06-25 18:21:40 +0000 UTC" firstStartedPulling="2024-06-25 18:21:41.154269592 +0000 UTC m=+14.330895648" lastFinishedPulling="2024-06-25 18:21:43.244791678 +0000 UTC m=+16.421417734" observedRunningTime="2024-06-25 18:21:44.242299915 +0000 UTC m=+17.418925983" watchObservedRunningTime="2024-06-25 18:21:44.242397439 +0000 UTC m=+17.419023507" Jun 25 18:21:44.243021 kubelet[3586]: I0625 18:21:44.243608 3586 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-kbnxh" podStartSLOduration=4.242824927 podCreationTimestamp="2024-06-25 18:21:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 18:21:41.233439496 +0000 UTC m=+14.410065552" watchObservedRunningTime="2024-06-25 18:21:44.242824927 +0000 UTC m=+17.419451007" Jun 25 18:21:48.737749 kubelet[3586]: I0625 18:21:48.734209 3586 topology_manager.go:215] "Topology Admit Handler" podUID="2220c915-9568-4309-a229-c5e70714c896" podNamespace="calico-system" podName="calico-typha-9bb4695b9-s5mp5" Jun 25 18:21:48.885198 kubelet[3586]: I0625 18:21:48.885118 3586 topology_manager.go:215] "Topology Admit Handler" podUID="6bcc4f4d-85cf-44ee-b8de-f966822c7929" podNamespace="calico-system" podName="calico-node-mllsj" Jun 25 18:21:48.901746 kubelet[3586]: I0625 18:21:48.901673 3586 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2220c915-9568-4309-a229-c5e70714c896-tigera-ca-bundle\") pod \"calico-typha-9bb4695b9-s5mp5\" (UID: \"2220c915-9568-4309-a229-c5e70714c896\") " pod="calico-system/calico-typha-9bb4695b9-s5mp5" Jun 25 18:21:48.901746 kubelet[3586]: I0625 18:21:48.901755 3586 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/2220c915-9568-4309-a229-c5e70714c896-typha-certs\") pod \"calico-typha-9bb4695b9-s5mp5\" (UID: \"2220c915-9568-4309-a229-c5e70714c896\") " pod="calico-system/calico-typha-9bb4695b9-s5mp5" Jun 25 18:21:48.901982 kubelet[3586]: I0625 18:21:48.901809 3586 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/6bcc4f4d-85cf-44ee-b8de-f966822c7929-flexvol-driver-host\") pod \"calico-node-mllsj\" (UID: \"6bcc4f4d-85cf-44ee-b8de-f966822c7929\") " pod="calico-system/calico-node-mllsj" Jun 25 18:21:48.901982 kubelet[3586]: I0625 18:21:48.901858 3586 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gvdvl\" (UniqueName: \"kubernetes.io/projected/6bcc4f4d-85cf-44ee-b8de-f966822c7929-kube-api-access-gvdvl\") pod \"calico-node-mllsj\" (UID: \"6bcc4f4d-85cf-44ee-b8de-f966822c7929\") " pod="calico-system/calico-node-mllsj" Jun 25 18:21:48.901982 kubelet[3586]: I0625 18:21:48.901912 3586 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xrfn4\" (UniqueName: 
\"kubernetes.io/projected/2220c915-9568-4309-a229-c5e70714c896-kube-api-access-xrfn4\") pod \"calico-typha-9bb4695b9-s5mp5\" (UID: \"2220c915-9568-4309-a229-c5e70714c896\") " pod="calico-system/calico-typha-9bb4695b9-s5mp5" Jun 25 18:21:48.901982 kubelet[3586]: I0625 18:21:48.901958 3586 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/6bcc4f4d-85cf-44ee-b8de-f966822c7929-policysync\") pod \"calico-node-mllsj\" (UID: \"6bcc4f4d-85cf-44ee-b8de-f966822c7929\") " pod="calico-system/calico-node-mllsj" Jun 25 18:21:48.902208 kubelet[3586]: I0625 18:21:48.902007 3586 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/6bcc4f4d-85cf-44ee-b8de-f966822c7929-var-lib-calico\") pod \"calico-node-mllsj\" (UID: \"6bcc4f4d-85cf-44ee-b8de-f966822c7929\") " pod="calico-system/calico-node-mllsj" Jun 25 18:21:48.902208 kubelet[3586]: I0625 18:21:48.902054 3586 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6bcc4f4d-85cf-44ee-b8de-f966822c7929-xtables-lock\") pod \"calico-node-mllsj\" (UID: \"6bcc4f4d-85cf-44ee-b8de-f966822c7929\") " pod="calico-system/calico-node-mllsj" Jun 25 18:21:48.902208 kubelet[3586]: I0625 18:21:48.902102 3586 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6bcc4f4d-85cf-44ee-b8de-f966822c7929-lib-modules\") pod \"calico-node-mllsj\" (UID: \"6bcc4f4d-85cf-44ee-b8de-f966822c7929\") " pod="calico-system/calico-node-mllsj" Jun 25 18:21:48.902208 kubelet[3586]: I0625 18:21:48.902147 3586 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/6bcc4f4d-85cf-44ee-b8de-f966822c7929-cni-bin-dir\") pod \"calico-node-mllsj\" (UID: \"6bcc4f4d-85cf-44ee-b8de-f966822c7929\") " pod="calico-system/calico-node-mllsj" Jun 25 18:21:48.902208 kubelet[3586]: I0625 18:21:48.902196 3586 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/6bcc4f4d-85cf-44ee-b8de-f966822c7929-cni-net-dir\") pod \"calico-node-mllsj\" (UID: \"6bcc4f4d-85cf-44ee-b8de-f966822c7929\") " pod="calico-system/calico-node-mllsj" Jun 25 18:21:48.902499 kubelet[3586]: I0625 18:21:48.902241 3586 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/6bcc4f4d-85cf-44ee-b8de-f966822c7929-cni-log-dir\") pod \"calico-node-mllsj\" (UID: \"6bcc4f4d-85cf-44ee-b8de-f966822c7929\") " pod="calico-system/calico-node-mllsj" Jun 25 18:21:48.902499 kubelet[3586]: I0625 18:21:48.902285 3586 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6bcc4f4d-85cf-44ee-b8de-f966822c7929-tigera-ca-bundle\") pod \"calico-node-mllsj\" (UID: \"6bcc4f4d-85cf-44ee-b8de-f966822c7929\") " pod="calico-system/calico-node-mllsj" Jun 25 18:21:48.902499 kubelet[3586]: I0625 18:21:48.902327 3586 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: 
\"kubernetes.io/host-path/6bcc4f4d-85cf-44ee-b8de-f966822c7929-var-run-calico\") pod \"calico-node-mllsj\" (UID: \"6bcc4f4d-85cf-44ee-b8de-f966822c7929\") " pod="calico-system/calico-node-mllsj" Jun 25 18:21:48.902499 kubelet[3586]: I0625 18:21:48.902371 3586 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/6bcc4f4d-85cf-44ee-b8de-f966822c7929-node-certs\") pod \"calico-node-mllsj\" (UID: \"6bcc4f4d-85cf-44ee-b8de-f966822c7929\") " pod="calico-system/calico-node-mllsj" Jun 25 18:21:49.031712 kubelet[3586]: E0625 18:21:49.029593 3586 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:21:49.031712 kubelet[3586]: W0625 18:21:49.029631 3586 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:21:49.031712 kubelet[3586]: E0625 18:21:49.029673 3586 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:21:49.036493 kubelet[3586]: E0625 18:21:49.032787 3586 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:21:49.036493 kubelet[3586]: W0625 18:21:49.032825 3586 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:21:49.036493 kubelet[3586]: E0625 18:21:49.032865 3586 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:21:49.040487 kubelet[3586]: E0625 18:21:49.037541 3586 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:21:49.040487 kubelet[3586]: W0625 18:21:49.037590 3586 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:21:49.040487 kubelet[3586]: E0625 18:21:49.037626 3586 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:21:49.064588 kubelet[3586]: E0625 18:21:49.062707 3586 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:21:49.064588 kubelet[3586]: W0625 18:21:49.062761 3586 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:21:49.064588 kubelet[3586]: E0625 18:21:49.062799 3586 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Jun 25 18:21:49.064588 kubelet[3586]: E0625 18:21:49.062799 3586 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jun 25 18:21:49.096508 kubelet[3586]: E0625 18:21:49.095638 3586 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 25 18:21:49.096508 kubelet[3586]: W0625 18:21:49.095737 3586 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 25 18:21:49.096508 kubelet[3586]: E0625 18:21:49.095932 3586 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jun 25 18:21:49.112394 kubelet[3586]: E0625 18:21:49.111121 3586 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 25 18:21:49.112394 kubelet[3586]: W0625 18:21:49.111162 3586 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 25 18:21:49.112394 kubelet[3586]: E0625 18:21:49.111201 3586 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jun 25 18:21:49.141524 kubelet[3586]: E0625 18:21:49.140732 3586 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 25 18:21:49.141524 kubelet[3586]: W0625 18:21:49.140763 3586 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 25 18:21:49.141524 kubelet[3586]: E0625 18:21:49.140800 3586 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jun 25 18:21:49.155193 kubelet[3586]: I0625 18:21:49.155110 3586 topology_manager.go:215] "Topology Admit Handler" podUID="ad1ae947-67f0-4071-a385-d1029b7ab538" podNamespace="calico-system" podName="csi-node-driver-66nfl"
Jun 25 18:21:49.156797 kubelet[3586]: E0625 18:21:49.156745 3586 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-66nfl" podUID="ad1ae947-67f0-4071-a385-d1029b7ab538"
Jun 25 18:21:49.208494 kubelet[3586]: E0625 18:21:49.206108 3586 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jun 25 18:21:49.208494 kubelet[3586]: W0625 18:21:49.206139 3586 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jun 25 18:21:49.208494 kubelet[3586]: E0625 18:21:49.206181 3586 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Jun 25 18:21:49.209256 kubelet[3586]: E0625 18:21:49.209009 3586 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:21:49.209256 kubelet[3586]: W0625 18:21:49.209061 3586 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:21:49.209256 kubelet[3586]: E0625 18:21:49.209100 3586 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:21:49.210045 kubelet[3586]: E0625 18:21:49.209832 3586 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:21:49.210045 kubelet[3586]: W0625 18:21:49.209858 3586 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:21:49.210045 kubelet[3586]: E0625 18:21:49.209892 3586 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:21:49.210905 kubelet[3586]: E0625 18:21:49.210637 3586 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:21:49.210905 kubelet[3586]: W0625 18:21:49.210666 3586 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:21:49.210905 kubelet[3586]: E0625 18:21:49.210703 3586 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:21:49.212091 kubelet[3586]: E0625 18:21:49.212011 3586 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:21:49.212361 kubelet[3586]: W0625 18:21:49.212331 3586 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:21:49.212683 kubelet[3586]: E0625 18:21:49.212525 3586 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:21:49.213021 kubelet[3586]: E0625 18:21:49.212999 3586 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:21:49.213158 kubelet[3586]: W0625 18:21:49.213134 3586 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:21:49.213567 kubelet[3586]: E0625 18:21:49.213246 3586 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:21:49.213923 kubelet[3586]: E0625 18:21:49.213895 3586 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:21:49.214228 kubelet[3586]: W0625 18:21:49.214032 3586 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:21:49.214228 kubelet[3586]: E0625 18:21:49.214074 3586 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:21:49.214749 kubelet[3586]: E0625 18:21:49.214724 3586 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:21:49.216439 kubelet[3586]: W0625 18:21:49.216079 3586 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:21:49.216439 kubelet[3586]: E0625 18:21:49.216158 3586 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:21:49.217020 kubelet[3586]: E0625 18:21:49.216993 3586 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:21:49.217211 kubelet[3586]: W0625 18:21:49.217181 3586 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:21:49.217594 kubelet[3586]: E0625 18:21:49.217320 3586 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:21:49.217594 kubelet[3586]: I0625 18:21:49.217371 3586 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ad1ae947-67f0-4071-a385-d1029b7ab538-kubelet-dir\") pod \"csi-node-driver-66nfl\" (UID: \"ad1ae947-67f0-4071-a385-d1029b7ab538\") " pod="calico-system/csi-node-driver-66nfl" Jun 25 18:21:49.218016 kubelet[3586]: E0625 18:21:49.217989 3586 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:21:49.218186 kubelet[3586]: W0625 18:21:49.218160 3586 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:21:49.218803 kubelet[3586]: E0625 18:21:49.218541 3586 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:21:49.218803 kubelet[3586]: I0625 18:21:49.218598 3586 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/ad1ae947-67f0-4071-a385-d1029b7ab538-varrun\") pod \"csi-node-driver-66nfl\" (UID: \"ad1ae947-67f0-4071-a385-d1029b7ab538\") " pod="calico-system/csi-node-driver-66nfl" Jun 25 18:21:49.219691 kubelet[3586]: E0625 18:21:49.219656 3586 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:21:49.220064 kubelet[3586]: W0625 18:21:49.219865 3586 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:21:49.220064 kubelet[3586]: E0625 18:21:49.219921 3586 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:21:49.221508 kubelet[3586]: E0625 18:21:49.221305 3586 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:21:49.221647 kubelet[3586]: W0625 18:21:49.221439 3586 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:21:49.221711 kubelet[3586]: E0625 18:21:49.221667 3586 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:21:49.223805 containerd[2137]: time="2024-06-25T18:21:49.222856380Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-mllsj,Uid:6bcc4f4d-85cf-44ee-b8de-f966822c7929,Namespace:calico-system,Attempt:0,}" Jun 25 18:21:49.225822 kubelet[3586]: E0625 18:21:49.223680 3586 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:21:49.225822 kubelet[3586]: W0625 18:21:49.223743 3586 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:21:49.225822 kubelet[3586]: E0625 18:21:49.224367 3586 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:21:49.227693 kubelet[3586]: E0625 18:21:49.226875 3586 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:21:49.227693 kubelet[3586]: W0625 18:21:49.227026 3586 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:21:49.227693 kubelet[3586]: E0625 18:21:49.227384 3586 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:21:49.228112 kubelet[3586]: E0625 18:21:49.228046 3586 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:21:49.228112 kubelet[3586]: W0625 18:21:49.228079 3586 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:21:49.229711 kubelet[3586]: E0625 18:21:49.228924 3586 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:21:49.230252 kubelet[3586]: E0625 18:21:49.230220 3586 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:21:49.230578 kubelet[3586]: W0625 18:21:49.230394 3586 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:21:49.230578 kubelet[3586]: E0625 18:21:49.230439 3586 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:21:49.231196 kubelet[3586]: E0625 18:21:49.231051 3586 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:21:49.231196 kubelet[3586]: W0625 18:21:49.231077 3586 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:21:49.231196 kubelet[3586]: E0625 18:21:49.231112 3586 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:21:49.231999 kubelet[3586]: E0625 18:21:49.231770 3586 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:21:49.231999 kubelet[3586]: W0625 18:21:49.231797 3586 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:21:49.231999 kubelet[3586]: E0625 18:21:49.231828 3586 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:21:49.232744 kubelet[3586]: E0625 18:21:49.232439 3586 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:21:49.232744 kubelet[3586]: W0625 18:21:49.232496 3586 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:21:49.232744 kubelet[3586]: E0625 18:21:49.232601 3586 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:21:49.234275 kubelet[3586]: E0625 18:21:49.234110 3586 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:21:49.234275 kubelet[3586]: W0625 18:21:49.234142 3586 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:21:49.234275 kubelet[3586]: E0625 18:21:49.234180 3586 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:21:49.235169 kubelet[3586]: E0625 18:21:49.234929 3586 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:21:49.235169 kubelet[3586]: W0625 18:21:49.234958 3586 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:21:49.235169 kubelet[3586]: E0625 18:21:49.234991 3586 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:21:49.235614 kubelet[3586]: E0625 18:21:49.235589 3586 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:21:49.235882 kubelet[3586]: W0625 18:21:49.235730 3586 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:21:49.235882 kubelet[3586]: E0625 18:21:49.235772 3586 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:21:49.236384 kubelet[3586]: E0625 18:21:49.236358 3586 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:21:49.236697 kubelet[3586]: W0625 18:21:49.236562 3586 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:21:49.236697 kubelet[3586]: E0625 18:21:49.236608 3586 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:21:49.237363 kubelet[3586]: E0625 18:21:49.237214 3586 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:21:49.237363 kubelet[3586]: W0625 18:21:49.237247 3586 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:21:49.237363 kubelet[3586]: E0625 18:21:49.237280 3586 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:21:49.238213 kubelet[3586]: E0625 18:21:49.237998 3586 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:21:49.238213 kubelet[3586]: W0625 18:21:49.238027 3586 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:21:49.238213 kubelet[3586]: E0625 18:21:49.238060 3586 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:21:49.238937 kubelet[3586]: E0625 18:21:49.238764 3586 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:21:49.238937 kubelet[3586]: W0625 18:21:49.238791 3586 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:21:49.238937 kubelet[3586]: E0625 18:21:49.238846 3586 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:21:49.322052 kubelet[3586]: E0625 18:21:49.321889 3586 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:21:49.322052 kubelet[3586]: W0625 18:21:49.321924 3586 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:21:49.322052 kubelet[3586]: E0625 18:21:49.321961 3586 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:21:49.327557 containerd[2137]: time="2024-06-25T18:21:49.320977596Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:21:49.327557 containerd[2137]: time="2024-06-25T18:21:49.321105216Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:21:49.327557 containerd[2137]: time="2024-06-25T18:21:49.321148872Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:21:49.327557 containerd[2137]: time="2024-06-25T18:21:49.321182772Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:21:49.327836 kubelet[3586]: E0625 18:21:49.326210 3586 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:21:49.327836 kubelet[3586]: W0625 18:21:49.326247 3586 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:21:49.327836 kubelet[3586]: E0625 18:21:49.326295 3586 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:21:49.327836 kubelet[3586]: I0625 18:21:49.326397 3586 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gtmdv\" (UniqueName: \"kubernetes.io/projected/ad1ae947-67f0-4071-a385-d1029b7ab538-kube-api-access-gtmdv\") pod \"csi-node-driver-66nfl\" (UID: \"ad1ae947-67f0-4071-a385-d1029b7ab538\") " pod="calico-system/csi-node-driver-66nfl" Jun 25 18:21:49.333615 kubelet[3586]: E0625 18:21:49.331996 3586 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:21:49.333615 kubelet[3586]: W0625 18:21:49.332233 3586 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:21:49.333615 kubelet[3586]: E0625 18:21:49.333185 3586 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:21:49.337075 kubelet[3586]: I0625 18:21:49.333990 3586 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/ad1ae947-67f0-4071-a385-d1029b7ab538-socket-dir\") pod \"csi-node-driver-66nfl\" (UID: \"ad1ae947-67f0-4071-a385-d1029b7ab538\") " pod="calico-system/csi-node-driver-66nfl" Jun 25 18:21:49.340526 kubelet[3586]: E0625 18:21:49.339485 3586 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:21:49.340526 kubelet[3586]: W0625 18:21:49.339660 3586 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:21:49.341481 kubelet[3586]: E0625 18:21:49.341046 3586 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:21:49.341481 kubelet[3586]: W0625 18:21:49.341202 3586 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:21:49.342379 kubelet[3586]: E0625 18:21:49.342321 3586 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:21:49.342379 kubelet[3586]: W0625 18:21:49.342358 3586 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:21:49.348673 kubelet[3586]: E0625 18:21:49.343797 3586 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:21:49.348673 kubelet[3586]: W0625 18:21:49.344404 3586 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:21:49.348673 kubelet[3586]: E0625 18:21:49.344492 3586 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:21:49.348673 kubelet[3586]: I0625 18:21:49.344598 3586 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/ad1ae947-67f0-4071-a385-d1029b7ab538-registration-dir\") pod \"csi-node-driver-66nfl\" (UID: \"ad1ae947-67f0-4071-a385-d1029b7ab538\") " pod="calico-system/csi-node-driver-66nfl" Jun 25 18:21:49.348673 kubelet[3586]: E0625 18:21:49.346190 3586 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:21:49.348673 kubelet[3586]: W0625 18:21:49.346231 3586 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:21:49.348673 kubelet[3586]: E0625 18:21:49.346275 3586 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:21:49.348673 kubelet[3586]: E0625 18:21:49.346311 3586 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:21:49.348673 kubelet[3586]: E0625 18:21:49.347025 3586 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:21:49.351474 kubelet[3586]: E0625 18:21:49.346196 3586 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:21:49.353531 kubelet[3586]: E0625 18:21:49.352205 3586 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:21:49.353531 kubelet[3586]: W0625 18:21:49.352243 3586 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:21:49.353531 kubelet[3586]: E0625 18:21:49.352298 3586 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:21:49.355632 kubelet[3586]: E0625 18:21:49.355076 3586 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:21:49.355632 kubelet[3586]: W0625 18:21:49.355105 3586 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:21:49.355632 kubelet[3586]: E0625 18:21:49.355143 3586 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:21:49.358231 kubelet[3586]: E0625 18:21:49.358007 3586 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:21:49.358231 kubelet[3586]: W0625 18:21:49.358046 3586 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:21:49.361208 kubelet[3586]: E0625 18:21:49.360129 3586 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:21:49.361208 kubelet[3586]: E0625 18:21:49.360857 3586 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:21:49.361208 kubelet[3586]: W0625 18:21:49.361009 3586 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:21:49.362513 kubelet[3586]: E0625 18:21:49.361706 3586 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:21:49.362513 kubelet[3586]: E0625 18:21:49.361950 3586 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:21:49.362513 kubelet[3586]: W0625 18:21:49.362022 3586 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:21:49.362513 kubelet[3586]: E0625 18:21:49.362089 3586 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:21:49.364757 kubelet[3586]: E0625 18:21:49.364702 3586 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:21:49.364757 kubelet[3586]: W0625 18:21:49.364753 3586 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:21:49.364978 kubelet[3586]: E0625 18:21:49.364799 3586 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:21:49.367006 kubelet[3586]: E0625 18:21:49.365176 3586 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:21:49.367006 kubelet[3586]: W0625 18:21:49.365205 3586 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:21:49.367006 kubelet[3586]: E0625 18:21:49.365251 3586 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:21:49.367006 kubelet[3586]: E0625 18:21:49.365903 3586 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:21:49.367006 kubelet[3586]: W0625 18:21:49.365928 3586 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:21:49.367006 kubelet[3586]: E0625 18:21:49.366093 3586 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:21:49.372395 kubelet[3586]: E0625 18:21:49.367888 3586 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:21:49.372395 kubelet[3586]: W0625 18:21:49.367914 3586 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:21:49.372395 kubelet[3586]: E0625 18:21:49.367963 3586 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:21:49.372395 kubelet[3586]: E0625 18:21:49.370833 3586 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:21:49.372395 kubelet[3586]: W0625 18:21:49.370859 3586 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:21:49.372395 kubelet[3586]: E0625 18:21:49.370931 3586 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:21:49.372395 kubelet[3586]: E0625 18:21:49.371283 3586 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:21:49.372395 kubelet[3586]: W0625 18:21:49.371303 3586 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:21:49.372395 kubelet[3586]: E0625 18:21:49.371331 3586 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:21:49.376194 containerd[2137]: time="2024-06-25T18:21:49.375062053Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-9bb4695b9-s5mp5,Uid:2220c915-9568-4309-a229-c5e70714c896,Namespace:calico-system,Attempt:0,}" Jun 25 18:21:49.447919 kubelet[3586]: E0625 18:21:49.447551 3586 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:21:49.447919 kubelet[3586]: W0625 18:21:49.447603 3586 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:21:49.447919 kubelet[3586]: E0625 18:21:49.447644 3586 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:21:49.449926 kubelet[3586]: E0625 18:21:49.449107 3586 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:21:49.449926 kubelet[3586]: W0625 18:21:49.449141 3586 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:21:49.449926 kubelet[3586]: E0625 18:21:49.449187 3586 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:21:49.451123 kubelet[3586]: E0625 18:21:49.450692 3586 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:21:49.451123 kubelet[3586]: W0625 18:21:49.450720 3586 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:21:49.451123 kubelet[3586]: E0625 18:21:49.451022 3586 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:21:49.453797 kubelet[3586]: E0625 18:21:49.453755 3586 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:21:49.453797 kubelet[3586]: W0625 18:21:49.453790 3586 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:21:49.455497 kubelet[3586]: E0625 18:21:49.454658 3586 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:21:49.455497 kubelet[3586]: E0625 18:21:49.455096 3586 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:21:49.455497 kubelet[3586]: W0625 18:21:49.455116 3586 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:21:49.455497 kubelet[3586]: E0625 18:21:49.455184 3586 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:21:49.456973 kubelet[3586]: E0625 18:21:49.456103 3586 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:21:49.456973 kubelet[3586]: W0625 18:21:49.456138 3586 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:21:49.456973 kubelet[3586]: E0625 18:21:49.456629 3586 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:21:49.459017 kubelet[3586]: E0625 18:21:49.457564 3586 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:21:49.459017 kubelet[3586]: W0625 18:21:49.457598 3586 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:21:49.459017 kubelet[3586]: E0625 18:21:49.457927 3586 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:21:49.459267 kubelet[3586]: E0625 18:21:49.459075 3586 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:21:49.459267 kubelet[3586]: W0625 18:21:49.459098 3586 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:21:49.460530 kubelet[3586]: E0625 18:21:49.459643 3586 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:21:49.461399 kubelet[3586]: E0625 18:21:49.460915 3586 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:21:49.461399 kubelet[3586]: W0625 18:21:49.461057 3586 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:21:49.461593 kubelet[3586]: E0625 18:21:49.461446 3586 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:21:49.463943 kubelet[3586]: E0625 18:21:49.462140 3586 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:21:49.463943 kubelet[3586]: W0625 18:21:49.462170 3586 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:21:49.463943 kubelet[3586]: E0625 18:21:49.462706 3586 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:21:49.463943 kubelet[3586]: E0625 18:21:49.463191 3586 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:21:49.463943 kubelet[3586]: W0625 18:21:49.463271 3586 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:21:49.463943 kubelet[3586]: E0625 18:21:49.463528 3586 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:21:49.466496 kubelet[3586]: E0625 18:21:49.465193 3586 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:21:49.466496 kubelet[3586]: W0625 18:21:49.465228 3586 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:21:49.466496 kubelet[3586]: E0625 18:21:49.465908 3586 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:21:49.468488 kubelet[3586]: E0625 18:21:49.466846 3586 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:21:49.468488 kubelet[3586]: W0625 18:21:49.466885 3586 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:21:49.468488 kubelet[3586]: E0625 18:21:49.466954 3586 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:21:49.468744 kubelet[3586]: E0625 18:21:49.468645 3586 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:21:49.468744 kubelet[3586]: W0625 18:21:49.468669 3586 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:21:49.468744 kubelet[3586]: E0625 18:21:49.468704 3586 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:21:49.473617 kubelet[3586]: E0625 18:21:49.471109 3586 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:21:49.473617 kubelet[3586]: W0625 18:21:49.471166 3586 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:21:49.473617 kubelet[3586]: E0625 18:21:49.471231 3586 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:21:49.496519 containerd[2137]: time="2024-06-25T18:21:49.485198785Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:21:49.496519 containerd[2137]: time="2024-06-25T18:21:49.485282401Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:21:49.496519 containerd[2137]: time="2024-06-25T18:21:49.485321545Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:21:49.496519 containerd[2137]: time="2024-06-25T18:21:49.485346253Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:21:49.512637 kubelet[3586]: E0625 18:21:49.512585 3586 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:21:49.512837 kubelet[3586]: W0625 18:21:49.512811 3586 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:21:49.513002 kubelet[3586]: E0625 18:21:49.512983 3586 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:21:49.539909 containerd[2137]: time="2024-06-25T18:21:49.539850985Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-mllsj,Uid:6bcc4f4d-85cf-44ee-b8de-f966822c7929,Namespace:calico-system,Attempt:0,} returns sandbox id \"f951cb5020bb90d4bb343fd853ceae234c0ce790a35a1c2ea9cc8c210703479c\"" Jun 25 18:21:49.546612 containerd[2137]: time="2024-06-25T18:21:49.546051469Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\"" Jun 25 18:21:49.659680 containerd[2137]: time="2024-06-25T18:21:49.658671374Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-9bb4695b9-s5mp5,Uid:2220c915-9568-4309-a229-c5e70714c896,Namespace:calico-system,Attempt:0,} returns sandbox id \"1ca3b673433f0404f476326ce2c1eac4913b5cf6a1d2929d7fed6da1ee7dd6c5\"" Jun 25 18:21:51.086207 containerd[2137]: time="2024-06-25T18:21:51.084988969Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:21:51.090605 containerd[2137]: time="2024-06-25T18:21:51.089421289Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0: active requests=0, bytes read=4916009" Jun 25 18:21:51.091055 containerd[2137]: time="2024-06-25T18:21:51.090939745Z" level=info msg="ImageCreate event name:\"sha256:4b6a6a9b369fa6127e23e376ac423670fa81290e0860917acaacae108e3cc064\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:21:51.097696 kubelet[3586]: E0625 18:21:51.097625 3586 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-66nfl" podUID="ad1ae947-67f0-4071-a385-d1029b7ab538" Jun 25 18:21:51.100886 containerd[2137]: time="2024-06-25T18:21:51.100073161Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:e57c9db86f1cee1ae6f41257eed1ee2f363783177809217a2045502a09cf7cee\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:21:51.109720 containerd[2137]: time="2024-06-25T18:21:51.108782377Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" with image id \"sha256:4b6a6a9b369fa6127e23e376ac423670fa81290e0860917acaacae108e3cc064\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:e57c9db86f1cee1ae6f41257eed1ee2f363783177809217a2045502a09cf7cee\", size \"6282537\" in 1.562669s" Jun 25 18:21:51.109720 containerd[2137]: time="2024-06-25T18:21:51.108861637Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" returns image reference \"sha256:4b6a6a9b369fa6127e23e376ac423670fa81290e0860917acaacae108e3cc064\"" Jun 25 18:21:51.111178 containerd[2137]: time="2024-06-25T18:21:51.111084625Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.0\"" Jun 25 18:21:51.120242 containerd[2137]: time="2024-06-25T18:21:51.120175849Z" level=info msg="CreateContainer within sandbox \"f951cb5020bb90d4bb343fd853ceae234c0ce790a35a1c2ea9cc8c210703479c\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jun 25 18:21:51.170544 containerd[2137]: time="2024-06-25T18:21:51.170430913Z" level=info msg="CreateContainer within sandbox 
\"f951cb5020bb90d4bb343fd853ceae234c0ce790a35a1c2ea9cc8c210703479c\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"a25cbef1ac2109b3f19cc4a81db24a5efda85bac051f4493df67e275a032ca05\"" Jun 25 18:21:51.176797 containerd[2137]: time="2024-06-25T18:21:51.176718853Z" level=info msg="StartContainer for \"a25cbef1ac2109b3f19cc4a81db24a5efda85bac051f4493df67e275a032ca05\"" Jun 25 18:21:51.420227 containerd[2137]: time="2024-06-25T18:21:51.419397147Z" level=info msg="StartContainer for \"a25cbef1ac2109b3f19cc4a81db24a5efda85bac051f4493df67e275a032ca05\" returns successfully" Jun 25 18:21:51.571080 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a25cbef1ac2109b3f19cc4a81db24a5efda85bac051f4493df67e275a032ca05-rootfs.mount: Deactivated successfully. Jun 25 18:21:52.052378 containerd[2137]: time="2024-06-25T18:21:52.052102946Z" level=info msg="shim disconnected" id=a25cbef1ac2109b3f19cc4a81db24a5efda85bac051f4493df67e275a032ca05 namespace=k8s.io Jun 25 18:21:52.052378 containerd[2137]: time="2024-06-25T18:21:52.052186946Z" level=warning msg="cleaning up after shim disconnected" id=a25cbef1ac2109b3f19cc4a81db24a5efda85bac051f4493df67e275a032ca05 namespace=k8s.io Jun 25 18:21:52.052378 containerd[2137]: time="2024-06-25T18:21:52.052211258Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 18:21:52.299725 containerd[2137]: time="2024-06-25T18:21:52.299617671Z" level=info msg="StopPodSandbox for \"f951cb5020bb90d4bb343fd853ceae234c0ce790a35a1c2ea9cc8c210703479c\"" Jun 25 18:21:52.300331 containerd[2137]: time="2024-06-25T18:21:52.299709327Z" level=info msg="Container to stop \"a25cbef1ac2109b3f19cc4a81db24a5efda85bac051f4493df67e275a032ca05\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jun 25 18:21:52.313378 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f951cb5020bb90d4bb343fd853ceae234c0ce790a35a1c2ea9cc8c210703479c-shm.mount: Deactivated successfully. Jun 25 18:21:52.430127 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f951cb5020bb90d4bb343fd853ceae234c0ce790a35a1c2ea9cc8c210703479c-rootfs.mount: Deactivated successfully. 
Jun 25 18:21:52.448798 containerd[2137]: time="2024-06-25T18:21:52.447760492Z" level=info msg="shim disconnected" id=f951cb5020bb90d4bb343fd853ceae234c0ce790a35a1c2ea9cc8c210703479c namespace=k8s.io Jun 25 18:21:52.448798 containerd[2137]: time="2024-06-25T18:21:52.447843508Z" level=warning msg="cleaning up after shim disconnected" id=f951cb5020bb90d4bb343fd853ceae234c0ce790a35a1c2ea9cc8c210703479c namespace=k8s.io Jun 25 18:21:52.448798 containerd[2137]: time="2024-06-25T18:21:52.447864280Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 18:21:52.506662 containerd[2137]: time="2024-06-25T18:21:52.504960700Z" level=info msg="TearDown network for sandbox \"f951cb5020bb90d4bb343fd853ceae234c0ce790a35a1c2ea9cc8c210703479c\" successfully" Jun 25 18:21:52.509119 containerd[2137]: time="2024-06-25T18:21:52.508433884Z" level=info msg="StopPodSandbox for \"f951cb5020bb90d4bb343fd853ceae234c0ce790a35a1c2ea9cc8c210703479c\" returns successfully" Jun 25 18:21:52.593791 kubelet[3586]: I0625 18:21:52.593618 3586 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/6bcc4f4d-85cf-44ee-b8de-f966822c7929-policysync\") pod \"6bcc4f4d-85cf-44ee-b8de-f966822c7929\" (UID: \"6bcc4f4d-85cf-44ee-b8de-f966822c7929\") " Jun 25 18:21:52.594658 kubelet[3586]: I0625 18:21:52.594513 3586 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6bcc4f4d-85cf-44ee-b8de-f966822c7929-policysync" (OuterVolumeSpecName: "policysync") pod "6bcc4f4d-85cf-44ee-b8de-f966822c7929" (UID: "6bcc4f4d-85cf-44ee-b8de-f966822c7929"). InnerVolumeSpecName "policysync". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 25 18:21:52.598937 kubelet[3586]: I0625 18:21:52.595406 3586 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6bcc4f4d-85cf-44ee-b8de-f966822c7929-lib-modules\") pod \"6bcc4f4d-85cf-44ee-b8de-f966822c7929\" (UID: \"6bcc4f4d-85cf-44ee-b8de-f966822c7929\") " Jun 25 18:21:52.598937 kubelet[3586]: I0625 18:21:52.595572 3586 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/6bcc4f4d-85cf-44ee-b8de-f966822c7929-node-certs\") pod \"6bcc4f4d-85cf-44ee-b8de-f966822c7929\" (UID: \"6bcc4f4d-85cf-44ee-b8de-f966822c7929\") " Jun 25 18:21:52.598937 kubelet[3586]: I0625 18:21:52.596662 3586 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6bcc4f4d-85cf-44ee-b8de-f966822c7929-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "6bcc4f4d-85cf-44ee-b8de-f966822c7929" (UID: "6bcc4f4d-85cf-44ee-b8de-f966822c7929"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 25 18:21:52.598937 kubelet[3586]: I0625 18:21:52.596838 3586 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6bcc4f4d-85cf-44ee-b8de-f966822c7929-xtables-lock\") pod \"6bcc4f4d-85cf-44ee-b8de-f966822c7929\" (UID: \"6bcc4f4d-85cf-44ee-b8de-f966822c7929\") " Jun 25 18:21:52.598937 kubelet[3586]: I0625 18:21:52.596894 3586 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/6bcc4f4d-85cf-44ee-b8de-f966822c7929-cni-net-dir\") pod \"6bcc4f4d-85cf-44ee-b8de-f966822c7929\" (UID: \"6bcc4f4d-85cf-44ee-b8de-f966822c7929\") " Jun 25 18:21:52.598937 kubelet[3586]: I0625 18:21:52.596972 3586 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/6bcc4f4d-85cf-44ee-b8de-f966822c7929-cni-log-dir\") pod \"6bcc4f4d-85cf-44ee-b8de-f966822c7929\" (UID: \"6bcc4f4d-85cf-44ee-b8de-f966822c7929\") " Jun 25 18:21:52.599631 kubelet[3586]: I0625 18:21:52.597135 3586 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/6bcc4f4d-85cf-44ee-b8de-f966822c7929-var-lib-calico\") pod \"6bcc4f4d-85cf-44ee-b8de-f966822c7929\" (UID: \"6bcc4f4d-85cf-44ee-b8de-f966822c7929\") " Jun 25 18:21:52.599631 kubelet[3586]: I0625 18:21:52.597182 3586 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/6bcc4f4d-85cf-44ee-b8de-f966822c7929-cni-bin-dir\") pod \"6bcc4f4d-85cf-44ee-b8de-f966822c7929\" (UID: \"6bcc4f4d-85cf-44ee-b8de-f966822c7929\") " Jun 25 18:21:52.599631 kubelet[3586]: I0625 18:21:52.598006 3586 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6bcc4f4d-85cf-44ee-b8de-f966822c7929-tigera-ca-bundle\") pod \"6bcc4f4d-85cf-44ee-b8de-f966822c7929\" (UID: \"6bcc4f4d-85cf-44ee-b8de-f966822c7929\") " Jun 25 18:21:52.599631 kubelet[3586]: I0625 18:21:52.598062 3586 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/6bcc4f4d-85cf-44ee-b8de-f966822c7929-flexvol-driver-host\") pod \"6bcc4f4d-85cf-44ee-b8de-f966822c7929\" (UID: \"6bcc4f4d-85cf-44ee-b8de-f966822c7929\") " Jun 25 18:21:52.599631 kubelet[3586]: I0625 18:21:52.598113 3586 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gvdvl\" (UniqueName: \"kubernetes.io/projected/6bcc4f4d-85cf-44ee-b8de-f966822c7929-kube-api-access-gvdvl\") pod \"6bcc4f4d-85cf-44ee-b8de-f966822c7929\" (UID: \"6bcc4f4d-85cf-44ee-b8de-f966822c7929\") " Jun 25 18:21:52.599631 kubelet[3586]: I0625 18:21:52.598157 3586 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/6bcc4f4d-85cf-44ee-b8de-f966822c7929-var-run-calico\") pod \"6bcc4f4d-85cf-44ee-b8de-f966822c7929\" (UID: \"6bcc4f4d-85cf-44ee-b8de-f966822c7929\") " Jun 25 18:21:52.599983 kubelet[3586]: I0625 18:21:52.598233 3586 reconciler_common.go:300] "Volume detached for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/6bcc4f4d-85cf-44ee-b8de-f966822c7929-policysync\") on node \"ip-172-31-30-218\" DevicePath \"\"" Jun 25 18:21:52.599983 kubelet[3586]: I0625 18:21:52.598262 
3586 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6bcc4f4d-85cf-44ee-b8de-f966822c7929-lib-modules\") on node \"ip-172-31-30-218\" DevicePath \"\"" Jun 25 18:21:52.607289 kubelet[3586]: I0625 18:21:52.597434 3586 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6bcc4f4d-85cf-44ee-b8de-f966822c7929-cni-bin-dir" (OuterVolumeSpecName: "cni-bin-dir") pod "6bcc4f4d-85cf-44ee-b8de-f966822c7929" (UID: "6bcc4f4d-85cf-44ee-b8de-f966822c7929"). InnerVolumeSpecName "cni-bin-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 25 18:21:52.607289 kubelet[3586]: I0625 18:21:52.597906 3586 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6bcc4f4d-85cf-44ee-b8de-f966822c7929-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "6bcc4f4d-85cf-44ee-b8de-f966822c7929" (UID: "6bcc4f4d-85cf-44ee-b8de-f966822c7929"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 25 18:21:52.607289 kubelet[3586]: I0625 18:21:52.603935 3586 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6bcc4f4d-85cf-44ee-b8de-f966822c7929-var-run-calico" (OuterVolumeSpecName: "var-run-calico") pod "6bcc4f4d-85cf-44ee-b8de-f966822c7929" (UID: "6bcc4f4d-85cf-44ee-b8de-f966822c7929"). InnerVolumeSpecName "var-run-calico". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 25 18:21:52.607289 kubelet[3586]: I0625 18:21:52.604089 3586 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6bcc4f4d-85cf-44ee-b8de-f966822c7929-cni-net-dir" (OuterVolumeSpecName: "cni-net-dir") pod "6bcc4f4d-85cf-44ee-b8de-f966822c7929" (UID: "6bcc4f4d-85cf-44ee-b8de-f966822c7929"). InnerVolumeSpecName "cni-net-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 25 18:21:52.607289 kubelet[3586]: I0625 18:21:52.604130 3586 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6bcc4f4d-85cf-44ee-b8de-f966822c7929-cni-log-dir" (OuterVolumeSpecName: "cni-log-dir") pod "6bcc4f4d-85cf-44ee-b8de-f966822c7929" (UID: "6bcc4f4d-85cf-44ee-b8de-f966822c7929"). InnerVolumeSpecName "cni-log-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 25 18:21:52.611157 kubelet[3586]: I0625 18:21:52.605506 3586 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6bcc4f4d-85cf-44ee-b8de-f966822c7929-flexvol-driver-host" (OuterVolumeSpecName: "flexvol-driver-host") pod "6bcc4f4d-85cf-44ee-b8de-f966822c7929" (UID: "6bcc4f4d-85cf-44ee-b8de-f966822c7929"). InnerVolumeSpecName "flexvol-driver-host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 25 18:21:52.611157 kubelet[3586]: I0625 18:21:52.606233 3586 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6bcc4f4d-85cf-44ee-b8de-f966822c7929-var-lib-calico" (OuterVolumeSpecName: "var-lib-calico") pod "6bcc4f4d-85cf-44ee-b8de-f966822c7929" (UID: "6bcc4f4d-85cf-44ee-b8de-f966822c7929"). InnerVolumeSpecName "var-lib-calico". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jun 25 18:21:52.612794 kubelet[3586]: I0625 18:21:52.612717 3586 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6bcc4f4d-85cf-44ee-b8de-f966822c7929-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "6bcc4f4d-85cf-44ee-b8de-f966822c7929" (UID: "6bcc4f4d-85cf-44ee-b8de-f966822c7929"). InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jun 25 18:21:52.634374 systemd[1]: var-lib-kubelet-pods-6bcc4f4d\x2d85cf\x2d44ee\x2db8de\x2df966822c7929-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dgvdvl.mount: Deactivated successfully. Jun 25 18:21:52.637376 systemd[1]: var-lib-kubelet-pods-6bcc4f4d\x2d85cf\x2d44ee\x2db8de\x2df966822c7929-volumes-kubernetes.io\x7esecret-node\x2dcerts.mount: Deactivated successfully. Jun 25 18:21:52.638776 kubelet[3586]: I0625 18:21:52.638375 3586 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6bcc4f4d-85cf-44ee-b8de-f966822c7929-kube-api-access-gvdvl" (OuterVolumeSpecName: "kube-api-access-gvdvl") pod "6bcc4f4d-85cf-44ee-b8de-f966822c7929" (UID: "6bcc4f4d-85cf-44ee-b8de-f966822c7929"). InnerVolumeSpecName "kube-api-access-gvdvl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jun 25 18:21:52.643330 kubelet[3586]: I0625 18:21:52.641964 3586 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6bcc4f4d-85cf-44ee-b8de-f966822c7929-node-certs" (OuterVolumeSpecName: "node-certs") pod "6bcc4f4d-85cf-44ee-b8de-f966822c7929" (UID: "6bcc4f4d-85cf-44ee-b8de-f966822c7929"). InnerVolumeSpecName "node-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jun 25 18:21:52.704704 kubelet[3586]: I0625 18:21:52.704644 3586 reconciler_common.go:300] "Volume detached for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/6bcc4f4d-85cf-44ee-b8de-f966822c7929-cni-bin-dir\") on node \"ip-172-31-30-218\" DevicePath \"\"" Jun 25 18:21:52.705057 kubelet[3586]: I0625 18:21:52.704898 3586 reconciler_common.go:300] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6bcc4f4d-85cf-44ee-b8de-f966822c7929-tigera-ca-bundle\") on node \"ip-172-31-30-218\" DevicePath \"\"" Jun 25 18:21:52.705057 kubelet[3586]: I0625 18:21:52.704966 3586 reconciler_common.go:300] "Volume detached for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/6bcc4f4d-85cf-44ee-b8de-f966822c7929-flexvol-driver-host\") on node \"ip-172-31-30-218\" DevicePath \"\"" Jun 25 18:21:52.705580 kubelet[3586]: I0625 18:21:52.705288 3586 reconciler_common.go:300] "Volume detached for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/6bcc4f4d-85cf-44ee-b8de-f966822c7929-var-lib-calico\") on node \"ip-172-31-30-218\" DevicePath \"\"" Jun 25 18:21:52.705580 kubelet[3586]: I0625 18:21:52.705385 3586 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-gvdvl\" (UniqueName: \"kubernetes.io/projected/6bcc4f4d-85cf-44ee-b8de-f966822c7929-kube-api-access-gvdvl\") on node \"ip-172-31-30-218\" DevicePath \"\"" Jun 25 18:21:52.705580 kubelet[3586]: I0625 18:21:52.705436 3586 reconciler_common.go:300] "Volume detached for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/6bcc4f4d-85cf-44ee-b8de-f966822c7929-var-run-calico\") on node \"ip-172-31-30-218\" DevicePath \"\"" Jun 25 18:21:52.705580 kubelet[3586]: I0625 18:21:52.705508 3586 reconciler_common.go:300] 
"Volume detached for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/6bcc4f4d-85cf-44ee-b8de-f966822c7929-node-certs\") on node \"ip-172-31-30-218\" DevicePath \"\"" Jun 25 18:21:52.705580 kubelet[3586]: I0625 18:21:52.705537 3586 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6bcc4f4d-85cf-44ee-b8de-f966822c7929-xtables-lock\") on node \"ip-172-31-30-218\" DevicePath \"\"" Jun 25 18:21:52.706193 kubelet[3586]: I0625 18:21:52.705703 3586 reconciler_common.go:300] "Volume detached for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/6bcc4f4d-85cf-44ee-b8de-f966822c7929-cni-net-dir\") on node \"ip-172-31-30-218\" DevicePath \"\"" Jun 25 18:21:52.706193 kubelet[3586]: I0625 18:21:52.705735 3586 reconciler_common.go:300] "Volume detached for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/6bcc4f4d-85cf-44ee-b8de-f966822c7929-cni-log-dir\") on node \"ip-172-31-30-218\" DevicePath \"\"" Jun 25 18:21:53.097496 kubelet[3586]: E0625 18:21:53.096821 3586 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-66nfl" podUID="ad1ae947-67f0-4071-a385-d1029b7ab538" Jun 25 18:21:53.325492 kubelet[3586]: I0625 18:21:53.325406 3586 scope.go:117] "RemoveContainer" containerID="a25cbef1ac2109b3f19cc4a81db24a5efda85bac051f4493df67e275a032ca05" Jun 25 18:21:53.350244 containerd[2137]: time="2024-06-25T18:21:53.349567012Z" level=info msg="RemoveContainer for \"a25cbef1ac2109b3f19cc4a81db24a5efda85bac051f4493df67e275a032ca05\"" Jun 25 18:21:53.398579 containerd[2137]: time="2024-06-25T18:21:53.398282453Z" level=info msg="RemoveContainer for \"a25cbef1ac2109b3f19cc4a81db24a5efda85bac051f4493df67e275a032ca05\" returns successfully" Jun 25 18:21:53.435865 kubelet[3586]: I0625 18:21:53.435034 3586 topology_manager.go:215] "Topology Admit Handler" podUID="1cfb6ceb-3008-48c2-9150-30f53e752283" podNamespace="calico-system" podName="calico-node-4x992" Jun 25 18:21:53.445177 kubelet[3586]: E0625 18:21:53.438644 3586 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6bcc4f4d-85cf-44ee-b8de-f966822c7929" containerName="flexvol-driver" Jun 25 18:21:53.445177 kubelet[3586]: I0625 18:21:53.438738 3586 memory_manager.go:346] "RemoveStaleState removing state" podUID="6bcc4f4d-85cf-44ee-b8de-f966822c7929" containerName="flexvol-driver" Jun 25 18:21:53.514133 kubelet[3586]: I0625 18:21:53.514084 3586 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/1cfb6ceb-3008-48c2-9150-30f53e752283-node-certs\") pod \"calico-node-4x992\" (UID: \"1cfb6ceb-3008-48c2-9150-30f53e752283\") " pod="calico-system/calico-node-4x992" Jun 25 18:21:53.514768 kubelet[3586]: I0625 18:21:53.514720 3586 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1cfb6ceb-3008-48c2-9150-30f53e752283-lib-modules\") pod \"calico-node-4x992\" (UID: \"1cfb6ceb-3008-48c2-9150-30f53e752283\") " pod="calico-system/calico-node-4x992" Jun 25 18:21:53.515405 kubelet[3586]: I0625 18:21:53.515017 3586 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: 
\"kubernetes.io/host-path/1cfb6ceb-3008-48c2-9150-30f53e752283-var-lib-calico\") pod \"calico-node-4x992\" (UID: \"1cfb6ceb-3008-48c2-9150-30f53e752283\") " pod="calico-system/calico-node-4x992" Jun 25 18:21:53.515405 kubelet[3586]: I0625 18:21:53.515091 3586 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/1cfb6ceb-3008-48c2-9150-30f53e752283-cni-net-dir\") pod \"calico-node-4x992\" (UID: \"1cfb6ceb-3008-48c2-9150-30f53e752283\") " pod="calico-system/calico-node-4x992" Jun 25 18:21:53.515405 kubelet[3586]: I0625 18:21:53.515154 3586 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1cfb6ceb-3008-48c2-9150-30f53e752283-tigera-ca-bundle\") pod \"calico-node-4x992\" (UID: \"1cfb6ceb-3008-48c2-9150-30f53e752283\") " pod="calico-system/calico-node-4x992" Jun 25 18:21:53.515405 kubelet[3586]: I0625 18:21:53.515198 3586 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/1cfb6ceb-3008-48c2-9150-30f53e752283-cni-log-dir\") pod \"calico-node-4x992\" (UID: \"1cfb6ceb-3008-48c2-9150-30f53e752283\") " pod="calico-system/calico-node-4x992" Jun 25 18:21:53.515405 kubelet[3586]: I0625 18:21:53.515249 3586 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/1cfb6ceb-3008-48c2-9150-30f53e752283-var-run-calico\") pod \"calico-node-4x992\" (UID: \"1cfb6ceb-3008-48c2-9150-30f53e752283\") " pod="calico-system/calico-node-4x992" Jun 25 18:21:53.515946 kubelet[3586]: I0625 18:21:53.515301 3586 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/1cfb6ceb-3008-48c2-9150-30f53e752283-flexvol-driver-host\") pod \"calico-node-4x992\" (UID: \"1cfb6ceb-3008-48c2-9150-30f53e752283\") " pod="calico-system/calico-node-4x992" Jun 25 18:21:53.517360 kubelet[3586]: I0625 18:21:53.515349 3586 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xd48c\" (UniqueName: \"kubernetes.io/projected/1cfb6ceb-3008-48c2-9150-30f53e752283-kube-api-access-xd48c\") pod \"calico-node-4x992\" (UID: \"1cfb6ceb-3008-48c2-9150-30f53e752283\") " pod="calico-system/calico-node-4x992" Jun 25 18:21:53.517360 kubelet[3586]: I0625 18:21:53.516549 3586 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/1cfb6ceb-3008-48c2-9150-30f53e752283-policysync\") pod \"calico-node-4x992\" (UID: \"1cfb6ceb-3008-48c2-9150-30f53e752283\") " pod="calico-system/calico-node-4x992" Jun 25 18:21:53.517360 kubelet[3586]: I0625 18:21:53.516611 3586 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1cfb6ceb-3008-48c2-9150-30f53e752283-xtables-lock\") pod \"calico-node-4x992\" (UID: \"1cfb6ceb-3008-48c2-9150-30f53e752283\") " pod="calico-system/calico-node-4x992" Jun 25 18:21:53.517360 kubelet[3586]: I0625 18:21:53.516661 3586 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: 
\"kubernetes.io/host-path/1cfb6ceb-3008-48c2-9150-30f53e752283-cni-bin-dir\") pod \"calico-node-4x992\" (UID: \"1cfb6ceb-3008-48c2-9150-30f53e752283\") " pod="calico-system/calico-node-4x992" Jun 25 18:21:53.758931 containerd[2137]: time="2024-06-25T18:21:53.758620650Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-4x992,Uid:1cfb6ceb-3008-48c2-9150-30f53e752283,Namespace:calico-system,Attempt:0,}" Jun 25 18:21:53.863518 containerd[2137]: time="2024-06-25T18:21:53.861112531Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:21:53.863518 containerd[2137]: time="2024-06-25T18:21:53.861212791Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:21:53.863518 containerd[2137]: time="2024-06-25T18:21:53.861267907Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:21:53.863518 containerd[2137]: time="2024-06-25T18:21:53.861303331Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:21:54.151351 containerd[2137]: time="2024-06-25T18:21:54.150365824Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-4x992,Uid:1cfb6ceb-3008-48c2-9150-30f53e752283,Namespace:calico-system,Attempt:0,} returns sandbox id \"c3343300f366f322a97b456cf6a3866903df26e0d0562c29c8bfdc84e1fc267f\"" Jun 25 18:21:54.165726 containerd[2137]: time="2024-06-25T18:21:54.165229564Z" level=info msg="CreateContainer within sandbox \"c3343300f366f322a97b456cf6a3866903df26e0d0562c29c8bfdc84e1fc267f\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jun 25 18:21:54.209390 containerd[2137]: time="2024-06-25T18:21:54.209191181Z" level=info msg="CreateContainer within sandbox \"c3343300f366f322a97b456cf6a3866903df26e0d0562c29c8bfdc84e1fc267f\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"e7a35b8d5d69acd78f56ecbfcf1f31d82aabba66e7c48e680b949e2571b66dd4\"" Jun 25 18:21:54.211408 containerd[2137]: time="2024-06-25T18:21:54.211344545Z" level=info msg="StartContainer for \"e7a35b8d5d69acd78f56ecbfcf1f31d82aabba66e7c48e680b949e2571b66dd4\"" Jun 25 18:21:54.477132 containerd[2137]: time="2024-06-25T18:21:54.476801574Z" level=info msg="StartContainer for \"e7a35b8d5d69acd78f56ecbfcf1f31d82aabba66e7c48e680b949e2571b66dd4\" returns successfully" Jun 25 18:21:54.665416 containerd[2137]: time="2024-06-25T18:21:54.663828259Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:21:54.669138 containerd[2137]: time="2024-06-25T18:21:54.669080731Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.28.0: active requests=0, bytes read=27476513" Jun 25 18:21:54.671234 containerd[2137]: time="2024-06-25T18:21:54.671175511Z" level=info msg="ImageCreate event name:\"sha256:2551880d36cd0ce4c6820747ffe4c40cbf344d26df0ecd878808432ad4f78f03\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:21:54.682308 containerd[2137]: time="2024-06-25T18:21:54.682233391Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:eff1501af12b7e27e2ef8f4e55d03d837bcb017aa5663e22e519059c452d51ed\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:21:54.688171 
containerd[2137]: time="2024-06-25T18:21:54.686786503Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.28.0\" with image id \"sha256:2551880d36cd0ce4c6820747ffe4c40cbf344d26df0ecd878808432ad4f78f03\", repo tag \"ghcr.io/flatcar/calico/typha:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:eff1501af12b7e27e2ef8f4e55d03d837bcb017aa5663e22e519059c452d51ed\", size \"28843073\" in 3.575179686s" Jun 25 18:21:54.688171 containerd[2137]: time="2024-06-25T18:21:54.687407491Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.0\" returns image reference \"sha256:2551880d36cd0ce4c6820747ffe4c40cbf344d26df0ecd878808432ad4f78f03\"" Jun 25 18:21:54.732576 containerd[2137]: time="2024-06-25T18:21:54.731720851Z" level=info msg="CreateContainer within sandbox \"1ca3b673433f0404f476326ce2c1eac4913b5cf6a1d2929d7fed6da1ee7dd6c5\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jun 25 18:21:54.763161 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e7a35b8d5d69acd78f56ecbfcf1f31d82aabba66e7c48e680b949e2571b66dd4-rootfs.mount: Deactivated successfully. Jun 25 18:21:54.886531 containerd[2137]: time="2024-06-25T18:21:54.885237812Z" level=info msg="CreateContainer within sandbox \"1ca3b673433f0404f476326ce2c1eac4913b5cf6a1d2929d7fed6da1ee7dd6c5\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"422f13aa0e87a596859d71371a7503e3d5dab233e6350d02197fee336af6f6dd\"" Jun 25 18:21:54.887747 containerd[2137]: time="2024-06-25T18:21:54.887618540Z" level=info msg="StartContainer for \"422f13aa0e87a596859d71371a7503e3d5dab233e6350d02197fee336af6f6dd\"" Jun 25 18:21:55.098411 kubelet[3586]: E0625 18:21:55.096215 3586 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-66nfl" podUID="ad1ae947-67f0-4071-a385-d1029b7ab538" Jun 25 18:21:55.122687 kubelet[3586]: I0625 18:21:55.122632 3586 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="6bcc4f4d-85cf-44ee-b8de-f966822c7929" path="/var/lib/kubelet/pods/6bcc4f4d-85cf-44ee-b8de-f966822c7929/volumes" Jun 25 18:21:55.155364 containerd[2137]: time="2024-06-25T18:21:55.155246309Z" level=info msg="shim disconnected" id=e7a35b8d5d69acd78f56ecbfcf1f31d82aabba66e7c48e680b949e2571b66dd4 namespace=k8s.io Jun 25 18:21:55.156201 containerd[2137]: time="2024-06-25T18:21:55.156075125Z" level=warning msg="cleaning up after shim disconnected" id=e7a35b8d5d69acd78f56ecbfcf1f31d82aabba66e7c48e680b949e2571b66dd4 namespace=k8s.io Jun 25 18:21:55.157830 containerd[2137]: time="2024-06-25T18:21:55.156270365Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 18:21:55.161784 containerd[2137]: time="2024-06-25T18:21:55.159782225Z" level=info msg="StartContainer for \"422f13aa0e87a596859d71371a7503e3d5dab233e6350d02197fee336af6f6dd\" returns successfully" Jun 25 18:21:55.373411 containerd[2137]: time="2024-06-25T18:21:55.373204530Z" level=info msg="StopContainer for \"422f13aa0e87a596859d71371a7503e3d5dab233e6350d02197fee336af6f6dd\" with timeout 300 (s)" Jun 25 18:21:55.374572 containerd[2137]: time="2024-06-25T18:21:55.374493282Z" level=info msg="Stop container \"422f13aa0e87a596859d71371a7503e3d5dab233e6350d02197fee336af6f6dd\" with signal terminated" Jun 25 18:21:55.395809 containerd[2137]: time="2024-06-25T18:21:55.395737362Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/cni:v3.28.0\"" Jun 25 18:21:55.410794 kubelet[3586]: I0625 18:21:55.407957 3586 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-9bb4695b9-s5mp5" podStartSLOduration=2.395373541 podCreationTimestamp="2024-06-25 18:21:48 +0000 UTC" firstStartedPulling="2024-06-25 18:21:49.677085002 +0000 UTC m=+22.853711058" lastFinishedPulling="2024-06-25 18:21:54.689606587 +0000 UTC m=+27.866232715" observedRunningTime="2024-06-25 18:21:55.402167094 +0000 UTC m=+28.578793162" watchObservedRunningTime="2024-06-25 18:21:55.407895198 +0000 UTC m=+28.584521278" Jun 25 18:21:55.583298 containerd[2137]: time="2024-06-25T18:21:55.582701311Z" level=info msg="shim disconnected" id=422f13aa0e87a596859d71371a7503e3d5dab233e6350d02197fee336af6f6dd namespace=k8s.io Jun 25 18:21:55.583298 containerd[2137]: time="2024-06-25T18:21:55.582878263Z" level=warning msg="cleaning up after shim disconnected" id=422f13aa0e87a596859d71371a7503e3d5dab233e6350d02197fee336af6f6dd namespace=k8s.io Jun 25 18:21:55.583298 containerd[2137]: time="2024-06-25T18:21:55.582931087Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 18:21:55.649956 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2811389584.mount: Deactivated successfully. Jun 25 18:21:55.665804 containerd[2137]: time="2024-06-25T18:21:55.665253044Z" level=info msg="StopContainer for \"422f13aa0e87a596859d71371a7503e3d5dab233e6350d02197fee336af6f6dd\" returns successfully" Jun 25 18:21:55.667047 containerd[2137]: time="2024-06-25T18:21:55.666820508Z" level=info msg="StopPodSandbox for \"1ca3b673433f0404f476326ce2c1eac4913b5cf6a1d2929d7fed6da1ee7dd6c5\"" Jun 25 18:21:55.667047 containerd[2137]: time="2024-06-25T18:21:55.666909428Z" level=info msg="Container to stop \"422f13aa0e87a596859d71371a7503e3d5dab233e6350d02197fee336af6f6dd\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jun 25 18:21:55.674268 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1ca3b673433f0404f476326ce2c1eac4913b5cf6a1d2929d7fed6da1ee7dd6c5-shm.mount: Deactivated successfully. Jun 25 18:21:55.805091 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1ca3b673433f0404f476326ce2c1eac4913b5cf6a1d2929d7fed6da1ee7dd6c5-rootfs.mount: Deactivated successfully. 
Jun 25 18:21:55.812865 containerd[2137]: time="2024-06-25T18:21:55.812325740Z" level=info msg="shim disconnected" id=1ca3b673433f0404f476326ce2c1eac4913b5cf6a1d2929d7fed6da1ee7dd6c5 namespace=k8s.io Jun 25 18:21:55.812865 containerd[2137]: time="2024-06-25T18:21:55.812414552Z" level=warning msg="cleaning up after shim disconnected" id=1ca3b673433f0404f476326ce2c1eac4913b5cf6a1d2929d7fed6da1ee7dd6c5 namespace=k8s.io Jun 25 18:21:55.812865 containerd[2137]: time="2024-06-25T18:21:55.812436884Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 18:21:55.874712 containerd[2137]: time="2024-06-25T18:21:55.874634601Z" level=info msg="TearDown network for sandbox \"1ca3b673433f0404f476326ce2c1eac4913b5cf6a1d2929d7fed6da1ee7dd6c5\" successfully" Jun 25 18:21:55.874712 containerd[2137]: time="2024-06-25T18:21:55.874699233Z" level=info msg="StopPodSandbox for \"1ca3b673433f0404f476326ce2c1eac4913b5cf6a1d2929d7fed6da1ee7dd6c5\" returns successfully" Jun 25 18:21:55.929267 kubelet[3586]: I0625 18:21:55.929095 3586 topology_manager.go:215] "Topology Admit Handler" podUID="080d777e-ce82-484d-9d1f-ed4424b0eb5f" podNamespace="calico-system" podName="calico-typha-6bfc884c88-kvpmq" Jun 25 18:21:55.929267 kubelet[3586]: E0625 18:21:55.929226 3586 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="2220c915-9568-4309-a229-c5e70714c896" containerName="calico-typha" Jun 25 18:21:55.930881 kubelet[3586]: I0625 18:21:55.929280 3586 memory_manager.go:346] "RemoveStaleState removing state" podUID="2220c915-9568-4309-a229-c5e70714c896" containerName="calico-typha" Jun 25 18:21:55.948499 kubelet[3586]: I0625 18:21:55.946141 3586 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/2220c915-9568-4309-a229-c5e70714c896-typha-certs\") pod \"2220c915-9568-4309-a229-c5e70714c896\" (UID: \"2220c915-9568-4309-a229-c5e70714c896\") " Jun 25 18:21:55.948499 kubelet[3586]: I0625 18:21:55.946250 3586 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2220c915-9568-4309-a229-c5e70714c896-tigera-ca-bundle\") pod \"2220c915-9568-4309-a229-c5e70714c896\" (UID: \"2220c915-9568-4309-a229-c5e70714c896\") " Jun 25 18:21:55.948499 kubelet[3586]: I0625 18:21:55.946310 3586 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xrfn4\" (UniqueName: \"kubernetes.io/projected/2220c915-9568-4309-a229-c5e70714c896-kube-api-access-xrfn4\") pod \"2220c915-9568-4309-a229-c5e70714c896\" (UID: \"2220c915-9568-4309-a229-c5e70714c896\") " Jun 25 18:21:55.982515 kubelet[3586]: I0625 18:21:55.980718 3586 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2220c915-9568-4309-a229-c5e70714c896-typha-certs" (OuterVolumeSpecName: "typha-certs") pod "2220c915-9568-4309-a229-c5e70714c896" (UID: "2220c915-9568-4309-a229-c5e70714c896"). InnerVolumeSpecName "typha-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jun 25 18:21:55.986355 kubelet[3586]: I0625 18:21:55.986083 3586 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2220c915-9568-4309-a229-c5e70714c896-kube-api-access-xrfn4" (OuterVolumeSpecName: "kube-api-access-xrfn4") pod "2220c915-9568-4309-a229-c5e70714c896" (UID: "2220c915-9568-4309-a229-c5e70714c896"). InnerVolumeSpecName "kube-api-access-xrfn4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jun 25 18:21:55.987813 systemd[1]: var-lib-kubelet-pods-2220c915\x2d9568\x2d4309\x2da229\x2dc5e70714c896-volumes-kubernetes.io\x7esecret-typha\x2dcerts.mount: Deactivated successfully. Jun 25 18:21:55.988412 kubelet[3586]: I0625 18:21:55.988328 3586 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2220c915-9568-4309-a229-c5e70714c896-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "2220c915-9568-4309-a229-c5e70714c896" (UID: "2220c915-9568-4309-a229-c5e70714c896"). InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jun 25 18:21:56.002605 systemd[1]: var-lib-kubelet-pods-2220c915\x2d9568\x2d4309\x2da229\x2dc5e70714c896-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dtypha-1.mount: Deactivated successfully. Jun 25 18:21:56.003034 systemd[1]: var-lib-kubelet-pods-2220c915\x2d9568\x2d4309\x2da229\x2dc5e70714c896-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dxrfn4.mount: Deactivated successfully. Jun 25 18:21:56.049422 kubelet[3586]: I0625 18:21:56.047622 3586 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/080d777e-ce82-484d-9d1f-ed4424b0eb5f-tigera-ca-bundle\") pod \"calico-typha-6bfc884c88-kvpmq\" (UID: \"080d777e-ce82-484d-9d1f-ed4424b0eb5f\") " pod="calico-system/calico-typha-6bfc884c88-kvpmq" Jun 25 18:21:56.049422 kubelet[3586]: I0625 18:21:56.047710 3586 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/080d777e-ce82-484d-9d1f-ed4424b0eb5f-typha-certs\") pod \"calico-typha-6bfc884c88-kvpmq\" (UID: \"080d777e-ce82-484d-9d1f-ed4424b0eb5f\") " pod="calico-system/calico-typha-6bfc884c88-kvpmq" Jun 25 18:21:56.049422 kubelet[3586]: I0625 18:21:56.047773 3586 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-spdlx\" (UniqueName: \"kubernetes.io/projected/080d777e-ce82-484d-9d1f-ed4424b0eb5f-kube-api-access-spdlx\") pod \"calico-typha-6bfc884c88-kvpmq\" (UID: \"080d777e-ce82-484d-9d1f-ed4424b0eb5f\") " pod="calico-system/calico-typha-6bfc884c88-kvpmq" Jun 25 18:21:56.049422 kubelet[3586]: I0625 18:21:56.047866 3586 reconciler_common.go:300] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2220c915-9568-4309-a229-c5e70714c896-tigera-ca-bundle\") on node \"ip-172-31-30-218\" DevicePath \"\"" Jun 25 18:21:56.049422 kubelet[3586]: I0625 18:21:56.047912 3586 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-xrfn4\" (UniqueName: \"kubernetes.io/projected/2220c915-9568-4309-a229-c5e70714c896-kube-api-access-xrfn4\") on node \"ip-172-31-30-218\" DevicePath \"\"" Jun 25 18:21:56.049422 kubelet[3586]: I0625 18:21:56.047941 3586 reconciler_common.go:300] "Volume detached for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/2220c915-9568-4309-a229-c5e70714c896-typha-certs\") on node \"ip-172-31-30-218\" DevicePath \"\"" Jun 25 18:21:56.244203 containerd[2137]: time="2024-06-25T18:21:56.243748747Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6bfc884c88-kvpmq,Uid:080d777e-ce82-484d-9d1f-ed4424b0eb5f,Namespace:calico-system,Attempt:0,}" Jun 25 18:21:56.297882 containerd[2137]: time="2024-06-25T18:21:56.296866435Z" level=info msg="loading plugin 
\"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:21:56.297882 containerd[2137]: time="2024-06-25T18:21:56.297032131Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:21:56.298444 containerd[2137]: time="2024-06-25T18:21:56.297998395Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:21:56.298444 containerd[2137]: time="2024-06-25T18:21:56.298077127Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:21:56.399248 kubelet[3586]: I0625 18:21:56.399137 3586 scope.go:117] "RemoveContainer" containerID="422f13aa0e87a596859d71371a7503e3d5dab233e6350d02197fee336af6f6dd" Jun 25 18:21:56.408380 containerd[2137]: time="2024-06-25T18:21:56.408271879Z" level=info msg="RemoveContainer for \"422f13aa0e87a596859d71371a7503e3d5dab233e6350d02197fee336af6f6dd\"" Jun 25 18:21:56.467718 containerd[2137]: time="2024-06-25T18:21:56.466404248Z" level=info msg="RemoveContainer for \"422f13aa0e87a596859d71371a7503e3d5dab233e6350d02197fee336af6f6dd\" returns successfully" Jun 25 18:21:56.470169 kubelet[3586]: I0625 18:21:56.468945 3586 scope.go:117] "RemoveContainer" containerID="422f13aa0e87a596859d71371a7503e3d5dab233e6350d02197fee336af6f6dd" Jun 25 18:21:56.471201 containerd[2137]: time="2024-06-25T18:21:56.471032768Z" level=error msg="ContainerStatus for \"422f13aa0e87a596859d71371a7503e3d5dab233e6350d02197fee336af6f6dd\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"422f13aa0e87a596859d71371a7503e3d5dab233e6350d02197fee336af6f6dd\": not found" Jun 25 18:21:56.477720 kubelet[3586]: E0625 18:21:56.477530 3586 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"422f13aa0e87a596859d71371a7503e3d5dab233e6350d02197fee336af6f6dd\": not found" containerID="422f13aa0e87a596859d71371a7503e3d5dab233e6350d02197fee336af6f6dd" Jun 25 18:21:56.477720 kubelet[3586]: I0625 18:21:56.477674 3586 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"422f13aa0e87a596859d71371a7503e3d5dab233e6350d02197fee336af6f6dd"} err="failed to get container status \"422f13aa0e87a596859d71371a7503e3d5dab233e6350d02197fee336af6f6dd\": rpc error: code = NotFound desc = an error occurred when try to find container \"422f13aa0e87a596859d71371a7503e3d5dab233e6350d02197fee336af6f6dd\": not found" Jun 25 18:21:56.582215 containerd[2137]: time="2024-06-25T18:21:56.581908580Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6bfc884c88-kvpmq,Uid:080d777e-ce82-484d-9d1f-ed4424b0eb5f,Namespace:calico-system,Attempt:0,} returns sandbox id \"1869e236442ee5ba7d01ef4ada7350b161ea29bd5ec3c9493bbf03cdbd0128be\"" Jun 25 18:21:56.618053 containerd[2137]: time="2024-06-25T18:21:56.617961152Z" level=info msg="CreateContainer within sandbox \"1869e236442ee5ba7d01ef4ada7350b161ea29bd5ec3c9493bbf03cdbd0128be\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jun 25 18:21:56.667407 containerd[2137]: time="2024-06-25T18:21:56.667280073Z" level=info msg="CreateContainer within sandbox \"1869e236442ee5ba7d01ef4ada7350b161ea29bd5ec3c9493bbf03cdbd0128be\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns 
container id \"756f892ba8344f2fcec01847a745a833a6490e5ac15f092036863f114c54e15c\"" Jun 25 18:21:56.673588 containerd[2137]: time="2024-06-25T18:21:56.672629169Z" level=info msg="StartContainer for \"756f892ba8344f2fcec01847a745a833a6490e5ac15f092036863f114c54e15c\"" Jun 25 18:21:57.068086 containerd[2137]: time="2024-06-25T18:21:57.068018815Z" level=info msg="StartContainer for \"756f892ba8344f2fcec01847a745a833a6490e5ac15f092036863f114c54e15c\" returns successfully" Jun 25 18:21:57.102598 kubelet[3586]: E0625 18:21:57.099992 3586 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-66nfl" podUID="ad1ae947-67f0-4071-a385-d1029b7ab538" Jun 25 18:21:57.112032 kubelet[3586]: I0625 18:21:57.111794 3586 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="2220c915-9568-4309-a229-c5e70714c896" path="/var/lib/kubelet/pods/2220c915-9568-4309-a229-c5e70714c896/volumes" Jun 25 18:21:59.096163 kubelet[3586]: E0625 18:21:59.096109 3586 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-66nfl" podUID="ad1ae947-67f0-4071-a385-d1029b7ab538" Jun 25 18:22:00.124439 containerd[2137]: time="2024-06-25T18:22:00.123655126Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:22:00.125879 containerd[2137]: time="2024-06-25T18:22:00.125746570Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.28.0: active requests=0, bytes read=86799715" Jun 25 18:22:00.128297 containerd[2137]: time="2024-06-25T18:22:00.127301518Z" level=info msg="ImageCreate event name:\"sha256:adcb19ea66141abcd7dc426e3205f2e6ff26e524a3f7148c97f3d49933f502ee\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:22:00.135494 containerd[2137]: time="2024-06-25T18:22:00.135369310Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:67fdc0954d3c96f9a7938fca4d5759c835b773dfb5cb513903e89d21462d886e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:22:00.137020 containerd[2137]: time="2024-06-25T18:22:00.136748182Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.28.0\" with image id \"sha256:adcb19ea66141abcd7dc426e3205f2e6ff26e524a3f7148c97f3d49933f502ee\", repo tag \"ghcr.io/flatcar/calico/cni:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:67fdc0954d3c96f9a7938fca4d5759c835b773dfb5cb513903e89d21462d886e\", size \"88166283\" in 4.740936336s" Jun 25 18:22:00.137020 containerd[2137]: time="2024-06-25T18:22:00.136823854Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.0\" returns image reference \"sha256:adcb19ea66141abcd7dc426e3205f2e6ff26e524a3f7148c97f3d49933f502ee\"" Jun 25 18:22:00.146255 containerd[2137]: time="2024-06-25T18:22:00.146189110Z" level=info msg="CreateContainer within sandbox \"c3343300f366f322a97b456cf6a3866903df26e0d0562c29c8bfdc84e1fc267f\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jun 25 18:22:00.172117 containerd[2137]: time="2024-06-25T18:22:00.172011658Z" level=info msg="CreateContainer within sandbox 
\"c3343300f366f322a97b456cf6a3866903df26e0d0562c29c8bfdc84e1fc267f\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"aa6d3e25eac0ea6079736d10b7f10f4b85f43472d6bf3b8f9359744b0628672f\"" Jun 25 18:22:00.174766 containerd[2137]: time="2024-06-25T18:22:00.173376346Z" level=info msg="StartContainer for \"aa6d3e25eac0ea6079736d10b7f10f4b85f43472d6bf3b8f9359744b0628672f\"" Jun 25 18:22:00.307653 containerd[2137]: time="2024-06-25T18:22:00.307553531Z" level=info msg="StartContainer for \"aa6d3e25eac0ea6079736d10b7f10f4b85f43472d6bf3b8f9359744b0628672f\" returns successfully" Jun 25 18:22:00.482979 kubelet[3586]: I0625 18:22:00.482768 3586 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-6bfc884c88-kvpmq" podStartSLOduration=10.482708856 podCreationTimestamp="2024-06-25 18:21:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 18:21:57.473069349 +0000 UTC m=+30.649695453" watchObservedRunningTime="2024-06-25 18:22:00.482708856 +0000 UTC m=+33.659335008" Jun 25 18:22:01.098078 kubelet[3586]: E0625 18:22:01.096203 3586 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-66nfl" podUID="ad1ae947-67f0-4071-a385-d1029b7ab538" Jun 25 18:22:01.800616 containerd[2137]: time="2024-06-25T18:22:01.800442134Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jun 25 18:22:01.852916 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-aa6d3e25eac0ea6079736d10b7f10f4b85f43472d6bf3b8f9359744b0628672f-rootfs.mount: Deactivated successfully. 
Jun 25 18:22:01.894656 kubelet[3586]: I0625 18:22:01.894587 3586 kubelet_node_status.go:493] "Fast updating node status as it just became ready" Jun 25 18:22:01.941896 kubelet[3586]: I0625 18:22:01.938158 3586 topology_manager.go:215] "Topology Admit Handler" podUID="e3061c83-3e80-4699-9cd5-cd32f59ce77c" podNamespace="kube-system" podName="coredns-5dd5756b68-8cgx2" Jun 25 18:22:01.950447 kubelet[3586]: I0625 18:22:01.950086 3586 topology_manager.go:215] "Topology Admit Handler" podUID="01707278-3a25-4665-b673-aed626240ae3" podNamespace="kube-system" podName="coredns-5dd5756b68-nr5tn" Jun 25 18:22:01.951867 kubelet[3586]: I0625 18:22:01.951795 3586 topology_manager.go:215] "Topology Admit Handler" podUID="495cd0df-7e05-4c75-b413-8145733a3fc6" podNamespace="calico-system" podName="calico-kube-controllers-7f944cc9c7-m2v8x" Jun 25 18:22:01.999059 kubelet[3586]: I0625 18:22:01.997626 3586 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6strf\" (UniqueName: \"kubernetes.io/projected/e3061c83-3e80-4699-9cd5-cd32f59ce77c-kube-api-access-6strf\") pod \"coredns-5dd5756b68-8cgx2\" (UID: \"e3061c83-3e80-4699-9cd5-cd32f59ce77c\") " pod="kube-system/coredns-5dd5756b68-8cgx2" Jun 25 18:22:01.999059 kubelet[3586]: I0625 18:22:01.997726 3586 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xql6m\" (UniqueName: \"kubernetes.io/projected/01707278-3a25-4665-b673-aed626240ae3-kube-api-access-xql6m\") pod \"coredns-5dd5756b68-nr5tn\" (UID: \"01707278-3a25-4665-b673-aed626240ae3\") " pod="kube-system/coredns-5dd5756b68-nr5tn" Jun 25 18:22:01.999059 kubelet[3586]: I0625 18:22:01.997807 3586 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e3061c83-3e80-4699-9cd5-cd32f59ce77c-config-volume\") pod \"coredns-5dd5756b68-8cgx2\" (UID: \"e3061c83-3e80-4699-9cd5-cd32f59ce77c\") " pod="kube-system/coredns-5dd5756b68-8cgx2" Jun 25 18:22:01.999059 kubelet[3586]: I0625 18:22:01.997879 3586 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/01707278-3a25-4665-b673-aed626240ae3-config-volume\") pod \"coredns-5dd5756b68-nr5tn\" (UID: \"01707278-3a25-4665-b673-aed626240ae3\") " pod="kube-system/coredns-5dd5756b68-nr5tn" Jun 25 18:22:01.999059 kubelet[3586]: I0625 18:22:01.997936 3586 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5qgqw\" (UniqueName: \"kubernetes.io/projected/495cd0df-7e05-4c75-b413-8145733a3fc6-kube-api-access-5qgqw\") pod \"calico-kube-controllers-7f944cc9c7-m2v8x\" (UID: \"495cd0df-7e05-4c75-b413-8145733a3fc6\") " pod="calico-system/calico-kube-controllers-7f944cc9c7-m2v8x" Jun 25 18:22:01.999835 kubelet[3586]: I0625 18:22:01.998019 3586 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/495cd0df-7e05-4c75-b413-8145733a3fc6-tigera-ca-bundle\") pod \"calico-kube-controllers-7f944cc9c7-m2v8x\" (UID: \"495cd0df-7e05-4c75-b413-8145733a3fc6\") " pod="calico-system/calico-kube-controllers-7f944cc9c7-m2v8x" Jun 25 18:22:02.273947 containerd[2137]: time="2024-06-25T18:22:02.273762613Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-5dd5756b68-8cgx2,Uid:e3061c83-3e80-4699-9cd5-cd32f59ce77c,Namespace:kube-system,Attempt:0,}" Jun 25 18:22:02.280224 containerd[2137]: time="2024-06-25T18:22:02.279685105Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-nr5tn,Uid:01707278-3a25-4665-b673-aed626240ae3,Namespace:kube-system,Attempt:0,}" Jun 25 18:22:02.281492 containerd[2137]: time="2024-06-25T18:22:02.281392213Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7f944cc9c7-m2v8x,Uid:495cd0df-7e05-4c75-b413-8145733a3fc6,Namespace:calico-system,Attempt:0,}" Jun 25 18:22:02.834286 containerd[2137]: time="2024-06-25T18:22:02.833976483Z" level=info msg="shim disconnected" id=aa6d3e25eac0ea6079736d10b7f10f4b85f43472d6bf3b8f9359744b0628672f namespace=k8s.io Jun 25 18:22:02.834286 containerd[2137]: time="2024-06-25T18:22:02.834261447Z" level=warning msg="cleaning up after shim disconnected" id=aa6d3e25eac0ea6079736d10b7f10f4b85f43472d6bf3b8f9359744b0628672f namespace=k8s.io Jun 25 18:22:02.835115 containerd[2137]: time="2024-06-25T18:22:02.834496239Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 18:22:02.835844 containerd[2137]: time="2024-06-25T18:22:02.835770339Z" level=error msg="Failed to destroy network for sandbox \"65bd96b0a16e1e6084b4ebc7b1237f3c95b53a176499ed410b65f0d9c2f794a6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:22:02.839874 containerd[2137]: time="2024-06-25T18:22:02.839794095Z" level=error msg="encountered an error cleaning up failed sandbox \"65bd96b0a16e1e6084b4ebc7b1237f3c95b53a176499ed410b65f0d9c2f794a6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:22:02.840567 containerd[2137]: time="2024-06-25T18:22:02.840294807Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-8cgx2,Uid:e3061c83-3e80-4699-9cd5-cd32f59ce77c,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"65bd96b0a16e1e6084b4ebc7b1237f3c95b53a176499ed410b65f0d9c2f794a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:22:02.841350 kubelet[3586]: E0625 18:22:02.841300 3586 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"65bd96b0a16e1e6084b4ebc7b1237f3c95b53a176499ed410b65f0d9c2f794a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:22:02.841585 kubelet[3586]: E0625 18:22:02.841393 3586 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"65bd96b0a16e1e6084b4ebc7b1237f3c95b53a176499ed410b65f0d9c2f794a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-8cgx2" Jun 25 18:22:02.841585 kubelet[3586]: E0625 
18:22:02.841433 3586 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"65bd96b0a16e1e6084b4ebc7b1237f3c95b53a176499ed410b65f0d9c2f794a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-8cgx2" Jun 25 18:22:02.841585 kubelet[3586]: E0625 18:22:02.841558 3586 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-5dd5756b68-8cgx2_kube-system(e3061c83-3e80-4699-9cd5-cd32f59ce77c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-5dd5756b68-8cgx2_kube-system(e3061c83-3e80-4699-9cd5-cd32f59ce77c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"65bd96b0a16e1e6084b4ebc7b1237f3c95b53a176499ed410b65f0d9c2f794a6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-8cgx2" podUID="e3061c83-3e80-4699-9cd5-cd32f59ce77c" Jun 25 18:22:02.871895 containerd[2137]: time="2024-06-25T18:22:02.871803088Z" level=error msg="Failed to destroy network for sandbox \"61b196cafb2c5f7e34521140e01a9d130a5db5d81d749f79597f19e97bf8184e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:22:02.873507 containerd[2137]: time="2024-06-25T18:22:02.872908024Z" level=error msg="encountered an error cleaning up failed sandbox \"61b196cafb2c5f7e34521140e01a9d130a5db5d81d749f79597f19e97bf8184e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:22:02.873507 containerd[2137]: time="2024-06-25T18:22:02.873044080Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-nr5tn,Uid:01707278-3a25-4665-b673-aed626240ae3,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"61b196cafb2c5f7e34521140e01a9d130a5db5d81d749f79597f19e97bf8184e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:22:02.873734 kubelet[3586]: E0625 18:22:02.873526 3586 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"61b196cafb2c5f7e34521140e01a9d130a5db5d81d749f79597f19e97bf8184e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:22:02.877540 kubelet[3586]: E0625 18:22:02.874065 3586 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"61b196cafb2c5f7e34521140e01a9d130a5db5d81d749f79597f19e97bf8184e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-5dd5756b68-nr5tn" Jun 25 18:22:02.877540 kubelet[3586]: E0625 18:22:02.874194 3586 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"61b196cafb2c5f7e34521140e01a9d130a5db5d81d749f79597f19e97bf8184e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-nr5tn" Jun 25 18:22:02.877540 kubelet[3586]: E0625 18:22:02.874361 3586 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-5dd5756b68-nr5tn_kube-system(01707278-3a25-4665-b673-aed626240ae3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-5dd5756b68-nr5tn_kube-system(01707278-3a25-4665-b673-aed626240ae3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"61b196cafb2c5f7e34521140e01a9d130a5db5d81d749f79597f19e97bf8184e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-nr5tn" podUID="01707278-3a25-4665-b673-aed626240ae3" Jun 25 18:22:02.890306 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-61b196cafb2c5f7e34521140e01a9d130a5db5d81d749f79597f19e97bf8184e-shm.mount: Deactivated successfully. Jun 25 18:22:02.940675 containerd[2137]: time="2024-06-25T18:22:02.940407832Z" level=error msg="Failed to destroy network for sandbox \"0dfe2a83a1c9334730cbe15ee849917fec1ec2930f396b0c0dad552385314fc0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:22:02.941639 containerd[2137]: time="2024-06-25T18:22:02.941359684Z" level=error msg="encountered an error cleaning up failed sandbox \"0dfe2a83a1c9334730cbe15ee849917fec1ec2930f396b0c0dad552385314fc0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:22:02.941639 containerd[2137]: time="2024-06-25T18:22:02.941442844Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7f944cc9c7-m2v8x,Uid:495cd0df-7e05-4c75-b413-8145733a3fc6,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0dfe2a83a1c9334730cbe15ee849917fec1ec2930f396b0c0dad552385314fc0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:22:02.944919 kubelet[3586]: E0625 18:22:02.942071 3586 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0dfe2a83a1c9334730cbe15ee849917fec1ec2930f396b0c0dad552385314fc0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:22:02.944919 kubelet[3586]: E0625 18:22:02.942166 3586 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup 
network for sandbox \"0dfe2a83a1c9334730cbe15ee849917fec1ec2930f396b0c0dad552385314fc0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7f944cc9c7-m2v8x" Jun 25 18:22:02.944919 kubelet[3586]: E0625 18:22:02.942204 3586 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0dfe2a83a1c9334730cbe15ee849917fec1ec2930f396b0c0dad552385314fc0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7f944cc9c7-m2v8x" Jun 25 18:22:02.945622 kubelet[3586]: E0625 18:22:02.942283 3586 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7f944cc9c7-m2v8x_calico-system(495cd0df-7e05-4c75-b413-8145733a3fc6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7f944cc9c7-m2v8x_calico-system(495cd0df-7e05-4c75-b413-8145733a3fc6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0dfe2a83a1c9334730cbe15ee849917fec1ec2930f396b0c0dad552385314fc0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7f944cc9c7-m2v8x" podUID="495cd0df-7e05-4c75-b413-8145733a3fc6" Jun 25 18:22:02.950347 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0dfe2a83a1c9334730cbe15ee849917fec1ec2930f396b0c0dad552385314fc0-shm.mount: Deactivated successfully. 
Jun 25 18:22:03.102725 containerd[2137]: time="2024-06-25T18:22:03.102296437Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-66nfl,Uid:ad1ae947-67f0-4071-a385-d1029b7ab538,Namespace:calico-system,Attempt:0,}" Jun 25 18:22:03.238522 containerd[2137]: time="2024-06-25T18:22:03.238350289Z" level=error msg="Failed to destroy network for sandbox \"e8627122a919844d22645dcf79e51e00625e331519d1f70ef84ef456d80cec3e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:22:03.241824 containerd[2137]: time="2024-06-25T18:22:03.241696201Z" level=error msg="encountered an error cleaning up failed sandbox \"e8627122a919844d22645dcf79e51e00625e331519d1f70ef84ef456d80cec3e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:22:03.242004 containerd[2137]: time="2024-06-25T18:22:03.241872661Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-66nfl,Uid:ad1ae947-67f0-4071-a385-d1029b7ab538,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e8627122a919844d22645dcf79e51e00625e331519d1f70ef84ef456d80cec3e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:22:03.245327 kubelet[3586]: E0625 18:22:03.242273 3586 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e8627122a919844d22645dcf79e51e00625e331519d1f70ef84ef456d80cec3e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:22:03.245327 kubelet[3586]: E0625 18:22:03.242351 3586 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e8627122a919844d22645dcf79e51e00625e331519d1f70ef84ef456d80cec3e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-66nfl" Jun 25 18:22:03.245327 kubelet[3586]: E0625 18:22:03.242396 3586 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e8627122a919844d22645dcf79e51e00625e331519d1f70ef84ef456d80cec3e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-66nfl" Jun 25 18:22:03.244640 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e8627122a919844d22645dcf79e51e00625e331519d1f70ef84ef456d80cec3e-shm.mount: Deactivated successfully. 
Jun 25 18:22:03.245801 kubelet[3586]: E0625 18:22:03.242517 3586 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-66nfl_calico-system(ad1ae947-67f0-4071-a385-d1029b7ab538)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-66nfl_calico-system(ad1ae947-67f0-4071-a385-d1029b7ab538)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e8627122a919844d22645dcf79e51e00625e331519d1f70ef84ef456d80cec3e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-66nfl" podUID="ad1ae947-67f0-4071-a385-d1029b7ab538" Jun 25 18:22:03.276517 kubelet[3586]: I0625 18:22:03.274880 3586 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jun 25 18:22:03.466015 kubelet[3586]: I0625 18:22:03.465657 3586 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0dfe2a83a1c9334730cbe15ee849917fec1ec2930f396b0c0dad552385314fc0" Jun 25 18:22:03.468382 containerd[2137]: time="2024-06-25T18:22:03.467855691Z" level=info msg="StopPodSandbox for \"0dfe2a83a1c9334730cbe15ee849917fec1ec2930f396b0c0dad552385314fc0\"" Jun 25 18:22:03.469203 containerd[2137]: time="2024-06-25T18:22:03.469063719Z" level=info msg="Ensure that sandbox 0dfe2a83a1c9334730cbe15ee849917fec1ec2930f396b0c0dad552385314fc0 in task-service has been cleanup successfully" Jun 25 18:22:03.473811 kubelet[3586]: I0625 18:22:03.473744 3586 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="61b196cafb2c5f7e34521140e01a9d130a5db5d81d749f79597f19e97bf8184e" Jun 25 18:22:03.477789 containerd[2137]: time="2024-06-25T18:22:03.477068967Z" level=info msg="StopPodSandbox for \"61b196cafb2c5f7e34521140e01a9d130a5db5d81d749f79597f19e97bf8184e\"" Jun 25 18:22:03.477789 containerd[2137]: time="2024-06-25T18:22:03.477447663Z" level=info msg="Ensure that sandbox 61b196cafb2c5f7e34521140e01a9d130a5db5d81d749f79597f19e97bf8184e in task-service has been cleanup successfully" Jun 25 18:22:03.498832 containerd[2137]: time="2024-06-25T18:22:03.498754275Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.0\"" Jun 25 18:22:03.503925 kubelet[3586]: I0625 18:22:03.502936 3586 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="65bd96b0a16e1e6084b4ebc7b1237f3c95b53a176499ed410b65f0d9c2f794a6" Jun 25 18:22:03.506690 containerd[2137]: time="2024-06-25T18:22:03.506607219Z" level=info msg="StopPodSandbox for \"65bd96b0a16e1e6084b4ebc7b1237f3c95b53a176499ed410b65f0d9c2f794a6\"" Jun 25 18:22:03.507536 containerd[2137]: time="2024-06-25T18:22:03.507407667Z" level=info msg="Ensure that sandbox 65bd96b0a16e1e6084b4ebc7b1237f3c95b53a176499ed410b65f0d9c2f794a6 in task-service has been cleanup successfully" Jun 25 18:22:03.528495 kubelet[3586]: I0625 18:22:03.528325 3586 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e8627122a919844d22645dcf79e51e00625e331519d1f70ef84ef456d80cec3e" Jun 25 18:22:03.545695 containerd[2137]: time="2024-06-25T18:22:03.544749819Z" level=info msg="StopPodSandbox for \"e8627122a919844d22645dcf79e51e00625e331519d1f70ef84ef456d80cec3e\"" Jun 25 18:22:03.545695 containerd[2137]: time="2024-06-25T18:22:03.545183979Z" level=info msg="Ensure that sandbox e8627122a919844d22645dcf79e51e00625e331519d1f70ef84ef456d80cec3e in task-service 
has been cleanup successfully" Jun 25 18:22:03.639783 containerd[2137]: time="2024-06-25T18:22:03.639688671Z" level=error msg="StopPodSandbox for \"61b196cafb2c5f7e34521140e01a9d130a5db5d81d749f79597f19e97bf8184e\" failed" error="failed to destroy network for sandbox \"61b196cafb2c5f7e34521140e01a9d130a5db5d81d749f79597f19e97bf8184e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:22:03.640540 kubelet[3586]: E0625 18:22:03.640441 3586 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"61b196cafb2c5f7e34521140e01a9d130a5db5d81d749f79597f19e97bf8184e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="61b196cafb2c5f7e34521140e01a9d130a5db5d81d749f79597f19e97bf8184e" Jun 25 18:22:03.640728 kubelet[3586]: E0625 18:22:03.640580 3586 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"61b196cafb2c5f7e34521140e01a9d130a5db5d81d749f79597f19e97bf8184e"} Jun 25 18:22:03.640728 kubelet[3586]: E0625 18:22:03.640650 3586 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"01707278-3a25-4665-b673-aed626240ae3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"61b196cafb2c5f7e34521140e01a9d130a5db5d81d749f79597f19e97bf8184e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jun 25 18:22:03.640728 kubelet[3586]: E0625 18:22:03.640717 3586 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"01707278-3a25-4665-b673-aed626240ae3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"61b196cafb2c5f7e34521140e01a9d130a5db5d81d749f79597f19e97bf8184e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-nr5tn" podUID="01707278-3a25-4665-b673-aed626240ae3" Jun 25 18:22:03.657110 containerd[2137]: time="2024-06-25T18:22:03.657016383Z" level=error msg="StopPodSandbox for \"0dfe2a83a1c9334730cbe15ee849917fec1ec2930f396b0c0dad552385314fc0\" failed" error="failed to destroy network for sandbox \"0dfe2a83a1c9334730cbe15ee849917fec1ec2930f396b0c0dad552385314fc0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:22:03.657834 kubelet[3586]: E0625 18:22:03.657390 3586 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0dfe2a83a1c9334730cbe15ee849917fec1ec2930f396b0c0dad552385314fc0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0dfe2a83a1c9334730cbe15ee849917fec1ec2930f396b0c0dad552385314fc0" Jun 25 18:22:03.657834 kubelet[3586]: 
E0625 18:22:03.657507 3586 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"0dfe2a83a1c9334730cbe15ee849917fec1ec2930f396b0c0dad552385314fc0"} Jun 25 18:22:03.657834 kubelet[3586]: E0625 18:22:03.657588 3586 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"495cd0df-7e05-4c75-b413-8145733a3fc6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0dfe2a83a1c9334730cbe15ee849917fec1ec2930f396b0c0dad552385314fc0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jun 25 18:22:03.657834 kubelet[3586]: E0625 18:22:03.657648 3586 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"495cd0df-7e05-4c75-b413-8145733a3fc6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0dfe2a83a1c9334730cbe15ee849917fec1ec2930f396b0c0dad552385314fc0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7f944cc9c7-m2v8x" podUID="495cd0df-7e05-4c75-b413-8145733a3fc6" Jun 25 18:22:03.669553 containerd[2137]: time="2024-06-25T18:22:03.669426688Z" level=error msg="StopPodSandbox for \"65bd96b0a16e1e6084b4ebc7b1237f3c95b53a176499ed410b65f0d9c2f794a6\" failed" error="failed to destroy network for sandbox \"65bd96b0a16e1e6084b4ebc7b1237f3c95b53a176499ed410b65f0d9c2f794a6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:22:03.669952 kubelet[3586]: E0625 18:22:03.669842 3586 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"65bd96b0a16e1e6084b4ebc7b1237f3c95b53a176499ed410b65f0d9c2f794a6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="65bd96b0a16e1e6084b4ebc7b1237f3c95b53a176499ed410b65f0d9c2f794a6" Jun 25 18:22:03.669952 kubelet[3586]: E0625 18:22:03.669913 3586 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"65bd96b0a16e1e6084b4ebc7b1237f3c95b53a176499ed410b65f0d9c2f794a6"} Jun 25 18:22:03.669952 kubelet[3586]: E0625 18:22:03.669979 3586 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e3061c83-3e80-4699-9cd5-cd32f59ce77c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"65bd96b0a16e1e6084b4ebc7b1237f3c95b53a176499ed410b65f0d9c2f794a6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jun 25 18:22:03.670421 kubelet[3586]: E0625 18:22:03.670181 3586 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e3061c83-3e80-4699-9cd5-cd32f59ce77c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"65bd96b0a16e1e6084b4ebc7b1237f3c95b53a176499ed410b65f0d9c2f794a6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-8cgx2" podUID="e3061c83-3e80-4699-9cd5-cd32f59ce77c" Jun 25 18:22:03.673604 containerd[2137]: time="2024-06-25T18:22:03.673436260Z" level=error msg="StopPodSandbox for \"e8627122a919844d22645dcf79e51e00625e331519d1f70ef84ef456d80cec3e\" failed" error="failed to destroy network for sandbox \"e8627122a919844d22645dcf79e51e00625e331519d1f70ef84ef456d80cec3e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:22:03.674260 kubelet[3586]: E0625 18:22:03.673942 3586 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e8627122a919844d22645dcf79e51e00625e331519d1f70ef84ef456d80cec3e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e8627122a919844d22645dcf79e51e00625e331519d1f70ef84ef456d80cec3e" Jun 25 18:22:03.674260 kubelet[3586]: E0625 18:22:03.674009 3586 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e8627122a919844d22645dcf79e51e00625e331519d1f70ef84ef456d80cec3e"} Jun 25 18:22:03.674260 kubelet[3586]: E0625 18:22:03.674077 3586 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ad1ae947-67f0-4071-a385-d1029b7ab538\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e8627122a919844d22645dcf79e51e00625e331519d1f70ef84ef456d80cec3e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jun 25 18:22:03.674260 kubelet[3586]: E0625 18:22:03.674129 3586 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ad1ae947-67f0-4071-a385-d1029b7ab538\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e8627122a919844d22645dcf79e51e00625e331519d1f70ef84ef456d80cec3e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-66nfl" podUID="ad1ae947-67f0-4071-a385-d1029b7ab538" Jun 25 18:22:12.997930 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4293394539.mount: Deactivated successfully. 
Jun 25 18:22:13.072698 containerd[2137]: time="2024-06-25T18:22:13.072611398Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:22:13.074477 containerd[2137]: time="2024-06-25T18:22:13.074348710Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.28.0: active requests=0, bytes read=110491350" Jun 25 18:22:13.076606 containerd[2137]: time="2024-06-25T18:22:13.076432594Z" level=info msg="ImageCreate event name:\"sha256:d80cbd636ae2754a08d04558f0436508a17d92258e4712cc4a6299f43497607f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:22:13.080901 containerd[2137]: time="2024-06-25T18:22:13.080780614Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:95f8004836427050c9997ad0800819ced5636f6bda647b4158fc7c497910c8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:22:13.082702 containerd[2137]: time="2024-06-25T18:22:13.082613902Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.28.0\" with image id \"sha256:d80cbd636ae2754a08d04558f0436508a17d92258e4712cc4a6299f43497607f\", repo tag \"ghcr.io/flatcar/calico/node:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/node@sha256:95f8004836427050c9997ad0800819ced5636f6bda647b4158fc7c497910c8d0\", size \"110491212\" in 9.583777583s" Jun 25 18:22:13.082702 containerd[2137]: time="2024-06-25T18:22:13.082692634Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.0\" returns image reference \"sha256:d80cbd636ae2754a08d04558f0436508a17d92258e4712cc4a6299f43497607f\"" Jun 25 18:22:13.116441 containerd[2137]: time="2024-06-25T18:22:13.116370934Z" level=info msg="CreateContainer within sandbox \"c3343300f366f322a97b456cf6a3866903df26e0d0562c29c8bfdc84e1fc267f\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jun 25 18:22:13.151971 containerd[2137]: time="2024-06-25T18:22:13.151886327Z" level=info msg="CreateContainer within sandbox \"c3343300f366f322a97b456cf6a3866903df26e0d0562c29c8bfdc84e1fc267f\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"ddd3c98f86624d45fca205119dc8e16cd4e05ddd2962610f411cb42377a93e41\"" Jun 25 18:22:13.156077 containerd[2137]: time="2024-06-25T18:22:13.154502123Z" level=info msg="StartContainer for \"ddd3c98f86624d45fca205119dc8e16cd4e05ddd2962610f411cb42377a93e41\"" Jun 25 18:22:13.275028 containerd[2137]: time="2024-06-25T18:22:13.273843047Z" level=info msg="StartContainer for \"ddd3c98f86624d45fca205119dc8e16cd4e05ddd2962610f411cb42377a93e41\" returns successfully" Jun 25 18:22:13.405537 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jun 25 18:22:13.405705 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
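The Pulled entry above reports the calico/node image as 110491212 bytes fetched in 9.583777583s, which is consistent with the PullImage request logged roughly ten seconds earlier and works out to about 11 MiB/s. A standalone sketch of that arithmetic, with the figures copied from the entry (illustrative only):

package main

import (
	"fmt"
	"time"
)

func main() {
	// Figures copied from the "Pulled image ghcr.io/flatcar/calico/node:v3.28.0" entry.
	elapsed, err := time.ParseDuration("9.583777583s")
	if err != nil {
		panic(err)
	}
	const sizeBytes = 110491212 // reported image size
	rate := float64(sizeBytes) / elapsed.Seconds() / (1 << 20)
	fmt.Printf("average pull rate: %.1f MiB/s\n", rate) // ~11.0 MiB/s
}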
Jun 25 18:22:14.099999 containerd[2137]: time="2024-06-25T18:22:14.098847755Z" level=info msg="StopPodSandbox for \"e8627122a919844d22645dcf79e51e00625e331519d1f70ef84ef456d80cec3e\"" Jun 25 18:22:14.221282 kubelet[3586]: I0625 18:22:14.218782 3586 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-4x992" podStartSLOduration=3.529422836 podCreationTimestamp="2024-06-25 18:21:53 +0000 UTC" firstStartedPulling="2024-06-25 18:21:55.393707178 +0000 UTC m=+28.570333234" lastFinishedPulling="2024-06-25 18:22:13.082996894 +0000 UTC m=+46.259622950" observedRunningTime="2024-06-25 18:22:13.627514849 +0000 UTC m=+46.804141193" watchObservedRunningTime="2024-06-25 18:22:14.218712552 +0000 UTC m=+47.395338620" Jun 25 18:22:14.296011 containerd[2137]: 2024-06-25 18:22:14.217 [INFO][4924] k8s.go 608: Cleaning up netns ContainerID="e8627122a919844d22645dcf79e51e00625e331519d1f70ef84ef456d80cec3e" Jun 25 18:22:14.296011 containerd[2137]: 2024-06-25 18:22:14.218 [INFO][4924] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="e8627122a919844d22645dcf79e51e00625e331519d1f70ef84ef456d80cec3e" iface="eth0" netns="/var/run/netns/cni-3b7bee05-2dfe-82b7-727b-a8327f8004fe" Jun 25 18:22:14.296011 containerd[2137]: 2024-06-25 18:22:14.219 [INFO][4924] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="e8627122a919844d22645dcf79e51e00625e331519d1f70ef84ef456d80cec3e" iface="eth0" netns="/var/run/netns/cni-3b7bee05-2dfe-82b7-727b-a8327f8004fe" Jun 25 18:22:14.296011 containerd[2137]: 2024-06-25 18:22:14.220 [INFO][4924] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="e8627122a919844d22645dcf79e51e00625e331519d1f70ef84ef456d80cec3e" iface="eth0" netns="/var/run/netns/cni-3b7bee05-2dfe-82b7-727b-a8327f8004fe" Jun 25 18:22:14.296011 containerd[2137]: 2024-06-25 18:22:14.220 [INFO][4924] k8s.go 615: Releasing IP address(es) ContainerID="e8627122a919844d22645dcf79e51e00625e331519d1f70ef84ef456d80cec3e" Jun 25 18:22:14.296011 containerd[2137]: 2024-06-25 18:22:14.222 [INFO][4924] utils.go 188: Calico CNI releasing IP address ContainerID="e8627122a919844d22645dcf79e51e00625e331519d1f70ef84ef456d80cec3e" Jun 25 18:22:14.296011 containerd[2137]: 2024-06-25 18:22:14.269 [INFO][4934] ipam_plugin.go 411: Releasing address using handleID ContainerID="e8627122a919844d22645dcf79e51e00625e331519d1f70ef84ef456d80cec3e" HandleID="k8s-pod-network.e8627122a919844d22645dcf79e51e00625e331519d1f70ef84ef456d80cec3e" Workload="ip--172--31--30--218-k8s-csi--node--driver--66nfl-eth0" Jun 25 18:22:14.296011 containerd[2137]: 2024-06-25 18:22:14.269 [INFO][4934] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:22:14.296011 containerd[2137]: 2024-06-25 18:22:14.270 [INFO][4934] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 18:22:14.296011 containerd[2137]: 2024-06-25 18:22:14.284 [WARNING][4934] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e8627122a919844d22645dcf79e51e00625e331519d1f70ef84ef456d80cec3e" HandleID="k8s-pod-network.e8627122a919844d22645dcf79e51e00625e331519d1f70ef84ef456d80cec3e" Workload="ip--172--31--30--218-k8s-csi--node--driver--66nfl-eth0" Jun 25 18:22:14.296011 containerd[2137]: 2024-06-25 18:22:14.284 [INFO][4934] ipam_plugin.go 439: Releasing address using workloadID ContainerID="e8627122a919844d22645dcf79e51e00625e331519d1f70ef84ef456d80cec3e" HandleID="k8s-pod-network.e8627122a919844d22645dcf79e51e00625e331519d1f70ef84ef456d80cec3e" Workload="ip--172--31--30--218-k8s-csi--node--driver--66nfl-eth0" Jun 25 18:22:14.296011 containerd[2137]: 2024-06-25 18:22:14.289 [INFO][4934] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 18:22:14.296011 containerd[2137]: 2024-06-25 18:22:14.292 [INFO][4924] k8s.go 621: Teardown processing complete. ContainerID="e8627122a919844d22645dcf79e51e00625e331519d1f70ef84ef456d80cec3e" Jun 25 18:22:14.302099 containerd[2137]: time="2024-06-25T18:22:14.297634068Z" level=info msg="TearDown network for sandbox \"e8627122a919844d22645dcf79e51e00625e331519d1f70ef84ef456d80cec3e\" successfully" Jun 25 18:22:14.302099 containerd[2137]: time="2024-06-25T18:22:14.297686016Z" level=info msg="StopPodSandbox for \"e8627122a919844d22645dcf79e51e00625e331519d1f70ef84ef456d80cec3e\" returns successfully" Jun 25 18:22:14.302099 containerd[2137]: time="2024-06-25T18:22:14.299923068Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-66nfl,Uid:ad1ae947-67f0-4071-a385-d1029b7ab538,Namespace:calico-system,Attempt:1,}" Jun 25 18:22:14.302113 systemd[1]: run-netns-cni\x2d3b7bee05\x2d2dfe\x2d82b7\x2d727b\x2da8327f8004fe.mount: Deactivated successfully. Jun 25 18:22:14.536218 systemd-networkd[1695]: cali358c61e966a: Link UP Jun 25 18:22:14.538293 systemd-networkd[1695]: cali358c61e966a: Gained carrier Jun 25 18:22:14.538651 (udev-worker)[4871]: Network interface NamePolicy= disabled on kernel command line. 
Jun 25 18:22:14.566941 containerd[2137]: 2024-06-25 18:22:14.376 [INFO][4941] utils.go 100: File /var/lib/calico/mtu does not exist Jun 25 18:22:14.566941 containerd[2137]: 2024-06-25 18:22:14.402 [INFO][4941] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--30--218-k8s-csi--node--driver--66nfl-eth0 csi-node-driver- calico-system ad1ae947-67f0-4071-a385-d1029b7ab538 803 0 2024-06-25 18:21:49 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:7d7f6c786c k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s ip-172-31-30-218 csi-node-driver-66nfl eth0 default [] [] [kns.calico-system ksa.calico-system.default] cali358c61e966a [] []}} ContainerID="5f0671fccc5f8e5ae6c8d938b16aaa768eb190577b30f323033d49f3b0804abf" Namespace="calico-system" Pod="csi-node-driver-66nfl" WorkloadEndpoint="ip--172--31--30--218-k8s-csi--node--driver--66nfl-" Jun 25 18:22:14.566941 containerd[2137]: 2024-06-25 18:22:14.402 [INFO][4941] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="5f0671fccc5f8e5ae6c8d938b16aaa768eb190577b30f323033d49f3b0804abf" Namespace="calico-system" Pod="csi-node-driver-66nfl" WorkloadEndpoint="ip--172--31--30--218-k8s-csi--node--driver--66nfl-eth0" Jun 25 18:22:14.566941 containerd[2137]: 2024-06-25 18:22:14.454 [INFO][4952] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5f0671fccc5f8e5ae6c8d938b16aaa768eb190577b30f323033d49f3b0804abf" HandleID="k8s-pod-network.5f0671fccc5f8e5ae6c8d938b16aaa768eb190577b30f323033d49f3b0804abf" Workload="ip--172--31--30--218-k8s-csi--node--driver--66nfl-eth0" Jun 25 18:22:14.566941 containerd[2137]: 2024-06-25 18:22:14.475 [INFO][4952] ipam_plugin.go 264: Auto assigning IP ContainerID="5f0671fccc5f8e5ae6c8d938b16aaa768eb190577b30f323033d49f3b0804abf" HandleID="k8s-pod-network.5f0671fccc5f8e5ae6c8d938b16aaa768eb190577b30f323033d49f3b0804abf" Workload="ip--172--31--30--218-k8s-csi--node--driver--66nfl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400023fc90), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-30-218", "pod":"csi-node-driver-66nfl", "timestamp":"2024-06-25 18:22:14.454145377 +0000 UTC"}, Hostname:"ip-172-31-30-218", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 25 18:22:14.566941 containerd[2137]: 2024-06-25 18:22:14.476 [INFO][4952] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:22:14.566941 containerd[2137]: 2024-06-25 18:22:14.476 [INFO][4952] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jun 25 18:22:14.566941 containerd[2137]: 2024-06-25 18:22:14.476 [INFO][4952] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-30-218' Jun 25 18:22:14.566941 containerd[2137]: 2024-06-25 18:22:14.479 [INFO][4952] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.5f0671fccc5f8e5ae6c8d938b16aaa768eb190577b30f323033d49f3b0804abf" host="ip-172-31-30-218" Jun 25 18:22:14.566941 containerd[2137]: 2024-06-25 18:22:14.486 [INFO][4952] ipam.go 372: Looking up existing affinities for host host="ip-172-31-30-218" Jun 25 18:22:14.566941 containerd[2137]: 2024-06-25 18:22:14.494 [INFO][4952] ipam.go 489: Trying affinity for 192.168.87.192/26 host="ip-172-31-30-218" Jun 25 18:22:14.566941 containerd[2137]: 2024-06-25 18:22:14.497 [INFO][4952] ipam.go 155: Attempting to load block cidr=192.168.87.192/26 host="ip-172-31-30-218" Jun 25 18:22:14.566941 containerd[2137]: 2024-06-25 18:22:14.501 [INFO][4952] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.87.192/26 host="ip-172-31-30-218" Jun 25 18:22:14.566941 containerd[2137]: 2024-06-25 18:22:14.501 [INFO][4952] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.87.192/26 handle="k8s-pod-network.5f0671fccc5f8e5ae6c8d938b16aaa768eb190577b30f323033d49f3b0804abf" host="ip-172-31-30-218" Jun 25 18:22:14.566941 containerd[2137]: 2024-06-25 18:22:14.504 [INFO][4952] ipam.go 1685: Creating new handle: k8s-pod-network.5f0671fccc5f8e5ae6c8d938b16aaa768eb190577b30f323033d49f3b0804abf Jun 25 18:22:14.566941 containerd[2137]: 2024-06-25 18:22:14.510 [INFO][4952] ipam.go 1203: Writing block in order to claim IPs block=192.168.87.192/26 handle="k8s-pod-network.5f0671fccc5f8e5ae6c8d938b16aaa768eb190577b30f323033d49f3b0804abf" host="ip-172-31-30-218" Jun 25 18:22:14.566941 containerd[2137]: 2024-06-25 18:22:14.519 [INFO][4952] ipam.go 1216: Successfully claimed IPs: [192.168.87.193/26] block=192.168.87.192/26 handle="k8s-pod-network.5f0671fccc5f8e5ae6c8d938b16aaa768eb190577b30f323033d49f3b0804abf" host="ip-172-31-30-218" Jun 25 18:22:14.566941 containerd[2137]: 2024-06-25 18:22:14.519 [INFO][4952] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.87.193/26] handle="k8s-pod-network.5f0671fccc5f8e5ae6c8d938b16aaa768eb190577b30f323033d49f3b0804abf" host="ip-172-31-30-218" Jun 25 18:22:14.566941 containerd[2137]: 2024-06-25 18:22:14.519 [INFO][4952] ipam_plugin.go 373: Released host-wide IPAM lock. 
Jun 25 18:22:14.566941 containerd[2137]: 2024-06-25 18:22:14.520 [INFO][4952] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.87.193/26] IPv6=[] ContainerID="5f0671fccc5f8e5ae6c8d938b16aaa768eb190577b30f323033d49f3b0804abf" HandleID="k8s-pod-network.5f0671fccc5f8e5ae6c8d938b16aaa768eb190577b30f323033d49f3b0804abf" Workload="ip--172--31--30--218-k8s-csi--node--driver--66nfl-eth0" Jun 25 18:22:14.569785 containerd[2137]: 2024-06-25 18:22:14.525 [INFO][4941] k8s.go 386: Populated endpoint ContainerID="5f0671fccc5f8e5ae6c8d938b16aaa768eb190577b30f323033d49f3b0804abf" Namespace="calico-system" Pod="csi-node-driver-66nfl" WorkloadEndpoint="ip--172--31--30--218-k8s-csi--node--driver--66nfl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--218-k8s-csi--node--driver--66nfl-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ad1ae947-67f0-4071-a385-d1029b7ab538", ResourceVersion:"803", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 21, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-218", ContainerID:"", Pod:"csi-node-driver-66nfl", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.87.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali358c61e966a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:22:14.569785 containerd[2137]: 2024-06-25 18:22:14.526 [INFO][4941] k8s.go 387: Calico CNI using IPs: [192.168.87.193/32] ContainerID="5f0671fccc5f8e5ae6c8d938b16aaa768eb190577b30f323033d49f3b0804abf" Namespace="calico-system" Pod="csi-node-driver-66nfl" WorkloadEndpoint="ip--172--31--30--218-k8s-csi--node--driver--66nfl-eth0" Jun 25 18:22:14.569785 containerd[2137]: 2024-06-25 18:22:14.526 [INFO][4941] dataplane_linux.go 68: Setting the host side veth name to cali358c61e966a ContainerID="5f0671fccc5f8e5ae6c8d938b16aaa768eb190577b30f323033d49f3b0804abf" Namespace="calico-system" Pod="csi-node-driver-66nfl" WorkloadEndpoint="ip--172--31--30--218-k8s-csi--node--driver--66nfl-eth0" Jun 25 18:22:14.569785 containerd[2137]: 2024-06-25 18:22:14.537 [INFO][4941] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="5f0671fccc5f8e5ae6c8d938b16aaa768eb190577b30f323033d49f3b0804abf" Namespace="calico-system" Pod="csi-node-driver-66nfl" WorkloadEndpoint="ip--172--31--30--218-k8s-csi--node--driver--66nfl-eth0" Jun 25 18:22:14.569785 containerd[2137]: 2024-06-25 18:22:14.538 [INFO][4941] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="5f0671fccc5f8e5ae6c8d938b16aaa768eb190577b30f323033d49f3b0804abf" Namespace="calico-system" Pod="csi-node-driver-66nfl" WorkloadEndpoint="ip--172--31--30--218-k8s-csi--node--driver--66nfl-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--218-k8s-csi--node--driver--66nfl-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ad1ae947-67f0-4071-a385-d1029b7ab538", ResourceVersion:"803", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 21, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-218", ContainerID:"5f0671fccc5f8e5ae6c8d938b16aaa768eb190577b30f323033d49f3b0804abf", Pod:"csi-node-driver-66nfl", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.87.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali358c61e966a", MAC:"0a:1a:cb:d3:da:4a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:22:14.569785 containerd[2137]: 2024-06-25 18:22:14.560 [INFO][4941] k8s.go 500: Wrote updated endpoint to datastore ContainerID="5f0671fccc5f8e5ae6c8d938b16aaa768eb190577b30f323033d49f3b0804abf" Namespace="calico-system" Pod="csi-node-driver-66nfl" WorkloadEndpoint="ip--172--31--30--218-k8s-csi--node--driver--66nfl-eth0" Jun 25 18:22:14.634143 containerd[2137]: time="2024-06-25T18:22:14.633153518Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:22:14.634143 containerd[2137]: time="2024-06-25T18:22:14.633284282Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:22:14.634143 containerd[2137]: time="2024-06-25T18:22:14.633330434Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:22:14.634143 containerd[2137]: time="2024-06-25T18:22:14.633366050Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:22:14.747117 containerd[2137]: time="2024-06-25T18:22:14.746229483Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-66nfl,Uid:ad1ae947-67f0-4071-a385-d1029b7ab538,Namespace:calico-system,Attempt:1,} returns sandbox id \"5f0671fccc5f8e5ae6c8d938b16aaa768eb190577b30f323033d49f3b0804abf\"" Jun 25 18:22:14.752507 containerd[2137]: time="2024-06-25T18:22:14.750132027Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.0\"" Jun 25 18:22:15.103330 containerd[2137]: time="2024-06-25T18:22:15.102178116Z" level=info msg="StopPodSandbox for \"65bd96b0a16e1e6084b4ebc7b1237f3c95b53a176499ed410b65f0d9c2f794a6\"" Jun 25 18:22:15.327997 containerd[2137]: 2024-06-25 18:22:15.236 [INFO][5044] k8s.go 608: Cleaning up netns ContainerID="65bd96b0a16e1e6084b4ebc7b1237f3c95b53a176499ed410b65f0d9c2f794a6" Jun 25 18:22:15.327997 containerd[2137]: 2024-06-25 18:22:15.236 [INFO][5044] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="65bd96b0a16e1e6084b4ebc7b1237f3c95b53a176499ed410b65f0d9c2f794a6" iface="eth0" netns="/var/run/netns/cni-e22513e4-3a18-7c91-8b57-6276eb0ddb2c" Jun 25 18:22:15.327997 containerd[2137]: 2024-06-25 18:22:15.237 [INFO][5044] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="65bd96b0a16e1e6084b4ebc7b1237f3c95b53a176499ed410b65f0d9c2f794a6" iface="eth0" netns="/var/run/netns/cni-e22513e4-3a18-7c91-8b57-6276eb0ddb2c" Jun 25 18:22:15.327997 containerd[2137]: 2024-06-25 18:22:15.238 [INFO][5044] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="65bd96b0a16e1e6084b4ebc7b1237f3c95b53a176499ed410b65f0d9c2f794a6" iface="eth0" netns="/var/run/netns/cni-e22513e4-3a18-7c91-8b57-6276eb0ddb2c" Jun 25 18:22:15.327997 containerd[2137]: 2024-06-25 18:22:15.240 [INFO][5044] k8s.go 615: Releasing IP address(es) ContainerID="65bd96b0a16e1e6084b4ebc7b1237f3c95b53a176499ed410b65f0d9c2f794a6" Jun 25 18:22:15.327997 containerd[2137]: 2024-06-25 18:22:15.240 [INFO][5044] utils.go 188: Calico CNI releasing IP address ContainerID="65bd96b0a16e1e6084b4ebc7b1237f3c95b53a176499ed410b65f0d9c2f794a6" Jun 25 18:22:15.327997 containerd[2137]: 2024-06-25 18:22:15.293 [INFO][5074] ipam_plugin.go 411: Releasing address using handleID ContainerID="65bd96b0a16e1e6084b4ebc7b1237f3c95b53a176499ed410b65f0d9c2f794a6" HandleID="k8s-pod-network.65bd96b0a16e1e6084b4ebc7b1237f3c95b53a176499ed410b65f0d9c2f794a6" Workload="ip--172--31--30--218-k8s-coredns--5dd5756b68--8cgx2-eth0" Jun 25 18:22:15.327997 containerd[2137]: 2024-06-25 18:22:15.293 [INFO][5074] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:22:15.327997 containerd[2137]: 2024-06-25 18:22:15.294 [INFO][5074] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 18:22:15.327997 containerd[2137]: 2024-06-25 18:22:15.309 [WARNING][5074] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="65bd96b0a16e1e6084b4ebc7b1237f3c95b53a176499ed410b65f0d9c2f794a6" HandleID="k8s-pod-network.65bd96b0a16e1e6084b4ebc7b1237f3c95b53a176499ed410b65f0d9c2f794a6" Workload="ip--172--31--30--218-k8s-coredns--5dd5756b68--8cgx2-eth0" Jun 25 18:22:15.327997 containerd[2137]: 2024-06-25 18:22:15.309 [INFO][5074] ipam_plugin.go 439: Releasing address using workloadID ContainerID="65bd96b0a16e1e6084b4ebc7b1237f3c95b53a176499ed410b65f0d9c2f794a6" HandleID="k8s-pod-network.65bd96b0a16e1e6084b4ebc7b1237f3c95b53a176499ed410b65f0d9c2f794a6" Workload="ip--172--31--30--218-k8s-coredns--5dd5756b68--8cgx2-eth0" Jun 25 18:22:15.327997 containerd[2137]: 2024-06-25 18:22:15.316 [INFO][5074] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 18:22:15.327997 containerd[2137]: 2024-06-25 18:22:15.324 [INFO][5044] k8s.go 621: Teardown processing complete. ContainerID="65bd96b0a16e1e6084b4ebc7b1237f3c95b53a176499ed410b65f0d9c2f794a6" Jun 25 18:22:15.331998 containerd[2137]: time="2024-06-25T18:22:15.330786205Z" level=info msg="TearDown network for sandbox \"65bd96b0a16e1e6084b4ebc7b1237f3c95b53a176499ed410b65f0d9c2f794a6\" successfully" Jun 25 18:22:15.331998 containerd[2137]: time="2024-06-25T18:22:15.330839701Z" level=info msg="StopPodSandbox for \"65bd96b0a16e1e6084b4ebc7b1237f3c95b53a176499ed410b65f0d9c2f794a6\" returns successfully" Jun 25 18:22:15.335228 containerd[2137]: time="2024-06-25T18:22:15.334474933Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-8cgx2,Uid:e3061c83-3e80-4699-9cd5-cd32f59ce77c,Namespace:kube-system,Attempt:1,}" Jun 25 18:22:15.353762 systemd[1]: run-netns-cni\x2de22513e4\x2d3a18\x2d7c91\x2d8b57\x2d6276eb0ddb2c.mount: Deactivated successfully. Jun 25 18:22:15.967887 systemd[1]: Started sshd@7-172.31.30.218:22-139.178.89.65:52080.service - OpenSSH per-connection server daemon (139.178.89.65:52080). 
Jun 25 18:22:16.003320 systemd-networkd[1695]: calibc6b5f31896: Link UP Jun 25 18:22:16.004796 systemd-networkd[1695]: calibc6b5f31896: Gained carrier Jun 25 18:22:16.050550 containerd[2137]: 2024-06-25 18:22:15.716 [INFO][5129] utils.go 100: File /var/lib/calico/mtu does not exist Jun 25 18:22:16.050550 containerd[2137]: 2024-06-25 18:22:15.761 [INFO][5129] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--30--218-k8s-coredns--5dd5756b68--8cgx2-eth0 coredns-5dd5756b68- kube-system e3061c83-3e80-4699-9cd5-cd32f59ce77c 813 0 2024-06-25 18:21:40 +0000 UTC map[k8s-app:kube-dns pod-template-hash:5dd5756b68 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-30-218 coredns-5dd5756b68-8cgx2 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calibc6b5f31896 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="cc79414f1f949c31195f500bfa4387b2af89a81c6568c4a1890f72a7f2e4cc2c" Namespace="kube-system" Pod="coredns-5dd5756b68-8cgx2" WorkloadEndpoint="ip--172--31--30--218-k8s-coredns--5dd5756b68--8cgx2-" Jun 25 18:22:16.050550 containerd[2137]: 2024-06-25 18:22:15.761 [INFO][5129] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="cc79414f1f949c31195f500bfa4387b2af89a81c6568c4a1890f72a7f2e4cc2c" Namespace="kube-system" Pod="coredns-5dd5756b68-8cgx2" WorkloadEndpoint="ip--172--31--30--218-k8s-coredns--5dd5756b68--8cgx2-eth0" Jun 25 18:22:16.050550 containerd[2137]: 2024-06-25 18:22:15.852 [INFO][5153] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cc79414f1f949c31195f500bfa4387b2af89a81c6568c4a1890f72a7f2e4cc2c" HandleID="k8s-pod-network.cc79414f1f949c31195f500bfa4387b2af89a81c6568c4a1890f72a7f2e4cc2c" Workload="ip--172--31--30--218-k8s-coredns--5dd5756b68--8cgx2-eth0" Jun 25 18:22:16.050550 containerd[2137]: 2024-06-25 18:22:15.884 [INFO][5153] ipam_plugin.go 264: Auto assigning IP ContainerID="cc79414f1f949c31195f500bfa4387b2af89a81c6568c4a1890f72a7f2e4cc2c" HandleID="k8s-pod-network.cc79414f1f949c31195f500bfa4387b2af89a81c6568c4a1890f72a7f2e4cc2c" Workload="ip--172--31--30--218-k8s-coredns--5dd5756b68--8cgx2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400034eb10), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-30-218", "pod":"coredns-5dd5756b68-8cgx2", "timestamp":"2024-06-25 18:22:15.852814696 +0000 UTC"}, Hostname:"ip-172-31-30-218", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 25 18:22:16.050550 containerd[2137]: 2024-06-25 18:22:15.884 [INFO][5153] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:22:16.050550 containerd[2137]: 2024-06-25 18:22:15.885 [INFO][5153] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jun 25 18:22:16.050550 containerd[2137]: 2024-06-25 18:22:15.885 [INFO][5153] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-30-218' Jun 25 18:22:16.050550 containerd[2137]: 2024-06-25 18:22:15.889 [INFO][5153] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.cc79414f1f949c31195f500bfa4387b2af89a81c6568c4a1890f72a7f2e4cc2c" host="ip-172-31-30-218" Jun 25 18:22:16.050550 containerd[2137]: 2024-06-25 18:22:15.897 [INFO][5153] ipam.go 372: Looking up existing affinities for host host="ip-172-31-30-218" Jun 25 18:22:16.050550 containerd[2137]: 2024-06-25 18:22:15.907 [INFO][5153] ipam.go 489: Trying affinity for 192.168.87.192/26 host="ip-172-31-30-218" Jun 25 18:22:16.050550 containerd[2137]: 2024-06-25 18:22:15.911 [INFO][5153] ipam.go 155: Attempting to load block cidr=192.168.87.192/26 host="ip-172-31-30-218" Jun 25 18:22:16.050550 containerd[2137]: 2024-06-25 18:22:15.917 [INFO][5153] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.87.192/26 host="ip-172-31-30-218" Jun 25 18:22:16.050550 containerd[2137]: 2024-06-25 18:22:15.919 [INFO][5153] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.87.192/26 handle="k8s-pod-network.cc79414f1f949c31195f500bfa4387b2af89a81c6568c4a1890f72a7f2e4cc2c" host="ip-172-31-30-218" Jun 25 18:22:16.050550 containerd[2137]: 2024-06-25 18:22:15.922 [INFO][5153] ipam.go 1685: Creating new handle: k8s-pod-network.cc79414f1f949c31195f500bfa4387b2af89a81c6568c4a1890f72a7f2e4cc2c Jun 25 18:22:16.050550 containerd[2137]: 2024-06-25 18:22:15.947 [INFO][5153] ipam.go 1203: Writing block in order to claim IPs block=192.168.87.192/26 handle="k8s-pod-network.cc79414f1f949c31195f500bfa4387b2af89a81c6568c4a1890f72a7f2e4cc2c" host="ip-172-31-30-218" Jun 25 18:22:16.050550 containerd[2137]: 2024-06-25 18:22:15.975 [INFO][5153] ipam.go 1216: Successfully claimed IPs: [192.168.87.194/26] block=192.168.87.192/26 handle="k8s-pod-network.cc79414f1f949c31195f500bfa4387b2af89a81c6568c4a1890f72a7f2e4cc2c" host="ip-172-31-30-218" Jun 25 18:22:16.050550 containerd[2137]: 2024-06-25 18:22:15.977 [INFO][5153] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.87.194/26] handle="k8s-pod-network.cc79414f1f949c31195f500bfa4387b2af89a81c6568c4a1890f72a7f2e4cc2c" host="ip-172-31-30-218" Jun 25 18:22:16.050550 containerd[2137]: 2024-06-25 18:22:15.977 [INFO][5153] ipam_plugin.go 373: Released host-wide IPAM lock. 
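Both IPAM walks above resolve against the same block: the node holds an affinity for 192.168.87.192/26 (addresses .192 through .255), the csi-node-driver pod was given 192.168.87.193 and this coredns pod gets 192.168.87.194, each recorded as a /32 on its workload endpoint. The containment arithmetic is easy to confirm (standalone sketch):

package main

import (
	"fmt"
	"net/netip"
)

func main() {
	// Block and addresses copied from the ipam.go entries above.
	block := netip.MustParsePrefix("192.168.87.192/26") // 64 addresses: .192 to .255
	for _, s := range []string{"192.168.87.193", "192.168.87.194"} {
		addr := netip.MustParseAddr(s)
		fmt.Printf("%s in %s: %v\n", addr, block, block.Contains(addr))
	}
	// The workload endpoints carry these as /32s (192.168.87.193/32, .194/32),
	// i.e. single-address routes pointed at the cali* host-side veths.
}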
Jun 25 18:22:16.050550 containerd[2137]: 2024-06-25 18:22:15.977 [INFO][5153] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.87.194/26] IPv6=[] ContainerID="cc79414f1f949c31195f500bfa4387b2af89a81c6568c4a1890f72a7f2e4cc2c" HandleID="k8s-pod-network.cc79414f1f949c31195f500bfa4387b2af89a81c6568c4a1890f72a7f2e4cc2c" Workload="ip--172--31--30--218-k8s-coredns--5dd5756b68--8cgx2-eth0" Jun 25 18:22:16.064447 containerd[2137]: 2024-06-25 18:22:15.996 [INFO][5129] k8s.go 386: Populated endpoint ContainerID="cc79414f1f949c31195f500bfa4387b2af89a81c6568c4a1890f72a7f2e4cc2c" Namespace="kube-system" Pod="coredns-5dd5756b68-8cgx2" WorkloadEndpoint="ip--172--31--30--218-k8s-coredns--5dd5756b68--8cgx2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--218-k8s-coredns--5dd5756b68--8cgx2-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"e3061c83-3e80-4699-9cd5-cd32f59ce77c", ResourceVersion:"813", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 21, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-218", ContainerID:"", Pod:"coredns-5dd5756b68-8cgx2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.87.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibc6b5f31896", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:22:16.064447 containerd[2137]: 2024-06-25 18:22:15.997 [INFO][5129] k8s.go 387: Calico CNI using IPs: [192.168.87.194/32] ContainerID="cc79414f1f949c31195f500bfa4387b2af89a81c6568c4a1890f72a7f2e4cc2c" Namespace="kube-system" Pod="coredns-5dd5756b68-8cgx2" WorkloadEndpoint="ip--172--31--30--218-k8s-coredns--5dd5756b68--8cgx2-eth0" Jun 25 18:22:16.064447 containerd[2137]: 2024-06-25 18:22:15.997 [INFO][5129] dataplane_linux.go 68: Setting the host side veth name to calibc6b5f31896 ContainerID="cc79414f1f949c31195f500bfa4387b2af89a81c6568c4a1890f72a7f2e4cc2c" Namespace="kube-system" Pod="coredns-5dd5756b68-8cgx2" WorkloadEndpoint="ip--172--31--30--218-k8s-coredns--5dd5756b68--8cgx2-eth0" Jun 25 18:22:16.064447 containerd[2137]: 2024-06-25 18:22:16.004 [INFO][5129] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="cc79414f1f949c31195f500bfa4387b2af89a81c6568c4a1890f72a7f2e4cc2c" Namespace="kube-system" Pod="coredns-5dd5756b68-8cgx2" WorkloadEndpoint="ip--172--31--30--218-k8s-coredns--5dd5756b68--8cgx2-eth0" Jun 25 18:22:16.064447 containerd[2137]: 
2024-06-25 18:22:16.005 [INFO][5129] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="cc79414f1f949c31195f500bfa4387b2af89a81c6568c4a1890f72a7f2e4cc2c" Namespace="kube-system" Pod="coredns-5dd5756b68-8cgx2" WorkloadEndpoint="ip--172--31--30--218-k8s-coredns--5dd5756b68--8cgx2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--218-k8s-coredns--5dd5756b68--8cgx2-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"e3061c83-3e80-4699-9cd5-cd32f59ce77c", ResourceVersion:"813", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 21, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-218", ContainerID:"cc79414f1f949c31195f500bfa4387b2af89a81c6568c4a1890f72a7f2e4cc2c", Pod:"coredns-5dd5756b68-8cgx2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.87.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibc6b5f31896", MAC:"52:1b:f0:6e:b4:93", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:22:16.064447 containerd[2137]: 2024-06-25 18:22:16.036 [INFO][5129] k8s.go 500: Wrote updated endpoint to datastore ContainerID="cc79414f1f949c31195f500bfa4387b2af89a81c6568c4a1890f72a7f2e4cc2c" Namespace="kube-system" Pod="coredns-5dd5756b68-8cgx2" WorkloadEndpoint="ip--172--31--30--218-k8s-coredns--5dd5756b68--8cgx2-eth0" Jun 25 18:22:16.105726 systemd-networkd[1695]: cali358c61e966a: Gained IPv6LL Jun 25 18:22:16.162482 containerd[2137]: time="2024-06-25T18:22:16.161887586Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:22:16.169638 containerd[2137]: time="2024-06-25T18:22:16.168846206Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:22:16.169638 containerd[2137]: time="2024-06-25T18:22:16.168913838Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:22:16.169638 containerd[2137]: time="2024-06-25T18:22:16.168944750Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:22:16.367219 sshd[5161]: Accepted publickey for core from 139.178.89.65 port 52080 ssh2: RSA SHA256:PKLEBy2HMUOnZIR0iwjDMXjr5aDYRcW0FQA7rLNVs1M Jun 25 18:22:16.370403 sshd[5161]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:22:16.394300 systemd-logind[2117]: New session 8 of user core. Jun 25 18:22:16.406382 systemd[1]: Started session-8.scope - Session 8 of User core. Jun 25 18:22:16.526755 containerd[2137]: time="2024-06-25T18:22:16.526291023Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-8cgx2,Uid:e3061c83-3e80-4699-9cd5-cd32f59ce77c,Namespace:kube-system,Attempt:1,} returns sandbox id \"cc79414f1f949c31195f500bfa4387b2af89a81c6568c4a1890f72a7f2e4cc2c\"" Jun 25 18:22:16.550565 containerd[2137]: time="2024-06-25T18:22:16.548710131Z" level=info msg="CreateContainer within sandbox \"cc79414f1f949c31195f500bfa4387b2af89a81c6568c4a1890f72a7f2e4cc2c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jun 25 18:22:16.648596 containerd[2137]: time="2024-06-25T18:22:16.647750572Z" level=info msg="CreateContainer within sandbox \"cc79414f1f949c31195f500bfa4387b2af89a81c6568c4a1890f72a7f2e4cc2c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d397db76ae70a33a2fe5b69dd4a0be76d3ea9ea78db93fec6ae430bb63232644\"" Jun 25 18:22:16.659488 containerd[2137]: time="2024-06-25T18:22:16.651762808Z" level=info msg="StartContainer for \"d397db76ae70a33a2fe5b69dd4a0be76d3ea9ea78db93fec6ae430bb63232644\"" Jun 25 18:22:16.662315 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2848245653.mount: Deactivated successfully. Jun 25 18:22:16.915380 sshd[5161]: pam_unix(sshd:session): session closed for user core Jun 25 18:22:16.940560 systemd[1]: sshd@7-172.31.30.218:22-139.178.89.65:52080.service: Deactivated successfully. Jun 25 18:22:16.957065 systemd[1]: session-8.scope: Deactivated successfully. Jun 25 18:22:16.959607 systemd-logind[2117]: Session 8 logged out. Waiting for processes to exit. Jun 25 18:22:16.969217 systemd-logind[2117]: Removed session 8. Jun 25 18:22:16.976541 containerd[2137]: time="2024-06-25T18:22:16.975251490Z" level=info msg="StartContainer for \"d397db76ae70a33a2fe5b69dd4a0be76d3ea9ea78db93fec6ae430bb63232644\" returns successfully" Jun 25 18:22:17.100809 containerd[2137]: time="2024-06-25T18:22:17.098933150Z" level=info msg="StopPodSandbox for \"61b196cafb2c5f7e34521140e01a9d130a5db5d81d749f79597f19e97bf8184e\"" Jun 25 18:22:17.129818 systemd-networkd[1695]: calibc6b5f31896: Gained IPv6LL Jun 25 18:22:17.320818 systemd-networkd[1695]: vxlan.calico: Link UP Jun 25 18:22:17.320834 systemd-networkd[1695]: vxlan.calico: Gained carrier Jun 25 18:22:17.328085 (udev-worker)[4870]: Network interface NamePolicy= disabled on kernel command line. Jun 25 18:22:17.478846 containerd[2137]: 2024-06-25 18:22:17.267 [INFO][5329] k8s.go 608: Cleaning up netns ContainerID="61b196cafb2c5f7e34521140e01a9d130a5db5d81d749f79597f19e97bf8184e" Jun 25 18:22:17.478846 containerd[2137]: 2024-06-25 18:22:17.268 [INFO][5329] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="61b196cafb2c5f7e34521140e01a9d130a5db5d81d749f79597f19e97bf8184e" iface="eth0" netns="/var/run/netns/cni-c13f4f03-6b1e-3cce-eeac-cec5b3a08bbe" Jun 25 18:22:17.478846 containerd[2137]: 2024-06-25 18:22:17.270 [INFO][5329] dataplane_linux.go 541: Entered netns, deleting veth. 
ContainerID="61b196cafb2c5f7e34521140e01a9d130a5db5d81d749f79597f19e97bf8184e" iface="eth0" netns="/var/run/netns/cni-c13f4f03-6b1e-3cce-eeac-cec5b3a08bbe" Jun 25 18:22:17.478846 containerd[2137]: 2024-06-25 18:22:17.271 [INFO][5329] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="61b196cafb2c5f7e34521140e01a9d130a5db5d81d749f79597f19e97bf8184e" iface="eth0" netns="/var/run/netns/cni-c13f4f03-6b1e-3cce-eeac-cec5b3a08bbe" Jun 25 18:22:17.478846 containerd[2137]: 2024-06-25 18:22:17.271 [INFO][5329] k8s.go 615: Releasing IP address(es) ContainerID="61b196cafb2c5f7e34521140e01a9d130a5db5d81d749f79597f19e97bf8184e" Jun 25 18:22:17.478846 containerd[2137]: 2024-06-25 18:22:17.271 [INFO][5329] utils.go 188: Calico CNI releasing IP address ContainerID="61b196cafb2c5f7e34521140e01a9d130a5db5d81d749f79597f19e97bf8184e" Jun 25 18:22:17.478846 containerd[2137]: 2024-06-25 18:22:17.379 [INFO][5337] ipam_plugin.go 411: Releasing address using handleID ContainerID="61b196cafb2c5f7e34521140e01a9d130a5db5d81d749f79597f19e97bf8184e" HandleID="k8s-pod-network.61b196cafb2c5f7e34521140e01a9d130a5db5d81d749f79597f19e97bf8184e" Workload="ip--172--31--30--218-k8s-coredns--5dd5756b68--nr5tn-eth0" Jun 25 18:22:17.478846 containerd[2137]: 2024-06-25 18:22:17.379 [INFO][5337] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:22:17.478846 containerd[2137]: 2024-06-25 18:22:17.379 [INFO][5337] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 18:22:17.478846 containerd[2137]: 2024-06-25 18:22:17.441 [WARNING][5337] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="61b196cafb2c5f7e34521140e01a9d130a5db5d81d749f79597f19e97bf8184e" HandleID="k8s-pod-network.61b196cafb2c5f7e34521140e01a9d130a5db5d81d749f79597f19e97bf8184e" Workload="ip--172--31--30--218-k8s-coredns--5dd5756b68--nr5tn-eth0" Jun 25 18:22:17.478846 containerd[2137]: 2024-06-25 18:22:17.443 [INFO][5337] ipam_plugin.go 439: Releasing address using workloadID ContainerID="61b196cafb2c5f7e34521140e01a9d130a5db5d81d749f79597f19e97bf8184e" HandleID="k8s-pod-network.61b196cafb2c5f7e34521140e01a9d130a5db5d81d749f79597f19e97bf8184e" Workload="ip--172--31--30--218-k8s-coredns--5dd5756b68--nr5tn-eth0" Jun 25 18:22:17.478846 containerd[2137]: 2024-06-25 18:22:17.456 [INFO][5337] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 18:22:17.478846 containerd[2137]: 2024-06-25 18:22:17.470 [INFO][5329] k8s.go 621: Teardown processing complete. ContainerID="61b196cafb2c5f7e34521140e01a9d130a5db5d81d749f79597f19e97bf8184e" Jun 25 18:22:17.482872 containerd[2137]: time="2024-06-25T18:22:17.480299728Z" level=info msg="TearDown network for sandbox \"61b196cafb2c5f7e34521140e01a9d130a5db5d81d749f79597f19e97bf8184e\" successfully" Jun 25 18:22:17.482872 containerd[2137]: time="2024-06-25T18:22:17.480348148Z" level=info msg="StopPodSandbox for \"61b196cafb2c5f7e34521140e01a9d130a5db5d81d749f79597f19e97bf8184e\" returns successfully" Jun 25 18:22:17.489393 systemd[1]: run-netns-cni\x2dc13f4f03\x2d6b1e\x2d3cce\x2deeac\x2dcec5b3a08bbe.mount: Deactivated successfully. 
Jun 25 18:22:17.499906 containerd[2137]: time="2024-06-25T18:22:17.496806724Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-nr5tn,Uid:01707278-3a25-4665-b673-aed626240ae3,Namespace:kube-system,Attempt:1,}" Jun 25 18:22:17.667336 kubelet[3586]: I0625 18:22:17.664606 3586 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-8cgx2" podStartSLOduration=37.663779153 podCreationTimestamp="2024-06-25 18:21:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 18:22:17.649060613 +0000 UTC m=+50.825686669" watchObservedRunningTime="2024-06-25 18:22:17.663779153 +0000 UTC m=+50.840405233" Jun 25 18:22:17.932882 (udev-worker)[5364]: Network interface NamePolicy= disabled on kernel command line. Jun 25 18:22:17.935649 systemd-networkd[1695]: cali28983b37e73: Link UP Jun 25 18:22:17.948106 systemd-networkd[1695]: cali28983b37e73: Gained carrier Jun 25 18:22:17.979546 containerd[2137]: 2024-06-25 18:22:17.690 [INFO][5379] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--30--218-k8s-coredns--5dd5756b68--nr5tn-eth0 coredns-5dd5756b68- kube-system 01707278-3a25-4665-b673-aed626240ae3 851 0 2024-06-25 18:21:40 +0000 UTC map[k8s-app:kube-dns pod-template-hash:5dd5756b68 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-30-218 coredns-5dd5756b68-nr5tn eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali28983b37e73 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="dfc63513a96eeb4702d7376c803f3644767df425c38b620e8b5cb0197b0d3128" Namespace="kube-system" Pod="coredns-5dd5756b68-nr5tn" WorkloadEndpoint="ip--172--31--30--218-k8s-coredns--5dd5756b68--nr5tn-" Jun 25 18:22:17.979546 containerd[2137]: 2024-06-25 18:22:17.690 [INFO][5379] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="dfc63513a96eeb4702d7376c803f3644767df425c38b620e8b5cb0197b0d3128" Namespace="kube-system" Pod="coredns-5dd5756b68-nr5tn" WorkloadEndpoint="ip--172--31--30--218-k8s-coredns--5dd5756b68--nr5tn-eth0" Jun 25 18:22:17.979546 containerd[2137]: 2024-06-25 18:22:17.856 [INFO][5409] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="dfc63513a96eeb4702d7376c803f3644767df425c38b620e8b5cb0197b0d3128" HandleID="k8s-pod-network.dfc63513a96eeb4702d7376c803f3644767df425c38b620e8b5cb0197b0d3128" Workload="ip--172--31--30--218-k8s-coredns--5dd5756b68--nr5tn-eth0" Jun 25 18:22:17.979546 containerd[2137]: 2024-06-25 18:22:17.881 [INFO][5409] ipam_plugin.go 264: Auto assigning IP ContainerID="dfc63513a96eeb4702d7376c803f3644767df425c38b620e8b5cb0197b0d3128" HandleID="k8s-pod-network.dfc63513a96eeb4702d7376c803f3644767df425c38b620e8b5cb0197b0d3128" Workload="ip--172--31--30--218-k8s-coredns--5dd5756b68--nr5tn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003bad30), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-30-218", "pod":"coredns-5dd5756b68-nr5tn", "timestamp":"2024-06-25 18:22:17.856368822 +0000 UTC"}, Hostname:"ip-172-31-30-218", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 25 18:22:17.979546 containerd[2137]: 2024-06-25 18:22:17.882 
[INFO][5409] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:22:17.979546 containerd[2137]: 2024-06-25 18:22:17.882 [INFO][5409] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 18:22:17.979546 containerd[2137]: 2024-06-25 18:22:17.882 [INFO][5409] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-30-218' Jun 25 18:22:17.979546 containerd[2137]: 2024-06-25 18:22:17.885 [INFO][5409] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.dfc63513a96eeb4702d7376c803f3644767df425c38b620e8b5cb0197b0d3128" host="ip-172-31-30-218" Jun 25 18:22:17.979546 containerd[2137]: 2024-06-25 18:22:17.892 [INFO][5409] ipam.go 372: Looking up existing affinities for host host="ip-172-31-30-218" Jun 25 18:22:17.979546 containerd[2137]: 2024-06-25 18:22:17.900 [INFO][5409] ipam.go 489: Trying affinity for 192.168.87.192/26 host="ip-172-31-30-218" Jun 25 18:22:17.979546 containerd[2137]: 2024-06-25 18:22:17.903 [INFO][5409] ipam.go 155: Attempting to load block cidr=192.168.87.192/26 host="ip-172-31-30-218" Jun 25 18:22:17.979546 containerd[2137]: 2024-06-25 18:22:17.908 [INFO][5409] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.87.192/26 host="ip-172-31-30-218" Jun 25 18:22:17.979546 containerd[2137]: 2024-06-25 18:22:17.908 [INFO][5409] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.87.192/26 handle="k8s-pod-network.dfc63513a96eeb4702d7376c803f3644767df425c38b620e8b5cb0197b0d3128" host="ip-172-31-30-218" Jun 25 18:22:17.979546 containerd[2137]: 2024-06-25 18:22:17.911 [INFO][5409] ipam.go 1685: Creating new handle: k8s-pod-network.dfc63513a96eeb4702d7376c803f3644767df425c38b620e8b5cb0197b0d3128 Jun 25 18:22:17.979546 containerd[2137]: 2024-06-25 18:22:17.917 [INFO][5409] ipam.go 1203: Writing block in order to claim IPs block=192.168.87.192/26 handle="k8s-pod-network.dfc63513a96eeb4702d7376c803f3644767df425c38b620e8b5cb0197b0d3128" host="ip-172-31-30-218" Jun 25 18:22:17.979546 containerd[2137]: 2024-06-25 18:22:17.926 [INFO][5409] ipam.go 1216: Successfully claimed IPs: [192.168.87.195/26] block=192.168.87.192/26 handle="k8s-pod-network.dfc63513a96eeb4702d7376c803f3644767df425c38b620e8b5cb0197b0d3128" host="ip-172-31-30-218" Jun 25 18:22:17.979546 containerd[2137]: 2024-06-25 18:22:17.926 [INFO][5409] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.87.195/26] handle="k8s-pod-network.dfc63513a96eeb4702d7376c803f3644767df425c38b620e8b5cb0197b0d3128" host="ip-172-31-30-218" Jun 25 18:22:17.979546 containerd[2137]: 2024-06-25 18:22:17.926 [INFO][5409] ipam_plugin.go 373: Released host-wide IPAM lock. 
Jun 25 18:22:17.979546 containerd[2137]: 2024-06-25 18:22:17.926 [INFO][5409] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.87.195/26] IPv6=[] ContainerID="dfc63513a96eeb4702d7376c803f3644767df425c38b620e8b5cb0197b0d3128" HandleID="k8s-pod-network.dfc63513a96eeb4702d7376c803f3644767df425c38b620e8b5cb0197b0d3128" Workload="ip--172--31--30--218-k8s-coredns--5dd5756b68--nr5tn-eth0" Jun 25 18:22:17.982009 containerd[2137]: 2024-06-25 18:22:17.930 [INFO][5379] k8s.go 386: Populated endpoint ContainerID="dfc63513a96eeb4702d7376c803f3644767df425c38b620e8b5cb0197b0d3128" Namespace="kube-system" Pod="coredns-5dd5756b68-nr5tn" WorkloadEndpoint="ip--172--31--30--218-k8s-coredns--5dd5756b68--nr5tn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--218-k8s-coredns--5dd5756b68--nr5tn-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"01707278-3a25-4665-b673-aed626240ae3", ResourceVersion:"851", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 21, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-218", ContainerID:"", Pod:"coredns-5dd5756b68-nr5tn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.87.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali28983b37e73", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:22:17.982009 containerd[2137]: 2024-06-25 18:22:17.930 [INFO][5379] k8s.go 387: Calico CNI using IPs: [192.168.87.195/32] ContainerID="dfc63513a96eeb4702d7376c803f3644767df425c38b620e8b5cb0197b0d3128" Namespace="kube-system" Pod="coredns-5dd5756b68-nr5tn" WorkloadEndpoint="ip--172--31--30--218-k8s-coredns--5dd5756b68--nr5tn-eth0" Jun 25 18:22:17.982009 containerd[2137]: 2024-06-25 18:22:17.930 [INFO][5379] dataplane_linux.go 68: Setting the host side veth name to cali28983b37e73 ContainerID="dfc63513a96eeb4702d7376c803f3644767df425c38b620e8b5cb0197b0d3128" Namespace="kube-system" Pod="coredns-5dd5756b68-nr5tn" WorkloadEndpoint="ip--172--31--30--218-k8s-coredns--5dd5756b68--nr5tn-eth0" Jun 25 18:22:17.982009 containerd[2137]: 2024-06-25 18:22:17.948 [INFO][5379] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="dfc63513a96eeb4702d7376c803f3644767df425c38b620e8b5cb0197b0d3128" Namespace="kube-system" Pod="coredns-5dd5756b68-nr5tn" WorkloadEndpoint="ip--172--31--30--218-k8s-coredns--5dd5756b68--nr5tn-eth0" Jun 25 18:22:17.982009 containerd[2137]: 
2024-06-25 18:22:17.950 [INFO][5379] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="dfc63513a96eeb4702d7376c803f3644767df425c38b620e8b5cb0197b0d3128" Namespace="kube-system" Pod="coredns-5dd5756b68-nr5tn" WorkloadEndpoint="ip--172--31--30--218-k8s-coredns--5dd5756b68--nr5tn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--218-k8s-coredns--5dd5756b68--nr5tn-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"01707278-3a25-4665-b673-aed626240ae3", ResourceVersion:"851", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 21, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-218", ContainerID:"dfc63513a96eeb4702d7376c803f3644767df425c38b620e8b5cb0197b0d3128", Pod:"coredns-5dd5756b68-nr5tn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.87.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali28983b37e73", MAC:"82:09:56:bf:4f:ab", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:22:17.982009 containerd[2137]: 2024-06-25 18:22:17.976 [INFO][5379] k8s.go 500: Wrote updated endpoint to datastore ContainerID="dfc63513a96eeb4702d7376c803f3644767df425c38b620e8b5cb0197b0d3128" Namespace="kube-system" Pod="coredns-5dd5756b68-nr5tn" WorkloadEndpoint="ip--172--31--30--218-k8s-coredns--5dd5756b68--nr5tn-eth0" Jun 25 18:22:18.039956 containerd[2137]: time="2024-06-25T18:22:18.039367551Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:22:18.039956 containerd[2137]: time="2024-06-25T18:22:18.039565443Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:22:18.039956 containerd[2137]: time="2024-06-25T18:22:18.039639423Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:22:18.039956 containerd[2137]: time="2024-06-25T18:22:18.039687327Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:22:18.172228 containerd[2137]: time="2024-06-25T18:22:18.172165072Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-nr5tn,Uid:01707278-3a25-4665-b673-aed626240ae3,Namespace:kube-system,Attempt:1,} returns sandbox id \"dfc63513a96eeb4702d7376c803f3644767df425c38b620e8b5cb0197b0d3128\"" Jun 25 18:22:18.200025 containerd[2137]: time="2024-06-25T18:22:18.199594216Z" level=info msg="CreateContainer within sandbox \"dfc63513a96eeb4702d7376c803f3644767df425c38b620e8b5cb0197b0d3128\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jun 25 18:22:18.230512 containerd[2137]: time="2024-06-25T18:22:18.228855508Z" level=info msg="CreateContainer within sandbox \"dfc63513a96eeb4702d7376c803f3644767df425c38b620e8b5cb0197b0d3128\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ce0db3fc76d6b26f63ac81b79304d3836f855220fa04f0f62c51a60c3d73a983\"" Jun 25 18:22:18.234304 containerd[2137]: time="2024-06-25T18:22:18.232673356Z" level=info msg="StartContainer for \"ce0db3fc76d6b26f63ac81b79304d3836f855220fa04f0f62c51a60c3d73a983\"" Jun 25 18:22:18.358938 containerd[2137]: time="2024-06-25T18:22:18.358843588Z" level=info msg="StartContainer for \"ce0db3fc76d6b26f63ac81b79304d3836f855220fa04f0f62c51a60c3d73a983\" returns successfully" Jun 25 18:22:18.650956 kubelet[3586]: I0625 18:22:18.650887 3586 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-nr5tn" podStartSLOduration=38.649626702 podCreationTimestamp="2024-06-25 18:21:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 18:22:18.648682818 +0000 UTC m=+51.825308898" watchObservedRunningTime="2024-06-25 18:22:18.649626702 +0000 UTC m=+51.826253022" Jun 25 18:22:19.098619 containerd[2137]: time="2024-06-25T18:22:19.098383696Z" level=info msg="StopPodSandbox for \"0dfe2a83a1c9334730cbe15ee849917fec1ec2930f396b0c0dad552385314fc0\"" Jun 25 18:22:19.241624 systemd-networkd[1695]: vxlan.calico: Gained IPv6LL Jun 25 18:22:19.261559 containerd[2137]: 2024-06-25 18:22:19.199 [INFO][5535] k8s.go 608: Cleaning up netns ContainerID="0dfe2a83a1c9334730cbe15ee849917fec1ec2930f396b0c0dad552385314fc0" Jun 25 18:22:19.261559 containerd[2137]: 2024-06-25 18:22:19.199 [INFO][5535] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="0dfe2a83a1c9334730cbe15ee849917fec1ec2930f396b0c0dad552385314fc0" iface="eth0" netns="/var/run/netns/cni-0d16f935-6938-d0e8-4761-bda225d69a3c" Jun 25 18:22:19.261559 containerd[2137]: 2024-06-25 18:22:19.199 [INFO][5535] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="0dfe2a83a1c9334730cbe15ee849917fec1ec2930f396b0c0dad552385314fc0" iface="eth0" netns="/var/run/netns/cni-0d16f935-6938-d0e8-4761-bda225d69a3c" Jun 25 18:22:19.261559 containerd[2137]: 2024-06-25 18:22:19.200 [INFO][5535] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="0dfe2a83a1c9334730cbe15ee849917fec1ec2930f396b0c0dad552385314fc0" iface="eth0" netns="/var/run/netns/cni-0d16f935-6938-d0e8-4761-bda225d69a3c" Jun 25 18:22:19.261559 containerd[2137]: 2024-06-25 18:22:19.200 [INFO][5535] k8s.go 615: Releasing IP address(es) ContainerID="0dfe2a83a1c9334730cbe15ee849917fec1ec2930f396b0c0dad552385314fc0" Jun 25 18:22:19.261559 containerd[2137]: 2024-06-25 18:22:19.200 [INFO][5535] utils.go 188: Calico CNI releasing IP address ContainerID="0dfe2a83a1c9334730cbe15ee849917fec1ec2930f396b0c0dad552385314fc0" Jun 25 18:22:19.261559 containerd[2137]: 2024-06-25 18:22:19.236 [INFO][5541] ipam_plugin.go 411: Releasing address using handleID ContainerID="0dfe2a83a1c9334730cbe15ee849917fec1ec2930f396b0c0dad552385314fc0" HandleID="k8s-pod-network.0dfe2a83a1c9334730cbe15ee849917fec1ec2930f396b0c0dad552385314fc0" Workload="ip--172--31--30--218-k8s-calico--kube--controllers--7f944cc9c7--m2v8x-eth0" Jun 25 18:22:19.261559 containerd[2137]: 2024-06-25 18:22:19.237 [INFO][5541] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:22:19.261559 containerd[2137]: 2024-06-25 18:22:19.237 [INFO][5541] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 18:22:19.261559 containerd[2137]: 2024-06-25 18:22:19.253 [WARNING][5541] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="0dfe2a83a1c9334730cbe15ee849917fec1ec2930f396b0c0dad552385314fc0" HandleID="k8s-pod-network.0dfe2a83a1c9334730cbe15ee849917fec1ec2930f396b0c0dad552385314fc0" Workload="ip--172--31--30--218-k8s-calico--kube--controllers--7f944cc9c7--m2v8x-eth0" Jun 25 18:22:19.261559 containerd[2137]: 2024-06-25 18:22:19.253 [INFO][5541] ipam_plugin.go 439: Releasing address using workloadID ContainerID="0dfe2a83a1c9334730cbe15ee849917fec1ec2930f396b0c0dad552385314fc0" HandleID="k8s-pod-network.0dfe2a83a1c9334730cbe15ee849917fec1ec2930f396b0c0dad552385314fc0" Workload="ip--172--31--30--218-k8s-calico--kube--controllers--7f944cc9c7--m2v8x-eth0" Jun 25 18:22:19.261559 containerd[2137]: 2024-06-25 18:22:19.255 [INFO][5541] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 18:22:19.261559 containerd[2137]: 2024-06-25 18:22:19.258 [INFO][5535] k8s.go 621: Teardown processing complete. ContainerID="0dfe2a83a1c9334730cbe15ee849917fec1ec2930f396b0c0dad552385314fc0" Jun 25 18:22:19.265934 containerd[2137]: time="2024-06-25T18:22:19.263560109Z" level=info msg="TearDown network for sandbox \"0dfe2a83a1c9334730cbe15ee849917fec1ec2930f396b0c0dad552385314fc0\" successfully" Jun 25 18:22:19.265934 containerd[2137]: time="2024-06-25T18:22:19.263606513Z" level=info msg="StopPodSandbox for \"0dfe2a83a1c9334730cbe15ee849917fec1ec2930f396b0c0dad552385314fc0\" returns successfully" Jun 25 18:22:19.265934 containerd[2137]: time="2024-06-25T18:22:19.264839249Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7f944cc9c7-m2v8x,Uid:495cd0df-7e05-4c75-b413-8145733a3fc6,Namespace:calico-system,Attempt:1,}" Jun 25 18:22:19.272590 systemd[1]: run-netns-cni\x2d0d16f935\x2d6938\x2dd0e8\x2d4761\x2dbda225d69a3c.mount: Deactivated successfully. 
Jun 25 18:22:19.559205 systemd-networkd[1695]: cali6f3a099e389: Link UP Jun 25 18:22:19.561880 systemd-networkd[1695]: cali6f3a099e389: Gained carrier Jun 25 18:22:19.602186 containerd[2137]: 2024-06-25 18:22:19.374 [INFO][5547] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--30--218-k8s-calico--kube--controllers--7f944cc9c7--m2v8x-eth0 calico-kube-controllers-7f944cc9c7- calico-system 495cd0df-7e05-4c75-b413-8145733a3fc6 882 0 2024-06-25 18:21:51 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7f944cc9c7 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-30-218 calico-kube-controllers-7f944cc9c7-m2v8x eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali6f3a099e389 [] []}} ContainerID="9cd795d38c7e3f12cd2706b70ce71ea1c397abbd89313e5f8bb6b6811684f370" Namespace="calico-system" Pod="calico-kube-controllers-7f944cc9c7-m2v8x" WorkloadEndpoint="ip--172--31--30--218-k8s-calico--kube--controllers--7f944cc9c7--m2v8x-" Jun 25 18:22:19.602186 containerd[2137]: 2024-06-25 18:22:19.374 [INFO][5547] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="9cd795d38c7e3f12cd2706b70ce71ea1c397abbd89313e5f8bb6b6811684f370" Namespace="calico-system" Pod="calico-kube-controllers-7f944cc9c7-m2v8x" WorkloadEndpoint="ip--172--31--30--218-k8s-calico--kube--controllers--7f944cc9c7--m2v8x-eth0" Jun 25 18:22:19.602186 containerd[2137]: 2024-06-25 18:22:19.463 [INFO][5558] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9cd795d38c7e3f12cd2706b70ce71ea1c397abbd89313e5f8bb6b6811684f370" HandleID="k8s-pod-network.9cd795d38c7e3f12cd2706b70ce71ea1c397abbd89313e5f8bb6b6811684f370" Workload="ip--172--31--30--218-k8s-calico--kube--controllers--7f944cc9c7--m2v8x-eth0" Jun 25 18:22:19.602186 containerd[2137]: 2024-06-25 18:22:19.487 [INFO][5558] ipam_plugin.go 264: Auto assigning IP ContainerID="9cd795d38c7e3f12cd2706b70ce71ea1c397abbd89313e5f8bb6b6811684f370" HandleID="k8s-pod-network.9cd795d38c7e3f12cd2706b70ce71ea1c397abbd89313e5f8bb6b6811684f370" Workload="ip--172--31--30--218-k8s-calico--kube--controllers--7f944cc9c7--m2v8x-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000116940), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-30-218", "pod":"calico-kube-controllers-7f944cc9c7-m2v8x", "timestamp":"2024-06-25 18:22:19.463789254 +0000 UTC"}, Hostname:"ip-172-31-30-218", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 25 18:22:19.602186 containerd[2137]: 2024-06-25 18:22:19.487 [INFO][5558] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:22:19.602186 containerd[2137]: 2024-06-25 18:22:19.487 [INFO][5558] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jun 25 18:22:19.602186 containerd[2137]: 2024-06-25 18:22:19.489 [INFO][5558] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-30-218' Jun 25 18:22:19.602186 containerd[2137]: 2024-06-25 18:22:19.493 [INFO][5558] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.9cd795d38c7e3f12cd2706b70ce71ea1c397abbd89313e5f8bb6b6811684f370" host="ip-172-31-30-218" Jun 25 18:22:19.602186 containerd[2137]: 2024-06-25 18:22:19.501 [INFO][5558] ipam.go 372: Looking up existing affinities for host host="ip-172-31-30-218" Jun 25 18:22:19.602186 containerd[2137]: 2024-06-25 18:22:19.509 [INFO][5558] ipam.go 489: Trying affinity for 192.168.87.192/26 host="ip-172-31-30-218" Jun 25 18:22:19.602186 containerd[2137]: 2024-06-25 18:22:19.512 [INFO][5558] ipam.go 155: Attempting to load block cidr=192.168.87.192/26 host="ip-172-31-30-218" Jun 25 18:22:19.602186 containerd[2137]: 2024-06-25 18:22:19.516 [INFO][5558] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.87.192/26 host="ip-172-31-30-218" Jun 25 18:22:19.602186 containerd[2137]: 2024-06-25 18:22:19.517 [INFO][5558] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.87.192/26 handle="k8s-pod-network.9cd795d38c7e3f12cd2706b70ce71ea1c397abbd89313e5f8bb6b6811684f370" host="ip-172-31-30-218" Jun 25 18:22:19.602186 containerd[2137]: 2024-06-25 18:22:19.520 [INFO][5558] ipam.go 1685: Creating new handle: k8s-pod-network.9cd795d38c7e3f12cd2706b70ce71ea1c397abbd89313e5f8bb6b6811684f370 Jun 25 18:22:19.602186 containerd[2137]: 2024-06-25 18:22:19.531 [INFO][5558] ipam.go 1203: Writing block in order to claim IPs block=192.168.87.192/26 handle="k8s-pod-network.9cd795d38c7e3f12cd2706b70ce71ea1c397abbd89313e5f8bb6b6811684f370" host="ip-172-31-30-218" Jun 25 18:22:19.602186 containerd[2137]: 2024-06-25 18:22:19.547 [INFO][5558] ipam.go 1216: Successfully claimed IPs: [192.168.87.196/26] block=192.168.87.192/26 handle="k8s-pod-network.9cd795d38c7e3f12cd2706b70ce71ea1c397abbd89313e5f8bb6b6811684f370" host="ip-172-31-30-218" Jun 25 18:22:19.602186 containerd[2137]: 2024-06-25 18:22:19.547 [INFO][5558] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.87.196/26] handle="k8s-pod-network.9cd795d38c7e3f12cd2706b70ce71ea1c397abbd89313e5f8bb6b6811684f370" host="ip-172-31-30-218" Jun 25 18:22:19.602186 containerd[2137]: 2024-06-25 18:22:19.547 [INFO][5558] ipam_plugin.go 373: Released host-wide IPAM lock. 
Jun 25 18:22:19.602186 containerd[2137]: 2024-06-25 18:22:19.547 [INFO][5558] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.87.196/26] IPv6=[] ContainerID="9cd795d38c7e3f12cd2706b70ce71ea1c397abbd89313e5f8bb6b6811684f370" HandleID="k8s-pod-network.9cd795d38c7e3f12cd2706b70ce71ea1c397abbd89313e5f8bb6b6811684f370" Workload="ip--172--31--30--218-k8s-calico--kube--controllers--7f944cc9c7--m2v8x-eth0" Jun 25 18:22:19.610823 containerd[2137]: 2024-06-25 18:22:19.551 [INFO][5547] k8s.go 386: Populated endpoint ContainerID="9cd795d38c7e3f12cd2706b70ce71ea1c397abbd89313e5f8bb6b6811684f370" Namespace="calico-system" Pod="calico-kube-controllers-7f944cc9c7-m2v8x" WorkloadEndpoint="ip--172--31--30--218-k8s-calico--kube--controllers--7f944cc9c7--m2v8x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--218-k8s-calico--kube--controllers--7f944cc9c7--m2v8x-eth0", GenerateName:"calico-kube-controllers-7f944cc9c7-", Namespace:"calico-system", SelfLink:"", UID:"495cd0df-7e05-4c75-b413-8145733a3fc6", ResourceVersion:"882", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 21, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7f944cc9c7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-218", ContainerID:"", Pod:"calico-kube-controllers-7f944cc9c7-m2v8x", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.87.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali6f3a099e389", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:22:19.610823 containerd[2137]: 2024-06-25 18:22:19.551 [INFO][5547] k8s.go 387: Calico CNI using IPs: [192.168.87.196/32] ContainerID="9cd795d38c7e3f12cd2706b70ce71ea1c397abbd89313e5f8bb6b6811684f370" Namespace="calico-system" Pod="calico-kube-controllers-7f944cc9c7-m2v8x" WorkloadEndpoint="ip--172--31--30--218-k8s-calico--kube--controllers--7f944cc9c7--m2v8x-eth0" Jun 25 18:22:19.610823 containerd[2137]: 2024-06-25 18:22:19.551 [INFO][5547] dataplane_linux.go 68: Setting the host side veth name to cali6f3a099e389 ContainerID="9cd795d38c7e3f12cd2706b70ce71ea1c397abbd89313e5f8bb6b6811684f370" Namespace="calico-system" Pod="calico-kube-controllers-7f944cc9c7-m2v8x" WorkloadEndpoint="ip--172--31--30--218-k8s-calico--kube--controllers--7f944cc9c7--m2v8x-eth0" Jun 25 18:22:19.610823 containerd[2137]: 2024-06-25 18:22:19.564 [INFO][5547] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="9cd795d38c7e3f12cd2706b70ce71ea1c397abbd89313e5f8bb6b6811684f370" Namespace="calico-system" Pod="calico-kube-controllers-7f944cc9c7-m2v8x" WorkloadEndpoint="ip--172--31--30--218-k8s-calico--kube--controllers--7f944cc9c7--m2v8x-eth0" Jun 25 18:22:19.610823 containerd[2137]: 2024-06-25 18:22:19.566 [INFO][5547] k8s.go 414: Added Mac, interface name, and active 
container ID to endpoint ContainerID="9cd795d38c7e3f12cd2706b70ce71ea1c397abbd89313e5f8bb6b6811684f370" Namespace="calico-system" Pod="calico-kube-controllers-7f944cc9c7-m2v8x" WorkloadEndpoint="ip--172--31--30--218-k8s-calico--kube--controllers--7f944cc9c7--m2v8x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--218-k8s-calico--kube--controllers--7f944cc9c7--m2v8x-eth0", GenerateName:"calico-kube-controllers-7f944cc9c7-", Namespace:"calico-system", SelfLink:"", UID:"495cd0df-7e05-4c75-b413-8145733a3fc6", ResourceVersion:"882", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 21, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7f944cc9c7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-218", ContainerID:"9cd795d38c7e3f12cd2706b70ce71ea1c397abbd89313e5f8bb6b6811684f370", Pod:"calico-kube-controllers-7f944cc9c7-m2v8x", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.87.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali6f3a099e389", MAC:"0e:8b:f7:ef:d1:10", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:22:19.610823 containerd[2137]: 2024-06-25 18:22:19.595 [INFO][5547] k8s.go 500: Wrote updated endpoint to datastore ContainerID="9cd795d38c7e3f12cd2706b70ce71ea1c397abbd89313e5f8bb6b6811684f370" Namespace="calico-system" Pod="calico-kube-controllers-7f944cc9c7-m2v8x" WorkloadEndpoint="ip--172--31--30--218-k8s-calico--kube--controllers--7f944cc9c7--m2v8x-eth0" Jun 25 18:22:19.625959 systemd-networkd[1695]: cali28983b37e73: Gained IPv6LL Jun 25 18:22:19.671344 containerd[2137]: time="2024-06-25T18:22:19.671159731Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:22:19.672477 containerd[2137]: time="2024-06-25T18:22:19.671415367Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:22:19.672477 containerd[2137]: time="2024-06-25T18:22:19.672001915Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:22:19.672477 containerd[2137]: time="2024-06-25T18:22:19.672121927Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:22:19.791439 containerd[2137]: time="2024-06-25T18:22:19.791380064Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7f944cc9c7-m2v8x,Uid:495cd0df-7e05-4c75-b413-8145733a3fc6,Namespace:calico-system,Attempt:1,} returns sandbox id \"9cd795d38c7e3f12cd2706b70ce71ea1c397abbd89313e5f8bb6b6811684f370\"" Jun 25 18:22:20.715564 systemd-networkd[1695]: cali6f3a099e389: Gained IPv6LL Jun 25 18:22:21.952199 systemd[1]: Started sshd@8-172.31.30.218:22-139.178.89.65:43294.service - OpenSSH per-connection server daemon (139.178.89.65:43294). Jun 25 18:22:22.149751 sshd[5619]: Accepted publickey for core from 139.178.89.65 port 43294 ssh2: RSA SHA256:PKLEBy2HMUOnZIR0iwjDMXjr5aDYRcW0FQA7rLNVs1M Jun 25 18:22:22.151721 sshd[5619]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:22:22.167417 systemd-logind[2117]: New session 9 of user core. Jun 25 18:22:22.176493 systemd[1]: Started session-9.scope - Session 9 of User core. Jun 25 18:22:22.527099 sshd[5619]: pam_unix(sshd:session): session closed for user core Jun 25 18:22:22.535909 systemd[1]: sshd@8-172.31.30.218:22-139.178.89.65:43294.service: Deactivated successfully. Jun 25 18:22:22.543940 systemd-logind[2117]: Session 9 logged out. Waiting for processes to exit. Jun 25 18:22:22.545312 systemd[1]: session-9.scope: Deactivated successfully. Jun 25 18:22:22.550527 systemd-logind[2117]: Removed session 9. Jun 25 18:22:23.549955 ntpd[2097]: Listen normally on 6 vxlan.calico 192.168.87.192:123 Jun 25 18:22:23.550254 ntpd[2097]: Listen normally on 7 cali358c61e966a [fe80::ecee:eeff:feee:eeee%4]:123 Jun 25 18:22:23.550674 ntpd[2097]: 25 Jun 18:22:23 ntpd[2097]: Listen normally on 6 vxlan.calico 192.168.87.192:123 Jun 25 18:22:23.550674 ntpd[2097]: 25 Jun 18:22:23 ntpd[2097]: Listen normally on 7 cali358c61e966a [fe80::ecee:eeff:feee:eeee%4]:123 Jun 25 18:22:23.550674 ntpd[2097]: 25 Jun 18:22:23 ntpd[2097]: Listen normally on 8 calibc6b5f31896 [fe80::ecee:eeff:feee:eeee%5]:123 Jun 25 18:22:23.550674 ntpd[2097]: 25 Jun 18:22:23 ntpd[2097]: Listen normally on 9 vxlan.calico [fe80::6426:8fff:fe9e:7d8f%6]:123 Jun 25 18:22:23.550674 ntpd[2097]: 25 Jun 18:22:23 ntpd[2097]: Listen normally on 10 cali28983b37e73 [fe80::ecee:eeff:feee:eeee%9]:123 Jun 25 18:22:23.550674 ntpd[2097]: 25 Jun 18:22:23 ntpd[2097]: Listen normally on 11 cali6f3a099e389 [fe80::ecee:eeff:feee:eeee%10]:123 Jun 25 18:22:23.550381 ntpd[2097]: Listen normally on 8 calibc6b5f31896 [fe80::ecee:eeff:feee:eeee%5]:123 Jun 25 18:22:23.550482 ntpd[2097]: Listen normally on 9 vxlan.calico [fe80::6426:8fff:fe9e:7d8f%6]:123 Jun 25 18:22:23.550566 ntpd[2097]: Listen normally on 10 cali28983b37e73 [fe80::ecee:eeff:feee:eeee%9]:123 Jun 25 18:22:23.550645 ntpd[2097]: Listen normally on 11 cali6f3a099e389 [fe80::ecee:eeff:feee:eeee%10]:123 Jun 25 18:22:25.852532 containerd[2137]: time="2024-06-25T18:22:25.852285374Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:22:25.854945 containerd[2137]: time="2024-06-25T18:22:25.853006766Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.28.0: active requests=0, bytes read=7210579" Jun 25 18:22:25.857635 containerd[2137]: time="2024-06-25T18:22:25.857433698Z" level=info msg="ImageCreate event name:\"sha256:94ad0dc71bacd91f470c20e61073c2dc00648fd583c0fb95657dee38af05e5ed\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"}" Jun 25 18:22:25.869239 containerd[2137]: time="2024-06-25T18:22:25.868973450Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ac5f0089ad8eab325e5d16a59536f9292619adf16736b1554a439a66d543a63d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:22:25.875304 containerd[2137]: time="2024-06-25T18:22:25.875020262Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.28.0\" with image id \"sha256:94ad0dc71bacd91f470c20e61073c2dc00648fd583c0fb95657dee38af05e5ed\", repo tag \"ghcr.io/flatcar/calico/csi:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ac5f0089ad8eab325e5d16a59536f9292619adf16736b1554a439a66d543a63d\", size \"8577147\" in 11.124805507s" Jun 25 18:22:25.875304 containerd[2137]: time="2024-06-25T18:22:25.875089130Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.0\" returns image reference \"sha256:94ad0dc71bacd91f470c20e61073c2dc00648fd583c0fb95657dee38af05e5ed\"" Jun 25 18:22:25.882973 containerd[2137]: time="2024-06-25T18:22:25.882672758Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\"" Jun 25 18:22:25.895448 containerd[2137]: time="2024-06-25T18:22:25.894863762Z" level=info msg="CreateContainer within sandbox \"5f0671fccc5f8e5ae6c8d938b16aaa768eb190577b30f323033d49f3b0804abf\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jun 25 18:22:25.976108 containerd[2137]: time="2024-06-25T18:22:25.975081590Z" level=info msg="CreateContainer within sandbox \"5f0671fccc5f8e5ae6c8d938b16aaa768eb190577b30f323033d49f3b0804abf\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"82c14cbd303f4492467f158557e601d4b89192cef80b01cf8bbf71e3d6232c93\"" Jun 25 18:22:25.982652 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount526386828.mount: Deactivated successfully. Jun 25 18:22:25.984351 containerd[2137]: time="2024-06-25T18:22:25.982818446Z" level=info msg="StartContainer for \"82c14cbd303f4492467f158557e601d4b89192cef80b01cf8bbf71e3d6232c93\"" Jun 25 18:22:26.114813 systemd[1]: run-containerd-runc-k8s.io-82c14cbd303f4492467f158557e601d4b89192cef80b01cf8bbf71e3d6232c93-runc.mPaG9n.mount: Deactivated successfully. 
Jun 25 18:22:26.377637 containerd[2137]: time="2024-06-25T18:22:26.377115012Z" level=info msg="StartContainer for \"82c14cbd303f4492467f158557e601d4b89192cef80b01cf8bbf71e3d6232c93\" returns successfully" Jun 25 18:22:27.151276 containerd[2137]: time="2024-06-25T18:22:27.151207176Z" level=info msg="StopPodSandbox for \"1ca3b673433f0404f476326ce2c1eac4913b5cf6a1d2929d7fed6da1ee7dd6c5\"" Jun 25 18:22:27.152419 containerd[2137]: time="2024-06-25T18:22:27.151551624Z" level=info msg="TearDown network for sandbox \"1ca3b673433f0404f476326ce2c1eac4913b5cf6a1d2929d7fed6da1ee7dd6c5\" successfully" Jun 25 18:22:27.152419 containerd[2137]: time="2024-06-25T18:22:27.151782012Z" level=info msg="StopPodSandbox for \"1ca3b673433f0404f476326ce2c1eac4913b5cf6a1d2929d7fed6da1ee7dd6c5\" returns successfully" Jun 25 18:22:27.153946 containerd[2137]: time="2024-06-25T18:22:27.153309672Z" level=info msg="RemovePodSandbox for \"1ca3b673433f0404f476326ce2c1eac4913b5cf6a1d2929d7fed6da1ee7dd6c5\"" Jun 25 18:22:27.153946 containerd[2137]: time="2024-06-25T18:22:27.153386904Z" level=info msg="Forcibly stopping sandbox \"1ca3b673433f0404f476326ce2c1eac4913b5cf6a1d2929d7fed6da1ee7dd6c5\"" Jun 25 18:22:27.153946 containerd[2137]: time="2024-06-25T18:22:27.153645588Z" level=info msg="TearDown network for sandbox \"1ca3b673433f0404f476326ce2c1eac4913b5cf6a1d2929d7fed6da1ee7dd6c5\" successfully" Jun 25 18:22:27.172367 containerd[2137]: time="2024-06-25T18:22:27.172258692Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1ca3b673433f0404f476326ce2c1eac4913b5cf6a1d2929d7fed6da1ee7dd6c5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jun 25 18:22:27.173012 containerd[2137]: time="2024-06-25T18:22:27.172865136Z" level=info msg="RemovePodSandbox \"1ca3b673433f0404f476326ce2c1eac4913b5cf6a1d2929d7fed6da1ee7dd6c5\" returns successfully" Jun 25 18:22:27.197121 containerd[2137]: time="2024-06-25T18:22:27.197031684Z" level=info msg="StopPodSandbox for \"65bd96b0a16e1e6084b4ebc7b1237f3c95b53a176499ed410b65f0d9c2f794a6\"" Jun 25 18:22:27.575030 systemd[1]: Started sshd@9-172.31.30.218:22-139.178.89.65:45138.service - OpenSSH per-connection server daemon (139.178.89.65:45138). Jun 25 18:22:27.671583 containerd[2137]: 2024-06-25 18:22:27.423 [WARNING][5719] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="65bd96b0a16e1e6084b4ebc7b1237f3c95b53a176499ed410b65f0d9c2f794a6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--218-k8s-coredns--5dd5756b68--8cgx2-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"e3061c83-3e80-4699-9cd5-cd32f59ce77c", ResourceVersion:"855", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 21, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-218", ContainerID:"cc79414f1f949c31195f500bfa4387b2af89a81c6568c4a1890f72a7f2e4cc2c", Pod:"coredns-5dd5756b68-8cgx2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.87.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibc6b5f31896", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:22:27.671583 containerd[2137]: 2024-06-25 18:22:27.424 [INFO][5719] k8s.go 608: Cleaning up netns ContainerID="65bd96b0a16e1e6084b4ebc7b1237f3c95b53a176499ed410b65f0d9c2f794a6" Jun 25 18:22:27.671583 containerd[2137]: 2024-06-25 18:22:27.424 [INFO][5719] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="65bd96b0a16e1e6084b4ebc7b1237f3c95b53a176499ed410b65f0d9c2f794a6" iface="eth0" netns="" Jun 25 18:22:27.671583 containerd[2137]: 2024-06-25 18:22:27.425 [INFO][5719] k8s.go 615: Releasing IP address(es) ContainerID="65bd96b0a16e1e6084b4ebc7b1237f3c95b53a176499ed410b65f0d9c2f794a6" Jun 25 18:22:27.671583 containerd[2137]: 2024-06-25 18:22:27.425 [INFO][5719] utils.go 188: Calico CNI releasing IP address ContainerID="65bd96b0a16e1e6084b4ebc7b1237f3c95b53a176499ed410b65f0d9c2f794a6" Jun 25 18:22:27.671583 containerd[2137]: 2024-06-25 18:22:27.557 [INFO][5725] ipam_plugin.go 411: Releasing address using handleID ContainerID="65bd96b0a16e1e6084b4ebc7b1237f3c95b53a176499ed410b65f0d9c2f794a6" HandleID="k8s-pod-network.65bd96b0a16e1e6084b4ebc7b1237f3c95b53a176499ed410b65f0d9c2f794a6" Workload="ip--172--31--30--218-k8s-coredns--5dd5756b68--8cgx2-eth0" Jun 25 18:22:27.671583 containerd[2137]: 2024-06-25 18:22:27.558 [INFO][5725] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:22:27.671583 containerd[2137]: 2024-06-25 18:22:27.559 [INFO][5725] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 18:22:27.671583 containerd[2137]: 2024-06-25 18:22:27.602 [WARNING][5725] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="65bd96b0a16e1e6084b4ebc7b1237f3c95b53a176499ed410b65f0d9c2f794a6" HandleID="k8s-pod-network.65bd96b0a16e1e6084b4ebc7b1237f3c95b53a176499ed410b65f0d9c2f794a6" Workload="ip--172--31--30--218-k8s-coredns--5dd5756b68--8cgx2-eth0" Jun 25 18:22:27.671583 containerd[2137]: 2024-06-25 18:22:27.602 [INFO][5725] ipam_plugin.go 439: Releasing address using workloadID ContainerID="65bd96b0a16e1e6084b4ebc7b1237f3c95b53a176499ed410b65f0d9c2f794a6" HandleID="k8s-pod-network.65bd96b0a16e1e6084b4ebc7b1237f3c95b53a176499ed410b65f0d9c2f794a6" Workload="ip--172--31--30--218-k8s-coredns--5dd5756b68--8cgx2-eth0" Jun 25 18:22:27.671583 containerd[2137]: 2024-06-25 18:22:27.626 [INFO][5725] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 18:22:27.671583 containerd[2137]: 2024-06-25 18:22:27.648 [INFO][5719] k8s.go 621: Teardown processing complete. ContainerID="65bd96b0a16e1e6084b4ebc7b1237f3c95b53a176499ed410b65f0d9c2f794a6" Jun 25 18:22:27.675056 containerd[2137]: time="2024-06-25T18:22:27.671634087Z" level=info msg="TearDown network for sandbox \"65bd96b0a16e1e6084b4ebc7b1237f3c95b53a176499ed410b65f0d9c2f794a6\" successfully" Jun 25 18:22:27.675056 containerd[2137]: time="2024-06-25T18:22:27.671677131Z" level=info msg="StopPodSandbox for \"65bd96b0a16e1e6084b4ebc7b1237f3c95b53a176499ed410b65f0d9c2f794a6\" returns successfully" Jun 25 18:22:27.679887 containerd[2137]: time="2024-06-25T18:22:27.678316287Z" level=info msg="RemovePodSandbox for \"65bd96b0a16e1e6084b4ebc7b1237f3c95b53a176499ed410b65f0d9c2f794a6\"" Jun 25 18:22:27.679887 containerd[2137]: time="2024-06-25T18:22:27.678400155Z" level=info msg="Forcibly stopping sandbox \"65bd96b0a16e1e6084b4ebc7b1237f3c95b53a176499ed410b65f0d9c2f794a6\"" Jun 25 18:22:27.874179 sshd[5735]: Accepted publickey for core from 139.178.89.65 port 45138 ssh2: RSA SHA256:PKLEBy2HMUOnZIR0iwjDMXjr5aDYRcW0FQA7rLNVs1M Jun 25 18:22:27.882084 sshd[5735]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:22:27.921978 systemd-logind[2117]: New session 10 of user core. Jun 25 18:22:27.930143 systemd[1]: Started session-10.scope - Session 10 of User core. Jun 25 18:22:28.457195 containerd[2137]: 2024-06-25 18:22:28.025 [WARNING][5749] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="65bd96b0a16e1e6084b4ebc7b1237f3c95b53a176499ed410b65f0d9c2f794a6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--218-k8s-coredns--5dd5756b68--8cgx2-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"e3061c83-3e80-4699-9cd5-cd32f59ce77c", ResourceVersion:"855", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 21, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-218", ContainerID:"cc79414f1f949c31195f500bfa4387b2af89a81c6568c4a1890f72a7f2e4cc2c", Pod:"coredns-5dd5756b68-8cgx2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.87.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibc6b5f31896", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:22:28.457195 containerd[2137]: 2024-06-25 18:22:28.036 [INFO][5749] k8s.go 608: Cleaning up netns ContainerID="65bd96b0a16e1e6084b4ebc7b1237f3c95b53a176499ed410b65f0d9c2f794a6" Jun 25 18:22:28.457195 containerd[2137]: 2024-06-25 18:22:28.036 [INFO][5749] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="65bd96b0a16e1e6084b4ebc7b1237f3c95b53a176499ed410b65f0d9c2f794a6" iface="eth0" netns="" Jun 25 18:22:28.457195 containerd[2137]: 2024-06-25 18:22:28.036 [INFO][5749] k8s.go 615: Releasing IP address(es) ContainerID="65bd96b0a16e1e6084b4ebc7b1237f3c95b53a176499ed410b65f0d9c2f794a6" Jun 25 18:22:28.457195 containerd[2137]: 2024-06-25 18:22:28.036 [INFO][5749] utils.go 188: Calico CNI releasing IP address ContainerID="65bd96b0a16e1e6084b4ebc7b1237f3c95b53a176499ed410b65f0d9c2f794a6" Jun 25 18:22:28.457195 containerd[2137]: 2024-06-25 18:22:28.391 [INFO][5764] ipam_plugin.go 411: Releasing address using handleID ContainerID="65bd96b0a16e1e6084b4ebc7b1237f3c95b53a176499ed410b65f0d9c2f794a6" HandleID="k8s-pod-network.65bd96b0a16e1e6084b4ebc7b1237f3c95b53a176499ed410b65f0d9c2f794a6" Workload="ip--172--31--30--218-k8s-coredns--5dd5756b68--8cgx2-eth0" Jun 25 18:22:28.457195 containerd[2137]: 2024-06-25 18:22:28.391 [INFO][5764] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:22:28.457195 containerd[2137]: 2024-06-25 18:22:28.391 [INFO][5764] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 18:22:28.457195 containerd[2137]: 2024-06-25 18:22:28.429 [WARNING][5764] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="65bd96b0a16e1e6084b4ebc7b1237f3c95b53a176499ed410b65f0d9c2f794a6" HandleID="k8s-pod-network.65bd96b0a16e1e6084b4ebc7b1237f3c95b53a176499ed410b65f0d9c2f794a6" Workload="ip--172--31--30--218-k8s-coredns--5dd5756b68--8cgx2-eth0" Jun 25 18:22:28.457195 containerd[2137]: 2024-06-25 18:22:28.430 [INFO][5764] ipam_plugin.go 439: Releasing address using workloadID ContainerID="65bd96b0a16e1e6084b4ebc7b1237f3c95b53a176499ed410b65f0d9c2f794a6" HandleID="k8s-pod-network.65bd96b0a16e1e6084b4ebc7b1237f3c95b53a176499ed410b65f0d9c2f794a6" Workload="ip--172--31--30--218-k8s-coredns--5dd5756b68--8cgx2-eth0" Jun 25 18:22:28.457195 containerd[2137]: 2024-06-25 18:22:28.438 [INFO][5764] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 18:22:28.457195 containerd[2137]: 2024-06-25 18:22:28.446 [INFO][5749] k8s.go 621: Teardown processing complete. ContainerID="65bd96b0a16e1e6084b4ebc7b1237f3c95b53a176499ed410b65f0d9c2f794a6" Jun 25 18:22:28.457195 containerd[2137]: time="2024-06-25T18:22:28.456691131Z" level=info msg="TearDown network for sandbox \"65bd96b0a16e1e6084b4ebc7b1237f3c95b53a176499ed410b65f0d9c2f794a6\" successfully" Jun 25 18:22:28.471239 containerd[2137]: time="2024-06-25T18:22:28.468930543Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"65bd96b0a16e1e6084b4ebc7b1237f3c95b53a176499ed410b65f0d9c2f794a6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jun 25 18:22:28.471239 containerd[2137]: time="2024-06-25T18:22:28.469034751Z" level=info msg="RemovePodSandbox \"65bd96b0a16e1e6084b4ebc7b1237f3c95b53a176499ed410b65f0d9c2f794a6\" returns successfully" Jun 25 18:22:28.471239 containerd[2137]: time="2024-06-25T18:22:28.470680671Z" level=info msg="StopPodSandbox for \"e8627122a919844d22645dcf79e51e00625e331519d1f70ef84ef456d80cec3e\"" Jun 25 18:22:28.504832 sshd[5735]: pam_unix(sshd:session): session closed for user core Jun 25 18:22:28.516384 systemd[1]: sshd@9-172.31.30.218:22-139.178.89.65:45138.service: Deactivated successfully. Jun 25 18:22:28.528793 systemd-logind[2117]: Session 10 logged out. Waiting for processes to exit. Jun 25 18:22:28.529336 systemd[1]: session-10.scope: Deactivated successfully. Jun 25 18:22:28.563161 systemd[1]: Started sshd@10-172.31.30.218:22-139.178.89.65:45146.service - OpenSSH per-connection server daemon (139.178.89.65:45146). Jun 25 18:22:28.565935 systemd-logind[2117]: Removed session 10. Jun 25 18:22:28.776641 sshd[5803]: Accepted publickey for core from 139.178.89.65 port 45146 ssh2: RSA SHA256:PKLEBy2HMUOnZIR0iwjDMXjr5aDYRcW0FQA7rLNVs1M Jun 25 18:22:28.782545 sshd[5803]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:22:28.797557 systemd-logind[2117]: New session 11 of user core. Jun 25 18:22:28.806054 systemd[1]: Started session-11.scope - Session 11 of User core. Jun 25 18:22:28.842922 containerd[2137]: 2024-06-25 18:22:28.689 [WARNING][5796] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e8627122a919844d22645dcf79e51e00625e331519d1f70ef84ef456d80cec3e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--218-k8s-csi--node--driver--66nfl-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ad1ae947-67f0-4071-a385-d1029b7ab538", ResourceVersion:"808", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 21, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-218", ContainerID:"5f0671fccc5f8e5ae6c8d938b16aaa768eb190577b30f323033d49f3b0804abf", Pod:"csi-node-driver-66nfl", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.87.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali358c61e966a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:22:28.842922 containerd[2137]: 2024-06-25 18:22:28.689 [INFO][5796] k8s.go 608: Cleaning up netns ContainerID="e8627122a919844d22645dcf79e51e00625e331519d1f70ef84ef456d80cec3e" Jun 25 18:22:28.842922 containerd[2137]: 2024-06-25 18:22:28.689 [INFO][5796] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="e8627122a919844d22645dcf79e51e00625e331519d1f70ef84ef456d80cec3e" iface="eth0" netns="" Jun 25 18:22:28.842922 containerd[2137]: 2024-06-25 18:22:28.689 [INFO][5796] k8s.go 615: Releasing IP address(es) ContainerID="e8627122a919844d22645dcf79e51e00625e331519d1f70ef84ef456d80cec3e" Jun 25 18:22:28.842922 containerd[2137]: 2024-06-25 18:22:28.690 [INFO][5796] utils.go 188: Calico CNI releasing IP address ContainerID="e8627122a919844d22645dcf79e51e00625e331519d1f70ef84ef456d80cec3e" Jun 25 18:22:28.842922 containerd[2137]: 2024-06-25 18:22:28.800 [INFO][5808] ipam_plugin.go 411: Releasing address using handleID ContainerID="e8627122a919844d22645dcf79e51e00625e331519d1f70ef84ef456d80cec3e" HandleID="k8s-pod-network.e8627122a919844d22645dcf79e51e00625e331519d1f70ef84ef456d80cec3e" Workload="ip--172--31--30--218-k8s-csi--node--driver--66nfl-eth0" Jun 25 18:22:28.842922 containerd[2137]: 2024-06-25 18:22:28.801 [INFO][5808] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:22:28.842922 containerd[2137]: 2024-06-25 18:22:28.802 [INFO][5808] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 18:22:28.842922 containerd[2137]: 2024-06-25 18:22:28.829 [WARNING][5808] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e8627122a919844d22645dcf79e51e00625e331519d1f70ef84ef456d80cec3e" HandleID="k8s-pod-network.e8627122a919844d22645dcf79e51e00625e331519d1f70ef84ef456d80cec3e" Workload="ip--172--31--30--218-k8s-csi--node--driver--66nfl-eth0" Jun 25 18:22:28.842922 containerd[2137]: 2024-06-25 18:22:28.829 [INFO][5808] ipam_plugin.go 439: Releasing address using workloadID ContainerID="e8627122a919844d22645dcf79e51e00625e331519d1f70ef84ef456d80cec3e" HandleID="k8s-pod-network.e8627122a919844d22645dcf79e51e00625e331519d1f70ef84ef456d80cec3e" Workload="ip--172--31--30--218-k8s-csi--node--driver--66nfl-eth0" Jun 25 18:22:28.842922 containerd[2137]: 2024-06-25 18:22:28.835 [INFO][5808] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 18:22:28.842922 containerd[2137]: 2024-06-25 18:22:28.839 [INFO][5796] k8s.go 621: Teardown processing complete. ContainerID="e8627122a919844d22645dcf79e51e00625e331519d1f70ef84ef456d80cec3e" Jun 25 18:22:28.845699 containerd[2137]: time="2024-06-25T18:22:28.843228641Z" level=info msg="TearDown network for sandbox \"e8627122a919844d22645dcf79e51e00625e331519d1f70ef84ef456d80cec3e\" successfully" Jun 25 18:22:28.845699 containerd[2137]: time="2024-06-25T18:22:28.843274757Z" level=info msg="StopPodSandbox for \"e8627122a919844d22645dcf79e51e00625e331519d1f70ef84ef456d80cec3e\" returns successfully" Jun 25 18:22:28.845699 containerd[2137]: time="2024-06-25T18:22:28.844049297Z" level=info msg="RemovePodSandbox for \"e8627122a919844d22645dcf79e51e00625e331519d1f70ef84ef456d80cec3e\"" Jun 25 18:22:28.845699 containerd[2137]: time="2024-06-25T18:22:28.844110257Z" level=info msg="Forcibly stopping sandbox \"e8627122a919844d22645dcf79e51e00625e331519d1f70ef84ef456d80cec3e\"" Jun 25 18:22:29.268946 containerd[2137]: 2024-06-25 18:22:28.970 [WARNING][5829] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e8627122a919844d22645dcf79e51e00625e331519d1f70ef84ef456d80cec3e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--218-k8s-csi--node--driver--66nfl-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ad1ae947-67f0-4071-a385-d1029b7ab538", ResourceVersion:"808", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 21, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-218", ContainerID:"5f0671fccc5f8e5ae6c8d938b16aaa768eb190577b30f323033d49f3b0804abf", Pod:"csi-node-driver-66nfl", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.87.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali358c61e966a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:22:29.268946 containerd[2137]: 2024-06-25 18:22:28.978 [INFO][5829] k8s.go 608: Cleaning up netns ContainerID="e8627122a919844d22645dcf79e51e00625e331519d1f70ef84ef456d80cec3e" Jun 25 18:22:29.268946 containerd[2137]: 2024-06-25 18:22:28.978 [INFO][5829] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="e8627122a919844d22645dcf79e51e00625e331519d1f70ef84ef456d80cec3e" iface="eth0" netns="" Jun 25 18:22:29.268946 containerd[2137]: 2024-06-25 18:22:28.978 [INFO][5829] k8s.go 615: Releasing IP address(es) ContainerID="e8627122a919844d22645dcf79e51e00625e331519d1f70ef84ef456d80cec3e" Jun 25 18:22:29.268946 containerd[2137]: 2024-06-25 18:22:28.980 [INFO][5829] utils.go 188: Calico CNI releasing IP address ContainerID="e8627122a919844d22645dcf79e51e00625e331519d1f70ef84ef456d80cec3e" Jun 25 18:22:29.268946 containerd[2137]: 2024-06-25 18:22:29.195 [INFO][5841] ipam_plugin.go 411: Releasing address using handleID ContainerID="e8627122a919844d22645dcf79e51e00625e331519d1f70ef84ef456d80cec3e" HandleID="k8s-pod-network.e8627122a919844d22645dcf79e51e00625e331519d1f70ef84ef456d80cec3e" Workload="ip--172--31--30--218-k8s-csi--node--driver--66nfl-eth0" Jun 25 18:22:29.268946 containerd[2137]: 2024-06-25 18:22:29.196 [INFO][5841] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:22:29.268946 containerd[2137]: 2024-06-25 18:22:29.196 [INFO][5841] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 18:22:29.268946 containerd[2137]: 2024-06-25 18:22:29.225 [WARNING][5841] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e8627122a919844d22645dcf79e51e00625e331519d1f70ef84ef456d80cec3e" HandleID="k8s-pod-network.e8627122a919844d22645dcf79e51e00625e331519d1f70ef84ef456d80cec3e" Workload="ip--172--31--30--218-k8s-csi--node--driver--66nfl-eth0" Jun 25 18:22:29.268946 containerd[2137]: 2024-06-25 18:22:29.225 [INFO][5841] ipam_plugin.go 439: Releasing address using workloadID ContainerID="e8627122a919844d22645dcf79e51e00625e331519d1f70ef84ef456d80cec3e" HandleID="k8s-pod-network.e8627122a919844d22645dcf79e51e00625e331519d1f70ef84ef456d80cec3e" Workload="ip--172--31--30--218-k8s-csi--node--driver--66nfl-eth0" Jun 25 18:22:29.268946 containerd[2137]: 2024-06-25 18:22:29.236 [INFO][5841] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 18:22:29.268946 containerd[2137]: 2024-06-25 18:22:29.247 [INFO][5829] k8s.go 621: Teardown processing complete. ContainerID="e8627122a919844d22645dcf79e51e00625e331519d1f70ef84ef456d80cec3e" Jun 25 18:22:29.268946 containerd[2137]: time="2024-06-25T18:22:29.268875687Z" level=info msg="TearDown network for sandbox \"e8627122a919844d22645dcf79e51e00625e331519d1f70ef84ef456d80cec3e\" successfully" Jun 25 18:22:29.299696 containerd[2137]: time="2024-06-25T18:22:29.299606559Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e8627122a919844d22645dcf79e51e00625e331519d1f70ef84ef456d80cec3e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jun 25 18:22:29.300748 containerd[2137]: time="2024-06-25T18:22:29.300685503Z" level=info msg="RemovePodSandbox \"e8627122a919844d22645dcf79e51e00625e331519d1f70ef84ef456d80cec3e\" returns successfully" Jun 25 18:22:29.304280 containerd[2137]: time="2024-06-25T18:22:29.303603567Z" level=info msg="StopPodSandbox for \"0dfe2a83a1c9334730cbe15ee849917fec1ec2930f396b0c0dad552385314fc0\"" Jun 25 18:22:29.635785 containerd[2137]: 2024-06-25 18:22:29.481 [WARNING][5860] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0dfe2a83a1c9334730cbe15ee849917fec1ec2930f396b0c0dad552385314fc0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--218-k8s-calico--kube--controllers--7f944cc9c7--m2v8x-eth0", GenerateName:"calico-kube-controllers-7f944cc9c7-", Namespace:"calico-system", SelfLink:"", UID:"495cd0df-7e05-4c75-b413-8145733a3fc6", ResourceVersion:"885", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 21, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7f944cc9c7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-218", ContainerID:"9cd795d38c7e3f12cd2706b70ce71ea1c397abbd89313e5f8bb6b6811684f370", Pod:"calico-kube-controllers-7f944cc9c7-m2v8x", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.87.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali6f3a099e389", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:22:29.635785 containerd[2137]: 2024-06-25 18:22:29.482 [INFO][5860] k8s.go 608: Cleaning up netns ContainerID="0dfe2a83a1c9334730cbe15ee849917fec1ec2930f396b0c0dad552385314fc0" Jun 25 18:22:29.635785 containerd[2137]: 2024-06-25 18:22:29.482 [INFO][5860] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="0dfe2a83a1c9334730cbe15ee849917fec1ec2930f396b0c0dad552385314fc0" iface="eth0" netns="" Jun 25 18:22:29.635785 containerd[2137]: 2024-06-25 18:22:29.483 [INFO][5860] k8s.go 615: Releasing IP address(es) ContainerID="0dfe2a83a1c9334730cbe15ee849917fec1ec2930f396b0c0dad552385314fc0" Jun 25 18:22:29.635785 containerd[2137]: 2024-06-25 18:22:29.483 [INFO][5860] utils.go 188: Calico CNI releasing IP address ContainerID="0dfe2a83a1c9334730cbe15ee849917fec1ec2930f396b0c0dad552385314fc0" Jun 25 18:22:29.635785 containerd[2137]: 2024-06-25 18:22:29.565 [INFO][5866] ipam_plugin.go 411: Releasing address using handleID ContainerID="0dfe2a83a1c9334730cbe15ee849917fec1ec2930f396b0c0dad552385314fc0" HandleID="k8s-pod-network.0dfe2a83a1c9334730cbe15ee849917fec1ec2930f396b0c0dad552385314fc0" Workload="ip--172--31--30--218-k8s-calico--kube--controllers--7f944cc9c7--m2v8x-eth0" Jun 25 18:22:29.635785 containerd[2137]: 2024-06-25 18:22:29.568 [INFO][5866] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:22:29.635785 containerd[2137]: 2024-06-25 18:22:29.568 [INFO][5866] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 18:22:29.635785 containerd[2137]: 2024-06-25 18:22:29.602 [WARNING][5866] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0dfe2a83a1c9334730cbe15ee849917fec1ec2930f396b0c0dad552385314fc0" HandleID="k8s-pod-network.0dfe2a83a1c9334730cbe15ee849917fec1ec2930f396b0c0dad552385314fc0" Workload="ip--172--31--30--218-k8s-calico--kube--controllers--7f944cc9c7--m2v8x-eth0" Jun 25 18:22:29.635785 containerd[2137]: 2024-06-25 18:22:29.603 [INFO][5866] ipam_plugin.go 439: Releasing address using workloadID ContainerID="0dfe2a83a1c9334730cbe15ee849917fec1ec2930f396b0c0dad552385314fc0" HandleID="k8s-pod-network.0dfe2a83a1c9334730cbe15ee849917fec1ec2930f396b0c0dad552385314fc0" Workload="ip--172--31--30--218-k8s-calico--kube--controllers--7f944cc9c7--m2v8x-eth0" Jun 25 18:22:29.635785 containerd[2137]: 2024-06-25 18:22:29.608 [INFO][5866] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 18:22:29.635785 containerd[2137]: 2024-06-25 18:22:29.620 [INFO][5860] k8s.go 621: Teardown processing complete. ContainerID="0dfe2a83a1c9334730cbe15ee849917fec1ec2930f396b0c0dad552385314fc0" Jun 25 18:22:29.635785 containerd[2137]: time="2024-06-25T18:22:29.635612801Z" level=info msg="TearDown network for sandbox \"0dfe2a83a1c9334730cbe15ee849917fec1ec2930f396b0c0dad552385314fc0\" successfully" Jun 25 18:22:29.635785 containerd[2137]: time="2024-06-25T18:22:29.635660261Z" level=info msg="StopPodSandbox for \"0dfe2a83a1c9334730cbe15ee849917fec1ec2930f396b0c0dad552385314fc0\" returns successfully" Jun 25 18:22:29.641515 containerd[2137]: time="2024-06-25T18:22:29.639380873Z" level=info msg="RemovePodSandbox for \"0dfe2a83a1c9334730cbe15ee849917fec1ec2930f396b0c0dad552385314fc0\"" Jun 25 18:22:29.642497 containerd[2137]: time="2024-06-25T18:22:29.640360661Z" level=info msg="Forcibly stopping sandbox \"0dfe2a83a1c9334730cbe15ee849917fec1ec2930f396b0c0dad552385314fc0\"" Jun 25 18:22:29.679348 systemd-journald[1613]: Under memory pressure, flushing caches. Jun 25 18:22:29.674195 systemd-resolved[2029]: Under memory pressure, flushing caches. Jun 25 18:22:29.674271 systemd-resolved[2029]: Flushed all caches. Jun 25 18:22:29.878744 sshd[5803]: pam_unix(sshd:session): session closed for user core Jun 25 18:22:29.898632 systemd[1]: sshd@10-172.31.30.218:22-139.178.89.65:45146.service: Deactivated successfully. Jun 25 18:22:29.914612 systemd-logind[2117]: Session 11 logged out. Waiting for processes to exit. Jun 25 18:22:29.922908 systemd[1]: session-11.scope: Deactivated successfully. Jun 25 18:22:29.940900 systemd[1]: Started sshd@11-172.31.30.218:22-139.178.89.65:45158.service - OpenSSH per-connection server daemon (139.178.89.65:45158). Jun 25 18:22:29.947947 systemd-logind[2117]: Removed session 11. Jun 25 18:22:30.115257 containerd[2137]: 2024-06-25 18:22:29.986 [WARNING][5884] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0dfe2a83a1c9334730cbe15ee849917fec1ec2930f396b0c0dad552385314fc0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--218-k8s-calico--kube--controllers--7f944cc9c7--m2v8x-eth0", GenerateName:"calico-kube-controllers-7f944cc9c7-", Namespace:"calico-system", SelfLink:"", UID:"495cd0df-7e05-4c75-b413-8145733a3fc6", ResourceVersion:"885", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 21, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7f944cc9c7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-218", ContainerID:"9cd795d38c7e3f12cd2706b70ce71ea1c397abbd89313e5f8bb6b6811684f370", Pod:"calico-kube-controllers-7f944cc9c7-m2v8x", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.87.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali6f3a099e389", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:22:30.115257 containerd[2137]: 2024-06-25 18:22:29.992 [INFO][5884] k8s.go 608: Cleaning up netns ContainerID="0dfe2a83a1c9334730cbe15ee849917fec1ec2930f396b0c0dad552385314fc0" Jun 25 18:22:30.115257 containerd[2137]: 2024-06-25 18:22:29.993 [INFO][5884] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="0dfe2a83a1c9334730cbe15ee849917fec1ec2930f396b0c0dad552385314fc0" iface="eth0" netns="" Jun 25 18:22:30.115257 containerd[2137]: 2024-06-25 18:22:29.993 [INFO][5884] k8s.go 615: Releasing IP address(es) ContainerID="0dfe2a83a1c9334730cbe15ee849917fec1ec2930f396b0c0dad552385314fc0" Jun 25 18:22:30.115257 containerd[2137]: 2024-06-25 18:22:29.994 [INFO][5884] utils.go 188: Calico CNI releasing IP address ContainerID="0dfe2a83a1c9334730cbe15ee849917fec1ec2930f396b0c0dad552385314fc0" Jun 25 18:22:30.115257 containerd[2137]: 2024-06-25 18:22:30.064 [INFO][5896] ipam_plugin.go 411: Releasing address using handleID ContainerID="0dfe2a83a1c9334730cbe15ee849917fec1ec2930f396b0c0dad552385314fc0" HandleID="k8s-pod-network.0dfe2a83a1c9334730cbe15ee849917fec1ec2930f396b0c0dad552385314fc0" Workload="ip--172--31--30--218-k8s-calico--kube--controllers--7f944cc9c7--m2v8x-eth0" Jun 25 18:22:30.115257 containerd[2137]: 2024-06-25 18:22:30.064 [INFO][5896] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:22:30.115257 containerd[2137]: 2024-06-25 18:22:30.064 [INFO][5896] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 18:22:30.115257 containerd[2137]: 2024-06-25 18:22:30.093 [WARNING][5896] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0dfe2a83a1c9334730cbe15ee849917fec1ec2930f396b0c0dad552385314fc0" HandleID="k8s-pod-network.0dfe2a83a1c9334730cbe15ee849917fec1ec2930f396b0c0dad552385314fc0" Workload="ip--172--31--30--218-k8s-calico--kube--controllers--7f944cc9c7--m2v8x-eth0" Jun 25 18:22:30.115257 containerd[2137]: 2024-06-25 18:22:30.093 [INFO][5896] ipam_plugin.go 439: Releasing address using workloadID ContainerID="0dfe2a83a1c9334730cbe15ee849917fec1ec2930f396b0c0dad552385314fc0" HandleID="k8s-pod-network.0dfe2a83a1c9334730cbe15ee849917fec1ec2930f396b0c0dad552385314fc0" Workload="ip--172--31--30--218-k8s-calico--kube--controllers--7f944cc9c7--m2v8x-eth0" Jun 25 18:22:30.115257 containerd[2137]: 2024-06-25 18:22:30.100 [INFO][5896] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 18:22:30.115257 containerd[2137]: 2024-06-25 18:22:30.108 [INFO][5884] k8s.go 621: Teardown processing complete. ContainerID="0dfe2a83a1c9334730cbe15ee849917fec1ec2930f396b0c0dad552385314fc0" Jun 25 18:22:30.116249 containerd[2137]: time="2024-06-25T18:22:30.115605411Z" level=info msg="TearDown network for sandbox \"0dfe2a83a1c9334730cbe15ee849917fec1ec2930f396b0c0dad552385314fc0\" successfully" Jun 25 18:22:30.124194 containerd[2137]: time="2024-06-25T18:22:30.123137703Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0dfe2a83a1c9334730cbe15ee849917fec1ec2930f396b0c0dad552385314fc0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jun 25 18:22:30.124194 containerd[2137]: time="2024-06-25T18:22:30.123239931Z" level=info msg="RemovePodSandbox \"0dfe2a83a1c9334730cbe15ee849917fec1ec2930f396b0c0dad552385314fc0\" returns successfully" Jun 25 18:22:30.124194 containerd[2137]: time="2024-06-25T18:22:30.124159095Z" level=info msg="StopPodSandbox for \"f951cb5020bb90d4bb343fd853ceae234c0ce790a35a1c2ea9cc8c210703479c\"" Jun 25 18:22:30.124483 containerd[2137]: time="2024-06-25T18:22:30.124322175Z" level=info msg="TearDown network for sandbox \"f951cb5020bb90d4bb343fd853ceae234c0ce790a35a1c2ea9cc8c210703479c\" successfully" Jun 25 18:22:30.124483 containerd[2137]: time="2024-06-25T18:22:30.124390575Z" level=info msg="StopPodSandbox for \"f951cb5020bb90d4bb343fd853ceae234c0ce790a35a1c2ea9cc8c210703479c\" returns successfully" Jun 25 18:22:30.128802 containerd[2137]: time="2024-06-25T18:22:30.127076463Z" level=info msg="RemovePodSandbox for \"f951cb5020bb90d4bb343fd853ceae234c0ce790a35a1c2ea9cc8c210703479c\"" Jun 25 18:22:30.128802 containerd[2137]: time="2024-06-25T18:22:30.127152255Z" level=info msg="Forcibly stopping sandbox \"f951cb5020bb90d4bb343fd853ceae234c0ce790a35a1c2ea9cc8c210703479c\"" Jun 25 18:22:30.128802 containerd[2137]: time="2024-06-25T18:22:30.127323651Z" level=info msg="TearDown network for sandbox \"f951cb5020bb90d4bb343fd853ceae234c0ce790a35a1c2ea9cc8c210703479c\" successfully" Jun 25 18:22:30.138849 containerd[2137]: time="2024-06-25T18:22:30.138755667Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f951cb5020bb90d4bb343fd853ceae234c0ce790a35a1c2ea9cc8c210703479c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jun 25 18:22:30.139033 containerd[2137]: time="2024-06-25T18:22:30.138921339Z" level=info msg="RemovePodSandbox \"f951cb5020bb90d4bb343fd853ceae234c0ce790a35a1c2ea9cc8c210703479c\" returns successfully" Jun 25 18:22:30.142410 containerd[2137]: time="2024-06-25T18:22:30.142282203Z" level=info msg="StopPodSandbox for \"61b196cafb2c5f7e34521140e01a9d130a5db5d81d749f79597f19e97bf8184e\"" Jun 25 18:22:30.182732 sshd[5894]: Accepted publickey for core from 139.178.89.65 port 45158 ssh2: RSA SHA256:PKLEBy2HMUOnZIR0iwjDMXjr5aDYRcW0FQA7rLNVs1M Jun 25 18:22:30.187640 sshd[5894]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:22:30.204387 systemd-logind[2117]: New session 12 of user core. Jun 25 18:22:30.212085 systemd[1]: Started session-12.scope - Session 12 of User core. Jun 25 18:22:30.449283 containerd[2137]: 2024-06-25 18:22:30.305 [WARNING][5915] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="61b196cafb2c5f7e34521140e01a9d130a5db5d81d749f79597f19e97bf8184e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--218-k8s-coredns--5dd5756b68--nr5tn-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"01707278-3a25-4665-b673-aed626240ae3", ResourceVersion:"875", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 21, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-218", ContainerID:"dfc63513a96eeb4702d7376c803f3644767df425c38b620e8b5cb0197b0d3128", Pod:"coredns-5dd5756b68-nr5tn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.87.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali28983b37e73", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:22:30.449283 containerd[2137]: 2024-06-25 18:22:30.307 [INFO][5915] k8s.go 608: Cleaning up netns ContainerID="61b196cafb2c5f7e34521140e01a9d130a5db5d81d749f79597f19e97bf8184e" Jun 25 18:22:30.449283 containerd[2137]: 2024-06-25 18:22:30.307 [INFO][5915] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="61b196cafb2c5f7e34521140e01a9d130a5db5d81d749f79597f19e97bf8184e" iface="eth0" netns="" Jun 25 18:22:30.449283 containerd[2137]: 2024-06-25 18:22:30.307 [INFO][5915] k8s.go 615: Releasing IP address(es) ContainerID="61b196cafb2c5f7e34521140e01a9d130a5db5d81d749f79597f19e97bf8184e" Jun 25 18:22:30.449283 containerd[2137]: 2024-06-25 18:22:30.307 [INFO][5915] utils.go 188: Calico CNI releasing IP address ContainerID="61b196cafb2c5f7e34521140e01a9d130a5db5d81d749f79597f19e97bf8184e" Jun 25 18:22:30.449283 containerd[2137]: 2024-06-25 18:22:30.385 [INFO][5923] ipam_plugin.go 411: Releasing address using handleID ContainerID="61b196cafb2c5f7e34521140e01a9d130a5db5d81d749f79597f19e97bf8184e" HandleID="k8s-pod-network.61b196cafb2c5f7e34521140e01a9d130a5db5d81d749f79597f19e97bf8184e" Workload="ip--172--31--30--218-k8s-coredns--5dd5756b68--nr5tn-eth0" Jun 25 18:22:30.449283 containerd[2137]: 2024-06-25 18:22:30.387 [INFO][5923] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:22:30.449283 containerd[2137]: 2024-06-25 18:22:30.387 [INFO][5923] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 18:22:30.449283 containerd[2137]: 2024-06-25 18:22:30.421 [WARNING][5923] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="61b196cafb2c5f7e34521140e01a9d130a5db5d81d749f79597f19e97bf8184e" HandleID="k8s-pod-network.61b196cafb2c5f7e34521140e01a9d130a5db5d81d749f79597f19e97bf8184e" Workload="ip--172--31--30--218-k8s-coredns--5dd5756b68--nr5tn-eth0" Jun 25 18:22:30.449283 containerd[2137]: 2024-06-25 18:22:30.421 [INFO][5923] ipam_plugin.go 439: Releasing address using workloadID ContainerID="61b196cafb2c5f7e34521140e01a9d130a5db5d81d749f79597f19e97bf8184e" HandleID="k8s-pod-network.61b196cafb2c5f7e34521140e01a9d130a5db5d81d749f79597f19e97bf8184e" Workload="ip--172--31--30--218-k8s-coredns--5dd5756b68--nr5tn-eth0" Jun 25 18:22:30.449283 containerd[2137]: 2024-06-25 18:22:30.428 [INFO][5923] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 18:22:30.449283 containerd[2137]: 2024-06-25 18:22:30.438 [INFO][5915] k8s.go 621: Teardown processing complete. 
ContainerID="61b196cafb2c5f7e34521140e01a9d130a5db5d81d749f79597f19e97bf8184e" Jun 25 18:22:30.452376 containerd[2137]: time="2024-06-25T18:22:30.451551449Z" level=info msg="TearDown network for sandbox \"61b196cafb2c5f7e34521140e01a9d130a5db5d81d749f79597f19e97bf8184e\" successfully" Jun 25 18:22:30.452376 containerd[2137]: time="2024-06-25T18:22:30.451615157Z" level=info msg="StopPodSandbox for \"61b196cafb2c5f7e34521140e01a9d130a5db5d81d749f79597f19e97bf8184e\" returns successfully" Jun 25 18:22:30.456302 containerd[2137]: time="2024-06-25T18:22:30.455707049Z" level=info msg="RemovePodSandbox for \"61b196cafb2c5f7e34521140e01a9d130a5db5d81d749f79597f19e97bf8184e\"" Jun 25 18:22:30.456302 containerd[2137]: time="2024-06-25T18:22:30.455773793Z" level=info msg="Forcibly stopping sandbox \"61b196cafb2c5f7e34521140e01a9d130a5db5d81d749f79597f19e97bf8184e\"" Jun 25 18:22:30.495520 containerd[2137]: time="2024-06-25T18:22:30.495292841Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:22:30.500103 containerd[2137]: time="2024-06-25T18:22:30.500031461Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.28.0: active requests=0, bytes read=31361057" Jun 25 18:22:30.504502 containerd[2137]: time="2024-06-25T18:22:30.503701445Z" level=info msg="ImageCreate event name:\"sha256:89df47edb6965978d3683de1cac38ee5b47d7054332bbea7cc0ef3b3c17da2e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:22:30.517879 containerd[2137]: time="2024-06-25T18:22:30.517813097Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:c35e88abef622483409fff52313bf764a75095197be4c5a7c7830da342654de1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:22:30.523716 containerd[2137]: time="2024-06-25T18:22:30.523642829Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" with image id \"sha256:89df47edb6965978d3683de1cac38ee5b47d7054332bbea7cc0ef3b3c17da2e1\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:c35e88abef622483409fff52313bf764a75095197be4c5a7c7830da342654de1\", size \"32727593\" in 4.640886899s" Jun 25 18:22:30.524104 containerd[2137]: time="2024-06-25T18:22:30.524049713Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" returns image reference \"sha256:89df47edb6965978d3683de1cac38ee5b47d7054332bbea7cc0ef3b3c17da2e1\"" Jun 25 18:22:30.527513 containerd[2137]: time="2024-06-25T18:22:30.527197061Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\"" Jun 25 18:22:30.621035 containerd[2137]: time="2024-06-25T18:22:30.620769281Z" level=info msg="CreateContainer within sandbox \"9cd795d38c7e3f12cd2706b70ce71ea1c397abbd89313e5f8bb6b6811684f370\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jun 25 18:22:30.654830 sshd[5894]: pam_unix(sshd:session): session closed for user core Jun 25 18:22:30.682657 containerd[2137]: time="2024-06-25T18:22:30.682065030Z" level=info msg="CreateContainer within sandbox \"9cd795d38c7e3f12cd2706b70ce71ea1c397abbd89313e5f8bb6b6811684f370\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"cf0e541bcf57a54ec6736076435c244b4d865c1ea97fe00ac70fcbfb4f206bfd\"" Jun 25 18:22:30.694667 systemd-logind[2117]: Session 12 logged out. Waiting for processes to exit. 
Jun 25 18:22:30.696248 systemd[1]: sshd@11-172.31.30.218:22-139.178.89.65:45158.service: Deactivated successfully. Jun 25 18:22:30.709519 containerd[2137]: time="2024-06-25T18:22:30.703998678Z" level=info msg="StartContainer for \"cf0e541bcf57a54ec6736076435c244b4d865c1ea97fe00ac70fcbfb4f206bfd\"" Jun 25 18:22:30.709283 systemd[1]: session-12.scope: Deactivated successfully. Jun 25 18:22:30.717565 systemd-logind[2117]: Removed session 12. Jun 25 18:22:30.908043 containerd[2137]: 2024-06-25 18:22:30.731 [WARNING][5948] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="61b196cafb2c5f7e34521140e01a9d130a5db5d81d749f79597f19e97bf8184e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--218-k8s-coredns--5dd5756b68--nr5tn-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"01707278-3a25-4665-b673-aed626240ae3", ResourceVersion:"875", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 21, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-218", ContainerID:"dfc63513a96eeb4702d7376c803f3644767df425c38b620e8b5cb0197b0d3128", Pod:"coredns-5dd5756b68-nr5tn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.87.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali28983b37e73", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:22:30.908043 containerd[2137]: 2024-06-25 18:22:30.746 [INFO][5948] k8s.go 608: Cleaning up netns ContainerID="61b196cafb2c5f7e34521140e01a9d130a5db5d81d749f79597f19e97bf8184e" Jun 25 18:22:30.908043 containerd[2137]: 2024-06-25 18:22:30.747 [INFO][5948] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="61b196cafb2c5f7e34521140e01a9d130a5db5d81d749f79597f19e97bf8184e" iface="eth0" netns="" Jun 25 18:22:30.908043 containerd[2137]: 2024-06-25 18:22:30.754 [INFO][5948] k8s.go 615: Releasing IP address(es) ContainerID="61b196cafb2c5f7e34521140e01a9d130a5db5d81d749f79597f19e97bf8184e" Jun 25 18:22:30.908043 containerd[2137]: 2024-06-25 18:22:30.754 [INFO][5948] utils.go 188: Calico CNI releasing IP address ContainerID="61b196cafb2c5f7e34521140e01a9d130a5db5d81d749f79597f19e97bf8184e" Jun 25 18:22:30.908043 containerd[2137]: 2024-06-25 18:22:30.873 [INFO][5967] ipam_plugin.go 411: Releasing address using handleID ContainerID="61b196cafb2c5f7e34521140e01a9d130a5db5d81d749f79597f19e97bf8184e" HandleID="k8s-pod-network.61b196cafb2c5f7e34521140e01a9d130a5db5d81d749f79597f19e97bf8184e" Workload="ip--172--31--30--218-k8s-coredns--5dd5756b68--nr5tn-eth0" Jun 25 18:22:30.908043 containerd[2137]: 2024-06-25 18:22:30.873 [INFO][5967] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:22:30.908043 containerd[2137]: 2024-06-25 18:22:30.873 [INFO][5967] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 18:22:30.908043 containerd[2137]: 2024-06-25 18:22:30.890 [WARNING][5967] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="61b196cafb2c5f7e34521140e01a9d130a5db5d81d749f79597f19e97bf8184e" HandleID="k8s-pod-network.61b196cafb2c5f7e34521140e01a9d130a5db5d81d749f79597f19e97bf8184e" Workload="ip--172--31--30--218-k8s-coredns--5dd5756b68--nr5tn-eth0" Jun 25 18:22:30.908043 containerd[2137]: 2024-06-25 18:22:30.890 [INFO][5967] ipam_plugin.go 439: Releasing address using workloadID ContainerID="61b196cafb2c5f7e34521140e01a9d130a5db5d81d749f79597f19e97bf8184e" HandleID="k8s-pod-network.61b196cafb2c5f7e34521140e01a9d130a5db5d81d749f79597f19e97bf8184e" Workload="ip--172--31--30--218-k8s-coredns--5dd5756b68--nr5tn-eth0" Jun 25 18:22:30.908043 containerd[2137]: 2024-06-25 18:22:30.893 [INFO][5967] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 18:22:30.908043 containerd[2137]: 2024-06-25 18:22:30.897 [INFO][5948] k8s.go 621: Teardown processing complete. ContainerID="61b196cafb2c5f7e34521140e01a9d130a5db5d81d749f79597f19e97bf8184e" Jun 25 18:22:30.910478 containerd[2137]: time="2024-06-25T18:22:30.909678163Z" level=info msg="TearDown network for sandbox \"61b196cafb2c5f7e34521140e01a9d130a5db5d81d749f79597f19e97bf8184e\" successfully" Jun 25 18:22:30.917767 containerd[2137]: time="2024-06-25T18:22:30.917636587Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"61b196cafb2c5f7e34521140e01a9d130a5db5d81d749f79597f19e97bf8184e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jun 25 18:22:30.918105 containerd[2137]: time="2024-06-25T18:22:30.918057451Z" level=info msg="RemovePodSandbox \"61b196cafb2c5f7e34521140e01a9d130a5db5d81d749f79597f19e97bf8184e\" returns successfully" Jun 25 18:22:31.057359 containerd[2137]: time="2024-06-25T18:22:31.057253420Z" level=info msg="StartContainer for \"cf0e541bcf57a54ec6736076435c244b4d865c1ea97fe00ac70fcbfb4f206bfd\" returns successfully" Jun 25 18:22:31.830633 kubelet[3586]: I0625 18:22:31.830440 3586 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-7f944cc9c7-m2v8x" podStartSLOduration=30.094885014 podCreationTimestamp="2024-06-25 18:21:51 +0000 UTC" firstStartedPulling="2024-06-25 18:22:19.793854428 +0000 UTC m=+52.970480484" lastFinishedPulling="2024-06-25 18:22:30.525904745 +0000 UTC m=+63.702530885" observedRunningTime="2024-06-25 18:22:31.826716127 +0000 UTC m=+65.003342207" watchObservedRunningTime="2024-06-25 18:22:31.826935415 +0000 UTC m=+65.003561459" Jun 25 18:22:32.249875 containerd[2137]: time="2024-06-25T18:22:32.249591641Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:22:32.251730 containerd[2137]: time="2024-06-25T18:22:32.251609741Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0: active requests=0, bytes read=9548567" Jun 25 18:22:32.253863 containerd[2137]: time="2024-06-25T18:22:32.253684230Z" level=info msg="ImageCreate event name:\"sha256:f708eddd5878891da5bc6148fc8bb3f7277210481a15957910fe5fb551a5ed28\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:22:32.260574 containerd[2137]: time="2024-06-25T18:22:32.260384922Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:b3caf3e7b3042b293728a5ab55d893798d60fec55993a9531e82997de0e534cc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:22:32.262610 containerd[2137]: time="2024-06-25T18:22:32.262327374Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" with image id \"sha256:f708eddd5878891da5bc6148fc8bb3f7277210481a15957910fe5fb551a5ed28\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:b3caf3e7b3042b293728a5ab55d893798d60fec55993a9531e82997de0e534cc\", size \"10915087\" in 1.734484341s" Jun 25 18:22:32.262610 containerd[2137]: time="2024-06-25T18:22:32.262405050Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" returns image reference \"sha256:f708eddd5878891da5bc6148fc8bb3f7277210481a15957910fe5fb551a5ed28\"" Jun 25 18:22:32.268022 containerd[2137]: time="2024-06-25T18:22:32.267734298Z" level=info msg="CreateContainer within sandbox \"5f0671fccc5f8e5ae6c8d938b16aaa768eb190577b30f323033d49f3b0804abf\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jun 25 18:22:32.305601 containerd[2137]: time="2024-06-25T18:22:32.304789650Z" level=info msg="CreateContainer within sandbox \"5f0671fccc5f8e5ae6c8d938b16aaa768eb190577b30f323033d49f3b0804abf\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"921cd4645fd6f3324f5c7c6f7f580feae15b1a687e978ba5b57ee218754e50b2\"" Jun 25 18:22:32.306979 containerd[2137]: time="2024-06-25T18:22:32.306796938Z" level=info msg="StartContainer for 
\"921cd4645fd6f3324f5c7c6f7f580feae15b1a687e978ba5b57ee218754e50b2\"" Jun 25 18:22:32.456841 containerd[2137]: time="2024-06-25T18:22:32.456746167Z" level=info msg="StartContainer for \"921cd4645fd6f3324f5c7c6f7f580feae15b1a687e978ba5b57ee218754e50b2\" returns successfully" Jun 25 18:22:32.858653 kubelet[3586]: I0625 18:22:32.855720 3586 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-66nfl" podStartSLOduration=26.341847781 podCreationTimestamp="2024-06-25 18:21:49 +0000 UTC" firstStartedPulling="2024-06-25 18:22:14.748953999 +0000 UTC m=+47.925580067" lastFinishedPulling="2024-06-25 18:22:32.262762158 +0000 UTC m=+65.439388214" observedRunningTime="2024-06-25 18:22:32.854939924 +0000 UTC m=+66.031565980" watchObservedRunningTime="2024-06-25 18:22:32.855655928 +0000 UTC m=+66.032281996" Jun 25 18:22:33.440825 kubelet[3586]: I0625 18:22:33.440750 3586 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jun 25 18:22:33.440825 kubelet[3586]: I0625 18:22:33.440826 3586 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jun 25 18:22:35.685192 systemd[1]: Started sshd@12-172.31.30.218:22-139.178.89.65:45172.service - OpenSSH per-connection server daemon (139.178.89.65:45172). Jun 25 18:22:35.884685 sshd[6081]: Accepted publickey for core from 139.178.89.65 port 45172 ssh2: RSA SHA256:PKLEBy2HMUOnZIR0iwjDMXjr5aDYRcW0FQA7rLNVs1M Jun 25 18:22:35.888392 sshd[6081]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:22:35.903295 systemd-logind[2117]: New session 13 of user core. Jun 25 18:22:35.910072 systemd[1]: Started session-13.scope - Session 13 of User core. Jun 25 18:22:36.237827 sshd[6081]: pam_unix(sshd:session): session closed for user core Jun 25 18:22:36.245212 systemd[1]: sshd@12-172.31.30.218:22-139.178.89.65:45172.service: Deactivated successfully. Jun 25 18:22:36.258345 systemd[1]: session-13.scope: Deactivated successfully. Jun 25 18:22:36.260589 systemd-logind[2117]: Session 13 logged out. Waiting for processes to exit. Jun 25 18:22:36.263183 systemd-logind[2117]: Removed session 13. Jun 25 18:22:41.270669 systemd[1]: Started sshd@13-172.31.30.218:22-139.178.89.65:40642.service - OpenSSH per-connection server daemon (139.178.89.65:40642). Jun 25 18:22:41.446551 sshd[6110]: Accepted publickey for core from 139.178.89.65 port 40642 ssh2: RSA SHA256:PKLEBy2HMUOnZIR0iwjDMXjr5aDYRcW0FQA7rLNVs1M Jun 25 18:22:41.449168 sshd[6110]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:22:41.457768 systemd-logind[2117]: New session 14 of user core. Jun 25 18:22:41.465020 systemd[1]: Started session-14.scope - Session 14 of User core. Jun 25 18:22:41.720345 sshd[6110]: pam_unix(sshd:session): session closed for user core Jun 25 18:22:41.725788 systemd[1]: sshd@13-172.31.30.218:22-139.178.89.65:40642.service: Deactivated successfully. Jun 25 18:22:41.735576 systemd-logind[2117]: Session 14 logged out. Waiting for processes to exit. Jun 25 18:22:41.736331 systemd[1]: session-14.scope: Deactivated successfully. Jun 25 18:22:41.739647 systemd-logind[2117]: Removed session 14. Jun 25 18:22:46.752952 systemd[1]: Started sshd@14-172.31.30.218:22-139.178.89.65:43956.service - OpenSSH per-connection server daemon (139.178.89.65:43956). 
Jun 25 18:22:46.935943 sshd[6126]: Accepted publickey for core from 139.178.89.65 port 43956 ssh2: RSA SHA256:PKLEBy2HMUOnZIR0iwjDMXjr5aDYRcW0FQA7rLNVs1M Jun 25 18:22:46.938664 sshd[6126]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:22:46.947799 systemd-logind[2117]: New session 15 of user core. Jun 25 18:22:46.955115 systemd[1]: Started session-15.scope - Session 15 of User core. Jun 25 18:22:47.217752 sshd[6126]: pam_unix(sshd:session): session closed for user core Jun 25 18:22:47.223608 systemd-logind[2117]: Session 15 logged out. Waiting for processes to exit. Jun 25 18:22:47.224902 systemd[1]: sshd@14-172.31.30.218:22-139.178.89.65:43956.service: Deactivated successfully. Jun 25 18:22:47.234356 systemd[1]: session-15.scope: Deactivated successfully. Jun 25 18:22:47.236245 systemd-logind[2117]: Removed session 15. Jun 25 18:22:52.250079 systemd[1]: Started sshd@15-172.31.30.218:22-139.178.89.65:43958.service - OpenSSH per-connection server daemon (139.178.89.65:43958). Jun 25 18:22:52.442792 sshd[6145]: Accepted publickey for core from 139.178.89.65 port 43958 ssh2: RSA SHA256:PKLEBy2HMUOnZIR0iwjDMXjr5aDYRcW0FQA7rLNVs1M Jun 25 18:22:52.447116 sshd[6145]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:22:52.460615 systemd-logind[2117]: New session 16 of user core. Jun 25 18:22:52.465993 systemd[1]: Started session-16.scope - Session 16 of User core. Jun 25 18:22:52.789826 sshd[6145]: pam_unix(sshd:session): session closed for user core Jun 25 18:22:52.798280 systemd[1]: sshd@15-172.31.30.218:22-139.178.89.65:43958.service: Deactivated successfully. Jun 25 18:22:52.809439 systemd[1]: session-16.scope: Deactivated successfully. Jun 25 18:22:52.811447 systemd-logind[2117]: Session 16 logged out. Waiting for processes to exit. Jun 25 18:22:52.825097 systemd[1]: Started sshd@16-172.31.30.218:22-139.178.89.65:43974.service - OpenSSH per-connection server daemon (139.178.89.65:43974). Jun 25 18:22:52.827837 systemd-logind[2117]: Removed session 16. Jun 25 18:22:53.043604 sshd[6161]: Accepted publickey for core from 139.178.89.65 port 43974 ssh2: RSA SHA256:PKLEBy2HMUOnZIR0iwjDMXjr5aDYRcW0FQA7rLNVs1M Jun 25 18:22:53.044889 sshd[6161]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:22:53.057370 systemd-logind[2117]: New session 17 of user core. Jun 25 18:22:53.066997 systemd[1]: Started session-17.scope - Session 17 of User core. Jun 25 18:22:53.607609 sshd[6161]: pam_unix(sshd:session): session closed for user core Jun 25 18:22:53.617915 systemd-logind[2117]: Session 17 logged out. Waiting for processes to exit. Jun 25 18:22:53.620509 systemd[1]: sshd@16-172.31.30.218:22-139.178.89.65:43974.service: Deactivated successfully. Jun 25 18:22:53.631230 systemd[1]: session-17.scope: Deactivated successfully. Jun 25 18:22:53.651795 systemd[1]: Started sshd@17-172.31.30.218:22-139.178.89.65:43976.service - OpenSSH per-connection server daemon (139.178.89.65:43976). Jun 25 18:22:53.656434 systemd-logind[2117]: Removed session 17. Jun 25 18:22:53.838394 sshd[6173]: Accepted publickey for core from 139.178.89.65 port 43976 ssh2: RSA SHA256:PKLEBy2HMUOnZIR0iwjDMXjr5aDYRcW0FQA7rLNVs1M Jun 25 18:22:53.842404 sshd[6173]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:22:53.857007 systemd-logind[2117]: New session 18 of user core. Jun 25 18:22:53.869805 systemd[1]: Started session-18.scope - Session 18 of User core. 
Jun 25 18:22:55.787920 sshd[6173]: pam_unix(sshd:session): session closed for user core Jun 25 18:22:55.804035 systemd-logind[2117]: Session 18 logged out. Waiting for processes to exit. Jun 25 18:22:55.805447 systemd[1]: sshd@17-172.31.30.218:22-139.178.89.65:43976.service: Deactivated successfully. Jun 25 18:22:55.827867 systemd[1]: session-18.scope: Deactivated successfully. Jun 25 18:22:55.840312 systemd[1]: Started sshd@18-172.31.30.218:22-139.178.89.65:43980.service - OpenSSH per-connection server daemon (139.178.89.65:43980). Jun 25 18:22:55.842315 systemd-logind[2117]: Removed session 18. Jun 25 18:22:55.922120 systemd[1]: run-containerd-runc-k8s.io-ddd3c98f86624d45fca205119dc8e16cd4e05ddd2962610f411cb42377a93e41-runc.dBG0wJ.mount: Deactivated successfully. Jun 25 18:22:56.043541 sshd[6190]: Accepted publickey for core from 139.178.89.65 port 43980 ssh2: RSA SHA256:PKLEBy2HMUOnZIR0iwjDMXjr5aDYRcW0FQA7rLNVs1M Jun 25 18:22:56.049803 sshd[6190]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:22:56.070556 systemd-logind[2117]: New session 19 of user core. Jun 25 18:22:56.082235 systemd[1]: Started session-19.scope - Session 19 of User core. Jun 25 18:22:56.967985 sshd[6190]: pam_unix(sshd:session): session closed for user core Jun 25 18:22:56.982302 systemd[1]: sshd@18-172.31.30.218:22-139.178.89.65:43980.service: Deactivated successfully. Jun 25 18:22:56.997315 systemd-logind[2117]: Session 19 logged out. Waiting for processes to exit. Jun 25 18:22:57.004286 systemd[1]: session-19.scope: Deactivated successfully. Jun 25 18:22:57.019079 systemd[1]: Started sshd@19-172.31.30.218:22-139.178.89.65:60318.service - OpenSSH per-connection server daemon (139.178.89.65:60318). Jun 25 18:22:57.023748 systemd-logind[2117]: Removed session 19. Jun 25 18:22:57.219266 sshd[6228]: Accepted publickey for core from 139.178.89.65 port 60318 ssh2: RSA SHA256:PKLEBy2HMUOnZIR0iwjDMXjr5aDYRcW0FQA7rLNVs1M Jun 25 18:22:57.222746 sshd[6228]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:22:57.233564 systemd-logind[2117]: New session 20 of user core. Jun 25 18:22:57.242498 systemd[1]: Started session-20.scope - Session 20 of User core. Jun 25 18:22:57.501497 sshd[6228]: pam_unix(sshd:session): session closed for user core Jun 25 18:22:57.510134 systemd[1]: sshd@19-172.31.30.218:22-139.178.89.65:60318.service: Deactivated successfully. Jun 25 18:22:57.516066 systemd-logind[2117]: Session 20 logged out. Waiting for processes to exit. Jun 25 18:22:57.516243 systemd[1]: session-20.scope: Deactivated successfully. Jun 25 18:22:57.522669 systemd-logind[2117]: Removed session 20. Jun 25 18:23:02.532083 systemd[1]: Started sshd@20-172.31.30.218:22-139.178.89.65:60326.service - OpenSSH per-connection server daemon (139.178.89.65:60326). Jun 25 18:23:02.709195 sshd[6273]: Accepted publickey for core from 139.178.89.65 port 60326 ssh2: RSA SHA256:PKLEBy2HMUOnZIR0iwjDMXjr5aDYRcW0FQA7rLNVs1M Jun 25 18:23:02.712085 sshd[6273]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:23:02.722325 systemd-logind[2117]: New session 21 of user core. Jun 25 18:23:02.729637 systemd[1]: Started session-21.scope - Session 21 of User core. Jun 25 18:23:03.002756 sshd[6273]: pam_unix(sshd:session): session closed for user core Jun 25 18:23:03.011443 systemd[1]: sshd@20-172.31.30.218:22-139.178.89.65:60326.service: Deactivated successfully. Jun 25 18:23:03.020301 systemd[1]: session-21.scope: Deactivated successfully. 
Jun 25 18:23:03.022854 systemd-logind[2117]: Session 21 logged out. Waiting for processes to exit. Jun 25 18:23:03.025779 systemd-logind[2117]: Removed session 21. Jun 25 18:23:07.324350 kubelet[3586]: I0625 18:23:07.323890 3586 topology_manager.go:215] "Topology Admit Handler" podUID="ea548693-1887-41c4-bdbf-5f2d40bb8a18" podNamespace="calico-apiserver" podName="calico-apiserver-c7ffdb5d6-wkfkp" Jun 25 18:23:07.366993 kubelet[3586]: I0625 18:23:07.366629 3586 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bmh5k\" (UniqueName: \"kubernetes.io/projected/ea548693-1887-41c4-bdbf-5f2d40bb8a18-kube-api-access-bmh5k\") pod \"calico-apiserver-c7ffdb5d6-wkfkp\" (UID: \"ea548693-1887-41c4-bdbf-5f2d40bb8a18\") " pod="calico-apiserver/calico-apiserver-c7ffdb5d6-wkfkp" Jun 25 18:23:07.366993 kubelet[3586]: I0625 18:23:07.366906 3586 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/ea548693-1887-41c4-bdbf-5f2d40bb8a18-calico-apiserver-certs\") pod \"calico-apiserver-c7ffdb5d6-wkfkp\" (UID: \"ea548693-1887-41c4-bdbf-5f2d40bb8a18\") " pod="calico-apiserver/calico-apiserver-c7ffdb5d6-wkfkp" Jun 25 18:23:07.468774 kubelet[3586]: E0625 18:23:07.467926 3586 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found Jun 25 18:23:07.468774 kubelet[3586]: E0625 18:23:07.468085 3586 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ea548693-1887-41c4-bdbf-5f2d40bb8a18-calico-apiserver-certs podName:ea548693-1887-41c4-bdbf-5f2d40bb8a18 nodeName:}" failed. No retries permitted until 2024-06-25 18:23:07.968052744 +0000 UTC m=+101.144678800 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/ea548693-1887-41c4-bdbf-5f2d40bb8a18-calico-apiserver-certs") pod "calico-apiserver-c7ffdb5d6-wkfkp" (UID: "ea548693-1887-41c4-bdbf-5f2d40bb8a18") : secret "calico-apiserver-certs" not found Jun 25 18:23:08.036036 systemd[1]: Started sshd@21-172.31.30.218:22-139.178.89.65:45206.service - OpenSSH per-connection server daemon (139.178.89.65:45206). Jun 25 18:23:08.220717 sshd[6299]: Accepted publickey for core from 139.178.89.65 port 45206 ssh2: RSA SHA256:PKLEBy2HMUOnZIR0iwjDMXjr5aDYRcW0FQA7rLNVs1M Jun 25 18:23:08.224500 sshd[6299]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:23:08.238704 systemd-logind[2117]: New session 22 of user core. Jun 25 18:23:08.248413 systemd[1]: Started session-22.scope - Session 22 of User core. Jun 25 18:23:08.257213 containerd[2137]: time="2024-06-25T18:23:08.255760132Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c7ffdb5d6-wkfkp,Uid:ea548693-1887-41c4-bdbf-5f2d40bb8a18,Namespace:calico-apiserver,Attempt:0,}" Jun 25 18:23:08.633865 sshd[6299]: pam_unix(sshd:session): session closed for user core Jun 25 18:23:08.644653 systemd[1]: sshd@21-172.31.30.218:22-139.178.89.65:45206.service: Deactivated successfully. Jun 25 18:23:08.670775 systemd-logind[2117]: Session 22 logged out. Waiting for processes to exit. Jun 25 18:23:08.672325 systemd[1]: session-22.scope: Deactivated successfully. Jun 25 18:23:08.689187 systemd-logind[2117]: Removed session 22. 
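
Editor's note: the `MountVolume.SetUp` failure above is kubelet reporting that the `calico-apiserver-certs` secret did not exist yet when the `calico-apiserver-c7ffdb5d6-wkfkp` pod was admitted; kubelet schedules a retry (here after 500ms) and keeps retrying until the secret appears, which is why the sandbox is created shortly afterwards with no further mount errors. Purely as an illustration (the secret is normally created by the operator rather than by hand), a check like the following, using the official Kubernetes Python client, shows what kubelet is waiting on:

```python
# Illustrative sketch only: poll for the secret that kubelet's volume mount
# above was waiting on. Assumes credentials are available via a local kubeconfig.
import time
from kubernetes import client, config
from kubernetes.client.rest import ApiException

config.load_kube_config()
v1 = client.CoreV1Api()

def wait_for_secret(name="calico-apiserver-certs",
                    namespace="calico-apiserver",
                    interval=0.5):
    while True:
        try:
            v1.read_namespaced_secret(name, namespace)
            print(f"secret {namespace}/{name} exists; the mount can proceed")
            return
        except ApiException as exc:
            if exc.status != 404:
                raise
            print(f"secret {namespace}/{name} not found; retrying in {interval}s")
            time.sleep(interval)

if __name__ == "__main__":
    wait_for_secret()
```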
Jun 25 18:23:08.690764 systemd-networkd[1695]: cali67bdaa74fa3: Link UP Jun 25 18:23:08.700871 systemd-networkd[1695]: cali67bdaa74fa3: Gained carrier Jun 25 18:23:08.709641 (udev-worker)[6332]: Network interface NamePolicy= disabled on kernel command line. Jun 25 18:23:08.758394 containerd[2137]: 2024-06-25 18:23:08.391 [INFO][6304] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--30--218-k8s-calico--apiserver--c7ffdb5d6--wkfkp-eth0 calico-apiserver-c7ffdb5d6- calico-apiserver ea548693-1887-41c4-bdbf-5f2d40bb8a18 1188 0 2024-06-25 18:23:07 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:c7ffdb5d6 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-30-218 calico-apiserver-c7ffdb5d6-wkfkp eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali67bdaa74fa3 [] []}} ContainerID="a8db586f6accedaab099d55d107907a762f702e1fe214caa2ac6e20aaf016f1a" Namespace="calico-apiserver" Pod="calico-apiserver-c7ffdb5d6-wkfkp" WorkloadEndpoint="ip--172--31--30--218-k8s-calico--apiserver--c7ffdb5d6--wkfkp-" Jun 25 18:23:08.758394 containerd[2137]: 2024-06-25 18:23:08.392 [INFO][6304] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="a8db586f6accedaab099d55d107907a762f702e1fe214caa2ac6e20aaf016f1a" Namespace="calico-apiserver" Pod="calico-apiserver-c7ffdb5d6-wkfkp" WorkloadEndpoint="ip--172--31--30--218-k8s-calico--apiserver--c7ffdb5d6--wkfkp-eth0" Jun 25 18:23:08.758394 containerd[2137]: 2024-06-25 18:23:08.484 [INFO][6321] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a8db586f6accedaab099d55d107907a762f702e1fe214caa2ac6e20aaf016f1a" HandleID="k8s-pod-network.a8db586f6accedaab099d55d107907a762f702e1fe214caa2ac6e20aaf016f1a" Workload="ip--172--31--30--218-k8s-calico--apiserver--c7ffdb5d6--wkfkp-eth0" Jun 25 18:23:08.758394 containerd[2137]: 2024-06-25 18:23:08.524 [INFO][6321] ipam_plugin.go 264: Auto assigning IP ContainerID="a8db586f6accedaab099d55d107907a762f702e1fe214caa2ac6e20aaf016f1a" HandleID="k8s-pod-network.a8db586f6accedaab099d55d107907a762f702e1fe214caa2ac6e20aaf016f1a" Workload="ip--172--31--30--218-k8s-calico--apiserver--c7ffdb5d6--wkfkp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002ee1d0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-30-218", "pod":"calico-apiserver-c7ffdb5d6-wkfkp", "timestamp":"2024-06-25 18:23:08.484856717 +0000 UTC"}, Hostname:"ip-172-31-30-218", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 25 18:23:08.758394 containerd[2137]: 2024-06-25 18:23:08.526 [INFO][6321] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:23:08.758394 containerd[2137]: 2024-06-25 18:23:08.526 [INFO][6321] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jun 25 18:23:08.758394 containerd[2137]: 2024-06-25 18:23:08.527 [INFO][6321] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-30-218' Jun 25 18:23:08.758394 containerd[2137]: 2024-06-25 18:23:08.537 [INFO][6321] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.a8db586f6accedaab099d55d107907a762f702e1fe214caa2ac6e20aaf016f1a" host="ip-172-31-30-218" Jun 25 18:23:08.758394 containerd[2137]: 2024-06-25 18:23:08.565 [INFO][6321] ipam.go 372: Looking up existing affinities for host host="ip-172-31-30-218" Jun 25 18:23:08.758394 containerd[2137]: 2024-06-25 18:23:08.599 [INFO][6321] ipam.go 489: Trying affinity for 192.168.87.192/26 host="ip-172-31-30-218" Jun 25 18:23:08.758394 containerd[2137]: 2024-06-25 18:23:08.614 [INFO][6321] ipam.go 155: Attempting to load block cidr=192.168.87.192/26 host="ip-172-31-30-218" Jun 25 18:23:08.758394 containerd[2137]: 2024-06-25 18:23:08.619 [INFO][6321] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.87.192/26 host="ip-172-31-30-218" Jun 25 18:23:08.758394 containerd[2137]: 2024-06-25 18:23:08.620 [INFO][6321] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.87.192/26 handle="k8s-pod-network.a8db586f6accedaab099d55d107907a762f702e1fe214caa2ac6e20aaf016f1a" host="ip-172-31-30-218" Jun 25 18:23:08.758394 containerd[2137]: 2024-06-25 18:23:08.622 [INFO][6321] ipam.go 1685: Creating new handle: k8s-pod-network.a8db586f6accedaab099d55d107907a762f702e1fe214caa2ac6e20aaf016f1a Jun 25 18:23:08.758394 containerd[2137]: 2024-06-25 18:23:08.628 [INFO][6321] ipam.go 1203: Writing block in order to claim IPs block=192.168.87.192/26 handle="k8s-pod-network.a8db586f6accedaab099d55d107907a762f702e1fe214caa2ac6e20aaf016f1a" host="ip-172-31-30-218" Jun 25 18:23:08.758394 containerd[2137]: 2024-06-25 18:23:08.643 [INFO][6321] ipam.go 1216: Successfully claimed IPs: [192.168.87.197/26] block=192.168.87.192/26 handle="k8s-pod-network.a8db586f6accedaab099d55d107907a762f702e1fe214caa2ac6e20aaf016f1a" host="ip-172-31-30-218" Jun 25 18:23:08.758394 containerd[2137]: 2024-06-25 18:23:08.643 [INFO][6321] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.87.197/26] handle="k8s-pod-network.a8db586f6accedaab099d55d107907a762f702e1fe214caa2ac6e20aaf016f1a" host="ip-172-31-30-218" Jun 25 18:23:08.758394 containerd[2137]: 2024-06-25 18:23:08.643 [INFO][6321] ipam_plugin.go 373: Released host-wide IPAM lock. 
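
Editor's note: the IPAM trace above shows Calico confirming this node's affinity for the block 192.168.87.192/26 and then handing out 192.168.87.197 from it (the workload endpoint below is written with 192.168.87.197/32). The block arithmetic itself is ordinary CIDR math; a short sketch with Python's standard `ipaddress` module (not Calico code) makes the numbers concrete:

```python
# CIDR arithmetic behind the IPAM trace above (standard library only).
import ipaddress

block = ipaddress.ip_network("192.168.87.192/26")
assigned = ipaddress.ip_address("192.168.87.197")

print(block.num_addresses)                      # 64 addresses in a /26
print(block.network_address, "-", block.broadcast_address)
                                                # 192.168.87.192 - 192.168.87.255
print(assigned in block)                        # True: .197 falls inside the block
print(ipaddress.ip_network("192.168.87.197/32").subnet_of(block))  # True
```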
Jun 25 18:23:08.758394 containerd[2137]: 2024-06-25 18:23:08.643 [INFO][6321] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.87.197/26] IPv6=[] ContainerID="a8db586f6accedaab099d55d107907a762f702e1fe214caa2ac6e20aaf016f1a" HandleID="k8s-pod-network.a8db586f6accedaab099d55d107907a762f702e1fe214caa2ac6e20aaf016f1a" Workload="ip--172--31--30--218-k8s-calico--apiserver--c7ffdb5d6--wkfkp-eth0" Jun 25 18:23:08.761848 containerd[2137]: 2024-06-25 18:23:08.661 [INFO][6304] k8s.go 386: Populated endpoint ContainerID="a8db586f6accedaab099d55d107907a762f702e1fe214caa2ac6e20aaf016f1a" Namespace="calico-apiserver" Pod="calico-apiserver-c7ffdb5d6-wkfkp" WorkloadEndpoint="ip--172--31--30--218-k8s-calico--apiserver--c7ffdb5d6--wkfkp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--218-k8s-calico--apiserver--c7ffdb5d6--wkfkp-eth0", GenerateName:"calico-apiserver-c7ffdb5d6-", Namespace:"calico-apiserver", SelfLink:"", UID:"ea548693-1887-41c4-bdbf-5f2d40bb8a18", ResourceVersion:"1188", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 23, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"c7ffdb5d6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-218", ContainerID:"", Pod:"calico-apiserver-c7ffdb5d6-wkfkp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.87.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali67bdaa74fa3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:23:08.761848 containerd[2137]: 2024-06-25 18:23:08.661 [INFO][6304] k8s.go 387: Calico CNI using IPs: [192.168.87.197/32] ContainerID="a8db586f6accedaab099d55d107907a762f702e1fe214caa2ac6e20aaf016f1a" Namespace="calico-apiserver" Pod="calico-apiserver-c7ffdb5d6-wkfkp" WorkloadEndpoint="ip--172--31--30--218-k8s-calico--apiserver--c7ffdb5d6--wkfkp-eth0" Jun 25 18:23:08.761848 containerd[2137]: 2024-06-25 18:23:08.662 [INFO][6304] dataplane_linux.go 68: Setting the host side veth name to cali67bdaa74fa3 ContainerID="a8db586f6accedaab099d55d107907a762f702e1fe214caa2ac6e20aaf016f1a" Namespace="calico-apiserver" Pod="calico-apiserver-c7ffdb5d6-wkfkp" WorkloadEndpoint="ip--172--31--30--218-k8s-calico--apiserver--c7ffdb5d6--wkfkp-eth0" Jun 25 18:23:08.761848 containerd[2137]: 2024-06-25 18:23:08.698 [INFO][6304] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="a8db586f6accedaab099d55d107907a762f702e1fe214caa2ac6e20aaf016f1a" Namespace="calico-apiserver" Pod="calico-apiserver-c7ffdb5d6-wkfkp" WorkloadEndpoint="ip--172--31--30--218-k8s-calico--apiserver--c7ffdb5d6--wkfkp-eth0" Jun 25 18:23:08.761848 containerd[2137]: 2024-06-25 18:23:08.701 [INFO][6304] k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="a8db586f6accedaab099d55d107907a762f702e1fe214caa2ac6e20aaf016f1a" Namespace="calico-apiserver" Pod="calico-apiserver-c7ffdb5d6-wkfkp" WorkloadEndpoint="ip--172--31--30--218-k8s-calico--apiserver--c7ffdb5d6--wkfkp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--218-k8s-calico--apiserver--c7ffdb5d6--wkfkp-eth0", GenerateName:"calico-apiserver-c7ffdb5d6-", Namespace:"calico-apiserver", SelfLink:"", UID:"ea548693-1887-41c4-bdbf-5f2d40bb8a18", ResourceVersion:"1188", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 23, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"c7ffdb5d6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-218", ContainerID:"a8db586f6accedaab099d55d107907a762f702e1fe214caa2ac6e20aaf016f1a", Pod:"calico-apiserver-c7ffdb5d6-wkfkp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.87.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali67bdaa74fa3", MAC:"ba:be:7f:13:6e:7e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:23:08.761848 containerd[2137]: 2024-06-25 18:23:08.732 [INFO][6304] k8s.go 500: Wrote updated endpoint to datastore ContainerID="a8db586f6accedaab099d55d107907a762f702e1fe214caa2ac6e20aaf016f1a" Namespace="calico-apiserver" Pod="calico-apiserver-c7ffdb5d6-wkfkp" WorkloadEndpoint="ip--172--31--30--218-k8s-calico--apiserver--c7ffdb5d6--wkfkp-eth0" Jun 25 18:23:08.860523 containerd[2137]: time="2024-06-25T18:23:08.859830091Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:23:08.863444 containerd[2137]: time="2024-06-25T18:23:08.862563811Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:23:08.863444 containerd[2137]: time="2024-06-25T18:23:08.862639963Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:23:08.863444 containerd[2137]: time="2024-06-25T18:23:08.862667503Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:23:09.005613 containerd[2137]: time="2024-06-25T18:23:09.005548504Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c7ffdb5d6-wkfkp,Uid:ea548693-1887-41c4-bdbf-5f2d40bb8a18,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"a8db586f6accedaab099d55d107907a762f702e1fe214caa2ac6e20aaf016f1a\"" Jun 25 18:23:09.007973 containerd[2137]: time="2024-06-25T18:23:09.007861960Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\"" Jun 25 18:23:10.569740 systemd-networkd[1695]: cali67bdaa74fa3: Gained IPv6LL Jun 25 18:23:11.887374 containerd[2137]: time="2024-06-25T18:23:11.887287870Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:23:11.889211 containerd[2137]: time="2024-06-25T18:23:11.889107934Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.28.0: active requests=0, bytes read=37831527" Jun 25 18:23:11.892588 containerd[2137]: time="2024-06-25T18:23:11.891682894Z" level=info msg="ImageCreate event name:\"sha256:cfbcd2d846bffa8495396cef27ce876ed8ebd8e36f660b8dd9326c1ff4d770ac\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:23:11.898947 containerd[2137]: time="2024-06-25T18:23:11.898860466Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:e8f124312a4c41451e51bfc00b6e98929e9eb0510905f3301542719a3e8d2fec\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:23:11.903570 containerd[2137]: time="2024-06-25T18:23:11.902763814Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" with image id \"sha256:cfbcd2d846bffa8495396cef27ce876ed8ebd8e36f660b8dd9326c1ff4d770ac\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:e8f124312a4c41451e51bfc00b6e98929e9eb0510905f3301542719a3e8d2fec\", size \"39198111\" in 2.894834006s" Jun 25 18:23:11.903925 containerd[2137]: time="2024-06-25T18:23:11.902990782Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" returns image reference \"sha256:cfbcd2d846bffa8495396cef27ce876ed8ebd8e36f660b8dd9326c1ff4d770ac\"" Jun 25 18:23:11.912329 containerd[2137]: time="2024-06-25T18:23:11.911818798Z" level=info msg="CreateContainer within sandbox \"a8db586f6accedaab099d55d107907a762f702e1fe214caa2ac6e20aaf016f1a\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jun 25 18:23:11.937866 containerd[2137]: time="2024-06-25T18:23:11.937667843Z" level=info msg="CreateContainer within sandbox \"a8db586f6accedaab099d55d107907a762f702e1fe214caa2ac6e20aaf016f1a\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"15d72d84c04325a120a62ad103ca6f6232cd852dd364a5c1ae0a1c5f97f2fae0\"" Jun 25 18:23:11.939061 containerd[2137]: time="2024-06-25T18:23:11.938868539Z" level=info msg="StartContainer for \"15d72d84c04325a120a62ad103ca6f6232cd852dd364a5c1ae0a1c5f97f2fae0\"" Jun 25 18:23:12.101854 containerd[2137]: time="2024-06-25T18:23:12.101011483Z" level=info msg="StartContainer for \"15d72d84c04325a120a62ad103ca6f6232cd852dd364a5c1ae0a1c5f97f2fae0\" returns successfully" Jun 25 18:23:13.035087 kubelet[3586]: I0625 18:23:13.032432 3586 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-c7ffdb5d6-wkfkp" podStartSLOduration=3.13553013 podCreationTimestamp="2024-06-25 18:23:07 
+0000 UTC" firstStartedPulling="2024-06-25 18:23:09.007324672 +0000 UTC m=+102.183950728" lastFinishedPulling="2024-06-25 18:23:11.904162762 +0000 UTC m=+105.080788818" observedRunningTime="2024-06-25 18:23:13.030211628 +0000 UTC m=+106.206837684" watchObservedRunningTime="2024-06-25 18:23:13.03236822 +0000 UTC m=+106.208994300" Jun 25 18:23:13.550303 ntpd[2097]: Listen normally on 12 cali67bdaa74fa3 [fe80::ecee:eeff:feee:eeee%11]:123 Jun 25 18:23:13.555079 ntpd[2097]: 25 Jun 18:23:13 ntpd[2097]: Listen normally on 12 cali67bdaa74fa3 [fe80::ecee:eeff:feee:eeee%11]:123 Jun 25 18:23:13.669035 systemd[1]: Started sshd@22-172.31.30.218:22-139.178.89.65:45208.service - OpenSSH per-connection server daemon (139.178.89.65:45208). Jun 25 18:23:13.850373 sshd[6448]: Accepted publickey for core from 139.178.89.65 port 45208 ssh2: RSA SHA256:PKLEBy2HMUOnZIR0iwjDMXjr5aDYRcW0FQA7rLNVs1M Jun 25 18:23:13.854027 sshd[6448]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:23:13.875822 systemd-logind[2117]: New session 23 of user core. Jun 25 18:23:13.888987 systemd[1]: Started session-23.scope - Session 23 of User core. Jun 25 18:23:14.161233 sshd[6448]: pam_unix(sshd:session): session closed for user core Jun 25 18:23:14.170074 systemd[1]: sshd@22-172.31.30.218:22-139.178.89.65:45208.service: Deactivated successfully. Jun 25 18:23:14.170388 systemd-logind[2117]: Session 23 logged out. Waiting for processes to exit. Jun 25 18:23:14.177194 systemd[1]: session-23.scope: Deactivated successfully. Jun 25 18:23:14.178858 systemd-logind[2117]: Removed session 23. Jun 25 18:23:19.196937 systemd[1]: Started sshd@23-172.31.30.218:22-139.178.89.65:34142.service - OpenSSH per-connection server daemon (139.178.89.65:34142). Jun 25 18:23:19.366142 sshd[6466]: Accepted publickey for core from 139.178.89.65 port 34142 ssh2: RSA SHA256:PKLEBy2HMUOnZIR0iwjDMXjr5aDYRcW0FQA7rLNVs1M Jun 25 18:23:19.368936 sshd[6466]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:23:19.378115 systemd-logind[2117]: New session 24 of user core. Jun 25 18:23:19.384113 systemd[1]: Started session-24.scope - Session 24 of User core. Jun 25 18:23:19.632486 sshd[6466]: pam_unix(sshd:session): session closed for user core Jun 25 18:23:19.637394 systemd[1]: sshd@23-172.31.30.218:22-139.178.89.65:34142.service: Deactivated successfully. Jun 25 18:23:19.646054 systemd[1]: session-24.scope: Deactivated successfully. Jun 25 18:23:19.652301 systemd-logind[2117]: Session 24 logged out. Waiting for processes to exit. Jun 25 18:23:19.654356 systemd-logind[2117]: Removed session 24. Jun 25 18:23:24.661064 systemd[1]: Started sshd@24-172.31.30.218:22-139.178.89.65:34156.service - OpenSSH per-connection server daemon (139.178.89.65:34156). Jun 25 18:23:24.831646 sshd[6503]: Accepted publickey for core from 139.178.89.65 port 34156 ssh2: RSA SHA256:PKLEBy2HMUOnZIR0iwjDMXjr5aDYRcW0FQA7rLNVs1M Jun 25 18:23:24.834634 sshd[6503]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:23:24.846903 systemd-logind[2117]: New session 25 of user core. Jun 25 18:23:24.856168 systemd[1]: Started session-25.scope - Session 25 of User core. Jun 25 18:23:25.098387 sshd[6503]: pam_unix(sshd:session): session closed for user core Jun 25 18:23:25.118672 systemd[1]: sshd@24-172.31.30.218:22-139.178.89.65:34156.service: Deactivated successfully. Jun 25 18:23:25.125510 systemd[1]: session-25.scope: Deactivated successfully. 
Jun 25 18:23:25.130711 systemd-logind[2117]: Session 25 logged out. Waiting for processes to exit. Jun 25 18:23:25.134989 systemd-logind[2117]: Removed session 25. Jun 25 18:23:30.134517 systemd[1]: Started sshd@25-172.31.30.218:22-139.178.89.65:56922.service - OpenSSH per-connection server daemon (139.178.89.65:56922). Jun 25 18:23:30.310956 sshd[6544]: Accepted publickey for core from 139.178.89.65 port 56922 ssh2: RSA SHA256:PKLEBy2HMUOnZIR0iwjDMXjr5aDYRcW0FQA7rLNVs1M Jun 25 18:23:30.313959 sshd[6544]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:23:30.324163 systemd-logind[2117]: New session 26 of user core. Jun 25 18:23:30.330270 systemd[1]: Started session-26.scope - Session 26 of User core. Jun 25 18:23:30.586655 sshd[6544]: pam_unix(sshd:session): session closed for user core Jun 25 18:23:30.595326 systemd[1]: sshd@25-172.31.30.218:22-139.178.89.65:56922.service: Deactivated successfully. Jun 25 18:23:30.601709 systemd[1]: session-26.scope: Deactivated successfully. Jun 25 18:23:30.604906 systemd-logind[2117]: Session 26 logged out. Waiting for processes to exit. Jun 25 18:23:30.607719 systemd-logind[2117]: Removed session 26. Jun 25 18:24:05.526041 containerd[2137]: time="2024-06-25T18:24:05.525914017Z" level=info msg="shim disconnected" id=4531f26531ce047d111eccbbc4a77d378ead54c8ac1246aff79d7c4a08ece385 namespace=k8s.io Jun 25 18:24:05.527608 containerd[2137]: time="2024-06-25T18:24:05.526564897Z" level=warning msg="cleaning up after shim disconnected" id=4531f26531ce047d111eccbbc4a77d378ead54c8ac1246aff79d7c4a08ece385 namespace=k8s.io Jun 25 18:24:05.527608 containerd[2137]: time="2024-06-25T18:24:05.526597225Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 18:24:05.529223 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4531f26531ce047d111eccbbc4a77d378ead54c8ac1246aff79d7c4a08ece385-rootfs.mount: Deactivated successfully. 
Jun 25 18:24:06.183994 kubelet[3586]: I0625 18:24:06.183159 3586 scope.go:117] "RemoveContainer" containerID="4531f26531ce047d111eccbbc4a77d378ead54c8ac1246aff79d7c4a08ece385" Jun 25 18:24:06.189509 containerd[2137]: time="2024-06-25T18:24:06.189377244Z" level=info msg="CreateContainer within sandbox \"c809031336c8fbb8d6f95f4aabb894d95f85c3a25e551f1462511cc090181811\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Jun 25 18:24:06.251110 containerd[2137]: time="2024-06-25T18:24:06.250878576Z" level=info msg="CreateContainer within sandbox \"c809031336c8fbb8d6f95f4aabb894d95f85c3a25e551f1462511cc090181811\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"20c7cc18fd89d6a9f99dd1da3401a4b17d92fab338e1e3370b63affaae29fa33\"" Jun 25 18:24:06.251890 containerd[2137]: time="2024-06-25T18:24:06.251823012Z" level=info msg="StartContainer for \"20c7cc18fd89d6a9f99dd1da3401a4b17d92fab338e1e3370b63affaae29fa33\"" Jun 25 18:24:06.378985 containerd[2137]: time="2024-06-25T18:24:06.378723529Z" level=info msg="StartContainer for \"20c7cc18fd89d6a9f99dd1da3401a4b17d92fab338e1e3370b63affaae29fa33\" returns successfully" Jun 25 18:24:06.819186 containerd[2137]: time="2024-06-25T18:24:06.819058935Z" level=info msg="shim disconnected" id=bd552d35048123e4bd4c2e4650929ff2f2c31f6b5e8bf5d0def2b0a75e6fe36c namespace=k8s.io Jun 25 18:24:06.824630 containerd[2137]: time="2024-06-25T18:24:06.819201315Z" level=warning msg="cleaning up after shim disconnected" id=bd552d35048123e4bd4c2e4650929ff2f2c31f6b5e8bf5d0def2b0a75e6fe36c namespace=k8s.io Jun 25 18:24:06.824630 containerd[2137]: time="2024-06-25T18:24:06.819225327Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 18:24:06.833967 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bd552d35048123e4bd4c2e4650929ff2f2c31f6b5e8bf5d0def2b0a75e6fe36c-rootfs.mount: Deactivated successfully. 
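
Editor's note: the sequence above is kubelet's standard reaction to a dead containerd shim: the exited kube-controller-manager container is removed and recreated in the same sandbox with `Attempt:1`, and the same pattern repeats for the tigera-operator and kube-scheduler containers in the entries that follow. The bumped attempt surfaces as `restartCount` in pod status. A hedged sketch of reading that with the official Python client; the namespace names are the usual ones for these components and are assumptions, not taken from the log:

```python
# Illustrative sketch: list restart counts, which the Attempt:1 recreations
# recorded above surface as restartCount in pod status.
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

for namespace in ("kube-system", "tigera-operator"):   # assumed namespaces
    for pod in v1.list_namespaced_pod(namespace).items:
        for status in pod.status.container_statuses or []:
            if status.restart_count:
                print(f"{namespace}/{pod.metadata.name} "
                      f"{status.name}: restarts={status.restart_count}")
```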
Jun 25 18:24:07.189634 kubelet[3586]: I0625 18:24:07.188236 3586 scope.go:117] "RemoveContainer" containerID="bd552d35048123e4bd4c2e4650929ff2f2c31f6b5e8bf5d0def2b0a75e6fe36c" Jun 25 18:24:07.196095 containerd[2137]: time="2024-06-25T18:24:07.195868645Z" level=info msg="CreateContainer within sandbox \"bc83a5ee8e3d7a7814040eaa6d1ea6bde280551dd72aba83c5b50eed9ce4053d\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Jun 25 18:24:07.266663 containerd[2137]: time="2024-06-25T18:24:07.261273265Z" level=info msg="CreateContainer within sandbox \"bc83a5ee8e3d7a7814040eaa6d1ea6bde280551dd72aba83c5b50eed9ce4053d\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"1464842ac390b7dce77764b8b24ebb66328974d5040e2deedaae4377347dd398\"" Jun 25 18:24:07.266663 containerd[2137]: time="2024-06-25T18:24:07.263053837Z" level=info msg="StartContainer for \"1464842ac390b7dce77764b8b24ebb66328974d5040e2deedaae4377347dd398\"" Jun 25 18:24:07.429353 containerd[2137]: time="2024-06-25T18:24:07.429297026Z" level=info msg="StartContainer for \"1464842ac390b7dce77764b8b24ebb66328974d5040e2deedaae4377347dd398\" returns successfully" Jun 25 18:24:10.378005 kubelet[3586]: E0625 18:24:10.377953 3586 controller.go:193] "Failed to update lease" err="Put \"https://172.31.30.218:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-30-218?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jun 25 18:24:12.016744 containerd[2137]: time="2024-06-25T18:24:12.015435737Z" level=info msg="shim disconnected" id=9804bec19c95056dd107242248f1a67db94f381bbb7147cacfb25fccca160f8e namespace=k8s.io Jun 25 18:24:12.018562 containerd[2137]: time="2024-06-25T18:24:12.016758233Z" level=warning msg="cleaning up after shim disconnected" id=9804bec19c95056dd107242248f1a67db94f381bbb7147cacfb25fccca160f8e namespace=k8s.io Jun 25 18:24:12.018562 containerd[2137]: time="2024-06-25T18:24:12.016786865Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 18:24:12.022721 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9804bec19c95056dd107242248f1a67db94f381bbb7147cacfb25fccca160f8e-rootfs.mount: Deactivated successfully. 
Jun 25 18:24:12.222512 kubelet[3586]: I0625 18:24:12.222081 3586 scope.go:117] "RemoveContainer" containerID="9804bec19c95056dd107242248f1a67db94f381bbb7147cacfb25fccca160f8e" Jun 25 18:24:12.227385 containerd[2137]: time="2024-06-25T18:24:12.227308374Z" level=info msg="CreateContainer within sandbox \"cc367f281e2cb6239834933ad2f6ba8ee769955eb6b80b784311725cb237b59d\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Jun 25 18:24:12.255387 containerd[2137]: time="2024-06-25T18:24:12.255216786Z" level=info msg="CreateContainer within sandbox \"cc367f281e2cb6239834933ad2f6ba8ee769955eb6b80b784311725cb237b59d\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"60687a09a5c573318e99df62cc685d0f844df261bb0e1ea4c21b9fda96b7204c\"" Jun 25 18:24:12.256545 containerd[2137]: time="2024-06-25T18:24:12.256400418Z" level=info msg="StartContainer for \"60687a09a5c573318e99df62cc685d0f844df261bb0e1ea4c21b9fda96b7204c\"" Jun 25 18:24:12.390081 containerd[2137]: time="2024-06-25T18:24:12.389848315Z" level=info msg="StartContainer for \"60687a09a5c573318e99df62cc685d0f844df261bb0e1ea4c21b9fda96b7204c\" returns successfully" Jun 25 18:24:20.378944 kubelet[3586]: E0625 18:24:20.378896 3586 controller.go:193] "Failed to update lease" err="Put \"https://172.31.30.218:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-30-218?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jun 25 18:24:22.977417 systemd[1]: run-containerd-runc-k8s.io-cf0e541bcf57a54ec6736076435c244b4d865c1ea97fe00ac70fcbfb4f206bfd-runc.PTIPEL.mount: Deactivated successfully.
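
Editor's note: the two "Failed to update lease" errors are kubelet timing out (the request carries `timeout=10s`) while PUTting its node Lease to the API server at 172.31.30.218:6443, and they fall in the same window as the control-plane container restarts recorded above. For reference, the object kubelet is renewing can be inspected as follows; this is a sketch using the official Python client, with the lease and namespace names taken from the log line itself:

```python
# Sketch: inspect the node Lease that kubelet was failing to renew above.
from kubernetes import client, config

config.load_kube_config()
coordination = client.CoordinationV1Api()

lease = coordination.read_namespaced_lease("ip-172-31-30-218", "kube-node-lease")
spec = lease.spec
print("holder:", spec.holder_identity)
print("lease duration:", spec.lease_duration_seconds, "s")
print("last renew:", spec.renew_time)
```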