Nov 12 17:40:46.201970 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083] Nov 12 17:40:46.202049 kernel: Linux version 6.6.60-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Tue Nov 12 16:24:35 -00 2024 Nov 12 17:40:46.202075 kernel: KASLR disabled due to lack of seed Nov 12 17:40:46.202092 kernel: efi: EFI v2.7 by EDK II Nov 12 17:40:46.202108 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7b003a98 MEMRESERVE=0x7852ee18 Nov 12 17:40:46.202124 kernel: ACPI: Early table checksum verification disabled Nov 12 17:40:46.202142 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON) Nov 12 17:40:46.202158 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013) Nov 12 17:40:46.202174 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001) Nov 12 17:40:46.202191 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527) Nov 12 17:40:46.202211 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001) Nov 12 17:40:46.202227 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001) Nov 12 17:40:46.202243 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001) Nov 12 17:40:46.202259 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001) Nov 12 17:40:46.202279 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001) Nov 12 17:40:46.202299 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001) Nov 12 17:40:46.202317 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001) Nov 12 17:40:46.202334 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200 Nov 12 
17:40:46.202351 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200') Nov 12 17:40:46.202368 kernel: printk: bootconsole [uart0] enabled Nov 12 17:40:46.202385 kernel: NUMA: Failed to initialise from firmware Nov 12 17:40:46.202402 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff] Nov 12 17:40:46.202419 kernel: NUMA: NODE_DATA [mem 0x4b583f800-0x4b5844fff] Nov 12 17:40:46.202436 kernel: Zone ranges: Nov 12 17:40:46.202452 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff] Nov 12 17:40:46.202469 kernel: DMA32 empty Nov 12 17:40:46.202490 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff] Nov 12 17:40:46.202507 kernel: Movable zone start for each node Nov 12 17:40:46.202523 kernel: Early memory node ranges Nov 12 17:40:46.202540 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff] Nov 12 17:40:46.202557 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff] Nov 12 17:40:46.202573 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff] Nov 12 17:40:46.202590 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff] Nov 12 17:40:46.202607 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff] Nov 12 17:40:46.202624 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff] Nov 12 17:40:46.202640 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff] Nov 12 17:40:46.202658 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff] Nov 12 17:40:46.202675 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff] Nov 12 17:40:46.202696 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges Nov 12 17:40:46.202713 kernel: psci: probing for conduit method from ACPI. Nov 12 17:40:46.202736 kernel: psci: PSCIv1.0 detected in firmware. 
Nov 12 17:40:46.202754 kernel: psci: Using standard PSCI v0.2 function IDs Nov 12 17:40:46.202772 kernel: psci: Trusted OS migration not required Nov 12 17:40:46.202794 kernel: psci: SMC Calling Convention v1.1 Nov 12 17:40:46.202812 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976 Nov 12 17:40:46.202830 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096 Nov 12 17:40:46.202849 kernel: pcpu-alloc: [0] 0 [0] 1 Nov 12 17:40:46.202866 kernel: Detected PIPT I-cache on CPU0 Nov 12 17:40:46.202884 kernel: CPU features: detected: GIC system register CPU interface Nov 12 17:40:46.202902 kernel: CPU features: detected: Spectre-v2 Nov 12 17:40:46.202919 kernel: CPU features: detected: Spectre-v3a Nov 12 17:40:46.202937 kernel: CPU features: detected: Spectre-BHB Nov 12 17:40:46.202955 kernel: CPU features: detected: ARM erratum 1742098 Nov 12 17:40:46.202973 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923 Nov 12 17:40:46.204589 kernel: alternatives: applying boot alternatives Nov 12 17:40:46.204616 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=8c276c03cfeb31103ba0b5f1af613bdc698463ad3d29e6750e34154929bf187e Nov 12 17:40:46.204636 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Nov 12 17:40:46.204655 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Nov 12 17:40:46.204674 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Nov 12 17:40:46.204692 kernel: Fallback order for Node 0: 0 Nov 12 17:40:46.204710 kernel: Built 1 zonelists, mobility grouping on. 
Total pages: 991872 Nov 12 17:40:46.204729 kernel: Policy zone: Normal Nov 12 17:40:46.204747 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Nov 12 17:40:46.204765 kernel: software IO TLB: area num 2. Nov 12 17:40:46.204783 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB) Nov 12 17:40:46.204807 kernel: Memory: 3820216K/4030464K available (10240K kernel code, 2184K rwdata, 8096K rodata, 39360K init, 897K bss, 210248K reserved, 0K cma-reserved) Nov 12 17:40:46.204825 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Nov 12 17:40:46.204845 kernel: trace event string verifier disabled Nov 12 17:40:46.204863 kernel: rcu: Preemptible hierarchical RCU implementation. Nov 12 17:40:46.204882 kernel: rcu: RCU event tracing is enabled. Nov 12 17:40:46.204900 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Nov 12 17:40:46.204919 kernel: Trampoline variant of Tasks RCU enabled. Nov 12 17:40:46.204937 kernel: Tracing variant of Tasks RCU enabled. Nov 12 17:40:46.204955 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Nov 12 17:40:46.204973 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Nov 12 17:40:46.205024 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Nov 12 17:40:46.205057 kernel: GICv3: 96 SPIs implemented Nov 12 17:40:46.206051 kernel: GICv3: 0 Extended SPIs implemented Nov 12 17:40:46.206081 kernel: Root IRQ handler: gic_handle_irq Nov 12 17:40:46.206099 kernel: GICv3: GICv3 features: 16 PPIs Nov 12 17:40:46.206117 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000 Nov 12 17:40:46.206135 kernel: ITS [mem 0x10080000-0x1009ffff] Nov 12 17:40:46.206154 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000b0000 (indirect, esz 8, psz 64K, shr 1) Nov 12 17:40:46.206172 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000c0000 (flat, esz 8, psz 64K, shr 1) Nov 12 17:40:46.206190 kernel: GICv3: using LPI property table @0x00000004000d0000 Nov 12 17:40:46.206208 kernel: ITS: Using hypervisor restricted LPI range [128] Nov 12 17:40:46.206226 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000e0000 Nov 12 17:40:46.206243 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Nov 12 17:40:46.206269 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt). Nov 12 17:40:46.206287 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns Nov 12 17:40:46.206305 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns Nov 12 17:40:46.206323 kernel: Console: colour dummy device 80x25 Nov 12 17:40:46.206341 kernel: printk: console [tty1] enabled Nov 12 17:40:46.206359 kernel: ACPI: Core revision 20230628 Nov 12 17:40:46.206378 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 
166.66 BogoMIPS (lpj=83333) Nov 12 17:40:46.206396 kernel: pid_max: default: 32768 minimum: 301 Nov 12 17:40:46.206414 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Nov 12 17:40:46.206432 kernel: landlock: Up and running. Nov 12 17:40:46.206454 kernel: SELinux: Initializing. Nov 12 17:40:46.206472 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Nov 12 17:40:46.206490 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Nov 12 17:40:46.206509 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Nov 12 17:40:46.206527 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Nov 12 17:40:46.206545 kernel: rcu: Hierarchical SRCU implementation. Nov 12 17:40:46.206565 kernel: rcu: Max phase no-delay instances is 400. Nov 12 17:40:46.206583 kernel: Platform MSI: ITS@0x10080000 domain created Nov 12 17:40:46.206605 kernel: PCI/MSI: ITS@0x10080000 domain created Nov 12 17:40:46.206623 kernel: Remapping and enabling EFI services. Nov 12 17:40:46.206641 kernel: smp: Bringing up secondary CPUs ... Nov 12 17:40:46.206659 kernel: Detected PIPT I-cache on CPU1 Nov 12 17:40:46.206677 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000 Nov 12 17:40:46.206695 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000f0000 Nov 12 17:40:46.206713 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083] Nov 12 17:40:46.206732 kernel: smp: Brought up 1 node, 2 CPUs Nov 12 17:40:46.206750 kernel: SMP: Total of 2 processors activated. 
Nov 12 17:40:46.206767 kernel: CPU features: detected: 32-bit EL0 Support Nov 12 17:40:46.206790 kernel: CPU features: detected: 32-bit EL1 Support Nov 12 17:40:46.206808 kernel: CPU features: detected: CRC32 instructions Nov 12 17:40:46.206837 kernel: CPU: All CPU(s) started at EL1 Nov 12 17:40:46.206859 kernel: alternatives: applying system-wide alternatives Nov 12 17:40:46.206878 kernel: devtmpfs: initialized Nov 12 17:40:46.206897 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Nov 12 17:40:46.206915 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Nov 12 17:40:46.206934 kernel: pinctrl core: initialized pinctrl subsystem Nov 12 17:40:46.206953 kernel: SMBIOS 3.0.0 present. Nov 12 17:40:46.206976 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018 Nov 12 17:40:46.208071 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Nov 12 17:40:46.208093 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Nov 12 17:40:46.208112 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Nov 12 17:40:46.208132 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Nov 12 17:40:46.208151 kernel: audit: initializing netlink subsys (disabled) Nov 12 17:40:46.208170 kernel: audit: type=2000 audit(0.293:1): state=initialized audit_enabled=0 res=1 Nov 12 17:40:46.208197 kernel: thermal_sys: Registered thermal governor 'step_wise' Nov 12 17:40:46.208216 kernel: cpuidle: using governor menu Nov 12 17:40:46.208235 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. 
Nov 12 17:40:46.208254 kernel: ASID allocator initialised with 65536 entries Nov 12 17:40:46.208273 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Nov 12 17:40:46.208291 kernel: Serial: AMBA PL011 UART driver Nov 12 17:40:46.208311 kernel: Modules: 17520 pages in range for non-PLT usage Nov 12 17:40:46.208329 kernel: Modules: 509040 pages in range for PLT usage Nov 12 17:40:46.208348 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Nov 12 17:40:46.208372 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Nov 12 17:40:46.208391 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Nov 12 17:40:46.208410 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Nov 12 17:40:46.208429 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Nov 12 17:40:46.208447 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Nov 12 17:40:46.208466 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Nov 12 17:40:46.208485 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Nov 12 17:40:46.208504 kernel: ACPI: Added _OSI(Module Device) Nov 12 17:40:46.208523 kernel: ACPI: Added _OSI(Processor Device) Nov 12 17:40:46.208545 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Nov 12 17:40:46.208564 kernel: ACPI: Added _OSI(Processor Aggregator Device) Nov 12 17:40:46.208583 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Nov 12 17:40:46.208602 kernel: ACPI: Interpreter enabled Nov 12 17:40:46.208621 kernel: ACPI: Using GIC for interrupt routing Nov 12 17:40:46.208639 kernel: ACPI: MCFG table detected, 1 entries Nov 12 17:40:46.208658 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f]) Nov 12 17:40:46.209001 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Nov 12 17:40:46.209231 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Nov 12 17:40:46.209465 kernel: acpi PNP0A08:00: 
_OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Nov 12 17:40:46.209685 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00 Nov 12 17:40:46.209895 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f] Nov 12 17:40:46.209921 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window] Nov 12 17:40:46.209940 kernel: acpiphp: Slot [1] registered Nov 12 17:40:46.209959 kernel: acpiphp: Slot [2] registered Nov 12 17:40:46.212037 kernel: acpiphp: Slot [3] registered Nov 12 17:40:46.212099 kernel: acpiphp: Slot [4] registered Nov 12 17:40:46.212120 kernel: acpiphp: Slot [5] registered Nov 12 17:40:46.212139 kernel: acpiphp: Slot [6] registered Nov 12 17:40:46.212159 kernel: acpiphp: Slot [7] registered Nov 12 17:40:46.212177 kernel: acpiphp: Slot [8] registered Nov 12 17:40:46.212196 kernel: acpiphp: Slot [9] registered Nov 12 17:40:46.212215 kernel: acpiphp: Slot [10] registered Nov 12 17:40:46.212234 kernel: acpiphp: Slot [11] registered Nov 12 17:40:46.212253 kernel: acpiphp: Slot [12] registered Nov 12 17:40:46.212271 kernel: acpiphp: Slot [13] registered Nov 12 17:40:46.212295 kernel: acpiphp: Slot [14] registered Nov 12 17:40:46.212314 kernel: acpiphp: Slot [15] registered Nov 12 17:40:46.212333 kernel: acpiphp: Slot [16] registered Nov 12 17:40:46.212351 kernel: acpiphp: Slot [17] registered Nov 12 17:40:46.212370 kernel: acpiphp: Slot [18] registered Nov 12 17:40:46.212389 kernel: acpiphp: Slot [19] registered Nov 12 17:40:46.212407 kernel: acpiphp: Slot [20] registered Nov 12 17:40:46.212426 kernel: acpiphp: Slot [21] registered Nov 12 17:40:46.212445 kernel: acpiphp: Slot [22] registered Nov 12 17:40:46.212467 kernel: acpiphp: Slot [23] registered Nov 12 17:40:46.212486 kernel: acpiphp: Slot [24] registered Nov 12 17:40:46.212505 kernel: acpiphp: Slot [25] registered Nov 12 17:40:46.212523 kernel: acpiphp: Slot [26] registered Nov 12 17:40:46.212542 kernel: acpiphp: Slot [27] 
registered Nov 12 17:40:46.212560 kernel: acpiphp: Slot [28] registered Nov 12 17:40:46.212579 kernel: acpiphp: Slot [29] registered Nov 12 17:40:46.212598 kernel: acpiphp: Slot [30] registered Nov 12 17:40:46.212616 kernel: acpiphp: Slot [31] registered Nov 12 17:40:46.212635 kernel: PCI host bridge to bus 0000:00 Nov 12 17:40:46.212921 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window] Nov 12 17:40:46.213156 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Nov 12 17:40:46.213351 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window] Nov 12 17:40:46.213568 kernel: pci_bus 0000:00: root bus resource [bus 00-0f] Nov 12 17:40:46.213808 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000 Nov 12 17:40:46.216140 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003 Nov 12 17:40:46.216390 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff] Nov 12 17:40:46.216631 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 Nov 12 17:40:46.216846 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff] Nov 12 17:40:46.218039 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold Nov 12 17:40:46.218319 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 Nov 12 17:40:46.218539 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff] Nov 12 17:40:46.218750 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref] Nov 12 17:40:46.218970 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff] Nov 12 17:40:46.219655 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold Nov 12 17:40:46.219868 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref] Nov 12 17:40:46.220129 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff] Nov 12 17:40:46.220344 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff] Nov 12 17:40:46.220556 kernel: pci 0000:00:05.0: BAR 0: 
assigned [mem 0x80114000-0x80117fff] Nov 12 17:40:46.220770 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff] Nov 12 17:40:46.220968 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window] Nov 12 17:40:46.222244 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Nov 12 17:40:46.222453 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window] Nov 12 17:40:46.222481 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Nov 12 17:40:46.222502 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Nov 12 17:40:46.222522 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Nov 12 17:40:46.222542 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Nov 12 17:40:46.222562 kernel: iommu: Default domain type: Translated Nov 12 17:40:46.222593 kernel: iommu: DMA domain TLB invalidation policy: strict mode Nov 12 17:40:46.222613 kernel: efivars: Registered efivars operations Nov 12 17:40:46.222632 kernel: vgaarb: loaded Nov 12 17:40:46.222652 kernel: clocksource: Switched to clocksource arch_sys_counter Nov 12 17:40:46.222671 kernel: VFS: Disk quotas dquot_6.6.0 Nov 12 17:40:46.222691 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Nov 12 17:40:46.222710 kernel: pnp: PnP ACPI init Nov 12 17:40:46.222927 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved Nov 12 17:40:46.222960 kernel: pnp: PnP ACPI: found 1 devices Nov 12 17:40:46.223003 kernel: NET: Registered PF_INET protocol family Nov 12 17:40:46.223028 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Nov 12 17:40:46.223048 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Nov 12 17:40:46.223067 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Nov 12 17:40:46.223087 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Nov 12 17:40:46.223106 kernel: TCP bind 
hash table entries: 32768 (order: 8, 1048576 bytes, linear) Nov 12 17:40:46.223125 kernel: TCP: Hash tables configured (established 32768 bind 32768) Nov 12 17:40:46.223144 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Nov 12 17:40:46.223169 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Nov 12 17:40:46.223188 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Nov 12 17:40:46.223207 kernel: PCI: CLS 0 bytes, default 64 Nov 12 17:40:46.223226 kernel: kvm [1]: HYP mode not available Nov 12 17:40:46.223244 kernel: Initialise system trusted keyrings Nov 12 17:40:46.223263 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Nov 12 17:40:46.223282 kernel: Key type asymmetric registered Nov 12 17:40:46.223300 kernel: Asymmetric key parser 'x509' registered Nov 12 17:40:46.223319 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Nov 12 17:40:46.223342 kernel: io scheduler mq-deadline registered Nov 12 17:40:46.223361 kernel: io scheduler kyber registered Nov 12 17:40:46.223379 kernel: io scheduler bfq registered Nov 12 17:40:46.223602 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered Nov 12 17:40:46.223630 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Nov 12 17:40:46.223650 kernel: ACPI: button: Power Button [PWRB] Nov 12 17:40:46.223669 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1 Nov 12 17:40:46.223688 kernel: ACPI: button: Sleep Button [SLPB] Nov 12 17:40:46.223712 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Nov 12 17:40:46.223732 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37 Nov 12 17:40:46.223951 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012) Nov 12 17:40:46.223997 kernel: printk: console [ttyS0] disabled Nov 12 17:40:46.224023 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A Nov 12 17:40:46.224043 
kernel: printk: console [ttyS0] enabled Nov 12 17:40:46.224063 kernel: printk: bootconsole [uart0] disabled Nov 12 17:40:46.224082 kernel: thunder_xcv, ver 1.0 Nov 12 17:40:46.224101 kernel: thunder_bgx, ver 1.0 Nov 12 17:40:46.224120 kernel: nicpf, ver 1.0 Nov 12 17:40:46.224147 kernel: nicvf, ver 1.0 Nov 12 17:40:46.224406 kernel: rtc-efi rtc-efi.0: registered as rtc0 Nov 12 17:40:46.224624 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-11-12T17:40:45 UTC (1731433245) Nov 12 17:40:46.224651 kernel: hid: raw HID events driver (C) Jiri Kosina Nov 12 17:40:46.224670 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available Nov 12 17:40:46.224689 kernel: watchdog: Delayed init of the lockup detector failed: -19 Nov 12 17:40:46.224708 kernel: watchdog: Hard watchdog permanently disabled Nov 12 17:40:46.224734 kernel: NET: Registered PF_INET6 protocol family Nov 12 17:40:46.224753 kernel: Segment Routing with IPv6 Nov 12 17:40:46.224772 kernel: In-situ OAM (IOAM) with IPv6 Nov 12 17:40:46.224790 kernel: NET: Registered PF_PACKET protocol family Nov 12 17:40:46.224809 kernel: Key type dns_resolver registered Nov 12 17:40:46.224828 kernel: registered taskstats version 1 Nov 12 17:40:46.224847 kernel: Loading compiled-in X.509 certificates Nov 12 17:40:46.224866 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.60-flatcar: 277bea35d8d47c9841f307ab609d4271c3622dcb' Nov 12 17:40:46.224884 kernel: Key type .fscrypt registered Nov 12 17:40:46.224903 kernel: Key type fscrypt-provisioning registered Nov 12 17:40:46.224926 kernel: ima: No TPM chip found, activating TPM-bypass! 
Nov 12 17:40:46.224945 kernel: ima: Allocated hash algorithm: sha1 Nov 12 17:40:46.224964 kernel: ima: No architecture policies found Nov 12 17:40:46.225019 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Nov 12 17:40:46.225042 kernel: clk: Disabling unused clocks Nov 12 17:40:46.225061 kernel: Freeing unused kernel memory: 39360K Nov 12 17:40:46.225081 kernel: Run /init as init process Nov 12 17:40:46.225100 kernel: with arguments: Nov 12 17:40:46.225119 kernel: /init Nov 12 17:40:46.225144 kernel: with environment: Nov 12 17:40:46.225163 kernel: HOME=/ Nov 12 17:40:46.225182 kernel: TERM=linux Nov 12 17:40:46.225200 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Nov 12 17:40:46.225224 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Nov 12 17:40:46.225248 systemd[1]: Detected virtualization amazon. Nov 12 17:40:46.225269 systemd[1]: Detected architecture arm64. Nov 12 17:40:46.225293 systemd[1]: Running in initrd. Nov 12 17:40:46.225313 systemd[1]: No hostname configured, using default hostname. Nov 12 17:40:46.225334 systemd[1]: Hostname set to . Nov 12 17:40:46.225355 systemd[1]: Initializing machine ID from VM UUID. Nov 12 17:40:46.225375 systemd[1]: Queued start job for default target initrd.target. Nov 12 17:40:46.225396 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 12 17:40:46.225435 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 12 17:40:46.225458 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... 
Nov 12 17:40:46.225485 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 12 17:40:46.225506 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Nov 12 17:40:46.225527 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Nov 12 17:40:46.225551 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Nov 12 17:40:46.225573 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Nov 12 17:40:46.225593 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 12 17:40:46.225614 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 12 17:40:46.225640 systemd[1]: Reached target paths.target - Path Units. Nov 12 17:40:46.225661 systemd[1]: Reached target slices.target - Slice Units. Nov 12 17:40:46.225682 systemd[1]: Reached target swap.target - Swaps. Nov 12 17:40:46.225702 systemd[1]: Reached target timers.target - Timer Units. Nov 12 17:40:46.225723 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Nov 12 17:40:46.225744 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 12 17:40:46.225765 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Nov 12 17:40:46.225786 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Nov 12 17:40:46.225806 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 12 17:40:46.225831 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 12 17:40:46.225852 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 12 17:40:46.225873 systemd[1]: Reached target sockets.target - Socket Units. 
Nov 12 17:40:46.225894 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Nov 12 17:40:46.225939 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 12 17:40:46.225966 systemd[1]: Finished network-cleanup.service - Network Cleanup. Nov 12 17:40:46.226069 systemd[1]: Starting systemd-fsck-usr.service... Nov 12 17:40:46.226093 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 12 17:40:46.226120 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 12 17:40:46.226141 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 12 17:40:46.226162 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Nov 12 17:40:46.226183 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 12 17:40:46.226204 systemd[1]: Finished systemd-fsck-usr.service. Nov 12 17:40:46.226262 systemd-journald[251]: Collecting audit messages is disabled. Nov 12 17:40:46.226312 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Nov 12 17:40:46.226334 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 12 17:40:46.226354 systemd-journald[251]: Journal started Nov 12 17:40:46.226396 systemd-journald[251]: Runtime Journal (/run/log/journal/ec2d90ccc262ecd1d3e67c2696a3e640) is 8.0M, max 75.3M, 67.3M free. Nov 12 17:40:46.195461 systemd-modules-load[252]: Inserted module 'overlay' Nov 12 17:40:46.235806 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Nov 12 17:40:46.235887 systemd[1]: Started systemd-journald.service - Journal Service. 
Nov 12 17:40:46.238033 kernel: Bridge firewalling registered Nov 12 17:40:46.238053 systemd-modules-load[252]: Inserted module 'br_netfilter' Nov 12 17:40:46.246350 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 12 17:40:46.255083 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 12 17:40:46.270407 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 12 17:40:46.286424 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 12 17:40:46.291250 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 12 17:40:46.304454 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 12 17:40:46.321625 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 12 17:40:46.344667 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 12 17:40:46.361819 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 12 17:40:46.372365 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Nov 12 17:40:46.374701 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 12 17:40:46.390274 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Nov 12 17:40:46.412963 dracut-cmdline[286]: dracut-dracut-053 Nov 12 17:40:46.421216 dracut-cmdline[286]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=8c276c03cfeb31103ba0b5f1af613bdc698463ad3d29e6750e34154929bf187e Nov 12 17:40:46.472706 systemd-resolved[288]: Positive Trust Anchors: Nov 12 17:40:46.472744 systemd-resolved[288]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 12 17:40:46.472807 systemd-resolved[288]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 12 17:40:46.579010 kernel: SCSI subsystem initialized Nov 12 17:40:46.586019 kernel: Loading iSCSI transport class v2.0-870. Nov 12 17:40:46.599038 kernel: iscsi: registered transport (tcp) Nov 12 17:40:46.621028 kernel: iscsi: registered transport (qla4xxx) Nov 12 17:40:46.621097 kernel: QLogic iSCSI HBA Driver Nov 12 17:40:46.710010 kernel: random: crng init done Nov 12 17:40:46.710349 systemd-resolved[288]: Defaulting to hostname 'linux'. Nov 12 17:40:46.713774 systemd[1]: Started systemd-resolved.service - Network Name Resolution. 
Nov 12 17:40:46.729676 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Nov 12 17:40:46.739536 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Nov 12 17:40:46.749327 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Nov 12 17:40:46.794414 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Nov 12 17:40:46.794491 kernel: device-mapper: uevent: version 1.0.3
Nov 12 17:40:46.794519 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Nov 12 17:40:46.862034 kernel: raid6: neonx8 gen() 6720 MB/s
Nov 12 17:40:46.879015 kernel: raid6: neonx4 gen() 6541 MB/s
Nov 12 17:40:46.896015 kernel: raid6: neonx2 gen() 5450 MB/s
Nov 12 17:40:46.913014 kernel: raid6: neonx1 gen() 3955 MB/s
Nov 12 17:40:46.930019 kernel: raid6: int64x8 gen() 3801 MB/s
Nov 12 17:40:46.947018 kernel: raid6: int64x4 gen() 3714 MB/s
Nov 12 17:40:46.964021 kernel: raid6: int64x2 gen() 3603 MB/s
Nov 12 17:40:46.981831 kernel: raid6: int64x1 gen() 2768 MB/s
Nov 12 17:40:46.981876 kernel: raid6: using algorithm neonx8 gen() 6720 MB/s
Nov 12 17:40:46.999802 kernel: raid6: .... xor() 4847 MB/s, rmw enabled
Nov 12 17:40:46.999860 kernel: raid6: using neon recovery algorithm
Nov 12 17:40:47.008507 kernel: xor: measuring software checksum speed
Nov 12 17:40:47.008609 kernel: 8regs : 10956 MB/sec
Nov 12 17:40:47.009605 kernel: 32regs : 11955 MB/sec
Nov 12 17:40:47.010794 kernel: arm64_neon : 9567 MB/sec
Nov 12 17:40:47.010827 kernel: xor: using function: 32regs (11955 MB/sec)
Nov 12 17:40:47.095035 kernel: Btrfs loaded, zoned=no, fsverity=no
Nov 12 17:40:47.115036 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Nov 12 17:40:47.136281 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 12 17:40:47.175327 systemd-udevd[470]: Using default interface naming scheme 'v255'.
Nov 12 17:40:47.184904 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 12 17:40:47.198241 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Nov 12 17:40:47.238262 dracut-pre-trigger[479]: rd.md=0: removing MD RAID activation
Nov 12 17:40:47.299090 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 12 17:40:47.310288 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 12 17:40:47.436749 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 12 17:40:47.450382 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Nov 12 17:40:47.497228 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Nov 12 17:40:47.505324 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 12 17:40:47.521601 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 12 17:40:47.524121 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Nov 12 17:40:47.547842 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Nov 12 17:40:47.596496 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Nov 12 17:40:47.637964 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Nov 12 17:40:47.638052 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012)
Nov 12 17:40:47.656901 kernel: ena 0000:00:05.0: ENA device version: 0.10
Nov 12 17:40:47.657210 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Nov 12 17:40:47.657495 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:22:20:30:ae:15
Nov 12 17:40:47.657547 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Nov 12 17:40:47.657779 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 12 17:40:47.663907 (udev-worker)[516]: Network interface NamePolicy= disabled on kernel command line.
Nov 12 17:40:47.669056 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 12 17:40:47.677655 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 12 17:40:47.679761 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 12 17:40:47.693237 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Nov 12 17:40:47.702402 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 12 17:40:47.727483 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Nov 12 17:40:47.727549 kernel: nvme nvme0: pci function 0000:00:04.0
Nov 12 17:40:47.739031 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Nov 12 17:40:47.747071 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 12 17:40:47.757846 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Nov 12 17:40:47.757913 kernel: GPT:9289727 != 16777215
Nov 12 17:40:47.757940 kernel: GPT:Alternate GPT header not at the end of the disk.
Nov 12 17:40:47.758690 kernel: GPT:9289727 != 16777215
Nov 12 17:40:47.759748 kernel: GPT: Use GNU Parted to correct GPT errors.
Nov 12 17:40:47.760684 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Nov 12 17:40:47.761021 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 12 17:40:47.805295 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 12 17:40:47.881032 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by (udev-worker) (531)
Nov 12 17:40:47.899043 kernel: BTRFS: device fsid 93a9d474-e751-47b7-a65f-e39ca9abd47a devid 1 transid 40 /dev/nvme0n1p3 scanned by (udev-worker) (526)
Nov 12 17:40:47.944717 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Nov 12 17:40:48.000162 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Nov 12 17:40:48.027524 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Nov 12 17:40:48.042224 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Nov 12 17:40:48.043054 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Nov 12 17:40:48.057330 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Nov 12 17:40:48.074244 disk-uuid[664]: Primary Header is updated.
Nov 12 17:40:48.074244 disk-uuid[664]: Secondary Entries is updated.
Nov 12 17:40:48.074244 disk-uuid[664]: Secondary Header is updated.
Nov 12 17:40:48.083217 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Nov 12 17:40:48.093033 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Nov 12 17:40:48.102024 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Nov 12 17:40:49.101116 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Nov 12 17:40:49.102203 disk-uuid[665]: The operation has completed successfully.
Nov 12 17:40:49.295310 systemd[1]: disk-uuid.service: Deactivated successfully.
Nov 12 17:40:49.295902 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Nov 12 17:40:49.359292 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Nov 12 17:40:49.378959 sh[1008]: Success
Nov 12 17:40:49.403030 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Nov 12 17:40:49.518721 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Nov 12 17:40:49.537441 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Nov 12 17:40:49.546613 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Nov 12 17:40:49.570679 kernel: BTRFS info (device dm-0): first mount of filesystem 93a9d474-e751-47b7-a65f-e39ca9abd47a
Nov 12 17:40:49.570741 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Nov 12 17:40:49.570779 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Nov 12 17:40:49.572039 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Nov 12 17:40:49.573126 kernel: BTRFS info (device dm-0): using free space tree
Nov 12 17:40:49.692009 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Nov 12 17:40:49.707696 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Nov 12 17:40:49.711656 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Nov 12 17:40:49.720460 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Nov 12 17:40:49.733522 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Nov 12 17:40:49.767917 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 936a2172-6c61-4af6-a047-e38e0a3ff18b
Nov 12 17:40:49.769043 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Nov 12 17:40:49.769078 kernel: BTRFS info (device nvme0n1p6): using free space tree
Nov 12 17:40:49.776012 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Nov 12 17:40:49.795617 systemd[1]: mnt-oem.mount: Deactivated successfully.
Nov 12 17:40:49.798101 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 936a2172-6c61-4af6-a047-e38e0a3ff18b
Nov 12 17:40:49.808931 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Nov 12 17:40:49.822592 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Nov 12 17:40:49.917039 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 12 17:40:49.930296 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Nov 12 17:40:49.991566 systemd-networkd[1200]: lo: Link UP
Nov 12 17:40:49.991591 systemd-networkd[1200]: lo: Gained carrier
Nov 12 17:40:49.996633 systemd-networkd[1200]: Enumeration completed
Nov 12 17:40:49.996792 systemd[1]: Started systemd-networkd.service - Network Configuration.
Nov 12 17:40:49.999084 systemd[1]: Reached target network.target - Network.
Nov 12 17:40:50.006028 systemd-networkd[1200]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 12 17:40:50.006047 systemd-networkd[1200]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 12 17:40:50.014144 systemd-networkd[1200]: eth0: Link UP
Nov 12 17:40:50.014162 systemd-networkd[1200]: eth0: Gained carrier
Nov 12 17:40:50.014180 systemd-networkd[1200]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 12 17:40:50.033071 systemd-networkd[1200]: eth0: DHCPv4 address 172.31.27.255/20, gateway 172.31.16.1 acquired from 172.31.16.1
Nov 12 17:40:50.190488 ignition[1123]: Ignition 2.19.0
Nov 12 17:40:50.190510 ignition[1123]: Stage: fetch-offline
Nov 12 17:40:50.191124 ignition[1123]: no configs at "/usr/lib/ignition/base.d"
Nov 12 17:40:50.196045 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 12 17:40:50.191149 ignition[1123]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Nov 12 17:40:50.191647 ignition[1123]: Ignition finished successfully
Nov 12 17:40:50.213719 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Nov 12 17:40:50.237402 ignition[1209]: Ignition 2.19.0
Nov 12 17:40:50.237425 ignition[1209]: Stage: fetch
Nov 12 17:40:50.238083 ignition[1209]: no configs at "/usr/lib/ignition/base.d"
Nov 12 17:40:50.238108 ignition[1209]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Nov 12 17:40:50.238262 ignition[1209]: PUT http://169.254.169.254/latest/api/token: attempt #1
Nov 12 17:40:50.250214 ignition[1209]: PUT result: OK
Nov 12 17:40:50.253311 ignition[1209]: parsed url from cmdline: ""
Nov 12 17:40:50.253328 ignition[1209]: no config URL provided
Nov 12 17:40:50.253344 ignition[1209]: reading system config file "/usr/lib/ignition/user.ign"
Nov 12 17:40:50.253370 ignition[1209]: no config at "/usr/lib/ignition/user.ign"
Nov 12 17:40:50.253422 ignition[1209]: PUT http://169.254.169.254/latest/api/token: attempt #1
Nov 12 17:40:50.255053 ignition[1209]: PUT result: OK
Nov 12 17:40:50.255130 ignition[1209]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Nov 12 17:40:50.259217 ignition[1209]: GET result: OK
Nov 12 17:40:50.265463 ignition[1209]: parsing config with SHA512: ef319ffc005154e9f5d3389ca082a3ff403cb075aee6a35cc84778b3da0d8e0b06f0cb1f40a044afacbc058b84469aab343293ec4a8972ee2eee7c643dbafff5
Nov 12 17:40:50.274426 unknown[1209]: fetched base config from "system"
Nov 12 17:40:50.274468 unknown[1209]: fetched base config from "system"
Nov 12 17:40:50.274484 unknown[1209]: fetched user config from "aws"
Nov 12 17:40:50.277857 ignition[1209]: fetch: fetch complete
Nov 12 17:40:50.277871 ignition[1209]: fetch: fetch passed
Nov 12 17:40:50.278134 ignition[1209]: Ignition finished successfully
Nov 12 17:40:50.286179 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Nov 12 17:40:50.306424 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Nov 12 17:40:50.331667 ignition[1215]: Ignition 2.19.0
Nov 12 17:40:50.332295 ignition[1215]: Stage: kargs
Nov 12 17:40:50.332944 ignition[1215]: no configs at "/usr/lib/ignition/base.d"
Nov 12 17:40:50.332969 ignition[1215]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Nov 12 17:40:50.333186 ignition[1215]: PUT http://169.254.169.254/latest/api/token: attempt #1
Nov 12 17:40:50.337472 ignition[1215]: PUT result: OK
Nov 12 17:40:50.346192 ignition[1215]: kargs: kargs passed
Nov 12 17:40:50.346482 ignition[1215]: Ignition finished successfully
Nov 12 17:40:50.351425 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Nov 12 17:40:50.367222 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Nov 12 17:40:50.388688 ignition[1221]: Ignition 2.19.0
Nov 12 17:40:50.388717 ignition[1221]: Stage: disks
Nov 12 17:40:50.389683 ignition[1221]: no configs at "/usr/lib/ignition/base.d"
Nov 12 17:40:50.389708 ignition[1221]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Nov 12 17:40:50.389860 ignition[1221]: PUT http://169.254.169.254/latest/api/token: attempt #1
Nov 12 17:40:50.397810 ignition[1221]: PUT result: OK
Nov 12 17:40:50.402390 ignition[1221]: disks: disks passed
Nov 12 17:40:50.402509 ignition[1221]: Ignition finished successfully
Nov 12 17:40:50.407061 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Nov 12 17:40:50.409843 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Nov 12 17:40:50.413040 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Nov 12 17:40:50.415439 systemd[1]: Reached target local-fs.target - Local File Systems.
Nov 12 17:40:50.417318 systemd[1]: Reached target sysinit.target - System Initialization.
Nov 12 17:40:50.419273 systemd[1]: Reached target basic.target - Basic System.
Nov 12 17:40:50.439203 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Nov 12 17:40:50.474067 systemd-fsck[1229]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Nov 12 17:40:50.481493 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Nov 12 17:40:50.500464 systemd[1]: Mounting sysroot.mount - /sysroot...
Nov 12 17:40:50.576029 kernel: EXT4-fs (nvme0n1p9): mounted filesystem b3af0fd7-3c7c-4cdc-9b88-dae3d10ea922 r/w with ordered data mode. Quota mode: none.
Nov 12 17:40:50.576823 systemd[1]: Mounted sysroot.mount - /sysroot.
Nov 12 17:40:50.581423 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Nov 12 17:40:50.597181 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 12 17:40:50.607543 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Nov 12 17:40:50.612000 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Nov 12 17:40:50.612090 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Nov 12 17:40:50.612139 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 12 17:40:50.631975 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Nov 12 17:40:50.643899 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Nov 12 17:40:50.651038 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/nvme0n1p6 scanned by mount (1248)
Nov 12 17:40:50.655017 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 936a2172-6c61-4af6-a047-e38e0a3ff18b
Nov 12 17:40:50.655124 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Nov 12 17:40:50.655153 kernel: BTRFS info (device nvme0n1p6): using free space tree
Nov 12 17:40:50.669275 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Nov 12 17:40:50.671200 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 12 17:40:51.146206 initrd-setup-root[1272]: cut: /sysroot/etc/passwd: No such file or directory
Nov 12 17:40:51.178241 initrd-setup-root[1279]: cut: /sysroot/etc/group: No such file or directory
Nov 12 17:40:51.186131 systemd-networkd[1200]: eth0: Gained IPv6LL
Nov 12 17:40:51.189729 initrd-setup-root[1286]: cut: /sysroot/etc/shadow: No such file or directory
Nov 12 17:40:51.199017 initrd-setup-root[1293]: cut: /sysroot/etc/gshadow: No such file or directory
Nov 12 17:40:51.599282 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Nov 12 17:40:51.611185 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Nov 12 17:40:51.632011 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Nov 12 17:40:51.646960 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Nov 12 17:40:51.649298 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 936a2172-6c61-4af6-a047-e38e0a3ff18b
Nov 12 17:40:51.687105 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Nov 12 17:40:51.700048 ignition[1361]: INFO : Ignition 2.19.0
Nov 12 17:40:51.700048 ignition[1361]: INFO : Stage: mount
Nov 12 17:40:51.703473 ignition[1361]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 12 17:40:51.703473 ignition[1361]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Nov 12 17:40:51.703473 ignition[1361]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Nov 12 17:40:51.710580 ignition[1361]: INFO : PUT result: OK
Nov 12 17:40:51.716479 ignition[1361]: INFO : mount: mount passed
Nov 12 17:40:51.718479 ignition[1361]: INFO : Ignition finished successfully
Nov 12 17:40:51.723057 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Nov 12 17:40:51.733187 systemd[1]: Starting ignition-files.service - Ignition (files)...
Nov 12 17:40:51.765306 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 12 17:40:51.788036 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1373)
Nov 12 17:40:51.792338 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 936a2172-6c61-4af6-a047-e38e0a3ff18b
Nov 12 17:40:51.792397 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Nov 12 17:40:51.793692 kernel: BTRFS info (device nvme0n1p6): using free space tree
Nov 12 17:40:51.799518 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Nov 12 17:40:51.802805 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 12 17:40:51.842601 ignition[1390]: INFO : Ignition 2.19.0
Nov 12 17:40:51.842601 ignition[1390]: INFO : Stage: files
Nov 12 17:40:51.846465 ignition[1390]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 12 17:40:51.846465 ignition[1390]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Nov 12 17:40:51.846465 ignition[1390]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Nov 12 17:40:51.852928 ignition[1390]: INFO : PUT result: OK
Nov 12 17:40:51.857946 ignition[1390]: DEBUG : files: compiled without relabeling support, skipping
Nov 12 17:40:51.861058 ignition[1390]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Nov 12 17:40:51.861058 ignition[1390]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Nov 12 17:40:51.879482 ignition[1390]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Nov 12 17:40:51.882246 ignition[1390]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Nov 12 17:40:51.885282 unknown[1390]: wrote ssh authorized keys file for user: core
Nov 12 17:40:51.889558 ignition[1390]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Nov 12 17:40:51.889558 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Nov 12 17:40:51.889558 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Nov 12 17:40:52.121656 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Nov 12 17:40:52.308477 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Nov 12 17:40:52.312261 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Nov 12 17:40:52.312261 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Nov 12 17:40:52.312261 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Nov 12 17:40:52.312261 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Nov 12 17:40:52.312261 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 12 17:40:52.312261 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 12 17:40:52.312261 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 12 17:40:52.312261 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 12 17:40:52.353903 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Nov 12 17:40:52.353903 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Nov 12 17:40:52.353903 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Nov 12 17:40:52.353903 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Nov 12 17:40:52.353903 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Nov 12 17:40:52.353903 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-arm64.raw: attempt #1
Nov 12 17:40:52.789324 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Nov 12 17:40:53.156119 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Nov 12 17:40:53.156119 ignition[1390]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Nov 12 17:40:53.168941 ignition[1390]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 12 17:40:53.172551 ignition[1390]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 12 17:40:53.172551 ignition[1390]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Nov 12 17:40:53.172551 ignition[1390]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Nov 12 17:40:53.172551 ignition[1390]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Nov 12 17:40:53.172551 ignition[1390]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Nov 12 17:40:53.172551 ignition[1390]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Nov 12 17:40:53.172551 ignition[1390]: INFO : files: files passed
Nov 12 17:40:53.172551 ignition[1390]: INFO : Ignition finished successfully
Nov 12 17:40:53.176448 systemd[1]: Finished ignition-files.service - Ignition (files).
Nov 12 17:40:53.198639 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Nov 12 17:40:53.213895 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Nov 12 17:40:53.222951 systemd[1]: ignition-quench.service: Deactivated successfully.
Nov 12 17:40:53.223973 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Nov 12 17:40:53.254526 initrd-setup-root-after-ignition[1418]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 12 17:40:53.254526 initrd-setup-root-after-ignition[1418]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Nov 12 17:40:53.262850 initrd-setup-root-after-ignition[1422]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 12 17:40:53.268231 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Nov 12 17:40:53.271829 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Nov 12 17:40:53.289298 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Nov 12 17:40:53.342379 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Nov 12 17:40:53.342804 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Nov 12 17:40:53.349414 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Nov 12 17:40:53.351536 systemd[1]: Reached target initrd.target - Initrd Default Target.
Nov 12 17:40:53.353614 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Nov 12 17:40:53.368091 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Nov 12 17:40:53.399068 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 12 17:40:53.408368 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Nov 12 17:40:53.443445 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Nov 12 17:40:53.446786 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 12 17:40:53.451758 systemd[1]: Stopped target timers.target - Timer Units.
Nov 12 17:40:53.454796 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Nov 12 17:40:53.455224 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 12 17:40:53.458945 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Nov 12 17:40:53.467712 systemd[1]: Stopped target basic.target - Basic System.
Nov 12 17:40:53.470137 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Nov 12 17:40:53.475825 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 12 17:40:53.478539 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Nov 12 17:40:53.481001 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Nov 12 17:40:53.489178 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 12 17:40:53.492511 systemd[1]: Stopped target sysinit.target - System Initialization.
Nov 12 17:40:53.498194 systemd[1]: Stopped target local-fs.target - Local File Systems.
Nov 12 17:40:53.500293 systemd[1]: Stopped target swap.target - Swaps.
Nov 12 17:40:53.503742 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Nov 12 17:40:53.503972 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Nov 12 17:40:53.511790 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Nov 12 17:40:53.514448 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 12 17:40:53.521487 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Nov 12 17:40:53.525091 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 12 17:40:53.530326 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Nov 12 17:40:53.530606 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Nov 12 17:40:53.534709 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Nov 12 17:40:53.534941 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Nov 12 17:40:53.537738 systemd[1]: ignition-files.service: Deactivated successfully.
Nov 12 17:40:53.537942 systemd[1]: Stopped ignition-files.service - Ignition (files).
Nov 12 17:40:53.558779 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Nov 12 17:40:53.575320 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Nov 12 17:40:53.575520 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Nov 12 17:40:53.575758 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 12 17:40:53.576480 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Nov 12 17:40:53.576681 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 12 17:40:53.600572 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Nov 12 17:40:53.600773 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Nov 12 17:40:53.612050 ignition[1442]: INFO : Ignition 2.19.0
Nov 12 17:40:53.612050 ignition[1442]: INFO : Stage: umount
Nov 12 17:40:53.612050 ignition[1442]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 12 17:40:53.612050 ignition[1442]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Nov 12 17:40:53.612050 ignition[1442]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Nov 12 17:40:53.627498 ignition[1442]: INFO : PUT result: OK
Nov 12 17:40:53.632883 ignition[1442]: INFO : umount: umount passed
Nov 12 17:40:53.632883 ignition[1442]: INFO : Ignition finished successfully
Nov 12 17:40:53.639185 systemd[1]: ignition-mount.service: Deactivated successfully.
Nov 12 17:40:53.639441 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Nov 12 17:40:53.641811 systemd[1]: ignition-disks.service: Deactivated successfully.
Nov 12 17:40:53.641938 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Nov 12 17:40:53.644131 systemd[1]: ignition-kargs.service: Deactivated successfully.
Nov 12 17:40:53.646068 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Nov 12 17:40:53.653762 systemd[1]: ignition-fetch.service: Deactivated successfully.
Nov 12 17:40:53.653899 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Nov 12 17:40:53.659017 systemd[1]: Stopped target network.target - Network.
Nov 12 17:40:53.661351 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Nov 12 17:40:53.661475 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 12 17:40:53.666198 systemd[1]: Stopped target paths.target - Path Units.
Nov 12 17:40:53.668057 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Nov 12 17:40:53.670137 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 12 17:40:53.674381 systemd[1]: Stopped target slices.target - Slice Units.
Nov 12 17:40:53.676146 systemd[1]: Stopped target sockets.target - Socket Units.
Nov 12 17:40:53.678079 systemd[1]: iscsid.socket: Deactivated successfully.
Nov 12 17:40:53.678163 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Nov 12 17:40:53.680151 systemd[1]: iscsiuio.socket: Deactivated successfully.
Nov 12 17:40:53.680219 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 12 17:40:53.682217 systemd[1]: ignition-setup.service: Deactivated successfully.
Nov 12 17:40:53.682313 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Nov 12 17:40:53.684366 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Nov 12 17:40:53.684455 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Nov 12 17:40:53.689349 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Nov 12 17:40:53.693437 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Nov 12 17:40:53.696871 systemd-networkd[1200]: eth0: DHCPv6 lease lost
Nov 12 17:40:53.701074 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Nov 12 17:40:53.707748 systemd[1]: systemd-resolved.service: Deactivated successfully.
Nov 12 17:40:53.708384 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Nov 12 17:40:53.715457 systemd[1]: systemd-networkd.service: Deactivated successfully.
Nov 12 17:40:53.716638 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Nov 12 17:40:53.727717 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Nov 12 17:40:53.727835 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Nov 12 17:40:53.744434 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Nov 12 17:40:53.749222 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Nov 12 17:40:53.750909 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 12 17:40:53.754632 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Nov 12 17:40:53.754913 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Nov 12 17:40:53.781832 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Nov 12 17:40:53.782417 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Nov 12 17:40:53.787743 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Nov 12 17:40:53.787833 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 12 17:40:53.790852 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 12 17:40:53.805036 systemd[1]: sysroot-boot.service: Deactivated successfully.
Nov 12 17:40:53.805908 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Nov 12 17:40:53.818439 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Nov 12 17:40:53.818641 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Nov 12 17:40:53.826872 systemd[1]: systemd-udevd.service: Deactivated successfully.
Nov 12 17:40:53.831247 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 12 17:40:53.841542 systemd[1]: network-cleanup.service: Deactivated successfully.
Nov 12 17:40:53.843055 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Nov 12 17:40:53.846321 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Nov 12 17:40:53.846441 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Nov 12 17:40:53.849191 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Nov 12 17:40:53.849265 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 12 17:40:53.851320 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Nov 12 17:40:53.851406 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Nov 12 17:40:53.854016 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Nov 12 17:40:53.854132 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Nov 12 17:40:53.869733 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Nov 12 17:40:53.869896 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 12 17:40:53.890225 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Nov 12 17:40:53.894586 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Nov 12 17:40:53.894704 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 12 17:40:53.897448 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Nov 12 17:40:53.897530 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 12 17:40:53.900302 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Nov 12 17:40:53.900381 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 12 17:40:53.903144 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 12 17:40:53.903220 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 12 17:40:53.944813 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Nov 12 17:40:53.945119 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Nov 12 17:40:53.951466 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Nov 12 17:40:53.964581 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Nov 12 17:40:53.984516 systemd[1]: Switching root.
Nov 12 17:40:54.041093 systemd-journald[251]: Journal stopped
Nov 12 17:40:56.635418 systemd-journald[251]: Received SIGTERM from PID 1 (systemd).
Nov 12 17:40:56.635625 kernel: SELinux: policy capability network_peer_controls=1
Nov 12 17:40:56.635673 kernel: SELinux: policy capability open_perms=1
Nov 12 17:40:56.635707 kernel: SELinux: policy capability extended_socket_class=1
Nov 12 17:40:56.635739 kernel: SELinux: policy capability always_check_network=0
Nov 12 17:40:56.635772 kernel: SELinux: policy capability cgroup_seclabel=1
Nov 12 17:40:56.635804 kernel: SELinux: policy capability nnp_nosuid_transition=1
Nov 12 17:40:56.635837 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Nov 12 17:40:56.635871 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Nov 12 17:40:56.635904 kernel: audit: type=1403 audit(1731433254.684:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Nov 12 17:40:56.635940 systemd[1]: Successfully loaded SELinux policy in 84.282ms.
Nov 12 17:40:56.636044 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 23.140ms.
Nov 12 17:40:56.636082 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Nov 12 17:40:56.636118 systemd[1]: Detected virtualization amazon.
Nov 12 17:40:56.636155 systemd[1]: Detected architecture arm64.
Nov 12 17:40:56.639047 systemd[1]: Detected first boot.
Nov 12 17:40:56.639112 systemd[1]: Initializing machine ID from VM UUID.
Nov 12 17:40:56.639154 zram_generator::config[1485]: No configuration found.
Nov 12 17:40:56.639192 systemd[1]: Populated /etc with preset unit settings.
Nov 12 17:40:56.639224 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Nov 12 17:40:56.639266 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Nov 12 17:40:56.639299 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Nov 12 17:40:56.639336 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Nov 12 17:40:56.639368 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Nov 12 17:40:56.639398 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Nov 12 17:40:56.639439 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Nov 12 17:40:56.639474 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Nov 12 17:40:56.639510 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Nov 12 17:40:56.639549 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Nov 12 17:40:56.639580 systemd[1]: Created slice user.slice - User and Session Slice.
Nov 12 17:40:56.639612 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 12 17:40:56.639650 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 12 17:40:56.639681 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Nov 12 17:40:56.639713 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Nov 12 17:40:56.639757 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Nov 12 17:40:56.639791 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 12 17:40:56.639824 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Nov 12 17:40:56.639859 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 12 17:40:56.639891 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Nov 12 17:40:56.639924 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Nov 12 17:40:56.639959 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Nov 12 17:40:56.640112 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Nov 12 17:40:56.640154 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 12 17:40:56.640188 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Nov 12 17:40:56.640221 systemd[1]: Reached target slices.target - Slice Units.
Nov 12 17:40:56.640255 systemd[1]: Reached target swap.target - Swaps.
Nov 12 17:40:56.640286 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Nov 12 17:40:56.640320 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Nov 12 17:40:56.640354 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 12 17:40:56.640387 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 12 17:40:56.640419 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 12 17:40:56.640454 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Nov 12 17:40:56.640487 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Nov 12 17:40:56.640517 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Nov 12 17:40:56.640549 systemd[1]: Mounting media.mount - External Media Directory...
Nov 12 17:40:56.640581 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Nov 12 17:40:56.640614 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Nov 12 17:40:56.640644 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Nov 12 17:40:56.640676 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Nov 12 17:40:56.640711 systemd[1]: Reached target machines.target - Containers.
Nov 12 17:40:56.640743 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Nov 12 17:40:56.640774 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 12 17:40:56.640805 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Nov 12 17:40:56.640836 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Nov 12 17:40:56.640877 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 12 17:40:56.640908 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Nov 12 17:40:56.640940 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 12 17:40:56.640973 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Nov 12 17:40:56.641046 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 12 17:40:56.641079 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Nov 12 17:40:56.641112 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Nov 12 17:40:56.641145 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Nov 12 17:40:56.641176 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Nov 12 17:40:56.641206 systemd[1]: Stopped systemd-fsck-usr.service.
Nov 12 17:40:56.641235 kernel: fuse: init (API version 7.39)
Nov 12 17:40:56.641267 systemd[1]: Starting systemd-journald.service - Journal Service...
Nov 12 17:40:56.641297 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Nov 12 17:40:56.641331 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Nov 12 17:40:56.641381 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Nov 12 17:40:56.641416 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 12 17:40:56.641450 systemd[1]: verity-setup.service: Deactivated successfully.
Nov 12 17:40:56.641483 systemd[1]: Stopped verity-setup.service.
Nov 12 17:40:56.641513 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Nov 12 17:40:56.641543 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Nov 12 17:40:56.641574 systemd[1]: Mounted media.mount - External Media Directory.
Nov 12 17:40:56.641604 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Nov 12 17:40:56.641639 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Nov 12 17:40:56.641670 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Nov 12 17:40:56.641705 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 12 17:40:56.641741 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 12 17:40:56.641779 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Nov 12 17:40:56.641812 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 12 17:40:56.641846 kernel: ACPI: bus type drm_connector registered
Nov 12 17:40:56.641878 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 12 17:40:56.641909 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 12 17:40:56.641940 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 12 17:40:56.641974 systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 12 17:40:56.642899 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Nov 12 17:40:56.642940 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Nov 12 17:40:56.643000 kernel: loop: module loaded
Nov 12 17:40:56.643042 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Nov 12 17:40:56.643073 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 12 17:40:56.643103 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 12 17:40:56.643133 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Nov 12 17:40:56.643163 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Nov 12 17:40:56.643197 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Nov 12 17:40:56.643273 systemd-journald[1567]: Collecting audit messages is disabled.
Nov 12 17:40:56.643330 systemd[1]: Reached target network-pre.target - Preparation for Network.
Nov 12 17:40:56.643364 systemd-journald[1567]: Journal started
Nov 12 17:40:56.643417 systemd-journald[1567]: Runtime Journal (/run/log/journal/ec2d90ccc262ecd1d3e67c2696a3e640) is 8.0M, max 75.3M, 67.3M free.
Nov 12 17:40:56.023024 systemd[1]: Queued start job for default target multi-user.target.
Nov 12 17:40:56.078320 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Nov 12 17:40:56.079167 systemd[1]: systemd-journald.service: Deactivated successfully.
Nov 12 17:40:56.661103 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Nov 12 17:40:56.679804 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Nov 12 17:40:56.679893 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Nov 12 17:40:56.684233 systemd[1]: Reached target local-fs.target - Local File Systems.
Nov 12 17:40:56.695066 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Nov 12 17:40:56.712138 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Nov 12 17:40:56.724057 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Nov 12 17:40:56.724179 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 12 17:40:56.743043 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Nov 12 17:40:56.743175 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 12 17:40:56.755024 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Nov 12 17:40:56.765944 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 12 17:40:56.766065 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 12 17:40:56.774823 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Nov 12 17:40:56.786227 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Nov 12 17:40:56.801039 systemd[1]: Started systemd-journald.service - Journal Service.
Nov 12 17:40:56.803701 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Nov 12 17:40:56.806604 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Nov 12 17:40:56.810518 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Nov 12 17:40:56.813382 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Nov 12 17:40:56.884040 kernel: loop0: detected capacity change from 0 to 194512
Nov 12 17:40:56.877123 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Nov 12 17:40:56.894899 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Nov 12 17:40:56.925289 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Nov 12 17:40:56.941017 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Nov 12 17:40:56.943287 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Nov 12 17:40:56.984475 systemd-journald[1567]: Time spent on flushing to /var/log/journal/ec2d90ccc262ecd1d3e67c2696a3e640 is 52.330ms for 914 entries.
Nov 12 17:40:56.984475 systemd-journald[1567]: System Journal (/var/log/journal/ec2d90ccc262ecd1d3e67c2696a3e640) is 8.0M, max 195.6M, 187.6M free.
Nov 12 17:40:57.052960 systemd-journald[1567]: Received client request to flush runtime journal.
Nov 12 17:40:57.053059 kernel: loop1: detected capacity change from 0 to 52536
Nov 12 17:40:57.002673 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Nov 12 17:40:57.010529 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Nov 12 17:40:57.022677 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 12 17:40:57.028609 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 12 17:40:57.054830 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Nov 12 17:40:57.063299 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Nov 12 17:40:57.082184 udevadm[1630]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Nov 12 17:40:57.095709 systemd-tmpfiles[1597]: ACLs are not supported, ignoring.
Nov 12 17:40:57.095748 systemd-tmpfiles[1597]: ACLs are not supported, ignoring.
Nov 12 17:40:57.111010 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 12 17:40:57.119947 kernel: loop2: detected capacity change from 0 to 114432
Nov 12 17:40:57.128542 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Nov 12 17:40:57.207818 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Nov 12 17:40:57.223532 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Nov 12 17:40:57.262047 kernel: loop3: detected capacity change from 0 to 114328
Nov 12 17:40:57.281708 systemd-tmpfiles[1638]: ACLs are not supported, ignoring.
Nov 12 17:40:57.282898 systemd-tmpfiles[1638]: ACLs are not supported, ignoring.
Nov 12 17:40:57.295837 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 12 17:40:57.357029 kernel: loop4: detected capacity change from 0 to 194512
Nov 12 17:40:57.407017 kernel: loop5: detected capacity change from 0 to 52536
Nov 12 17:40:57.426375 kernel: loop6: detected capacity change from 0 to 114432
Nov 12 17:40:57.438146 kernel: loop7: detected capacity change from 0 to 114328
Nov 12 17:40:57.452149 (sd-merge)[1642]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
Nov 12 17:40:57.453180 (sd-merge)[1642]: Merged extensions into '/usr'.
Nov 12 17:40:57.465588 systemd[1]: Reloading requested from client PID 1596 ('systemd-sysext') (unit systemd-sysext.service)...
Nov 12 17:40:57.465627 systemd[1]: Reloading...
Nov 12 17:40:57.643018 zram_generator::config[1668]: No configuration found.
Nov 12 17:40:57.978204 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Nov 12 17:40:58.096780 systemd[1]: Reloading finished in 629 ms.
Nov 12 17:40:58.132044 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Nov 12 17:40:58.135076 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Nov 12 17:40:58.150313 systemd[1]: Starting ensure-sysext.service...
Nov 12 17:40:58.161538 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Nov 12 17:40:58.168347 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 12 17:40:58.198263 systemd[1]: Reloading requested from client PID 1720 ('systemctl') (unit ensure-sysext.service)...
Nov 12 17:40:58.198296 systemd[1]: Reloading...
Nov 12 17:40:58.231274 systemd-tmpfiles[1721]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Nov 12 17:40:58.233070 systemd-tmpfiles[1721]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Nov 12 17:40:58.234964 systemd-tmpfiles[1721]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Nov 12 17:40:58.237078 systemd-tmpfiles[1721]: ACLs are not supported, ignoring.
Nov 12 17:40:58.237236 systemd-tmpfiles[1721]: ACLs are not supported, ignoring.
Nov 12 17:40:58.255916 systemd-tmpfiles[1721]: Detected autofs mount point /boot during canonicalization of boot.
Nov 12 17:40:58.255945 systemd-tmpfiles[1721]: Skipping /boot
Nov 12 17:40:58.258406 systemd-udevd[1722]: Using default interface naming scheme 'v255'.
Nov 12 17:40:58.301060 systemd-tmpfiles[1721]: Detected autofs mount point /boot during canonicalization of boot.
Nov 12 17:40:58.301089 systemd-tmpfiles[1721]: Skipping /boot
Nov 12 17:40:58.406086 zram_generator::config[1760]: No configuration found.
Nov 12 17:40:58.546038 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1756)
Nov 12 17:40:58.550773 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1756)
Nov 12 17:40:58.592741 (udev-worker)[1759]: Network interface NamePolicy= disabled on kernel command line.
Nov 12 17:40:58.727448 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 40 scanned by (udev-worker) (1782)
Nov 12 17:40:58.807016 ldconfig[1589]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Nov 12 17:40:58.830326 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Nov 12 17:40:58.985078 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Nov 12 17:40:58.985795 systemd[1]: Reloading finished in 786 ms.
Nov 12 17:40:59.020912 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 12 17:40:59.024304 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Nov 12 17:40:59.028974 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 12 17:40:59.075381 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Nov 12 17:40:59.098147 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Nov 12 17:40:59.102810 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Nov 12 17:40:59.111315 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Nov 12 17:40:59.118564 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Nov 12 17:40:59.125353 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Nov 12 17:40:59.136588 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 12 17:40:59.139719 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 12 17:40:59.144067 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 12 17:40:59.151165 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 12 17:40:59.153356 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 12 17:40:59.159194 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 12 17:40:59.159788 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 12 17:40:59.193143 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Nov 12 17:40:59.210666 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 12 17:40:59.216436 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Nov 12 17:40:59.218628 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 12 17:40:59.219033 systemd[1]: Reached target time-set.target - System Time Set.
Nov 12 17:40:59.230392 systemd[1]: Finished ensure-sysext.service.
Nov 12 17:40:59.293278 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Nov 12 17:40:59.296697 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 12 17:40:59.297095 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 12 17:40:59.300024 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 12 17:40:59.300324 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 12 17:40:59.303335 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 12 17:40:59.305068 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 12 17:40:59.330211 systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 12 17:40:59.333095 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Nov 12 17:40:59.367135 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Nov 12 17:40:59.370213 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Nov 12 17:40:59.394844 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Nov 12 17:40:59.401017 augenrules[1951]: No rules
Nov 12 17:40:59.407635 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Nov 12 17:40:59.411134 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 12 17:40:59.412359 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 12 17:40:59.419601 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Nov 12 17:40:59.430504 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 12 17:40:59.430652 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Nov 12 17:40:59.432839 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Nov 12 17:40:59.460075 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Nov 12 17:40:59.474819 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Nov 12 17:40:59.507832 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Nov 12 17:40:59.515111 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Nov 12 17:40:59.523086 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Nov 12 17:40:59.528691 lvm[1962]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Nov 12 17:40:59.586383 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Nov 12 17:40:59.586889 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Nov 12 17:40:59.598746 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Nov 12 17:40:59.616805 lvm[1974]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Nov 12 17:40:59.672767 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 12 17:40:59.678535 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Nov 12 17:40:59.704751 systemd-networkd[1919]: lo: Link UP
Nov 12 17:40:59.704777 systemd-networkd[1919]: lo: Gained carrier
Nov 12 17:40:59.708311 systemd-networkd[1919]: Enumeration completed
Nov 12 17:40:59.708490 systemd[1]: Started systemd-networkd.service - Network Configuration.
Nov 12 17:40:59.716942 systemd-networkd[1919]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 12 17:40:59.716960 systemd-networkd[1919]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 12 17:40:59.719461 systemd-networkd[1919]: eth0: Link UP
Nov 12 17:40:59.720028 systemd-networkd[1919]: eth0: Gained carrier
Nov 12 17:40:59.720175 systemd-networkd[1919]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 12 17:40:59.723402 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Nov 12 17:40:59.723745 systemd-resolved[1920]: Positive Trust Anchors:
Nov 12 17:40:59.723767 systemd-resolved[1920]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 12 17:40:59.723831 systemd-resolved[1920]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Nov 12 17:40:59.737360 systemd-resolved[1920]: Defaulting to hostname 'linux'.
Nov 12 17:40:59.739091 systemd-networkd[1919]: eth0: DHCPv4 address 172.31.27.255/20, gateway 172.31.16.1 acquired from 172.31.16.1
Nov 12 17:40:59.741254 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Nov 12 17:40:59.745583 systemd[1]: Reached target network.target - Network.
Nov 12 17:40:59.749409 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Nov 12 17:40:59.754503 systemd[1]: Reached target sysinit.target - System Initialization.
Nov 12 17:40:59.757400 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Nov 12 17:40:59.760405 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Nov 12 17:40:59.763318 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Nov 12 17:40:59.766439 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Nov 12 17:40:59.768791 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Nov 12 17:40:59.771216 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Nov 12 17:40:59.771268 systemd[1]: Reached target paths.target - Path Units.
Nov 12 17:40:59.773006 systemd[1]: Reached target timers.target - Timer Units.
Nov 12 17:40:59.775758 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Nov 12 17:40:59.780249 systemd[1]: Starting docker.socket - Docker Socket for the API...
Nov 12 17:40:59.796281 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Nov 12 17:40:59.799456 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Nov 12 17:40:59.801835 systemd[1]: Reached target sockets.target - Socket Units.
Nov 12 17:40:59.803854 systemd[1]: Reached target basic.target - Basic System.
Nov 12 17:40:59.805809 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Nov 12 17:40:59.805866 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Nov 12 17:40:59.812264 systemd[1]: Starting containerd.service - containerd container runtime...
Nov 12 17:40:59.823311 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Nov 12 17:40:59.829363 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Nov 12 17:40:59.836258 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Nov 12 17:40:59.844326 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Nov 12 17:40:59.846385 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Nov 12 17:40:59.849694 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Nov 12 17:40:59.863390 systemd[1]: Started ntpd.service - Network Time Service.
Nov 12 17:40:59.870281 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Nov 12 17:40:59.878220 systemd[1]: Starting setup-oem.service - Setup OEM...
Nov 12 17:40:59.886192 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Nov 12 17:40:59.895355 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Nov 12 17:40:59.899643 jq[1986]: false
Nov 12 17:40:59.910399 systemd[1]: Starting systemd-logind.service - User Login Management...
Nov 12 17:40:59.913318 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Nov 12 17:40:59.915523 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Nov 12 17:40:59.918590 systemd[1]: Starting update-engine.service - Update Engine...
Nov 12 17:40:59.928273 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Nov 12 17:40:59.936796 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Nov 12 17:40:59.937593 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Nov 12 17:40:59.985764 dbus-daemon[1985]: [system] SELinux support is enabled
Nov 12 17:40:59.986114 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Nov 12 17:40:59.997623 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Nov 12 17:40:59.997671 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Nov 12 17:41:00.001215 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Nov 12 17:41:00.001257 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Nov 12 17:41:00.018639 dbus-daemon[1985]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1919 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Nov 12 17:41:00.063890 ntpd[1989]: ntpd 4.2.8p17@1.4004-o Tue Nov 12 15:49:27 UTC 2024 (1): Starting
Nov 12 17:41:00.067403 ntpd[1989]: 12 Nov 17:41:00 ntpd[1989]: ntpd 4.2.8p17@1.4004-o Tue Nov 12 15:49:27 UTC 2024 (1): Starting
Nov 12 17:41:00.067403 ntpd[1989]: 12 Nov 17:41:00 ntpd[1989]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Nov 12 17:41:00.065352 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Nov 12 17:41:00.063945 ntpd[1989]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Nov 12 17:41:00.063965 ntpd[1989]: ----------------------------------------------------
Nov 12 17:41:00.068777 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Nov 12 17:41:00.079190 ntpd[1989]: 12 Nov 17:41:00 ntpd[1989]: ----------------------------------------------------
Nov 12 17:41:00.079190 ntpd[1989]: 12 Nov 17:41:00 ntpd[1989]: ntp-4 is maintained by Network Time Foundation,
Nov 12 17:41:00.079190 ntpd[1989]: 12 Nov 17:41:00 ntpd[1989]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Nov 12 17:41:00.079190 ntpd[1989]: 12 Nov 17:41:00 ntpd[1989]: corporation. Support and training for ntp-4 are
Nov 12 17:41:00.079190 ntpd[1989]: 12 Nov 17:41:00 ntpd[1989]: available at https://www.nwtime.org/support
Nov 12 17:41:00.079190 ntpd[1989]: 12 Nov 17:41:00 ntpd[1989]: ----------------------------------------------------
Nov 12 17:41:00.070056 ntpd[1989]: ntp-4 is maintained by Network Time Foundation,
Nov 12 17:41:00.071913 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Nov 12 17:41:00.070087 ntpd[1989]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Nov 12 17:41:00.070107 ntpd[1989]: corporation. Support and training for ntp-4 are
Nov 12 17:41:00.070126 ntpd[1989]: available at https://www.nwtime.org/support
Nov 12 17:41:00.070144 ntpd[1989]: ----------------------------------------------------
Nov 12 17:41:00.082808 ntpd[1989]: proto: precision = 0.096 usec (-23)
Nov 12 17:41:00.095200 ntpd[1989]: 12 Nov 17:41:00 ntpd[1989]: proto: precision = 0.096 usec (-23)
Nov 12 17:41:00.095200 ntpd[1989]: 12 Nov 17:41:00 ntpd[1989]: basedate set to 2024-10-31
Nov 12 17:41:00.095200 ntpd[1989]: 12 Nov 17:41:00 ntpd[1989]: gps base set to 2024-11-03 (week 2339)
Nov 12 17:41:00.093409 ntpd[1989]: basedate set to 2024-10-31
Nov 12 17:41:00.093447 ntpd[1989]: gps base set to 2024-11-03 (week 2339)
Nov 12 17:41:00.110179 ntpd[1989]: Listen and drop on 0 v6wildcard [::]:123
Nov 12 17:41:00.116180 ntpd[1989]: 12 Nov 17:41:00 ntpd[1989]: Listen and drop on 0 v6wildcard [::]:123
Nov 12 17:41:00.116180 ntpd[1989]: 12 Nov 17:41:00 ntpd[1989]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Nov 12 17:41:00.110277 ntpd[1989]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Nov 12 17:41:00.116559 ntpd[1989]: Listen normally on 2 lo 127.0.0.1:123
Nov 12 17:41:00.125014 ntpd[1989]: 12 Nov 17:41:00 ntpd[1989]: Listen normally on 2 lo 127.0.0.1:123
Nov 12 17:41:00.125014 ntpd[1989]: 12 Nov 17:41:00 ntpd[1989]: Listen normally on 3 eth0 172.31.27.255:123
Nov 12 17:41:00.125014 ntpd[1989]: 12 Nov 17:41:00 ntpd[1989]: Listen normally on 4 lo [::1]:123
Nov 12 17:41:00.125014 ntpd[1989]: 12 Nov 17:41:00 ntpd[1989]: bind(21) AF_INET6 fe80::422:20ff:fe30:ae15%2#123 flags 0x11 failed: Cannot assign requested address
Nov 12 17:41:00.125014 ntpd[1989]: 12 Nov 17:41:00 ntpd[1989]: unable to create socket on eth0 (5) for fe80::422:20ff:fe30:ae15%2#123
Nov 12 17:41:00.125014 ntpd[1989]: 12 Nov 17:41:00 ntpd[1989]: failed to init interface for address fe80::422:20ff:fe30:ae15%2
Nov 12 17:41:00.125014 ntpd[1989]: 12 Nov 17:41:00 ntpd[1989]: Listening on routing socket on fd #21 for interface updates
Nov 12 17:41:00.123154 ntpd[1989]: Listen normally on 3 eth0 172.31.27.255:123
Nov 12 17:41:00.123233 ntpd[1989]: Listen normally on 4 lo [::1]:123
Nov 12 17:41:00.123314 ntpd[1989]: bind(21) AF_INET6 fe80::422:20ff:fe30:ae15%2#123 flags 0x11 failed: Cannot assign requested address
Nov 12 17:41:00.123352 ntpd[1989]: unable to create socket on eth0 (5) for fe80::422:20ff:fe30:ae15%2#123
Nov 12 17:41:00.123380 ntpd[1989]: failed to init interface for address fe80::422:20ff:fe30:ae15%2
Nov 12 17:41:00.123442 ntpd[1989]: Listening on routing socket on fd #21 for interface updates
Nov 12 17:41:00.141762 jq[1999]: true
Nov 12 17:41:00.151171 systemd[1]: motdgen.service: Deactivated successfully.
Nov 12 17:41:00.164954 ntpd[1989]: 12 Nov 17:41:00 ntpd[1989]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Nov 12 17:41:00.164954 ntpd[1989]: 12 Nov 17:41:00 ntpd[1989]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Nov 12 17:41:00.149787 ntpd[1989]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Nov 12 17:41:00.154431 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Nov 12 17:41:00.149834 ntpd[1989]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Nov 12 17:41:00.156706 (ntainerd)[2016]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Nov 12 17:41:00.167638 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Nov 12 17:41:00.193379 update_engine[1998]: I20241112 17:41:00.176901  1998 main.cc:92] Flatcar Update Engine starting
Nov 12 17:41:00.203349 tar[2009]: linux-arm64/helm
Nov 12 17:41:00.217026 extend-filesystems[1987]: Found loop4
Nov 12 17:41:00.217026 extend-filesystems[1987]: Found loop5
Nov 12 17:41:00.217026 extend-filesystems[1987]: Found loop6
Nov 12 17:41:00.217026 extend-filesystems[1987]: Found loop7
Nov 12 17:41:00.217026 extend-filesystems[1987]: Found nvme0n1
Nov 12 17:41:00.210808 systemd[1]: Started update-engine.service - Update Engine.
Nov 12 17:41:00.226455 extend-filesystems[1987]: Found nvme0n1p1
Nov 12 17:41:00.226455 extend-filesystems[1987]: Found nvme0n1p2
Nov 12 17:41:00.226455 extend-filesystems[1987]: Found nvme0n1p3
Nov 12 17:41:00.226455 extend-filesystems[1987]: Found usr
Nov 12 17:41:00.226455 extend-filesystems[1987]: Found nvme0n1p4
Nov 12 17:41:00.226455 extend-filesystems[1987]: Found nvme0n1p6
Nov 12 17:41:00.226455 extend-filesystems[1987]: Found nvme0n1p7
Nov 12 17:41:00.252845 update_engine[1998]: I20241112 17:41:00.239279  1998 update_check_scheduler.cc:74] Next update check in 7m29s
Nov 12 17:41:00.245911 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Nov 12 17:41:00.256216 extend-filesystems[1987]: Found nvme0n1p9
Nov 12 17:41:00.257870 extend-filesystems[1987]: Checking size of /dev/nvme0n1p9
Nov 12 17:41:00.260681 jq[2025]: true
Nov 12 17:41:00.262971 systemd-logind[1996]: Watching system buttons on /dev/input/event0 (Power Button)
Nov 12 17:41:00.263043 systemd-logind[1996]: Watching system buttons on /dev/input/event1 (Sleep Button)
Nov 12 17:41:00.264853 systemd-logind[1996]: New seat seat0.
Nov 12 17:41:00.271157 systemd[1]: Started systemd-logind.service - User Login Management.
Nov 12 17:41:00.313623 systemd[1]: Finished setup-oem.service - Setup OEM.
Nov 12 17:41:00.361949 extend-filesystems[1987]: Resized partition /dev/nvme0n1p9
Nov 12 17:41:00.370482 extend-filesystems[2042]: resize2fs 1.47.1 (20-May-2024)
Nov 12 17:41:00.377668 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks
Nov 12 17:41:00.397009 coreos-metadata[1984]: Nov 12 17:41:00.392 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Nov 12 17:41:00.401531 coreos-metadata[1984]: Nov 12 17:41:00.398 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1
Nov 12 17:41:00.403099 coreos-metadata[1984]: Nov 12 17:41:00.402 INFO Fetch successful
Nov 12 17:41:00.403099 coreos-metadata[1984]: Nov 12 17:41:00.402 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1
Nov 12 17:41:00.404357 coreos-metadata[1984]: Nov 12 17:41:00.404 INFO Fetch successful
Nov 12 17:41:00.404357 coreos-metadata[1984]: Nov 12 17:41:00.404 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1
Nov 12 17:41:00.406240 coreos-metadata[1984]: Nov 12 17:41:00.405 INFO Fetch successful
Nov 12 17:41:00.406240 coreos-metadata[1984]: Nov 12 17:41:00.405 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1
Nov 12 17:41:00.407526 coreos-metadata[1984]: Nov 12 17:41:00.407 INFO Fetch successful
Nov 12 17:41:00.407526 coreos-metadata[1984]: Nov 12 17:41:00.407 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1
Nov 12 17:41:00.418918 coreos-metadata[1984]: Nov 12 17:41:00.413 INFO Fetch failed with 404: resource not found
Nov 12 17:41:00.418918 coreos-metadata[1984]: Nov 12 17:41:00.413 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1
Nov 12 17:41:00.418918 coreos-metadata[1984]: Nov 12 17:41:00.416 INFO Fetch successful
Nov 12 17:41:00.418918 coreos-metadata[1984]: Nov 12 17:41:00.416 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1
Nov 12 17:41:00.424008 coreos-metadata[1984]: Nov 12 17:41:00.421 INFO Fetch successful
Nov 12 17:41:00.424008 coreos-metadata[1984]: Nov 12 17:41:00.421 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1
Nov 12 17:41:00.424802 coreos-metadata[1984]: Nov 12 17:41:00.424 INFO Fetch successful
Nov 12 17:41:00.424802 coreos-metadata[1984]: Nov 12 17:41:00.424 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1
Nov 12 17:41:00.430797 coreos-metadata[1984]: Nov 12 17:41:00.428 INFO Fetch successful
Nov 12 17:41:00.430797 coreos-metadata[1984]: Nov 12 17:41:00.428 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1
Nov 12 17:41:00.433017 coreos-metadata[1984]: Nov 12 17:41:00.432 INFO Fetch successful
Nov 12 17:41:00.455578 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915
Nov 12 17:41:00.591448 extend-filesystems[2042]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required
Nov 12 17:41:00.591448 extend-filesystems[2042]: old_desc_blocks = 1, new_desc_blocks = 1
Nov 12 17:41:00.591448 extend-filesystems[2042]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long.
Nov 12 17:41:00.585088 systemd[1]: extend-filesystems.service: Deactivated successfully.
Nov 12 17:41:00.617731 extend-filesystems[1987]: Resized filesystem in /dev/nvme0n1p9
Nov 12 17:41:00.588120 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Nov 12 17:41:00.621032 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 40 scanned by (udev-worker) (1759)
Nov 12 17:41:00.633474 bash[2059]: Updated "/home/core/.ssh/authorized_keys"
Nov 12 17:41:00.638094 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Nov 12 17:41:00.659431 systemd[1]: Starting sshkeys.service...
Nov 12 17:41:00.689216 locksmithd[2032]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Nov 12 17:41:00.706623 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Nov 12 17:41:00.709933 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Nov 12 17:41:00.765643 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Nov 12 17:41:00.801297 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Nov 12 17:41:00.809563 dbus-daemon[1985]: [system] Successfully activated service 'org.freedesktop.hostname1'
Nov 12 17:41:00.810120 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Nov 12 17:41:00.816274 dbus-daemon[1985]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=2014 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Nov 12 17:41:00.833809 systemd[1]: Starting polkit.service - Authorization Manager...
Nov 12 17:41:00.907894 containerd[2016]: time="2024-11-12T17:41:00.904398300Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Nov 12 17:41:00.930141 polkitd[2134]: Started polkitd version 121
Nov 12 17:41:00.967776 polkitd[2134]: Loading rules from directory /etc/polkit-1/rules.d
Nov 12 17:41:00.971521 polkitd[2134]: Loading rules from directory /usr/share/polkit-1/rules.d
Nov 12 17:41:00.975390 polkitd[2134]: Finished loading, compiling and executing 2 rules
Nov 12 17:41:00.980820 dbus-daemon[1985]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Nov 12 17:41:00.981168 systemd[1]: Started polkit.service - Authorization Manager.
Nov 12 17:41:00.983265 polkitd[2134]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Nov 12 17:41:01.034727 systemd-hostnamed[2014]: Hostname set to (transient)
Nov 12 17:41:01.034752 systemd-resolved[1920]: System hostname changed to 'ip-172-31-27-255'.
Nov 12 17:41:01.070748 ntpd[1989]: bind(24) AF_INET6 fe80::422:20ff:fe30:ae15%2#123 flags 0x11 failed: Cannot assign requested address
Nov 12 17:41:01.071433 ntpd[1989]: 12 Nov 17:41:01 ntpd[1989]: bind(24) AF_INET6 fe80::422:20ff:fe30:ae15%2#123 flags 0x11 failed: Cannot assign requested address
Nov 12 17:41:01.071433 ntpd[1989]: 12 Nov 17:41:01 ntpd[1989]: unable to create socket on eth0 (6) for fe80::422:20ff:fe30:ae15%2#123
Nov 12 17:41:01.071433 ntpd[1989]: 12 Nov 17:41:01 ntpd[1989]: failed to init interface for address fe80::422:20ff:fe30:ae15%2
Nov 12 17:41:01.070811 ntpd[1989]: unable to create socket on eth0 (6) for fe80::422:20ff:fe30:ae15%2#123
Nov 12 17:41:01.070839 ntpd[1989]: failed to init interface for address fe80::422:20ff:fe30:ae15%2
Nov 12 17:41:01.110946 containerd[2016]: time="2024-11-12T17:41:01.109698213Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Nov 12 17:41:01.116808 containerd[2016]: time="2024-11-12T17:41:01.116702433Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.60-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Nov 12 17:41:01.116975 containerd[2016]: time="2024-11-12T17:41:01.116944569Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Nov 12 17:41:01.117177 containerd[2016]: time="2024-11-12T17:41:01.117146493Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Nov 12 17:41:01.117749 containerd[2016]: time="2024-11-12T17:41:01.117712401Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Nov 12 17:41:01.117901 containerd[2016]: time="2024-11-12T17:41:01.117873597Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Nov 12 17:41:01.118305 containerd[2016]: time="2024-11-12T17:41:01.118268769Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Nov 12 17:41:01.118457 containerd[2016]: time="2024-11-12T17:41:01.118427709Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Nov 12 17:41:01.119098 containerd[2016]: time="2024-11-12T17:41:01.118944729Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Nov 12 17:41:01.119245 containerd[2016]: time="2024-11-12T17:41:01.119216613Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Nov 12 17:41:01.119378 containerd[2016]: time="2024-11-12T17:41:01.119348001Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Nov 12 17:41:01.119542 containerd[2016]: time="2024-11-12T17:41:01.119427489Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Nov 12 17:41:01.119993 containerd[2016]: time="2024-11-12T17:41:01.119820573Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Nov 12 17:41:01.120612 containerd[2016]: time="2024-11-12T17:41:01.120523005Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Nov 12 17:41:01.121341 containerd[2016]: time="2024-11-12T17:41:01.120890589Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Nov 12 17:41:01.121341 containerd[2016]: time="2024-11-12T17:41:01.120960681Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Nov 12 17:41:01.121341 containerd[2016]: time="2024-11-12T17:41:01.121221909Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Nov 12 17:41:01.121603 containerd[2016]: time="2024-11-12T17:41:01.121572861Z" level=info msg="metadata content store policy set" policy=shared
Nov 12 17:41:01.133033 containerd[2016]: time="2024-11-12T17:41:01.131490477Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Nov 12 17:41:01.133033 containerd[2016]: time="2024-11-12T17:41:01.131596641Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Nov 12 17:41:01.133033 containerd[2016]: time="2024-11-12T17:41:01.131712273Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Nov 12 17:41:01.133033 containerd[2016]: time="2024-11-12T17:41:01.131754741Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Nov 12 17:41:01.133033 containerd[2016]: time="2024-11-12T17:41:01.131789013Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Nov 12 17:41:01.133033 containerd[2016]: time="2024-11-12T17:41:01.132758097Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Nov 12 17:41:01.156041 containerd[2016]: time="2024-11-12T17:41:01.151841301Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Nov 12 17:41:01.156041 containerd[2016]: time="2024-11-12T17:41:01.152173581Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Nov 12 17:41:01.156041 containerd[2016]: time="2024-11-12T17:41:01.152226489Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Nov 12 17:41:01.156041 containerd[2016]: time="2024-11-12T17:41:01.152269929Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Nov 12 17:41:01.156041 containerd[2016]: time="2024-11-12T17:41:01.152313333Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Nov 12 17:41:01.156041 containerd[2016]: time="2024-11-12T17:41:01.152354061Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Nov 12 17:41:01.156041 containerd[2016]: time="2024-11-12T17:41:01.152386401Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Nov 12 17:41:01.156041 containerd[2016]: time="2024-11-12T17:41:01.152427981Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Nov 12 17:41:01.156041 containerd[2016]: time="2024-11-12T17:41:01.152474037Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Nov 12 17:41:01.156041 containerd[2016]: time="2024-11-12T17:41:01.152515737Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Nov 12 17:41:01.156041 containerd[2016]: time="2024-11-12T17:41:01.152555217Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Nov 12 17:41:01.156041 containerd[2016]: time="2024-11-12T17:41:01.152591889Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Nov 12 17:41:01.156041 containerd[2016]: time="2024-11-12T17:41:01.152645085Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Nov 12 17:41:01.156041 containerd[2016]: time="2024-11-12T17:41:01.152686905Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Nov 12 17:41:01.156771 coreos-metadata[2128]: Nov 12 17:41:01.155 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Nov 12 17:41:01.157199 containerd[2016]: time="2024-11-12T17:41:01.152721513Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Nov 12 17:41:01.157199 containerd[2016]: time="2024-11-12T17:41:01.152765961Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Nov 12 17:41:01.157199 containerd[2016]: time="2024-11-12T17:41:01.152822409Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Nov 12 17:41:01.157199 containerd[2016]: time="2024-11-12T17:41:01.152871201Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Nov 12 17:41:01.157199 containerd[2016]: time="2024-11-12T17:41:01.152914965Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Nov 12 17:41:01.157199 containerd[2016]: time="2024-11-12T17:41:01.152955873Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Nov 12 17:41:01.157199 containerd[2016]: time="2024-11-12T17:41:01.153017649Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Nov 12 17:41:01.157199 containerd[2016]: time="2024-11-12T17:41:01.153069573Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Nov 12 17:41:01.157199 containerd[2016]: time="2024-11-12T17:41:01.153101745Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Nov 12 17:41:01.157199 containerd[2016]: time="2024-11-12T17:41:01.153141465Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Nov 12 17:41:01.157199 containerd[2016]: time="2024-11-12T17:41:01.153189993Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Nov 12 17:41:01.157199 containerd[2016]: time="2024-11-12T17:41:01.153238917Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Nov 12 17:41:01.157199 containerd[2016]: time="2024-11-12T17:41:01.153293529Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Nov 12 17:41:01.157199 containerd[2016]: time="2024-11-12T17:41:01.153349725Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Nov 12 17:41:01.157199 containerd[2016]: time="2024-11-12T17:41:01.153392817Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Nov 12 17:41:01.157842 containerd[2016]: time="2024-11-12T17:41:01.153522621Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Nov 12 17:41:01.157842 containerd[2016]: time="2024-11-12T17:41:01.153575289Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Nov 12 17:41:01.157842 containerd[2016]: time="2024-11-12T17:41:01.153604581Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Nov 12 17:41:01.157842 containerd[2016]: time="2024-11-12T17:41:01.153643557Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Nov 12 17:41:01.157842 containerd[2016]: time="2024-11-12T17:41:01.153679941Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Nov 12 17:41:01.157842 containerd[2016]: time="2024-11-12T17:41:01.153720573Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Nov 12 17:41:01.157842 containerd[2016]: time="2024-11-12T17:41:01.153747393Z" level=info msg="NRI interface is disabled by configuration."
Nov 12 17:41:01.157842 containerd[2016]: time="2024-11-12T17:41:01.153782109Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Nov 12 17:41:01.165624 coreos-metadata[2128]: Nov 12 17:41:01.158 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1
Nov 12 17:41:01.165624 coreos-metadata[2128]: Nov 12 17:41:01.159 INFO Fetch successful
Nov 12 17:41:01.165624 coreos-metadata[2128]: Nov 12 17:41:01.159 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1
Nov 12 17:41:01.165624 coreos-metadata[2128]: Nov 12 17:41:01.160 INFO Fetch successful
Nov 12 17:41:01.165107 unknown[2128]: wrote ssh authorized keys file for user: core
Nov 12 17:41:01.168243 containerd[2016]: time="2024-11-12T17:41:01.167394237Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Nov 12 17:41:01.168243 containerd[2016]: time="2024-11-12T17:41:01.167518593Z" level=info msg="Connect containerd service"
Nov 12 17:41:01.168243 containerd[2016]: time="2024-11-12T17:41:01.167579985Z" level=info msg="using legacy CRI server"
Nov 12 17:41:01.168243 containerd[2016]: time="2024-11-12T17:41:01.167599065Z"
level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Nov 12 17:41:01.168243 containerd[2016]: time="2024-11-12T17:41:01.167737209Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Nov 12 17:41:01.183348 containerd[2016]: time="2024-11-12T17:41:01.177873345Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 12 17:41:01.183348 containerd[2016]: time="2024-11-12T17:41:01.180482121Z" level=info msg="Start subscribing containerd event" Nov 12 17:41:01.183348 containerd[2016]: time="2024-11-12T17:41:01.180801189Z" level=info msg="Start recovering state" Nov 12 17:41:01.183348 containerd[2016]: time="2024-11-12T17:41:01.183139617Z" level=info msg="Start event monitor" Nov 12 17:41:01.183944 containerd[2016]: time="2024-11-12T17:41:01.183311121Z" level=info msg="Start snapshots syncer" Nov 12 17:41:01.184123 containerd[2016]: time="2024-11-12T17:41:01.183740085Z" level=info msg="Start cni network conf syncer for default" Nov 12 17:41:01.185929 containerd[2016]: time="2024-11-12T17:41:01.185307465Z" level=info msg="Start streaming server" Nov 12 17:41:01.189473 containerd[2016]: time="2024-11-12T17:41:01.184661925Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Nov 12 17:41:01.189473 containerd[2016]: time="2024-11-12T17:41:01.186883437Z" level=info msg=serving... address=/run/containerd/containerd.sock Nov 12 17:41:01.189473 containerd[2016]: time="2024-11-12T17:41:01.187011093Z" level=info msg="containerd successfully booted in 0.288606s" Nov 12 17:41:01.196917 systemd[1]: Started containerd.service - containerd container runtime. 
Nov 12 17:41:01.225239 update-ssh-keys[2178]: Updated "/home/core/.ssh/authorized_keys" Nov 12 17:41:01.230573 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Nov 12 17:41:01.243158 systemd[1]: Finished sshkeys.service. Nov 12 17:41:01.297142 systemd-networkd[1919]: eth0: Gained IPv6LL Nov 12 17:41:01.311923 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Nov 12 17:41:01.317932 systemd[1]: Reached target network-online.target - Network is Online. Nov 12 17:41:01.326478 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Nov 12 17:41:01.341280 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 17:41:01.350640 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Nov 12 17:41:01.452073 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Nov 12 17:41:01.509224 amazon-ssm-agent[2192]: Initializing new seelog logger Nov 12 17:41:01.509708 amazon-ssm-agent[2192]: New Seelog Logger Creation Complete Nov 12 17:41:01.509708 amazon-ssm-agent[2192]: 2024/11/12 17:41:01 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Nov 12 17:41:01.509708 amazon-ssm-agent[2192]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Nov 12 17:41:01.512551 amazon-ssm-agent[2192]: 2024/11/12 17:41:01 processing appconfig overrides Nov 12 17:41:01.513192 amazon-ssm-agent[2192]: 2024/11/12 17:41:01 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Nov 12 17:41:01.513192 amazon-ssm-agent[2192]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Nov 12 17:41:01.513192 amazon-ssm-agent[2192]: 2024/11/12 17:41:01 processing appconfig overrides Nov 12 17:41:01.513192 amazon-ssm-agent[2192]: 2024/11/12 17:41:01 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Nov 12 17:41:01.513192 amazon-ssm-agent[2192]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
Nov 12 17:41:01.513454 amazon-ssm-agent[2192]: 2024/11/12 17:41:01 processing appconfig overrides Nov 12 17:41:01.514458 amazon-ssm-agent[2192]: 2024-11-12 17:41:01 INFO Proxy environment variables: Nov 12 17:41:01.524028 amazon-ssm-agent[2192]: 2024/11/12 17:41:01 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Nov 12 17:41:01.524028 amazon-ssm-agent[2192]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Nov 12 17:41:01.524028 amazon-ssm-agent[2192]: 2024/11/12 17:41:01 processing appconfig overrides Nov 12 17:41:01.620276 amazon-ssm-agent[2192]: 2024-11-12 17:41:01 INFO no_proxy: Nov 12 17:41:01.719544 amazon-ssm-agent[2192]: 2024-11-12 17:41:01 INFO https_proxy: Nov 12 17:41:01.817526 amazon-ssm-agent[2192]: 2024-11-12 17:41:01 INFO http_proxy: Nov 12 17:41:01.916661 amazon-ssm-agent[2192]: 2024-11-12 17:41:01 INFO Checking if agent identity type OnPrem can be assumed Nov 12 17:41:02.017064 amazon-ssm-agent[2192]: 2024-11-12 17:41:01 INFO Checking if agent identity type EC2 can be assumed Nov 12 17:41:02.118288 amazon-ssm-agent[2192]: 2024-11-12 17:41:01 INFO Agent will take identity from EC2 Nov 12 17:41:02.196595 tar[2009]: linux-arm64/LICENSE Nov 12 17:41:02.199183 tar[2009]: linux-arm64/README.md Nov 12 17:41:02.218028 amazon-ssm-agent[2192]: 2024-11-12 17:41:01 INFO [amazon-ssm-agent] using named pipe channel for IPC Nov 12 17:41:02.234926 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
Nov 12 17:41:02.316896 amazon-ssm-agent[2192]: 2024-11-12 17:41:01 INFO [amazon-ssm-agent] using named pipe channel for IPC Nov 12 17:41:02.417395 amazon-ssm-agent[2192]: 2024-11-12 17:41:01 INFO [amazon-ssm-agent] using named pipe channel for IPC Nov 12 17:41:02.455453 amazon-ssm-agent[2192]: 2024-11-12 17:41:01 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Nov 12 17:41:02.457030 amazon-ssm-agent[2192]: 2024-11-12 17:41:01 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Nov 12 17:41:02.457030 amazon-ssm-agent[2192]: 2024-11-12 17:41:01 INFO [amazon-ssm-agent] Starting Core Agent Nov 12 17:41:02.457030 amazon-ssm-agent[2192]: 2024-11-12 17:41:01 INFO [amazon-ssm-agent] registrar detected. Attempting registration Nov 12 17:41:02.457030 amazon-ssm-agent[2192]: 2024-11-12 17:41:01 INFO [Registrar] Starting registrar module Nov 12 17:41:02.457030 amazon-ssm-agent[2192]: 2024-11-12 17:41:01 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Nov 12 17:41:02.457030 amazon-ssm-agent[2192]: 2024-11-12 17:41:02 INFO [EC2Identity] EC2 registration was successful. Nov 12 17:41:02.457030 amazon-ssm-agent[2192]: 2024-11-12 17:41:02 INFO [CredentialRefresher] credentialRefresher has started Nov 12 17:41:02.457030 amazon-ssm-agent[2192]: 2024-11-12 17:41:02 INFO [CredentialRefresher] Starting credentials refresher loop Nov 12 17:41:02.457030 amazon-ssm-agent[2192]: 2024-11-12 17:41:02 INFO EC2RoleProvider Successfully connected with instance profile role credentials Nov 12 17:41:02.516645 amazon-ssm-agent[2192]: 2024-11-12 17:41:02 INFO [CredentialRefresher] Next credential rotation will be in 30.208282592733333 minutes Nov 12 17:41:02.903439 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Nov 12 17:41:02.925166 (kubelet)[2218]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 12 17:41:03.505649 amazon-ssm-agent[2192]: 2024-11-12 17:41:03 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Nov 12 17:41:03.606634 amazon-ssm-agent[2192]: 2024-11-12 17:41:03 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2224) started Nov 12 17:41:03.706900 amazon-ssm-agent[2192]: 2024-11-12 17:41:03 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Nov 12 17:41:04.033444 kubelet[2218]: E1112 17:41:04.033337 2218 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 12 17:41:04.037896 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 12 17:41:04.038297 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 12 17:41:04.038748 systemd[1]: kubelet.service: Consumed 1.311s CPU time. Nov 12 17:41:04.070695 ntpd[1989]: Listen normally on 7 eth0 [fe80::422:20ff:fe30:ae15%2]:123 Nov 12 17:41:04.071478 ntpd[1989]: 12 Nov 17:41:04 ntpd[1989]: Listen normally on 7 eth0 [fe80::422:20ff:fe30:ae15%2]:123 Nov 12 17:41:06.518350 sshd_keygen[2028]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Nov 12 17:41:06.557036 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Nov 12 17:41:06.573725 systemd[1]: Starting issuegen.service - Generate /run/issue... 
Nov 12 17:41:06.581948 systemd[1]: Started sshd@0-172.31.27.255:22-139.178.89.65:54486.service - OpenSSH per-connection server daemon (139.178.89.65:54486). Nov 12 17:41:06.597042 systemd[1]: issuegen.service: Deactivated successfully. Nov 12 17:41:06.602315 systemd[1]: Finished issuegen.service - Generate /run/issue. Nov 12 17:41:06.619341 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Nov 12 17:41:06.644109 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Nov 12 17:41:06.664604 systemd[1]: Started getty@tty1.service - Getty on tty1. Nov 12 17:41:06.669342 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Nov 12 17:41:06.671831 systemd[1]: Reached target getty.target - Login Prompts. Nov 12 17:41:06.673789 systemd[1]: Reached target multi-user.target - Multi-User System. Nov 12 17:41:06.678113 systemd[1]: Startup finished in 1.188s (kernel) + 8.850s (initrd) + 12.076s (userspace) = 22.116s. Nov 12 17:41:06.804423 sshd[2247]: Accepted publickey for core from 139.178.89.65 port 54486 ssh2: RSA SHA256:1a90X/uDC0ILhfMiA2YbbwEMVTxtJewsfiol0dYezPk Nov 12 17:41:06.808433 sshd[2247]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 17:41:06.828520 systemd-logind[1996]: New session 1 of user core. Nov 12 17:41:06.829718 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Nov 12 17:41:06.835528 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Nov 12 17:41:06.871535 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Nov 12 17:41:06.881585 systemd[1]: Starting user@500.service - User Manager for UID 500... Nov 12 17:41:06.899398 (systemd)[2262]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Nov 12 17:41:06.769648 systemd-resolved[1920]: Clock change detected. Flushing caches. 
Nov 12 17:41:06.777592 systemd-journald[1567]: Time jumped backwards, rotating. Nov 12 17:41:06.825535 systemd[2262]: Queued start job for default target default.target. Nov 12 17:41:06.837422 systemd[2262]: Created slice app.slice - User Application Slice. Nov 12 17:41:06.837526 systemd[2262]: Reached target paths.target - Paths. Nov 12 17:41:06.837560 systemd[2262]: Reached target timers.target - Timers. Nov 12 17:41:06.840369 systemd[2262]: Starting dbus.socket - D-Bus User Message Bus Socket... Nov 12 17:41:06.867715 systemd[2262]: Listening on dbus.socket - D-Bus User Message Bus Socket. Nov 12 17:41:06.868211 systemd[2262]: Reached target sockets.target - Sockets. Nov 12 17:41:06.868361 systemd[2262]: Reached target basic.target - Basic System. Nov 12 17:41:06.868553 systemd[2262]: Reached target default.target - Main User Target. Nov 12 17:41:06.868740 systemd[2262]: Startup finished in 258ms. Nov 12 17:41:06.869068 systemd[1]: Started user@500.service - User Manager for UID 500. Nov 12 17:41:06.879114 systemd[1]: Started session-1.scope - Session 1 of User core. Nov 12 17:41:07.043033 systemd[1]: Started sshd@1-172.31.27.255:22-139.178.89.65:53506.service - OpenSSH per-connection server daemon (139.178.89.65:53506). Nov 12 17:41:07.217793 sshd[2274]: Accepted publickey for core from 139.178.89.65 port 53506 ssh2: RSA SHA256:1a90X/uDC0ILhfMiA2YbbwEMVTxtJewsfiol0dYezPk Nov 12 17:41:07.220760 sshd[2274]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 17:41:07.230243 systemd-logind[1996]: New session 2 of user core. Nov 12 17:41:07.240238 systemd[1]: Started session-2.scope - Session 2 of User core. Nov 12 17:41:07.371272 sshd[2274]: pam_unix(sshd:session): session closed for user core Nov 12 17:41:07.378771 systemd[1]: sshd@1-172.31.27.255:22-139.178.89.65:53506.service: Deactivated successfully. Nov 12 17:41:07.382797 systemd[1]: session-2.scope: Deactivated successfully. 
Nov 12 17:41:07.385432 systemd-logind[1996]: Session 2 logged out. Waiting for processes to exit. Nov 12 17:41:07.387613 systemd-logind[1996]: Removed session 2. Nov 12 17:41:07.410465 systemd[1]: Started sshd@2-172.31.27.255:22-139.178.89.65:53514.service - OpenSSH per-connection server daemon (139.178.89.65:53514). Nov 12 17:41:07.604952 sshd[2281]: Accepted publickey for core from 139.178.89.65 port 53514 ssh2: RSA SHA256:1a90X/uDC0ILhfMiA2YbbwEMVTxtJewsfiol0dYezPk Nov 12 17:41:07.608300 sshd[2281]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 17:41:07.618309 systemd-logind[1996]: New session 3 of user core. Nov 12 17:41:07.628647 systemd[1]: Started session-3.scope - Session 3 of User core. Nov 12 17:41:07.752796 sshd[2281]: pam_unix(sshd:session): session closed for user core Nov 12 17:41:07.759346 systemd[1]: sshd@2-172.31.27.255:22-139.178.89.65:53514.service: Deactivated successfully. Nov 12 17:41:07.763044 systemd[1]: session-3.scope: Deactivated successfully. Nov 12 17:41:07.768079 systemd-logind[1996]: Session 3 logged out. Waiting for processes to exit. Nov 12 17:41:07.770657 systemd-logind[1996]: Removed session 3. Nov 12 17:41:07.793477 systemd[1]: Started sshd@3-172.31.27.255:22-139.178.89.65:53524.service - OpenSSH per-connection server daemon (139.178.89.65:53524). Nov 12 17:41:07.985938 sshd[2288]: Accepted publickey for core from 139.178.89.65 port 53524 ssh2: RSA SHA256:1a90X/uDC0ILhfMiA2YbbwEMVTxtJewsfiol0dYezPk Nov 12 17:41:07.989646 sshd[2288]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 17:41:07.999932 systemd-logind[1996]: New session 4 of user core. Nov 12 17:41:08.011233 systemd[1]: Started session-4.scope - Session 4 of User core. Nov 12 17:41:08.145036 sshd[2288]: pam_unix(sshd:session): session closed for user core Nov 12 17:41:08.152075 systemd[1]: sshd@3-172.31.27.255:22-139.178.89.65:53524.service: Deactivated successfully. 
Nov 12 17:41:08.157766 systemd[1]: session-4.scope: Deactivated successfully. Nov 12 17:41:08.159116 systemd-logind[1996]: Session 4 logged out. Waiting for processes to exit. Nov 12 17:41:08.160985 systemd-logind[1996]: Removed session 4. Nov 12 17:41:08.180370 systemd[1]: Started sshd@4-172.31.27.255:22-139.178.89.65:53540.service - OpenSSH per-connection server daemon (139.178.89.65:53540). Nov 12 17:41:08.366068 sshd[2295]: Accepted publickey for core from 139.178.89.65 port 53540 ssh2: RSA SHA256:1a90X/uDC0ILhfMiA2YbbwEMVTxtJewsfiol0dYezPk Nov 12 17:41:08.368128 sshd[2295]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 17:41:08.377192 systemd-logind[1996]: New session 5 of user core. Nov 12 17:41:08.393114 systemd[1]: Started session-5.scope - Session 5 of User core. Nov 12 17:41:08.511357 sudo[2298]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Nov 12 17:41:08.512469 sudo[2298]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 12 17:41:08.532452 sudo[2298]: pam_unix(sudo:session): session closed for user root Nov 12 17:41:08.556686 sshd[2295]: pam_unix(sshd:session): session closed for user core Nov 12 17:41:08.562155 systemd[1]: sshd@4-172.31.27.255:22-139.178.89.65:53540.service: Deactivated successfully. Nov 12 17:41:08.565435 systemd[1]: session-5.scope: Deactivated successfully. Nov 12 17:41:08.567908 systemd-logind[1996]: Session 5 logged out. Waiting for processes to exit. Nov 12 17:41:08.569738 systemd-logind[1996]: Removed session 5. Nov 12 17:41:08.600330 systemd[1]: Started sshd@5-172.31.27.255:22-139.178.89.65:53542.service - OpenSSH per-connection server daemon (139.178.89.65:53542). 
Nov 12 17:41:08.772438 sshd[2303]: Accepted publickey for core from 139.178.89.65 port 53542 ssh2: RSA SHA256:1a90X/uDC0ILhfMiA2YbbwEMVTxtJewsfiol0dYezPk Nov 12 17:41:08.774532 sshd[2303]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 17:41:08.784231 systemd-logind[1996]: New session 6 of user core. Nov 12 17:41:08.792192 systemd[1]: Started session-6.scope - Session 6 of User core. Nov 12 17:41:08.899440 sudo[2307]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Nov 12 17:41:08.900123 sudo[2307]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 12 17:41:08.907213 sudo[2307]: pam_unix(sudo:session): session closed for user root Nov 12 17:41:08.917908 sudo[2306]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Nov 12 17:41:08.918559 sudo[2306]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 12 17:41:08.944373 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Nov 12 17:41:08.948907 auditctl[2310]: No rules Nov 12 17:41:08.948928 systemd[1]: audit-rules.service: Deactivated successfully. Nov 12 17:41:08.949282 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Nov 12 17:41:08.964904 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Nov 12 17:41:09.006613 augenrules[2328]: No rules Nov 12 17:41:09.009126 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Nov 12 17:41:09.011910 sudo[2306]: pam_unix(sudo:session): session closed for user root Nov 12 17:41:09.036135 sshd[2303]: pam_unix(sshd:session): session closed for user core Nov 12 17:41:09.042274 systemd[1]: sshd@5-172.31.27.255:22-139.178.89.65:53542.service: Deactivated successfully. Nov 12 17:41:09.045339 systemd[1]: session-6.scope: Deactivated successfully. 
Nov 12 17:41:09.048977 systemd-logind[1996]: Session 6 logged out. Waiting for processes to exit. Nov 12 17:41:09.050900 systemd-logind[1996]: Removed session 6. Nov 12 17:41:09.070107 systemd[1]: Started sshd@6-172.31.27.255:22-139.178.89.65:53558.service - OpenSSH per-connection server daemon (139.178.89.65:53558). Nov 12 17:41:09.252314 sshd[2336]: Accepted publickey for core from 139.178.89.65 port 53558 ssh2: RSA SHA256:1a90X/uDC0ILhfMiA2YbbwEMVTxtJewsfiol0dYezPk Nov 12 17:41:09.255345 sshd[2336]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 17:41:09.264206 systemd-logind[1996]: New session 7 of user core. Nov 12 17:41:09.275132 systemd[1]: Started session-7.scope - Session 7 of User core. Nov 12 17:41:09.378308 sudo[2339]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Nov 12 17:41:09.379634 sudo[2339]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 12 17:41:09.824661 systemd[1]: Starting docker.service - Docker Application Container Engine... Nov 12 17:41:09.825769 (dockerd)[2355]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Nov 12 17:41:10.186666 dockerd[2355]: time="2024-11-12T17:41:10.186489144Z" level=info msg="Starting up" Nov 12 17:41:10.291802 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport3552393659-merged.mount: Deactivated successfully. Nov 12 17:41:10.330568 systemd[1]: var-lib-docker-metacopy\x2dcheck3496117477-merged.mount: Deactivated successfully. Nov 12 17:41:10.343550 dockerd[2355]: time="2024-11-12T17:41:10.343455733Z" level=info msg="Loading containers: start." Nov 12 17:41:10.497882 kernel: Initializing XFRM netlink socket Nov 12 17:41:10.529140 (udev-worker)[2376]: Network interface NamePolicy= disabled on kernel command line. 
Nov 12 17:41:10.621589 systemd-networkd[1919]: docker0: Link UP Nov 12 17:41:10.644309 dockerd[2355]: time="2024-11-12T17:41:10.644239803Z" level=info msg="Loading containers: done." Nov 12 17:41:10.671989 dockerd[2355]: time="2024-11-12T17:41:10.671924895Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Nov 12 17:41:10.672303 dockerd[2355]: time="2024-11-12T17:41:10.672071379Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Nov 12 17:41:10.672303 dockerd[2355]: time="2024-11-12T17:41:10.672280035Z" level=info msg="Daemon has completed initialization" Nov 12 17:41:10.726952 dockerd[2355]: time="2024-11-12T17:41:10.726756471Z" level=info msg="API listen on /run/docker.sock" Nov 12 17:41:10.727060 systemd[1]: Started docker.service - Docker Application Container Engine. Nov 12 17:41:11.285781 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1410291600-merged.mount: Deactivated successfully. Nov 12 17:41:11.799813 containerd[2016]: time="2024-11-12T17:41:11.799339156Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.10\"" Nov 12 17:41:13.987573 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Nov 12 17:41:14.001256 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 17:41:14.266452 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Nov 12 17:41:14.284539 (kubelet)[2507]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 12 17:41:14.378129 kubelet[2507]: E1112 17:41:14.378002 2507 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 12 17:41:14.388061 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 12 17:41:14.388964 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 12 17:41:14.509684 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount644518787.mount: Deactivated successfully. Nov 12 17:41:17.154019 containerd[2016]: time="2024-11-12T17:41:17.153949603Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:41:17.156124 containerd[2016]: time="2024-11-12T17:41:17.156054847Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.10: active requests=0, bytes read=32201615" Nov 12 17:41:17.156875 containerd[2016]: time="2024-11-12T17:41:17.156515839Z" level=info msg="ImageCreate event name:\"sha256:001ac07c2bb7d0e08d405a19d935c926c393c971a2801756755b8958a7306ca0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:41:17.162232 containerd[2016]: time="2024-11-12T17:41:17.162144811Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:b4362c227fb9a8e1961e17bc5cb55e3fea4414da9936d71663d223d7eda23669\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:41:17.164941 containerd[2016]: time="2024-11-12T17:41:17.164590879Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.10\" with image id 
\"sha256:001ac07c2bb7d0e08d405a19d935c926c393c971a2801756755b8958a7306ca0\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:b4362c227fb9a8e1961e17bc5cb55e3fea4414da9936d71663d223d7eda23669\", size \"32198415\" in 5.365183479s" Nov 12 17:41:17.164941 containerd[2016]: time="2024-11-12T17:41:17.164660875Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.10\" returns image reference \"sha256:001ac07c2bb7d0e08d405a19d935c926c393c971a2801756755b8958a7306ca0\"" Nov 12 17:41:17.202555 containerd[2016]: time="2024-11-12T17:41:17.202488259Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.10\"" Nov 12 17:41:20.696907 containerd[2016]: time="2024-11-12T17:41:20.696822589Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:41:20.697538 containerd[2016]: time="2024-11-12T17:41:20.697494157Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.10: active requests=0, bytes read=29381044" Nov 12 17:41:20.699682 containerd[2016]: time="2024-11-12T17:41:20.699582517Z" level=info msg="ImageCreate event name:\"sha256:27bef186b28e50ade2a010ef9201877431fb732ef6e370cb79149e8bd65220d7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:41:20.705398 containerd[2016]: time="2024-11-12T17:41:20.705300193Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:d74524a4d9d071510c5abb6404bf4daf2609510d8d5f0683e1efd83d69176647\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:41:20.708060 containerd[2016]: time="2024-11-12T17:41:20.707682805Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.10\" with image id \"sha256:27bef186b28e50ade2a010ef9201877431fb732ef6e370cb79149e8bd65220d7\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.10\", repo 
digest \"registry.k8s.io/kube-controller-manager@sha256:d74524a4d9d071510c5abb6404bf4daf2609510d8d5f0683e1efd83d69176647\", size \"30783669\" in 3.505128594s" Nov 12 17:41:20.708060 containerd[2016]: time="2024-11-12T17:41:20.707745721Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.10\" returns image reference \"sha256:27bef186b28e50ade2a010ef9201877431fb732ef6e370cb79149e8bd65220d7\"" Nov 12 17:41:20.748438 containerd[2016]: time="2024-11-12T17:41:20.748128661Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.10\"" Nov 12 17:41:22.724882 containerd[2016]: time="2024-11-12T17:41:22.723014679Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:41:22.725748 containerd[2016]: time="2024-11-12T17:41:22.725703975Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.10: active requests=0, bytes read=15770288" Nov 12 17:41:22.726818 containerd[2016]: time="2024-11-12T17:41:22.726774063Z" level=info msg="ImageCreate event name:\"sha256:a8e5012443313f8a99b528b68845e2bcb151785ed5c057613dad7ca5b03c7e60\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:41:22.732633 containerd[2016]: time="2024-11-12T17:41:22.732565443Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:41f2fb005da3fa5512bfc7f267a6f08aaea27c9f7c6d9a93c7ee28607c1f2f77\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:41:22.735288 containerd[2016]: time="2024-11-12T17:41:22.735234723Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.10\" with image id \"sha256:a8e5012443313f8a99b528b68845e2bcb151785ed5c057613dad7ca5b03c7e60\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:41f2fb005da3fa5512bfc7f267a6f08aaea27c9f7c6d9a93c7ee28607c1f2f77\", size \"17172931\" in 1.98704455s" Nov 12 17:41:22.735452 
containerd[2016]: time="2024-11-12T17:41:22.735421143Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.10\" returns image reference \"sha256:a8e5012443313f8a99b528b68845e2bcb151785ed5c057613dad7ca5b03c7e60\"" Nov 12 17:41:22.774114 containerd[2016]: time="2024-11-12T17:41:22.774040407Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.10\"" Nov 12 17:41:24.102870 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount957646878.mount: Deactivated successfully. Nov 12 17:41:24.639395 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Nov 12 17:41:24.651204 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 17:41:24.954552 containerd[2016]: time="2024-11-12T17:41:24.954480666Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:41:24.956599 containerd[2016]: time="2024-11-12T17:41:24.956539014Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.10: active requests=0, bytes read=25272229" Nov 12 17:41:24.958041 containerd[2016]: time="2024-11-12T17:41:24.957981030Z" level=info msg="ImageCreate event name:\"sha256:4e66440765478454d48b169d648b000501e24066c0bad7c378bd9e8506bb919f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:41:24.964871 containerd[2016]: time="2024-11-12T17:41:24.962708850Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:3c5ceb7942f21793d4cb5880bc0ed7ca7d7f93318fc3f0830816593b86aa19d8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:41:24.964871 containerd[2016]: time="2024-11-12T17:41:24.964742442Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.10\" with image id \"sha256:4e66440765478454d48b169d648b000501e24066c0bad7c378bd9e8506bb919f\", repo tag \"registry.k8s.io/kube-proxy:v1.29.10\", repo digest 
\"registry.k8s.io/kube-proxy@sha256:3c5ceb7942f21793d4cb5880bc0ed7ca7d7f93318fc3f0830816593b86aa19d8\", size \"25271248\" in 2.190467567s" Nov 12 17:41:24.964871 containerd[2016]: time="2024-11-12T17:41:24.964802046Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.10\" returns image reference \"sha256:4e66440765478454d48b169d648b000501e24066c0bad7c378bd9e8506bb919f\"" Nov 12 17:41:24.975159 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 17:41:24.985730 (kubelet)[2606]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 12 17:41:25.014635 containerd[2016]: time="2024-11-12T17:41:25.014565590Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Nov 12 17:41:25.080990 kubelet[2606]: E1112 17:41:25.080869 2606 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 12 17:41:25.087135 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 12 17:41:25.087466 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 12 17:41:25.634622 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4239517193.mount: Deactivated successfully. 
Nov 12 17:41:26.672614 containerd[2016]: time="2024-11-12T17:41:26.672161526Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 17:41:26.674376 containerd[2016]: time="2024-11-12T17:41:26.674306682Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485381"
Nov 12 17:41:26.675396 containerd[2016]: time="2024-11-12T17:41:26.675351954Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 17:41:26.681345 containerd[2016]: time="2024-11-12T17:41:26.681249690Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 17:41:26.683931 containerd[2016]: time="2024-11-12T17:41:26.683721042Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.668666248s"
Nov 12 17:41:26.683931 containerd[2016]: time="2024-11-12T17:41:26.683776410Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\""
Nov 12 17:41:26.723574 containerd[2016]: time="2024-11-12T17:41:26.723513835Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Nov 12 17:41:27.242149 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount738747940.mount: Deactivated successfully.
Nov 12 17:41:27.248639 containerd[2016]: time="2024-11-12T17:41:27.248574989Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 17:41:27.250307 containerd[2016]: time="2024-11-12T17:41:27.250246145Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268821"
Nov 12 17:41:27.250995 containerd[2016]: time="2024-11-12T17:41:27.250721177Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 17:41:27.255186 containerd[2016]: time="2024-11-12T17:41:27.255099113Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 17:41:27.257208 containerd[2016]: time="2024-11-12T17:41:27.257017325Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 533.443682ms"
Nov 12 17:41:27.257208 containerd[2016]: time="2024-11-12T17:41:27.257069165Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\""
Nov 12 17:41:27.298305 containerd[2016]: time="2024-11-12T17:41:27.298231445Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\""
Nov 12 17:41:27.879631 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1389239526.mount: Deactivated successfully.
Nov 12 17:41:30.770195 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Nov 12 17:41:32.044510 containerd[2016]: time="2024-11-12T17:41:32.044428641Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 17:41:32.046726 containerd[2016]: time="2024-11-12T17:41:32.046655685Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=65200786"
Nov 12 17:41:32.048087 containerd[2016]: time="2024-11-12T17:41:32.048019617Z" level=info msg="ImageCreate event name:\"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 17:41:32.054206 containerd[2016]: time="2024-11-12T17:41:32.054123201Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 17:41:32.056704 containerd[2016]: time="2024-11-12T17:41:32.056656449Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"65198393\" in 4.75836642s"
Nov 12 17:41:32.056999 containerd[2016]: time="2024-11-12T17:41:32.056853705Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\""
Nov 12 17:41:35.285770 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Nov 12 17:41:35.295005 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 12 17:41:35.580269 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 12 17:41:35.589338 (kubelet)[2793]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Nov 12 17:41:35.672121 kubelet[2793]: E1112 17:41:35.672032 2793 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Nov 12 17:41:35.677619 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Nov 12 17:41:35.678003 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Nov 12 17:41:40.323930 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 12 17:41:40.334366 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 12 17:41:40.380325 systemd[1]: Reloading requested from client PID 2807 ('systemctl') (unit session-7.scope)...
Nov 12 17:41:40.380361 systemd[1]: Reloading...
Nov 12 17:41:40.623884 zram_generator::config[2850]: No configuration found.
Nov 12 17:41:40.850045 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Nov 12 17:41:41.020494 systemd[1]: Reloading finished in 639 ms.
Nov 12 17:41:41.101374 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Nov 12 17:41:41.101545 systemd[1]: kubelet.service: Failed with result 'signal'.
Nov 12 17:41:41.102699 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 12 17:41:41.118074 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 12 17:41:41.384502 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 12 17:41:41.400432 (kubelet)[2909]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Nov 12 17:41:41.481175 kubelet[2909]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 12 17:41:41.482872 kubelet[2909]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Nov 12 17:41:41.482872 kubelet[2909]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 12 17:41:41.482872 kubelet[2909]: I1112 17:41:41.481772 2909 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Nov 12 17:41:42.207315 kubelet[2909]: I1112 17:41:42.207272 2909 server.go:487] "Kubelet version" kubeletVersion="v1.29.2"
Nov 12 17:41:42.207530 kubelet[2909]: I1112 17:41:42.207506 2909 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Nov 12 17:41:42.207996 kubelet[2909]: I1112 17:41:42.207958 2909 server.go:919] "Client rotation is on, will bootstrap in background"
Nov 12 17:41:42.237187 kubelet[2909]: I1112 17:41:42.237133 2909 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Nov 12 17:41:42.237564 kubelet[2909]: E1112 17:41:42.237524 2909 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.27.255:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.27.255:6443: connect: connection refused
Nov 12 17:41:42.252498 kubelet[2909]: I1112 17:41:42.252459 2909 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Nov 12 17:41:42.253144 kubelet[2909]: I1112 17:41:42.253119 2909 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Nov 12 17:41:42.253564 kubelet[2909]: I1112 17:41:42.253531 2909 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Nov 12 17:41:42.253744 kubelet[2909]: I1112 17:41:42.253724 2909 topology_manager.go:138] "Creating topology manager with none policy"
Nov 12 17:41:42.253887 kubelet[2909]: I1112 17:41:42.253830 2909 container_manager_linux.go:301] "Creating device plugin manager"
Nov 12 17:41:42.255147 kubelet[2909]: I1112 17:41:42.255119 2909 state_mem.go:36] "Initialized new in-memory state store"
Nov 12 17:41:42.259893 kubelet[2909]: I1112 17:41:42.259858 2909 kubelet.go:396] "Attempting to sync node with API server"
Nov 12 17:41:42.260052 kubelet[2909]: I1112 17:41:42.260032 2909 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Nov 12 17:41:42.260175 kubelet[2909]: I1112 17:41:42.260156 2909 kubelet.go:312] "Adding apiserver pod source"
Nov 12 17:41:42.260301 kubelet[2909]: I1112 17:41:42.260278 2909 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Nov 12 17:41:42.263609 kubelet[2909]: W1112 17:41:42.263528 2909 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://172.31.27.255:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-27-255&limit=500&resourceVersion=0": dial tcp 172.31.27.255:6443: connect: connection refused
Nov 12 17:41:42.263864 kubelet[2909]: E1112 17:41:42.263818 2909 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.27.255:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-27-255&limit=500&resourceVersion=0": dial tcp 172.31.27.255:6443: connect: connection refused
Nov 12 17:41:42.265763 kubelet[2909]: I1112 17:41:42.265712 2909 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Nov 12 17:41:42.267906 kubelet[2909]: I1112 17:41:42.266720 2909 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Nov 12 17:41:42.267906 kubelet[2909]: W1112 17:41:42.266833 2909 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Nov 12 17:41:42.271039 kubelet[2909]: W1112 17:41:42.270951 2909 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://172.31.27.255:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.27.255:6443: connect: connection refused
Nov 12 17:41:42.271039 kubelet[2909]: E1112 17:41:42.271047 2909 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.27.255:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.27.255:6443: connect: connection refused
Nov 12 17:41:42.272507 kubelet[2909]: I1112 17:41:42.272456 2909 server.go:1256] "Started kubelet"
Nov 12 17:41:42.277305 kubelet[2909]: I1112 17:41:42.277260 2909 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Nov 12 17:41:42.279082 kubelet[2909]: I1112 17:41:42.279023 2909 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Nov 12 17:41:42.280443 kubelet[2909]: I1112 17:41:42.280385 2909 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Nov 12 17:41:42.284062 kubelet[2909]: I1112 17:41:42.283967 2909 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Nov 12 17:41:42.287260 kubelet[2909]: I1112 17:41:42.287193 2909 server.go:461] "Adding debug handlers to kubelet server"
Nov 12 17:41:42.292925 kubelet[2909]: E1112 17:41:42.292820 2909 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.27.255:6443/api/v1/namespaces/default/events\": dial tcp 172.31.27.255:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-27-255.1807496a363a96a0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-27-255,UID:ip-172-31-27-255,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-27-255,},FirstTimestamp:2024-11-12 17:41:42.272415392 +0000 UTC m=+0.862632677,LastTimestamp:2024-11-12 17:41:42.272415392 +0000 UTC m=+0.862632677,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-27-255,}"
Nov 12 17:41:42.293446 kubelet[2909]: I1112 17:41:42.293402 2909 volume_manager.go:291] "Starting Kubelet Volume Manager"
Nov 12 17:41:42.295664 kubelet[2909]: I1112 17:41:42.295606 2909 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Nov 12 17:41:42.296881 kubelet[2909]: I1112 17:41:42.296007 2909 reconciler_new.go:29] "Reconciler: start to sync state"
Nov 12 17:41:42.297554 kubelet[2909]: W1112 17:41:42.297419 2909 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://172.31.27.255:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.27.255:6443: connect: connection refused
Nov 12 17:41:42.297672 kubelet[2909]: E1112 17:41:42.297569 2909 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.27.255:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.27.255:6443: connect: connection refused
Nov 12 17:41:42.298050 kubelet[2909]: E1112 17:41:42.298003 2909 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.27.255:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-27-255?timeout=10s\": dial tcp 172.31.27.255:6443: connect: connection refused" interval="200ms"
Nov 12 17:41:42.298128 kubelet[2909]: E1112 17:41:42.298116 2909 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Nov 12 17:41:42.299312 kubelet[2909]: I1112 17:41:42.299245 2909 factory.go:221] Registration of the systemd container factory successfully
Nov 12 17:41:42.299543 kubelet[2909]: I1112 17:41:42.299486 2909 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Nov 12 17:41:42.302418 kubelet[2909]: I1112 17:41:42.302368 2909 factory.go:221] Registration of the containerd container factory successfully
Nov 12 17:41:42.329628 kubelet[2909]: I1112 17:41:42.329578 2909 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Nov 12 17:41:42.332623 kubelet[2909]: I1112 17:41:42.332411 2909 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Nov 12 17:41:42.332623 kubelet[2909]: I1112 17:41:42.332464 2909 status_manager.go:217] "Starting to sync pod status with apiserver"
Nov 12 17:41:42.332623 kubelet[2909]: I1112 17:41:42.332501 2909 kubelet.go:2329] "Starting kubelet main sync loop"
Nov 12 17:41:42.332623 kubelet[2909]: E1112 17:41:42.332604 2909 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Nov 12 17:41:42.335693 kubelet[2909]: W1112 17:41:42.335630 2909 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://172.31.27.255:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.27.255:6443: connect: connection refused
Nov 12 17:41:42.335855 kubelet[2909]: E1112 17:41:42.335712 2909 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.27.255:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.27.255:6443: connect: connection refused
Nov 12 17:41:42.339938 kubelet[2909]: I1112 17:41:42.339683 2909 cpu_manager.go:214] "Starting CPU manager" policy="none"
Nov 12 17:41:42.339938 kubelet[2909]: I1112 17:41:42.339723 2909 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Nov 12 17:41:42.339938 kubelet[2909]: I1112 17:41:42.339754 2909 state_mem.go:36] "Initialized new in-memory state store"
Nov 12 17:41:42.343749 kubelet[2909]: I1112 17:41:42.343580 2909 policy_none.go:49] "None policy: Start"
Nov 12 17:41:42.345340 kubelet[2909]: I1112 17:41:42.345305 2909 memory_manager.go:170] "Starting memorymanager" policy="None"
Nov 12 17:41:42.345445 kubelet[2909]: I1112 17:41:42.345416 2909 state_mem.go:35] "Initializing new in-memory state store"
Nov 12 17:41:42.358563 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Nov 12 17:41:42.376210 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Nov 12 17:41:42.394079 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Nov 12 17:41:42.399008 kubelet[2909]: I1112 17:41:42.397582 2909 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-27-255"
Nov 12 17:41:42.399008 kubelet[2909]: I1112 17:41:42.398020 2909 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Nov 12 17:41:42.399008 kubelet[2909]: E1112 17:41:42.398142 2909 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.27.255:6443/api/v1/nodes\": dial tcp 172.31.27.255:6443: connect: connection refused" node="ip-172-31-27-255"
Nov 12 17:41:42.399008 kubelet[2909]: I1112 17:41:42.398372 2909 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Nov 12 17:41:42.402611 kubelet[2909]: E1112 17:41:42.402547 2909 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-27-255\" not found"
Nov 12 17:41:42.433353 kubelet[2909]: I1112 17:41:42.433293 2909 topology_manager.go:215] "Topology Admit Handler" podUID="f6ecf71268f35ce1f8b50f14dc1e3eac" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-27-255"
Nov 12 17:41:42.435588 kubelet[2909]: I1112 17:41:42.435531 2909 topology_manager.go:215] "Topology Admit Handler" podUID="09dfd1f2cd84dae995137b1dfd3320e7" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-27-255"
Nov 12 17:41:42.438010 kubelet[2909]: I1112 17:41:42.437634 2909 topology_manager.go:215] "Topology Admit Handler" podUID="132967031a93fdd9c3dedc672b2a372f" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-27-255"
Nov 12 17:41:42.453549 systemd[1]: Created slice kubepods-burstable-podf6ecf71268f35ce1f8b50f14dc1e3eac.slice - libcontainer container kubepods-burstable-podf6ecf71268f35ce1f8b50f14dc1e3eac.slice.
Nov 12 17:41:42.476275 systemd[1]: Created slice kubepods-burstable-pod09dfd1f2cd84dae995137b1dfd3320e7.slice - libcontainer container kubepods-burstable-pod09dfd1f2cd84dae995137b1dfd3320e7.slice.
Nov 12 17:41:42.496480 kubelet[2909]: I1112 17:41:42.496440 2909 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/09dfd1f2cd84dae995137b1dfd3320e7-k8s-certs\") pod \"kube-controller-manager-ip-172-31-27-255\" (UID: \"09dfd1f2cd84dae995137b1dfd3320e7\") " pod="kube-system/kube-controller-manager-ip-172-31-27-255"
Nov 12 17:41:42.497642 kubelet[2909]: I1112 17:41:42.497157 2909 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/09dfd1f2cd84dae995137b1dfd3320e7-kubeconfig\") pod \"kube-controller-manager-ip-172-31-27-255\" (UID: \"09dfd1f2cd84dae995137b1dfd3320e7\") " pod="kube-system/kube-controller-manager-ip-172-31-27-255"
Nov 12 17:41:42.497642 kubelet[2909]: I1112 17:41:42.497269 2909 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/132967031a93fdd9c3dedc672b2a372f-kubeconfig\") pod \"kube-scheduler-ip-172-31-27-255\" (UID: \"132967031a93fdd9c3dedc672b2a372f\") " pod="kube-system/kube-scheduler-ip-172-31-27-255"
Nov 12 17:41:42.497642 kubelet[2909]: I1112 17:41:42.497316 2909 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f6ecf71268f35ce1f8b50f14dc1e3eac-ca-certs\") pod \"kube-apiserver-ip-172-31-27-255\" (UID: \"f6ecf71268f35ce1f8b50f14dc1e3eac\") " pod="kube-system/kube-apiserver-ip-172-31-27-255"
Nov 12 17:41:42.497642 kubelet[2909]: I1112 17:41:42.497375 2909 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f6ecf71268f35ce1f8b50f14dc1e3eac-k8s-certs\") pod \"kube-apiserver-ip-172-31-27-255\" (UID: \"f6ecf71268f35ce1f8b50f14dc1e3eac\") " pod="kube-system/kube-apiserver-ip-172-31-27-255"
Nov 12 17:41:42.497642 kubelet[2909]: I1112 17:41:42.497436 2909 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/09dfd1f2cd84dae995137b1dfd3320e7-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-27-255\" (UID: \"09dfd1f2cd84dae995137b1dfd3320e7\") " pod="kube-system/kube-controller-manager-ip-172-31-27-255"
Nov 12 17:41:42.498073 kubelet[2909]: I1112 17:41:42.497481 2909 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f6ecf71268f35ce1f8b50f14dc1e3eac-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-27-255\" (UID: \"f6ecf71268f35ce1f8b50f14dc1e3eac\") " pod="kube-system/kube-apiserver-ip-172-31-27-255"
Nov 12 17:41:42.498073 kubelet[2909]: I1112 17:41:42.497530 2909 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/09dfd1f2cd84dae995137b1dfd3320e7-ca-certs\") pod \"kube-controller-manager-ip-172-31-27-255\" (UID: \"09dfd1f2cd84dae995137b1dfd3320e7\") " pod="kube-system/kube-controller-manager-ip-172-31-27-255"
Nov 12 17:41:42.498073 kubelet[2909]: I1112 17:41:42.497577 2909 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/09dfd1f2cd84dae995137b1dfd3320e7-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-27-255\" (UID: \"09dfd1f2cd84dae995137b1dfd3320e7\") " pod="kube-system/kube-controller-manager-ip-172-31-27-255"
Nov 12 17:41:42.499779 kubelet[2909]: E1112 17:41:42.499724 2909 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.27.255:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-27-255?timeout=10s\": dial tcp 172.31.27.255:6443: connect: connection refused" interval="400ms"
Nov 12 17:41:42.501467 systemd[1]: Created slice kubepods-burstable-pod132967031a93fdd9c3dedc672b2a372f.slice - libcontainer container kubepods-burstable-pod132967031a93fdd9c3dedc672b2a372f.slice.
Nov 12 17:41:42.600599 kubelet[2909]: I1112 17:41:42.600522 2909 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-27-255"
Nov 12 17:41:42.601151 kubelet[2909]: E1112 17:41:42.601120 2909 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.27.255:6443/api/v1/nodes\": dial tcp 172.31.27.255:6443: connect: connection refused" node="ip-172-31-27-255"
Nov 12 17:41:42.770774 containerd[2016]: time="2024-11-12T17:41:42.770625898Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-27-255,Uid:f6ecf71268f35ce1f8b50f14dc1e3eac,Namespace:kube-system,Attempt:0,}"
Nov 12 17:41:42.796662 containerd[2016]: time="2024-11-12T17:41:42.796605346Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-27-255,Uid:09dfd1f2cd84dae995137b1dfd3320e7,Namespace:kube-system,Attempt:0,}"
Nov 12 17:41:42.807118 containerd[2016]: time="2024-11-12T17:41:42.807033455Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-27-255,Uid:132967031a93fdd9c3dedc672b2a372f,Namespace:kube-system,Attempt:0,}"
Nov 12 17:41:42.900392 kubelet[2909]: E1112 17:41:42.900319 2909 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.27.255:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-27-255?timeout=10s\": dial tcp 172.31.27.255:6443: connect: connection refused" interval="800ms"
Nov 12 17:41:43.009327 kubelet[2909]: I1112 17:41:43.009270 2909 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-27-255"
Nov 12 17:41:43.010040 kubelet[2909]: E1112 17:41:43.009978 2909 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.27.255:6443/api/v1/nodes\": dial tcp 172.31.27.255:6443: connect: connection refused" node="ip-172-31-27-255"
Nov 12 17:41:43.129243 kubelet[2909]: W1112 17:41:43.129150 2909 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://172.31.27.255:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.27.255:6443: connect: connection refused
Nov 12 17:41:43.129243 kubelet[2909]: E1112 17:41:43.129213 2909 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.27.255:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.27.255:6443: connect: connection refused
Nov 12 17:41:43.238268 kubelet[2909]: W1112 17:41:43.238148 2909 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://172.31.27.255:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-27-255&limit=500&resourceVersion=0": dial tcp 172.31.27.255:6443: connect: connection refused
Nov 12 17:41:43.238268 kubelet[2909]: E1112 17:41:43.238235 2909 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.27.255:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-27-255&limit=500&resourceVersion=0": dial tcp 172.31.27.255:6443: connect: connection refused
Nov 12 17:41:43.463264 kubelet[2909]: W1112 17:41:43.463065 2909 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://172.31.27.255:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.27.255:6443: connect: connection refused
Nov 12 17:41:43.463264 kubelet[2909]: E1112 17:41:43.463155 2909 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.27.255:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.27.255:6443: connect: connection refused
Nov 12 17:41:43.673966 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2174443512.mount: Deactivated successfully.
Nov 12 17:41:43.678812 containerd[2016]: time="2024-11-12T17:41:43.678715115Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Nov 12 17:41:43.683625 containerd[2016]: time="2024-11-12T17:41:43.683539187Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173"
Nov 12 17:41:43.684891 containerd[2016]: time="2024-11-12T17:41:43.684506855Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Nov 12 17:41:43.686483 containerd[2016]: time="2024-11-12T17:41:43.686266091Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Nov 12 17:41:43.687243 containerd[2016]: time="2024-11-12T17:41:43.687125951Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Nov 12 17:41:43.688053 containerd[2016]: time="2024-11-12T17:41:43.687949379Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Nov 12 17:41:43.689895 containerd[2016]: time="2024-11-12T17:41:43.688870727Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Nov 12 17:41:43.694973 containerd[2016]: time="2024-11-12T17:41:43.694915163Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Nov 12 17:41:43.701354 kubelet[2909]: E1112 17:41:43.701310 2909 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.27.255:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-27-255?timeout=10s\": dial tcp 172.31.27.255:6443: connect: connection refused" interval="1.6s"
Nov 12 17:41:43.703908 containerd[2016]: time="2024-11-12T17:41:43.703299287Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 932.560901ms"
Nov 12 17:41:43.707358 containerd[2016]: time="2024-11-12T17:41:43.707282675Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 910.566245ms"
Nov 12 17:41:43.713823 containerd[2016]: time="2024-11-12T17:41:43.713652851Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 906.47488ms"
Nov 12 17:41:43.813677 kubelet[2909]: I1112 17:41:43.813299 2909 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-27-255"
Nov 12 17:41:43.814799 kubelet[2909]: E1112 17:41:43.814719 2909 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.27.255:6443/api/v1/nodes\": dial tcp 172.31.27.255:6443: connect: connection refused" node="ip-172-31-27-255"
Nov 12 17:41:43.816603 kubelet[2909]: W1112 17:41:43.816506 2909 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://172.31.27.255:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.27.255:6443: connect: connection refused
Nov 12 17:41:43.816603 kubelet[2909]: E1112 17:41:43.816569 2909 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.27.255:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.27.255:6443: connect: connection refused
Nov 12 17:41:43.885918 containerd[2016]: time="2024-11-12T17:41:43.884979444Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 12 17:41:43.885918 containerd[2016]: time="2024-11-12T17:41:43.885076728Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 12 17:41:43.885918 containerd[2016]: time="2024-11-12T17:41:43.885128724Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 12 17:41:43.885918 containerd[2016]: time="2024-11-12T17:41:43.885276336Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..."
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 17:41:43.900448 containerd[2016]: time="2024-11-12T17:41:43.899708736Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 17:41:43.900448 containerd[2016]: time="2024-11-12T17:41:43.900124176Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 17:41:43.900448 containerd[2016]: time="2024-11-12T17:41:43.900303420Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 17:41:43.901222 containerd[2016]: time="2024-11-12T17:41:43.901125588Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 17:41:43.910081 containerd[2016]: time="2024-11-12T17:41:43.909704256Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 17:41:43.910081 containerd[2016]: time="2024-11-12T17:41:43.909863040Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 17:41:43.910081 containerd[2016]: time="2024-11-12T17:41:43.909903048Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 17:41:43.911813 containerd[2016]: time="2024-11-12T17:41:43.911384784Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 17:41:43.936451 systemd[1]: Started cri-containerd-49e07a9300efc4cca3968fba7dd975369a28a2053738b5592c24f3c7f6ea0604.scope - libcontainer container 49e07a9300efc4cca3968fba7dd975369a28a2053738b5592c24f3c7f6ea0604. 
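The kubelet records interleaved above carry klog's standard header: a severity letter (`I`/`W`/`E`/`F`), `mmdd`, a microsecond timestamp, a thread id, and `file:line`, e.g. `E1112 17:41:43.009978 2909 kubelet_node_status.go:96]`. A minimal sketch of a parser for that header — the function and field names here are illustrative, not part of any real tool:

```python
import re
from typing import Optional

# klog header: L (I/W/E/F), mmdd, hh:mm:ss.uuuuuu, thread id, file:line, then "] msg"
_KLOG_RE = re.compile(
    r"^(?P<level>[IWEF])"
    r"(?P<month>\d{2})(?P<day>\d{2})\s+"
    r"(?P<time>\d{2}:\d{2}:\d{2}\.\d+)\s+"
    r"(?P<pid>\d+)\s+"
    r"(?P<source>\S+:\d+)\]\s?(?P<msg>.*)$"
)

def parse_klog(line: str) -> Optional[dict]:
    """Split a klog-formatted line into its header fields; None if it doesn't match."""
    m = _KLOG_RE.match(line)
    if m is None:
        return None
    d = m.groupdict()
    d["month"], d["day"], d["pid"] = int(d["month"]), int(d["day"]), int(d["pid"])
    return d
```

For instance, `parse_klog('E1112 17:41:43.009978 2909 kubelet_node_status.go:96] "Unable to register node with API server"')` yields level `E` from PID 2909 at `kubelet_node_status.go:96`, matching the registration failures logged above.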
Nov 12 17:41:43.991138 systemd[1]: Started cri-containerd-90371a8754523d695a65c32f84de46dafa549470f94e49aa9f021bad40e9aa91.scope - libcontainer container 90371a8754523d695a65c32f84de46dafa549470f94e49aa9f021bad40e9aa91. Nov 12 17:41:44.007448 systemd[1]: Started cri-containerd-06db09b268470b23cf982d53c415fbf7a33b006988ae9585ba6346f28df91fd6.scope - libcontainer container 06db09b268470b23cf982d53c415fbf7a33b006988ae9585ba6346f28df91fd6. Nov 12 17:41:44.075218 containerd[2016]: time="2024-11-12T17:41:44.075136317Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-27-255,Uid:09dfd1f2cd84dae995137b1dfd3320e7,Namespace:kube-system,Attempt:0,} returns sandbox id \"49e07a9300efc4cca3968fba7dd975369a28a2053738b5592c24f3c7f6ea0604\"" Nov 12 17:41:44.087947 containerd[2016]: time="2024-11-12T17:41:44.087716577Z" level=info msg="CreateContainer within sandbox \"49e07a9300efc4cca3968fba7dd975369a28a2053738b5592c24f3c7f6ea0604\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Nov 12 17:41:44.123396 containerd[2016]: time="2024-11-12T17:41:44.122077305Z" level=info msg="CreateContainer within sandbox \"49e07a9300efc4cca3968fba7dd975369a28a2053738b5592c24f3c7f6ea0604\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"bdff24a08dcbc526e18dc06b93a4599b7199143521ffcc2ace8d149d7a3ba3b5\"" Nov 12 17:41:44.125152 containerd[2016]: time="2024-11-12T17:41:44.124242117Z" level=info msg="StartContainer for \"bdff24a08dcbc526e18dc06b93a4599b7199143521ffcc2ace8d149d7a3ba3b5\"" Nov 12 17:41:44.127125 containerd[2016]: time="2024-11-12T17:41:44.127059933Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-27-255,Uid:f6ecf71268f35ce1f8b50f14dc1e3eac,Namespace:kube-system,Attempt:0,} returns sandbox id \"06db09b268470b23cf982d53c415fbf7a33b006988ae9585ba6346f28df91fd6\"" Nov 12 17:41:44.137207 containerd[2016]: time="2024-11-12T17:41:44.137140365Z" 
level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-27-255,Uid:132967031a93fdd9c3dedc672b2a372f,Namespace:kube-system,Attempt:0,} returns sandbox id \"90371a8754523d695a65c32f84de46dafa549470f94e49aa9f021bad40e9aa91\"" Nov 12 17:41:44.137776 containerd[2016]: time="2024-11-12T17:41:44.137625861Z" level=info msg="CreateContainer within sandbox \"06db09b268470b23cf982d53c415fbf7a33b006988ae9585ba6346f28df91fd6\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Nov 12 17:41:44.146010 containerd[2016]: time="2024-11-12T17:41:44.145945389Z" level=info msg="CreateContainer within sandbox \"90371a8754523d695a65c32f84de46dafa549470f94e49aa9f021bad40e9aa91\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Nov 12 17:41:44.161629 containerd[2016]: time="2024-11-12T17:41:44.161547597Z" level=info msg="CreateContainer within sandbox \"06db09b268470b23cf982d53c415fbf7a33b006988ae9585ba6346f28df91fd6\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"28f11af955baa2bc763b1cb91e80c4f833ff2f59ab72c0ce43d38767d69ba29f\"" Nov 12 17:41:44.162628 containerd[2016]: time="2024-11-12T17:41:44.162394017Z" level=info msg="StartContainer for \"28f11af955baa2bc763b1cb91e80c4f833ff2f59ab72c0ce43d38767d69ba29f\"" Nov 12 17:41:44.175799 containerd[2016]: time="2024-11-12T17:41:44.175724013Z" level=info msg="CreateContainer within sandbox \"90371a8754523d695a65c32f84de46dafa549470f94e49aa9f021bad40e9aa91\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"f3cf42f7c6d8455427569f5e9434626496426d83eaf2e21e55ef8bc8e9121fc8\"" Nov 12 17:41:44.176514 containerd[2016]: time="2024-11-12T17:41:44.176447661Z" level=info msg="StartContainer for \"f3cf42f7c6d8455427569f5e9434626496426d83eaf2e21e55ef8bc8e9121fc8\"" Nov 12 17:41:44.199476 systemd[1]: Started cri-containerd-bdff24a08dcbc526e18dc06b93a4599b7199143521ffcc2ace8d149d7a3ba3b5.scope - libcontainer container 
bdff24a08dcbc526e18dc06b93a4599b7199143521ffcc2ace8d149d7a3ba3b5. Nov 12 17:41:44.257169 systemd[1]: Started cri-containerd-f3cf42f7c6d8455427569f5e9434626496426d83eaf2e21e55ef8bc8e9121fc8.scope - libcontainer container f3cf42f7c6d8455427569f5e9434626496426d83eaf2e21e55ef8bc8e9121fc8. Nov 12 17:41:44.274319 systemd[1]: Started cri-containerd-28f11af955baa2bc763b1cb91e80c4f833ff2f59ab72c0ce43d38767d69ba29f.scope - libcontainer container 28f11af955baa2bc763b1cb91e80c4f833ff2f59ab72c0ce43d38767d69ba29f. Nov 12 17:41:44.277040 kubelet[2909]: E1112 17:41:44.276583 2909 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.27.255:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.27.255:6443: connect: connection refused Nov 12 17:41:44.362982 containerd[2016]: time="2024-11-12T17:41:44.362886178Z" level=info msg="StartContainer for \"bdff24a08dcbc526e18dc06b93a4599b7199143521ffcc2ace8d149d7a3ba3b5\" returns successfully" Nov 12 17:41:44.394088 containerd[2016]: time="2024-11-12T17:41:44.393934822Z" level=info msg="StartContainer for \"f3cf42f7c6d8455427569f5e9434626496426d83eaf2e21e55ef8bc8e9121fc8\" returns successfully" Nov 12 17:41:44.411866 containerd[2016]: time="2024-11-12T17:41:44.410818270Z" level=info msg="StartContainer for \"28f11af955baa2bc763b1cb91e80c4f833ff2f59ab72c0ce43d38767d69ba29f\" returns successfully" Nov 12 17:41:45.263870 update_engine[1998]: I20241112 17:41:45.261878 1998 update_attempter.cc:509] Updating boot flags... 
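containerd reports pull times above as Go duration strings (`in 932.560901ms`), and the kubelet does the same for retry intervals (`interval="1.6s"`). A small sketch that converts such strings to seconds, assuming a single unit suffix — the common case in these logs, though full Go durations can concatenate several units (`1m30s`), which this deliberately does not handle:

```python
import re

# Unit suffixes emitted by Go's time.Duration.String(), mapped to seconds.
_UNITS = {"ns": 1e-9, "us": 1e-6, "µs": 1e-6, "ms": 1e-3, "s": 1.0, "m": 60.0, "h": 3600.0}

def duration_seconds(text: str) -> float:
    """Convert a single-unit Go duration string such as '932.560901ms' to seconds."""
    m = re.fullmatch(r"(\d+(?:\.\d+)?)(ns|us|µs|ms|s|m|h)", text)
    if m is None:
        raise ValueError(f"unsupported duration: {text!r}")
    return float(m.group(1)) * _UNITS[m.group(2)]
```

With this, the three pause-image pulls above come out to roughly 0.93 s, 0.91 s, and 0.91 s each.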
Nov 12 17:41:45.423587 kubelet[2909]: I1112 17:41:45.421292 2909 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-27-255" Nov 12 17:41:45.435892 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 40 scanned by (udev-worker) (3195) Nov 12 17:41:48.918963 kubelet[2909]: I1112 17:41:48.918898 2909 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-27-255" Nov 12 17:41:49.014278 kubelet[2909]: E1112 17:41:49.014201 2909 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-node-lease\" not found" interval="3.2s" Nov 12 17:41:49.266066 kubelet[2909]: I1112 17:41:49.265898 2909 apiserver.go:52] "Watching apiserver" Nov 12 17:41:49.297144 kubelet[2909]: I1112 17:41:49.297065 2909 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Nov 12 17:41:52.104082 systemd[1]: Reloading requested from client PID 3280 ('systemctl') (unit session-7.scope)... Nov 12 17:41:52.104116 systemd[1]: Reloading... Nov 12 17:41:52.388884 zram_generator::config[3323]: No configuration found. Nov 12 17:41:52.661882 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 12 17:41:52.863188 systemd[1]: Reloading finished in 758 ms. Nov 12 17:41:52.951877 kubelet[2909]: I1112 17:41:52.951666 2909 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 12 17:41:52.953037 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 17:41:52.960053 systemd[1]: kubelet.service: Deactivated successfully. Nov 12 17:41:52.961347 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 17:41:52.961909 systemd[1]: kubelet.service: Consumed 1.648s CPU time, 112.0M memory peak, 0B memory swap peak. 
Nov 12 17:41:52.978063 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 17:41:53.276239 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 17:41:53.289481 (kubelet)[3383]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 12 17:41:53.395654 kubelet[3383]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 12 17:41:53.395654 kubelet[3383]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Nov 12 17:41:53.395654 kubelet[3383]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 12 17:41:53.395654 kubelet[3383]: I1112 17:41:53.394232 3383 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 12 17:41:53.403305 kubelet[3383]: I1112 17:41:53.403253 3383 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Nov 12 17:41:53.403511 kubelet[3383]: I1112 17:41:53.403489 3383 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 12 17:41:53.404151 kubelet[3383]: I1112 17:41:53.404119 3383 server.go:919] "Client rotation is on, will bootstrap in background" Nov 12 17:41:53.407423 kubelet[3383]: I1112 17:41:53.407382 3383 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
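The restarted kubelet warns that `--container-runtime-endpoint` and `--volume-plugin-dir` are deprecated and should move into the file passed via `--config`. A sketch of the corresponding config-file fields, per the `kubelet.config.k8s.io/v1beta1` `KubeletConfiguration` API — the socket and directory paths below are illustrative, not taken from this host:

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# Replaces the deprecated --container-runtime-endpoint flag:
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
# Replaces the deprecated --volume-plugin-dir flag (path is illustrative):
volumePluginDir: /var/lib/kubelet/volumeplugins
```

(The third warning, about `--pod-infra-container-image`, notes the kubelet now learns the sandbox image from the CRI, so that setting belongs in the container runtime's configuration rather than the kubelet's.)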
Nov 12 17:41:53.411236 kubelet[3383]: I1112 17:41:53.411057 3383 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 12 17:41:53.435694 kubelet[3383]: I1112 17:41:53.435655 3383 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Nov 12 17:41:53.436372 kubelet[3383]: I1112 17:41:53.436338 3383 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 12 17:41:53.436832 kubelet[3383]: I1112 17:41:53.436796 3383 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":
null} Nov 12 17:41:53.437354 kubelet[3383]: I1112 17:41:53.437083 3383 topology_manager.go:138] "Creating topology manager with none policy" Nov 12 17:41:53.437354 kubelet[3383]: I1112 17:41:53.437113 3383 container_manager_linux.go:301] "Creating device plugin manager" Nov 12 17:41:53.437354 kubelet[3383]: I1112 17:41:53.437173 3383 state_mem.go:36] "Initialized new in-memory state store" Nov 12 17:41:53.437600 kubelet[3383]: I1112 17:41:53.437578 3383 kubelet.go:396] "Attempting to sync node with API server" Nov 12 17:41:53.439173 kubelet[3383]: I1112 17:41:53.439142 3383 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 12 17:41:53.439316 kubelet[3383]: I1112 17:41:53.439296 3383 kubelet.go:312] "Adding apiserver pod source" Nov 12 17:41:53.439442 kubelet[3383]: I1112 17:41:53.439422 3383 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 12 17:41:53.448789 kubelet[3383]: I1112 17:41:53.448728 3383 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Nov 12 17:41:53.450279 kubelet[3383]: I1112 17:41:53.450233 3383 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 12 17:41:53.459051 kubelet[3383]: I1112 17:41:53.457136 3383 server.go:1256] "Started kubelet" Nov 12 17:41:53.463145 kubelet[3383]: I1112 17:41:53.463105 3383 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 12 17:41:53.471069 kubelet[3383]: I1112 17:41:53.471013 3383 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Nov 12 17:41:53.471765 kubelet[3383]: I1112 17:41:53.471720 3383 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 12 17:41:53.472902 kubelet[3383]: I1112 17:41:53.472873 3383 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 12 17:41:53.477699 kubelet[3383]: I1112 
17:41:53.477658 3383 volume_manager.go:291] "Starting Kubelet Volume Manager" Nov 12 17:41:53.505525 kubelet[3383]: I1112 17:41:53.477985 3383 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Nov 12 17:41:53.515759 kubelet[3383]: I1112 17:41:53.515706 3383 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 12 17:41:53.517913 kubelet[3383]: I1112 17:41:53.500312 3383 reconciler_new.go:29] "Reconciler: start to sync state" Nov 12 17:41:53.518053 kubelet[3383]: I1112 17:41:53.504741 3383 server.go:461] "Adding debug handlers to kubelet server" Nov 12 17:41:53.524264 kubelet[3383]: I1112 17:41:53.524229 3383 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 12 17:41:53.526445 kubelet[3383]: I1112 17:41:53.526409 3383 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Nov 12 17:41:53.527108 kubelet[3383]: I1112 17:41:53.526584 3383 status_manager.go:217] "Starting to sync pod status with apiserver" Nov 12 17:41:53.527108 kubelet[3383]: I1112 17:41:53.526621 3383 kubelet.go:2329] "Starting kubelet main sync loop" Nov 12 17:41:53.527108 kubelet[3383]: E1112 17:41:53.526698 3383 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 12 17:41:53.538991 kubelet[3383]: I1112 17:41:53.538953 3383 factory.go:221] Registration of the containerd container factory successfully Nov 12 17:41:53.541936 kubelet[3383]: I1112 17:41:53.541900 3383 factory.go:221] Registration of the systemd container factory successfully Nov 12 17:41:53.543350 kubelet[3383]: E1112 17:41:53.543291 3383 kubelet.go:1462] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 12 17:41:53.622933 kubelet[3383]: E1112 17:41:53.622895 3383 container_manager_linux.go:881] "Unable to get rootfs data from cAdvisor interface" err="unable to find data in memory cache" Nov 12 17:41:53.628603 kubelet[3383]: I1112 17:41:53.627745 3383 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-27-255" Nov 12 17:41:53.631554 kubelet[3383]: E1112 17:41:53.631293 3383 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Nov 12 17:41:53.647405 kubelet[3383]: I1112 17:41:53.646888 3383 kubelet_node_status.go:112] "Node was previously registered" node="ip-172-31-27-255" Nov 12 17:41:53.647405 kubelet[3383]: I1112 17:41:53.647015 3383 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-27-255" Nov 12 17:41:53.705442 kubelet[3383]: I1112 17:41:53.705408 3383 cpu_manager.go:214] "Starting CPU manager" policy="none" Nov 12 17:41:53.706268 kubelet[3383]: I1112 17:41:53.705777 3383 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Nov 12 17:41:53.706268 kubelet[3383]: I1112 17:41:53.705817 3383 state_mem.go:36] "Initialized new in-memory state store" Nov 12 17:41:53.706268 kubelet[3383]: I1112 17:41:53.706091 3383 state_mem.go:88] "Updated default CPUSet" cpuSet="" Nov 12 17:41:53.706268 kubelet[3383]: I1112 17:41:53.706130 3383 state_mem.go:96] "Updated CPUSet assignments" assignments={} Nov 12 17:41:53.706268 kubelet[3383]: I1112 17:41:53.706146 3383 policy_none.go:49] "None policy: Start" Nov 12 17:41:53.708921 kubelet[3383]: I1112 17:41:53.707987 3383 memory_manager.go:170] "Starting memorymanager" policy="None" Nov 12 17:41:53.708921 kubelet[3383]: I1112 17:41:53.708039 3383 state_mem.go:35] "Initializing new in-memory state store" Nov 12 17:41:53.708921 kubelet[3383]: I1112 17:41:53.708269 3383 state_mem.go:75] "Updated machine memory state" Nov 12 
17:41:53.721558 kubelet[3383]: I1112 17:41:53.720127 3383 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 12 17:41:53.724018 kubelet[3383]: I1112 17:41:53.723974 3383 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 12 17:41:53.833707 kubelet[3383]: I1112 17:41:53.832363 3383 topology_manager.go:215] "Topology Admit Handler" podUID="f6ecf71268f35ce1f8b50f14dc1e3eac" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-27-255" Nov 12 17:41:53.833707 kubelet[3383]: I1112 17:41:53.832484 3383 topology_manager.go:215] "Topology Admit Handler" podUID="09dfd1f2cd84dae995137b1dfd3320e7" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-27-255" Nov 12 17:41:53.833707 kubelet[3383]: I1112 17:41:53.832574 3383 topology_manager.go:215] "Topology Admit Handler" podUID="132967031a93fdd9c3dedc672b2a372f" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-27-255" Nov 12 17:41:53.850737 kubelet[3383]: E1112 17:41:53.850652 3383 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ip-172-31-27-255\" already exists" pod="kube-system/kube-apiserver-ip-172-31-27-255" Nov 12 17:41:53.925795 kubelet[3383]: I1112 17:41:53.925277 3383 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/09dfd1f2cd84dae995137b1dfd3320e7-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-27-255\" (UID: \"09dfd1f2cd84dae995137b1dfd3320e7\") " pod="kube-system/kube-controller-manager-ip-172-31-27-255" Nov 12 17:41:53.925795 kubelet[3383]: I1112 17:41:53.925349 3383 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f6ecf71268f35ce1f8b50f14dc1e3eac-ca-certs\") pod \"kube-apiserver-ip-172-31-27-255\" (UID: 
\"f6ecf71268f35ce1f8b50f14dc1e3eac\") " pod="kube-system/kube-apiserver-ip-172-31-27-255" Nov 12 17:41:53.925795 kubelet[3383]: I1112 17:41:53.925395 3383 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f6ecf71268f35ce1f8b50f14dc1e3eac-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-27-255\" (UID: \"f6ecf71268f35ce1f8b50f14dc1e3eac\") " pod="kube-system/kube-apiserver-ip-172-31-27-255" Nov 12 17:41:53.925795 kubelet[3383]: I1112 17:41:53.925444 3383 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/09dfd1f2cd84dae995137b1dfd3320e7-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-27-255\" (UID: \"09dfd1f2cd84dae995137b1dfd3320e7\") " pod="kube-system/kube-controller-manager-ip-172-31-27-255" Nov 12 17:41:53.925795 kubelet[3383]: I1112 17:41:53.925490 3383 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/09dfd1f2cd84dae995137b1dfd3320e7-kubeconfig\") pod \"kube-controller-manager-ip-172-31-27-255\" (UID: \"09dfd1f2cd84dae995137b1dfd3320e7\") " pod="kube-system/kube-controller-manager-ip-172-31-27-255" Nov 12 17:41:53.926595 kubelet[3383]: I1112 17:41:53.925534 3383 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f6ecf71268f35ce1f8b50f14dc1e3eac-k8s-certs\") pod \"kube-apiserver-ip-172-31-27-255\" (UID: \"f6ecf71268f35ce1f8b50f14dc1e3eac\") " pod="kube-system/kube-apiserver-ip-172-31-27-255" Nov 12 17:41:53.926595 kubelet[3383]: I1112 17:41:53.925580 3383 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/09dfd1f2cd84dae995137b1dfd3320e7-ca-certs\") pod \"kube-controller-manager-ip-172-31-27-255\" (UID: \"09dfd1f2cd84dae995137b1dfd3320e7\") " pod="kube-system/kube-controller-manager-ip-172-31-27-255" Nov 12 17:41:53.926595 kubelet[3383]: I1112 17:41:53.925634 3383 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/09dfd1f2cd84dae995137b1dfd3320e7-k8s-certs\") pod \"kube-controller-manager-ip-172-31-27-255\" (UID: \"09dfd1f2cd84dae995137b1dfd3320e7\") " pod="kube-system/kube-controller-manager-ip-172-31-27-255" Nov 12 17:41:53.926595 kubelet[3383]: I1112 17:41:53.925689 3383 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/132967031a93fdd9c3dedc672b2a372f-kubeconfig\") pod \"kube-scheduler-ip-172-31-27-255\" (UID: \"132967031a93fdd9c3dedc672b2a372f\") " pod="kube-system/kube-scheduler-ip-172-31-27-255" Nov 12 17:41:54.445866 kubelet[3383]: I1112 17:41:54.443386 3383 apiserver.go:52] "Watching apiserver" Nov 12 17:41:54.512854 kubelet[3383]: I1112 17:41:54.512720 3383 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Nov 12 17:41:54.876641 kubelet[3383]: I1112 17:41:54.876571 3383 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-27-255" podStartSLOduration=1.876492646 podStartE2EDuration="1.876492646s" podCreationTimestamp="2024-11-12 17:41:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 17:41:54.816750118 +0000 UTC m=+1.518802556" watchObservedRunningTime="2024-11-12 17:41:54.876492646 +0000 UTC m=+1.578545060" Nov 12 17:41:57.602870 kubelet[3383]: I1112 17:41:57.602761 3383 pod_startup_latency_tracker.go:102] "Observed pod startup 
duration" pod="kube-system/kube-scheduler-ip-172-31-27-255" podStartSLOduration=4.60268962 podStartE2EDuration="4.60268962s" podCreationTimestamp="2024-11-12 17:41:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 17:41:54.883305598 +0000 UTC m=+1.585358024" watchObservedRunningTime="2024-11-12 17:41:57.60268962 +0000 UTC m=+4.304742034" Nov 12 17:42:00.150251 sudo[2339]: pam_unix(sudo:session): session closed for user root Nov 12 17:42:00.175295 sshd[2336]: pam_unix(sshd:session): session closed for user core Nov 12 17:42:00.182952 systemd[1]: sshd@6-172.31.27.255:22-139.178.89.65:53558.service: Deactivated successfully. Nov 12 17:42:00.188290 systemd[1]: session-7.scope: Deactivated successfully. Nov 12 17:42:00.188750 systemd[1]: session-7.scope: Consumed 11.838s CPU time, 185.1M memory peak, 0B memory swap peak. Nov 12 17:42:00.190324 systemd-logind[1996]: Session 7 logged out. Waiting for processes to exit. Nov 12 17:42:00.193161 systemd-logind[1996]: Removed session 7. Nov 12 17:42:04.401559 kubelet[3383]: I1112 17:42:04.401491 3383 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Nov 12 17:42:04.403471 containerd[2016]: time="2024-11-12T17:42:04.403381806Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
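The `pod_startup_latency_tracker` records above derive `podStartSLOduration` from `podCreationTimestamp` and `observedRunningTime` (no image pulls occurred, so the pull window is the zero time). The arithmetic can be checked directly with the timestamps copied from the kube-scheduler record — a sketch that truncates Go's sub-microsecond fractions to what Python's `%f` accepts:

```python
from datetime import datetime

def slo_duration(created: str, running: str) -> float:
    """Seconds from pod creation to first observed running, per the log's fields."""
    def parse(ts: str) -> datetime:
        # Go prints e.g. "2024-11-12 17:41:57.60268962 +0000 UTC"; drop the
        # trailing zone name and truncate fractional seconds to microseconds.
        date, time_, off = ts.split()[:3]
        if "." in time_:
            head, frac = time_.split(".")
            time_ = head + "." + frac[:6].ljust(6, "0")
        else:
            time_ += ".000000"
        return datetime.strptime(f"{date} {time_} {off}", "%Y-%m-%d %H:%M:%S.%f %z")
    return (parse(running) - parse(created)).total_seconds()
```

Plugging in the kube-scheduler pod's timestamps (created `17:41:53`, running `17:41:57.60268962`) reproduces the logged `podStartSLOduration=4.60268962` to within the truncated precision.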
Nov 12 17:42:04.404451 kubelet[3383]: I1112 17:42:04.404380 3383 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Nov 12 17:42:05.006347 kubelet[3383]: I1112 17:42:05.006273 3383 topology_manager.go:215] "Topology Admit Handler" podUID="5e76779d-51f5-4e1d-b83e-6871e093a6a3" podNamespace="kube-system" podName="kube-proxy-728xt"
Nov 12 17:42:05.030713 systemd[1]: Created slice kubepods-besteffort-pod5e76779d_51f5_4e1d_b83e_6871e093a6a3.slice - libcontainer container kubepods-besteffort-pod5e76779d_51f5_4e1d_b83e_6871e093a6a3.slice.
Nov 12 17:42:05.096004 kubelet[3383]: I1112 17:42:05.095866 3383 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5e76779d-51f5-4e1d-b83e-6871e093a6a3-lib-modules\") pod \"kube-proxy-728xt\" (UID: \"5e76779d-51f5-4e1d-b83e-6871e093a6a3\") " pod="kube-system/kube-proxy-728xt"
Nov 12 17:42:05.096004 kubelet[3383]: I1112 17:42:05.095993 3383 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/5e76779d-51f5-4e1d-b83e-6871e093a6a3-kube-proxy\") pod \"kube-proxy-728xt\" (UID: \"5e76779d-51f5-4e1d-b83e-6871e093a6a3\") " pod="kube-system/kube-proxy-728xt"
Nov 12 17:42:05.096359 kubelet[3383]: I1112 17:42:05.096046 3383 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5e76779d-51f5-4e1d-b83e-6871e093a6a3-xtables-lock\") pod \"kube-proxy-728xt\" (UID: \"5e76779d-51f5-4e1d-b83e-6871e093a6a3\") " pod="kube-system/kube-proxy-728xt"
Nov 12 17:42:05.096359 kubelet[3383]: I1112 17:42:05.096094 3383 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dzjbv\" (UniqueName: \"kubernetes.io/projected/5e76779d-51f5-4e1d-b83e-6871e093a6a3-kube-api-access-dzjbv\") pod \"kube-proxy-728xt\" (UID: \"5e76779d-51f5-4e1d-b83e-6871e093a6a3\") " pod="kube-system/kube-proxy-728xt"
Nov 12 17:42:05.344980 containerd[2016]: time="2024-11-12T17:42:05.344774586Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-728xt,Uid:5e76779d-51f5-4e1d-b83e-6871e093a6a3,Namespace:kube-system,Attempt:0,}"
Nov 12 17:42:05.399826 containerd[2016]: time="2024-11-12T17:42:05.399443011Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 12 17:42:05.400303 containerd[2016]: time="2024-11-12T17:42:05.399667915Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 12 17:42:05.400303 containerd[2016]: time="2024-11-12T17:42:05.399736963Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 12 17:42:05.400457 containerd[2016]: time="2024-11-12T17:42:05.400195111Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 12 17:42:05.439107 systemd[1]: run-containerd-runc-k8s.io-03b514da659b3ab9dfcf2522874f57a8ec992387560ecdcd67adc1c062b31cbf-runc.TzN0te.mount: Deactivated successfully.
Nov 12 17:42:05.458288 systemd[1]: Started cri-containerd-03b514da659b3ab9dfcf2522874f57a8ec992387560ecdcd67adc1c062b31cbf.scope - libcontainer container 03b514da659b3ab9dfcf2522874f57a8ec992387560ecdcd67adc1c062b31cbf.
Nov 12 17:42:05.564435 kubelet[3383]: I1112 17:42:05.564357 3383 topology_manager.go:215] "Topology Admit Handler" podUID="15528bb2-9979-4885-866b-20b7c3442765" podNamespace="tigera-operator" podName="tigera-operator-56b74f76df-wk9nw"
Nov 12 17:42:05.592273 systemd[1]: Created slice kubepods-besteffort-pod15528bb2_9979_4885_866b_20b7c3442765.slice - libcontainer container kubepods-besteffort-pod15528bb2_9979_4885_866b_20b7c3442765.slice.
Nov 12 17:42:05.602105 kubelet[3383]: I1112 17:42:05.600986 3383 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/15528bb2-9979-4885-866b-20b7c3442765-var-lib-calico\") pod \"tigera-operator-56b74f76df-wk9nw\" (UID: \"15528bb2-9979-4885-866b-20b7c3442765\") " pod="tigera-operator/tigera-operator-56b74f76df-wk9nw"
Nov 12 17:42:05.602105 kubelet[3383]: I1112 17:42:05.601085 3383 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h7bb4\" (UniqueName: \"kubernetes.io/projected/15528bb2-9979-4885-866b-20b7c3442765-kube-api-access-h7bb4\") pod \"tigera-operator-56b74f76df-wk9nw\" (UID: \"15528bb2-9979-4885-866b-20b7c3442765\") " pod="tigera-operator/tigera-operator-56b74f76df-wk9nw"
Nov 12 17:42:05.617970 containerd[2016]: time="2024-11-12T17:42:05.617818076Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-728xt,Uid:5e76779d-51f5-4e1d-b83e-6871e093a6a3,Namespace:kube-system,Attempt:0,} returns sandbox id \"03b514da659b3ab9dfcf2522874f57a8ec992387560ecdcd67adc1c062b31cbf\""
Nov 12 17:42:05.625188 containerd[2016]: time="2024-11-12T17:42:05.625005992Z" level=info msg="CreateContainer within sandbox \"03b514da659b3ab9dfcf2522874f57a8ec992387560ecdcd67adc1c062b31cbf\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Nov 12 17:42:05.645170 containerd[2016]: time="2024-11-12T17:42:05.644988128Z" level=info msg="CreateContainer within sandbox \"03b514da659b3ab9dfcf2522874f57a8ec992387560ecdcd67adc1c062b31cbf\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"45397ee6c364a5fe011e78a1e23ace99d71f55df465d20d97099a49edff150e4\""
Nov 12 17:42:05.645952 containerd[2016]: time="2024-11-12T17:42:05.645903452Z" level=info msg="StartContainer for \"45397ee6c364a5fe011e78a1e23ace99d71f55df465d20d97099a49edff150e4\""
Nov 12 17:42:05.693506 systemd[1]: Started cri-containerd-45397ee6c364a5fe011e78a1e23ace99d71f55df465d20d97099a49edff150e4.scope - libcontainer container 45397ee6c364a5fe011e78a1e23ace99d71f55df465d20d97099a49edff150e4.
Nov 12 17:42:05.757534 containerd[2016]: time="2024-11-12T17:42:05.757441964Z" level=info msg="StartContainer for \"45397ee6c364a5fe011e78a1e23ace99d71f55df465d20d97099a49edff150e4\" returns successfully"
Nov 12 17:42:05.908719 containerd[2016]: time="2024-11-12T17:42:05.908522865Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-56b74f76df-wk9nw,Uid:15528bb2-9979-4885-866b-20b7c3442765,Namespace:tigera-operator,Attempt:0,}"
Nov 12 17:42:05.940972 containerd[2016]: time="2024-11-12T17:42:05.940736949Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 12 17:42:05.940972 containerd[2016]: time="2024-11-12T17:42:05.940887909Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 12 17:42:05.940972 containerd[2016]: time="2024-11-12T17:42:05.940927833Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 12 17:42:05.941510 containerd[2016]: time="2024-11-12T17:42:05.941092773Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 12 17:42:05.974160 systemd[1]: Started cri-containerd-17f16fee9de9021f3f53971691bad270c8fcfab87552ff7373a1fb52c2ac8bdd.scope - libcontainer container 17f16fee9de9021f3f53971691bad270c8fcfab87552ff7373a1fb52c2ac8bdd.
Nov 12 17:42:06.048420 containerd[2016]: time="2024-11-12T17:42:06.048344694Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-56b74f76df-wk9nw,Uid:15528bb2-9979-4885-866b-20b7c3442765,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"17f16fee9de9021f3f53971691bad270c8fcfab87552ff7373a1fb52c2ac8bdd\""
Nov 12 17:42:06.052658 containerd[2016]: time="2024-11-12T17:42:06.052496214Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.0\""
Nov 12 17:42:07.910078 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3141534068.mount: Deactivated successfully.
Nov 12 17:42:08.499053 containerd[2016]: time="2024-11-12T17:42:08.498975478Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 17:42:08.500720 containerd[2016]: time="2024-11-12T17:42:08.500644582Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.0: active requests=0, bytes read=19123633"
Nov 12 17:42:08.502195 containerd[2016]: time="2024-11-12T17:42:08.502123606Z" level=info msg="ImageCreate event name:\"sha256:43f5078c762aa5421f1f6830afd7f91e05937aac6b1d97f0516065571164e9ee\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 17:42:08.506270 containerd[2016]: time="2024-11-12T17:42:08.506172190Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:67a96f7dcdde24abff66b978202c5e64b9909f4a8fcd9357daca92b499b26e4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 17:42:08.508193 containerd[2016]: time="2024-11-12T17:42:08.507982882Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.0\" with image id \"sha256:43f5078c762aa5421f1f6830afd7f91e05937aac6b1d97f0516065571164e9ee\", repo tag \"quay.io/tigera/operator:v1.36.0\", repo digest \"quay.io/tigera/operator@sha256:67a96f7dcdde24abff66b978202c5e64b9909f4a8fcd9357daca92b499b26e4d\", size \"19117824\" in 2.45391072s"
Nov 12 17:42:08.508193 containerd[2016]: time="2024-11-12T17:42:08.508052314Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.0\" returns image reference \"sha256:43f5078c762aa5421f1f6830afd7f91e05937aac6b1d97f0516065571164e9ee\""
Nov 12 17:42:08.513090 containerd[2016]: time="2024-11-12T17:42:08.512975110Z" level=info msg="CreateContainer within sandbox \"17f16fee9de9021f3f53971691bad270c8fcfab87552ff7373a1fb52c2ac8bdd\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Nov 12 17:42:08.537790 containerd[2016]: time="2024-11-12T17:42:08.537716590Z" level=info msg="CreateContainer within sandbox \"17f16fee9de9021f3f53971691bad270c8fcfab87552ff7373a1fb52c2ac8bdd\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"c2e09c782e546986cb0438f3922307173374114bf0bc4fdf4940b0ec7411abc6\""
Nov 12 17:42:08.538468 containerd[2016]: time="2024-11-12T17:42:08.538424074Z" level=info msg="StartContainer for \"c2e09c782e546986cb0438f3922307173374114bf0bc4fdf4940b0ec7411abc6\""
Nov 12 17:42:08.586146 systemd[1]: Started cri-containerd-c2e09c782e546986cb0438f3922307173374114bf0bc4fdf4940b0ec7411abc6.scope - libcontainer container c2e09c782e546986cb0438f3922307173374114bf0bc4fdf4940b0ec7411abc6.
Nov 12 17:42:08.629927 containerd[2016]: time="2024-11-12T17:42:08.629822195Z" level=info msg="StartContainer for \"c2e09c782e546986cb0438f3922307173374114bf0bc4fdf4940b0ec7411abc6\" returns successfully"
Nov 12 17:42:08.730437 kubelet[3383]: I1112 17:42:08.729998 3383 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-728xt" podStartSLOduration=4.729940907 podStartE2EDuration="4.729940907s" podCreationTimestamp="2024-11-12 17:42:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 17:42:06.719988177 +0000 UTC m=+13.422040591" watchObservedRunningTime="2024-11-12 17:42:08.729940907 +0000 UTC m=+15.431993321"
Nov 12 17:42:12.867225 kubelet[3383]: I1112 17:42:12.865023 3383 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-56b74f76df-wk9nw" podStartSLOduration=5.407743392 podStartE2EDuration="7.864955384s" podCreationTimestamp="2024-11-12 17:42:05 +0000 UTC" firstStartedPulling="2024-11-12 17:42:06.051421338 +0000 UTC m=+12.753473752" lastFinishedPulling="2024-11-12 17:42:08.50863333 +0000 UTC m=+15.210685744" observedRunningTime="2024-11-12 17:42:08.729815303 +0000 UTC m=+15.431867705" watchObservedRunningTime="2024-11-12 17:42:12.864955384 +0000 UTC m=+19.567007810"
Nov 12 17:42:12.867225 kubelet[3383]: I1112 17:42:12.865205 3383 topology_manager.go:215] "Topology Admit Handler" podUID="e2a2a255-4480-478c-9c8e-a17b8028d8d3" podNamespace="calico-system" podName="calico-typha-74b987d78d-g9wz9"
Nov 12 17:42:12.886364 systemd[1]: Created slice kubepods-besteffort-pode2a2a255_4480_478c_9c8e_a17b8028d8d3.slice - libcontainer container kubepods-besteffort-pode2a2a255_4480_478c_9c8e_a17b8028d8d3.slice.
Nov 12 17:42:12.946726 kubelet[3383]: I1112 17:42:12.946601 3383 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/e2a2a255-4480-478c-9c8e-a17b8028d8d3-typha-certs\") pod \"calico-typha-74b987d78d-g9wz9\" (UID: \"e2a2a255-4480-478c-9c8e-a17b8028d8d3\") " pod="calico-system/calico-typha-74b987d78d-g9wz9"
Nov 12 17:42:12.946726 kubelet[3383]: I1112 17:42:12.946681 3383 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k6j2g\" (UniqueName: \"kubernetes.io/projected/e2a2a255-4480-478c-9c8e-a17b8028d8d3-kube-api-access-k6j2g\") pod \"calico-typha-74b987d78d-g9wz9\" (UID: \"e2a2a255-4480-478c-9c8e-a17b8028d8d3\") " pod="calico-system/calico-typha-74b987d78d-g9wz9"
Nov 12 17:42:12.946726 kubelet[3383]: I1112 17:42:12.946809 3383 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e2a2a255-4480-478c-9c8e-a17b8028d8d3-tigera-ca-bundle\") pod \"calico-typha-74b987d78d-g9wz9\" (UID: \"e2a2a255-4480-478c-9c8e-a17b8028d8d3\") " pod="calico-system/calico-typha-74b987d78d-g9wz9"
Nov 12 17:42:13.041885 kubelet[3383]: I1112 17:42:13.040663 3383 topology_manager.go:215] "Topology Admit Handler" podUID="6af33925-3931-44cd-ba85-ecab754e6e54" podNamespace="calico-system" podName="calico-node-dgcmz"
Nov 12 17:42:13.084884 systemd[1]: Created slice kubepods-besteffort-pod6af33925_3931_44cd_ba85_ecab754e6e54.slice - libcontainer container kubepods-besteffort-pod6af33925_3931_44cd_ba85_ecab754e6e54.slice.
Nov 12 17:42:13.149951 kubelet[3383]: I1112 17:42:13.149099 3383 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/6af33925-3931-44cd-ba85-ecab754e6e54-cni-net-dir\") pod \"calico-node-dgcmz\" (UID: \"6af33925-3931-44cd-ba85-ecab754e6e54\") " pod="calico-system/calico-node-dgcmz"
Nov 12 17:42:13.149951 kubelet[3383]: I1112 17:42:13.149346 3383 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6af33925-3931-44cd-ba85-ecab754e6e54-tigera-ca-bundle\") pod \"calico-node-dgcmz\" (UID: \"6af33925-3931-44cd-ba85-ecab754e6e54\") " pod="calico-system/calico-node-dgcmz"
Nov 12 17:42:13.149951 kubelet[3383]: I1112 17:42:13.149445 3383 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/6af33925-3931-44cd-ba85-ecab754e6e54-node-certs\") pod \"calico-node-dgcmz\" (UID: \"6af33925-3931-44cd-ba85-ecab754e6e54\") " pod="calico-system/calico-node-dgcmz"
Nov 12 17:42:13.149951 kubelet[3383]: I1112 17:42:13.149631 3383 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/6af33925-3931-44cd-ba85-ecab754e6e54-var-lib-calico\") pod \"calico-node-dgcmz\" (UID: \"6af33925-3931-44cd-ba85-ecab754e6e54\") " pod="calico-system/calico-node-dgcmz"
Nov 12 17:42:13.149951 kubelet[3383]: I1112 17:42:13.149727 3383 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/6af33925-3931-44cd-ba85-ecab754e6e54-flexvol-driver-host\") pod \"calico-node-dgcmz\" (UID: \"6af33925-3931-44cd-ba85-ecab754e6e54\") " pod="calico-system/calico-node-dgcmz"
Nov 12 17:42:13.150280 kubelet[3383]: I1112 17:42:13.149919 3383 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6af33925-3931-44cd-ba85-ecab754e6e54-lib-modules\") pod \"calico-node-dgcmz\" (UID: \"6af33925-3931-44cd-ba85-ecab754e6e54\") " pod="calico-system/calico-node-dgcmz"
Nov 12 17:42:13.150280 kubelet[3383]: I1112 17:42:13.150047 3383 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/6af33925-3931-44cd-ba85-ecab754e6e54-cni-bin-dir\") pod \"calico-node-dgcmz\" (UID: \"6af33925-3931-44cd-ba85-ecab754e6e54\") " pod="calico-system/calico-node-dgcmz"
Nov 12 17:42:13.150280 kubelet[3383]: I1112 17:42:13.150195 3383 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/6af33925-3931-44cd-ba85-ecab754e6e54-cni-log-dir\") pod \"calico-node-dgcmz\" (UID: \"6af33925-3931-44cd-ba85-ecab754e6e54\") " pod="calico-system/calico-node-dgcmz"
Nov 12 17:42:13.150435 kubelet[3383]: I1112 17:42:13.150364 3383 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6af33925-3931-44cd-ba85-ecab754e6e54-xtables-lock\") pod \"calico-node-dgcmz\" (UID: \"6af33925-3931-44cd-ba85-ecab754e6e54\") " pod="calico-system/calico-node-dgcmz"
Nov 12 17:42:13.150501 kubelet[3383]: I1112 17:42:13.150452 3383 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/6af33925-3931-44cd-ba85-ecab754e6e54-policysync\") pod \"calico-node-dgcmz\" (UID: \"6af33925-3931-44cd-ba85-ecab754e6e54\") " pod="calico-system/calico-node-dgcmz"
Nov 12 17:42:13.151877 kubelet[3383]: I1112 17:42:13.150655 3383 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/6af33925-3931-44cd-ba85-ecab754e6e54-var-run-calico\") pod \"calico-node-dgcmz\" (UID: \"6af33925-3931-44cd-ba85-ecab754e6e54\") " pod="calico-system/calico-node-dgcmz"
Nov 12 17:42:13.151877 kubelet[3383]: I1112 17:42:13.150887 3383 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8g7k9\" (UniqueName: \"kubernetes.io/projected/6af33925-3931-44cd-ba85-ecab754e6e54-kube-api-access-8g7k9\") pod \"calico-node-dgcmz\" (UID: \"6af33925-3931-44cd-ba85-ecab754e6e54\") " pod="calico-system/calico-node-dgcmz"
Nov 12 17:42:13.193232 containerd[2016]: time="2024-11-12T17:42:13.193155241Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-74b987d78d-g9wz9,Uid:e2a2a255-4480-478c-9c8e-a17b8028d8d3,Namespace:calico-system,Attempt:0,}"
Nov 12 17:42:13.233172 kubelet[3383]: I1112 17:42:13.231817 3383 topology_manager.go:215] "Topology Admit Handler" podUID="135a017b-df90-4603-90c6-655608f495ae" podNamespace="calico-system" podName="csi-node-driver-gjlz6"
Nov 12 17:42:13.234736 kubelet[3383]: E1112 17:42:13.234679 3383 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gjlz6" podUID="135a017b-df90-4603-90c6-655608f495ae"
Nov 12 17:42:13.256884 kubelet[3383]: I1112 17:42:13.254182 3383 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/135a017b-df90-4603-90c6-655608f495ae-socket-dir\") pod \"csi-node-driver-gjlz6\" (UID: \"135a017b-df90-4603-90c6-655608f495ae\") " pod="calico-system/csi-node-driver-gjlz6"
Nov 12 17:42:13.256884 kubelet[3383]: I1112 17:42:13.254257 3383 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/135a017b-df90-4603-90c6-655608f495ae-registration-dir\") pod \"csi-node-driver-gjlz6\" (UID: \"135a017b-df90-4603-90c6-655608f495ae\") " pod="calico-system/csi-node-driver-gjlz6"
Nov 12 17:42:13.256884 kubelet[3383]: I1112 17:42:13.254305 3383 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/135a017b-df90-4603-90c6-655608f495ae-kubelet-dir\") pod \"csi-node-driver-gjlz6\" (UID: \"135a017b-df90-4603-90c6-655608f495ae\") " pod="calico-system/csi-node-driver-gjlz6"
Nov 12 17:42:13.256884 kubelet[3383]: I1112 17:42:13.254431 3383 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/135a017b-df90-4603-90c6-655608f495ae-varrun\") pod \"csi-node-driver-gjlz6\" (UID: \"135a017b-df90-4603-90c6-655608f495ae\") " pod="calico-system/csi-node-driver-gjlz6"
Nov 12 17:42:13.256884 kubelet[3383]: I1112 17:42:13.254912 3383 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-68jwd\" (UniqueName: \"kubernetes.io/projected/135a017b-df90-4603-90c6-655608f495ae-kube-api-access-68jwd\") pod \"csi-node-driver-gjlz6\" (UID: \"135a017b-df90-4603-90c6-655608f495ae\") " pod="calico-system/csi-node-driver-gjlz6"
Nov 12 17:42:13.264813 kubelet[3383]: E1112 17:42:13.264757 3383 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 17:42:13.264813 kubelet[3383]: W1112 17:42:13.264799 3383 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 17:42:13.265046 kubelet[3383]: E1112 17:42:13.264997 3383 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 17:42:13.277230 kubelet[3383]: E1112 17:42:13.277194 3383 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 17:42:13.278190 kubelet[3383]: W1112 17:42:13.278152 3383 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 17:42:13.279057 kubelet[3383]: E1112 17:42:13.278956 3383 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 17:42:13.296331 containerd[2016]: time="2024-11-12T17:42:13.294808082Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 12 17:42:13.296907 containerd[2016]: time="2024-11-12T17:42:13.296215838Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 12 17:42:13.296907 containerd[2016]: time="2024-11-12T17:42:13.296453678Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 12 17:42:13.299495 containerd[2016]: time="2024-11-12T17:42:13.299378582Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 12 17:42:13.314492 kubelet[3383]: E1112 17:42:13.313151 3383 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 17:42:13.314492 kubelet[3383]: W1112 17:42:13.313187 3383 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 17:42:13.314492 kubelet[3383]: E1112 17:42:13.313243 3383 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 17:42:13.361179 kubelet[3383]: E1112 17:42:13.361116 3383 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 17:42:13.361497 kubelet[3383]: W1112 17:42:13.361455 3383 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 17:42:13.362189 kubelet[3383]: E1112 17:42:13.361649 3383 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 17:42:13.362716 kubelet[3383]: E1112 17:42:13.362573 3383 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 17:42:13.362716 kubelet[3383]: W1112 17:42:13.362600 3383 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 17:42:13.362716 kubelet[3383]: E1112 17:42:13.362644 3383 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 17:42:13.365783 kubelet[3383]: E1112 17:42:13.365667 3383 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 17:42:13.365783 kubelet[3383]: W1112 17:42:13.365702 3383 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 17:42:13.365783 kubelet[3383]: E1112 17:42:13.365778 3383 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 17:42:13.367356 kubelet[3383]: E1112 17:42:13.367245 3383 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 17:42:13.367356 kubelet[3383]: W1112 17:42:13.367281 3383 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 17:42:13.367356 kubelet[3383]: E1112 17:42:13.367356 3383 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 17:42:13.367956 kubelet[3383]: E1112 17:42:13.367915 3383 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 17:42:13.367956 kubelet[3383]: W1112 17:42:13.367947 3383 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 17:42:13.369077 kubelet[3383]: E1112 17:42:13.367989 3383 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 17:42:13.371075 kubelet[3383]: E1112 17:42:13.369713 3383 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 17:42:13.371075 kubelet[3383]: W1112 17:42:13.369749 3383 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 17:42:13.371945 kubelet[3383]: E1112 17:42:13.371874 3383 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 17:42:13.373322 kubelet[3383]: E1112 17:42:13.373267 3383 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 17:42:13.373322 kubelet[3383]: W1112 17:42:13.373305 3383 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 17:42:13.375775 kubelet[3383]: E1112 17:42:13.375717 3383 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 17:42:13.375775 kubelet[3383]: W1112 17:42:13.375756 3383 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 17:42:13.377912 kubelet[3383]: E1112 17:42:13.376327 3383 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 17:42:13.377912 kubelet[3383]: W1112 17:42:13.376371 3383 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 17:42:13.377912 kubelet[3383]: E1112 17:42:13.376776 3383 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 17:42:13.377912 kubelet[3383]: W1112 17:42:13.376795 3383 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 17:42:13.377912 kubelet[3383]: E1112 17:42:13.376826 3383 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 17:42:13.377912 kubelet[3383]: E1112 17:42:13.377215 3383 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 17:42:13.377912 kubelet[3383]: W1112 17:42:13.377236 3383 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 17:42:13.377912 kubelet[3383]: E1112 17:42:13.377264 3383 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 17:42:13.377912 kubelet[3383]: E1112 17:42:13.377618 3383 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 17:42:13.378529 kubelet[3383]: E1112 17:42:13.378224 3383 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 17:42:13.378529 kubelet[3383]: W1112 17:42:13.378250 3383 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 17:42:13.378529 kubelet[3383]: E1112 17:42:13.378285 3383 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 17:42:13.378814 kubelet[3383]: E1112 17:42:13.378720 3383 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 17:42:13.378814 kubelet[3383]: W1112 17:42:13.378807 3383 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 17:42:13.378947 kubelet[3383]: E1112 17:42:13.378901 3383 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 17:42:13.379028 kubelet[3383]: E1112 17:42:13.378949 3383 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 17:42:13.380882 kubelet[3383]: E1112 17:42:13.379808 3383 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 17:42:13.380227 systemd[1]: Started cri-containerd-e87c8ad404184665b063ac28245839b822609ba0f9e1f0e3853ba4b376f93705.scope - libcontainer container e87c8ad404184665b063ac28245839b822609ba0f9e1f0e3853ba4b376f93705.
Nov 12 17:42:13.381116 kubelet[3383]: W1112 17:42:13.380907 3383 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 17:42:13.381116 kubelet[3383]: E1112 17:42:13.380962 3383 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 17:42:13.383008 kubelet[3383]: E1112 17:42:13.381413 3383 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 17:42:13.383008 kubelet[3383]: W1112 17:42:13.381448 3383 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 17:42:13.383008 kubelet[3383]: E1112 17:42:13.381481 3383 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 17:42:13.383008 kubelet[3383]: E1112 17:42:13.382186 3383 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 17:42:13.383008 kubelet[3383]: W1112 17:42:13.382206 3383 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 17:42:13.383008 kubelet[3383]: E1112 17:42:13.382253 3383 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 17:42:13.383008 kubelet[3383]: E1112 17:42:13.382573 3383 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 17:42:13.383008 kubelet[3383]: W1112 17:42:13.382592 3383 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 17:42:13.383008 kubelet[3383]: E1112 17:42:13.382619 3383 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 17:42:13.385883 kubelet[3383]: E1112 17:42:13.384567 3383 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 17:42:13.385883 kubelet[3383]: E1112 17:42:13.384628 3383 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 17:42:13.385883 kubelet[3383]: W1112 17:42:13.384599 3383 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 17:42:13.385883 kubelet[3383]: E1112 17:42:13.384752 3383 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 17:42:13.387353 kubelet[3383]: E1112 17:42:13.387254 3383 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 17:42:13.387353 kubelet[3383]: W1112 17:42:13.387336 3383 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 17:42:13.388137 kubelet[3383]: E1112 17:42:13.387755 3383 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 17:42:13.392439 kubelet[3383]: E1112 17:42:13.392386 3383 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 17:42:13.392439 kubelet[3383]: W1112 17:42:13.392423 3383 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 17:42:13.392651 kubelet[3383]: E1112 17:42:13.392460 3383 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 17:42:13.396285 kubelet[3383]: E1112 17:42:13.396236 3383 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:42:13.397429 kubelet[3383]: W1112 17:42:13.397221 3383 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:42:13.401973 kubelet[3383]: E1112 17:42:13.400289 3383 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 17:42:13.401973 kubelet[3383]: E1112 17:42:13.401083 3383 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:42:13.401973 kubelet[3383]: W1112 17:42:13.401105 3383 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:42:13.401973 kubelet[3383]: E1112 17:42:13.401143 3383 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 17:42:13.404636 kubelet[3383]: E1112 17:42:13.404588 3383 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:42:13.404761 kubelet[3383]: W1112 17:42:13.404621 3383 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:42:13.407000 kubelet[3383]: E1112 17:42:13.406940 3383 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 17:42:13.412168 kubelet[3383]: E1112 17:42:13.412108 3383 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:42:13.412168 kubelet[3383]: W1112 17:42:13.412170 3383 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:42:13.412890 kubelet[3383]: E1112 17:42:13.412337 3383 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 17:42:13.415615 kubelet[3383]: E1112 17:42:13.415552 3383 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:42:13.415615 kubelet[3383]: W1112 17:42:13.415592 3383 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:42:13.415815 kubelet[3383]: E1112 17:42:13.415651 3383 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 17:42:13.418871 containerd[2016]: time="2024-11-12T17:42:13.418764291Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-dgcmz,Uid:6af33925-3931-44cd-ba85-ecab754e6e54,Namespace:calico-system,Attempt:0,}" Nov 12 17:42:13.469470 kubelet[3383]: E1112 17:42:13.469409 3383 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:42:13.469470 kubelet[3383]: W1112 17:42:13.469452 3383 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:42:13.475095 kubelet[3383]: E1112 17:42:13.469494 3383 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 17:42:13.521333 containerd[2016]: time="2024-11-12T17:42:13.520921323Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 17:42:13.521333 containerd[2016]: time="2024-11-12T17:42:13.521043759Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 17:42:13.521972 containerd[2016]: time="2024-11-12T17:42:13.521284371Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 17:42:13.522817 containerd[2016]: time="2024-11-12T17:42:13.522597759Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 17:42:13.576208 systemd[1]: Started cri-containerd-ecdf5d7e778f9e6db1a92c8b67f5fcac0c9eafc4b3976c2ef6d8aad6b0ffbbac.scope - libcontainer container ecdf5d7e778f9e6db1a92c8b67f5fcac0c9eafc4b3976c2ef6d8aad6b0ffbbac. Nov 12 17:42:13.707666 containerd[2016]: time="2024-11-12T17:42:13.707501584Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-74b987d78d-g9wz9,Uid:e2a2a255-4480-478c-9c8e-a17b8028d8d3,Namespace:calico-system,Attempt:0,} returns sandbox id \"e87c8ad404184665b063ac28245839b822609ba0f9e1f0e3853ba4b376f93705\"" Nov 12 17:42:13.715674 containerd[2016]: time="2024-11-12T17:42:13.715068484Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.0\"" Nov 12 17:42:13.740344 containerd[2016]: time="2024-11-12T17:42:13.740276476Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-dgcmz,Uid:6af33925-3931-44cd-ba85-ecab754e6e54,Namespace:calico-system,Attempt:0,} returns sandbox id \"ecdf5d7e778f9e6db1a92c8b67f5fcac0c9eafc4b3976c2ef6d8aad6b0ffbbac\"" Nov 12 17:42:14.528000 kubelet[3383]: E1112 17:42:14.527869 3383 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gjlz6" podUID="135a017b-df90-4603-90c6-655608f495ae" Nov 12 17:42:15.937726 containerd[2016]: time="2024-11-12T17:42:15.937608403Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/typha:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:42:15.940488 containerd[2016]: time="2024-11-12T17:42:15.939765127Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.0: active requests=0, bytes read=27849584" Nov 12 17:42:15.942518 containerd[2016]: time="2024-11-12T17:42:15.941868403Z" level=info msg="ImageCreate event name:\"sha256:b2bb88f3f42552b429baa4766d841334e258ac314fd6372cf3b9700487183ad3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:42:15.951092 containerd[2016]: time="2024-11-12T17:42:15.951029791Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:850e5f751e100580bffb57d1b70d4e90d90ecaab5ef1b6dc6a43dcd34a5e1057\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:42:15.955141 containerd[2016]: time="2024-11-12T17:42:15.955082779Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.0\" with image id \"sha256:b2bb88f3f42552b429baa4766d841334e258ac314fd6372cf3b9700487183ad3\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:850e5f751e100580bffb57d1b70d4e90d90ecaab5ef1b6dc6a43dcd34a5e1057\", size \"29219212\" in 2.239896527s" Nov 12 17:42:15.955343 containerd[2016]: time="2024-11-12T17:42:15.955313479Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.0\" returns image reference \"sha256:b2bb88f3f42552b429baa4766d841334e258ac314fd6372cf3b9700487183ad3\"" Nov 12 17:42:15.957162 containerd[2016]: time="2024-11-12T17:42:15.957101047Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.0\"" Nov 12 17:42:15.984297 containerd[2016]: time="2024-11-12T17:42:15.984109111Z" level=info msg="CreateContainer within sandbox \"e87c8ad404184665b063ac28245839b822609ba0f9e1f0e3853ba4b376f93705\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Nov 12 17:42:16.010202 containerd[2016]: 
time="2024-11-12T17:42:16.010143219Z" level=info msg="CreateContainer within sandbox \"e87c8ad404184665b063ac28245839b822609ba0f9e1f0e3853ba4b376f93705\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"ab633a44efa282fa14f01082a8a69b6d4c784afb75fd1811e3a66ef463a6c992\"" Nov 12 17:42:16.013107 containerd[2016]: time="2024-11-12T17:42:16.012965067Z" level=info msg="StartContainer for \"ab633a44efa282fa14f01082a8a69b6d4c784afb75fd1811e3a66ef463a6c992\"" Nov 12 17:42:16.063179 systemd[1]: Started cri-containerd-ab633a44efa282fa14f01082a8a69b6d4c784afb75fd1811e3a66ef463a6c992.scope - libcontainer container ab633a44efa282fa14f01082a8a69b6d4c784afb75fd1811e3a66ef463a6c992. Nov 12 17:42:16.136074 containerd[2016]: time="2024-11-12T17:42:16.135911824Z" level=info msg="StartContainer for \"ab633a44efa282fa14f01082a8a69b6d4c784afb75fd1811e3a66ef463a6c992\" returns successfully" Nov 12 17:42:16.527595 kubelet[3383]: E1112 17:42:16.527476 3383 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gjlz6" podUID="135a017b-df90-4603-90c6-655608f495ae" Nov 12 17:42:16.775344 kubelet[3383]: E1112 17:42:16.775289 3383 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:42:16.775344 kubelet[3383]: W1112 17:42:16.775328 3383 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:42:16.776090 kubelet[3383]: E1112 17:42:16.775366 3383 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 17:42:16.776090 kubelet[3383]: E1112 17:42:16.775783 3383 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:42:16.776090 kubelet[3383]: W1112 17:42:16.775804 3383 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:42:16.776090 kubelet[3383]: E1112 17:42:16.775872 3383 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 17:42:16.777446 kubelet[3383]: E1112 17:42:16.776304 3383 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:42:16.777446 kubelet[3383]: W1112 17:42:16.776324 3383 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:42:16.777446 kubelet[3383]: E1112 17:42:16.776350 3383 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 17:42:16.777446 kubelet[3383]: E1112 17:42:16.776712 3383 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:42:16.777446 kubelet[3383]: W1112 17:42:16.776731 3383 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:42:16.777446 kubelet[3383]: E1112 17:42:16.776770 3383 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 17:42:16.777446 kubelet[3383]: E1112 17:42:16.777143 3383 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:42:16.777446 kubelet[3383]: W1112 17:42:16.777164 3383 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:42:16.777446 kubelet[3383]: E1112 17:42:16.777191 3383 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 17:42:16.780083 kubelet[3383]: E1112 17:42:16.777516 3383 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:42:16.780083 kubelet[3383]: W1112 17:42:16.777535 3383 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:42:16.780083 kubelet[3383]: E1112 17:42:16.777559 3383 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 17:42:16.780083 kubelet[3383]: E1112 17:42:16.777876 3383 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:42:16.780083 kubelet[3383]: W1112 17:42:16.777892 3383 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:42:16.780083 kubelet[3383]: E1112 17:42:16.777915 3383 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 17:42:16.780083 kubelet[3383]: E1112 17:42:16.778226 3383 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:42:16.780083 kubelet[3383]: W1112 17:42:16.778267 3383 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:42:16.780083 kubelet[3383]: E1112 17:42:16.778298 3383 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 17:42:16.780083 kubelet[3383]: E1112 17:42:16.778650 3383 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:42:16.780809 kubelet[3383]: W1112 17:42:16.778668 3383 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:42:16.780809 kubelet[3383]: E1112 17:42:16.778748 3383 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 17:42:16.780809 kubelet[3383]: E1112 17:42:16.779462 3383 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:42:16.780809 kubelet[3383]: W1112 17:42:16.779530 3383 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:42:16.780809 kubelet[3383]: E1112 17:42:16.779562 3383 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 17:42:16.783172 kubelet[3383]: E1112 17:42:16.781339 3383 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:42:16.783172 kubelet[3383]: W1112 17:42:16.781364 3383 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:42:16.783172 kubelet[3383]: E1112 17:42:16.781397 3383 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 17:42:16.783172 kubelet[3383]: E1112 17:42:16.782653 3383 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:42:16.783172 kubelet[3383]: W1112 17:42:16.782680 3383 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:42:16.783172 kubelet[3383]: E1112 17:42:16.782713 3383 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 17:42:16.783547 kubelet[3383]: E1112 17:42:16.783444 3383 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:42:16.783547 kubelet[3383]: W1112 17:42:16.783465 3383 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:42:16.783547 kubelet[3383]: E1112 17:42:16.783494 3383 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 17:42:16.784177 kubelet[3383]: E1112 17:42:16.784120 3383 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:42:16.784177 kubelet[3383]: W1112 17:42:16.784156 3383 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:42:16.784552 kubelet[3383]: E1112 17:42:16.784188 3383 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 17:42:16.784680 kubelet[3383]: E1112 17:42:16.784618 3383 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:42:16.784680 kubelet[3383]: W1112 17:42:16.784639 3383 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:42:16.784680 kubelet[3383]: E1112 17:42:16.784664 3383 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 17:42:16.811287 kubelet[3383]: E1112 17:42:16.811233 3383 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:42:16.811287 kubelet[3383]: W1112 17:42:16.811290 3383 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:42:16.811499 kubelet[3383]: E1112 17:42:16.811327 3383 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 17:42:16.811972 kubelet[3383]: E1112 17:42:16.811930 3383 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:42:16.811972 kubelet[3383]: W1112 17:42:16.811960 3383 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:42:16.812128 kubelet[3383]: E1112 17:42:16.811998 3383 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 17:42:16.812548 kubelet[3383]: E1112 17:42:16.812519 3383 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:42:16.812548 kubelet[3383]: W1112 17:42:16.812546 3383 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:42:16.812688 kubelet[3383]: E1112 17:42:16.812594 3383 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 17:42:16.813036 kubelet[3383]: E1112 17:42:16.813006 3383 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:42:16.813128 kubelet[3383]: W1112 17:42:16.813053 3383 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:42:16.813128 kubelet[3383]: E1112 17:42:16.813095 3383 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 17:42:16.813468 kubelet[3383]: E1112 17:42:16.813440 3383 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:42:16.813543 kubelet[3383]: W1112 17:42:16.813465 3383 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:42:16.813543 kubelet[3383]: E1112 17:42:16.813532 3383 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 17:42:16.813967 kubelet[3383]: E1112 17:42:16.813936 3383 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:42:16.813967 kubelet[3383]: W1112 17:42:16.813964 3383 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:42:16.814105 kubelet[3383]: E1112 17:42:16.814009 3383 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 17:42:16.814416 kubelet[3383]: E1112 17:42:16.814389 3383 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:42:16.814479 kubelet[3383]: W1112 17:42:16.814414 3383 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:42:16.814665 kubelet[3383]: E1112 17:42:16.814542 3383 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 17:42:16.814735 kubelet[3383]: E1112 17:42:16.814705 3383 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:42:16.814735 kubelet[3383]: W1112 17:42:16.814720 3383 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:42:16.814998 kubelet[3383]: E1112 17:42:16.814909 3383 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 17:42:16.815087 kubelet[3383]: E1112 17:42:16.815030 3383 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:42:16.815087 kubelet[3383]: W1112 17:42:16.815046 3383 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:42:16.815378 kubelet[3383]: E1112 17:42:16.815233 3383 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 17:42:16.815378 kubelet[3383]: E1112 17:42:16.815368 3383 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:42:16.815483 kubelet[3383]: W1112 17:42:16.815383 3383 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:42:16.815483 kubelet[3383]: E1112 17:42:16.815462 3383 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 17:42:16.816304 kubelet[3383]: E1112 17:42:16.816225 3383 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:42:16.816304 kubelet[3383]: W1112 17:42:16.816254 3383 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:42:16.816304 kubelet[3383]: E1112 17:42:16.816303 3383 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 17:42:16.816754 kubelet[3383]: E1112 17:42:16.816718 3383 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:42:16.816754 kubelet[3383]: W1112 17:42:16.816752 3383 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:42:16.816992 kubelet[3383]: E1112 17:42:16.816924 3383 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 17:42:16.817327 kubelet[3383]: E1112 17:42:16.817301 3383 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:42:16.817397 kubelet[3383]: W1112 17:42:16.817326 3383 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:42:16.817455 kubelet[3383]: E1112 17:42:16.817439 3383 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 17:42:16.817770 kubelet[3383]: E1112 17:42:16.817744 3383 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:42:16.817910 kubelet[3383]: W1112 17:42:16.817770 3383 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:42:16.817910 kubelet[3383]: E1112 17:42:16.817907 3383 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 17:42:16.818257 kubelet[3383]: E1112 17:42:16.818231 3383 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:42:16.818328 kubelet[3383]: W1112 17:42:16.818256 3383 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:42:16.818328 kubelet[3383]: E1112 17:42:16.818289 3383 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 17:42:16.818823 kubelet[3383]: E1112 17:42:16.818796 3383 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:42:16.818823 kubelet[3383]: W1112 17:42:16.818821 3383 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:42:16.819120 kubelet[3383]: E1112 17:42:16.819000 3383 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 17:42:16.819202 kubelet[3383]: E1112 17:42:16.819166 3383 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:42:16.819202 kubelet[3383]: W1112 17:42:16.819182 3383 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:42:16.819295 kubelet[3383]: E1112 17:42:16.819205 3383 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 17:42:16.820900 kubelet[3383]: E1112 17:42:16.820612 3383 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:42:16.820900 kubelet[3383]: W1112 17:42:16.820664 3383 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:42:16.820900 kubelet[3383]: E1112 17:42:16.820700 3383 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 17:42:17.207897 containerd[2016]: time="2024-11-12T17:42:17.207745937Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:42:17.209935 containerd[2016]: time="2024-11-12T17:42:17.209791193Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.0: active requests=0, bytes read=5117816" Nov 12 17:42:17.211283 containerd[2016]: time="2024-11-12T17:42:17.211034945Z" level=info msg="ImageCreate event name:\"sha256:bd15f6fc4f6c943c0f50373a7141cb17e8f12e21aaad47c24b6667c3f1c9947e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:42:17.218676 containerd[2016]: time="2024-11-12T17:42:17.218405477Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:bed11f00e388b9bbf6eb3be410d4bc86d7020f790902b87f9e330df5a2058769\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:42:17.220874 containerd[2016]: time="2024-11-12T17:42:17.220708577Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.0\" with image id \"sha256:bd15f6fc4f6c943c0f50373a7141cb17e8f12e21aaad47c24b6667c3f1c9947e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:bed11f00e388b9bbf6eb3be410d4bc86d7020f790902b87f9e330df5a2058769\", size \"6487412\" in 1.263542586s" Nov 12 17:42:17.220874 containerd[2016]: time="2024-11-12T17:42:17.220777865Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.0\" returns image reference \"sha256:bd15f6fc4f6c943c0f50373a7141cb17e8f12e21aaad47c24b6667c3f1c9947e\"" Nov 12 17:42:17.225477 containerd[2016]: time="2024-11-12T17:42:17.225191633Z" level=info msg="CreateContainer within sandbox \"ecdf5d7e778f9e6db1a92c8b67f5fcac0c9eafc4b3976c2ef6d8aad6b0ffbbac\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Nov 12 17:42:17.250370 containerd[2016]: time="2024-11-12T17:42:17.250302090Z" level=info msg="CreateContainer within sandbox \"ecdf5d7e778f9e6db1a92c8b67f5fcac0c9eafc4b3976c2ef6d8aad6b0ffbbac\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"fa72a384fc7e269e98e624cbc41b2a644fc4f51f9fbcd5a9bf1ff8fdc4cc11dc\"" Nov 12 17:42:17.253183 containerd[2016]: time="2024-11-12T17:42:17.252467886Z" level=info msg="StartContainer for \"fa72a384fc7e269e98e624cbc41b2a644fc4f51f9fbcd5a9bf1ff8fdc4cc11dc\"" Nov 12 17:42:17.331146 systemd[1]: Started cri-containerd-fa72a384fc7e269e98e624cbc41b2a644fc4f51f9fbcd5a9bf1ff8fdc4cc11dc.scope - libcontainer container fa72a384fc7e269e98e624cbc41b2a644fc4f51f9fbcd5a9bf1ff8fdc4cc11dc. Nov 12 17:42:17.391345 containerd[2016]: time="2024-11-12T17:42:17.391235346Z" level=info msg="StartContainer for \"fa72a384fc7e269e98e624cbc41b2a644fc4f51f9fbcd5a9bf1ff8fdc4cc11dc\" returns successfully" Nov 12 17:42:17.427972 systemd[1]: cri-containerd-fa72a384fc7e269e98e624cbc41b2a644fc4f51f9fbcd5a9bf1ff8fdc4cc11dc.scope: Deactivated successfully. 
Nov 12 17:42:17.761603 kubelet[3383]: I1112 17:42:17.761144 3383 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 12 17:42:17.776975 containerd[2016]: time="2024-11-12T17:42:17.776820908Z" level=info msg="shim disconnected" id=fa72a384fc7e269e98e624cbc41b2a644fc4f51f9fbcd5a9bf1ff8fdc4cc11dc namespace=k8s.io Nov 12 17:42:17.776975 containerd[2016]: time="2024-11-12T17:42:17.776963840Z" level=warning msg="cleaning up after shim disconnected" id=fa72a384fc7e269e98e624cbc41b2a644fc4f51f9fbcd5a9bf1ff8fdc4cc11dc namespace=k8s.io Nov 12 17:42:17.777283 containerd[2016]: time="2024-11-12T17:42:17.776985968Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 12 17:42:17.797952 kubelet[3383]: I1112 17:42:17.796982 3383 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-74b987d78d-g9wz9" podStartSLOduration=3.554765705 podStartE2EDuration="5.796925312s" podCreationTimestamp="2024-11-12 17:42:12 +0000 UTC" firstStartedPulling="2024-11-12 17:42:13.714113632 +0000 UTC m=+20.416166034" lastFinishedPulling="2024-11-12 17:42:15.956273179 +0000 UTC m=+22.658325641" observedRunningTime="2024-11-12 17:42:16.782093887 +0000 UTC m=+23.484146301" watchObservedRunningTime="2024-11-12 17:42:17.796925312 +0000 UTC m=+24.498977714" Nov 12 17:42:17.967359 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fa72a384fc7e269e98e624cbc41b2a644fc4f51f9fbcd5a9bf1ff8fdc4cc11dc-rootfs.mount: Deactivated successfully. 
Nov 12 17:42:18.527677 kubelet[3383]: E1112 17:42:18.527547 3383 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gjlz6" podUID="135a017b-df90-4603-90c6-655608f495ae" Nov 12 17:42:18.767976 containerd[2016]: time="2024-11-12T17:42:18.767906265Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.0\"" Nov 12 17:42:20.527627 kubelet[3383]: E1112 17:42:20.527502 3383 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gjlz6" podUID="135a017b-df90-4603-90c6-655608f495ae" Nov 12 17:42:22.527246 kubelet[3383]: E1112 17:42:22.527173 3383 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gjlz6" podUID="135a017b-df90-4603-90c6-655608f495ae" Nov 12 17:42:22.532819 containerd[2016]: time="2024-11-12T17:42:22.532295844Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:42:22.535280 containerd[2016]: time="2024-11-12T17:42:22.534894480Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.0: active requests=0, bytes read=89700517" Nov 12 17:42:22.535889 containerd[2016]: time="2024-11-12T17:42:22.535760664Z" level=info msg="ImageCreate event name:\"sha256:9c7b7d79ea478f25cd5de34ec1519a0aaa7ac440910e61075e65092a94aea41f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:42:22.545400 containerd[2016]: 
time="2024-11-12T17:42:22.545334588Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:a7c1b02375aa96ae882655397cd9dd0dcc867d9587ce7b866cf9cd65fd7ca1dd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:42:22.551589 containerd[2016]: time="2024-11-12T17:42:22.551533788Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.0\" with image id \"sha256:9c7b7d79ea478f25cd5de34ec1519a0aaa7ac440910e61075e65092a94aea41f\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:a7c1b02375aa96ae882655397cd9dd0dcc867d9587ce7b866cf9cd65fd7ca1dd\", size \"91070153\" in 3.783509431s" Nov 12 17:42:22.552268 containerd[2016]: time="2024-11-12T17:42:22.551819856Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.0\" returns image reference \"sha256:9c7b7d79ea478f25cd5de34ec1519a0aaa7ac440910e61075e65092a94aea41f\"" Nov 12 17:42:22.558818 containerd[2016]: time="2024-11-12T17:42:22.558751104Z" level=info msg="CreateContainer within sandbox \"ecdf5d7e778f9e6db1a92c8b67f5fcac0c9eafc4b3976c2ef6d8aad6b0ffbbac\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Nov 12 17:42:22.586506 containerd[2016]: time="2024-11-12T17:42:22.586442052Z" level=info msg="CreateContainer within sandbox \"ecdf5d7e778f9e6db1a92c8b67f5fcac0c9eafc4b3976c2ef6d8aad6b0ffbbac\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"d3df36696333e91cef4d5cdc9c6ee42118eb842b311ad0c016a502ae5d54eb76\"" Nov 12 17:42:22.588878 containerd[2016]: time="2024-11-12T17:42:22.587280312Z" level=info msg="StartContainer for \"d3df36696333e91cef4d5cdc9c6ee42118eb842b311ad0c016a502ae5d54eb76\"" Nov 12 17:42:22.644604 systemd[1]: Started cri-containerd-d3df36696333e91cef4d5cdc9c6ee42118eb842b311ad0c016a502ae5d54eb76.scope - libcontainer container d3df36696333e91cef4d5cdc9c6ee42118eb842b311ad0c016a502ae5d54eb76. 
Nov 12 17:42:22.700353 containerd[2016]: time="2024-11-12T17:42:22.699009301Z" level=info msg="StartContainer for \"d3df36696333e91cef4d5cdc9c6ee42118eb842b311ad0c016a502ae5d54eb76\" returns successfully" Nov 12 17:42:23.722108 containerd[2016]: time="2024-11-12T17:42:23.722039162Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 12 17:42:23.728709 systemd[1]: cri-containerd-d3df36696333e91cef4d5cdc9c6ee42118eb842b311ad0c016a502ae5d54eb76.scope: Deactivated successfully. Nov 12 17:42:23.777074 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d3df36696333e91cef4d5cdc9c6ee42118eb842b311ad0c016a502ae5d54eb76-rootfs.mount: Deactivated successfully. Nov 12 17:42:23.810459 kubelet[3383]: I1112 17:42:23.810409 3383 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Nov 12 17:42:23.869068 kubelet[3383]: I1112 17:42:23.869007 3383 topology_manager.go:215] "Topology Admit Handler" podUID="4c15ea44-79b4-4078-bf3c-e9244ce9a92c" podNamespace="kube-system" podName="coredns-76f75df574-rk7cj" Nov 12 17:42:23.913277 kubelet[3383]: I1112 17:42:23.913152 3383 topology_manager.go:215] "Topology Admit Handler" podUID="86515ae8-4b89-4d0b-9abb-d58cd726eec7" podNamespace="kube-system" podName="coredns-76f75df574-mh7mm" Nov 12 17:42:23.929649 kubelet[3383]: I1112 17:42:23.926892 3383 topology_manager.go:215] "Topology Admit Handler" podUID="26ead776-ee2c-425e-9c0d-03fa419f3738" podNamespace="calico-system" podName="calico-kube-controllers-847b7f5879-lngzn" Nov 12 17:42:23.929649 kubelet[3383]: I1112 17:42:23.927174 3383 topology_manager.go:215] "Topology Admit Handler" podUID="835e0a4e-90ef-4bd2-addb-1fda5f27cb7a" podNamespace="calico-apiserver" podName="calico-apiserver-5bf9fb5585-5crpq" Nov 12 17:42:23.929649 kubelet[3383]: I1112 
17:42:23.927385 3383 topology_manager.go:215] "Topology Admit Handler" podUID="ee626534-e22b-45d6-8fe0-7a3b6b222819" podNamespace="calico-apiserver" podName="calico-apiserver-5bf9fb5585-tpnkb" Nov 12 17:42:23.941534 systemd[1]: Created slice kubepods-burstable-pod4c15ea44_79b4_4078_bf3c_e9244ce9a92c.slice - libcontainer container kubepods-burstable-pod4c15ea44_79b4_4078_bf3c_e9244ce9a92c.slice. Nov 12 17:42:23.966478 systemd[1]: Created slice kubepods-burstable-pod86515ae8_4b89_4d0b_9abb_d58cd726eec7.slice - libcontainer container kubepods-burstable-pod86515ae8_4b89_4d0b_9abb_d58cd726eec7.slice. Nov 12 17:42:23.978806 kubelet[3383]: I1112 17:42:23.977336 3383 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/26ead776-ee2c-425e-9c0d-03fa419f3738-tigera-ca-bundle\") pod \"calico-kube-controllers-847b7f5879-lngzn\" (UID: \"26ead776-ee2c-425e-9c0d-03fa419f3738\") " pod="calico-system/calico-kube-controllers-847b7f5879-lngzn" Nov 12 17:42:23.978806 kubelet[3383]: I1112 17:42:23.977410 3383 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sb82m\" (UniqueName: \"kubernetes.io/projected/ee626534-e22b-45d6-8fe0-7a3b6b222819-kube-api-access-sb82m\") pod \"calico-apiserver-5bf9fb5585-tpnkb\" (UID: \"ee626534-e22b-45d6-8fe0-7a3b6b222819\") " pod="calico-apiserver/calico-apiserver-5bf9fb5585-tpnkb" Nov 12 17:42:23.978806 kubelet[3383]: I1112 17:42:23.977469 3383 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wh9hc\" (UniqueName: \"kubernetes.io/projected/26ead776-ee2c-425e-9c0d-03fa419f3738-kube-api-access-wh9hc\") pod \"calico-kube-controllers-847b7f5879-lngzn\" (UID: \"26ead776-ee2c-425e-9c0d-03fa419f3738\") " pod="calico-system/calico-kube-controllers-847b7f5879-lngzn" Nov 12 17:42:23.978806 kubelet[3383]: I1112 17:42:23.977520 3383 
reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-47h6p\" (UniqueName: \"kubernetes.io/projected/4c15ea44-79b4-4078-bf3c-e9244ce9a92c-kube-api-access-47h6p\") pod \"coredns-76f75df574-rk7cj\" (UID: \"4c15ea44-79b4-4078-bf3c-e9244ce9a92c\") " pod="kube-system/coredns-76f75df574-rk7cj" Nov 12 17:42:23.978806 kubelet[3383]: I1112 17:42:23.977566 3383 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4c15ea44-79b4-4078-bf3c-e9244ce9a92c-config-volume\") pod \"coredns-76f75df574-rk7cj\" (UID: \"4c15ea44-79b4-4078-bf3c-e9244ce9a92c\") " pod="kube-system/coredns-76f75df574-rk7cj" Nov 12 17:42:23.980368 kubelet[3383]: I1112 17:42:23.977612 3383 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/ee626534-e22b-45d6-8fe0-7a3b6b222819-calico-apiserver-certs\") pod \"calico-apiserver-5bf9fb5585-tpnkb\" (UID: \"ee626534-e22b-45d6-8fe0-7a3b6b222819\") " pod="calico-apiserver/calico-apiserver-5bf9fb5585-tpnkb" Nov 12 17:42:23.980368 kubelet[3383]: I1112 17:42:23.977659 3383 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qw846\" (UniqueName: \"kubernetes.io/projected/835e0a4e-90ef-4bd2-addb-1fda5f27cb7a-kube-api-access-qw846\") pod \"calico-apiserver-5bf9fb5585-5crpq\" (UID: \"835e0a4e-90ef-4bd2-addb-1fda5f27cb7a\") " pod="calico-apiserver/calico-apiserver-5bf9fb5585-5crpq" Nov 12 17:42:23.980368 kubelet[3383]: I1112 17:42:23.977712 3383 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/86515ae8-4b89-4d0b-9abb-d58cd726eec7-config-volume\") pod \"coredns-76f75df574-mh7mm\" (UID: \"86515ae8-4b89-4d0b-9abb-d58cd726eec7\") " 
pod="kube-system/coredns-76f75df574-mh7mm" Nov 12 17:42:23.980368 kubelet[3383]: I1112 17:42:23.977757 3383 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5qls8\" (UniqueName: \"kubernetes.io/projected/86515ae8-4b89-4d0b-9abb-d58cd726eec7-kube-api-access-5qls8\") pod \"coredns-76f75df574-mh7mm\" (UID: \"86515ae8-4b89-4d0b-9abb-d58cd726eec7\") " pod="kube-system/coredns-76f75df574-mh7mm" Nov 12 17:42:23.980368 kubelet[3383]: I1112 17:42:23.977857 3383 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/835e0a4e-90ef-4bd2-addb-1fda5f27cb7a-calico-apiserver-certs\") pod \"calico-apiserver-5bf9fb5585-5crpq\" (UID: \"835e0a4e-90ef-4bd2-addb-1fda5f27cb7a\") " pod="calico-apiserver/calico-apiserver-5bf9fb5585-5crpq" Nov 12 17:42:23.987095 systemd[1]: Created slice kubepods-besteffort-pod26ead776_ee2c_425e_9c0d_03fa419f3738.slice - libcontainer container kubepods-besteffort-pod26ead776_ee2c_425e_9c0d_03fa419f3738.slice. Nov 12 17:42:24.006144 systemd[1]: Created slice kubepods-besteffort-pod835e0a4e_90ef_4bd2_addb_1fda5f27cb7a.slice - libcontainer container kubepods-besteffort-pod835e0a4e_90ef_4bd2_addb_1fda5f27cb7a.slice. Nov 12 17:42:24.026690 systemd[1]: Created slice kubepods-besteffort-podee626534_e22b_45d6_8fe0_7a3b6b222819.slice - libcontainer container kubepods-besteffort-podee626534_e22b_45d6_8fe0_7a3b6b222819.slice. 
Nov 12 17:42:24.253666 containerd[2016]: time="2024-11-12T17:42:24.253533336Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-rk7cj,Uid:4c15ea44-79b4-4078-bf3c-e9244ce9a92c,Namespace:kube-system,Attempt:0,}" Nov 12 17:42:24.276993 containerd[2016]: time="2024-11-12T17:42:24.276898860Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-mh7mm,Uid:86515ae8-4b89-4d0b-9abb-d58cd726eec7,Namespace:kube-system,Attempt:0,}" Nov 12 17:42:24.299432 containerd[2016]: time="2024-11-12T17:42:24.299348353Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-847b7f5879-lngzn,Uid:26ead776-ee2c-425e-9c0d-03fa419f3738,Namespace:calico-system,Attempt:0,}" Nov 12 17:42:24.317873 containerd[2016]: time="2024-11-12T17:42:24.317688805Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5bf9fb5585-5crpq,Uid:835e0a4e-90ef-4bd2-addb-1fda5f27cb7a,Namespace:calico-apiserver,Attempt:0,}" Nov 12 17:42:24.336336 containerd[2016]: time="2024-11-12T17:42:24.336190957Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5bf9fb5585-tpnkb,Uid:ee626534-e22b-45d6-8fe0-7a3b6b222819,Namespace:calico-apiserver,Attempt:0,}" Nov 12 17:42:24.539330 systemd[1]: Created slice kubepods-besteffort-pod135a017b_df90_4603_90c6_655608f495ae.slice - libcontainer container kubepods-besteffort-pod135a017b_df90_4603_90c6_655608f495ae.slice. 
Nov 12 17:42:24.544414 containerd[2016]: time="2024-11-12T17:42:24.544264166Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gjlz6,Uid:135a017b-df90-4603-90c6-655608f495ae,Namespace:calico-system,Attempt:0,}" Nov 12 17:42:24.565478 containerd[2016]: time="2024-11-12T17:42:24.565139990Z" level=info msg="shim disconnected" id=d3df36696333e91cef4d5cdc9c6ee42118eb842b311ad0c016a502ae5d54eb76 namespace=k8s.io Nov 12 17:42:24.565478 containerd[2016]: time="2024-11-12T17:42:24.565213238Z" level=warning msg="cleaning up after shim disconnected" id=d3df36696333e91cef4d5cdc9c6ee42118eb842b311ad0c016a502ae5d54eb76 namespace=k8s.io Nov 12 17:42:24.565478 containerd[2016]: time="2024-11-12T17:42:24.565233218Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 12 17:42:24.857035 containerd[2016]: time="2024-11-12T17:42:24.846913719Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.0\"" Nov 12 17:42:24.959294 containerd[2016]: time="2024-11-12T17:42:24.959101600Z" level=error msg="Failed to destroy network for sandbox \"10d57f4ef10a104ff4a6d15ce8b1a3415dfcf06c2431abcfd5c8d237947b5696\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 17:42:24.965786 containerd[2016]: time="2024-11-12T17:42:24.963100000Z" level=error msg="encountered an error cleaning up failed sandbox \"10d57f4ef10a104ff4a6d15ce8b1a3415dfcf06c2431abcfd5c8d237947b5696\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 17:42:24.965786 containerd[2016]: time="2024-11-12T17:42:24.963224440Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-847b7f5879-lngzn,Uid:26ead776-ee2c-425e-9c0d-03fa419f3738,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"10d57f4ef10a104ff4a6d15ce8b1a3415dfcf06c2431abcfd5c8d237947b5696\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 17:42:24.966097 kubelet[3383]: E1112 17:42:24.963518 3383 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"10d57f4ef10a104ff4a6d15ce8b1a3415dfcf06c2431abcfd5c8d237947b5696\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 17:42:24.966097 kubelet[3383]: E1112 17:42:24.963601 3383 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"10d57f4ef10a104ff4a6d15ce8b1a3415dfcf06c2431abcfd5c8d237947b5696\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-847b7f5879-lngzn" Nov 12 17:42:24.966097 kubelet[3383]: E1112 17:42:24.963641 3383 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"10d57f4ef10a104ff4a6d15ce8b1a3415dfcf06c2431abcfd5c8d237947b5696\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-847b7f5879-lngzn" Nov 12 17:42:24.966723 kubelet[3383]: E1112 17:42:24.963752 3383 
pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-847b7f5879-lngzn_calico-system(26ead776-ee2c-425e-9c0d-03fa419f3738)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-847b7f5879-lngzn_calico-system(26ead776-ee2c-425e-9c0d-03fa419f3738)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"10d57f4ef10a104ff4a6d15ce8b1a3415dfcf06c2431abcfd5c8d237947b5696\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-847b7f5879-lngzn" podUID="26ead776-ee2c-425e-9c0d-03fa419f3738" Nov 12 17:42:24.968491 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-10d57f4ef10a104ff4a6d15ce8b1a3415dfcf06c2431abcfd5c8d237947b5696-shm.mount: Deactivated successfully. Nov 12 17:42:25.002397 containerd[2016]: time="2024-11-12T17:42:25.001648152Z" level=error msg="Failed to destroy network for sandbox \"b641d98c7b0b13f81f54e0ed02d37ec25427733ad581171499ab0b10c88736d6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 17:42:25.006926 containerd[2016]: time="2024-11-12T17:42:25.006673788Z" level=error msg="encountered an error cleaning up failed sandbox \"b641d98c7b0b13f81f54e0ed02d37ec25427733ad581171499ab0b10c88736d6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 17:42:25.007949 containerd[2016]: time="2024-11-12T17:42:25.006790008Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-76f75df574-rk7cj,Uid:4c15ea44-79b4-4078-bf3c-e9244ce9a92c,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b641d98c7b0b13f81f54e0ed02d37ec25427733ad581171499ab0b10c88736d6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 17:42:25.009926 kubelet[3383]: E1112 17:42:25.008351 3383 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b641d98c7b0b13f81f54e0ed02d37ec25427733ad581171499ab0b10c88736d6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 17:42:25.009926 kubelet[3383]: E1112 17:42:25.008441 3383 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b641d98c7b0b13f81f54e0ed02d37ec25427733ad581171499ab0b10c88736d6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-rk7cj" Nov 12 17:42:25.009926 kubelet[3383]: E1112 17:42:25.008483 3383 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b641d98c7b0b13f81f54e0ed02d37ec25427733ad581171499ab0b10c88736d6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-rk7cj" Nov 12 17:42:25.010300 kubelet[3383]: E1112 17:42:25.008573 3383 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to 
\"CreatePodSandbox\" for \"coredns-76f75df574-rk7cj_kube-system(4c15ea44-79b4-4078-bf3c-e9244ce9a92c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-rk7cj_kube-system(4c15ea44-79b4-4078-bf3c-e9244ce9a92c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b641d98c7b0b13f81f54e0ed02d37ec25427733ad581171499ab0b10c88736d6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-rk7cj" podUID="4c15ea44-79b4-4078-bf3c-e9244ce9a92c" Nov 12 17:42:25.018618 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b641d98c7b0b13f81f54e0ed02d37ec25427733ad581171499ab0b10c88736d6-shm.mount: Deactivated successfully. Nov 12 17:42:25.070733 containerd[2016]: time="2024-11-12T17:42:25.070324668Z" level=error msg="Failed to destroy network for sandbox \"dbdf92ef5b200287faf9c4dade1b952d57c6647a6d6585e8ccfd63bb1b5160e4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 17:42:25.073196 containerd[2016]: time="2024-11-12T17:42:25.072965256Z" level=error msg="encountered an error cleaning up failed sandbox \"dbdf92ef5b200287faf9c4dade1b952d57c6647a6d6585e8ccfd63bb1b5160e4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 17:42:25.073613 containerd[2016]: time="2024-11-12T17:42:25.073453140Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gjlz6,Uid:135a017b-df90-4603-90c6-655608f495ae,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox 
\"dbdf92ef5b200287faf9c4dade1b952d57c6647a6d6585e8ccfd63bb1b5160e4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 17:42:25.074544 kubelet[3383]: E1112 17:42:25.074049 3383 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dbdf92ef5b200287faf9c4dade1b952d57c6647a6d6585e8ccfd63bb1b5160e4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 17:42:25.074544 kubelet[3383]: E1112 17:42:25.074129 3383 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dbdf92ef5b200287faf9c4dade1b952d57c6647a6d6585e8ccfd63bb1b5160e4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-gjlz6" Nov 12 17:42:25.074544 kubelet[3383]: E1112 17:42:25.074167 3383 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dbdf92ef5b200287faf9c4dade1b952d57c6647a6d6585e8ccfd63bb1b5160e4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-gjlz6" Nov 12 17:42:25.074781 kubelet[3383]: E1112 17:42:25.074251 3383 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-gjlz6_calico-system(135a017b-df90-4603-90c6-655608f495ae)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"csi-node-driver-gjlz6_calico-system(135a017b-df90-4603-90c6-655608f495ae)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"dbdf92ef5b200287faf9c4dade1b952d57c6647a6d6585e8ccfd63bb1b5160e4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-gjlz6" podUID="135a017b-df90-4603-90c6-655608f495ae" Nov 12 17:42:25.076423 containerd[2016]: time="2024-11-12T17:42:25.076129860Z" level=error msg="Failed to destroy network for sandbox \"240550aaca81ab33d2661250ad64821c0b81d8e1446993b52695d49bfec372e4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 17:42:25.078300 containerd[2016]: time="2024-11-12T17:42:25.077979396Z" level=error msg="encountered an error cleaning up failed sandbox \"240550aaca81ab33d2661250ad64821c0b81d8e1446993b52695d49bfec372e4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 17:42:25.078300 containerd[2016]: time="2024-11-12T17:42:25.078225240Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5bf9fb5585-5crpq,Uid:835e0a4e-90ef-4bd2-addb-1fda5f27cb7a,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"240550aaca81ab33d2661250ad64821c0b81d8e1446993b52695d49bfec372e4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 17:42:25.079290 kubelet[3383]: E1112 17:42:25.079028 3383 remote_runtime.go:193] "RunPodSandbox from 
runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"240550aaca81ab33d2661250ad64821c0b81d8e1446993b52695d49bfec372e4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 17:42:25.079290 kubelet[3383]: E1112 17:42:25.079107 3383 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"240550aaca81ab33d2661250ad64821c0b81d8e1446993b52695d49bfec372e4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5bf9fb5585-5crpq" Nov 12 17:42:25.079290 kubelet[3383]: E1112 17:42:25.079145 3383 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"240550aaca81ab33d2661250ad64821c0b81d8e1446993b52695d49bfec372e4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5bf9fb5585-5crpq" Nov 12 17:42:25.079528 kubelet[3383]: E1112 17:42:25.079245 3383 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5bf9fb5585-5crpq_calico-apiserver(835e0a4e-90ef-4bd2-addb-1fda5f27cb7a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5bf9fb5585-5crpq_calico-apiserver(835e0a4e-90ef-4bd2-addb-1fda5f27cb7a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"240550aaca81ab33d2661250ad64821c0b81d8e1446993b52695d49bfec372e4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5bf9fb5585-5crpq" podUID="835e0a4e-90ef-4bd2-addb-1fda5f27cb7a" Nov 12 17:42:25.081949 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-dbdf92ef5b200287faf9c4dade1b952d57c6647a6d6585e8ccfd63bb1b5160e4-shm.mount: Deactivated successfully. Nov 12 17:42:25.091386 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-240550aaca81ab33d2661250ad64821c0b81d8e1446993b52695d49bfec372e4-shm.mount: Deactivated successfully. Nov 12 17:42:25.092745 containerd[2016]: time="2024-11-12T17:42:25.092059657Z" level=error msg="Failed to destroy network for sandbox \"d2b4ae93efedfba6e1c6f7ad52171dbcba8333e1a1f28be0068eb50275abc6cb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 17:42:25.094544 containerd[2016]: time="2024-11-12T17:42:25.094430665Z" level=error msg="encountered an error cleaning up failed sandbox \"d2b4ae93efedfba6e1c6f7ad52171dbcba8333e1a1f28be0068eb50275abc6cb\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 17:42:25.094708 containerd[2016]: time="2024-11-12T17:42:25.094582429Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5bf9fb5585-tpnkb,Uid:ee626534-e22b-45d6-8fe0-7a3b6b222819,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d2b4ae93efedfba6e1c6f7ad52171dbcba8333e1a1f28be0068eb50275abc6cb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 17:42:25.096419 
kubelet[3383]: E1112 17:42:25.096136 3383 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d2b4ae93efedfba6e1c6f7ad52171dbcba8333e1a1f28be0068eb50275abc6cb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 17:42:25.096419 kubelet[3383]: E1112 17:42:25.096222 3383 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d2b4ae93efedfba6e1c6f7ad52171dbcba8333e1a1f28be0068eb50275abc6cb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5bf9fb5585-tpnkb" Nov 12 17:42:25.096419 kubelet[3383]: E1112 17:42:25.096284 3383 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d2b4ae93efedfba6e1c6f7ad52171dbcba8333e1a1f28be0068eb50275abc6cb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5bf9fb5585-tpnkb" Nov 12 17:42:25.097056 kubelet[3383]: E1112 17:42:25.096377 3383 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5bf9fb5585-tpnkb_calico-apiserver(ee626534-e22b-45d6-8fe0-7a3b6b222819)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5bf9fb5585-tpnkb_calico-apiserver(ee626534-e22b-45d6-8fe0-7a3b6b222819)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d2b4ae93efedfba6e1c6f7ad52171dbcba8333e1a1f28be0068eb50275abc6cb\\\": 
plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5bf9fb5585-tpnkb" podUID="ee626534-e22b-45d6-8fe0-7a3b6b222819" Nov 12 17:42:25.098363 containerd[2016]: time="2024-11-12T17:42:25.098027545Z" level=error msg="Failed to destroy network for sandbox \"959315d43b7620f67eee0aa39a2fe8e5a633e8a327c8a0e5e002f3907ee1b18e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 17:42:25.099554 containerd[2016]: time="2024-11-12T17:42:25.098874925Z" level=error msg="encountered an error cleaning up failed sandbox \"959315d43b7620f67eee0aa39a2fe8e5a633e8a327c8a0e5e002f3907ee1b18e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 17:42:25.099554 containerd[2016]: time="2024-11-12T17:42:25.098975077Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-mh7mm,Uid:86515ae8-4b89-4d0b-9abb-d58cd726eec7,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"959315d43b7620f67eee0aa39a2fe8e5a633e8a327c8a0e5e002f3907ee1b18e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 17:42:25.099921 kubelet[3383]: E1112 17:42:25.099282 3383 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"959315d43b7620f67eee0aa39a2fe8e5a633e8a327c8a0e5e002f3907ee1b18e\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 17:42:25.099921 kubelet[3383]: E1112 17:42:25.099352 3383 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"959315d43b7620f67eee0aa39a2fe8e5a633e8a327c8a0e5e002f3907ee1b18e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-mh7mm" Nov 12 17:42:25.099921 kubelet[3383]: E1112 17:42:25.099392 3383 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"959315d43b7620f67eee0aa39a2fe8e5a633e8a327c8a0e5e002f3907ee1b18e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-mh7mm" Nov 12 17:42:25.100183 kubelet[3383]: E1112 17:42:25.099486 3383 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-mh7mm_kube-system(86515ae8-4b89-4d0b-9abb-d58cd726eec7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-mh7mm_kube-system(86515ae8-4b89-4d0b-9abb-d58cd726eec7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"959315d43b7620f67eee0aa39a2fe8e5a633e8a327c8a0e5e002f3907ee1b18e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-mh7mm" podUID="86515ae8-4b89-4d0b-9abb-d58cd726eec7" Nov 12 17:42:25.775814 systemd[1]: 
run-containerd-io.containerd.grpc.v1.cri-sandboxes-959315d43b7620f67eee0aa39a2fe8e5a633e8a327c8a0e5e002f3907ee1b18e-shm.mount: Deactivated successfully. Nov 12 17:42:25.776117 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d2b4ae93efedfba6e1c6f7ad52171dbcba8333e1a1f28be0068eb50275abc6cb-shm.mount: Deactivated successfully. Nov 12 17:42:25.837830 kubelet[3383]: I1112 17:42:25.837391 3383 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dbdf92ef5b200287faf9c4dade1b952d57c6647a6d6585e8ccfd63bb1b5160e4" Nov 12 17:42:25.839648 containerd[2016]: time="2024-11-12T17:42:25.839344144Z" level=info msg="StopPodSandbox for \"dbdf92ef5b200287faf9c4dade1b952d57c6647a6d6585e8ccfd63bb1b5160e4\"" Nov 12 17:42:25.841419 containerd[2016]: time="2024-11-12T17:42:25.841266676Z" level=info msg="StopPodSandbox for \"d2b4ae93efedfba6e1c6f7ad52171dbcba8333e1a1f28be0068eb50275abc6cb\"" Nov 12 17:42:25.841485 kubelet[3383]: I1112 17:42:25.840290 3383 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d2b4ae93efedfba6e1c6f7ad52171dbcba8333e1a1f28be0068eb50275abc6cb" Nov 12 17:42:25.841949 containerd[2016]: time="2024-11-12T17:42:25.841659172Z" level=info msg="Ensure that sandbox d2b4ae93efedfba6e1c6f7ad52171dbcba8333e1a1f28be0068eb50275abc6cb in task-service has been cleanup successfully" Nov 12 17:42:25.842340 containerd[2016]: time="2024-11-12T17:42:25.842179240Z" level=info msg="Ensure that sandbox dbdf92ef5b200287faf9c4dade1b952d57c6647a6d6585e8ccfd63bb1b5160e4 in task-service has been cleanup successfully" Nov 12 17:42:25.850220 kubelet[3383]: I1112 17:42:25.850153 3383 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b641d98c7b0b13f81f54e0ed02d37ec25427733ad581171499ab0b10c88736d6" Nov 12 17:42:25.851755 containerd[2016]: time="2024-11-12T17:42:25.851446696Z" level=info msg="StopPodSandbox for \"b641d98c7b0b13f81f54e0ed02d37ec25427733ad581171499ab0b10c88736d6\"" Nov 12 
17:42:25.851755 containerd[2016]: time="2024-11-12T17:42:25.851742016Z" level=info msg="Ensure that sandbox b641d98c7b0b13f81f54e0ed02d37ec25427733ad581171499ab0b10c88736d6 in task-service has been cleanup successfully" Nov 12 17:42:25.861146 kubelet[3383]: I1112 17:42:25.859823 3383 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="959315d43b7620f67eee0aa39a2fe8e5a633e8a327c8a0e5e002f3907ee1b18e" Nov 12 17:42:25.863134 containerd[2016]: time="2024-11-12T17:42:25.862769404Z" level=info msg="StopPodSandbox for \"959315d43b7620f67eee0aa39a2fe8e5a633e8a327c8a0e5e002f3907ee1b18e\"" Nov 12 17:42:25.866254 containerd[2016]: time="2024-11-12T17:42:25.865801636Z" level=info msg="Ensure that sandbox 959315d43b7620f67eee0aa39a2fe8e5a633e8a327c8a0e5e002f3907ee1b18e in task-service has been cleanup successfully" Nov 12 17:42:25.874231 kubelet[3383]: I1112 17:42:25.874173 3383 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="240550aaca81ab33d2661250ad64821c0b81d8e1446993b52695d49bfec372e4" Nov 12 17:42:25.877322 containerd[2016]: time="2024-11-12T17:42:25.877251112Z" level=info msg="StopPodSandbox for \"240550aaca81ab33d2661250ad64821c0b81d8e1446993b52695d49bfec372e4\"" Nov 12 17:42:25.877786 containerd[2016]: time="2024-11-12T17:42:25.877563736Z" level=info msg="Ensure that sandbox 240550aaca81ab33d2661250ad64821c0b81d8e1446993b52695d49bfec372e4 in task-service has been cleanup successfully" Nov 12 17:42:25.890102 kubelet[3383]: I1112 17:42:25.890047 3383 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="10d57f4ef10a104ff4a6d15ce8b1a3415dfcf06c2431abcfd5c8d237947b5696" Nov 12 17:42:25.892172 containerd[2016]: time="2024-11-12T17:42:25.892103681Z" level=info msg="StopPodSandbox for \"10d57f4ef10a104ff4a6d15ce8b1a3415dfcf06c2431abcfd5c8d237947b5696\"" Nov 12 17:42:25.892450 containerd[2016]: time="2024-11-12T17:42:25.892402289Z" level=info msg="Ensure that sandbox 
10d57f4ef10a104ff4a6d15ce8b1a3415dfcf06c2431abcfd5c8d237947b5696 in task-service has been cleanup successfully" Nov 12 17:42:26.022391 containerd[2016]: time="2024-11-12T17:42:26.022306609Z" level=error msg="StopPodSandbox for \"d2b4ae93efedfba6e1c6f7ad52171dbcba8333e1a1f28be0068eb50275abc6cb\" failed" error="failed to destroy network for sandbox \"d2b4ae93efedfba6e1c6f7ad52171dbcba8333e1a1f28be0068eb50275abc6cb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 17:42:26.023870 kubelet[3383]: E1112 17:42:26.023057 3383 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d2b4ae93efedfba6e1c6f7ad52171dbcba8333e1a1f28be0068eb50275abc6cb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d2b4ae93efedfba6e1c6f7ad52171dbcba8333e1a1f28be0068eb50275abc6cb" Nov 12 17:42:26.023870 kubelet[3383]: E1112 17:42:26.023237 3383 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d2b4ae93efedfba6e1c6f7ad52171dbcba8333e1a1f28be0068eb50275abc6cb"} Nov 12 17:42:26.023870 kubelet[3383]: E1112 17:42:26.023302 3383 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ee626534-e22b-45d6-8fe0-7a3b6b222819\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d2b4ae93efedfba6e1c6f7ad52171dbcba8333e1a1f28be0068eb50275abc6cb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 12 17:42:26.023870 kubelet[3383]: E1112 17:42:26.023357 3383 
pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ee626534-e22b-45d6-8fe0-7a3b6b222819\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d2b4ae93efedfba6e1c6f7ad52171dbcba8333e1a1f28be0068eb50275abc6cb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5bf9fb5585-tpnkb" podUID="ee626534-e22b-45d6-8fe0-7a3b6b222819" Nov 12 17:42:26.046071 containerd[2016]: time="2024-11-12T17:42:26.045989617Z" level=error msg="StopPodSandbox for \"959315d43b7620f67eee0aa39a2fe8e5a633e8a327c8a0e5e002f3907ee1b18e\" failed" error="failed to destroy network for sandbox \"959315d43b7620f67eee0aa39a2fe8e5a633e8a327c8a0e5e002f3907ee1b18e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 17:42:26.046495 kubelet[3383]: E1112 17:42:26.046460 3383 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"959315d43b7620f67eee0aa39a2fe8e5a633e8a327c8a0e5e002f3907ee1b18e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="959315d43b7620f67eee0aa39a2fe8e5a633e8a327c8a0e5e002f3907ee1b18e" Nov 12 17:42:26.046745 kubelet[3383]: E1112 17:42:26.046683 3383 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"959315d43b7620f67eee0aa39a2fe8e5a633e8a327c8a0e5e002f3907ee1b18e"} Nov 12 17:42:26.047015 kubelet[3383]: E1112 17:42:26.046990 3383 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to 
\"KillPodSandbox\" for \"86515ae8-4b89-4d0b-9abb-d58cd726eec7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"959315d43b7620f67eee0aa39a2fe8e5a633e8a327c8a0e5e002f3907ee1b18e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 12 17:42:26.047500 kubelet[3383]: E1112 17:42:26.047448 3383 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"86515ae8-4b89-4d0b-9abb-d58cd726eec7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"959315d43b7620f67eee0aa39a2fe8e5a633e8a327c8a0e5e002f3907ee1b18e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-mh7mm" podUID="86515ae8-4b89-4d0b-9abb-d58cd726eec7" Nov 12 17:42:26.077096 containerd[2016]: time="2024-11-12T17:42:26.077013385Z" level=error msg="StopPodSandbox for \"b641d98c7b0b13f81f54e0ed02d37ec25427733ad581171499ab0b10c88736d6\" failed" error="failed to destroy network for sandbox \"b641d98c7b0b13f81f54e0ed02d37ec25427733ad581171499ab0b10c88736d6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 17:42:26.080016 kubelet[3383]: E1112 17:42:26.079964 3383 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b641d98c7b0b13f81f54e0ed02d37ec25427733ad581171499ab0b10c88736d6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" podSandboxID="b641d98c7b0b13f81f54e0ed02d37ec25427733ad581171499ab0b10c88736d6" Nov 12 17:42:26.080164 kubelet[3383]: E1112 17:42:26.080037 3383 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b641d98c7b0b13f81f54e0ed02d37ec25427733ad581171499ab0b10c88736d6"} Nov 12 17:42:26.080164 kubelet[3383]: E1112 17:42:26.080124 3383 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4c15ea44-79b4-4078-bf3c-e9244ce9a92c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b641d98c7b0b13f81f54e0ed02d37ec25427733ad581171499ab0b10c88736d6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 12 17:42:26.080412 kubelet[3383]: E1112 17:42:26.080176 3383 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"4c15ea44-79b4-4078-bf3c-e9244ce9a92c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b641d98c7b0b13f81f54e0ed02d37ec25427733ad581171499ab0b10c88736d6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-rk7cj" podUID="4c15ea44-79b4-4078-bf3c-e9244ce9a92c" Nov 12 17:42:26.091890 containerd[2016]: time="2024-11-12T17:42:26.090365713Z" level=error msg="StopPodSandbox for \"240550aaca81ab33d2661250ad64821c0b81d8e1446993b52695d49bfec372e4\" failed" error="failed to destroy network for sandbox \"240550aaca81ab33d2661250ad64821c0b81d8e1446993b52695d49bfec372e4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Nov 12 17:42:26.091890 containerd[2016]: time="2024-11-12T17:42:26.090892165Z" level=error msg="StopPodSandbox for \"dbdf92ef5b200287faf9c4dade1b952d57c6647a6d6585e8ccfd63bb1b5160e4\" failed" error="failed to destroy network for sandbox \"dbdf92ef5b200287faf9c4dade1b952d57c6647a6d6585e8ccfd63bb1b5160e4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 17:42:26.092499 kubelet[3383]: E1112 17:42:26.092283 3383 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"240550aaca81ab33d2661250ad64821c0b81d8e1446993b52695d49bfec372e4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="240550aaca81ab33d2661250ad64821c0b81d8e1446993b52695d49bfec372e4" Nov 12 17:42:26.092499 kubelet[3383]: E1112 17:42:26.092304 3383 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"dbdf92ef5b200287faf9c4dade1b952d57c6647a6d6585e8ccfd63bb1b5160e4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="dbdf92ef5b200287faf9c4dade1b952d57c6647a6d6585e8ccfd63bb1b5160e4" Nov 12 17:42:26.092499 kubelet[3383]: E1112 17:42:26.092381 3383 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"dbdf92ef5b200287faf9c4dade1b952d57c6647a6d6585e8ccfd63bb1b5160e4"} Nov 12 17:42:26.092499 kubelet[3383]: E1112 17:42:26.092358 3383 kuberuntime_manager.go:1381] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"240550aaca81ab33d2661250ad64821c0b81d8e1446993b52695d49bfec372e4"} Nov 12 17:42:26.092499 kubelet[3383]: E1112 17:42:26.092471 3383 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"835e0a4e-90ef-4bd2-addb-1fda5f27cb7a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"240550aaca81ab33d2661250ad64821c0b81d8e1446993b52695d49bfec372e4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 12 17:42:26.093001 kubelet[3383]: E1112 17:42:26.092524 3383 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"835e0a4e-90ef-4bd2-addb-1fda5f27cb7a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"240550aaca81ab33d2661250ad64821c0b81d8e1446993b52695d49bfec372e4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5bf9fb5585-5crpq" podUID="835e0a4e-90ef-4bd2-addb-1fda5f27cb7a" Nov 12 17:42:26.093001 kubelet[3383]: E1112 17:42:26.092472 3383 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"135a017b-df90-4603-90c6-655608f495ae\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"dbdf92ef5b200287faf9c4dade1b952d57c6647a6d6585e8ccfd63bb1b5160e4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 12 17:42:26.093001 kubelet[3383]: E1112 17:42:26.092592 3383 pod_workers.go:1298] "Error syncing pod, 
skipping" err="failed to \"KillPodSandbox\" for \"135a017b-df90-4603-90c6-655608f495ae\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"dbdf92ef5b200287faf9c4dade1b952d57c6647a6d6585e8ccfd63bb1b5160e4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-gjlz6" podUID="135a017b-df90-4603-90c6-655608f495ae" Nov 12 17:42:26.097545 containerd[2016]: time="2024-11-12T17:42:26.097055006Z" level=error msg="StopPodSandbox for \"10d57f4ef10a104ff4a6d15ce8b1a3415dfcf06c2431abcfd5c8d237947b5696\" failed" error="failed to destroy network for sandbox \"10d57f4ef10a104ff4a6d15ce8b1a3415dfcf06c2431abcfd5c8d237947b5696\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 17:42:26.097786 kubelet[3383]: E1112 17:42:26.097707 3383 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"10d57f4ef10a104ff4a6d15ce8b1a3415dfcf06c2431abcfd5c8d237947b5696\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="10d57f4ef10a104ff4a6d15ce8b1a3415dfcf06c2431abcfd5c8d237947b5696" Nov 12 17:42:26.097786 kubelet[3383]: E1112 17:42:26.097767 3383 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"10d57f4ef10a104ff4a6d15ce8b1a3415dfcf06c2431abcfd5c8d237947b5696"} Nov 12 17:42:26.098152 kubelet[3383]: E1112 17:42:26.097897 3383 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"26ead776-ee2c-425e-9c0d-03fa419f3738\" with 
KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"10d57f4ef10a104ff4a6d15ce8b1a3415dfcf06c2431abcfd5c8d237947b5696\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 12 17:42:26.098152 kubelet[3383]: E1112 17:42:26.097959 3383 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"26ead776-ee2c-425e-9c0d-03fa419f3738\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"10d57f4ef10a104ff4a6d15ce8b1a3415dfcf06c2431abcfd5c8d237947b5696\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-847b7f5879-lngzn" podUID="26ead776-ee2c-425e-9c0d-03fa419f3738" Nov 12 17:42:30.787079 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2626062044.mount: Deactivated successfully. 
Nov 12 17:42:30.849570 containerd[2016]: time="2024-11-12T17:42:30.849480429Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:42:30.851995 containerd[2016]: time="2024-11-12T17:42:30.851585949Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.0: active requests=0, bytes read=135495328" Nov 12 17:42:30.854879 containerd[2016]: time="2024-11-12T17:42:30.853348521Z" level=info msg="ImageCreate event name:\"sha256:8d083b1bdef5f976f011d47e03dcb8015c1a80cb54a915c6b8e64df03f0743d5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:42:30.857519 containerd[2016]: time="2024-11-12T17:42:30.857464749Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:0761a4b4a20aefdf788f2b42a221bfcfe926a474152b74fbe091d847f5d823d7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:42:30.859732 containerd[2016]: time="2024-11-12T17:42:30.859638237Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.0\" with image id \"sha256:8d083b1bdef5f976f011d47e03dcb8015c1a80cb54a915c6b8e64df03f0743d5\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/node@sha256:0761a4b4a20aefdf788f2b42a221bfcfe926a474152b74fbe091d847f5d823d7\", size \"135495190\" in 6.011634018s" Nov 12 17:42:30.859915 containerd[2016]: time="2024-11-12T17:42:30.859736925Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.0\" returns image reference \"sha256:8d083b1bdef5f976f011d47e03dcb8015c1a80cb54a915c6b8e64df03f0743d5\"" Nov 12 17:42:30.899113 containerd[2016]: time="2024-11-12T17:42:30.899037405Z" level=info msg="CreateContainer within sandbox \"ecdf5d7e778f9e6db1a92c8b67f5fcac0c9eafc4b3976c2ef6d8aad6b0ffbbac\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Nov 12 17:42:30.925339 containerd[2016]: time="2024-11-12T17:42:30.925172134Z" level=info 
msg="CreateContainer within sandbox \"ecdf5d7e778f9e6db1a92c8b67f5fcac0c9eafc4b3976c2ef6d8aad6b0ffbbac\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"155254df73516e188a1120c526fb5dc087caa75eb248343f148514d42be821b6\"" Nov 12 17:42:30.927741 containerd[2016]: time="2024-11-12T17:42:30.926732542Z" level=info msg="StartContainer for \"155254df73516e188a1120c526fb5dc087caa75eb248343f148514d42be821b6\"" Nov 12 17:42:30.979150 systemd[1]: Started cri-containerd-155254df73516e188a1120c526fb5dc087caa75eb248343f148514d42be821b6.scope - libcontainer container 155254df73516e188a1120c526fb5dc087caa75eb248343f148514d42be821b6. Nov 12 17:42:31.046939 containerd[2016]: time="2024-11-12T17:42:31.046617606Z" level=info msg="StartContainer for \"155254df73516e188a1120c526fb5dc087caa75eb248343f148514d42be821b6\" returns successfully" Nov 12 17:42:31.200195 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Nov 12 17:42:31.200346 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Nov 12 17:42:32.956801 systemd[1]: Started sshd@7-172.31.27.255:22-139.178.89.65:39748.service - OpenSSH per-connection server daemon (139.178.89.65:39748). Nov 12 17:42:33.218195 sshd[4576]: Accepted publickey for core from 139.178.89.65 port 39748 ssh2: RSA SHA256:1a90X/uDC0ILhfMiA2YbbwEMVTxtJewsfiol0dYezPk Nov 12 17:42:33.224110 sshd[4576]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 17:42:33.255352 systemd-logind[1996]: New session 8 of user core. Nov 12 17:42:33.277337 systemd[1]: Started session-8.scope - Session 8 of User core. Nov 12 17:42:33.696255 sshd[4576]: pam_unix(sshd:session): session closed for user core Nov 12 17:42:33.709937 systemd[1]: sshd@7-172.31.27.255:22-139.178.89.65:39748.service: Deactivated successfully. Nov 12 17:42:33.722111 systemd[1]: session-8.scope: Deactivated successfully. Nov 12 17:42:33.729269 systemd-logind[1996]: Session 8 logged out. 
Waiting for processes to exit. Nov 12 17:42:33.735718 systemd-logind[1996]: Removed session 8. Nov 12 17:42:33.928898 kernel: bpftool[4661]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Nov 12 17:42:34.304627 systemd-networkd[1919]: vxlan.calico: Link UP Nov 12 17:42:34.304644 systemd-networkd[1919]: vxlan.calico: Gained carrier Nov 12 17:42:34.315159 (udev-worker)[4682]: Network interface NamePolicy= disabled on kernel command line. Nov 12 17:42:34.351434 (udev-worker)[4681]: Network interface NamePolicy= disabled on kernel command line. Nov 12 17:42:35.524180 systemd-networkd[1919]: vxlan.calico: Gained IPv6LL Nov 12 17:42:37.529381 containerd[2016]: time="2024-11-12T17:42:37.528816218Z" level=info msg="StopPodSandbox for \"240550aaca81ab33d2661250ad64821c0b81d8e1446993b52695d49bfec372e4\"" Nov 12 17:42:37.664978 kubelet[3383]: I1112 17:42:37.664912 3383 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-dgcmz" podStartSLOduration=7.547455538 podStartE2EDuration="24.664775499s" podCreationTimestamp="2024-11-12 17:42:13 +0000 UTC" firstStartedPulling="2024-11-12 17:42:13.743017348 +0000 UTC m=+20.445069750" lastFinishedPulling="2024-11-12 17:42:30.860337321 +0000 UTC m=+37.562389711" observedRunningTime="2024-11-12 17:42:31.977335127 +0000 UTC m=+38.679387541" watchObservedRunningTime="2024-11-12 17:42:37.664775499 +0000 UTC m=+44.366827913" Nov 12 17:42:37.748143 containerd[2016]: 2024-11-12 17:42:37.659 [INFO][4751] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="240550aaca81ab33d2661250ad64821c0b81d8e1446993b52695d49bfec372e4" Nov 12 17:42:37.748143 containerd[2016]: 2024-11-12 17:42:37.660 [INFO][4751] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="240550aaca81ab33d2661250ad64821c0b81d8e1446993b52695d49bfec372e4" iface="eth0" netns="/var/run/netns/cni-7f0b3e08-a914-4804-44af-bb781b3c38eb" Nov 12 17:42:37.748143 containerd[2016]: 2024-11-12 17:42:37.663 [INFO][4751] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="240550aaca81ab33d2661250ad64821c0b81d8e1446993b52695d49bfec372e4" iface="eth0" netns="/var/run/netns/cni-7f0b3e08-a914-4804-44af-bb781b3c38eb" Nov 12 17:42:37.748143 containerd[2016]: 2024-11-12 17:42:37.665 [INFO][4751] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="240550aaca81ab33d2661250ad64821c0b81d8e1446993b52695d49bfec372e4" iface="eth0" netns="/var/run/netns/cni-7f0b3e08-a914-4804-44af-bb781b3c38eb" Nov 12 17:42:37.748143 containerd[2016]: 2024-11-12 17:42:37.665 [INFO][4751] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="240550aaca81ab33d2661250ad64821c0b81d8e1446993b52695d49bfec372e4" Nov 12 17:42:37.748143 containerd[2016]: 2024-11-12 17:42:37.665 [INFO][4751] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="240550aaca81ab33d2661250ad64821c0b81d8e1446993b52695d49bfec372e4" Nov 12 17:42:37.748143 containerd[2016]: 2024-11-12 17:42:37.717 [INFO][4757] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="240550aaca81ab33d2661250ad64821c0b81d8e1446993b52695d49bfec372e4" HandleID="k8s-pod-network.240550aaca81ab33d2661250ad64821c0b81d8e1446993b52695d49bfec372e4" Workload="ip--172--31--27--255-k8s-calico--apiserver--5bf9fb5585--5crpq-eth0" Nov 12 17:42:37.748143 containerd[2016]: 2024-11-12 17:42:37.717 [INFO][4757] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 17:42:37.748143 containerd[2016]: 2024-11-12 17:42:37.717 [INFO][4757] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Nov 12 17:42:37.748143 containerd[2016]: 2024-11-12 17:42:37.735 [WARNING][4757] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="240550aaca81ab33d2661250ad64821c0b81d8e1446993b52695d49bfec372e4" HandleID="k8s-pod-network.240550aaca81ab33d2661250ad64821c0b81d8e1446993b52695d49bfec372e4" Workload="ip--172--31--27--255-k8s-calico--apiserver--5bf9fb5585--5crpq-eth0" Nov 12 17:42:37.748143 containerd[2016]: 2024-11-12 17:42:37.735 [INFO][4757] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="240550aaca81ab33d2661250ad64821c0b81d8e1446993b52695d49bfec372e4" HandleID="k8s-pod-network.240550aaca81ab33d2661250ad64821c0b81d8e1446993b52695d49bfec372e4" Workload="ip--172--31--27--255-k8s-calico--apiserver--5bf9fb5585--5crpq-eth0" Nov 12 17:42:37.748143 containerd[2016]: 2024-11-12 17:42:37.740 [INFO][4757] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 17:42:37.748143 containerd[2016]: 2024-11-12 17:42:37.745 [INFO][4751] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="240550aaca81ab33d2661250ad64821c0b81d8e1446993b52695d49bfec372e4" Nov 12 17:42:37.753359 containerd[2016]: time="2024-11-12T17:42:37.750068595Z" level=info msg="TearDown network for sandbox \"240550aaca81ab33d2661250ad64821c0b81d8e1446993b52695d49bfec372e4\" successfully" Nov 12 17:42:37.753359 containerd[2016]: time="2024-11-12T17:42:37.750144423Z" level=info msg="StopPodSandbox for \"240550aaca81ab33d2661250ad64821c0b81d8e1446993b52695d49bfec372e4\" returns successfully" Nov 12 17:42:37.753359 containerd[2016]: time="2024-11-12T17:42:37.752725563Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5bf9fb5585-5crpq,Uid:835e0a4e-90ef-4bd2-addb-1fda5f27cb7a,Namespace:calico-apiserver,Attempt:1,}" Nov 12 17:42:37.759747 systemd[1]: run-netns-cni\x2d7f0b3e08\x2da914\x2d4804\x2d44af\x2dbb781b3c38eb.mount: Deactivated successfully. 
Nov 12 17:42:37.769604 ntpd[1989]: Listen normally on 8 vxlan.calico 192.168.50.64:123 Nov 12 17:42:37.770341 ntpd[1989]: 12 Nov 17:42:37 ntpd[1989]: Listen normally on 8 vxlan.calico 192.168.50.64:123 Nov 12 17:42:37.770341 ntpd[1989]: 12 Nov 17:42:37 ntpd[1989]: Listen normally on 9 vxlan.calico [fe80::6401:27ff:fee9:83da%4]:123 Nov 12 17:42:37.769725 ntpd[1989]: Listen normally on 9 vxlan.calico [fe80::6401:27ff:fee9:83da%4]:123 Nov 12 17:42:38.030480 systemd-networkd[1919]: cali31188fe555b: Link UP Nov 12 17:42:38.031783 systemd-networkd[1919]: cali31188fe555b: Gained carrier Nov 12 17:42:38.040078 (udev-worker)[4783]: Network interface NamePolicy= disabled on kernel command line. Nov 12 17:42:38.067050 containerd[2016]: 2024-11-12 17:42:37.892 [INFO][4764] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--27--255-k8s-calico--apiserver--5bf9fb5585--5crpq-eth0 calico-apiserver-5bf9fb5585- calico-apiserver 835e0a4e-90ef-4bd2-addb-1fda5f27cb7a 836 0 2024-11-12 17:42:14 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5bf9fb5585 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-27-255 calico-apiserver-5bf9fb5585-5crpq eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali31188fe555b [] []}} ContainerID="b38b067222bcbd374f48f7cc924b40b6ca38f7516d4f729864d7cee0b0a60cd4" Namespace="calico-apiserver" Pod="calico-apiserver-5bf9fb5585-5crpq" WorkloadEndpoint="ip--172--31--27--255-k8s-calico--apiserver--5bf9fb5585--5crpq-" Nov 12 17:42:38.067050 containerd[2016]: 2024-11-12 17:42:37.892 [INFO][4764] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="b38b067222bcbd374f48f7cc924b40b6ca38f7516d4f729864d7cee0b0a60cd4" Namespace="calico-apiserver" 
Pod="calico-apiserver-5bf9fb5585-5crpq" WorkloadEndpoint="ip--172--31--27--255-k8s-calico--apiserver--5bf9fb5585--5crpq-eth0" Nov 12 17:42:38.067050 containerd[2016]: 2024-11-12 17:42:37.953 [INFO][4775] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b38b067222bcbd374f48f7cc924b40b6ca38f7516d4f729864d7cee0b0a60cd4" HandleID="k8s-pod-network.b38b067222bcbd374f48f7cc924b40b6ca38f7516d4f729864d7cee0b0a60cd4" Workload="ip--172--31--27--255-k8s-calico--apiserver--5bf9fb5585--5crpq-eth0" Nov 12 17:42:38.067050 containerd[2016]: 2024-11-12 17:42:37.973 [INFO][4775] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b38b067222bcbd374f48f7cc924b40b6ca38f7516d4f729864d7cee0b0a60cd4" HandleID="k8s-pod-network.b38b067222bcbd374f48f7cc924b40b6ca38f7516d4f729864d7cee0b0a60cd4" Workload="ip--172--31--27--255-k8s-calico--apiserver--5bf9fb5585--5crpq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003ba6d0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-27-255", "pod":"calico-apiserver-5bf9fb5585-5crpq", "timestamp":"2024-11-12 17:42:37.953409304 +0000 UTC"}, Hostname:"ip-172-31-27-255", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 12 17:42:38.067050 containerd[2016]: 2024-11-12 17:42:37.973 [INFO][4775] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 17:42:38.067050 containerd[2016]: 2024-11-12 17:42:37.973 [INFO][4775] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Nov 12 17:42:38.067050 containerd[2016]: 2024-11-12 17:42:37.973 [INFO][4775] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-27-255' Nov 12 17:42:38.067050 containerd[2016]: 2024-11-12 17:42:37.976 [INFO][4775] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.b38b067222bcbd374f48f7cc924b40b6ca38f7516d4f729864d7cee0b0a60cd4" host="ip-172-31-27-255" Nov 12 17:42:38.067050 containerd[2016]: 2024-11-12 17:42:37.982 [INFO][4775] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-27-255" Nov 12 17:42:38.067050 containerd[2016]: 2024-11-12 17:42:37.989 [INFO][4775] ipam/ipam.go 489: Trying affinity for 192.168.50.64/26 host="ip-172-31-27-255" Nov 12 17:42:38.067050 containerd[2016]: 2024-11-12 17:42:37.992 [INFO][4775] ipam/ipam.go 155: Attempting to load block cidr=192.168.50.64/26 host="ip-172-31-27-255" Nov 12 17:42:38.067050 containerd[2016]: 2024-11-12 17:42:37.996 [INFO][4775] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.50.64/26 host="ip-172-31-27-255" Nov 12 17:42:38.067050 containerd[2016]: 2024-11-12 17:42:37.996 [INFO][4775] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.50.64/26 handle="k8s-pod-network.b38b067222bcbd374f48f7cc924b40b6ca38f7516d4f729864d7cee0b0a60cd4" host="ip-172-31-27-255" Nov 12 17:42:38.067050 containerd[2016]: 2024-11-12 17:42:37.998 [INFO][4775] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.b38b067222bcbd374f48f7cc924b40b6ca38f7516d4f729864d7cee0b0a60cd4 Nov 12 17:42:38.067050 containerd[2016]: 2024-11-12 17:42:38.005 [INFO][4775] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.50.64/26 handle="k8s-pod-network.b38b067222bcbd374f48f7cc924b40b6ca38f7516d4f729864d7cee0b0a60cd4" host="ip-172-31-27-255" Nov 12 17:42:38.067050 containerd[2016]: 2024-11-12 17:42:38.019 [INFO][4775] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.50.65/26] block=192.168.50.64/26 
handle="k8s-pod-network.b38b067222bcbd374f48f7cc924b40b6ca38f7516d4f729864d7cee0b0a60cd4" host="ip-172-31-27-255" Nov 12 17:42:38.067050 containerd[2016]: 2024-11-12 17:42:38.019 [INFO][4775] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.50.65/26] handle="k8s-pod-network.b38b067222bcbd374f48f7cc924b40b6ca38f7516d4f729864d7cee0b0a60cd4" host="ip-172-31-27-255" Nov 12 17:42:38.067050 containerd[2016]: 2024-11-12 17:42:38.019 [INFO][4775] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 17:42:38.067050 containerd[2016]: 2024-11-12 17:42:38.019 [INFO][4775] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.50.65/26] IPv6=[] ContainerID="b38b067222bcbd374f48f7cc924b40b6ca38f7516d4f729864d7cee0b0a60cd4" HandleID="k8s-pod-network.b38b067222bcbd374f48f7cc924b40b6ca38f7516d4f729864d7cee0b0a60cd4" Workload="ip--172--31--27--255-k8s-calico--apiserver--5bf9fb5585--5crpq-eth0" Nov 12 17:42:38.075244 containerd[2016]: 2024-11-12 17:42:38.023 [INFO][4764] cni-plugin/k8s.go 386: Populated endpoint ContainerID="b38b067222bcbd374f48f7cc924b40b6ca38f7516d4f729864d7cee0b0a60cd4" Namespace="calico-apiserver" Pod="calico-apiserver-5bf9fb5585-5crpq" WorkloadEndpoint="ip--172--31--27--255-k8s-calico--apiserver--5bf9fb5585--5crpq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--27--255-k8s-calico--apiserver--5bf9fb5585--5crpq-eth0", GenerateName:"calico-apiserver-5bf9fb5585-", Namespace:"calico-apiserver", SelfLink:"", UID:"835e0a4e-90ef-4bd2-addb-1fda5f27cb7a", ResourceVersion:"836", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 17, 42, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5bf9fb5585", 
"projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-27-255", ContainerID:"", Pod:"calico-apiserver-5bf9fb5585-5crpq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.50.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali31188fe555b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 17:42:38.075244 containerd[2016]: 2024-11-12 17:42:38.024 [INFO][4764] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.50.65/32] ContainerID="b38b067222bcbd374f48f7cc924b40b6ca38f7516d4f729864d7cee0b0a60cd4" Namespace="calico-apiserver" Pod="calico-apiserver-5bf9fb5585-5crpq" WorkloadEndpoint="ip--172--31--27--255-k8s-calico--apiserver--5bf9fb5585--5crpq-eth0" Nov 12 17:42:38.075244 containerd[2016]: 2024-11-12 17:42:38.024 [INFO][4764] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali31188fe555b ContainerID="b38b067222bcbd374f48f7cc924b40b6ca38f7516d4f729864d7cee0b0a60cd4" Namespace="calico-apiserver" Pod="calico-apiserver-5bf9fb5585-5crpq" WorkloadEndpoint="ip--172--31--27--255-k8s-calico--apiserver--5bf9fb5585--5crpq-eth0" Nov 12 17:42:38.075244 containerd[2016]: 2024-11-12 17:42:38.032 [INFO][4764] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b38b067222bcbd374f48f7cc924b40b6ca38f7516d4f729864d7cee0b0a60cd4" Namespace="calico-apiserver" Pod="calico-apiserver-5bf9fb5585-5crpq" WorkloadEndpoint="ip--172--31--27--255-k8s-calico--apiserver--5bf9fb5585--5crpq-eth0" Nov 12 17:42:38.075244 containerd[2016]: 2024-11-12 
17:42:38.033 [INFO][4764] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="b38b067222bcbd374f48f7cc924b40b6ca38f7516d4f729864d7cee0b0a60cd4" Namespace="calico-apiserver" Pod="calico-apiserver-5bf9fb5585-5crpq" WorkloadEndpoint="ip--172--31--27--255-k8s-calico--apiserver--5bf9fb5585--5crpq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--27--255-k8s-calico--apiserver--5bf9fb5585--5crpq-eth0", GenerateName:"calico-apiserver-5bf9fb5585-", Namespace:"calico-apiserver", SelfLink:"", UID:"835e0a4e-90ef-4bd2-addb-1fda5f27cb7a", ResourceVersion:"836", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 17, 42, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5bf9fb5585", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-27-255", ContainerID:"b38b067222bcbd374f48f7cc924b40b6ca38f7516d4f729864d7cee0b0a60cd4", Pod:"calico-apiserver-5bf9fb5585-5crpq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.50.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali31188fe555b", MAC:"a6:03:43:4a:ed:0d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 17:42:38.075244 containerd[2016]: 2024-11-12 17:42:38.059 [INFO][4764] cni-plugin/k8s.go 
500: Wrote updated endpoint to datastore ContainerID="b38b067222bcbd374f48f7cc924b40b6ca38f7516d4f729864d7cee0b0a60cd4" Namespace="calico-apiserver" Pod="calico-apiserver-5bf9fb5585-5crpq" WorkloadEndpoint="ip--172--31--27--255-k8s-calico--apiserver--5bf9fb5585--5crpq-eth0" Nov 12 17:42:38.166654 containerd[2016]: time="2024-11-12T17:42:38.166137841Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 17:42:38.167298 containerd[2016]: time="2024-11-12T17:42:38.166510249Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 17:42:38.167524 containerd[2016]: time="2024-11-12T17:42:38.166936501Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 17:42:38.168678 containerd[2016]: time="2024-11-12T17:42:38.168384877Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 17:42:38.225148 systemd[1]: Started cri-containerd-b38b067222bcbd374f48f7cc924b40b6ca38f7516d4f729864d7cee0b0a60cd4.scope - libcontainer container b38b067222bcbd374f48f7cc924b40b6ca38f7516d4f729864d7cee0b0a60cd4. 
Nov 12 17:42:38.305926 containerd[2016]: time="2024-11-12T17:42:38.305714534Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5bf9fb5585-5crpq,Uid:835e0a4e-90ef-4bd2-addb-1fda5f27cb7a,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"b38b067222bcbd374f48f7cc924b40b6ca38f7516d4f729864d7cee0b0a60cd4\"" Nov 12 17:42:38.309660 containerd[2016]: time="2024-11-12T17:42:38.309454526Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.0\"" Nov 12 17:42:38.528899 containerd[2016]: time="2024-11-12T17:42:38.528458715Z" level=info msg="StopPodSandbox for \"b641d98c7b0b13f81f54e0ed02d37ec25427733ad581171499ab0b10c88736d6\"" Nov 12 17:42:38.690374 containerd[2016]: 2024-11-12 17:42:38.614 [INFO][4851] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="b641d98c7b0b13f81f54e0ed02d37ec25427733ad581171499ab0b10c88736d6" Nov 12 17:42:38.690374 containerd[2016]: 2024-11-12 17:42:38.615 [INFO][4851] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b641d98c7b0b13f81f54e0ed02d37ec25427733ad581171499ab0b10c88736d6" iface="eth0" netns="/var/run/netns/cni-ef4673f8-8367-4d01-f4f2-216c3b77bc14" Nov 12 17:42:38.690374 containerd[2016]: 2024-11-12 17:42:38.618 [INFO][4851] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b641d98c7b0b13f81f54e0ed02d37ec25427733ad581171499ab0b10c88736d6" iface="eth0" netns="/var/run/netns/cni-ef4673f8-8367-4d01-f4f2-216c3b77bc14" Nov 12 17:42:38.690374 containerd[2016]: 2024-11-12 17:42:38.618 [INFO][4851] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="b641d98c7b0b13f81f54e0ed02d37ec25427733ad581171499ab0b10c88736d6" iface="eth0" netns="/var/run/netns/cni-ef4673f8-8367-4d01-f4f2-216c3b77bc14" Nov 12 17:42:38.690374 containerd[2016]: 2024-11-12 17:42:38.618 [INFO][4851] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b641d98c7b0b13f81f54e0ed02d37ec25427733ad581171499ab0b10c88736d6" Nov 12 17:42:38.690374 containerd[2016]: 2024-11-12 17:42:38.618 [INFO][4851] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b641d98c7b0b13f81f54e0ed02d37ec25427733ad581171499ab0b10c88736d6" Nov 12 17:42:38.690374 containerd[2016]: 2024-11-12 17:42:38.663 [INFO][4857] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b641d98c7b0b13f81f54e0ed02d37ec25427733ad581171499ab0b10c88736d6" HandleID="k8s-pod-network.b641d98c7b0b13f81f54e0ed02d37ec25427733ad581171499ab0b10c88736d6" Workload="ip--172--31--27--255-k8s-coredns--76f75df574--rk7cj-eth0" Nov 12 17:42:38.690374 containerd[2016]: 2024-11-12 17:42:38.663 [INFO][4857] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 17:42:38.690374 containerd[2016]: 2024-11-12 17:42:38.663 [INFO][4857] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 17:42:38.690374 containerd[2016]: 2024-11-12 17:42:38.680 [WARNING][4857] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b641d98c7b0b13f81f54e0ed02d37ec25427733ad581171499ab0b10c88736d6" HandleID="k8s-pod-network.b641d98c7b0b13f81f54e0ed02d37ec25427733ad581171499ab0b10c88736d6" Workload="ip--172--31--27--255-k8s-coredns--76f75df574--rk7cj-eth0" Nov 12 17:42:38.690374 containerd[2016]: 2024-11-12 17:42:38.681 [INFO][4857] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b641d98c7b0b13f81f54e0ed02d37ec25427733ad581171499ab0b10c88736d6" HandleID="k8s-pod-network.b641d98c7b0b13f81f54e0ed02d37ec25427733ad581171499ab0b10c88736d6" Workload="ip--172--31--27--255-k8s-coredns--76f75df574--rk7cj-eth0" Nov 12 17:42:38.690374 containerd[2016]: 2024-11-12 17:42:38.684 [INFO][4857] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 17:42:38.690374 containerd[2016]: 2024-11-12 17:42:38.687 [INFO][4851] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="b641d98c7b0b13f81f54e0ed02d37ec25427733ad581171499ab0b10c88736d6" Nov 12 17:42:38.693087 containerd[2016]: time="2024-11-12T17:42:38.692606308Z" level=info msg="TearDown network for sandbox \"b641d98c7b0b13f81f54e0ed02d37ec25427733ad581171499ab0b10c88736d6\" successfully" Nov 12 17:42:38.693087 containerd[2016]: time="2024-11-12T17:42:38.692801044Z" level=info msg="StopPodSandbox for \"b641d98c7b0b13f81f54e0ed02d37ec25427733ad581171499ab0b10c88736d6\" returns successfully" Nov 12 17:42:38.695967 containerd[2016]: time="2024-11-12T17:42:38.695413912Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-rk7cj,Uid:4c15ea44-79b4-4078-bf3c-e9244ce9a92c,Namespace:kube-system,Attempt:1,}" Nov 12 17:42:38.736397 systemd[1]: Started sshd@8-172.31.27.255:22-139.178.89.65:57448.service - OpenSSH per-connection server daemon (139.178.89.65:57448). Nov 12 17:42:38.769448 systemd[1]: run-containerd-runc-k8s.io-b38b067222bcbd374f48f7cc924b40b6ca38f7516d4f729864d7cee0b0a60cd4-runc.YSu5Io.mount: Deactivated successfully. 
Nov 12 17:42:38.769645 systemd[1]: run-netns-cni\x2def4673f8\x2d8367\x2d4d01\x2df4f2\x2d216c3b77bc14.mount: Deactivated successfully. Nov 12 17:42:38.949376 sshd[4866]: Accepted publickey for core from 139.178.89.65 port 57448 ssh2: RSA SHA256:1a90X/uDC0ILhfMiA2YbbwEMVTxtJewsfiol0dYezPk Nov 12 17:42:38.960263 sshd[4866]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 17:42:38.979448 systemd-logind[1996]: New session 9 of user core. Nov 12 17:42:38.990939 systemd[1]: Started session-9.scope - Session 9 of User core. Nov 12 17:42:38.996248 systemd-networkd[1919]: cali8ff7367f0aa: Link UP Nov 12 17:42:38.996763 systemd-networkd[1919]: cali8ff7367f0aa: Gained carrier Nov 12 17:42:39.033524 containerd[2016]: 2024-11-12 17:42:38.823 [INFO][4864] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--27--255-k8s-coredns--76f75df574--rk7cj-eth0 coredns-76f75df574- kube-system 4c15ea44-79b4-4078-bf3c-e9244ce9a92c 850 0 2024-11-12 17:42:05 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-27-255 coredns-76f75df574-rk7cj eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali8ff7367f0aa [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="0a73470e67cceec0eb5339f96bc9035f0cff333ede507b1d508a5b0a01f0645e" Namespace="kube-system" Pod="coredns-76f75df574-rk7cj" WorkloadEndpoint="ip--172--31--27--255-k8s-coredns--76f75df574--rk7cj-" Nov 12 17:42:39.033524 containerd[2016]: 2024-11-12 17:42:38.823 [INFO][4864] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="0a73470e67cceec0eb5339f96bc9035f0cff333ede507b1d508a5b0a01f0645e" Namespace="kube-system" Pod="coredns-76f75df574-rk7cj" WorkloadEndpoint="ip--172--31--27--255-k8s-coredns--76f75df574--rk7cj-eth0" Nov 12 
17:42:39.033524 containerd[2016]: 2024-11-12 17:42:38.883 [INFO][4877] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0a73470e67cceec0eb5339f96bc9035f0cff333ede507b1d508a5b0a01f0645e" HandleID="k8s-pod-network.0a73470e67cceec0eb5339f96bc9035f0cff333ede507b1d508a5b0a01f0645e" Workload="ip--172--31--27--255-k8s-coredns--76f75df574--rk7cj-eth0" Nov 12 17:42:39.033524 containerd[2016]: 2024-11-12 17:42:38.909 [INFO][4877] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="0a73470e67cceec0eb5339f96bc9035f0cff333ede507b1d508a5b0a01f0645e" HandleID="k8s-pod-network.0a73470e67cceec0eb5339f96bc9035f0cff333ede507b1d508a5b0a01f0645e" Workload="ip--172--31--27--255-k8s-coredns--76f75df574--rk7cj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000316d20), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-27-255", "pod":"coredns-76f75df574-rk7cj", "timestamp":"2024-11-12 17:42:38.883085141 +0000 UTC"}, Hostname:"ip-172-31-27-255", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 12 17:42:39.033524 containerd[2016]: 2024-11-12 17:42:38.909 [INFO][4877] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 17:42:39.033524 containerd[2016]: 2024-11-12 17:42:38.909 [INFO][4877] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Nov 12 17:42:39.033524 containerd[2016]: 2024-11-12 17:42:38.909 [INFO][4877] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-27-255' Nov 12 17:42:39.033524 containerd[2016]: 2024-11-12 17:42:38.914 [INFO][4877] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.0a73470e67cceec0eb5339f96bc9035f0cff333ede507b1d508a5b0a01f0645e" host="ip-172-31-27-255" Nov 12 17:42:39.033524 containerd[2016]: 2024-11-12 17:42:38.926 [INFO][4877] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-27-255" Nov 12 17:42:39.033524 containerd[2016]: 2024-11-12 17:42:38.933 [INFO][4877] ipam/ipam.go 489: Trying affinity for 192.168.50.64/26 host="ip-172-31-27-255" Nov 12 17:42:39.033524 containerd[2016]: 2024-11-12 17:42:38.936 [INFO][4877] ipam/ipam.go 155: Attempting to load block cidr=192.168.50.64/26 host="ip-172-31-27-255" Nov 12 17:42:39.033524 containerd[2016]: 2024-11-12 17:42:38.942 [INFO][4877] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.50.64/26 host="ip-172-31-27-255" Nov 12 17:42:39.033524 containerd[2016]: 2024-11-12 17:42:38.942 [INFO][4877] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.50.64/26 handle="k8s-pod-network.0a73470e67cceec0eb5339f96bc9035f0cff333ede507b1d508a5b0a01f0645e" host="ip-172-31-27-255" Nov 12 17:42:39.033524 containerd[2016]: 2024-11-12 17:42:38.946 [INFO][4877] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.0a73470e67cceec0eb5339f96bc9035f0cff333ede507b1d508a5b0a01f0645e Nov 12 17:42:39.033524 containerd[2016]: 2024-11-12 17:42:38.962 [INFO][4877] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.50.64/26 handle="k8s-pod-network.0a73470e67cceec0eb5339f96bc9035f0cff333ede507b1d508a5b0a01f0645e" host="ip-172-31-27-255" Nov 12 17:42:39.033524 containerd[2016]: 2024-11-12 17:42:38.972 [INFO][4877] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.50.66/26] block=192.168.50.64/26 
handle="k8s-pod-network.0a73470e67cceec0eb5339f96bc9035f0cff333ede507b1d508a5b0a01f0645e" host="ip-172-31-27-255" Nov 12 17:42:39.033524 containerd[2016]: 2024-11-12 17:42:38.973 [INFO][4877] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.50.66/26] handle="k8s-pod-network.0a73470e67cceec0eb5339f96bc9035f0cff333ede507b1d508a5b0a01f0645e" host="ip-172-31-27-255" Nov 12 17:42:39.033524 containerd[2016]: 2024-11-12 17:42:38.973 [INFO][4877] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 17:42:39.033524 containerd[2016]: 2024-11-12 17:42:38.973 [INFO][4877] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.50.66/26] IPv6=[] ContainerID="0a73470e67cceec0eb5339f96bc9035f0cff333ede507b1d508a5b0a01f0645e" HandleID="k8s-pod-network.0a73470e67cceec0eb5339f96bc9035f0cff333ede507b1d508a5b0a01f0645e" Workload="ip--172--31--27--255-k8s-coredns--76f75df574--rk7cj-eth0" Nov 12 17:42:39.034660 containerd[2016]: 2024-11-12 17:42:38.978 [INFO][4864] cni-plugin/k8s.go 386: Populated endpoint ContainerID="0a73470e67cceec0eb5339f96bc9035f0cff333ede507b1d508a5b0a01f0645e" Namespace="kube-system" Pod="coredns-76f75df574-rk7cj" WorkloadEndpoint="ip--172--31--27--255-k8s-coredns--76f75df574--rk7cj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--27--255-k8s-coredns--76f75df574--rk7cj-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"4c15ea44-79b4-4078-bf3c-e9244ce9a92c", ResourceVersion:"850", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 17, 42, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-27-255", ContainerID:"", Pod:"coredns-76f75df574-rk7cj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.50.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8ff7367f0aa", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 17:42:39.034660 containerd[2016]: 2024-11-12 17:42:38.979 [INFO][4864] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.50.66/32] ContainerID="0a73470e67cceec0eb5339f96bc9035f0cff333ede507b1d508a5b0a01f0645e" Namespace="kube-system" Pod="coredns-76f75df574-rk7cj" WorkloadEndpoint="ip--172--31--27--255-k8s-coredns--76f75df574--rk7cj-eth0" Nov 12 17:42:39.034660 containerd[2016]: 2024-11-12 17:42:38.979 [INFO][4864] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8ff7367f0aa ContainerID="0a73470e67cceec0eb5339f96bc9035f0cff333ede507b1d508a5b0a01f0645e" Namespace="kube-system" Pod="coredns-76f75df574-rk7cj" WorkloadEndpoint="ip--172--31--27--255-k8s-coredns--76f75df574--rk7cj-eth0" Nov 12 17:42:39.034660 containerd[2016]: 2024-11-12 17:42:38.987 [INFO][4864] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0a73470e67cceec0eb5339f96bc9035f0cff333ede507b1d508a5b0a01f0645e" Namespace="kube-system" Pod="coredns-76f75df574-rk7cj" 
WorkloadEndpoint="ip--172--31--27--255-k8s-coredns--76f75df574--rk7cj-eth0" Nov 12 17:42:39.034660 containerd[2016]: 2024-11-12 17:42:38.988 [INFO][4864] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="0a73470e67cceec0eb5339f96bc9035f0cff333ede507b1d508a5b0a01f0645e" Namespace="kube-system" Pod="coredns-76f75df574-rk7cj" WorkloadEndpoint="ip--172--31--27--255-k8s-coredns--76f75df574--rk7cj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--27--255-k8s-coredns--76f75df574--rk7cj-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"4c15ea44-79b4-4078-bf3c-e9244ce9a92c", ResourceVersion:"850", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 17, 42, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-27-255", ContainerID:"0a73470e67cceec0eb5339f96bc9035f0cff333ede507b1d508a5b0a01f0645e", Pod:"coredns-76f75df574-rk7cj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.50.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8ff7367f0aa", MAC:"d6:b5:b8:3e:ca:45", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 17:42:39.034660 containerd[2016]: 2024-11-12 17:42:39.014 [INFO][4864] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="0a73470e67cceec0eb5339f96bc9035f0cff333ede507b1d508a5b0a01f0645e" Namespace="kube-system" Pod="coredns-76f75df574-rk7cj" WorkloadEndpoint="ip--172--31--27--255-k8s-coredns--76f75df574--rk7cj-eth0" Nov 12 17:42:39.091373 containerd[2016]: time="2024-11-12T17:42:39.090401726Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 17:42:39.091373 containerd[2016]: time="2024-11-12T17:42:39.090511646Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 17:42:39.091373 containerd[2016]: time="2024-11-12T17:42:39.090549962Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 17:42:39.091373 containerd[2016]: time="2024-11-12T17:42:39.090811526Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 17:42:39.166955 systemd[1]: Started cri-containerd-0a73470e67cceec0eb5339f96bc9035f0cff333ede507b1d508a5b0a01f0645e.scope - libcontainer container 0a73470e67cceec0eb5339f96bc9035f0cff333ede507b1d508a5b0a01f0645e. 
Nov 12 17:42:39.277828 containerd[2016]: time="2024-11-12T17:42:39.277743783Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-rk7cj,Uid:4c15ea44-79b4-4078-bf3c-e9244ce9a92c,Namespace:kube-system,Attempt:1,} returns sandbox id \"0a73470e67cceec0eb5339f96bc9035f0cff333ede507b1d508a5b0a01f0645e\"" Nov 12 17:42:39.289287 containerd[2016]: time="2024-11-12T17:42:39.288710871Z" level=info msg="CreateContainer within sandbox \"0a73470e67cceec0eb5339f96bc9035f0cff333ede507b1d508a5b0a01f0645e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 12 17:42:39.319078 containerd[2016]: time="2024-11-12T17:42:39.319005387Z" level=info msg="CreateContainer within sandbox \"0a73470e67cceec0eb5339f96bc9035f0cff333ede507b1d508a5b0a01f0645e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ca381a30c3b23202d723e90515c6d74bbd96578a4199c472d130ea0029c583e4\"" Nov 12 17:42:39.326243 containerd[2016]: time="2024-11-12T17:42:39.324121863Z" level=info msg="StartContainer for \"ca381a30c3b23202d723e90515c6d74bbd96578a4199c472d130ea0029c583e4\"" Nov 12 17:42:39.332934 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1446009173.mount: Deactivated successfully. Nov 12 17:42:39.388932 sshd[4866]: pam_unix(sshd:session): session closed for user core Nov 12 17:42:39.390172 systemd[1]: Started cri-containerd-ca381a30c3b23202d723e90515c6d74bbd96578a4199c472d130ea0029c583e4.scope - libcontainer container ca381a30c3b23202d723e90515c6d74bbd96578a4199c472d130ea0029c583e4. Nov 12 17:42:39.398511 systemd[1]: sshd@8-172.31.27.255:22-139.178.89.65:57448.service: Deactivated successfully. Nov 12 17:42:39.406138 systemd[1]: session-9.scope: Deactivated successfully. Nov 12 17:42:39.409468 systemd-logind[1996]: Session 9 logged out. Waiting for processes to exit. Nov 12 17:42:39.415604 systemd-logind[1996]: Removed session 9. 
Nov 12 17:42:39.454516 containerd[2016]: time="2024-11-12T17:42:39.454435792Z" level=info msg="StartContainer for \"ca381a30c3b23202d723e90515c6d74bbd96578a4199c472d130ea0029c583e4\" returns successfully" Nov 12 17:42:39.532412 containerd[2016]: time="2024-11-12T17:42:39.532252240Z" level=info msg="StopPodSandbox for \"10d57f4ef10a104ff4a6d15ce8b1a3415dfcf06c2431abcfd5c8d237947b5696\"" Nov 12 17:42:39.718220 containerd[2016]: 2024-11-12 17:42:39.643 [INFO][5003] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="10d57f4ef10a104ff4a6d15ce8b1a3415dfcf06c2431abcfd5c8d237947b5696" Nov 12 17:42:39.718220 containerd[2016]: 2024-11-12 17:42:39.645 [INFO][5003] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="10d57f4ef10a104ff4a6d15ce8b1a3415dfcf06c2431abcfd5c8d237947b5696" iface="eth0" netns="/var/run/netns/cni-fc7310ed-d9e0-773b-dcb2-9be48a394f67" Nov 12 17:42:39.718220 containerd[2016]: 2024-11-12 17:42:39.646 [INFO][5003] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="10d57f4ef10a104ff4a6d15ce8b1a3415dfcf06c2431abcfd5c8d237947b5696" iface="eth0" netns="/var/run/netns/cni-fc7310ed-d9e0-773b-dcb2-9be48a394f67" Nov 12 17:42:39.718220 containerd[2016]: 2024-11-12 17:42:39.647 [INFO][5003] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="10d57f4ef10a104ff4a6d15ce8b1a3415dfcf06c2431abcfd5c8d237947b5696" iface="eth0" netns="/var/run/netns/cni-fc7310ed-d9e0-773b-dcb2-9be48a394f67" Nov 12 17:42:39.718220 containerd[2016]: 2024-11-12 17:42:39.647 [INFO][5003] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="10d57f4ef10a104ff4a6d15ce8b1a3415dfcf06c2431abcfd5c8d237947b5696" Nov 12 17:42:39.718220 containerd[2016]: 2024-11-12 17:42:39.647 [INFO][5003] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="10d57f4ef10a104ff4a6d15ce8b1a3415dfcf06c2431abcfd5c8d237947b5696" Nov 12 17:42:39.718220 containerd[2016]: 2024-11-12 17:42:39.694 [INFO][5009] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="10d57f4ef10a104ff4a6d15ce8b1a3415dfcf06c2431abcfd5c8d237947b5696" HandleID="k8s-pod-network.10d57f4ef10a104ff4a6d15ce8b1a3415dfcf06c2431abcfd5c8d237947b5696" Workload="ip--172--31--27--255-k8s-calico--kube--controllers--847b7f5879--lngzn-eth0" Nov 12 17:42:39.718220 containerd[2016]: 2024-11-12 17:42:39.694 [INFO][5009] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 17:42:39.718220 containerd[2016]: 2024-11-12 17:42:39.694 [INFO][5009] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 17:42:39.718220 containerd[2016]: 2024-11-12 17:42:39.707 [WARNING][5009] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="10d57f4ef10a104ff4a6d15ce8b1a3415dfcf06c2431abcfd5c8d237947b5696" HandleID="k8s-pod-network.10d57f4ef10a104ff4a6d15ce8b1a3415dfcf06c2431abcfd5c8d237947b5696" Workload="ip--172--31--27--255-k8s-calico--kube--controllers--847b7f5879--lngzn-eth0" Nov 12 17:42:39.718220 containerd[2016]: 2024-11-12 17:42:39.707 [INFO][5009] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="10d57f4ef10a104ff4a6d15ce8b1a3415dfcf06c2431abcfd5c8d237947b5696" HandleID="k8s-pod-network.10d57f4ef10a104ff4a6d15ce8b1a3415dfcf06c2431abcfd5c8d237947b5696" Workload="ip--172--31--27--255-k8s-calico--kube--controllers--847b7f5879--lngzn-eth0" Nov 12 17:42:39.718220 containerd[2016]: 2024-11-12 17:42:39.712 [INFO][5009] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 17:42:39.718220 containerd[2016]: 2024-11-12 17:42:39.714 [INFO][5003] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="10d57f4ef10a104ff4a6d15ce8b1a3415dfcf06c2431abcfd5c8d237947b5696" Nov 12 17:42:39.720754 containerd[2016]: time="2024-11-12T17:42:39.718454189Z" level=info msg="TearDown network for sandbox \"10d57f4ef10a104ff4a6d15ce8b1a3415dfcf06c2431abcfd5c8d237947b5696\" successfully" Nov 12 17:42:39.720754 containerd[2016]: time="2024-11-12T17:42:39.718493525Z" level=info msg="StopPodSandbox for \"10d57f4ef10a104ff4a6d15ce8b1a3415dfcf06c2431abcfd5c8d237947b5696\" returns successfully" Nov 12 17:42:39.720754 containerd[2016]: time="2024-11-12T17:42:39.719315549Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-847b7f5879-lngzn,Uid:26ead776-ee2c-425e-9c0d-03fa419f3738,Namespace:calico-system,Attempt:1,}" Nov 12 17:42:39.767260 systemd[1]: run-netns-cni\x2dfc7310ed\x2dd9e0\x2d773b\x2ddcb2\x2d9be48a394f67.mount: Deactivated successfully. 
Nov 12 17:42:39.941307 systemd-networkd[1919]: cali31188fe555b: Gained IPv6LL Nov 12 17:42:39.952728 systemd-networkd[1919]: cali1ef2d131237: Link UP Nov 12 17:42:39.954191 systemd-networkd[1919]: cali1ef2d131237: Gained carrier Nov 12 17:42:39.993326 containerd[2016]: 2024-11-12 17:42:39.826 [INFO][5018] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--27--255-k8s-calico--kube--controllers--847b7f5879--lngzn-eth0 calico-kube-controllers-847b7f5879- calico-system 26ead776-ee2c-425e-9c0d-03fa419f3738 860 0 2024-11-12 17:42:13 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:847b7f5879 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-27-255 calico-kube-controllers-847b7f5879-lngzn eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali1ef2d131237 [] []}} ContainerID="a52cc68d2973db5ab35a375df9629afd037b73aadd3e68d47e02000c5866db6c" Namespace="calico-system" Pod="calico-kube-controllers-847b7f5879-lngzn" WorkloadEndpoint="ip--172--31--27--255-k8s-calico--kube--controllers--847b7f5879--lngzn-" Nov 12 17:42:39.993326 containerd[2016]: 2024-11-12 17:42:39.826 [INFO][5018] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="a52cc68d2973db5ab35a375df9629afd037b73aadd3e68d47e02000c5866db6c" Namespace="calico-system" Pod="calico-kube-controllers-847b7f5879-lngzn" WorkloadEndpoint="ip--172--31--27--255-k8s-calico--kube--controllers--847b7f5879--lngzn-eth0" Nov 12 17:42:39.993326 containerd[2016]: 2024-11-12 17:42:39.877 [INFO][5032] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a52cc68d2973db5ab35a375df9629afd037b73aadd3e68d47e02000c5866db6c" HandleID="k8s-pod-network.a52cc68d2973db5ab35a375df9629afd037b73aadd3e68d47e02000c5866db6c" 
Workload="ip--172--31--27--255-k8s-calico--kube--controllers--847b7f5879--lngzn-eth0" Nov 12 17:42:39.993326 containerd[2016]: 2024-11-12 17:42:39.893 [INFO][5032] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a52cc68d2973db5ab35a375df9629afd037b73aadd3e68d47e02000c5866db6c" HandleID="k8s-pod-network.a52cc68d2973db5ab35a375df9629afd037b73aadd3e68d47e02000c5866db6c" Workload="ip--172--31--27--255-k8s-calico--kube--controllers--847b7f5879--lngzn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004df10), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-27-255", "pod":"calico-kube-controllers-847b7f5879-lngzn", "timestamp":"2024-11-12 17:42:39.877129482 +0000 UTC"}, Hostname:"ip-172-31-27-255", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 12 17:42:39.993326 containerd[2016]: 2024-11-12 17:42:39.893 [INFO][5032] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 17:42:39.993326 containerd[2016]: 2024-11-12 17:42:39.893 [INFO][5032] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Nov 12 17:42:39.993326 containerd[2016]: 2024-11-12 17:42:39.894 [INFO][5032] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-27-255' Nov 12 17:42:39.993326 containerd[2016]: 2024-11-12 17:42:39.896 [INFO][5032] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.a52cc68d2973db5ab35a375df9629afd037b73aadd3e68d47e02000c5866db6c" host="ip-172-31-27-255" Nov 12 17:42:39.993326 containerd[2016]: 2024-11-12 17:42:39.903 [INFO][5032] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-27-255" Nov 12 17:42:39.993326 containerd[2016]: 2024-11-12 17:42:39.913 [INFO][5032] ipam/ipam.go 489: Trying affinity for 192.168.50.64/26 host="ip-172-31-27-255" Nov 12 17:42:39.993326 containerd[2016]: 2024-11-12 17:42:39.916 [INFO][5032] ipam/ipam.go 155: Attempting to load block cidr=192.168.50.64/26 host="ip-172-31-27-255" Nov 12 17:42:39.993326 containerd[2016]: 2024-11-12 17:42:39.920 [INFO][5032] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.50.64/26 host="ip-172-31-27-255" Nov 12 17:42:39.993326 containerd[2016]: 2024-11-12 17:42:39.920 [INFO][5032] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.50.64/26 handle="k8s-pod-network.a52cc68d2973db5ab35a375df9629afd037b73aadd3e68d47e02000c5866db6c" host="ip-172-31-27-255" Nov 12 17:42:39.993326 containerd[2016]: 2024-11-12 17:42:39.923 [INFO][5032] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.a52cc68d2973db5ab35a375df9629afd037b73aadd3e68d47e02000c5866db6c Nov 12 17:42:39.993326 containerd[2016]: 2024-11-12 17:42:39.930 [INFO][5032] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.50.64/26 handle="k8s-pod-network.a52cc68d2973db5ab35a375df9629afd037b73aadd3e68d47e02000c5866db6c" host="ip-172-31-27-255" Nov 12 17:42:39.993326 containerd[2016]: 2024-11-12 17:42:39.939 [INFO][5032] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.50.67/26] block=192.168.50.64/26 
handle="k8s-pod-network.a52cc68d2973db5ab35a375df9629afd037b73aadd3e68d47e02000c5866db6c" host="ip-172-31-27-255" Nov 12 17:42:39.993326 containerd[2016]: 2024-11-12 17:42:39.939 [INFO][5032] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.50.67/26] handle="k8s-pod-network.a52cc68d2973db5ab35a375df9629afd037b73aadd3e68d47e02000c5866db6c" host="ip-172-31-27-255" Nov 12 17:42:39.993326 containerd[2016]: 2024-11-12 17:42:39.939 [INFO][5032] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 17:42:39.993326 containerd[2016]: 2024-11-12 17:42:39.939 [INFO][5032] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.50.67/26] IPv6=[] ContainerID="a52cc68d2973db5ab35a375df9629afd037b73aadd3e68d47e02000c5866db6c" HandleID="k8s-pod-network.a52cc68d2973db5ab35a375df9629afd037b73aadd3e68d47e02000c5866db6c" Workload="ip--172--31--27--255-k8s-calico--kube--controllers--847b7f5879--lngzn-eth0" Nov 12 17:42:40.003122 containerd[2016]: 2024-11-12 17:42:39.945 [INFO][5018] cni-plugin/k8s.go 386: Populated endpoint ContainerID="a52cc68d2973db5ab35a375df9629afd037b73aadd3e68d47e02000c5866db6c" Namespace="calico-system" Pod="calico-kube-controllers-847b7f5879-lngzn" WorkloadEndpoint="ip--172--31--27--255-k8s-calico--kube--controllers--847b7f5879--lngzn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--27--255-k8s-calico--kube--controllers--847b7f5879--lngzn-eth0", GenerateName:"calico-kube-controllers-847b7f5879-", Namespace:"calico-system", SelfLink:"", UID:"26ead776-ee2c-425e-9c0d-03fa419f3738", ResourceVersion:"860", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 17, 42, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"847b7f5879", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-27-255", ContainerID:"", Pod:"calico-kube-controllers-847b7f5879-lngzn", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.50.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali1ef2d131237", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 17:42:40.003122 containerd[2016]: 2024-11-12 17:42:39.945 [INFO][5018] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.50.67/32] ContainerID="a52cc68d2973db5ab35a375df9629afd037b73aadd3e68d47e02000c5866db6c" Namespace="calico-system" Pod="calico-kube-controllers-847b7f5879-lngzn" WorkloadEndpoint="ip--172--31--27--255-k8s-calico--kube--controllers--847b7f5879--lngzn-eth0" Nov 12 17:42:40.003122 containerd[2016]: 2024-11-12 17:42:39.945 [INFO][5018] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1ef2d131237 ContainerID="a52cc68d2973db5ab35a375df9629afd037b73aadd3e68d47e02000c5866db6c" Namespace="calico-system" Pod="calico-kube-controllers-847b7f5879-lngzn" WorkloadEndpoint="ip--172--31--27--255-k8s-calico--kube--controllers--847b7f5879--lngzn-eth0" Nov 12 17:42:40.003122 containerd[2016]: 2024-11-12 17:42:39.954 [INFO][5018] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a52cc68d2973db5ab35a375df9629afd037b73aadd3e68d47e02000c5866db6c" Namespace="calico-system" Pod="calico-kube-controllers-847b7f5879-lngzn" WorkloadEndpoint="ip--172--31--27--255-k8s-calico--kube--controllers--847b7f5879--lngzn-eth0" 
Nov 12 17:42:40.003122 containerd[2016]: 2024-11-12 17:42:39.955 [INFO][5018] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="a52cc68d2973db5ab35a375df9629afd037b73aadd3e68d47e02000c5866db6c" Namespace="calico-system" Pod="calico-kube-controllers-847b7f5879-lngzn" WorkloadEndpoint="ip--172--31--27--255-k8s-calico--kube--controllers--847b7f5879--lngzn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--27--255-k8s-calico--kube--controllers--847b7f5879--lngzn-eth0", GenerateName:"calico-kube-controllers-847b7f5879-", Namespace:"calico-system", SelfLink:"", UID:"26ead776-ee2c-425e-9c0d-03fa419f3738", ResourceVersion:"860", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 17, 42, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"847b7f5879", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-27-255", ContainerID:"a52cc68d2973db5ab35a375df9629afd037b73aadd3e68d47e02000c5866db6c", Pod:"calico-kube-controllers-847b7f5879-lngzn", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.50.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali1ef2d131237", MAC:"fa:0d:2a:ff:a9:2c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 
17:42:40.003122 containerd[2016]: 2024-11-12 17:42:39.983 [INFO][5018] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="a52cc68d2973db5ab35a375df9629afd037b73aadd3e68d47e02000c5866db6c" Namespace="calico-system" Pod="calico-kube-controllers-847b7f5879-lngzn" WorkloadEndpoint="ip--172--31--27--255-k8s-calico--kube--controllers--847b7f5879--lngzn-eth0" Nov 12 17:42:40.080407 containerd[2016]: time="2024-11-12T17:42:40.080006163Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 17:42:40.080407 containerd[2016]: time="2024-11-12T17:42:40.080137959Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 17:42:40.080407 containerd[2016]: time="2024-11-12T17:42:40.080207703Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 17:42:40.085912 containerd[2016]: time="2024-11-12T17:42:40.083321487Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 17:42:40.089697 kubelet[3383]: I1112 17:42:40.089605 3383 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-rk7cj" podStartSLOduration=35.089532651 podStartE2EDuration="35.089532651s" podCreationTimestamp="2024-11-12 17:42:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 17:42:40.035075847 +0000 UTC m=+46.737128249" watchObservedRunningTime="2024-11-12 17:42:40.089532651 +0000 UTC m=+46.791585089" Nov 12 17:42:40.176237 systemd[1]: Started cri-containerd-a52cc68d2973db5ab35a375df9629afd037b73aadd3e68d47e02000c5866db6c.scope - libcontainer container a52cc68d2973db5ab35a375df9629afd037b73aadd3e68d47e02000c5866db6c. 
Nov 12 17:42:40.317777 containerd[2016]: time="2024-11-12T17:42:40.317699992Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-847b7f5879-lngzn,Uid:26ead776-ee2c-425e-9c0d-03fa419f3738,Namespace:calico-system,Attempt:1,} returns sandbox id \"a52cc68d2973db5ab35a375df9629afd037b73aadd3e68d47e02000c5866db6c\"" Nov 12 17:42:40.529873 containerd[2016]: time="2024-11-12T17:42:40.529492781Z" level=info msg="StopPodSandbox for \"d2b4ae93efedfba6e1c6f7ad52171dbcba8333e1a1f28be0068eb50275abc6cb\"" Nov 12 17:42:40.530389 containerd[2016]: time="2024-11-12T17:42:40.530193713Z" level=info msg="StopPodSandbox for \"dbdf92ef5b200287faf9c4dade1b952d57c6647a6d6585e8ccfd63bb1b5160e4\"" Nov 12 17:42:40.535300 containerd[2016]: time="2024-11-12T17:42:40.534393413Z" level=info msg="StopPodSandbox for \"959315d43b7620f67eee0aa39a2fe8e5a633e8a327c8a0e5e002f3907ee1b18e\"" Nov 12 17:42:40.773290 systemd-networkd[1919]: cali8ff7367f0aa: Gained IPv6LL Nov 12 17:42:40.856635 containerd[2016]: 2024-11-12 17:42:40.720 [INFO][5129] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="959315d43b7620f67eee0aa39a2fe8e5a633e8a327c8a0e5e002f3907ee1b18e" Nov 12 17:42:40.856635 containerd[2016]: 2024-11-12 17:42:40.720 [INFO][5129] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="959315d43b7620f67eee0aa39a2fe8e5a633e8a327c8a0e5e002f3907ee1b18e" iface="eth0" netns="/var/run/netns/cni-bb02bac1-94d3-a3f3-4820-a47917cdcc57" Nov 12 17:42:40.856635 containerd[2016]: 2024-11-12 17:42:40.721 [INFO][5129] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="959315d43b7620f67eee0aa39a2fe8e5a633e8a327c8a0e5e002f3907ee1b18e" iface="eth0" netns="/var/run/netns/cni-bb02bac1-94d3-a3f3-4820-a47917cdcc57" Nov 12 17:42:40.856635 containerd[2016]: 2024-11-12 17:42:40.721 [INFO][5129] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="959315d43b7620f67eee0aa39a2fe8e5a633e8a327c8a0e5e002f3907ee1b18e" iface="eth0" netns="/var/run/netns/cni-bb02bac1-94d3-a3f3-4820-a47917cdcc57" Nov 12 17:42:40.856635 containerd[2016]: 2024-11-12 17:42:40.721 [INFO][5129] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="959315d43b7620f67eee0aa39a2fe8e5a633e8a327c8a0e5e002f3907ee1b18e" Nov 12 17:42:40.856635 containerd[2016]: 2024-11-12 17:42:40.721 [INFO][5129] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="959315d43b7620f67eee0aa39a2fe8e5a633e8a327c8a0e5e002f3907ee1b18e" Nov 12 17:42:40.856635 containerd[2016]: 2024-11-12 17:42:40.826 [INFO][5155] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="959315d43b7620f67eee0aa39a2fe8e5a633e8a327c8a0e5e002f3907ee1b18e" HandleID="k8s-pod-network.959315d43b7620f67eee0aa39a2fe8e5a633e8a327c8a0e5e002f3907ee1b18e" Workload="ip--172--31--27--255-k8s-coredns--76f75df574--mh7mm-eth0" Nov 12 17:42:40.856635 containerd[2016]: 2024-11-12 17:42:40.828 [INFO][5155] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 17:42:40.856635 containerd[2016]: 2024-11-12 17:42:40.828 [INFO][5155] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 17:42:40.856635 containerd[2016]: 2024-11-12 17:42:40.845 [WARNING][5155] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="959315d43b7620f67eee0aa39a2fe8e5a633e8a327c8a0e5e002f3907ee1b18e" HandleID="k8s-pod-network.959315d43b7620f67eee0aa39a2fe8e5a633e8a327c8a0e5e002f3907ee1b18e" Workload="ip--172--31--27--255-k8s-coredns--76f75df574--mh7mm-eth0" Nov 12 17:42:40.856635 containerd[2016]: 2024-11-12 17:42:40.845 [INFO][5155] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="959315d43b7620f67eee0aa39a2fe8e5a633e8a327c8a0e5e002f3907ee1b18e" HandleID="k8s-pod-network.959315d43b7620f67eee0aa39a2fe8e5a633e8a327c8a0e5e002f3907ee1b18e" Workload="ip--172--31--27--255-k8s-coredns--76f75df574--mh7mm-eth0" Nov 12 17:42:40.856635 containerd[2016]: 2024-11-12 17:42:40.848 [INFO][5155] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 17:42:40.856635 containerd[2016]: 2024-11-12 17:42:40.853 [INFO][5129] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="959315d43b7620f67eee0aa39a2fe8e5a633e8a327c8a0e5e002f3907ee1b18e" Nov 12 17:42:40.865910 containerd[2016]: time="2024-11-12T17:42:40.860981023Z" level=info msg="TearDown network for sandbox \"959315d43b7620f67eee0aa39a2fe8e5a633e8a327c8a0e5e002f3907ee1b18e\" successfully" Nov 12 17:42:40.865910 containerd[2016]: time="2024-11-12T17:42:40.861050659Z" level=info msg="StopPodSandbox for \"959315d43b7620f67eee0aa39a2fe8e5a633e8a327c8a0e5e002f3907ee1b18e\" returns successfully" Nov 12 17:42:40.865497 systemd[1]: run-netns-cni\x2dbb02bac1\x2d94d3\x2da3f3\x2d4820\x2da47917cdcc57.mount: Deactivated successfully. 
Nov 12 17:42:40.867889 containerd[2016]: time="2024-11-12T17:42:40.866678107Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-mh7mm,Uid:86515ae8-4b89-4d0b-9abb-d58cd726eec7,Namespace:kube-system,Attempt:1,}" Nov 12 17:42:40.896064 containerd[2016]: 2024-11-12 17:42:40.710 [INFO][5137] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="dbdf92ef5b200287faf9c4dade1b952d57c6647a6d6585e8ccfd63bb1b5160e4" Nov 12 17:42:40.896064 containerd[2016]: 2024-11-12 17:42:40.711 [INFO][5137] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="dbdf92ef5b200287faf9c4dade1b952d57c6647a6d6585e8ccfd63bb1b5160e4" iface="eth0" netns="/var/run/netns/cni-8c25e027-5144-1b8c-950c-d07d181648e8" Nov 12 17:42:40.896064 containerd[2016]: 2024-11-12 17:42:40.712 [INFO][5137] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="dbdf92ef5b200287faf9c4dade1b952d57c6647a6d6585e8ccfd63bb1b5160e4" iface="eth0" netns="/var/run/netns/cni-8c25e027-5144-1b8c-950c-d07d181648e8" Nov 12 17:42:40.896064 containerd[2016]: 2024-11-12 17:42:40.713 [INFO][5137] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="dbdf92ef5b200287faf9c4dade1b952d57c6647a6d6585e8ccfd63bb1b5160e4" iface="eth0" netns="/var/run/netns/cni-8c25e027-5144-1b8c-950c-d07d181648e8" Nov 12 17:42:40.896064 containerd[2016]: 2024-11-12 17:42:40.713 [INFO][5137] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="dbdf92ef5b200287faf9c4dade1b952d57c6647a6d6585e8ccfd63bb1b5160e4" Nov 12 17:42:40.896064 containerd[2016]: 2024-11-12 17:42:40.714 [INFO][5137] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="dbdf92ef5b200287faf9c4dade1b952d57c6647a6d6585e8ccfd63bb1b5160e4" Nov 12 17:42:40.896064 containerd[2016]: 2024-11-12 17:42:40.836 [INFO][5154] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="dbdf92ef5b200287faf9c4dade1b952d57c6647a6d6585e8ccfd63bb1b5160e4" HandleID="k8s-pod-network.dbdf92ef5b200287faf9c4dade1b952d57c6647a6d6585e8ccfd63bb1b5160e4" Workload="ip--172--31--27--255-k8s-csi--node--driver--gjlz6-eth0" Nov 12 17:42:40.896064 containerd[2016]: 2024-11-12 17:42:40.840 [INFO][5154] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 17:42:40.896064 containerd[2016]: 2024-11-12 17:42:40.848 [INFO][5154] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 17:42:40.896064 containerd[2016]: 2024-11-12 17:42:40.874 [WARNING][5154] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="dbdf92ef5b200287faf9c4dade1b952d57c6647a6d6585e8ccfd63bb1b5160e4" HandleID="k8s-pod-network.dbdf92ef5b200287faf9c4dade1b952d57c6647a6d6585e8ccfd63bb1b5160e4" Workload="ip--172--31--27--255-k8s-csi--node--driver--gjlz6-eth0" Nov 12 17:42:40.896064 containerd[2016]: 2024-11-12 17:42:40.874 [INFO][5154] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="dbdf92ef5b200287faf9c4dade1b952d57c6647a6d6585e8ccfd63bb1b5160e4" HandleID="k8s-pod-network.dbdf92ef5b200287faf9c4dade1b952d57c6647a6d6585e8ccfd63bb1b5160e4" Workload="ip--172--31--27--255-k8s-csi--node--driver--gjlz6-eth0" Nov 12 17:42:40.896064 containerd[2016]: 2024-11-12 17:42:40.878 [INFO][5154] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 17:42:40.896064 containerd[2016]: 2024-11-12 17:42:40.887 [INFO][5137] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="dbdf92ef5b200287faf9c4dade1b952d57c6647a6d6585e8ccfd63bb1b5160e4" Nov 12 17:42:40.899894 containerd[2016]: time="2024-11-12T17:42:40.897742603Z" level=info msg="TearDown network for sandbox \"dbdf92ef5b200287faf9c4dade1b952d57c6647a6d6585e8ccfd63bb1b5160e4\" successfully" Nov 12 17:42:40.899894 containerd[2016]: time="2024-11-12T17:42:40.897810451Z" level=info msg="StopPodSandbox for \"dbdf92ef5b200287faf9c4dade1b952d57c6647a6d6585e8ccfd63bb1b5160e4\" returns successfully" Nov 12 17:42:40.904003 containerd[2016]: time="2024-11-12T17:42:40.900489991Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gjlz6,Uid:135a017b-df90-4603-90c6-655608f495ae,Namespace:calico-system,Attempt:1,}" Nov 12 17:42:40.920291 systemd[1]: run-netns-cni\x2d8c25e027\x2d5144\x2d1b8c\x2d950c\x2dd07d181648e8.mount: Deactivated successfully. 
Nov 12 17:42:40.939970 containerd[2016]: 2024-11-12 17:42:40.751 [INFO][5141] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d2b4ae93efedfba6e1c6f7ad52171dbcba8333e1a1f28be0068eb50275abc6cb" Nov 12 17:42:40.939970 containerd[2016]: 2024-11-12 17:42:40.751 [INFO][5141] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d2b4ae93efedfba6e1c6f7ad52171dbcba8333e1a1f28be0068eb50275abc6cb" iface="eth0" netns="/var/run/netns/cni-467c4cba-4189-7e8f-3cb7-677e25ab5a65" Nov 12 17:42:40.939970 containerd[2016]: 2024-11-12 17:42:40.752 [INFO][5141] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d2b4ae93efedfba6e1c6f7ad52171dbcba8333e1a1f28be0068eb50275abc6cb" iface="eth0" netns="/var/run/netns/cni-467c4cba-4189-7e8f-3cb7-677e25ab5a65" Nov 12 17:42:40.939970 containerd[2016]: 2024-11-12 17:42:40.754 [INFO][5141] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="d2b4ae93efedfba6e1c6f7ad52171dbcba8333e1a1f28be0068eb50275abc6cb" iface="eth0" netns="/var/run/netns/cni-467c4cba-4189-7e8f-3cb7-677e25ab5a65" Nov 12 17:42:40.939970 containerd[2016]: 2024-11-12 17:42:40.755 [INFO][5141] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d2b4ae93efedfba6e1c6f7ad52171dbcba8333e1a1f28be0068eb50275abc6cb" Nov 12 17:42:40.939970 containerd[2016]: 2024-11-12 17:42:40.755 [INFO][5141] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d2b4ae93efedfba6e1c6f7ad52171dbcba8333e1a1f28be0068eb50275abc6cb" Nov 12 17:42:40.939970 containerd[2016]: 2024-11-12 17:42:40.845 [INFO][5162] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d2b4ae93efedfba6e1c6f7ad52171dbcba8333e1a1f28be0068eb50275abc6cb" HandleID="k8s-pod-network.d2b4ae93efedfba6e1c6f7ad52171dbcba8333e1a1f28be0068eb50275abc6cb" Workload="ip--172--31--27--255-k8s-calico--apiserver--5bf9fb5585--tpnkb-eth0" Nov 12 17:42:40.939970 containerd[2016]: 2024-11-12 17:42:40.845 
[INFO][5162] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 17:42:40.939970 containerd[2016]: 2024-11-12 17:42:40.879 [INFO][5162] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 17:42:40.939970 containerd[2016]: 2024-11-12 17:42:40.922 [WARNING][5162] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d2b4ae93efedfba6e1c6f7ad52171dbcba8333e1a1f28be0068eb50275abc6cb" HandleID="k8s-pod-network.d2b4ae93efedfba6e1c6f7ad52171dbcba8333e1a1f28be0068eb50275abc6cb" Workload="ip--172--31--27--255-k8s-calico--apiserver--5bf9fb5585--tpnkb-eth0" Nov 12 17:42:40.939970 containerd[2016]: 2024-11-12 17:42:40.922 [INFO][5162] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d2b4ae93efedfba6e1c6f7ad52171dbcba8333e1a1f28be0068eb50275abc6cb" HandleID="k8s-pod-network.d2b4ae93efedfba6e1c6f7ad52171dbcba8333e1a1f28be0068eb50275abc6cb" Workload="ip--172--31--27--255-k8s-calico--apiserver--5bf9fb5585--tpnkb-eth0" Nov 12 17:42:40.939970 containerd[2016]: 2024-11-12 17:42:40.926 [INFO][5162] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 17:42:40.939970 containerd[2016]: 2024-11-12 17:42:40.931 [INFO][5141] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="d2b4ae93efedfba6e1c6f7ad52171dbcba8333e1a1f28be0068eb50275abc6cb" Nov 12 17:42:40.965166 containerd[2016]: time="2024-11-12T17:42:40.962150371Z" level=info msg="TearDown network for sandbox \"d2b4ae93efedfba6e1c6f7ad52171dbcba8333e1a1f28be0068eb50275abc6cb\" successfully" Nov 12 17:42:40.965166 containerd[2016]: time="2024-11-12T17:42:40.962211103Z" level=info msg="StopPodSandbox for \"d2b4ae93efedfba6e1c6f7ad52171dbcba8333e1a1f28be0068eb50275abc6cb\" returns successfully" Nov 12 17:42:40.965166 containerd[2016]: time="2024-11-12T17:42:40.964225579Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5bf9fb5585-tpnkb,Uid:ee626534-e22b-45d6-8fe0-7a3b6b222819,Namespace:calico-apiserver,Attempt:1,}" Nov 12 17:42:40.967572 systemd[1]: run-netns-cni\x2d467c4cba\x2d4189\x2d7e8f\x2d3cb7\x2d677e25ab5a65.mount: Deactivated successfully. Nov 12 17:42:41.385798 systemd-networkd[1919]: cali014fa38eb28: Link UP Nov 12 17:42:41.388388 systemd-networkd[1919]: cali014fa38eb28: Gained carrier Nov 12 17:42:41.428554 containerd[2016]: 2024-11-12 17:42:41.095 [INFO][5173] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--27--255-k8s-coredns--76f75df574--mh7mm-eth0 coredns-76f75df574- kube-system 86515ae8-4b89-4d0b-9abb-d58cd726eec7 881 0 2024-11-12 17:42:05 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-27-255 coredns-76f75df574-mh7mm eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali014fa38eb28 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="a5b4d2ac99771f095b8b33e65429537059a66d02342789e169a30958d329062c" Namespace="kube-system" Pod="coredns-76f75df574-mh7mm" WorkloadEndpoint="ip--172--31--27--255-k8s-coredns--76f75df574--mh7mm-" Nov 12 17:42:41.428554 containerd[2016]: 
2024-11-12 17:42:41.096 [INFO][5173] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="a5b4d2ac99771f095b8b33e65429537059a66d02342789e169a30958d329062c" Namespace="kube-system" Pod="coredns-76f75df574-mh7mm" WorkloadEndpoint="ip--172--31--27--255-k8s-coredns--76f75df574--mh7mm-eth0" Nov 12 17:42:41.428554 containerd[2016]: 2024-11-12 17:42:41.250 [INFO][5210] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a5b4d2ac99771f095b8b33e65429537059a66d02342789e169a30958d329062c" HandleID="k8s-pod-network.a5b4d2ac99771f095b8b33e65429537059a66d02342789e169a30958d329062c" Workload="ip--172--31--27--255-k8s-coredns--76f75df574--mh7mm-eth0" Nov 12 17:42:41.428554 containerd[2016]: 2024-11-12 17:42:41.296 [INFO][5210] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a5b4d2ac99771f095b8b33e65429537059a66d02342789e169a30958d329062c" HandleID="k8s-pod-network.a5b4d2ac99771f095b8b33e65429537059a66d02342789e169a30958d329062c" Workload="ip--172--31--27--255-k8s-coredns--76f75df574--mh7mm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002a7a60), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-27-255", "pod":"coredns-76f75df574-mh7mm", "timestamp":"2024-11-12 17:42:41.250930829 +0000 UTC"}, Hostname:"ip-172-31-27-255", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 12 17:42:41.428554 containerd[2016]: 2024-11-12 17:42:41.296 [INFO][5210] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 17:42:41.428554 containerd[2016]: 2024-11-12 17:42:41.296 [INFO][5210] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Nov 12 17:42:41.428554 containerd[2016]: 2024-11-12 17:42:41.296 [INFO][5210] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-27-255' Nov 12 17:42:41.428554 containerd[2016]: 2024-11-12 17:42:41.302 [INFO][5210] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.a5b4d2ac99771f095b8b33e65429537059a66d02342789e169a30958d329062c" host="ip-172-31-27-255" Nov 12 17:42:41.428554 containerd[2016]: 2024-11-12 17:42:41.321 [INFO][5210] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-27-255" Nov 12 17:42:41.428554 containerd[2016]: 2024-11-12 17:42:41.335 [INFO][5210] ipam/ipam.go 489: Trying affinity for 192.168.50.64/26 host="ip-172-31-27-255" Nov 12 17:42:41.428554 containerd[2016]: 2024-11-12 17:42:41.339 [INFO][5210] ipam/ipam.go 155: Attempting to load block cidr=192.168.50.64/26 host="ip-172-31-27-255" Nov 12 17:42:41.428554 containerd[2016]: 2024-11-12 17:42:41.344 [INFO][5210] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.50.64/26 host="ip-172-31-27-255" Nov 12 17:42:41.428554 containerd[2016]: 2024-11-12 17:42:41.344 [INFO][5210] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.50.64/26 handle="k8s-pod-network.a5b4d2ac99771f095b8b33e65429537059a66d02342789e169a30958d329062c" host="ip-172-31-27-255" Nov 12 17:42:41.428554 containerd[2016]: 2024-11-12 17:42:41.347 [INFO][5210] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.a5b4d2ac99771f095b8b33e65429537059a66d02342789e169a30958d329062c Nov 12 17:42:41.428554 containerd[2016]: 2024-11-12 17:42:41.357 [INFO][5210] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.50.64/26 handle="k8s-pod-network.a5b4d2ac99771f095b8b33e65429537059a66d02342789e169a30958d329062c" host="ip-172-31-27-255" Nov 12 17:42:41.428554 containerd[2016]: 2024-11-12 17:42:41.369 [INFO][5210] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.50.68/26] block=192.168.50.64/26 
handle="k8s-pod-network.a5b4d2ac99771f095b8b33e65429537059a66d02342789e169a30958d329062c" host="ip-172-31-27-255" Nov 12 17:42:41.428554 containerd[2016]: 2024-11-12 17:42:41.369 [INFO][5210] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.50.68/26] handle="k8s-pod-network.a5b4d2ac99771f095b8b33e65429537059a66d02342789e169a30958d329062c" host="ip-172-31-27-255" Nov 12 17:42:41.428554 containerd[2016]: 2024-11-12 17:42:41.369 [INFO][5210] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 17:42:41.428554 containerd[2016]: 2024-11-12 17:42:41.370 [INFO][5210] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.50.68/26] IPv6=[] ContainerID="a5b4d2ac99771f095b8b33e65429537059a66d02342789e169a30958d329062c" HandleID="k8s-pod-network.a5b4d2ac99771f095b8b33e65429537059a66d02342789e169a30958d329062c" Workload="ip--172--31--27--255-k8s-coredns--76f75df574--mh7mm-eth0" Nov 12 17:42:41.429776 containerd[2016]: 2024-11-12 17:42:41.374 [INFO][5173] cni-plugin/k8s.go 386: Populated endpoint ContainerID="a5b4d2ac99771f095b8b33e65429537059a66d02342789e169a30958d329062c" Namespace="kube-system" Pod="coredns-76f75df574-mh7mm" WorkloadEndpoint="ip--172--31--27--255-k8s-coredns--76f75df574--mh7mm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--27--255-k8s-coredns--76f75df574--mh7mm-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"86515ae8-4b89-4d0b-9abb-d58cd726eec7", ResourceVersion:"881", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 17, 42, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-27-255", ContainerID:"", Pod:"coredns-76f75df574-mh7mm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.50.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali014fa38eb28", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 17:42:41.429776 containerd[2016]: 2024-11-12 17:42:41.374 [INFO][5173] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.50.68/32] ContainerID="a5b4d2ac99771f095b8b33e65429537059a66d02342789e169a30958d329062c" Namespace="kube-system" Pod="coredns-76f75df574-mh7mm" WorkloadEndpoint="ip--172--31--27--255-k8s-coredns--76f75df574--mh7mm-eth0" Nov 12 17:42:41.429776 containerd[2016]: 2024-11-12 17:42:41.374 [INFO][5173] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali014fa38eb28 ContainerID="a5b4d2ac99771f095b8b33e65429537059a66d02342789e169a30958d329062c" Namespace="kube-system" Pod="coredns-76f75df574-mh7mm" WorkloadEndpoint="ip--172--31--27--255-k8s-coredns--76f75df574--mh7mm-eth0" Nov 12 17:42:41.429776 containerd[2016]: 2024-11-12 17:42:41.389 [INFO][5173] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a5b4d2ac99771f095b8b33e65429537059a66d02342789e169a30958d329062c" Namespace="kube-system" Pod="coredns-76f75df574-mh7mm" 
WorkloadEndpoint="ip--172--31--27--255-k8s-coredns--76f75df574--mh7mm-eth0" Nov 12 17:42:41.429776 containerd[2016]: 2024-11-12 17:42:41.391 [INFO][5173] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="a5b4d2ac99771f095b8b33e65429537059a66d02342789e169a30958d329062c" Namespace="kube-system" Pod="coredns-76f75df574-mh7mm" WorkloadEndpoint="ip--172--31--27--255-k8s-coredns--76f75df574--mh7mm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--27--255-k8s-coredns--76f75df574--mh7mm-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"86515ae8-4b89-4d0b-9abb-d58cd726eec7", ResourceVersion:"881", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 17, 42, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-27-255", ContainerID:"a5b4d2ac99771f095b8b33e65429537059a66d02342789e169a30958d329062c", Pod:"coredns-76f75df574-mh7mm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.50.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali014fa38eb28", MAC:"1a:03:25:d4:f4:f9", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 17:42:41.429776 containerd[2016]: 2024-11-12 17:42:41.419 [INFO][5173] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="a5b4d2ac99771f095b8b33e65429537059a66d02342789e169a30958d329062c" Namespace="kube-system" Pod="coredns-76f75df574-mh7mm" WorkloadEndpoint="ip--172--31--27--255-k8s-coredns--76f75df574--mh7mm-eth0" Nov 12 17:42:41.503498 systemd-networkd[1919]: cali398af73566b: Link UP Nov 12 17:42:41.513546 systemd-networkd[1919]: cali398af73566b: Gained carrier Nov 12 17:42:41.514317 containerd[2016]: time="2024-11-12T17:42:41.509167482Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 17:42:41.514317 containerd[2016]: time="2024-11-12T17:42:41.509271678Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 17:42:41.514317 containerd[2016]: time="2024-11-12T17:42:41.509322438Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 17:42:41.514317 containerd[2016]: time="2024-11-12T17:42:41.509527770Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 17:42:41.581433 systemd[1]: Started cri-containerd-a5b4d2ac99771f095b8b33e65429537059a66d02342789e169a30958d329062c.scope - libcontainer container a5b4d2ac99771f095b8b33e65429537059a66d02342789e169a30958d329062c. 
Nov 12 17:42:41.589493 containerd[2016]: 2024-11-12 17:42:41.143 [INFO][5183] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--27--255-k8s-csi--node--driver--gjlz6-eth0 csi-node-driver- calico-system 135a017b-df90-4603-90c6-655608f495ae 880 0 2024-11-12 17:42:13 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:64dd8495dc k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ip-172-31-27-255 csi-node-driver-gjlz6 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali398af73566b [] []}} ContainerID="d6b8ad9187cccc13cc04c22efd7fd22657e1565c18658b0178bba27d38bc7fcd" Namespace="calico-system" Pod="csi-node-driver-gjlz6" WorkloadEndpoint="ip--172--31--27--255-k8s-csi--node--driver--gjlz6-" Nov 12 17:42:41.589493 containerd[2016]: 2024-11-12 17:42:41.143 [INFO][5183] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="d6b8ad9187cccc13cc04c22efd7fd22657e1565c18658b0178bba27d38bc7fcd" Namespace="calico-system" Pod="csi-node-driver-gjlz6" WorkloadEndpoint="ip--172--31--27--255-k8s-csi--node--driver--gjlz6-eth0" Nov 12 17:42:41.589493 containerd[2016]: 2024-11-12 17:42:41.277 [INFO][5214] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d6b8ad9187cccc13cc04c22efd7fd22657e1565c18658b0178bba27d38bc7fcd" HandleID="k8s-pod-network.d6b8ad9187cccc13cc04c22efd7fd22657e1565c18658b0178bba27d38bc7fcd" Workload="ip--172--31--27--255-k8s-csi--node--driver--gjlz6-eth0" Nov 12 17:42:41.589493 containerd[2016]: 2024-11-12 17:42:41.324 [INFO][5214] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d6b8ad9187cccc13cc04c22efd7fd22657e1565c18658b0178bba27d38bc7fcd" 
HandleID="k8s-pod-network.d6b8ad9187cccc13cc04c22efd7fd22657e1565c18658b0178bba27d38bc7fcd" Workload="ip--172--31--27--255-k8s-csi--node--driver--gjlz6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002a5b30), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-27-255", "pod":"csi-node-driver-gjlz6", "timestamp":"2024-11-12 17:42:41.277639253 +0000 UTC"}, Hostname:"ip-172-31-27-255", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 12 17:42:41.589493 containerd[2016]: 2024-11-12 17:42:41.324 [INFO][5214] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 17:42:41.589493 containerd[2016]: 2024-11-12 17:42:41.370 [INFO][5214] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 17:42:41.589493 containerd[2016]: 2024-11-12 17:42:41.370 [INFO][5214] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-27-255' Nov 12 17:42:41.589493 containerd[2016]: 2024-11-12 17:42:41.374 [INFO][5214] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.d6b8ad9187cccc13cc04c22efd7fd22657e1565c18658b0178bba27d38bc7fcd" host="ip-172-31-27-255" Nov 12 17:42:41.589493 containerd[2016]: 2024-11-12 17:42:41.395 [INFO][5214] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-27-255" Nov 12 17:42:41.589493 containerd[2016]: 2024-11-12 17:42:41.407 [INFO][5214] ipam/ipam.go 489: Trying affinity for 192.168.50.64/26 host="ip-172-31-27-255" Nov 12 17:42:41.589493 containerd[2016]: 2024-11-12 17:42:41.418 [INFO][5214] ipam/ipam.go 155: Attempting to load block cidr=192.168.50.64/26 host="ip-172-31-27-255" Nov 12 17:42:41.589493 containerd[2016]: 2024-11-12 17:42:41.433 [INFO][5214] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.50.64/26 host="ip-172-31-27-255" Nov 12 
17:42:41.589493 containerd[2016]: 2024-11-12 17:42:41.434 [INFO][5214] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.50.64/26 handle="k8s-pod-network.d6b8ad9187cccc13cc04c22efd7fd22657e1565c18658b0178bba27d38bc7fcd" host="ip-172-31-27-255" Nov 12 17:42:41.589493 containerd[2016]: 2024-11-12 17:42:41.442 [INFO][5214] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.d6b8ad9187cccc13cc04c22efd7fd22657e1565c18658b0178bba27d38bc7fcd Nov 12 17:42:41.589493 containerd[2016]: 2024-11-12 17:42:41.452 [INFO][5214] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.50.64/26 handle="k8s-pod-network.d6b8ad9187cccc13cc04c22efd7fd22657e1565c18658b0178bba27d38bc7fcd" host="ip-172-31-27-255" Nov 12 17:42:41.589493 containerd[2016]: 2024-11-12 17:42:41.472 [INFO][5214] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.50.69/26] block=192.168.50.64/26 handle="k8s-pod-network.d6b8ad9187cccc13cc04c22efd7fd22657e1565c18658b0178bba27d38bc7fcd" host="ip-172-31-27-255" Nov 12 17:42:41.589493 containerd[2016]: 2024-11-12 17:42:41.472 [INFO][5214] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.50.69/26] handle="k8s-pod-network.d6b8ad9187cccc13cc04c22efd7fd22657e1565c18658b0178bba27d38bc7fcd" host="ip-172-31-27-255" Nov 12 17:42:41.589493 containerd[2016]: 2024-11-12 17:42:41.472 [INFO][5214] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Nov 12 17:42:41.589493 containerd[2016]: 2024-11-12 17:42:41.472 [INFO][5214] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.50.69/26] IPv6=[] ContainerID="d6b8ad9187cccc13cc04c22efd7fd22657e1565c18658b0178bba27d38bc7fcd" HandleID="k8s-pod-network.d6b8ad9187cccc13cc04c22efd7fd22657e1565c18658b0178bba27d38bc7fcd" Workload="ip--172--31--27--255-k8s-csi--node--driver--gjlz6-eth0" Nov 12 17:42:41.590761 containerd[2016]: 2024-11-12 17:42:41.481 [INFO][5183] cni-plugin/k8s.go 386: Populated endpoint ContainerID="d6b8ad9187cccc13cc04c22efd7fd22657e1565c18658b0178bba27d38bc7fcd" Namespace="calico-system" Pod="csi-node-driver-gjlz6" WorkloadEndpoint="ip--172--31--27--255-k8s-csi--node--driver--gjlz6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--27--255-k8s-csi--node--driver--gjlz6-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"135a017b-df90-4603-90c6-655608f495ae", ResourceVersion:"880", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 17, 42, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"64dd8495dc", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-27-255", ContainerID:"", Pod:"csi-node-driver-gjlz6", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.50.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali398af73566b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 17:42:41.590761 containerd[2016]: 2024-11-12 17:42:41.481 [INFO][5183] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.50.69/32] ContainerID="d6b8ad9187cccc13cc04c22efd7fd22657e1565c18658b0178bba27d38bc7fcd" Namespace="calico-system" Pod="csi-node-driver-gjlz6" WorkloadEndpoint="ip--172--31--27--255-k8s-csi--node--driver--gjlz6-eth0" Nov 12 17:42:41.590761 containerd[2016]: 2024-11-12 17:42:41.482 [INFO][5183] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali398af73566b ContainerID="d6b8ad9187cccc13cc04c22efd7fd22657e1565c18658b0178bba27d38bc7fcd" Namespace="calico-system" Pod="csi-node-driver-gjlz6" WorkloadEndpoint="ip--172--31--27--255-k8s-csi--node--driver--gjlz6-eth0" Nov 12 17:42:41.590761 containerd[2016]: 2024-11-12 17:42:41.522 [INFO][5183] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d6b8ad9187cccc13cc04c22efd7fd22657e1565c18658b0178bba27d38bc7fcd" Namespace="calico-system" Pod="csi-node-driver-gjlz6" WorkloadEndpoint="ip--172--31--27--255-k8s-csi--node--driver--gjlz6-eth0" Nov 12 17:42:41.590761 containerd[2016]: 2024-11-12 17:42:41.530 [INFO][5183] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="d6b8ad9187cccc13cc04c22efd7fd22657e1565c18658b0178bba27d38bc7fcd" Namespace="calico-system" Pod="csi-node-driver-gjlz6" WorkloadEndpoint="ip--172--31--27--255-k8s-csi--node--driver--gjlz6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--27--255-k8s-csi--node--driver--gjlz6-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"135a017b-df90-4603-90c6-655608f495ae", ResourceVersion:"880", 
Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 17, 42, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"64dd8495dc", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-27-255", ContainerID:"d6b8ad9187cccc13cc04c22efd7fd22657e1565c18658b0178bba27d38bc7fcd", Pod:"csi-node-driver-gjlz6", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.50.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali398af73566b", MAC:"92:e0:a8:39:29:78", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 17:42:41.590761 containerd[2016]: 2024-11-12 17:42:41.566 [INFO][5183] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="d6b8ad9187cccc13cc04c22efd7fd22657e1565c18658b0178bba27d38bc7fcd" Namespace="calico-system" Pod="csi-node-driver-gjlz6" WorkloadEndpoint="ip--172--31--27--255-k8s-csi--node--driver--gjlz6-eth0" Nov 12 17:42:41.680026 systemd-networkd[1919]: cali5af3291f75b: Link UP Nov 12 17:42:41.683441 containerd[2016]: time="2024-11-12T17:42:41.683038279Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 17:42:41.683798 systemd-networkd[1919]: cali5af3291f75b: Gained carrier Nov 12 17:42:41.689366 containerd[2016]: time="2024-11-12T17:42:41.684009331Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 17:42:41.689366 containerd[2016]: time="2024-11-12T17:42:41.686629591Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 17:42:41.689366 containerd[2016]: time="2024-11-12T17:42:41.686888539Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 17:42:41.737687 containerd[2016]: 2024-11-12 17:42:41.152 [INFO][5190] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--27--255-k8s-calico--apiserver--5bf9fb5585--tpnkb-eth0 calico-apiserver-5bf9fb5585- calico-apiserver ee626534-e22b-45d6-8fe0-7a3b6b222819 882 0 2024-11-12 17:42:14 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5bf9fb5585 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-27-255 calico-apiserver-5bf9fb5585-tpnkb eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali5af3291f75b [] []}} ContainerID="48016bbf433a8fd605702a41ff3b93dbc76476cab6206e1aca4937246b73a0eb" Namespace="calico-apiserver" Pod="calico-apiserver-5bf9fb5585-tpnkb" WorkloadEndpoint="ip--172--31--27--255-k8s-calico--apiserver--5bf9fb5585--tpnkb-" Nov 12 17:42:41.737687 containerd[2016]: 2024-11-12 17:42:41.152 [INFO][5190] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s 
ContainerID="48016bbf433a8fd605702a41ff3b93dbc76476cab6206e1aca4937246b73a0eb" Namespace="calico-apiserver" Pod="calico-apiserver-5bf9fb5585-tpnkb" WorkloadEndpoint="ip--172--31--27--255-k8s-calico--apiserver--5bf9fb5585--tpnkb-eth0" Nov 12 17:42:41.737687 containerd[2016]: 2024-11-12 17:42:41.305 [INFO][5215] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="48016bbf433a8fd605702a41ff3b93dbc76476cab6206e1aca4937246b73a0eb" HandleID="k8s-pod-network.48016bbf433a8fd605702a41ff3b93dbc76476cab6206e1aca4937246b73a0eb" Workload="ip--172--31--27--255-k8s-calico--apiserver--5bf9fb5585--tpnkb-eth0" Nov 12 17:42:41.737687 containerd[2016]: 2024-11-12 17:42:41.335 [INFO][5215] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="48016bbf433a8fd605702a41ff3b93dbc76476cab6206e1aca4937246b73a0eb" HandleID="k8s-pod-network.48016bbf433a8fd605702a41ff3b93dbc76476cab6206e1aca4937246b73a0eb" Workload="ip--172--31--27--255-k8s-calico--apiserver--5bf9fb5585--tpnkb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400028d620), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-27-255", "pod":"calico-apiserver-5bf9fb5585-tpnkb", "timestamp":"2024-11-12 17:42:41.305044817 +0000 UTC"}, Hostname:"ip-172-31-27-255", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 12 17:42:41.737687 containerd[2016]: 2024-11-12 17:42:41.335 [INFO][5215] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 17:42:41.737687 containerd[2016]: 2024-11-12 17:42:41.473 [INFO][5215] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Nov 12 17:42:41.737687 containerd[2016]: 2024-11-12 17:42:41.473 [INFO][5215] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-27-255' Nov 12 17:42:41.737687 containerd[2016]: 2024-11-12 17:42:41.479 [INFO][5215] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.48016bbf433a8fd605702a41ff3b93dbc76476cab6206e1aca4937246b73a0eb" host="ip-172-31-27-255" Nov 12 17:42:41.737687 containerd[2016]: 2024-11-12 17:42:41.517 [INFO][5215] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-27-255" Nov 12 17:42:41.737687 containerd[2016]: 2024-11-12 17:42:41.545 [INFO][5215] ipam/ipam.go 489: Trying affinity for 192.168.50.64/26 host="ip-172-31-27-255" Nov 12 17:42:41.737687 containerd[2016]: 2024-11-12 17:42:41.551 [INFO][5215] ipam/ipam.go 155: Attempting to load block cidr=192.168.50.64/26 host="ip-172-31-27-255" Nov 12 17:42:41.737687 containerd[2016]: 2024-11-12 17:42:41.576 [INFO][5215] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.50.64/26 host="ip-172-31-27-255" Nov 12 17:42:41.737687 containerd[2016]: 2024-11-12 17:42:41.577 [INFO][5215] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.50.64/26 handle="k8s-pod-network.48016bbf433a8fd605702a41ff3b93dbc76476cab6206e1aca4937246b73a0eb" host="ip-172-31-27-255" Nov 12 17:42:41.737687 containerd[2016]: 2024-11-12 17:42:41.588 [INFO][5215] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.48016bbf433a8fd605702a41ff3b93dbc76476cab6206e1aca4937246b73a0eb Nov 12 17:42:41.737687 containerd[2016]: 2024-11-12 17:42:41.615 [INFO][5215] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.50.64/26 handle="k8s-pod-network.48016bbf433a8fd605702a41ff3b93dbc76476cab6206e1aca4937246b73a0eb" host="ip-172-31-27-255" Nov 12 17:42:41.737687 containerd[2016]: 2024-11-12 17:42:41.637 [INFO][5215] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.50.70/26] block=192.168.50.64/26 
handle="k8s-pod-network.48016bbf433a8fd605702a41ff3b93dbc76476cab6206e1aca4937246b73a0eb" host="ip-172-31-27-255" Nov 12 17:42:41.737687 containerd[2016]: 2024-11-12 17:42:41.637 [INFO][5215] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.50.70/26] handle="k8s-pod-network.48016bbf433a8fd605702a41ff3b93dbc76476cab6206e1aca4937246b73a0eb" host="ip-172-31-27-255" Nov 12 17:42:41.737687 containerd[2016]: 2024-11-12 17:42:41.637 [INFO][5215] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 17:42:41.737687 containerd[2016]: 2024-11-12 17:42:41.637 [INFO][5215] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.50.70/26] IPv6=[] ContainerID="48016bbf433a8fd605702a41ff3b93dbc76476cab6206e1aca4937246b73a0eb" HandleID="k8s-pod-network.48016bbf433a8fd605702a41ff3b93dbc76476cab6206e1aca4937246b73a0eb" Workload="ip--172--31--27--255-k8s-calico--apiserver--5bf9fb5585--tpnkb-eth0" Nov 12 17:42:41.740898 containerd[2016]: 2024-11-12 17:42:41.644 [INFO][5190] cni-plugin/k8s.go 386: Populated endpoint ContainerID="48016bbf433a8fd605702a41ff3b93dbc76476cab6206e1aca4937246b73a0eb" Namespace="calico-apiserver" Pod="calico-apiserver-5bf9fb5585-tpnkb" WorkloadEndpoint="ip--172--31--27--255-k8s-calico--apiserver--5bf9fb5585--tpnkb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--27--255-k8s-calico--apiserver--5bf9fb5585--tpnkb-eth0", GenerateName:"calico-apiserver-5bf9fb5585-", Namespace:"calico-apiserver", SelfLink:"", UID:"ee626534-e22b-45d6-8fe0-7a3b6b222819", ResourceVersion:"882", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 17, 42, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5bf9fb5585", 
"projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-27-255", ContainerID:"", Pod:"calico-apiserver-5bf9fb5585-tpnkb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.50.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5af3291f75b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 17:42:41.740898 containerd[2016]: 2024-11-12 17:42:41.644 [INFO][5190] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.50.70/32] ContainerID="48016bbf433a8fd605702a41ff3b93dbc76476cab6206e1aca4937246b73a0eb" Namespace="calico-apiserver" Pod="calico-apiserver-5bf9fb5585-tpnkb" WorkloadEndpoint="ip--172--31--27--255-k8s-calico--apiserver--5bf9fb5585--tpnkb-eth0" Nov 12 17:42:41.740898 containerd[2016]: 2024-11-12 17:42:41.644 [INFO][5190] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5af3291f75b ContainerID="48016bbf433a8fd605702a41ff3b93dbc76476cab6206e1aca4937246b73a0eb" Namespace="calico-apiserver" Pod="calico-apiserver-5bf9fb5585-tpnkb" WorkloadEndpoint="ip--172--31--27--255-k8s-calico--apiserver--5bf9fb5585--tpnkb-eth0" Nov 12 17:42:41.740898 containerd[2016]: 2024-11-12 17:42:41.685 [INFO][5190] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="48016bbf433a8fd605702a41ff3b93dbc76476cab6206e1aca4937246b73a0eb" Namespace="calico-apiserver" Pod="calico-apiserver-5bf9fb5585-tpnkb" WorkloadEndpoint="ip--172--31--27--255-k8s-calico--apiserver--5bf9fb5585--tpnkb-eth0" Nov 12 17:42:41.740898 containerd[2016]: 2024-11-12 
17:42:41.695 [INFO][5190] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="48016bbf433a8fd605702a41ff3b93dbc76476cab6206e1aca4937246b73a0eb" Namespace="calico-apiserver" Pod="calico-apiserver-5bf9fb5585-tpnkb" WorkloadEndpoint="ip--172--31--27--255-k8s-calico--apiserver--5bf9fb5585--tpnkb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--27--255-k8s-calico--apiserver--5bf9fb5585--tpnkb-eth0", GenerateName:"calico-apiserver-5bf9fb5585-", Namespace:"calico-apiserver", SelfLink:"", UID:"ee626534-e22b-45d6-8fe0-7a3b6b222819", ResourceVersion:"882", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 17, 42, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5bf9fb5585", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-27-255", ContainerID:"48016bbf433a8fd605702a41ff3b93dbc76476cab6206e1aca4937246b73a0eb", Pod:"calico-apiserver-5bf9fb5585-tpnkb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.50.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5af3291f75b", MAC:"fa:69:ef:cb:51:c9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 17:42:41.740898 containerd[2016]: 2024-11-12 17:42:41.728 [INFO][5190] cni-plugin/k8s.go 
500: Wrote updated endpoint to datastore ContainerID="48016bbf433a8fd605702a41ff3b93dbc76476cab6206e1aca4937246b73a0eb" Namespace="calico-apiserver" Pod="calico-apiserver-5bf9fb5585-tpnkb" WorkloadEndpoint="ip--172--31--27--255-k8s-calico--apiserver--5bf9fb5585--tpnkb-eth0"
Nov 12 17:42:41.766539 systemd[1]: Started cri-containerd-d6b8ad9187cccc13cc04c22efd7fd22657e1565c18658b0178bba27d38bc7fcd.scope - libcontainer container d6b8ad9187cccc13cc04c22efd7fd22657e1565c18658b0178bba27d38bc7fcd.
Nov 12 17:42:41.817598 containerd[2016]: time="2024-11-12T17:42:41.817528676Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-mh7mm,Uid:86515ae8-4b89-4d0b-9abb-d58cd726eec7,Namespace:kube-system,Attempt:1,} returns sandbox id \"a5b4d2ac99771f095b8b33e65429537059a66d02342789e169a30958d329062c\""
Nov 12 17:42:41.826409 containerd[2016]: time="2024-11-12T17:42:41.826329320Z" level=info msg="CreateContainer within sandbox \"a5b4d2ac99771f095b8b33e65429537059a66d02342789e169a30958d329062c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Nov 12 17:42:41.920477 containerd[2016]: time="2024-11-12T17:42:41.920311484Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 12 17:42:41.921530 containerd[2016]: time="2024-11-12T17:42:41.921419492Z" level=info msg="CreateContainer within sandbox \"a5b4d2ac99771f095b8b33e65429537059a66d02342789e169a30958d329062c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"981a0d79101263b27170a295e6e84caaafbab7805858f237c9b11ae535f9229f\""
Nov 12 17:42:41.924050 systemd-networkd[1919]: cali1ef2d131237: Gained IPv6LL
Nov 12 17:42:41.929707 containerd[2016]: time="2024-11-12T17:42:41.928990628Z" level=info msg="StartContainer for \"981a0d79101263b27170a295e6e84caaafbab7805858f237c9b11ae535f9229f\""
Nov 12 17:42:41.930280 containerd[2016]: time="2024-11-12T17:42:41.920422364Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 12 17:42:41.930280 containerd[2016]: time="2024-11-12T17:42:41.927489548Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 12 17:42:41.930280 containerd[2016]: time="2024-11-12T17:42:41.927749612Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 12 17:42:41.989007 containerd[2016]: time="2024-11-12T17:42:41.987408548Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gjlz6,Uid:135a017b-df90-4603-90c6-655608f495ae,Namespace:calico-system,Attempt:1,} returns sandbox id \"d6b8ad9187cccc13cc04c22efd7fd22657e1565c18658b0178bba27d38bc7fcd\""
Nov 12 17:42:42.011572 systemd[1]: Started cri-containerd-48016bbf433a8fd605702a41ff3b93dbc76476cab6206e1aca4937246b73a0eb.scope - libcontainer container 48016bbf433a8fd605702a41ff3b93dbc76476cab6206e1aca4937246b73a0eb.
Nov 12 17:42:42.099138 systemd[1]: Started cri-containerd-981a0d79101263b27170a295e6e84caaafbab7805858f237c9b11ae535f9229f.scope - libcontainer container 981a0d79101263b27170a295e6e84caaafbab7805858f237c9b11ae535f9229f.
Nov 12 17:42:42.183260 containerd[2016]: time="2024-11-12T17:42:42.183190937Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5bf9fb5585-tpnkb,Uid:ee626534-e22b-45d6-8fe0-7a3b6b222819,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"48016bbf433a8fd605702a41ff3b93dbc76476cab6206e1aca4937246b73a0eb\""
Nov 12 17:42:42.212411 containerd[2016]: time="2024-11-12T17:42:42.212237634Z" level=info msg="StartContainer for \"981a0d79101263b27170a295e6e84caaafbab7805858f237c9b11ae535f9229f\" returns successfully"
Nov 12 17:42:42.692625 systemd-networkd[1919]: cali014fa38eb28: Gained IPv6LL
Nov 12 17:42:42.822400 systemd-networkd[1919]: cali398af73566b: Gained IPv6LL
Nov 12 17:42:42.867140 systemd[1]: run-containerd-runc-k8s.io-48016bbf433a8fd605702a41ff3b93dbc76476cab6206e1aca4937246b73a0eb-runc.j6XQMg.mount: Deactivated successfully.
Nov 12 17:42:42.884169 systemd-networkd[1919]: cali5af3291f75b: Gained IPv6LL
Nov 12 17:42:43.089484 kubelet[3383]: I1112 17:42:43.089404 3383 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-mh7mm" podStartSLOduration=38.089331618 podStartE2EDuration="38.089331618s" podCreationTimestamp="2024-11-12 17:42:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 17:42:43.08858097 +0000 UTC m=+49.790633384" watchObservedRunningTime="2024-11-12 17:42:43.089331618 +0000 UTC m=+49.791384020"
Nov 12 17:42:43.337685 containerd[2016]: time="2024-11-12T17:42:43.337596463Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 17:42:43.339883 containerd[2016]: time="2024-11-12T17:42:43.339618559Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.0: active requests=0, bytes read=39277239"
Nov 12 17:42:43.341619 containerd[2016]: time="2024-11-12T17:42:43.341518003Z" level=info msg="ImageCreate event name:\"sha256:b16306569228fc9acacae1651e8a53108048968f1d86448e39eac75a80149d63\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 17:42:43.346018 containerd[2016]: time="2024-11-12T17:42:43.345957835Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:548806adadee2058a3e93296913d1d47f490e9c8115d36abeb074a3f6576ad39\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 17:42:43.348908 containerd[2016]: time="2024-11-12T17:42:43.347657347Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.0\" with image id \"sha256:b16306569228fc9acacae1651e8a53108048968f1d86448e39eac75a80149d63\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:548806adadee2058a3e93296913d1d47f490e9c8115d36abeb074a3f6576ad39\", size \"40646891\" in 5.038134865s"
Nov 12 17:42:43.348908 containerd[2016]: time="2024-11-12T17:42:43.347773147Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.0\" returns image reference \"sha256:b16306569228fc9acacae1651e8a53108048968f1d86448e39eac75a80149d63\""
Nov 12 17:42:43.350360 containerd[2016]: time="2024-11-12T17:42:43.350058883Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.0\""
Nov 12 17:42:43.353002 containerd[2016]: time="2024-11-12T17:42:43.352588147Z" level=info msg="CreateContainer within sandbox \"b38b067222bcbd374f48f7cc924b40b6ca38f7516d4f729864d7cee0b0a60cd4\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Nov 12 17:42:43.381329 containerd[2016]: time="2024-11-12T17:42:43.381241747Z" level=info msg="CreateContainer within sandbox \"b38b067222bcbd374f48f7cc924b40b6ca38f7516d4f729864d7cee0b0a60cd4\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"5811224a801b9aaca2390bab85408bebbc3cecc87c9287887988ad7498ff61d8\""
Nov 12 17:42:43.382944 containerd[2016]: time="2024-11-12T17:42:43.382158943Z" level=info msg="StartContainer for \"5811224a801b9aaca2390bab85408bebbc3cecc87c9287887988ad7498ff61d8\""
Nov 12 17:42:43.456219 systemd[1]: Started cri-containerd-5811224a801b9aaca2390bab85408bebbc3cecc87c9287887988ad7498ff61d8.scope - libcontainer container 5811224a801b9aaca2390bab85408bebbc3cecc87c9287887988ad7498ff61d8.
Nov 12 17:42:43.526764 containerd[2016]: time="2024-11-12T17:42:43.526361276Z" level=info msg="StartContainer for \"5811224a801b9aaca2390bab85408bebbc3cecc87c9287887988ad7498ff61d8\" returns successfully"
Nov 12 17:42:44.092020 kubelet[3383]: I1112 17:42:44.091939 3383 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5bf9fb5585-5crpq" podStartSLOduration=25.05152493 podStartE2EDuration="30.091812463s" podCreationTimestamp="2024-11-12 17:42:14 +0000 UTC" firstStartedPulling="2024-11-12 17:42:38.308054834 +0000 UTC m=+45.010107236" lastFinishedPulling="2024-11-12 17:42:43.348342367 +0000 UTC m=+50.050394769" observedRunningTime="2024-11-12 17:42:44.089769019 +0000 UTC m=+50.791821445" watchObservedRunningTime="2024-11-12 17:42:44.091812463 +0000 UTC m=+50.793864877"
Nov 12 17:42:44.435560 systemd[1]: Started sshd@9-172.31.27.255:22-139.178.89.65:57462.service - OpenSSH per-connection server daemon (139.178.89.65:57462).
Nov 12 17:42:44.652780 sshd[5485]: Accepted publickey for core from 139.178.89.65 port 57462 ssh2: RSA SHA256:1a90X/uDC0ILhfMiA2YbbwEMVTxtJewsfiol0dYezPk
Nov 12 17:42:44.657901 sshd[5485]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 17:42:44.670395 systemd-logind[1996]: New session 10 of user core.
Nov 12 17:42:44.677207 systemd[1]: Started session-10.scope - Session 10 of User core.
Nov 12 17:42:45.017889 sshd[5485]: pam_unix(sshd:session): session closed for user core
Nov 12 17:42:45.026588 systemd-logind[1996]: Session 10 logged out. Waiting for processes to exit.
Nov 12 17:42:45.027292 systemd[1]: sshd@9-172.31.27.255:22-139.178.89.65:57462.service: Deactivated successfully.
Nov 12 17:42:45.039481 systemd[1]: session-10.scope: Deactivated successfully.
Nov 12 17:42:45.061583 systemd-logind[1996]: Removed session 10.
Nov 12 17:42:45.071538 systemd[1]: Started sshd@10-172.31.27.255:22-139.178.89.65:57476.service - OpenSSH per-connection server daemon (139.178.89.65:57476).
Nov 12 17:42:45.082892 kubelet[3383]: I1112 17:42:45.082056 3383 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Nov 12 17:42:45.285759 sshd[5505]: Accepted publickey for core from 139.178.89.65 port 57476 ssh2: RSA SHA256:1a90X/uDC0ILhfMiA2YbbwEMVTxtJewsfiol0dYezPk
Nov 12 17:42:45.292764 sshd[5505]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 17:42:45.315649 systemd-logind[1996]: New session 11 of user core.
Nov 12 17:42:45.347108 systemd[1]: Started session-11.scope - Session 11 of User core.
Nov 12 17:42:45.769700 ntpd[1989]: Listen normally on 10 cali31188fe555b [fe80::ecee:eeff:feee:eeee%7]:123
Nov 12 17:42:45.771814 ntpd[1989]: 12 Nov 17:42:45 ntpd[1989]: Listen normally on 10 cali31188fe555b [fe80::ecee:eeff:feee:eeee%7]:123
Nov 12 17:42:45.771814 ntpd[1989]: 12 Nov 17:42:45 ntpd[1989]: Listen normally on 11 cali8ff7367f0aa [fe80::ecee:eeff:feee:eeee%8]:123
Nov 12 17:42:45.771814 ntpd[1989]: 12 Nov 17:42:45 ntpd[1989]: Listen normally on 12 cali1ef2d131237 [fe80::ecee:eeff:feee:eeee%9]:123
Nov 12 17:42:45.771814 ntpd[1989]: 12 Nov 17:42:45 ntpd[1989]: Listen normally on 13 cali014fa38eb28 [fe80::ecee:eeff:feee:eeee%10]:123
Nov 12 17:42:45.771814 ntpd[1989]: 12 Nov 17:42:45 ntpd[1989]: Listen normally on 14 cali398af73566b [fe80::ecee:eeff:feee:eeee%11]:123
Nov 12 17:42:45.771814 ntpd[1989]: 12 Nov 17:42:45 ntpd[1989]: Listen normally on 15 cali5af3291f75b [fe80::ecee:eeff:feee:eeee%12]:123
Nov 12 17:42:45.769822 ntpd[1989]: Listen normally on 11 cali8ff7367f0aa [fe80::ecee:eeff:feee:eeee%8]:123
Nov 12 17:42:45.769963 ntpd[1989]: Listen normally on 12 cali1ef2d131237 [fe80::ecee:eeff:feee:eeee%9]:123
Nov 12 17:42:45.770041 ntpd[1989]: Listen normally on 13 cali014fa38eb28 [fe80::ecee:eeff:feee:eeee%10]:123
Nov 12 17:42:45.770113 ntpd[1989]: Listen normally on 14 cali398af73566b [fe80::ecee:eeff:feee:eeee%11]:123
Nov 12 17:42:45.770192 ntpd[1989]: Listen normally on 15 cali5af3291f75b [fe80::ecee:eeff:feee:eeee%12]:123
Nov 12 17:42:45.839063 sshd[5505]: pam_unix(sshd:session): session closed for user core
Nov 12 17:42:45.846822 systemd[1]: sshd@10-172.31.27.255:22-139.178.89.65:57476.service: Deactivated successfully.
Nov 12 17:42:45.855098 systemd[1]: session-11.scope: Deactivated successfully.
Nov 12 17:42:45.860925 systemd-logind[1996]: Session 11 logged out. Waiting for processes to exit.
Nov 12 17:42:45.893492 systemd[1]: Started sshd@11-172.31.27.255:22-139.178.89.65:57480.service - OpenSSH per-connection server daemon (139.178.89.65:57480).
Nov 12 17:42:45.899215 systemd-logind[1996]: Removed session 11.
Nov 12 17:42:46.112273 sshd[5538]: Accepted publickey for core from 139.178.89.65 port 57480 ssh2: RSA SHA256:1a90X/uDC0ILhfMiA2YbbwEMVTxtJewsfiol0dYezPk
Nov 12 17:42:46.117704 sshd[5538]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 17:42:46.135196 systemd-logind[1996]: New session 12 of user core.
Nov 12 17:42:46.141156 systemd[1]: Started session-12.scope - Session 12 of User core.
Nov 12 17:42:46.507612 sshd[5538]: pam_unix(sshd:session): session closed for user core
Nov 12 17:42:46.518222 systemd[1]: sshd@11-172.31.27.255:22-139.178.89.65:57480.service: Deactivated successfully.
Nov 12 17:42:46.535784 systemd[1]: session-12.scope: Deactivated successfully.
Nov 12 17:42:46.546677 systemd-logind[1996]: Session 12 logged out. Waiting for processes to exit.
Nov 12 17:42:46.551233 systemd-logind[1996]: Removed session 12.
Nov 12 17:42:47.862024 containerd[2016]: time="2024-11-12T17:42:47.861938474Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 17:42:47.864351 containerd[2016]: time="2024-11-12T17:42:47.864122678Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.0: active requests=0, bytes read=31961371"
Nov 12 17:42:47.866536 containerd[2016]: time="2024-11-12T17:42:47.866385854Z" level=info msg="ImageCreate event name:\"sha256:526584192bc71f907fcb2d2ef01be0c760fee2ab7bb1e05e41ad9ade98a986b3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 17:42:47.873074 containerd[2016]: time="2024-11-12T17:42:47.873014342Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:8242cd7e9b9b505c73292dd812ce1669bca95cacc56d30687f49e6e0b95c5535\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 17:42:47.876276 containerd[2016]: time="2024-11-12T17:42:47.876064142Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.0\" with image id \"sha256:526584192bc71f907fcb2d2ef01be0c760fee2ab7bb1e05e41ad9ade98a986b3\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:8242cd7e9b9b505c73292dd812ce1669bca95cacc56d30687f49e6e0b95c5535\", size \"33330975\" in 4.525932731s"
Nov 12 17:42:47.876276 containerd[2016]: time="2024-11-12T17:42:47.876134666Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.0\" returns image reference \"sha256:526584192bc71f907fcb2d2ef01be0c760fee2ab7bb1e05e41ad9ade98a986b3\""
Nov 12 17:42:47.880992 containerd[2016]: time="2024-11-12T17:42:47.880141418Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.0\""
Nov 12 17:42:47.927180 containerd[2016]: time="2024-11-12T17:42:47.926698718Z" level=info msg="CreateContainer within sandbox \"a52cc68d2973db5ab35a375df9629afd037b73aadd3e68d47e02000c5866db6c\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}"
Nov 12 17:42:47.962938 containerd[2016]: time="2024-11-12T17:42:47.962431130Z" level=info msg="CreateContainer within sandbox \"a52cc68d2973db5ab35a375df9629afd037b73aadd3e68d47e02000c5866db6c\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"4ab4209dd2c119a47c82b23dc3a70deee3634d98a2474bce2d0333a8ba267b19\""
Nov 12 17:42:47.965213 containerd[2016]: time="2024-11-12T17:42:47.965134586Z" level=info msg="StartContainer for \"4ab4209dd2c119a47c82b23dc3a70deee3634d98a2474bce2d0333a8ba267b19\""
Nov 12 17:42:47.981126 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount804087672.mount: Deactivated successfully.
Nov 12 17:42:48.058935 systemd[1]: Started cri-containerd-4ab4209dd2c119a47c82b23dc3a70deee3634d98a2474bce2d0333a8ba267b19.scope - libcontainer container 4ab4209dd2c119a47c82b23dc3a70deee3634d98a2474bce2d0333a8ba267b19.
Nov 12 17:42:48.185624 containerd[2016]: time="2024-11-12T17:42:48.184956683Z" level=info msg="StartContainer for \"4ab4209dd2c119a47c82b23dc3a70deee3634d98a2474bce2d0333a8ba267b19\" returns successfully"
Nov 12 17:42:49.176382 kubelet[3383]: I1112 17:42:49.176067 3383 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-847b7f5879-lngzn" podStartSLOduration=28.619185114 podStartE2EDuration="36.175635504s" podCreationTimestamp="2024-11-12 17:42:13 +0000 UTC" firstStartedPulling="2024-11-12 17:42:40.320566084 +0000 UTC m=+47.022618498" lastFinishedPulling="2024-11-12 17:42:47.877016474 +0000 UTC m=+54.579068888" observedRunningTime="2024-11-12 17:42:49.171910908 +0000 UTC m=+55.873963334" watchObservedRunningTime="2024-11-12 17:42:49.175635504 +0000 UTC m=+55.877687942"
Nov 12 17:42:49.472768 containerd[2016]: time="2024-11-12T17:42:49.472274270Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 17:42:49.474558 containerd[2016]: time="2024-11-12T17:42:49.474485306Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.0: active requests=0, bytes read=7464731"
Nov 12 17:42:49.475388 containerd[2016]: time="2024-11-12T17:42:49.475162286Z" level=info msg="ImageCreate event name:\"sha256:7c36e10791d457ced41235b20bab3cd8f54891dd8f7ddaa627378845532c8737\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 17:42:49.481882 containerd[2016]: time="2024-11-12T17:42:49.481603010Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:034dac492808ec38cd5e596ef6c97d7cd01aaab29a4952c746b27c75ecab8cf5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 17:42:49.483725 containerd[2016]: time="2024-11-12T17:42:49.483441206Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.0\" with image id \"sha256:7c36e10791d457ced41235b20bab3cd8f54891dd8f7ddaa627378845532c8737\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:034dac492808ec38cd5e596ef6c97d7cd01aaab29a4952c746b27c75ecab8cf5\", size \"8834367\" in 1.603224344s"
Nov 12 17:42:49.483725 containerd[2016]: time="2024-11-12T17:42:49.483516566Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.0\" returns image reference \"sha256:7c36e10791d457ced41235b20bab3cd8f54891dd8f7ddaa627378845532c8737\""
Nov 12 17:42:49.485041 containerd[2016]: time="2024-11-12T17:42:49.484635518Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.0\""
Nov 12 17:42:49.488905 containerd[2016]: time="2024-11-12T17:42:49.488720882Z" level=info msg="CreateContainer within sandbox \"d6b8ad9187cccc13cc04c22efd7fd22657e1565c18658b0178bba27d38bc7fcd\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}"
Nov 12 17:42:49.540994 containerd[2016]: time="2024-11-12T17:42:49.540910406Z" level=info msg="CreateContainer within sandbox \"d6b8ad9187cccc13cc04c22efd7fd22657e1565c18658b0178bba27d38bc7fcd\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"019c511ce57f4ea4587949793e9f1655a8a27d0aa6d393c77dc3dec71aaf06bb\""
Nov 12 17:42:49.543446 containerd[2016]: time="2024-11-12T17:42:49.542628818Z" level=info msg="StartContainer for \"019c511ce57f4ea4587949793e9f1655a8a27d0aa6d393c77dc3dec71aaf06bb\""
Nov 12 17:42:49.605148 systemd[1]: Started cri-containerd-019c511ce57f4ea4587949793e9f1655a8a27d0aa6d393c77dc3dec71aaf06bb.scope - libcontainer container 019c511ce57f4ea4587949793e9f1655a8a27d0aa6d393c77dc3dec71aaf06bb.
Nov 12 17:42:49.674237 containerd[2016]: time="2024-11-12T17:42:49.673943007Z" level=info msg="StartContainer for \"019c511ce57f4ea4587949793e9f1655a8a27d0aa6d393c77dc3dec71aaf06bb\" returns successfully"
Nov 12 17:42:49.828926 containerd[2016]: time="2024-11-12T17:42:49.828731487Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 17:42:49.831301 containerd[2016]: time="2024-11-12T17:42:49.830333451Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.0: active requests=0, bytes read=77"
Nov 12 17:42:49.839626 containerd[2016]: time="2024-11-12T17:42:49.839534763Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.0\" with image id \"sha256:b16306569228fc9acacae1651e8a53108048968f1d86448e39eac75a80149d63\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:548806adadee2058a3e93296913d1d47f490e9c8115d36abeb074a3f6576ad39\", size \"40646891\" in 354.821065ms"
Nov 12 17:42:49.839626 containerd[2016]: time="2024-11-12T17:42:49.839612907Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.0\" returns image reference \"sha256:b16306569228fc9acacae1651e8a53108048968f1d86448e39eac75a80149d63\""
Nov 12 17:42:49.841486 containerd[2016]: time="2024-11-12T17:42:49.841214691Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.0\""
Nov 12 17:42:49.844209 containerd[2016]: time="2024-11-12T17:42:49.843820731Z" level=info msg="CreateContainer within sandbox \"48016bbf433a8fd605702a41ff3b93dbc76476cab6206e1aca4937246b73a0eb\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Nov 12 17:42:49.866404 containerd[2016]: time="2024-11-12T17:42:49.866202064Z" level=info msg="CreateContainer within sandbox \"48016bbf433a8fd605702a41ff3b93dbc76476cab6206e1aca4937246b73a0eb\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"0d1d870ca8e7ade36745110924d8699c80b5c649086ebf936a9e4840cdd123b7\""
Nov 12 17:42:49.867363 containerd[2016]: time="2024-11-12T17:42:49.867282328Z" level=info msg="StartContainer for \"0d1d870ca8e7ade36745110924d8699c80b5c649086ebf936a9e4840cdd123b7\""
Nov 12 17:42:49.935234 systemd[1]: Started cri-containerd-0d1d870ca8e7ade36745110924d8699c80b5c649086ebf936a9e4840cdd123b7.scope - libcontainer container 0d1d870ca8e7ade36745110924d8699c80b5c649086ebf936a9e4840cdd123b7.
Nov 12 17:42:50.027915 containerd[2016]: time="2024-11-12T17:42:50.027745836Z" level=info msg="StartContainer for \"0d1d870ca8e7ade36745110924d8699c80b5c649086ebf936a9e4840cdd123b7\" returns successfully"
Nov 12 17:42:50.188008 kubelet[3383]: I1112 17:42:50.183022 3383 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5bf9fb5585-tpnkb" podStartSLOduration=28.535883755 podStartE2EDuration="36.182955433s" podCreationTimestamp="2024-11-12 17:42:14 +0000 UTC" firstStartedPulling="2024-11-12 17:42:42.193115393 +0000 UTC m=+48.895167795" lastFinishedPulling="2024-11-12 17:42:49.840187059 +0000 UTC m=+56.542239473" observedRunningTime="2024-11-12 17:42:50.175408429 +0000 UTC m=+56.877460831" watchObservedRunningTime="2024-11-12 17:42:50.182955433 +0000 UTC m=+56.885007835"
Nov 12 17:42:51.151306 kubelet[3383]: I1112 17:42:51.150135 3383 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Nov 12 17:42:51.470301 containerd[2016]: time="2024-11-12T17:42:51.469919596Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 17:42:51.476608 containerd[2016]: time="2024-11-12T17:42:51.476518420Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.0: active requests=0, bytes read=9883360"
Nov 12 17:42:51.489325 containerd[2016]: time="2024-11-12T17:42:51.488707168Z" level=info msg="ImageCreate event name:\"sha256:fe02b0a9952e3e3b3828f30f55de14ed8db1a2c781e5563c5c70e2a748e28486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 17:42:51.500670 containerd[2016]: time="2024-11-12T17:42:51.500590252Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:69153d7038238f84185e52b4a84e11c5cf5af716ef8613fb0a475ea311dca0cb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 17:42:51.505018 containerd[2016]: time="2024-11-12T17:42:51.504766852Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.0\" with image id \"sha256:fe02b0a9952e3e3b3828f30f55de14ed8db1a2c781e5563c5c70e2a748e28486\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:69153d7038238f84185e52b4a84e11c5cf5af716ef8613fb0a475ea311dca0cb\", size \"11252948\" in 1.663488069s"
Nov 12 17:42:51.505018 containerd[2016]: time="2024-11-12T17:42:51.504862096Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.0\" returns image reference \"sha256:fe02b0a9952e3e3b3828f30f55de14ed8db1a2c781e5563c5c70e2a748e28486\""
Nov 12 17:42:51.511823 containerd[2016]: time="2024-11-12T17:42:51.511513648Z" level=info msg="CreateContainer within sandbox \"d6b8ad9187cccc13cc04c22efd7fd22657e1565c18658b0178bba27d38bc7fcd\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Nov 12 17:42:51.555538 containerd[2016]: time="2024-11-12T17:42:51.554971708Z" level=info msg="CreateContainer within sandbox \"d6b8ad9187cccc13cc04c22efd7fd22657e1565c18658b0178bba27d38bc7fcd\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"c7a31225e093d862e58e3220c6699a23a7e267fe794dffc12d1e3f697b6135cb\""
Nov 12 17:42:51.561875 containerd[2016]: time="2024-11-12T17:42:51.557953120Z" level=info msg="StartContainer for \"c7a31225e093d862e58e3220c6699a23a7e267fe794dffc12d1e3f697b6135cb\""
Nov 12 17:42:51.586958 systemd[1]: Started sshd@12-172.31.27.255:22-139.178.89.65:41212.service - OpenSSH per-connection server daemon (139.178.89.65:41212).
Nov 12 17:42:51.685548 systemd[1]: Started cri-containerd-c7a31225e093d862e58e3220c6699a23a7e267fe794dffc12d1e3f697b6135cb.scope - libcontainer container c7a31225e093d862e58e3220c6699a23a7e267fe794dffc12d1e3f697b6135cb.
Nov 12 17:42:51.762818 containerd[2016]: time="2024-11-12T17:42:51.762605885Z" level=info msg="StartContainer for \"c7a31225e093d862e58e3220c6699a23a7e267fe794dffc12d1e3f697b6135cb\" returns successfully"
Nov 12 17:42:51.834170 sshd[5702]: Accepted publickey for core from 139.178.89.65 port 41212 ssh2: RSA SHA256:1a90X/uDC0ILhfMiA2YbbwEMVTxtJewsfiol0dYezPk
Nov 12 17:42:51.840138 sshd[5702]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 17:42:51.854635 systemd-logind[1996]: New session 13 of user core.
Nov 12 17:42:51.864162 systemd[1]: Started session-13.scope - Session 13 of User core.
Nov 12 17:42:52.200380 kubelet[3383]: I1112 17:42:52.200294 3383 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-gjlz6" podStartSLOduration=29.688919515 podStartE2EDuration="39.200229783s" podCreationTimestamp="2024-11-12 17:42:13 +0000 UTC" firstStartedPulling="2024-11-12 17:42:41.993868016 +0000 UTC m=+48.695920406" lastFinishedPulling="2024-11-12 17:42:51.505178272 +0000 UTC m=+58.207230674" observedRunningTime="2024-11-12 17:42:52.196614615 +0000 UTC m=+58.898667029" watchObservedRunningTime="2024-11-12 17:42:52.200229783 +0000 UTC m=+58.902282185"
Nov 12 17:42:52.260420 sshd[5702]: pam_unix(sshd:session): session closed for user core
Nov 12 17:42:52.270018 systemd-logind[1996]: Session 13 logged out. Waiting for processes to exit.
Nov 12 17:42:52.272549 systemd[1]: sshd@12-172.31.27.255:22-139.178.89.65:41212.service: Deactivated successfully. Nov 12 17:42:52.283872 systemd[1]: session-13.scope: Deactivated successfully. Nov 12 17:42:52.287482 systemd-logind[1996]: Removed session 13. Nov 12 17:42:52.771758 kubelet[3383]: I1112 17:42:52.771504 3383 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Nov 12 17:42:52.771758 kubelet[3383]: I1112 17:42:52.771580 3383 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Nov 12 17:42:53.543535 containerd[2016]: time="2024-11-12T17:42:53.543168654Z" level=info msg="StopPodSandbox for \"240550aaca81ab33d2661250ad64821c0b81d8e1446993b52695d49bfec372e4\"" Nov 12 17:42:53.677424 containerd[2016]: 2024-11-12 17:42:53.616 [WARNING][5767] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="240550aaca81ab33d2661250ad64821c0b81d8e1446993b52695d49bfec372e4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--27--255-k8s-calico--apiserver--5bf9fb5585--5crpq-eth0", GenerateName:"calico-apiserver-5bf9fb5585-", Namespace:"calico-apiserver", SelfLink:"", UID:"835e0a4e-90ef-4bd2-addb-1fda5f27cb7a", ResourceVersion:"921", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 17, 42, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5bf9fb5585", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-27-255", ContainerID:"b38b067222bcbd374f48f7cc924b40b6ca38f7516d4f729864d7cee0b0a60cd4", Pod:"calico-apiserver-5bf9fb5585-5crpq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.50.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali31188fe555b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 17:42:53.677424 containerd[2016]: 2024-11-12 17:42:53.616 [INFO][5767] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="240550aaca81ab33d2661250ad64821c0b81d8e1446993b52695d49bfec372e4" Nov 12 17:42:53.677424 containerd[2016]: 2024-11-12 17:42:53.616 [INFO][5767] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="240550aaca81ab33d2661250ad64821c0b81d8e1446993b52695d49bfec372e4" iface="eth0" netns="" Nov 12 17:42:53.677424 containerd[2016]: 2024-11-12 17:42:53.616 [INFO][5767] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="240550aaca81ab33d2661250ad64821c0b81d8e1446993b52695d49bfec372e4" Nov 12 17:42:53.677424 containerd[2016]: 2024-11-12 17:42:53.616 [INFO][5767] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="240550aaca81ab33d2661250ad64821c0b81d8e1446993b52695d49bfec372e4" Nov 12 17:42:53.677424 containerd[2016]: 2024-11-12 17:42:53.656 [INFO][5775] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="240550aaca81ab33d2661250ad64821c0b81d8e1446993b52695d49bfec372e4" HandleID="k8s-pod-network.240550aaca81ab33d2661250ad64821c0b81d8e1446993b52695d49bfec372e4" Workload="ip--172--31--27--255-k8s-calico--apiserver--5bf9fb5585--5crpq-eth0" Nov 12 17:42:53.677424 containerd[2016]: 2024-11-12 17:42:53.656 [INFO][5775] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 17:42:53.677424 containerd[2016]: 2024-11-12 17:42:53.656 [INFO][5775] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 17:42:53.677424 containerd[2016]: 2024-11-12 17:42:53.668 [WARNING][5775] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="240550aaca81ab33d2661250ad64821c0b81d8e1446993b52695d49bfec372e4" HandleID="k8s-pod-network.240550aaca81ab33d2661250ad64821c0b81d8e1446993b52695d49bfec372e4" Workload="ip--172--31--27--255-k8s-calico--apiserver--5bf9fb5585--5crpq-eth0" Nov 12 17:42:53.677424 containerd[2016]: 2024-11-12 17:42:53.668 [INFO][5775] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="240550aaca81ab33d2661250ad64821c0b81d8e1446993b52695d49bfec372e4" HandleID="k8s-pod-network.240550aaca81ab33d2661250ad64821c0b81d8e1446993b52695d49bfec372e4" Workload="ip--172--31--27--255-k8s-calico--apiserver--5bf9fb5585--5crpq-eth0" Nov 12 17:42:53.677424 containerd[2016]: 2024-11-12 17:42:53.671 [INFO][5775] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 17:42:53.677424 containerd[2016]: 2024-11-12 17:42:53.673 [INFO][5767] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="240550aaca81ab33d2661250ad64821c0b81d8e1446993b52695d49bfec372e4" Nov 12 17:42:53.677424 containerd[2016]: time="2024-11-12T17:42:53.677219515Z" level=info msg="TearDown network for sandbox \"240550aaca81ab33d2661250ad64821c0b81d8e1446993b52695d49bfec372e4\" successfully" Nov 12 17:42:53.677424 containerd[2016]: time="2024-11-12T17:42:53.677257843Z" level=info msg="StopPodSandbox for \"240550aaca81ab33d2661250ad64821c0b81d8e1446993b52695d49bfec372e4\" returns successfully" Nov 12 17:42:53.680409 containerd[2016]: time="2024-11-12T17:42:53.679551691Z" level=info msg="RemovePodSandbox for \"240550aaca81ab33d2661250ad64821c0b81d8e1446993b52695d49bfec372e4\"" Nov 12 17:42:53.680409 containerd[2016]: time="2024-11-12T17:42:53.679617307Z" level=info msg="Forcibly stopping sandbox \"240550aaca81ab33d2661250ad64821c0b81d8e1446993b52695d49bfec372e4\"" Nov 12 17:42:53.831792 containerd[2016]: 2024-11-12 17:42:53.755 [WARNING][5793] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="240550aaca81ab33d2661250ad64821c0b81d8e1446993b52695d49bfec372e4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--27--255-k8s-calico--apiserver--5bf9fb5585--5crpq-eth0", GenerateName:"calico-apiserver-5bf9fb5585-", Namespace:"calico-apiserver", SelfLink:"", UID:"835e0a4e-90ef-4bd2-addb-1fda5f27cb7a", ResourceVersion:"921", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 17, 42, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5bf9fb5585", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-27-255", ContainerID:"b38b067222bcbd374f48f7cc924b40b6ca38f7516d4f729864d7cee0b0a60cd4", Pod:"calico-apiserver-5bf9fb5585-5crpq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.50.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali31188fe555b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 17:42:53.831792 containerd[2016]: 2024-11-12 17:42:53.757 [INFO][5793] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="240550aaca81ab33d2661250ad64821c0b81d8e1446993b52695d49bfec372e4" Nov 12 17:42:53.831792 containerd[2016]: 2024-11-12 17:42:53.757 [INFO][5793] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="240550aaca81ab33d2661250ad64821c0b81d8e1446993b52695d49bfec372e4" iface="eth0" netns="" Nov 12 17:42:53.831792 containerd[2016]: 2024-11-12 17:42:53.757 [INFO][5793] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="240550aaca81ab33d2661250ad64821c0b81d8e1446993b52695d49bfec372e4" Nov 12 17:42:53.831792 containerd[2016]: 2024-11-12 17:42:53.757 [INFO][5793] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="240550aaca81ab33d2661250ad64821c0b81d8e1446993b52695d49bfec372e4" Nov 12 17:42:53.831792 containerd[2016]: 2024-11-12 17:42:53.802 [INFO][5799] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="240550aaca81ab33d2661250ad64821c0b81d8e1446993b52695d49bfec372e4" HandleID="k8s-pod-network.240550aaca81ab33d2661250ad64821c0b81d8e1446993b52695d49bfec372e4" Workload="ip--172--31--27--255-k8s-calico--apiserver--5bf9fb5585--5crpq-eth0" Nov 12 17:42:53.831792 containerd[2016]: 2024-11-12 17:42:53.803 [INFO][5799] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 17:42:53.831792 containerd[2016]: 2024-11-12 17:42:53.803 [INFO][5799] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 17:42:53.831792 containerd[2016]: 2024-11-12 17:42:53.821 [WARNING][5799] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="240550aaca81ab33d2661250ad64821c0b81d8e1446993b52695d49bfec372e4" HandleID="k8s-pod-network.240550aaca81ab33d2661250ad64821c0b81d8e1446993b52695d49bfec372e4" Workload="ip--172--31--27--255-k8s-calico--apiserver--5bf9fb5585--5crpq-eth0" Nov 12 17:42:53.831792 containerd[2016]: 2024-11-12 17:42:53.821 [INFO][5799] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="240550aaca81ab33d2661250ad64821c0b81d8e1446993b52695d49bfec372e4" HandleID="k8s-pod-network.240550aaca81ab33d2661250ad64821c0b81d8e1446993b52695d49bfec372e4" Workload="ip--172--31--27--255-k8s-calico--apiserver--5bf9fb5585--5crpq-eth0" Nov 12 17:42:53.831792 containerd[2016]: 2024-11-12 17:42:53.824 [INFO][5799] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 17:42:53.831792 containerd[2016]: 2024-11-12 17:42:53.828 [INFO][5793] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="240550aaca81ab33d2661250ad64821c0b81d8e1446993b52695d49bfec372e4" Nov 12 17:42:53.831792 containerd[2016]: time="2024-11-12T17:42:53.831516703Z" level=info msg="TearDown network for sandbox \"240550aaca81ab33d2661250ad64821c0b81d8e1446993b52695d49bfec372e4\" successfully" Nov 12 17:42:53.836743 containerd[2016]: time="2024-11-12T17:42:53.836657035Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"240550aaca81ab33d2661250ad64821c0b81d8e1446993b52695d49bfec372e4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Nov 12 17:42:53.837351 containerd[2016]: time="2024-11-12T17:42:53.837201835Z" level=info msg="RemovePodSandbox \"240550aaca81ab33d2661250ad64821c0b81d8e1446993b52695d49bfec372e4\" returns successfully" Nov 12 17:42:53.838389 containerd[2016]: time="2024-11-12T17:42:53.838231135Z" level=info msg="StopPodSandbox for \"d2b4ae93efedfba6e1c6f7ad52171dbcba8333e1a1f28be0068eb50275abc6cb\"" Nov 12 17:42:53.987659 containerd[2016]: 2024-11-12 17:42:53.909 [WARNING][5817] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="d2b4ae93efedfba6e1c6f7ad52171dbcba8333e1a1f28be0068eb50275abc6cb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--27--255-k8s-calico--apiserver--5bf9fb5585--tpnkb-eth0", GenerateName:"calico-apiserver-5bf9fb5585-", Namespace:"calico-apiserver", SelfLink:"", UID:"ee626534-e22b-45d6-8fe0-7a3b6b222819", ResourceVersion:"976", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 17, 42, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5bf9fb5585", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-27-255", ContainerID:"48016bbf433a8fd605702a41ff3b93dbc76476cab6206e1aca4937246b73a0eb", Pod:"calico-apiserver-5bf9fb5585-tpnkb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.50.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5af3291f75b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 17:42:53.987659 containerd[2016]: 2024-11-12 17:42:53.910 [INFO][5817] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d2b4ae93efedfba6e1c6f7ad52171dbcba8333e1a1f28be0068eb50275abc6cb" Nov 12 17:42:53.987659 containerd[2016]: 2024-11-12 17:42:53.910 [INFO][5817] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d2b4ae93efedfba6e1c6f7ad52171dbcba8333e1a1f28be0068eb50275abc6cb" iface="eth0" netns="" Nov 12 17:42:53.987659 containerd[2016]: 2024-11-12 17:42:53.910 [INFO][5817] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d2b4ae93efedfba6e1c6f7ad52171dbcba8333e1a1f28be0068eb50275abc6cb" Nov 12 17:42:53.987659 containerd[2016]: 2024-11-12 17:42:53.910 [INFO][5817] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d2b4ae93efedfba6e1c6f7ad52171dbcba8333e1a1f28be0068eb50275abc6cb" Nov 12 17:42:53.987659 containerd[2016]: 2024-11-12 17:42:53.964 [INFO][5823] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d2b4ae93efedfba6e1c6f7ad52171dbcba8333e1a1f28be0068eb50275abc6cb" HandleID="k8s-pod-network.d2b4ae93efedfba6e1c6f7ad52171dbcba8333e1a1f28be0068eb50275abc6cb" Workload="ip--172--31--27--255-k8s-calico--apiserver--5bf9fb5585--tpnkb-eth0" Nov 12 17:42:53.987659 containerd[2016]: 2024-11-12 17:42:53.965 [INFO][5823] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 17:42:53.987659 containerd[2016]: 2024-11-12 17:42:53.965 [INFO][5823] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 17:42:53.987659 containerd[2016]: 2024-11-12 17:42:53.980 [WARNING][5823] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d2b4ae93efedfba6e1c6f7ad52171dbcba8333e1a1f28be0068eb50275abc6cb" HandleID="k8s-pod-network.d2b4ae93efedfba6e1c6f7ad52171dbcba8333e1a1f28be0068eb50275abc6cb" Workload="ip--172--31--27--255-k8s-calico--apiserver--5bf9fb5585--tpnkb-eth0" Nov 12 17:42:53.987659 containerd[2016]: 2024-11-12 17:42:53.980 [INFO][5823] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d2b4ae93efedfba6e1c6f7ad52171dbcba8333e1a1f28be0068eb50275abc6cb" HandleID="k8s-pod-network.d2b4ae93efedfba6e1c6f7ad52171dbcba8333e1a1f28be0068eb50275abc6cb" Workload="ip--172--31--27--255-k8s-calico--apiserver--5bf9fb5585--tpnkb-eth0" Nov 12 17:42:53.987659 containerd[2016]: 2024-11-12 17:42:53.982 [INFO][5823] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 17:42:53.987659 containerd[2016]: 2024-11-12 17:42:53.985 [INFO][5817] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="d2b4ae93efedfba6e1c6f7ad52171dbcba8333e1a1f28be0068eb50275abc6cb" Nov 12 17:42:53.988536 containerd[2016]: time="2024-11-12T17:42:53.987716312Z" level=info msg="TearDown network for sandbox \"d2b4ae93efedfba6e1c6f7ad52171dbcba8333e1a1f28be0068eb50275abc6cb\" successfully" Nov 12 17:42:53.988536 containerd[2016]: time="2024-11-12T17:42:53.987754556Z" level=info msg="StopPodSandbox for \"d2b4ae93efedfba6e1c6f7ad52171dbcba8333e1a1f28be0068eb50275abc6cb\" returns successfully" Nov 12 17:42:53.988980 containerd[2016]: time="2024-11-12T17:42:53.988786448Z" level=info msg="RemovePodSandbox for \"d2b4ae93efedfba6e1c6f7ad52171dbcba8333e1a1f28be0068eb50275abc6cb\"" Nov 12 17:42:53.989177 containerd[2016]: time="2024-11-12T17:42:53.988981172Z" level=info msg="Forcibly stopping sandbox \"d2b4ae93efedfba6e1c6f7ad52171dbcba8333e1a1f28be0068eb50275abc6cb\"" Nov 12 17:42:54.154934 containerd[2016]: 2024-11-12 17:42:54.051 [WARNING][5841] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d2b4ae93efedfba6e1c6f7ad52171dbcba8333e1a1f28be0068eb50275abc6cb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--27--255-k8s-calico--apiserver--5bf9fb5585--tpnkb-eth0", GenerateName:"calico-apiserver-5bf9fb5585-", Namespace:"calico-apiserver", SelfLink:"", UID:"ee626534-e22b-45d6-8fe0-7a3b6b222819", ResourceVersion:"976", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 17, 42, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5bf9fb5585", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-27-255", ContainerID:"48016bbf433a8fd605702a41ff3b93dbc76476cab6206e1aca4937246b73a0eb", Pod:"calico-apiserver-5bf9fb5585-tpnkb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.50.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5af3291f75b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 17:42:54.154934 containerd[2016]: 2024-11-12 17:42:54.051 [INFO][5841] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d2b4ae93efedfba6e1c6f7ad52171dbcba8333e1a1f28be0068eb50275abc6cb" Nov 12 17:42:54.154934 containerd[2016]: 2024-11-12 17:42:54.051 [INFO][5841] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="d2b4ae93efedfba6e1c6f7ad52171dbcba8333e1a1f28be0068eb50275abc6cb" iface="eth0" netns="" Nov 12 17:42:54.154934 containerd[2016]: 2024-11-12 17:42:54.051 [INFO][5841] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d2b4ae93efedfba6e1c6f7ad52171dbcba8333e1a1f28be0068eb50275abc6cb" Nov 12 17:42:54.154934 containerd[2016]: 2024-11-12 17:42:54.052 [INFO][5841] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d2b4ae93efedfba6e1c6f7ad52171dbcba8333e1a1f28be0068eb50275abc6cb" Nov 12 17:42:54.154934 containerd[2016]: 2024-11-12 17:42:54.104 [INFO][5847] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d2b4ae93efedfba6e1c6f7ad52171dbcba8333e1a1f28be0068eb50275abc6cb" HandleID="k8s-pod-network.d2b4ae93efedfba6e1c6f7ad52171dbcba8333e1a1f28be0068eb50275abc6cb" Workload="ip--172--31--27--255-k8s-calico--apiserver--5bf9fb5585--tpnkb-eth0" Nov 12 17:42:54.154934 containerd[2016]: 2024-11-12 17:42:54.104 [INFO][5847] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 17:42:54.154934 containerd[2016]: 2024-11-12 17:42:54.105 [INFO][5847] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 17:42:54.154934 containerd[2016]: 2024-11-12 17:42:54.134 [WARNING][5847] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d2b4ae93efedfba6e1c6f7ad52171dbcba8333e1a1f28be0068eb50275abc6cb" HandleID="k8s-pod-network.d2b4ae93efedfba6e1c6f7ad52171dbcba8333e1a1f28be0068eb50275abc6cb" Workload="ip--172--31--27--255-k8s-calico--apiserver--5bf9fb5585--tpnkb-eth0" Nov 12 17:42:54.154934 containerd[2016]: 2024-11-12 17:42:54.134 [INFO][5847] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d2b4ae93efedfba6e1c6f7ad52171dbcba8333e1a1f28be0068eb50275abc6cb" HandleID="k8s-pod-network.d2b4ae93efedfba6e1c6f7ad52171dbcba8333e1a1f28be0068eb50275abc6cb" Workload="ip--172--31--27--255-k8s-calico--apiserver--5bf9fb5585--tpnkb-eth0" Nov 12 17:42:54.154934 containerd[2016]: 2024-11-12 17:42:54.136 [INFO][5847] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 17:42:54.154934 containerd[2016]: 2024-11-12 17:42:54.144 [INFO][5841] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="d2b4ae93efedfba6e1c6f7ad52171dbcba8333e1a1f28be0068eb50275abc6cb" Nov 12 17:42:54.158527 containerd[2016]: time="2024-11-12T17:42:54.154930997Z" level=info msg="TearDown network for sandbox \"d2b4ae93efedfba6e1c6f7ad52171dbcba8333e1a1f28be0068eb50275abc6cb\" successfully" Nov 12 17:42:54.167079 containerd[2016]: time="2024-11-12T17:42:54.167020073Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d2b4ae93efedfba6e1c6f7ad52171dbcba8333e1a1f28be0068eb50275abc6cb\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Nov 12 17:42:54.167413 containerd[2016]: time="2024-11-12T17:42:54.167373521Z" level=info msg="RemovePodSandbox \"d2b4ae93efedfba6e1c6f7ad52171dbcba8333e1a1f28be0068eb50275abc6cb\" returns successfully" Nov 12 17:42:54.169590 containerd[2016]: time="2024-11-12T17:42:54.169539005Z" level=info msg="StopPodSandbox for \"b641d98c7b0b13f81f54e0ed02d37ec25427733ad581171499ab0b10c88736d6\"" Nov 12 17:42:54.300464 containerd[2016]: 2024-11-12 17:42:54.240 [WARNING][5865] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="b641d98c7b0b13f81f54e0ed02d37ec25427733ad581171499ab0b10c88736d6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--27--255-k8s-coredns--76f75df574--rk7cj-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"4c15ea44-79b4-4078-bf3c-e9244ce9a92c", ResourceVersion:"869", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 17, 42, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-27-255", ContainerID:"0a73470e67cceec0eb5339f96bc9035f0cff333ede507b1d508a5b0a01f0645e", Pod:"coredns-76f75df574-rk7cj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.50.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8ff7367f0aa", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 17:42:54.300464 containerd[2016]: 2024-11-12 17:42:54.240 [INFO][5865] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="b641d98c7b0b13f81f54e0ed02d37ec25427733ad581171499ab0b10c88736d6" Nov 12 17:42:54.300464 containerd[2016]: 2024-11-12 17:42:54.240 [INFO][5865] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b641d98c7b0b13f81f54e0ed02d37ec25427733ad581171499ab0b10c88736d6" iface="eth0" netns="" Nov 12 17:42:54.300464 containerd[2016]: 2024-11-12 17:42:54.240 [INFO][5865] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b641d98c7b0b13f81f54e0ed02d37ec25427733ad581171499ab0b10c88736d6" Nov 12 17:42:54.300464 containerd[2016]: 2024-11-12 17:42:54.240 [INFO][5865] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b641d98c7b0b13f81f54e0ed02d37ec25427733ad581171499ab0b10c88736d6" Nov 12 17:42:54.300464 containerd[2016]: 2024-11-12 17:42:54.279 [INFO][5871] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b641d98c7b0b13f81f54e0ed02d37ec25427733ad581171499ab0b10c88736d6" HandleID="k8s-pod-network.b641d98c7b0b13f81f54e0ed02d37ec25427733ad581171499ab0b10c88736d6" Workload="ip--172--31--27--255-k8s-coredns--76f75df574--rk7cj-eth0" Nov 12 17:42:54.300464 containerd[2016]: 2024-11-12 17:42:54.280 [INFO][5871] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Nov 12 17:42:54.300464 containerd[2016]: 2024-11-12 17:42:54.280 [INFO][5871] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 17:42:54.300464 containerd[2016]: 2024-11-12 17:42:54.292 [WARNING][5871] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="b641d98c7b0b13f81f54e0ed02d37ec25427733ad581171499ab0b10c88736d6" HandleID="k8s-pod-network.b641d98c7b0b13f81f54e0ed02d37ec25427733ad581171499ab0b10c88736d6" Workload="ip--172--31--27--255-k8s-coredns--76f75df574--rk7cj-eth0" Nov 12 17:42:54.300464 containerd[2016]: 2024-11-12 17:42:54.292 [INFO][5871] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b641d98c7b0b13f81f54e0ed02d37ec25427733ad581171499ab0b10c88736d6" HandleID="k8s-pod-network.b641d98c7b0b13f81f54e0ed02d37ec25427733ad581171499ab0b10c88736d6" Workload="ip--172--31--27--255-k8s-coredns--76f75df574--rk7cj-eth0" Nov 12 17:42:54.300464 containerd[2016]: 2024-11-12 17:42:54.295 [INFO][5871] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 17:42:54.300464 containerd[2016]: 2024-11-12 17:42:54.297 [INFO][5865] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="b641d98c7b0b13f81f54e0ed02d37ec25427733ad581171499ab0b10c88736d6" Nov 12 17:42:54.303282 containerd[2016]: time="2024-11-12T17:42:54.300512322Z" level=info msg="TearDown network for sandbox \"b641d98c7b0b13f81f54e0ed02d37ec25427733ad581171499ab0b10c88736d6\" successfully" Nov 12 17:42:54.303282 containerd[2016]: time="2024-11-12T17:42:54.300550170Z" level=info msg="StopPodSandbox for \"b641d98c7b0b13f81f54e0ed02d37ec25427733ad581171499ab0b10c88736d6\" returns successfully" Nov 12 17:42:54.303282 containerd[2016]: time="2024-11-12T17:42:54.301319730Z" level=info msg="RemovePodSandbox for \"b641d98c7b0b13f81f54e0ed02d37ec25427733ad581171499ab0b10c88736d6\"" Nov 12 17:42:54.303282 containerd[2016]: time="2024-11-12T17:42:54.301373382Z" level=info msg="Forcibly stopping sandbox \"b641d98c7b0b13f81f54e0ed02d37ec25427733ad581171499ab0b10c88736d6\"" Nov 12 17:42:54.459815 containerd[2016]: 2024-11-12 17:42:54.399 [WARNING][5895] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b641d98c7b0b13f81f54e0ed02d37ec25427733ad581171499ab0b10c88736d6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--27--255-k8s-coredns--76f75df574--rk7cj-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"4c15ea44-79b4-4078-bf3c-e9244ce9a92c", ResourceVersion:"869", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 17, 42, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-27-255", ContainerID:"0a73470e67cceec0eb5339f96bc9035f0cff333ede507b1d508a5b0a01f0645e", Pod:"coredns-76f75df574-rk7cj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.50.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8ff7367f0aa", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 17:42:54.459815 containerd[2016]: 2024-11-12 17:42:54.399 [INFO][5895] cni-plugin/k8s.go 608: 
Cleaning up netns ContainerID="b641d98c7b0b13f81f54e0ed02d37ec25427733ad581171499ab0b10c88736d6" Nov 12 17:42:54.459815 containerd[2016]: 2024-11-12 17:42:54.399 [INFO][5895] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b641d98c7b0b13f81f54e0ed02d37ec25427733ad581171499ab0b10c88736d6" iface="eth0" netns="" Nov 12 17:42:54.459815 containerd[2016]: 2024-11-12 17:42:54.399 [INFO][5895] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b641d98c7b0b13f81f54e0ed02d37ec25427733ad581171499ab0b10c88736d6" Nov 12 17:42:54.459815 containerd[2016]: 2024-11-12 17:42:54.399 [INFO][5895] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b641d98c7b0b13f81f54e0ed02d37ec25427733ad581171499ab0b10c88736d6" Nov 12 17:42:54.459815 containerd[2016]: 2024-11-12 17:42:54.439 [INFO][5916] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b641d98c7b0b13f81f54e0ed02d37ec25427733ad581171499ab0b10c88736d6" HandleID="k8s-pod-network.b641d98c7b0b13f81f54e0ed02d37ec25427733ad581171499ab0b10c88736d6" Workload="ip--172--31--27--255-k8s-coredns--76f75df574--rk7cj-eth0" Nov 12 17:42:54.459815 containerd[2016]: 2024-11-12 17:42:54.439 [INFO][5916] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 17:42:54.459815 containerd[2016]: 2024-11-12 17:42:54.439 [INFO][5916] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 17:42:54.459815 containerd[2016]: 2024-11-12 17:42:54.452 [WARNING][5916] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b641d98c7b0b13f81f54e0ed02d37ec25427733ad581171499ab0b10c88736d6" HandleID="k8s-pod-network.b641d98c7b0b13f81f54e0ed02d37ec25427733ad581171499ab0b10c88736d6" Workload="ip--172--31--27--255-k8s-coredns--76f75df574--rk7cj-eth0" Nov 12 17:42:54.459815 containerd[2016]: 2024-11-12 17:42:54.452 [INFO][5916] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b641d98c7b0b13f81f54e0ed02d37ec25427733ad581171499ab0b10c88736d6" HandleID="k8s-pod-network.b641d98c7b0b13f81f54e0ed02d37ec25427733ad581171499ab0b10c88736d6" Workload="ip--172--31--27--255-k8s-coredns--76f75df574--rk7cj-eth0" Nov 12 17:42:54.459815 containerd[2016]: 2024-11-12 17:42:54.454 [INFO][5916] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 17:42:54.459815 containerd[2016]: 2024-11-12 17:42:54.457 [INFO][5895] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="b641d98c7b0b13f81f54e0ed02d37ec25427733ad581171499ab0b10c88736d6" Nov 12 17:42:54.459815 containerd[2016]: time="2024-11-12T17:42:54.459599898Z" level=info msg="TearDown network for sandbox \"b641d98c7b0b13f81f54e0ed02d37ec25427733ad581171499ab0b10c88736d6\" successfully" Nov 12 17:42:54.465373 containerd[2016]: time="2024-11-12T17:42:54.465309918Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b641d98c7b0b13f81f54e0ed02d37ec25427733ad581171499ab0b10c88736d6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Nov 12 17:42:54.465521 containerd[2016]: time="2024-11-12T17:42:54.465481878Z" level=info msg="RemovePodSandbox \"b641d98c7b0b13f81f54e0ed02d37ec25427733ad581171499ab0b10c88736d6\" returns successfully" Nov 12 17:42:54.466660 containerd[2016]: time="2024-11-12T17:42:54.466595058Z" level=info msg="StopPodSandbox for \"dbdf92ef5b200287faf9c4dade1b952d57c6647a6d6585e8ccfd63bb1b5160e4\"" Nov 12 17:42:54.588505 containerd[2016]: 2024-11-12 17:42:54.529 [WARNING][5934] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="dbdf92ef5b200287faf9c4dade1b952d57c6647a6d6585e8ccfd63bb1b5160e4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--27--255-k8s-csi--node--driver--gjlz6-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"135a017b-df90-4603-90c6-655608f495ae", ResourceVersion:"996", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 17, 42, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"64dd8495dc", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-27-255", ContainerID:"d6b8ad9187cccc13cc04c22efd7fd22657e1565c18658b0178bba27d38bc7fcd", Pod:"csi-node-driver-gjlz6", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.50.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali398af73566b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 17:42:54.588505 containerd[2016]: 2024-11-12 17:42:54.530 [INFO][5934] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="dbdf92ef5b200287faf9c4dade1b952d57c6647a6d6585e8ccfd63bb1b5160e4" Nov 12 17:42:54.588505 containerd[2016]: 2024-11-12 17:42:54.530 [INFO][5934] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="dbdf92ef5b200287faf9c4dade1b952d57c6647a6d6585e8ccfd63bb1b5160e4" iface="eth0" netns="" Nov 12 17:42:54.588505 containerd[2016]: 2024-11-12 17:42:54.530 [INFO][5934] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="dbdf92ef5b200287faf9c4dade1b952d57c6647a6d6585e8ccfd63bb1b5160e4" Nov 12 17:42:54.588505 containerd[2016]: 2024-11-12 17:42:54.530 [INFO][5934] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="dbdf92ef5b200287faf9c4dade1b952d57c6647a6d6585e8ccfd63bb1b5160e4" Nov 12 17:42:54.588505 containerd[2016]: 2024-11-12 17:42:54.567 [INFO][5940] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="dbdf92ef5b200287faf9c4dade1b952d57c6647a6d6585e8ccfd63bb1b5160e4" HandleID="k8s-pod-network.dbdf92ef5b200287faf9c4dade1b952d57c6647a6d6585e8ccfd63bb1b5160e4" Workload="ip--172--31--27--255-k8s-csi--node--driver--gjlz6-eth0" Nov 12 17:42:54.588505 containerd[2016]: 2024-11-12 17:42:54.567 [INFO][5940] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 17:42:54.588505 containerd[2016]: 2024-11-12 17:42:54.567 [INFO][5940] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 17:42:54.588505 containerd[2016]: 2024-11-12 17:42:54.581 [WARNING][5940] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="dbdf92ef5b200287faf9c4dade1b952d57c6647a6d6585e8ccfd63bb1b5160e4" HandleID="k8s-pod-network.dbdf92ef5b200287faf9c4dade1b952d57c6647a6d6585e8ccfd63bb1b5160e4" Workload="ip--172--31--27--255-k8s-csi--node--driver--gjlz6-eth0" Nov 12 17:42:54.588505 containerd[2016]: 2024-11-12 17:42:54.581 [INFO][5940] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="dbdf92ef5b200287faf9c4dade1b952d57c6647a6d6585e8ccfd63bb1b5160e4" HandleID="k8s-pod-network.dbdf92ef5b200287faf9c4dade1b952d57c6647a6d6585e8ccfd63bb1b5160e4" Workload="ip--172--31--27--255-k8s-csi--node--driver--gjlz6-eth0" Nov 12 17:42:54.588505 containerd[2016]: 2024-11-12 17:42:54.583 [INFO][5940] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 17:42:54.588505 containerd[2016]: 2024-11-12 17:42:54.586 [INFO][5934] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="dbdf92ef5b200287faf9c4dade1b952d57c6647a6d6585e8ccfd63bb1b5160e4" Nov 12 17:42:54.590908 containerd[2016]: time="2024-11-12T17:42:54.588562267Z" level=info msg="TearDown network for sandbox \"dbdf92ef5b200287faf9c4dade1b952d57c6647a6d6585e8ccfd63bb1b5160e4\" successfully" Nov 12 17:42:54.590908 containerd[2016]: time="2024-11-12T17:42:54.588600307Z" level=info msg="StopPodSandbox for \"dbdf92ef5b200287faf9c4dade1b952d57c6647a6d6585e8ccfd63bb1b5160e4\" returns successfully" Nov 12 17:42:54.590908 containerd[2016]: time="2024-11-12T17:42:54.589336375Z" level=info msg="RemovePodSandbox for \"dbdf92ef5b200287faf9c4dade1b952d57c6647a6d6585e8ccfd63bb1b5160e4\"" Nov 12 17:42:54.590908 containerd[2016]: time="2024-11-12T17:42:54.589383391Z" level=info msg="Forcibly stopping sandbox \"dbdf92ef5b200287faf9c4dade1b952d57c6647a6d6585e8ccfd63bb1b5160e4\"" Nov 12 17:42:54.718678 containerd[2016]: 2024-11-12 17:42:54.658 [WARNING][5958] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="dbdf92ef5b200287faf9c4dade1b952d57c6647a6d6585e8ccfd63bb1b5160e4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--27--255-k8s-csi--node--driver--gjlz6-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"135a017b-df90-4603-90c6-655608f495ae", ResourceVersion:"996", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 17, 42, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"64dd8495dc", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-27-255", ContainerID:"d6b8ad9187cccc13cc04c22efd7fd22657e1565c18658b0178bba27d38bc7fcd", Pod:"csi-node-driver-gjlz6", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.50.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali398af73566b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 17:42:54.718678 containerd[2016]: 2024-11-12 17:42:54.659 [INFO][5958] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="dbdf92ef5b200287faf9c4dade1b952d57c6647a6d6585e8ccfd63bb1b5160e4" Nov 12 17:42:54.718678 containerd[2016]: 2024-11-12 17:42:54.659 [INFO][5958] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="dbdf92ef5b200287faf9c4dade1b952d57c6647a6d6585e8ccfd63bb1b5160e4" iface="eth0" netns="" Nov 12 17:42:54.718678 containerd[2016]: 2024-11-12 17:42:54.659 [INFO][5958] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="dbdf92ef5b200287faf9c4dade1b952d57c6647a6d6585e8ccfd63bb1b5160e4" Nov 12 17:42:54.718678 containerd[2016]: 2024-11-12 17:42:54.659 [INFO][5958] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="dbdf92ef5b200287faf9c4dade1b952d57c6647a6d6585e8ccfd63bb1b5160e4" Nov 12 17:42:54.718678 containerd[2016]: 2024-11-12 17:42:54.695 [INFO][5965] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="dbdf92ef5b200287faf9c4dade1b952d57c6647a6d6585e8ccfd63bb1b5160e4" HandleID="k8s-pod-network.dbdf92ef5b200287faf9c4dade1b952d57c6647a6d6585e8ccfd63bb1b5160e4" Workload="ip--172--31--27--255-k8s-csi--node--driver--gjlz6-eth0" Nov 12 17:42:54.718678 containerd[2016]: 2024-11-12 17:42:54.695 [INFO][5965] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 17:42:54.718678 containerd[2016]: 2024-11-12 17:42:54.695 [INFO][5965] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 17:42:54.718678 containerd[2016]: 2024-11-12 17:42:54.710 [WARNING][5965] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="dbdf92ef5b200287faf9c4dade1b952d57c6647a6d6585e8ccfd63bb1b5160e4" HandleID="k8s-pod-network.dbdf92ef5b200287faf9c4dade1b952d57c6647a6d6585e8ccfd63bb1b5160e4" Workload="ip--172--31--27--255-k8s-csi--node--driver--gjlz6-eth0" Nov 12 17:42:54.718678 containerd[2016]: 2024-11-12 17:42:54.710 [INFO][5965] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="dbdf92ef5b200287faf9c4dade1b952d57c6647a6d6585e8ccfd63bb1b5160e4" HandleID="k8s-pod-network.dbdf92ef5b200287faf9c4dade1b952d57c6647a6d6585e8ccfd63bb1b5160e4" Workload="ip--172--31--27--255-k8s-csi--node--driver--gjlz6-eth0" Nov 12 17:42:54.718678 containerd[2016]: 2024-11-12 17:42:54.712 [INFO][5965] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 17:42:54.718678 containerd[2016]: 2024-11-12 17:42:54.714 [INFO][5958] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="dbdf92ef5b200287faf9c4dade1b952d57c6647a6d6585e8ccfd63bb1b5160e4" Nov 12 17:42:54.718678 containerd[2016]: time="2024-11-12T17:42:54.717656768Z" level=info msg="TearDown network for sandbox \"dbdf92ef5b200287faf9c4dade1b952d57c6647a6d6585e8ccfd63bb1b5160e4\" successfully" Nov 12 17:42:54.722095 containerd[2016]: time="2024-11-12T17:42:54.722017688Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"dbdf92ef5b200287faf9c4dade1b952d57c6647a6d6585e8ccfd63bb1b5160e4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Nov 12 17:42:54.722386 containerd[2016]: time="2024-11-12T17:42:54.722335064Z" level=info msg="RemovePodSandbox \"dbdf92ef5b200287faf9c4dade1b952d57c6647a6d6585e8ccfd63bb1b5160e4\" returns successfully" Nov 12 17:42:54.723266 containerd[2016]: time="2024-11-12T17:42:54.723226460Z" level=info msg="StopPodSandbox for \"959315d43b7620f67eee0aa39a2fe8e5a633e8a327c8a0e5e002f3907ee1b18e\"" Nov 12 17:42:54.871242 containerd[2016]: 2024-11-12 17:42:54.808 [WARNING][5983] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="959315d43b7620f67eee0aa39a2fe8e5a633e8a327c8a0e5e002f3907ee1b18e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--27--255-k8s-coredns--76f75df574--mh7mm-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"86515ae8-4b89-4d0b-9abb-d58cd726eec7", ResourceVersion:"910", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 17, 42, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-27-255", ContainerID:"a5b4d2ac99771f095b8b33e65429537059a66d02342789e169a30958d329062c", Pod:"coredns-76f75df574-mh7mm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.50.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali014fa38eb28", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 17:42:54.871242 containerd[2016]: 2024-11-12 17:42:54.808 [INFO][5983] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="959315d43b7620f67eee0aa39a2fe8e5a633e8a327c8a0e5e002f3907ee1b18e" Nov 12 17:42:54.871242 containerd[2016]: 2024-11-12 17:42:54.808 [INFO][5983] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="959315d43b7620f67eee0aa39a2fe8e5a633e8a327c8a0e5e002f3907ee1b18e" iface="eth0" netns="" Nov 12 17:42:54.871242 containerd[2016]: 2024-11-12 17:42:54.808 [INFO][5983] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="959315d43b7620f67eee0aa39a2fe8e5a633e8a327c8a0e5e002f3907ee1b18e" Nov 12 17:42:54.871242 containerd[2016]: 2024-11-12 17:42:54.808 [INFO][5983] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="959315d43b7620f67eee0aa39a2fe8e5a633e8a327c8a0e5e002f3907ee1b18e" Nov 12 17:42:54.871242 containerd[2016]: 2024-11-12 17:42:54.850 [INFO][5994] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="959315d43b7620f67eee0aa39a2fe8e5a633e8a327c8a0e5e002f3907ee1b18e" HandleID="k8s-pod-network.959315d43b7620f67eee0aa39a2fe8e5a633e8a327c8a0e5e002f3907ee1b18e" Workload="ip--172--31--27--255-k8s-coredns--76f75df574--mh7mm-eth0" Nov 12 17:42:54.871242 containerd[2016]: 2024-11-12 17:42:54.850 [INFO][5994] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Nov 12 17:42:54.871242 containerd[2016]: 2024-11-12 17:42:54.851 [INFO][5994] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 17:42:54.871242 containerd[2016]: 2024-11-12 17:42:54.862 [WARNING][5994] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="959315d43b7620f67eee0aa39a2fe8e5a633e8a327c8a0e5e002f3907ee1b18e" HandleID="k8s-pod-network.959315d43b7620f67eee0aa39a2fe8e5a633e8a327c8a0e5e002f3907ee1b18e" Workload="ip--172--31--27--255-k8s-coredns--76f75df574--mh7mm-eth0" Nov 12 17:42:54.871242 containerd[2016]: 2024-11-12 17:42:54.863 [INFO][5994] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="959315d43b7620f67eee0aa39a2fe8e5a633e8a327c8a0e5e002f3907ee1b18e" HandleID="k8s-pod-network.959315d43b7620f67eee0aa39a2fe8e5a633e8a327c8a0e5e002f3907ee1b18e" Workload="ip--172--31--27--255-k8s-coredns--76f75df574--mh7mm-eth0" Nov 12 17:42:54.871242 containerd[2016]: 2024-11-12 17:42:54.865 [INFO][5994] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 17:42:54.871242 containerd[2016]: 2024-11-12 17:42:54.868 [INFO][5983] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="959315d43b7620f67eee0aa39a2fe8e5a633e8a327c8a0e5e002f3907ee1b18e" Nov 12 17:42:54.871242 containerd[2016]: time="2024-11-12T17:42:54.871024340Z" level=info msg="TearDown network for sandbox \"959315d43b7620f67eee0aa39a2fe8e5a633e8a327c8a0e5e002f3907ee1b18e\" successfully" Nov 12 17:42:54.871242 containerd[2016]: time="2024-11-12T17:42:54.871066616Z" level=info msg="StopPodSandbox for \"959315d43b7620f67eee0aa39a2fe8e5a633e8a327c8a0e5e002f3907ee1b18e\" returns successfully" Nov 12 17:42:54.873380 containerd[2016]: time="2024-11-12T17:42:54.872586020Z" level=info msg="RemovePodSandbox for \"959315d43b7620f67eee0aa39a2fe8e5a633e8a327c8a0e5e002f3907ee1b18e\"" Nov 12 17:42:54.873380 containerd[2016]: time="2024-11-12T17:42:54.872641220Z" level=info msg="Forcibly stopping sandbox \"959315d43b7620f67eee0aa39a2fe8e5a633e8a327c8a0e5e002f3907ee1b18e\"" Nov 12 17:42:55.006312 containerd[2016]: 2024-11-12 17:42:54.937 [WARNING][6013] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="959315d43b7620f67eee0aa39a2fe8e5a633e8a327c8a0e5e002f3907ee1b18e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--27--255-k8s-coredns--76f75df574--mh7mm-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"86515ae8-4b89-4d0b-9abb-d58cd726eec7", ResourceVersion:"910", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 17, 42, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-27-255", ContainerID:"a5b4d2ac99771f095b8b33e65429537059a66d02342789e169a30958d329062c", Pod:"coredns-76f75df574-mh7mm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.50.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali014fa38eb28", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 17:42:55.006312 containerd[2016]: 2024-11-12 17:42:54.937 [INFO][6013] cni-plugin/k8s.go 608: 
Cleaning up netns ContainerID="959315d43b7620f67eee0aa39a2fe8e5a633e8a327c8a0e5e002f3907ee1b18e" Nov 12 17:42:55.006312 containerd[2016]: 2024-11-12 17:42:54.937 [INFO][6013] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="959315d43b7620f67eee0aa39a2fe8e5a633e8a327c8a0e5e002f3907ee1b18e" iface="eth0" netns="" Nov 12 17:42:55.006312 containerd[2016]: 2024-11-12 17:42:54.937 [INFO][6013] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="959315d43b7620f67eee0aa39a2fe8e5a633e8a327c8a0e5e002f3907ee1b18e" Nov 12 17:42:55.006312 containerd[2016]: 2024-11-12 17:42:54.937 [INFO][6013] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="959315d43b7620f67eee0aa39a2fe8e5a633e8a327c8a0e5e002f3907ee1b18e" Nov 12 17:42:55.006312 containerd[2016]: 2024-11-12 17:42:54.983 [INFO][6020] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="959315d43b7620f67eee0aa39a2fe8e5a633e8a327c8a0e5e002f3907ee1b18e" HandleID="k8s-pod-network.959315d43b7620f67eee0aa39a2fe8e5a633e8a327c8a0e5e002f3907ee1b18e" Workload="ip--172--31--27--255-k8s-coredns--76f75df574--mh7mm-eth0" Nov 12 17:42:55.006312 containerd[2016]: 2024-11-12 17:42:54.984 [INFO][6020] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 17:42:55.006312 containerd[2016]: 2024-11-12 17:42:54.984 [INFO][6020] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 17:42:55.006312 containerd[2016]: 2024-11-12 17:42:54.998 [WARNING][6020] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="959315d43b7620f67eee0aa39a2fe8e5a633e8a327c8a0e5e002f3907ee1b18e" HandleID="k8s-pod-network.959315d43b7620f67eee0aa39a2fe8e5a633e8a327c8a0e5e002f3907ee1b18e" Workload="ip--172--31--27--255-k8s-coredns--76f75df574--mh7mm-eth0" Nov 12 17:42:55.006312 containerd[2016]: 2024-11-12 17:42:54.998 [INFO][6020] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="959315d43b7620f67eee0aa39a2fe8e5a633e8a327c8a0e5e002f3907ee1b18e" HandleID="k8s-pod-network.959315d43b7620f67eee0aa39a2fe8e5a633e8a327c8a0e5e002f3907ee1b18e" Workload="ip--172--31--27--255-k8s-coredns--76f75df574--mh7mm-eth0" Nov 12 17:42:55.006312 containerd[2016]: 2024-11-12 17:42:55.000 [INFO][6020] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 17:42:55.006312 containerd[2016]: 2024-11-12 17:42:55.003 [INFO][6013] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="959315d43b7620f67eee0aa39a2fe8e5a633e8a327c8a0e5e002f3907ee1b18e" Nov 12 17:42:55.007243 containerd[2016]: time="2024-11-12T17:42:55.006305549Z" level=info msg="TearDown network for sandbox \"959315d43b7620f67eee0aa39a2fe8e5a633e8a327c8a0e5e002f3907ee1b18e\" successfully" Nov 12 17:42:55.013028 containerd[2016]: time="2024-11-12T17:42:55.012948029Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"959315d43b7620f67eee0aa39a2fe8e5a633e8a327c8a0e5e002f3907ee1b18e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Nov 12 17:42:55.013152 containerd[2016]: time="2024-11-12T17:42:55.013065257Z" level=info msg="RemovePodSandbox \"959315d43b7620f67eee0aa39a2fe8e5a633e8a327c8a0e5e002f3907ee1b18e\" returns successfully" Nov 12 17:42:55.013769 containerd[2016]: time="2024-11-12T17:42:55.013706501Z" level=info msg="StopPodSandbox for \"10d57f4ef10a104ff4a6d15ce8b1a3415dfcf06c2431abcfd5c8d237947b5696\"" Nov 12 17:42:55.145332 containerd[2016]: 2024-11-12 17:42:55.079 [WARNING][6038] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="10d57f4ef10a104ff4a6d15ce8b1a3415dfcf06c2431abcfd5c8d237947b5696" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--27--255-k8s-calico--kube--controllers--847b7f5879--lngzn-eth0", GenerateName:"calico-kube-controllers-847b7f5879-", Namespace:"calico-system", SelfLink:"", UID:"26ead776-ee2c-425e-9c0d-03fa419f3738", ResourceVersion:"961", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 17, 42, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"847b7f5879", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-27-255", ContainerID:"a52cc68d2973db5ab35a375df9629afd037b73aadd3e68d47e02000c5866db6c", Pod:"calico-kube-controllers-847b7f5879-lngzn", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.50.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali1ef2d131237", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 17:42:55.145332 containerd[2016]: 2024-11-12 17:42:55.079 [INFO][6038] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="10d57f4ef10a104ff4a6d15ce8b1a3415dfcf06c2431abcfd5c8d237947b5696" Nov 12 17:42:55.145332 containerd[2016]: 2024-11-12 17:42:55.079 [INFO][6038] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="10d57f4ef10a104ff4a6d15ce8b1a3415dfcf06c2431abcfd5c8d237947b5696" iface="eth0" netns="" Nov 12 17:42:55.145332 containerd[2016]: 2024-11-12 17:42:55.079 [INFO][6038] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="10d57f4ef10a104ff4a6d15ce8b1a3415dfcf06c2431abcfd5c8d237947b5696" Nov 12 17:42:55.145332 containerd[2016]: 2024-11-12 17:42:55.079 [INFO][6038] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="10d57f4ef10a104ff4a6d15ce8b1a3415dfcf06c2431abcfd5c8d237947b5696" Nov 12 17:42:55.145332 containerd[2016]: 2024-11-12 17:42:55.114 [INFO][6044] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="10d57f4ef10a104ff4a6d15ce8b1a3415dfcf06c2431abcfd5c8d237947b5696" HandleID="k8s-pod-network.10d57f4ef10a104ff4a6d15ce8b1a3415dfcf06c2431abcfd5c8d237947b5696" Workload="ip--172--31--27--255-k8s-calico--kube--controllers--847b7f5879--lngzn-eth0" Nov 12 17:42:55.145332 containerd[2016]: 2024-11-12 17:42:55.114 [INFO][6044] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 17:42:55.145332 containerd[2016]: 2024-11-12 17:42:55.114 [INFO][6044] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 17:42:55.145332 containerd[2016]: 2024-11-12 17:42:55.138 [WARNING][6044] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="10d57f4ef10a104ff4a6d15ce8b1a3415dfcf06c2431abcfd5c8d237947b5696" HandleID="k8s-pod-network.10d57f4ef10a104ff4a6d15ce8b1a3415dfcf06c2431abcfd5c8d237947b5696" Workload="ip--172--31--27--255-k8s-calico--kube--controllers--847b7f5879--lngzn-eth0" Nov 12 17:42:55.145332 containerd[2016]: 2024-11-12 17:42:55.138 [INFO][6044] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="10d57f4ef10a104ff4a6d15ce8b1a3415dfcf06c2431abcfd5c8d237947b5696" HandleID="k8s-pod-network.10d57f4ef10a104ff4a6d15ce8b1a3415dfcf06c2431abcfd5c8d237947b5696" Workload="ip--172--31--27--255-k8s-calico--kube--controllers--847b7f5879--lngzn-eth0" Nov 12 17:42:55.145332 containerd[2016]: 2024-11-12 17:42:55.141 [INFO][6044] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 17:42:55.145332 containerd[2016]: 2024-11-12 17:42:55.143 [INFO][6038] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="10d57f4ef10a104ff4a6d15ce8b1a3415dfcf06c2431abcfd5c8d237947b5696" Nov 12 17:42:55.147072 containerd[2016]: time="2024-11-12T17:42:55.145392966Z" level=info msg="TearDown network for sandbox \"10d57f4ef10a104ff4a6d15ce8b1a3415dfcf06c2431abcfd5c8d237947b5696\" successfully" Nov 12 17:42:55.147072 containerd[2016]: time="2024-11-12T17:42:55.145431486Z" level=info msg="StopPodSandbox for \"10d57f4ef10a104ff4a6d15ce8b1a3415dfcf06c2431abcfd5c8d237947b5696\" returns successfully" Nov 12 17:42:55.147072 containerd[2016]: time="2024-11-12T17:42:55.146073342Z" level=info msg="RemovePodSandbox for \"10d57f4ef10a104ff4a6d15ce8b1a3415dfcf06c2431abcfd5c8d237947b5696\"" Nov 12 17:42:55.147072 containerd[2016]: time="2024-11-12T17:42:55.146117958Z" level=info msg="Forcibly stopping sandbox \"10d57f4ef10a104ff4a6d15ce8b1a3415dfcf06c2431abcfd5c8d237947b5696\"" Nov 12 17:42:55.274358 containerd[2016]: 2024-11-12 17:42:55.216 [WARNING][6063] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="10d57f4ef10a104ff4a6d15ce8b1a3415dfcf06c2431abcfd5c8d237947b5696" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--27--255-k8s-calico--kube--controllers--847b7f5879--lngzn-eth0", GenerateName:"calico-kube-controllers-847b7f5879-", Namespace:"calico-system", SelfLink:"", UID:"26ead776-ee2c-425e-9c0d-03fa419f3738", ResourceVersion:"961", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 17, 42, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"847b7f5879", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-27-255", ContainerID:"a52cc68d2973db5ab35a375df9629afd037b73aadd3e68d47e02000c5866db6c", Pod:"calico-kube-controllers-847b7f5879-lngzn", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.50.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali1ef2d131237", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 17:42:55.274358 containerd[2016]: 2024-11-12 17:42:55.216 [INFO][6063] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="10d57f4ef10a104ff4a6d15ce8b1a3415dfcf06c2431abcfd5c8d237947b5696" Nov 12 17:42:55.274358 containerd[2016]: 2024-11-12 17:42:55.216 [INFO][6063] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no 
netns name, ignoring. ContainerID="10d57f4ef10a104ff4a6d15ce8b1a3415dfcf06c2431abcfd5c8d237947b5696" iface="eth0" netns="" Nov 12 17:42:55.274358 containerd[2016]: 2024-11-12 17:42:55.216 [INFO][6063] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="10d57f4ef10a104ff4a6d15ce8b1a3415dfcf06c2431abcfd5c8d237947b5696" Nov 12 17:42:55.274358 containerd[2016]: 2024-11-12 17:42:55.216 [INFO][6063] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="10d57f4ef10a104ff4a6d15ce8b1a3415dfcf06c2431abcfd5c8d237947b5696" Nov 12 17:42:55.274358 containerd[2016]: 2024-11-12 17:42:55.250 [INFO][6069] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="10d57f4ef10a104ff4a6d15ce8b1a3415dfcf06c2431abcfd5c8d237947b5696" HandleID="k8s-pod-network.10d57f4ef10a104ff4a6d15ce8b1a3415dfcf06c2431abcfd5c8d237947b5696" Workload="ip--172--31--27--255-k8s-calico--kube--controllers--847b7f5879--lngzn-eth0" Nov 12 17:42:55.274358 containerd[2016]: 2024-11-12 17:42:55.250 [INFO][6069] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 17:42:55.274358 containerd[2016]: 2024-11-12 17:42:55.250 [INFO][6069] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 17:42:55.274358 containerd[2016]: 2024-11-12 17:42:55.263 [WARNING][6069] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="10d57f4ef10a104ff4a6d15ce8b1a3415dfcf06c2431abcfd5c8d237947b5696" HandleID="k8s-pod-network.10d57f4ef10a104ff4a6d15ce8b1a3415dfcf06c2431abcfd5c8d237947b5696" Workload="ip--172--31--27--255-k8s-calico--kube--controllers--847b7f5879--lngzn-eth0" Nov 12 17:42:55.274358 containerd[2016]: 2024-11-12 17:42:55.263 [INFO][6069] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="10d57f4ef10a104ff4a6d15ce8b1a3415dfcf06c2431abcfd5c8d237947b5696" HandleID="k8s-pod-network.10d57f4ef10a104ff4a6d15ce8b1a3415dfcf06c2431abcfd5c8d237947b5696" Workload="ip--172--31--27--255-k8s-calico--kube--controllers--847b7f5879--lngzn-eth0" Nov 12 17:42:55.274358 containerd[2016]: 2024-11-12 17:42:55.269 [INFO][6069] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 17:42:55.274358 containerd[2016]: 2024-11-12 17:42:55.271 [INFO][6063] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="10d57f4ef10a104ff4a6d15ce8b1a3415dfcf06c2431abcfd5c8d237947b5696" Nov 12 17:42:55.274358 containerd[2016]: time="2024-11-12T17:42:55.274330950Z" level=info msg="TearDown network for sandbox \"10d57f4ef10a104ff4a6d15ce8b1a3415dfcf06c2431abcfd5c8d237947b5696\" successfully" Nov 12 17:42:55.279966 containerd[2016]: time="2024-11-12T17:42:55.279344106Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"10d57f4ef10a104ff4a6d15ce8b1a3415dfcf06c2431abcfd5c8d237947b5696\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 12 17:42:55.279966 containerd[2016]: time="2024-11-12T17:42:55.280053006Z" level=info msg="RemovePodSandbox \"10d57f4ef10a104ff4a6d15ce8b1a3415dfcf06c2431abcfd5c8d237947b5696\" returns successfully" Nov 12 17:42:57.298362 systemd[1]: Started sshd@13-172.31.27.255:22-139.178.89.65:54772.service - OpenSSH per-connection server daemon (139.178.89.65:54772). 
Nov 12 17:42:57.490199 sshd[6076]: Accepted publickey for core from 139.178.89.65 port 54772 ssh2: RSA SHA256:1a90X/uDC0ILhfMiA2YbbwEMVTxtJewsfiol0dYezPk
Nov 12 17:42:57.493633 sshd[6076]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 17:42:57.503474 systemd-logind[1996]: New session 14 of user core.
Nov 12 17:42:57.508134 systemd[1]: Started session-14.scope - Session 14 of User core.
Nov 12 17:42:57.768176 sshd[6076]: pam_unix(sshd:session): session closed for user core
Nov 12 17:42:57.776821 systemd-logind[1996]: Session 14 logged out. Waiting for processes to exit.
Nov 12 17:42:57.778689 systemd[1]: sshd@13-172.31.27.255:22-139.178.89.65:54772.service: Deactivated successfully.
Nov 12 17:42:57.784601 systemd[1]: session-14.scope: Deactivated successfully.
Nov 12 17:42:57.787322 systemd-logind[1996]: Removed session 14.
Nov 12 17:43:02.814581 systemd[1]: Started sshd@14-172.31.27.255:22-139.178.89.65:54782.service - OpenSSH per-connection server daemon (139.178.89.65:54782).
Nov 12 17:43:03.007379 sshd[6090]: Accepted publickey for core from 139.178.89.65 port 54782 ssh2: RSA SHA256:1a90X/uDC0ILhfMiA2YbbwEMVTxtJewsfiol0dYezPk
Nov 12 17:43:03.010125 sshd[6090]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 17:43:03.021304 systemd-logind[1996]: New session 15 of user core.
Nov 12 17:43:03.028293 systemd[1]: Started session-15.scope - Session 15 of User core.
Nov 12 17:43:03.130636 kubelet[3383]: I1112 17:43:03.130496 3383 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Nov 12 17:43:03.380555 sshd[6090]: pam_unix(sshd:session): session closed for user core
Nov 12 17:43:03.391337 systemd[1]: sshd@14-172.31.27.255:22-139.178.89.65:54782.service: Deactivated successfully.
Nov 12 17:43:03.397813 systemd[1]: session-15.scope: Deactivated successfully.
Nov 12 17:43:03.402529 systemd-logind[1996]: Session 15 logged out. Waiting for processes to exit.
Nov 12 17:43:03.406103 systemd-logind[1996]: Removed session 15.
Nov 12 17:43:08.427684 systemd[1]: Started sshd@15-172.31.27.255:22-139.178.89.65:44402.service - OpenSSH per-connection server daemon (139.178.89.65:44402).
Nov 12 17:43:08.632732 sshd[6110]: Accepted publickey for core from 139.178.89.65 port 44402 ssh2: RSA SHA256:1a90X/uDC0ILhfMiA2YbbwEMVTxtJewsfiol0dYezPk
Nov 12 17:43:08.636420 sshd[6110]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 17:43:08.650716 systemd-logind[1996]: New session 16 of user core.
Nov 12 17:43:08.658493 systemd[1]: Started session-16.scope - Session 16 of User core.
Nov 12 17:43:08.968571 sshd[6110]: pam_unix(sshd:session): session closed for user core
Nov 12 17:43:08.980421 systemd[1]: sshd@15-172.31.27.255:22-139.178.89.65:44402.service: Deactivated successfully.
Nov 12 17:43:08.989168 systemd[1]: session-16.scope: Deactivated successfully.
Nov 12 17:43:08.995213 systemd-logind[1996]: Session 16 logged out. Waiting for processes to exit.
Nov 12 17:43:09.023534 systemd[1]: Started sshd@16-172.31.27.255:22-139.178.89.65:44406.service - OpenSSH per-connection server daemon (139.178.89.65:44406).
Nov 12 17:43:09.027269 systemd-logind[1996]: Removed session 16.
Nov 12 17:43:09.212491 sshd[6123]: Accepted publickey for core from 139.178.89.65 port 44406 ssh2: RSA SHA256:1a90X/uDC0ILhfMiA2YbbwEMVTxtJewsfiol0dYezPk
Nov 12 17:43:09.216486 sshd[6123]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 17:43:09.226301 systemd-logind[1996]: New session 17 of user core.
Nov 12 17:43:09.235550 systemd[1]: Started session-17.scope - Session 17 of User core.
Nov 12 17:43:09.857222 sshd[6123]: pam_unix(sshd:session): session closed for user core
Nov 12 17:43:09.871549 systemd-logind[1996]: Session 17 logged out. Waiting for processes to exit.
Nov 12 17:43:09.873220 systemd[1]: sshd@16-172.31.27.255:22-139.178.89.65:44406.service: Deactivated successfully.
Nov 12 17:43:09.883504 systemd[1]: session-17.scope: Deactivated successfully.
Nov 12 17:43:09.913055 systemd[1]: Started sshd@17-172.31.27.255:22-139.178.89.65:44420.service - OpenSSH per-connection server daemon (139.178.89.65:44420).
Nov 12 17:43:09.915178 systemd-logind[1996]: Removed session 17.
Nov 12 17:43:10.106653 sshd[6134]: Accepted publickey for core from 139.178.89.65 port 44420 ssh2: RSA SHA256:1a90X/uDC0ILhfMiA2YbbwEMVTxtJewsfiol0dYezPk
Nov 12 17:43:10.110245 sshd[6134]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 17:43:10.132195 systemd-logind[1996]: New session 18 of user core.
Nov 12 17:43:10.140188 systemd[1]: Started session-18.scope - Session 18 of User core.
Nov 12 17:43:13.962337 sshd[6134]: pam_unix(sshd:session): session closed for user core
Nov 12 17:43:13.978398 systemd[1]: sshd@17-172.31.27.255:22-139.178.89.65:44420.service: Deactivated successfully.
Nov 12 17:43:13.985461 systemd[1]: session-18.scope: Deactivated successfully.
Nov 12 17:43:13.986599 systemd[1]: session-18.scope: Consumed 1.138s CPU time.
Nov 12 17:43:13.988214 systemd-logind[1996]: Session 18 logged out. Waiting for processes to exit.
Nov 12 17:43:14.026232 systemd[1]: Started sshd@18-172.31.27.255:22-139.178.89.65:44436.service - OpenSSH per-connection server daemon (139.178.89.65:44436).
Nov 12 17:43:14.028693 systemd-logind[1996]: Removed session 18.
Nov 12 17:43:14.216397 sshd[6151]: Accepted publickey for core from 139.178.89.65 port 44436 ssh2: RSA SHA256:1a90X/uDC0ILhfMiA2YbbwEMVTxtJewsfiol0dYezPk
Nov 12 17:43:14.219801 sshd[6151]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 17:43:14.230700 systemd-logind[1996]: New session 19 of user core.
Nov 12 17:43:14.236208 systemd[1]: Started session-19.scope - Session 19 of User core.
Nov 12 17:43:14.808774 sshd[6151]: pam_unix(sshd:session): session closed for user core
Nov 12 17:43:14.819253 systemd[1]: sshd@18-172.31.27.255:22-139.178.89.65:44436.service: Deactivated successfully.
Nov 12 17:43:14.822711 systemd[1]: session-19.scope: Deactivated successfully.
Nov 12 17:43:14.825998 systemd-logind[1996]: Session 19 logged out. Waiting for processes to exit.
Nov 12 17:43:14.829721 systemd-logind[1996]: Removed session 19.
Nov 12 17:43:14.849740 systemd[1]: Started sshd@19-172.31.27.255:22-139.178.89.65:44452.service - OpenSSH per-connection server daemon (139.178.89.65:44452).
Nov 12 17:43:15.032990 sshd[6170]: Accepted publickey for core from 139.178.89.65 port 44452 ssh2: RSA SHA256:1a90X/uDC0ILhfMiA2YbbwEMVTxtJewsfiol0dYezPk
Nov 12 17:43:15.035814 sshd[6170]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 17:43:15.044231 systemd-logind[1996]: New session 20 of user core.
Nov 12 17:43:15.055126 systemd[1]: Started session-20.scope - Session 20 of User core.
Nov 12 17:43:15.369474 sshd[6170]: pam_unix(sshd:session): session closed for user core
Nov 12 17:43:15.380919 systemd[1]: sshd@19-172.31.27.255:22-139.178.89.65:44452.service: Deactivated successfully.
Nov 12 17:43:15.390453 systemd[1]: session-20.scope: Deactivated successfully.
Nov 12 17:43:15.401178 systemd-logind[1996]: Session 20 logged out. Waiting for processes to exit.
Nov 12 17:43:15.406802 systemd-logind[1996]: Removed session 20.
Nov 12 17:43:20.412367 systemd[1]: Started sshd@20-172.31.27.255:22-139.178.89.65:53420.service - OpenSSH per-connection server daemon (139.178.89.65:53420).
Nov 12 17:43:20.593510 sshd[6203]: Accepted publickey for core from 139.178.89.65 port 53420 ssh2: RSA SHA256:1a90X/uDC0ILhfMiA2YbbwEMVTxtJewsfiol0dYezPk
Nov 12 17:43:20.596451 sshd[6203]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 17:43:20.604204 systemd-logind[1996]: New session 21 of user core.
Nov 12 17:43:20.611092 systemd[1]: Started session-21.scope - Session 21 of User core.
Nov 12 17:43:20.862017 sshd[6203]: pam_unix(sshd:session): session closed for user core
Nov 12 17:43:20.870164 systemd[1]: sshd@20-172.31.27.255:22-139.178.89.65:53420.service: Deactivated successfully.
Nov 12 17:43:20.875402 systemd[1]: session-21.scope: Deactivated successfully.
Nov 12 17:43:20.877713 systemd-logind[1996]: Session 21 logged out. Waiting for processes to exit.
Nov 12 17:43:20.879807 systemd-logind[1996]: Removed session 21.
Nov 12 17:43:22.918215 kubelet[3383]: I1112 17:43:22.917733 3383 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Nov 12 17:43:25.904580 systemd[1]: Started sshd@21-172.31.27.255:22-139.178.89.65:53434.service - OpenSSH per-connection server daemon (139.178.89.65:53434).
Nov 12 17:43:26.101727 sshd[6242]: Accepted publickey for core from 139.178.89.65 port 53434 ssh2: RSA SHA256:1a90X/uDC0ILhfMiA2YbbwEMVTxtJewsfiol0dYezPk
Nov 12 17:43:26.105158 sshd[6242]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 17:43:26.118208 systemd-logind[1996]: New session 22 of user core.
Nov 12 17:43:26.129161 systemd[1]: Started session-22.scope - Session 22 of User core.
Nov 12 17:43:26.449740 sshd[6242]: pam_unix(sshd:session): session closed for user core
Nov 12 17:43:26.462141 systemd[1]: sshd@21-172.31.27.255:22-139.178.89.65:53434.service: Deactivated successfully.
Nov 12 17:43:26.468302 systemd[1]: session-22.scope: Deactivated successfully.
Nov 12 17:43:26.475031 systemd-logind[1996]: Session 22 logged out. Waiting for processes to exit.
Nov 12 17:43:26.479690 systemd-logind[1996]: Removed session 22.
Nov 12 17:43:31.491384 systemd[1]: Started sshd@22-172.31.27.255:22-139.178.89.65:37202.service - OpenSSH per-connection server daemon (139.178.89.65:37202).
Nov 12 17:43:31.674411 sshd[6257]: Accepted publickey for core from 139.178.89.65 port 37202 ssh2: RSA SHA256:1a90X/uDC0ILhfMiA2YbbwEMVTxtJewsfiol0dYezPk
Nov 12 17:43:31.677226 sshd[6257]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 17:43:31.687372 systemd-logind[1996]: New session 23 of user core.
Nov 12 17:43:31.696245 systemd[1]: Started session-23.scope - Session 23 of User core.
Nov 12 17:43:31.952281 sshd[6257]: pam_unix(sshd:session): session closed for user core
Nov 12 17:43:31.959261 systemd[1]: sshd@22-172.31.27.255:22-139.178.89.65:37202.service: Deactivated successfully.
Nov 12 17:43:31.965024 systemd[1]: session-23.scope: Deactivated successfully.
Nov 12 17:43:31.968347 systemd-logind[1996]: Session 23 logged out. Waiting for processes to exit.
Nov 12 17:43:31.971251 systemd-logind[1996]: Removed session 23.
Nov 12 17:43:36.994374 systemd[1]: Started sshd@23-172.31.27.255:22-139.178.89.65:37216.service - OpenSSH per-connection server daemon (139.178.89.65:37216).
Nov 12 17:43:37.189865 sshd[6272]: Accepted publickey for core from 139.178.89.65 port 37216 ssh2: RSA SHA256:1a90X/uDC0ILhfMiA2YbbwEMVTxtJewsfiol0dYezPk
Nov 12 17:43:37.193627 sshd[6272]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 17:43:37.202828 systemd-logind[1996]: New session 24 of user core.
Nov 12 17:43:37.210136 systemd[1]: Started session-24.scope - Session 24 of User core.
Nov 12 17:43:37.466415 sshd[6272]: pam_unix(sshd:session): session closed for user core
Nov 12 17:43:37.473236 systemd[1]: sshd@23-172.31.27.255:22-139.178.89.65:37216.service: Deactivated successfully.
Nov 12 17:43:37.476436 systemd[1]: session-24.scope: Deactivated successfully.
Nov 12 17:43:37.480027 systemd-logind[1996]: Session 24 logged out. Waiting for processes to exit.
Nov 12 17:43:37.481918 systemd-logind[1996]: Removed session 24.
Nov 12 17:43:42.512717 systemd[1]: Started sshd@24-172.31.27.255:22-139.178.89.65:43046.service - OpenSSH per-connection server daemon (139.178.89.65:43046).
Nov 12 17:43:42.683940 sshd[6285]: Accepted publickey for core from 139.178.89.65 port 43046 ssh2: RSA SHA256:1a90X/uDC0ILhfMiA2YbbwEMVTxtJewsfiol0dYezPk
Nov 12 17:43:42.687544 sshd[6285]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 17:43:42.697223 systemd-logind[1996]: New session 25 of user core.
Nov 12 17:43:42.704233 systemd[1]: Started session-25.scope - Session 25 of User core.
Nov 12 17:43:42.955272 sshd[6285]: pam_unix(sshd:session): session closed for user core
Nov 12 17:43:42.962384 systemd[1]: sshd@24-172.31.27.255:22-139.178.89.65:43046.service: Deactivated successfully.
Nov 12 17:43:42.967075 systemd[1]: session-25.scope: Deactivated successfully.
Nov 12 17:43:42.969591 systemd-logind[1996]: Session 25 logged out. Waiting for processes to exit.
Nov 12 17:43:42.972497 systemd-logind[1996]: Removed session 25.
Nov 12 17:43:47.998514 systemd[1]: Started sshd@25-172.31.27.255:22-139.178.89.65:46736.service - OpenSSH per-connection server daemon (139.178.89.65:46736).
Nov 12 17:43:48.182830 sshd[6320]: Accepted publickey for core from 139.178.89.65 port 46736 ssh2: RSA SHA256:1a90X/uDC0ILhfMiA2YbbwEMVTxtJewsfiol0dYezPk
Nov 12 17:43:48.186298 sshd[6320]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 17:43:48.195165 systemd-logind[1996]: New session 26 of user core.
Nov 12 17:43:48.203139 systemd[1]: Started session-26.scope - Session 26 of User core.
Nov 12 17:43:48.462729 sshd[6320]: pam_unix(sshd:session): session closed for user core
Nov 12 17:43:48.471102 systemd[1]: sshd@25-172.31.27.255:22-139.178.89.65:46736.service: Deactivated successfully.
Nov 12 17:43:48.475173 systemd[1]: session-26.scope: Deactivated successfully.
Nov 12 17:43:48.479318 systemd-logind[1996]: Session 26 logged out. Waiting for processes to exit.
Nov 12 17:43:48.482360 systemd-logind[1996]: Removed session 26.
Nov 12 17:44:01.999679 systemd[1]: cri-containerd-bdff24a08dcbc526e18dc06b93a4599b7199143521ffcc2ace8d149d7a3ba3b5.scope: Deactivated successfully.
Nov 12 17:44:02.000237 systemd[1]: cri-containerd-bdff24a08dcbc526e18dc06b93a4599b7199143521ffcc2ace8d149d7a3ba3b5.scope: Consumed 5.134s CPU time, 22.4M memory peak, 0B memory swap peak.
Nov 12 17:44:02.056592 containerd[2016]: time="2024-11-12T17:44:02.056449846Z" level=info msg="shim disconnected" id=bdff24a08dcbc526e18dc06b93a4599b7199143521ffcc2ace8d149d7a3ba3b5 namespace=k8s.io
Nov 12 17:44:02.056592 containerd[2016]: time="2024-11-12T17:44:02.056572582Z" level=warning msg="cleaning up after shim disconnected" id=bdff24a08dcbc526e18dc06b93a4599b7199143521ffcc2ace8d149d7a3ba3b5 namespace=k8s.io
Nov 12 17:44:02.056592 containerd[2016]: time="2024-11-12T17:44:02.056600050Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Nov 12 17:44:02.065574 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bdff24a08dcbc526e18dc06b93a4599b7199143521ffcc2ace8d149d7a3ba3b5-rootfs.mount: Deactivated successfully.
Nov 12 17:44:02.431103 kubelet[3383]: I1112 17:44:02.429735 3383 scope.go:117] "RemoveContainer" containerID="bdff24a08dcbc526e18dc06b93a4599b7199143521ffcc2ace8d149d7a3ba3b5"
Nov 12 17:44:02.436993 containerd[2016]: time="2024-11-12T17:44:02.436927356Z" level=info msg="CreateContainer within sandbox \"49e07a9300efc4cca3968fba7dd975369a28a2053738b5592c24f3c7f6ea0604\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Nov 12 17:44:02.462424 containerd[2016]: time="2024-11-12T17:44:02.462205392Z" level=info msg="CreateContainer within sandbox \"49e07a9300efc4cca3968fba7dd975369a28a2053738b5592c24f3c7f6ea0604\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"6cb8b351514236793be19e7a1ffa14f2f3970e6b80518d9ab6862210db54d554\""
Nov 12 17:44:02.463592 containerd[2016]: time="2024-11-12T17:44:02.463505520Z" level=info msg="StartContainer for \"6cb8b351514236793be19e7a1ffa14f2f3970e6b80518d9ab6862210db54d554\""
Nov 12 17:44:02.529587 systemd[1]: Started cri-containerd-6cb8b351514236793be19e7a1ffa14f2f3970e6b80518d9ab6862210db54d554.scope - libcontainer container 6cb8b351514236793be19e7a1ffa14f2f3970e6b80518d9ab6862210db54d554.
Nov 12 17:44:02.610934 containerd[2016]: time="2024-11-12T17:44:02.610698553Z" level=info msg="StartContainer for \"6cb8b351514236793be19e7a1ffa14f2f3970e6b80518d9ab6862210db54d554\" returns successfully"
Nov 12 17:44:03.703717 systemd[1]: cri-containerd-c2e09c782e546986cb0438f3922307173374114bf0bc4fdf4940b0ec7411abc6.scope: Deactivated successfully.
Nov 12 17:44:03.704326 systemd[1]: cri-containerd-c2e09c782e546986cb0438f3922307173374114bf0bc4fdf4940b0ec7411abc6.scope: Consumed 7.741s CPU time.
Nov 12 17:44:03.758659 containerd[2016]: time="2024-11-12T17:44:03.758460987Z" level=info msg="shim disconnected" id=c2e09c782e546986cb0438f3922307173374114bf0bc4fdf4940b0ec7411abc6 namespace=k8s.io
Nov 12 17:44:03.758659 containerd[2016]: time="2024-11-12T17:44:03.758651475Z" level=warning msg="cleaning up after shim disconnected" id=c2e09c782e546986cb0438f3922307173374114bf0bc4fdf4940b0ec7411abc6 namespace=k8s.io
Nov 12 17:44:03.758659 containerd[2016]: time="2024-11-12T17:44:03.758674155Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Nov 12 17:44:03.766001 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c2e09c782e546986cb0438f3922307173374114bf0bc4fdf4940b0ec7411abc6-rootfs.mount: Deactivated successfully.
Nov 12 17:44:04.443991 kubelet[3383]: I1112 17:44:04.443531 3383 scope.go:117] "RemoveContainer" containerID="c2e09c782e546986cb0438f3922307173374114bf0bc4fdf4940b0ec7411abc6"
Nov 12 17:44:04.449303 containerd[2016]: time="2024-11-12T17:44:04.449240174Z" level=info msg="CreateContainer within sandbox \"17f16fee9de9021f3f53971691bad270c8fcfab87552ff7373a1fb52c2ac8bdd\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
Nov 12 17:44:04.474794 containerd[2016]: time="2024-11-12T17:44:04.474715682Z" level=info msg="CreateContainer within sandbox \"17f16fee9de9021f3f53971691bad270c8fcfab87552ff7373a1fb52c2ac8bdd\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"4e9677ae1c45127d96744ed76b305d2c20349ad9443ac961e7d643f439f90b3c\""
Nov 12 17:44:04.480734 containerd[2016]: time="2024-11-12T17:44:04.480664454Z" level=info msg="StartContainer for \"4e9677ae1c45127d96744ed76b305d2c20349ad9443ac961e7d643f439f90b3c\""
Nov 12 17:44:04.559198 systemd[1]: Started cri-containerd-4e9677ae1c45127d96744ed76b305d2c20349ad9443ac961e7d643f439f90b3c.scope - libcontainer container 4e9677ae1c45127d96744ed76b305d2c20349ad9443ac961e7d643f439f90b3c.
Nov 12 17:44:04.621978 containerd[2016]: time="2024-11-12T17:44:04.621896355Z" level=info msg="StartContainer for \"4e9677ae1c45127d96744ed76b305d2c20349ad9443ac961e7d643f439f90b3c\" returns successfully"
Nov 12 17:44:06.114855 kubelet[3383]: E1112 17:44:06.114773 3383 controller.go:195] "Failed to update lease" err="Put \"https://172.31.27.255:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-27-255?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Nov 12 17:44:07.183773 systemd[1]: cri-containerd-f3cf42f7c6d8455427569f5e9434626496426d83eaf2e21e55ef8bc8e9121fc8.scope: Deactivated successfully.
Nov 12 17:44:07.186101 systemd[1]: cri-containerd-f3cf42f7c6d8455427569f5e9434626496426d83eaf2e21e55ef8bc8e9121fc8.scope: Consumed 3.118s CPU time, 15.7M memory peak, 0B memory swap peak.
Nov 12 17:44:07.234101 containerd[2016]: time="2024-11-12T17:44:07.232567456Z" level=info msg="shim disconnected" id=f3cf42f7c6d8455427569f5e9434626496426d83eaf2e21e55ef8bc8e9121fc8 namespace=k8s.io
Nov 12 17:44:07.234101 containerd[2016]: time="2024-11-12T17:44:07.232659124Z" level=warning msg="cleaning up after shim disconnected" id=f3cf42f7c6d8455427569f5e9434626496426d83eaf2e21e55ef8bc8e9121fc8 namespace=k8s.io
Nov 12 17:44:07.234101 containerd[2016]: time="2024-11-12T17:44:07.232686856Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Nov 12 17:44:07.240638 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f3cf42f7c6d8455427569f5e9434626496426d83eaf2e21e55ef8bc8e9121fc8-rootfs.mount: Deactivated successfully.
Nov 12 17:44:07.458945 kubelet[3383]: I1112 17:44:07.458793 3383 scope.go:117] "RemoveContainer" containerID="f3cf42f7c6d8455427569f5e9434626496426d83eaf2e21e55ef8bc8e9121fc8"
Nov 12 17:44:07.464700 containerd[2016]: time="2024-11-12T17:44:07.464193785Z" level=info msg="CreateContainer within sandbox \"90371a8754523d695a65c32f84de46dafa549470f94e49aa9f021bad40e9aa91\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Nov 12 17:44:07.486880 containerd[2016]: time="2024-11-12T17:44:07.485605865Z" level=info msg="CreateContainer within sandbox \"90371a8754523d695a65c32f84de46dafa549470f94e49aa9f021bad40e9aa91\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"6877b6f41c4b5fa9c9e0f3d75457f4d9a6d7257c34f88ac91b1be9bd0920eeab\""
Nov 12 17:44:07.489595 containerd[2016]: time="2024-11-12T17:44:07.489524285Z" level=info msg="StartContainer for \"6877b6f41c4b5fa9c9e0f3d75457f4d9a6d7257c34f88ac91b1be9bd0920eeab\""
Nov 12 17:44:07.570194 systemd[1]: Started cri-containerd-6877b6f41c4b5fa9c9e0f3d75457f4d9a6d7257c34f88ac91b1be9bd0920eeab.scope - libcontainer container 6877b6f41c4b5fa9c9e0f3d75457f4d9a6d7257c34f88ac91b1be9bd0920eeab.
Nov 12 17:44:07.639624 containerd[2016]: time="2024-11-12T17:44:07.639484914Z" level=info msg="StartContainer for \"6877b6f41c4b5fa9c9e0f3d75457f4d9a6d7257c34f88ac91b1be9bd0920eeab\" returns successfully"
Nov 12 17:44:16.116269 kubelet[3383]: E1112 17:44:16.116053 3383 controller.go:195] "Failed to update lease" err="Put \"https://172.31.27.255:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-27-255?timeout=10s\": context deadline exceeded"