Dec 13 08:56:12.895945 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Dec 13 08:56:12.895969 kernel: Linux version 6.6.65-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Thu Dec 12 23:24:21 -00 2024 Dec 13 08:56:12.895980 kernel: KASLR enabled Dec 13 08:56:12.895985 kernel: efi: EFI v2.7 by EDK II Dec 13 08:56:12.897331 kernel: efi: SMBIOS 3.0=0x135ed0000 MEMATTR=0x133d4d698 ACPI 2.0=0x132430018 RNG=0x13243e918 MEMRESERVE=0x13232ed18 Dec 13 08:56:12.897380 kernel: random: crng init done Dec 13 08:56:12.897391 kernel: ACPI: Early table checksum verification disabled Dec 13 08:56:12.897397 kernel: ACPI: RSDP 0x0000000132430018 000024 (v02 BOCHS ) Dec 13 08:56:12.897404 kernel: ACPI: XSDT 0x000000013243FE98 00006C (v01 BOCHS BXPC 00000001 01000013) Dec 13 08:56:12.897410 kernel: ACPI: FACP 0x000000013243FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 08:56:12.897422 kernel: ACPI: DSDT 0x0000000132437518 001468 (v02 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 08:56:12.897428 kernel: ACPI: APIC 0x000000013243FC18 000108 (v04 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 08:56:12.897434 kernel: ACPI: PPTT 0x000000013243FD98 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 08:56:12.897440 kernel: ACPI: GTDT 0x000000013243D898 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 08:56:12.897448 kernel: ACPI: MCFG 0x000000013243FF98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 08:56:12.897456 kernel: ACPI: SPCR 0x000000013243E818 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 08:56:12.897463 kernel: ACPI: DBG2 0x000000013243E898 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 08:56:12.897495 kernel: ACPI: IORT 0x000000013243E418 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 08:56:12.897502 kernel: ACPI: BGRT 0x000000013243E798 000038 (v01 INTEL EDK2 00000002 01000013) Dec 13 08:56:12.897509 kernel: ACPI: SPCR: console: pl011,mmio32,0x9000000,9600 Dec 13 08:56:12.897515 kernel: NUMA: Failed to initialise from firmware Dec 13 08:56:12.897521 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x0000000139ffffff] Dec 13 08:56:12.897542 kernel: NUMA: NODE_DATA [mem 0x13981f800-0x139824fff] Dec 13 08:56:12.897550 kernel: Zone ranges: Dec 13 08:56:12.897557 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff] Dec 13 08:56:12.897563 kernel: DMA32 empty Dec 13 08:56:12.897572 kernel: Normal [mem 0x0000000100000000-0x0000000139ffffff] Dec 13 08:56:12.897579 kernel: Movable zone start for each node Dec 13 08:56:12.897585 kernel: Early memory node ranges Dec 13 08:56:12.897591 kernel: node 0: [mem 0x0000000040000000-0x000000013243ffff] Dec 13 08:56:12.897614 kernel: node 0: [mem 0x0000000132440000-0x000000013272ffff] Dec 13 08:56:12.897621 kernel: node 0: [mem 0x0000000132730000-0x0000000135bfffff] Dec 13 08:56:12.897628 kernel: node 0: [mem 0x0000000135c00000-0x0000000135fdffff] Dec 13 08:56:12.897634 kernel: node 0: [mem 0x0000000135fe0000-0x0000000139ffffff] Dec 13 08:56:12.897641 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x0000000139ffffff] Dec 13 08:56:12.897663 kernel: On node 0, zone Normal: 24576 pages in unavailable ranges Dec 13 08:56:12.897671 kernel: psci: probing for conduit method from ACPI. Dec 13 08:56:12.897681 kernel: psci: PSCIv1.1 detected in firmware. 
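The faked NUMA node above covers [mem 0x0000000040000000-0x0000000139ffffff] (end address inclusive). A quick sanity check of that span against the memory totals reported further down; plain arithmetic only, ignoring the holes and reservations the kernel subtracts later:

```python
# Span of the faked NUMA node (the end address above is inclusive).
start = 0x0000000040000000
end = 0x0000000139ffffff

span = end + 1 - start
print(span // 1024)   # 4096000 -> the "4096000K" figure in the Memory: line below
print(span // 4096)   # 1024000 4-KiB page frames before holes/reservations
```

The later "Memory: 3881592K/4096000K available" entry reports the same total, with the difference taken up by reserved ranges.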
Dec 13 08:56:12.897687 kernel: psci: Using standard PSCI v0.2 function IDs Dec 13 08:56:12.897694 kernel: psci: Trusted OS migration not required Dec 13 08:56:12.897719 kernel: psci: SMC Calling Convention v1.1 Dec 13 08:56:12.897729 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) Dec 13 08:56:12.897736 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976 Dec 13 08:56:12.897746 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096 Dec 13 08:56:12.897753 kernel: pcpu-alloc: [0] 0 [0] 1 Dec 13 08:56:12.897760 kernel: Detected PIPT I-cache on CPU0 Dec 13 08:56:12.897766 kernel: CPU features: detected: GIC system register CPU interface Dec 13 08:56:12.897773 kernel: CPU features: detected: Hardware dirty bit management Dec 13 08:56:12.897780 kernel: CPU features: detected: Spectre-v4 Dec 13 08:56:12.897787 kernel: CPU features: detected: Spectre-BHB Dec 13 08:56:12.897793 kernel: CPU features: kernel page table isolation forced ON by KASLR Dec 13 08:56:12.897800 kernel: CPU features: detected: Kernel page table isolation (KPTI) Dec 13 08:56:12.897807 kernel: CPU features: detected: ARM erratum 1418040 Dec 13 08:56:12.897814 kernel: CPU features: detected: SSBS not fully self-synchronizing Dec 13 08:56:12.897823 kernel: alternatives: applying boot alternatives Dec 13 08:56:12.897831 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=9494f75a68cfbdce95d0d2f9b58d6d75bc38ee5b4e31dfc2a6da695ffafefba6 Dec 13 08:56:12.897839 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Dec 13 08:56:12.897845 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Dec 13 08:56:12.897852 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Dec 13 08:56:12.897859 kernel: Fallback order for Node 0: 0 Dec 13 08:56:12.897866 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1008000 Dec 13 08:56:12.897872 kernel: Policy zone: Normal Dec 13 08:56:12.897879 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Dec 13 08:56:12.897886 kernel: software IO TLB: area num 2. Dec 13 08:56:12.897893 kernel: software IO TLB: mapped [mem 0x00000000fbfff000-0x00000000fffff000] (64MB) Dec 13 08:56:12.897902 kernel: Memory: 3881592K/4096000K available (10240K kernel code, 2184K rwdata, 8096K rodata, 39360K init, 897K bss, 214408K reserved, 0K cma-reserved) Dec 13 08:56:12.897909 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Dec 13 08:56:12.897916 kernel: trace event string verifier disabled Dec 13 08:56:12.897923 kernel: rcu: Preemptible hierarchical RCU implementation. Dec 13 08:56:12.897987 kernel: rcu: RCU event tracing is enabled. Dec 13 08:56:12.898008 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Dec 13 08:56:12.898043 kernel: Trampoline variant of Tasks RCU enabled. Dec 13 08:56:12.898050 kernel: Tracing variant of Tasks RCU enabled. Dec 13 08:56:12.898057 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
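The "Kernel command line" entry above carries Flatcar's dm-verity setup for /usr (verity.usr= names the partition, verity.usrhash= the expected root hash). A minimal sketch, assuming it runs on the booted machine, that recovers those values from /proc/cmdline (naive whitespace splitting; quoted values would need more care):

```python
from pathlib import Path

# /proc/cmdline holds the same string the kernel echoed above.
params = {}
for token in Path("/proc/cmdline").read_text().split():
    key, _, value = token.partition("=")   # split at the first "=" only
    params[key] = value

print(params.get("verity.usr"))      # PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132
print(params.get("verity.usrhash"))  # 9494f75a68cf... (expected root hash for /usr)
```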
Dec 13 08:56:12.898064 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Dec 13 08:56:12.898071 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Dec 13 08:56:12.898081 kernel: GICv3: 256 SPIs implemented Dec 13 08:56:12.898088 kernel: GICv3: 0 Extended SPIs implemented Dec 13 08:56:12.898095 kernel: Root IRQ handler: gic_handle_irq Dec 13 08:56:12.898101 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Dec 13 08:56:12.898108 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 Dec 13 08:56:12.898144 kernel: ITS [mem 0x08080000-0x0809ffff] Dec 13 08:56:12.898156 kernel: ITS@0x0000000008080000: allocated 8192 Devices @1000c0000 (indirect, esz 8, psz 64K, shr 1) Dec 13 08:56:12.898173 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @1000d0000 (flat, esz 8, psz 64K, shr 1) Dec 13 08:56:12.898180 kernel: GICv3: using LPI property table @0x00000001000e0000 Dec 13 08:56:12.898187 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000001000f0000 Dec 13 08:56:12.898194 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Dec 13 08:56:12.898203 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 08:56:12.898211 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Dec 13 08:56:12.898218 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Dec 13 08:56:12.898225 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Dec 13 08:56:12.898232 kernel: Console: colour dummy device 80x25 Dec 13 08:56:12.898239 kernel: ACPI: Core revision 20230628 Dec 13 08:56:12.898246 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Dec 13 08:56:12.898254 kernel: pid_max: default: 32768 minimum: 301 Dec 13 08:56:12.898261 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Dec 13 08:56:12.898268 kernel: landlock: Up and running. Dec 13 08:56:12.898276 kernel: SELinux: Initializing. Dec 13 08:56:12.898283 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Dec 13 08:56:12.898291 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Dec 13 08:56:12.898298 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Dec 13 08:56:12.898305 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Dec 13 08:56:12.898312 kernel: rcu: Hierarchical SRCU implementation. Dec 13 08:56:12.898319 kernel: rcu: Max phase no-delay instances is 400. Dec 13 08:56:12.898326 kernel: Platform MSI: ITS@0x8080000 domain created Dec 13 08:56:12.898358 kernel: PCI/MSI: ITS@0x8080000 domain created Dec 13 08:56:12.898367 kernel: Remapping and enabling EFI services. Dec 13 08:56:12.898375 kernel: smp: Bringing up secondary CPUs ... Dec 13 08:56:12.898382 kernel: Detected PIPT I-cache on CPU1 Dec 13 08:56:12.898389 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 Dec 13 08:56:12.898396 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000100100000 Dec 13 08:56:12.898403 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 08:56:12.898410 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Dec 13 08:56:12.898417 kernel: smp: Brought up 1 node, 2 CPUs Dec 13 08:56:12.898424 kernel: SMP: Total of 2 processors activated. 
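The calibration entry above is computed rather than measured ("skipped"): at the 25.00 MHz system counter, one tick is 1/25 MHz = 40 ns, which is exactly the sched_clock resolution shown. The BogoMIPS figure follows from lpj with the usual formula; the CONFIG_HZ=1000 value here is an assumption, though it is consistent with lpj = 25 MHz / HZ:

```python
freq = 25_000_000           # arch_sys_counter frequency from the log
print(1e9 / freq)           # 40.0 -> "resolution 40ns"

HZ = 1000                   # assumed CONFIG_HZ; matches lpj = freq / HZ = 25000
lpj = 25_000                # loops per jiffy ("lpj=25000" above)
print(lpj * HZ * 2 / 1e6)   # 50.0 -> "50.00 BogoMIPS"
```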
Dec 13 08:56:12.898431 kernel: CPU features: detected: 32-bit EL0 Support Dec 13 08:56:12.898439 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Dec 13 08:56:12.898447 kernel: CPU features: detected: Common not Private translations Dec 13 08:56:12.898459 kernel: CPU features: detected: CRC32 instructions Dec 13 08:56:12.898468 kernel: CPU features: detected: Enhanced Virtualization Traps Dec 13 08:56:12.898476 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Dec 13 08:56:12.898483 kernel: CPU features: detected: LSE atomic instructions Dec 13 08:56:12.898491 kernel: CPU features: detected: Privileged Access Never Dec 13 08:56:12.898515 kernel: CPU features: detected: RAS Extension Support Dec 13 08:56:12.898525 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Dec 13 08:56:12.898536 kernel: CPU: All CPU(s) started at EL1 Dec 13 08:56:12.898544 kernel: alternatives: applying system-wide alternatives Dec 13 08:56:12.898551 kernel: devtmpfs: initialized Dec 13 08:56:12.898558 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Dec 13 08:56:12.898566 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Dec 13 08:56:12.898574 kernel: pinctrl core: initialized pinctrl subsystem Dec 13 08:56:12.898581 kernel: SMBIOS 3.0.0 present. Dec 13 08:56:12.898590 kernel: DMI: Hetzner vServer/KVM Virtual Machine, BIOS 20171111 11/11/2017 Dec 13 08:56:12.898597 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Dec 13 08:56:12.898605 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Dec 13 08:56:12.898612 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Dec 13 08:56:12.898620 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Dec 13 08:56:12.898627 kernel: audit: initializing netlink subsys (disabled) Dec 13 08:56:12.898635 kernel: audit: type=2000 audit(0.015:1): state=initialized audit_enabled=0 res=1 Dec 13 08:56:12.898642 kernel: thermal_sys: Registered thermal governor 'step_wise' Dec 13 08:56:12.898649 kernel: cpuidle: using governor menu Dec 13 08:56:12.898658 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. 
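The per-feature lines above (CRC32 instructions, LSE atomics, SSBS, ...) are exposed to user space as hwcaps, readable from /proc/cpuinfo on the booted machine. A small check; note the hwcap names differ from the boot-log wording (LSE atomics appear as "atomics"):

```python
features = set()
with open("/proc/cpuinfo") as f:
    for line in f:
        if line.startswith("Features"):
            features |= set(line.split(":", 1)[1].split())

# hwcap spellings, not the log's: "crc32" = CRC32 instructions,
# "atomics" = LSE atomics, "ssbs" = Speculative Store Bypassing Safe.
for cap in ("crc32", "atomics", "ssbs"):
    print(cap, cap in features)
```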
Dec 13 08:56:12.898666 kernel: ASID allocator initialised with 32768 entries Dec 13 08:56:12.898673 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Dec 13 08:56:12.898680 kernel: Serial: AMBA PL011 UART driver Dec 13 08:56:12.898688 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Dec 13 08:56:12.898695 kernel: Modules: 0 pages in range for non-PLT usage Dec 13 08:56:12.898702 kernel: Modules: 509040 pages in range for PLT usage Dec 13 08:56:12.898710 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Dec 13 08:56:12.898717 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Dec 13 08:56:12.898726 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Dec 13 08:56:12.898733 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Dec 13 08:56:12.898740 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Dec 13 08:56:12.898748 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Dec 13 08:56:12.898755 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Dec 13 08:56:12.898763 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Dec 13 08:56:12.898770 kernel: ACPI: Added _OSI(Module Device) Dec 13 08:56:12.898777 kernel: ACPI: Added _OSI(Processor Device) Dec 13 08:56:12.898784 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Dec 13 08:56:12.898792 kernel: ACPI: Added _OSI(Processor Aggregator Device) Dec 13 08:56:12.898801 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Dec 13 08:56:12.898808 kernel: ACPI: Interpreter enabled Dec 13 08:56:12.898815 kernel: ACPI: Using GIC for interrupt routing Dec 13 08:56:12.898823 kernel: ACPI: MCFG table detected, 1 entries Dec 13 08:56:12.898830 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA Dec 13 08:56:12.898837 kernel: printk: console [ttyAMA0] enabled Dec 13 08:56:12.898845 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Dec 13 08:56:12.899735 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Dec 13 08:56:12.901190 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Dec 13 08:56:12.901279 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Dec 13 08:56:12.901347 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 Dec 13 08:56:12.901411 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] Dec 13 08:56:12.901421 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] Dec 13 08:56:12.901428 kernel: PCI host bridge to bus 0000:00 Dec 13 08:56:12.901503 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] Dec 13 08:56:12.901572 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Dec 13 08:56:12.901631 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] Dec 13 08:56:12.901690 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Dec 13 08:56:12.901772 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 Dec 13 08:56:12.901851 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x038000 Dec 13 08:56:12.901918 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x11289000-0x11289fff] Dec 13 08:56:12.901987 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000600000-0x8000603fff 64bit pref] Dec 13 08:56:12.902082 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 Dec 13 08:56:12.902151 kernel: pci 
0000:00:02.0: reg 0x10: [mem 0x11288000-0x11288fff] Dec 13 08:56:12.902280 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 Dec 13 08:56:12.902349 kernel: pci 0000:00:02.1: reg 0x10: [mem 0x11287000-0x11287fff] Dec 13 08:56:12.902424 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 Dec 13 08:56:12.902493 kernel: pci 0000:00:02.2: reg 0x10: [mem 0x11286000-0x11286fff] Dec 13 08:56:12.902577 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 Dec 13 08:56:12.902655 kernel: pci 0000:00:02.3: reg 0x10: [mem 0x11285000-0x11285fff] Dec 13 08:56:12.902728 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 Dec 13 08:56:12.902794 kernel: pci 0000:00:02.4: reg 0x10: [mem 0x11284000-0x11284fff] Dec 13 08:56:12.902867 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 Dec 13 08:56:12.902934 kernel: pci 0000:00:02.5: reg 0x10: [mem 0x11283000-0x11283fff] Dec 13 08:56:12.906038 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 Dec 13 08:56:12.906260 kernel: pci 0000:00:02.6: reg 0x10: [mem 0x11282000-0x11282fff] Dec 13 08:56:12.906345 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 Dec 13 08:56:12.906414 kernel: pci 0000:00:02.7: reg 0x10: [mem 0x11281000-0x11281fff] Dec 13 08:56:12.906488 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400 Dec 13 08:56:12.906555 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x11280000-0x11280fff] Dec 13 08:56:12.906640 kernel: pci 0000:00:04.0: [1b36:0002] type 00 class 0x070002 Dec 13 08:56:12.906706 kernel: pci 0000:00:04.0: reg 0x10: [io 0x8200-0x8207] Dec 13 08:56:12.906784 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000 Dec 13 08:56:12.906854 kernel: pci 0000:01:00.0: reg 0x14: [mem 0x11000000-0x11000fff] Dec 13 08:56:12.906922 kernel: pci 0000:01:00.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref] Dec 13 08:56:12.906990 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref] Dec 13 08:56:12.907087 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330 Dec 13 08:56:12.907200 kernel: pci 0000:02:00.0: reg 0x10: [mem 0x10e00000-0x10e03fff 64bit] Dec 13 08:56:12.907288 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000 Dec 13 08:56:12.907369 kernel: pci 0000:03:00.0: reg 0x14: [mem 0x10c00000-0x10c00fff] Dec 13 08:56:12.907439 kernel: pci 0000:03:00.0: reg 0x20: [mem 0x8000100000-0x8000103fff 64bit pref] Dec 13 08:56:12.907520 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00 Dec 13 08:56:12.907589 kernel: pci 0000:04:00.0: reg 0x20: [mem 0x8000200000-0x8000203fff 64bit pref] Dec 13 08:56:12.907673 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00 Dec 13 08:56:12.907742 kernel: pci 0000:05:00.0: reg 0x20: [mem 0x8000300000-0x8000303fff 64bit pref] Dec 13 08:56:12.907817 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000 Dec 13 08:56:12.907884 kernel: pci 0000:06:00.0: reg 0x14: [mem 0x10600000-0x10600fff] Dec 13 08:56:12.907954 kernel: pci 0000:06:00.0: reg 0x20: [mem 0x8000400000-0x8000403fff 64bit pref] Dec 13 08:56:12.909609 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000 Dec 13 08:56:12.909717 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x10400000-0x10400fff] Dec 13 08:56:12.909871 kernel: pci 0000:07:00.0: reg 0x20: [mem 0x8000500000-0x8000503fff 64bit pref] Dec 13 08:56:12.909946 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref] Dec 13 08:56:12.910032 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000 Dec 13 08:56:12.910103 kernel: pci 
0000:00:02.0: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 01] add_size 100000 add_align 100000 Dec 13 08:56:12.910207 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff] to [bus 01] add_size 100000 add_align 100000 Dec 13 08:56:12.910288 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 Dec 13 08:56:12.910361 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 Dec 13 08:56:12.910425 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x001fffff] to [bus 02] add_size 100000 add_align 100000 Dec 13 08:56:12.910494 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 Dec 13 08:56:12.910558 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 03] add_size 100000 add_align 100000 Dec 13 08:56:12.910622 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000 Dec 13 08:56:12.910695 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 Dec 13 08:56:12.910759 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 04] add_size 100000 add_align 100000 Dec 13 08:56:12.910826 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000 Dec 13 08:56:12.910944 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000 Dec 13 08:56:12.912135 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 05] add_size 100000 add_align 100000 Dec 13 08:56:12.912281 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x000fffff] to [bus 05] add_size 200000 add_align 100000 Dec 13 08:56:12.912488 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 Dec 13 08:56:12.912589 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 06] add_size 100000 add_align 100000 Dec 13 08:56:12.912690 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff] to [bus 06] add_size 100000 add_align 100000 Dec 13 08:56:12.912803 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Dec 13 08:56:12.912929 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 07] add_size 100000 add_align 100000 Dec 13 08:56:12.913502 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff] to [bus 07] add_size 100000 add_align 100000 Dec 13 08:56:12.913622 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Dec 13 08:56:12.913690 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 08] add_size 200000 add_align 100000 Dec 13 08:56:12.913756 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff] to [bus 08] add_size 200000 add_align 100000 Dec 13 08:56:12.913827 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Dec 13 08:56:12.913895 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 09] add_size 200000 add_align 100000 Dec 13 08:56:12.913998 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 09] add_size 200000 add_align 100000 Dec 13 08:56:12.914096 kernel: pci 0000:00:02.0: BAR 14: assigned [mem 0x10000000-0x101fffff] Dec 13 08:56:12.914177 kernel: pci 0000:00:02.0: BAR 15: assigned [mem 
0x8000000000-0x80001fffff 64bit pref] Dec 13 08:56:12.914253 kernel: pci 0000:00:02.1: BAR 14: assigned [mem 0x10200000-0x103fffff] Dec 13 08:56:12.914319 kernel: pci 0000:00:02.1: BAR 15: assigned [mem 0x8000200000-0x80003fffff 64bit pref] Dec 13 08:56:12.914388 kernel: pci 0000:00:02.2: BAR 14: assigned [mem 0x10400000-0x105fffff] Dec 13 08:56:12.914454 kernel: pci 0000:00:02.2: BAR 15: assigned [mem 0x8000400000-0x80005fffff 64bit pref] Dec 13 08:56:12.914537 kernel: pci 0000:00:02.3: BAR 14: assigned [mem 0x10600000-0x107fffff] Dec 13 08:56:12.914604 kernel: pci 0000:00:02.3: BAR 15: assigned [mem 0x8000600000-0x80007fffff 64bit pref] Dec 13 08:56:12.914673 kernel: pci 0000:00:02.4: BAR 14: assigned [mem 0x10800000-0x109fffff] Dec 13 08:56:12.914796 kernel: pci 0000:00:02.4: BAR 15: assigned [mem 0x8000800000-0x80009fffff 64bit pref] Dec 13 08:56:12.914873 kernel: pci 0000:00:02.5: BAR 14: assigned [mem 0x10a00000-0x10bfffff] Dec 13 08:56:12.914943 kernel: pci 0000:00:02.5: BAR 15: assigned [mem 0x8000a00000-0x8000bfffff 64bit pref] Dec 13 08:56:12.915261 kernel: pci 0000:00:02.6: BAR 14: assigned [mem 0x10c00000-0x10dfffff] Dec 13 08:56:12.915391 kernel: pci 0000:00:02.6: BAR 15: assigned [mem 0x8000c00000-0x8000dfffff 64bit pref] Dec 13 08:56:12.915460 kernel: pci 0000:00:02.7: BAR 14: assigned [mem 0x10e00000-0x10ffffff] Dec 13 08:56:12.915525 kernel: pci 0000:00:02.7: BAR 15: assigned [mem 0x8000e00000-0x8000ffffff 64bit pref] Dec 13 08:56:12.915591 kernel: pci 0000:00:03.0: BAR 14: assigned [mem 0x11000000-0x111fffff] Dec 13 08:56:12.915656 kernel: pci 0000:00:03.0: BAR 15: assigned [mem 0x8001000000-0x80011fffff 64bit pref] Dec 13 08:56:12.915726 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8001200000-0x8001203fff 64bit pref] Dec 13 08:56:12.915793 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x11200000-0x11200fff] Dec 13 08:56:12.915858 kernel: pci 0000:00:02.0: BAR 0: assigned [mem 0x11201000-0x11201fff] Dec 13 08:56:12.915923 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff] Dec 13 08:56:12.915988 kernel: pci 0000:00:02.1: BAR 0: assigned [mem 0x11202000-0x11202fff] Dec 13 08:56:12.916207 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff] Dec 13 08:56:12.916288 kernel: pci 0000:00:02.2: BAR 0: assigned [mem 0x11203000-0x11203fff] Dec 13 08:56:12.916353 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff] Dec 13 08:56:12.916423 kernel: pci 0000:00:02.3: BAR 0: assigned [mem 0x11204000-0x11204fff] Dec 13 08:56:12.916494 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff] Dec 13 08:56:12.916560 kernel: pci 0000:00:02.4: BAR 0: assigned [mem 0x11205000-0x11205fff] Dec 13 08:56:12.916624 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff] Dec 13 08:56:12.916692 kernel: pci 0000:00:02.5: BAR 0: assigned [mem 0x11206000-0x11206fff] Dec 13 08:56:12.916757 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff] Dec 13 08:56:12.916823 kernel: pci 0000:00:02.6: BAR 0: assigned [mem 0x11207000-0x11207fff] Dec 13 08:56:12.916887 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff] Dec 13 08:56:12.916952 kernel: pci 0000:00:02.7: BAR 0: assigned [mem 0x11208000-0x11208fff] Dec 13 08:56:12.917076 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff] Dec 13 08:56:12.917148 kernel: pci 0000:00:03.0: BAR 0: assigned [mem 0x11209000-0x11209fff] Dec 13 08:56:12.917282 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x9000-0x9fff] Dec 13 08:56:12.917358 kernel: pci 0000:00:04.0: BAR 0: assigned [io 0xa000-0xa007] Dec 
13 08:56:12.917452 kernel: pci 0000:01:00.0: BAR 6: assigned [mem 0x10000000-0x1007ffff pref] Dec 13 08:56:12.917523 kernel: pci 0000:01:00.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref] Dec 13 08:56:12.917590 kernel: pci 0000:01:00.0: BAR 1: assigned [mem 0x10080000-0x10080fff] Dec 13 08:56:12.917654 kernel: pci 0000:00:02.0: PCI bridge to [bus 01] Dec 13 08:56:12.917723 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff] Dec 13 08:56:12.917787 kernel: pci 0000:00:02.0: bridge window [mem 0x10000000-0x101fffff] Dec 13 08:56:12.917850 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80001fffff 64bit pref] Dec 13 08:56:12.917920 kernel: pci 0000:02:00.0: BAR 0: assigned [mem 0x10200000-0x10203fff 64bit] Dec 13 08:56:12.917985 kernel: pci 0000:00:02.1: PCI bridge to [bus 02] Dec 13 08:56:12.918075 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff] Dec 13 08:56:12.918143 kernel: pci 0000:00:02.1: bridge window [mem 0x10200000-0x103fffff] Dec 13 08:56:12.918223 kernel: pci 0000:00:02.1: bridge window [mem 0x8000200000-0x80003fffff 64bit pref] Dec 13 08:56:12.918301 kernel: pci 0000:03:00.0: BAR 4: assigned [mem 0x8000400000-0x8000403fff 64bit pref] Dec 13 08:56:12.918385 kernel: pci 0000:03:00.0: BAR 1: assigned [mem 0x10400000-0x10400fff] Dec 13 08:56:12.918471 kernel: pci 0000:00:02.2: PCI bridge to [bus 03] Dec 13 08:56:12.918539 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff] Dec 13 08:56:12.918604 kernel: pci 0000:00:02.2: bridge window [mem 0x10400000-0x105fffff] Dec 13 08:56:12.918672 kernel: pci 0000:00:02.2: bridge window [mem 0x8000400000-0x80005fffff 64bit pref] Dec 13 08:56:12.918747 kernel: pci 0000:04:00.0: BAR 4: assigned [mem 0x8000600000-0x8000603fff 64bit pref] Dec 13 08:56:12.918814 kernel: pci 0000:00:02.3: PCI bridge to [bus 04] Dec 13 08:56:12.918879 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff] Dec 13 08:56:12.918944 kernel: pci 0000:00:02.3: bridge window [mem 0x10600000-0x107fffff] Dec 13 08:56:12.919009 kernel: pci 0000:00:02.3: bridge window [mem 0x8000600000-0x80007fffff 64bit pref] Dec 13 08:56:12.920671 kernel: pci 0000:05:00.0: BAR 4: assigned [mem 0x8000800000-0x8000803fff 64bit pref] Dec 13 08:56:12.920754 kernel: pci 0000:00:02.4: PCI bridge to [bus 05] Dec 13 08:56:12.920819 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff] Dec 13 08:56:12.920885 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff] Dec 13 08:56:12.920949 kernel: pci 0000:00:02.4: bridge window [mem 0x8000800000-0x80009fffff 64bit pref] Dec 13 08:56:12.921101 kernel: pci 0000:06:00.0: BAR 4: assigned [mem 0x8000a00000-0x8000a03fff 64bit pref] Dec 13 08:56:12.921193 kernel: pci 0000:06:00.0: BAR 1: assigned [mem 0x10a00000-0x10a00fff] Dec 13 08:56:12.921264 kernel: pci 0000:00:02.5: PCI bridge to [bus 06] Dec 13 08:56:12.921329 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff] Dec 13 08:56:12.921393 kernel: pci 0000:00:02.5: bridge window [mem 0x10a00000-0x10bfffff] Dec 13 08:56:12.921462 kernel: pci 0000:00:02.5: bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref] Dec 13 08:56:12.921537 kernel: pci 0000:07:00.0: BAR 6: assigned [mem 0x10c00000-0x10c7ffff pref] Dec 13 08:56:12.921606 kernel: pci 0000:07:00.0: BAR 4: assigned [mem 0x8000c00000-0x8000c03fff 64bit pref] Dec 13 08:56:12.921677 kernel: pci 0000:07:00.0: BAR 1: assigned [mem 0x10c80000-0x10c80fff] Dec 13 08:56:12.921744 kernel: pci 0000:00:02.6: PCI bridge to [bus 07] Dec 13 08:56:12.921809 kernel: pci 0000:00:02.6: bridge window 
[io 0x7000-0x7fff] Dec 13 08:56:12.922142 kernel: pci 0000:00:02.6: bridge window [mem 0x10c00000-0x10dfffff] Dec 13 08:56:12.922239 kernel: pci 0000:00:02.6: bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref] Dec 13 08:56:12.922309 kernel: pci 0000:00:02.7: PCI bridge to [bus 08] Dec 13 08:56:12.922373 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff] Dec 13 08:56:12.922437 kernel: pci 0000:00:02.7: bridge window [mem 0x10e00000-0x10ffffff] Dec 13 08:56:12.922501 kernel: pci 0000:00:02.7: bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref] Dec 13 08:56:12.922569 kernel: pci 0000:00:03.0: PCI bridge to [bus 09] Dec 13 08:56:12.922634 kernel: pci 0000:00:03.0: bridge window [io 0x9000-0x9fff] Dec 13 08:56:12.922697 kernel: pci 0000:00:03.0: bridge window [mem 0x11000000-0x111fffff] Dec 13 08:56:12.922764 kernel: pci 0000:00:03.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref] Dec 13 08:56:12.922832 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Dec 13 08:56:12.922890 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Dec 13 08:56:12.922949 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Dec 13 08:56:12.923035 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff] Dec 13 08:56:12.923103 kernel: pci_bus 0000:01: resource 1 [mem 0x10000000-0x101fffff] Dec 13 08:56:12.923202 kernel: pci_bus 0000:01: resource 2 [mem 0x8000000000-0x80001fffff 64bit pref] Dec 13 08:56:12.923288 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x2fff] Dec 13 08:56:12.923360 kernel: pci_bus 0000:02: resource 1 [mem 0x10200000-0x103fffff] Dec 13 08:56:12.923423 kernel: pci_bus 0000:02: resource 2 [mem 0x8000200000-0x80003fffff 64bit pref] Dec 13 08:56:12.923497 kernel: pci_bus 0000:03: resource 0 [io 0x3000-0x3fff] Dec 13 08:56:12.923559 kernel: pci_bus 0000:03: resource 1 [mem 0x10400000-0x105fffff] Dec 13 08:56:12.923619 kernel: pci_bus 0000:03: resource 2 [mem 0x8000400000-0x80005fffff 64bit pref] Dec 13 08:56:12.923687 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff] Dec 13 08:56:12.923752 kernel: pci_bus 0000:04: resource 1 [mem 0x10600000-0x107fffff] Dec 13 08:56:12.923815 kernel: pci_bus 0000:04: resource 2 [mem 0x8000600000-0x80007fffff 64bit pref] Dec 13 08:56:12.923898 kernel: pci_bus 0000:05: resource 0 [io 0x5000-0x5fff] Dec 13 08:56:12.923962 kernel: pci_bus 0000:05: resource 1 [mem 0x10800000-0x109fffff] Dec 13 08:56:12.926187 kernel: pci_bus 0000:05: resource 2 [mem 0x8000800000-0x80009fffff 64bit pref] Dec 13 08:56:12.926306 kernel: pci_bus 0000:06: resource 0 [io 0x6000-0x6fff] Dec 13 08:56:12.926459 kernel: pci_bus 0000:06: resource 1 [mem 0x10a00000-0x10bfffff] Dec 13 08:56:12.926578 kernel: pci_bus 0000:06: resource 2 [mem 0x8000a00000-0x8000bfffff 64bit pref] Dec 13 08:56:12.926659 kernel: pci_bus 0000:07: resource 0 [io 0x7000-0x7fff] Dec 13 08:56:12.926723 kernel: pci_bus 0000:07: resource 1 [mem 0x10c00000-0x10dfffff] Dec 13 08:56:12.926826 kernel: pci_bus 0000:07: resource 2 [mem 0x8000c00000-0x8000dfffff 64bit pref] Dec 13 08:56:12.926938 kernel: pci_bus 0000:08: resource 0 [io 0x8000-0x8fff] Dec 13 08:56:12.927061 kernel: pci_bus 0000:08: resource 1 [mem 0x10e00000-0x10ffffff] Dec 13 08:56:12.927204 kernel: pci_bus 0000:08: resource 2 [mem 0x8000e00000-0x8000ffffff 64bit pref] Dec 13 08:56:12.927440 kernel: pci_bus 0000:09: resource 0 [io 0x9000-0x9fff] Dec 13 08:56:12.927572 kernel: pci_bus 0000:09: resource 1 [mem 0x11000000-0x111fffff] Dec 13 08:56:12.927641 kernel: pci_bus 0000:09: resource 2 
[mem 0x8001000000-0x80011fffff 64bit pref] Dec 13 08:56:12.927658 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Dec 13 08:56:12.927667 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Dec 13 08:56:12.927676 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Dec 13 08:56:12.927684 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Dec 13 08:56:12.927692 kernel: iommu: Default domain type: Translated Dec 13 08:56:12.927700 kernel: iommu: DMA domain TLB invalidation policy: strict mode Dec 13 08:56:12.927708 kernel: efivars: Registered efivars operations Dec 13 08:56:12.927715 kernel: vgaarb: loaded Dec 13 08:56:12.927723 kernel: clocksource: Switched to clocksource arch_sys_counter Dec 13 08:56:12.927733 kernel: VFS: Disk quotas dquot_6.6.0 Dec 13 08:56:12.927741 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Dec 13 08:56:12.927750 kernel: pnp: PnP ACPI init Dec 13 08:56:12.927829 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Dec 13 08:56:12.927841 kernel: pnp: PnP ACPI: found 1 devices Dec 13 08:56:12.927849 kernel: NET: Registered PF_INET protocol family Dec 13 08:56:12.927857 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Dec 13 08:56:12.927864 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Dec 13 08:56:12.927875 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Dec 13 08:56:12.927883 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Dec 13 08:56:12.927891 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Dec 13 08:56:12.927899 kernel: TCP: Hash tables configured (established 32768 bind 32768) Dec 13 08:56:12.927907 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Dec 13 08:56:12.927914 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Dec 13 08:56:12.927922 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Dec 13 08:56:12.928000 kernel: pci 0000:02:00.0: enabling device (0000 -> 0002) Dec 13 08:56:12.930050 kernel: PCI: CLS 0 bytes, default 64 Dec 13 08:56:12.930082 kernel: kvm [1]: HYP mode not available Dec 13 08:56:12.930091 kernel: Initialise system trusted keyrings Dec 13 08:56:12.930100 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Dec 13 08:56:12.930107 kernel: Key type asymmetric registered Dec 13 08:56:12.930115 kernel: Asymmetric key parser 'x509' registered Dec 13 08:56:12.930123 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Dec 13 08:56:12.930130 kernel: io scheduler mq-deadline registered Dec 13 08:56:12.930138 kernel: io scheduler kyber registered Dec 13 08:56:12.930147 kernel: io scheduler bfq registered Dec 13 08:56:12.930200 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37 Dec 13 08:56:12.930341 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 50 Dec 13 08:56:12.930414 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 50 Dec 13 08:56:12.930485 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 13 08:56:12.930558 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 51 Dec 13 08:56:12.930625 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 51 Dec 13 08:56:12.930692 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- 
IbPresDis- LLActRep+ Dec 13 08:56:12.930768 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 52 Dec 13 08:56:12.930835 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 52 Dec 13 08:56:12.930900 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 13 08:56:12.930971 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 53 Dec 13 08:56:12.931071 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 53 Dec 13 08:56:12.931141 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 13 08:56:12.931231 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 54 Dec 13 08:56:12.931300 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 54 Dec 13 08:56:12.931376 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 13 08:56:12.931447 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 55 Dec 13 08:56:12.931513 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 55 Dec 13 08:56:12.931580 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 13 08:56:12.931653 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 56 Dec 13 08:56:12.931720 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 56 Dec 13 08:56:12.931785 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 13 08:56:12.931853 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 57 Dec 13 08:56:12.931919 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 57 Dec 13 08:56:12.931985 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 13 08:56:12.931998 kernel: ACPI: \_SB_.PCI0.GSI3: Enabled at IRQ 38 Dec 13 08:56:12.932315 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 58 Dec 13 08:56:12.932391 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 58 Dec 13 08:56:12.932455 kernel: pcieport 0000:00:03.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 13 08:56:12.932465 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Dec 13 08:56:12.932473 kernel: ACPI: button: Power Button [PWRB] Dec 13 08:56:12.932481 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Dec 13 08:56:12.932556 kernel: virtio-pci 0000:03:00.0: enabling device (0000 -> 0002) Dec 13 08:56:12.932628 kernel: virtio-pci 0000:04:00.0: enabling device (0000 -> 0002) Dec 13 08:56:12.932700 kernel: virtio-pci 0000:07:00.0: enabling device (0000 -> 0002) Dec 13 08:56:12.932711 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Dec 13 08:56:12.932719 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Dec 13 08:56:12.932786 kernel: serial 0000:00:04.0: enabling device (0000 -> 0001) Dec 13 08:56:12.932796 kernel: 0000:00:04.0: ttyS0 at I/O 0xa000 (irq = 45, base_baud = 115200) is a 16550A Dec 13 08:56:12.932804 kernel: thunder_xcv, ver 1.0 Dec 13 08:56:12.932814 kernel: thunder_bgx, ver 1.0 Dec 13 08:56:12.932822 kernel: nicpf, ver 1.0 Dec 13 08:56:12.932830 kernel: nicvf, ver 1.0 Dec 13 08:56:12.932917 kernel: rtc-efi rtc-efi.0: registered as rtc0 Dec 13 08:56:12.932981 
kernel: rtc-efi rtc-efi.0: setting system clock to 2024-12-13T08:56:12 UTC (1734080172) Dec 13 08:56:12.932991 kernel: hid: raw HID events driver (C) Jiri Kosina Dec 13 08:56:12.933000 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available Dec 13 08:56:12.933008 kernel: watchdog: Delayed init of the lockup detector failed: -19 Dec 13 08:56:12.933033 kernel: watchdog: Hard watchdog permanently disabled Dec 13 08:56:12.933041 kernel: NET: Registered PF_INET6 protocol family Dec 13 08:56:12.933049 kernel: Segment Routing with IPv6 Dec 13 08:56:12.933057 kernel: In-situ OAM (IOAM) with IPv6 Dec 13 08:56:12.933065 kernel: NET: Registered PF_PACKET protocol family Dec 13 08:56:12.933072 kernel: Key type dns_resolver registered Dec 13 08:56:12.933080 kernel: registered taskstats version 1 Dec 13 08:56:12.933088 kernel: Loading compiled-in X.509 certificates Dec 13 08:56:12.933096 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.65-flatcar: d83da9ddb9e3c2439731828371f21d0232fd9ffb' Dec 13 08:56:12.933106 kernel: Key type .fscrypt registered Dec 13 08:56:12.933113 kernel: Key type fscrypt-provisioning registered Dec 13 08:56:12.933121 kernel: ima: No TPM chip found, activating TPM-bypass! Dec 13 08:56:12.933129 kernel: ima: Allocated hash algorithm: sha1 Dec 13 08:56:12.933136 kernel: ima: No architecture policies found Dec 13 08:56:12.933144 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Dec 13 08:56:12.933152 kernel: clk: Disabling unused clocks Dec 13 08:56:12.933190 kernel: Freeing unused kernel memory: 39360K Dec 13 08:56:12.933199 kernel: Run /init as init process Dec 13 08:56:12.933210 kernel: with arguments: Dec 13 08:56:12.933218 kernel: /init Dec 13 08:56:12.933226 kernel: with environment: Dec 13 08:56:12.933233 kernel: HOME=/ Dec 13 08:56:12.933241 kernel: TERM=linux Dec 13 08:56:12.933248 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Dec 13 08:56:12.933258 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Dec 13 08:56:12.933268 systemd[1]: Detected virtualization kvm. Dec 13 08:56:12.933278 systemd[1]: Detected architecture arm64. Dec 13 08:56:12.933286 systemd[1]: Running in initrd. Dec 13 08:56:12.933293 systemd[1]: No hostname configured, using default hostname. Dec 13 08:56:12.933301 systemd[1]: Hostname set to <localhost>. Dec 13 08:56:12.933310 systemd[1]: Initializing machine ID from VM UUID. Dec 13 08:56:12.933318 systemd[1]: Queued start job for default target initrd.target. Dec 13 08:56:12.933326 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 08:56:12.933334 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 08:56:12.933344 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Dec 13 08:56:12.933353 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 13 08:56:12.933361 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Dec 13 08:56:12.933369 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
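The Expecting device entries above use systemd's unit-name escaping for paths: the leading "/" is dropped, remaining "/" become "-", and bytes outside [A-Za-z0-9:_.] (including literal "-") become \xXX, which is how /dev/disk/by-label/ROOT turns into dev-disk-by\x2dlabel-ROOT.device. A rough reimplementation, approximating systemd-escape --path --suffix=device and glossing over edge cases such as a leading dot:

```python
def unit_from_path(path: str, suffix: str = "device") -> str:
    # Approximates `systemd-escape --path --suffix=device`.
    out = []
    for ch in path.strip("/"):
        if ch == "/":
            out.append("-")             # path separators become dashes
        elif ch.isalnum() or ch in ":_.":
            out.append(ch)              # allowed characters pass through
        else:
            # everything else, including literal "-", is hex-escaped
            out.extend("\\x%02x" % b for b in ch.encode())
    return "".join(out) + "." + suffix

print(unit_from_path("/dev/disk/by-label/ROOT"))
# -> dev-disk-by\x2dlabel-ROOT.device, matching the entries above
```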
Dec 13 08:56:12.933379 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Dec 13 08:56:12.933388 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Dec 13 08:56:12.933397 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 08:56:12.933406 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 13 08:56:12.933414 systemd[1]: Reached target paths.target - Path Units. Dec 13 08:56:12.933422 systemd[1]: Reached target slices.target - Slice Units. Dec 13 08:56:12.933430 systemd[1]: Reached target swap.target - Swaps. Dec 13 08:56:12.933438 systemd[1]: Reached target timers.target - Timer Units. Dec 13 08:56:12.933447 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Dec 13 08:56:12.933455 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 13 08:56:12.933463 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Dec 13 08:56:12.933473 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Dec 13 08:56:12.933481 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 13 08:56:12.933489 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 13 08:56:12.933498 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 08:56:12.933506 systemd[1]: Reached target sockets.target - Socket Units. Dec 13 08:56:12.933516 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Dec 13 08:56:12.933525 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 13 08:56:12.933533 systemd[1]: Finished network-cleanup.service - Network Cleanup. Dec 13 08:56:12.933542 systemd[1]: Starting systemd-fsck-usr.service... Dec 13 08:56:12.933551 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 13 08:56:12.933560 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 13 08:56:12.933568 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 08:56:12.933577 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Dec 13 08:56:12.933612 systemd-journald[235]: Collecting audit messages is disabled. Dec 13 08:56:12.933636 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 08:56:12.933644 systemd[1]: Finished systemd-fsck-usr.service. Dec 13 08:56:12.933653 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 13 08:56:12.933663 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 08:56:12.933672 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 13 08:56:12.933680 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 13 08:56:12.933689 systemd-journald[235]: Journal started Dec 13 08:56:12.933708 systemd-journald[235]: Runtime Journal (/run/log/journal/dfae17f040f2450e8f4b26ea7723b48b) is 8.0M, max 76.5M, 68.5M free. Dec 13 08:56:12.916102 systemd-modules-load[236]: Inserted module 'overlay' Dec 13 08:56:12.937068 systemd[1]: Started systemd-journald.service - Journal Service. Dec 13 08:56:12.940068 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. 
Update your scripts to load br_netfilter if you need this. Dec 13 08:56:12.942054 kernel: Bridge firewalling registered Dec 13 08:56:12.941573 systemd-modules-load[236]: Inserted module 'br_netfilter' Dec 13 08:56:12.943575 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 13 08:56:12.954291 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 13 08:56:12.963316 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 13 08:56:12.978244 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 13 08:56:12.981094 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 13 08:56:12.982563 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 08:56:12.989875 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 08:56:12.999245 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Dec 13 08:56:13.000001 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 08:56:13.009335 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 13 08:56:13.019366 dracut-cmdline[271]: dracut-dracut-053 Dec 13 08:56:13.024145 dracut-cmdline[271]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=9494f75a68cfbdce95d0d2f9b58d6d75bc38ee5b4e31dfc2a6da695ffafefba6 Dec 13 08:56:13.047501 systemd-resolved[273]: Positive Trust Anchors: Dec 13 08:56:13.048232 systemd-resolved[273]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 08:56:13.048268 systemd-resolved[273]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 13 08:56:13.057836 systemd-resolved[273]: Defaulting to hostname 'linux'. Dec 13 08:56:13.059474 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 13 08:56:13.060687 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 13 08:56:13.109072 kernel: SCSI subsystem initialized Dec 13 08:56:13.114059 kernel: Loading iSCSI transport class v2.0-870. Dec 13 08:56:13.122146 kernel: iscsi: registered transport (tcp) Dec 13 08:56:13.137066 kernel: iscsi: registered transport (qla4xxx) Dec 13 08:56:13.137134 kernel: QLogic iSCSI HBA Driver Dec 13 08:56:13.189094 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Dec 13 08:56:13.195346 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Dec 13 08:56:13.219100 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
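The bridge message near the top of this stretch warns that bridged traffic no longer traverses arp/ip/ip6tables unless br_netfilter is loaded. A minimal sketch of the fix it asks for (must run as root; the drop-in path follows systemd-modules-load(8)):

```python
import subprocess
from pathlib import Path

# Load the module now so bridged frames hit the netfilter hooks again.
subprocess.run(["modprobe", "br_netfilter"], check=True)

# Persist the module across reboots via systemd-modules-load.
Path("/etc/modules-load.d/br_netfilter.conf").write_text("br_netfilter\n")
```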
Dec 13 08:56:13.219213 kernel: device-mapper: uevent: version 1.0.3 Dec 13 08:56:13.219233 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Dec 13 08:56:13.269070 kernel: raid6: neonx8 gen() 15644 MB/s Dec 13 08:56:13.286058 kernel: raid6: neonx4 gen() 12339 MB/s Dec 13 08:56:13.303082 kernel: raid6: neonx2 gen() 13217 MB/s Dec 13 08:56:13.320087 kernel: raid6: neonx1 gen() 10359 MB/s Dec 13 08:56:13.337068 kernel: raid6: int64x8 gen() 6817 MB/s Dec 13 08:56:13.354071 kernel: raid6: int64x4 gen() 7258 MB/s Dec 13 08:56:13.371127 kernel: raid6: int64x2 gen() 6058 MB/s Dec 13 08:56:13.388113 kernel: raid6: int64x1 gen() 4984 MB/s Dec 13 08:56:13.388201 kernel: raid6: using algorithm neonx8 gen() 15644 MB/s Dec 13 08:56:13.405079 kernel: raid6: .... xor() 11776 MB/s, rmw enabled Dec 13 08:56:13.405193 kernel: raid6: using neon recovery algorithm Dec 13 08:56:13.410190 kernel: xor: measuring software checksum speed Dec 13 08:56:13.410263 kernel: 8regs : 19664 MB/sec Dec 13 08:56:13.410288 kernel: 32regs : 19395 MB/sec Dec 13 08:56:13.410311 kernel: arm64_neon : 26272 MB/sec Dec 13 08:56:13.411055 kernel: xor: using function: arm64_neon (26272 MB/sec) Dec 13 08:56:13.462069 kernel: Btrfs loaded, zoned=no, fsverity=no Dec 13 08:56:13.477377 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Dec 13 08:56:13.484285 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 08:56:13.497944 systemd-udevd[455]: Using default interface naming scheme 'v255'. Dec 13 08:56:13.501561 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 08:56:13.513490 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Dec 13 08:56:13.528838 dracut-pre-trigger[463]: rd.md=0: removing MD RAID activation Dec 13 08:56:13.570293 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Dec 13 08:56:13.587401 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 13 08:56:13.640074 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 08:56:13.648606 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Dec 13 08:56:13.669734 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Dec 13 08:56:13.672342 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Dec 13 08:56:13.672923 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 08:56:13.673947 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 13 08:56:13.681312 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Dec 13 08:56:13.703999 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Dec 13 08:56:13.759350 kernel: scsi host0: Virtio SCSI HBA Dec 13 08:56:13.788971 kernel: ACPI: bus type USB registered Dec 13 08:56:13.789131 kernel: scsi 0:0:0:0: CD-ROM QEMU QEMU CD-ROM 2.5+ PQ: 0 ANSI: 5 Dec 13 08:56:13.789184 kernel: scsi 0:0:0:1: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5 Dec 13 08:56:13.790198 kernel: usbcore: registered new interface driver usbfs Dec 13 08:56:13.790222 kernel: usbcore: registered new interface driver hub Dec 13 08:56:13.790233 kernel: usbcore: registered new device driver usb Dec 13 08:56:13.791370 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. 
Dec 13 08:56:13.791495 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 08:56:13.794580 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 13 08:56:13.795121 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 08:56:13.795291 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 08:56:13.797004 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 08:56:13.803345 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 08:56:13.819761 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 08:56:13.839281 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Dec 13 08:56:13.847518 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1 Dec 13 08:56:13.847632 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Dec 13 08:56:13.847722 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Dec 13 08:56:13.847804 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2 Dec 13 08:56:13.847884 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed Dec 13 08:56:13.847963 kernel: sr 0:0:0:0: Power-on or device reset occurred Dec 13 08:56:13.851200 kernel: hub 1-0:1.0: USB hub found Dec 13 08:56:13.851353 kernel: hub 1-0:1.0: 4 ports detected Dec 13 08:56:13.851434 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Dec 13 08:56:13.851576 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 16x/50x cd/rw xa/form2 cdda tray Dec 13 08:56:13.851668 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Dec 13 08:56:13.851679 kernel: hub 2-0:1.0: USB hub found Dec 13 08:56:13.851776 kernel: hub 2-0:1.0: 4 ports detected Dec 13 08:56:13.851868 kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0 Dec 13 08:56:13.840822 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 13 08:56:13.866083 kernel: sd 0:0:0:1: Power-on or device reset occurred Dec 13 08:56:13.876831 kernel: sd 0:0:0:1: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB) Dec 13 08:56:13.877265 kernel: sd 0:0:0:1: [sda] Write Protect is off Dec 13 08:56:13.877484 kernel: sd 0:0:0:1: [sda] Mode Sense: 63 00 00 08 Dec 13 08:56:13.877674 kernel: sd 0:0:0:1: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Dec 13 08:56:13.877860 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Dec 13 08:56:13.877885 kernel: GPT:17805311 != 80003071 Dec 13 08:56:13.877918 kernel: GPT:Alternate GPT header not at the end of the disk. Dec 13 08:56:13.877941 kernel: GPT:17805311 != 80003071 Dec 13 08:56:13.877962 kernel: GPT: Use GNU Parted to correct GPT errors. Dec 13 08:56:13.877985 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 08:56:13.878007 kernel: sd 0:0:0:1: [sda] Attached SCSI disk Dec 13 08:56:13.880436 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 08:56:13.919034 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (524) Dec 13 08:56:13.920620 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. 
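The "GPT:17805311 != 80003071" complaints above mean the primary GPT header still records the backup header at sector 17805311, where the original (smaller) disk image ended, while the grown disk's last sector is 80003071 (80003072 blocks, per the sd entry). A read-only sketch of that comparison, with header offsets per the UEFI spec; /dev/sda and root privileges are assumed:

```python
import struct

def gpt_backup_lba(dev="/dev/sda", sector=512):
    with open(dev, "rb") as f:
        f.seek(sector)                   # LBA 1 holds the primary GPT header
        hdr = f.read(92)
        assert hdr[:8] == b"EFI PART"    # GPT signature
        backup = struct.unpack_from("<Q", hdr, 32)[0]  # recorded backup-header LBA
        f.seek(0, 2)
        last = f.tell() // sector - 1    # actual last sector of the device
    return backup, last

print(gpt_backup_lba())   # (17805311, 80003071) on this machine, per the log
```

GNU Parted or sgdisk -e would relocate the backup structures to the real end of the disk; here disk-uuid.service rewrites the headers a few entries later.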
Dec 13 08:56:13.927035 kernel: BTRFS: device fsid 2893cd1e-612b-4262-912c-10787dc9c881 devid 1 transid 46 /dev/sda3 scanned by (udev-worker) (503) Dec 13 08:56:13.939398 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. Dec 13 08:56:13.948512 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Dec 13 08:56:13.953243 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. Dec 13 08:56:13.953846 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. Dec 13 08:56:13.960253 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Dec 13 08:56:13.971202 disk-uuid[575]: Primary Header is updated. Dec 13 08:56:13.971202 disk-uuid[575]: Secondary Entries is updated. Dec 13 08:56:13.971202 disk-uuid[575]: Secondary Header is updated. Dec 13 08:56:13.988095 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 08:56:13.996043 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 08:56:14.086929 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Dec 13 08:56:14.327079 kernel: usb 1-2: new high-speed USB device number 3 using xhci_hcd Dec 13 08:56:14.461714 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input1 Dec 13 08:56:14.461767 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0 Dec 13 08:56:14.464036 kernel: input: QEMU QEMU USB Keyboard as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-2/1-2:1.0/0003:0627:0001.0002/input/input2 Dec 13 08:56:14.519231 kernel: hid-generic 0003:0627:0001.0002: input,hidraw1: USB HID v1.11 Keyboard [QEMU QEMU USB Keyboard] on usb-0000:02:00.0-2/input0 Dec 13 08:56:14.520299 kernel: usbcore: registered new interface driver usbhid Dec 13 08:56:14.520335 kernel: usbhid: USB HID core driver Dec 13 08:56:14.997096 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 08:56:14.998064 disk-uuid[577]: The operation has completed successfully. Dec 13 08:56:15.050062 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 13 08:56:15.050818 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Dec 13 08:56:15.063335 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Dec 13 08:56:15.066837 sh[591]: Success Dec 13 08:56:15.078112 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Dec 13 08:56:15.134404 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Dec 13 08:56:15.142224 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Dec 13 08:56:15.143733 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Dec 13 08:56:15.175489 kernel: BTRFS info (device dm-0): first mount of filesystem 2893cd1e-612b-4262-912c-10787dc9c881 Dec 13 08:56:15.175564 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Dec 13 08:56:15.175587 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Dec 13 08:56:15.176394 kernel: BTRFS info (device dm-0): disabling log replay at mount time Dec 13 08:56:15.176444 kernel: BTRFS info (device dm-0): using free space tree Dec 13 08:56:15.182040 kernel: BTRFS info (device dm-0): enabling ssd optimizations Dec 13 08:56:15.183959 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. 
Dec 13 08:56:15.185902 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Dec 13 08:56:15.191326 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Dec 13 08:56:15.194657 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Dec 13 08:56:15.208040 kernel: BTRFS info (device sda6): first mount of filesystem dbef6a22-a801-4c1e-a0cd-3fc525f899dd Dec 13 08:56:15.208096 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Dec 13 08:56:15.208107 kernel: BTRFS info (device sda6): using free space tree Dec 13 08:56:15.213065 kernel: BTRFS info (device sda6): enabling ssd optimizations Dec 13 08:56:15.213130 kernel: BTRFS info (device sda6): auto enabling async discard Dec 13 08:56:15.224847 systemd[1]: mnt-oem.mount: Deactivated successfully. Dec 13 08:56:15.225847 kernel: BTRFS info (device sda6): last unmount of filesystem dbef6a22-a801-4c1e-a0cd-3fc525f899dd Dec 13 08:56:15.233068 systemd[1]: Finished ignition-setup.service - Ignition (setup). Dec 13 08:56:15.239245 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Dec 13 08:56:15.330220 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 13 08:56:15.333669 ignition[685]: Ignition 2.19.0 Dec 13 08:56:15.333846 ignition[685]: Stage: fetch-offline Dec 13 08:56:15.337376 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 13 08:56:15.333889 ignition[685]: no configs at "/usr/lib/ignition/base.d" Dec 13 08:56:15.338422 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Dec 13 08:56:15.333898 ignition[685]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Dec 13 08:56:15.334077 ignition[685]: parsed url from cmdline: "" Dec 13 08:56:15.334080 ignition[685]: no config URL provided Dec 13 08:56:15.334084 ignition[685]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 08:56:15.334092 ignition[685]: no config at "/usr/lib/ignition/user.ign" Dec 13 08:56:15.334097 ignition[685]: failed to fetch config: resource requires networking Dec 13 08:56:15.334433 ignition[685]: Ignition finished successfully Dec 13 08:56:15.368445 systemd-networkd[778]: lo: Link UP Dec 13 08:56:15.369171 systemd-networkd[778]: lo: Gained carrier Dec 13 08:56:15.370919 systemd-networkd[778]: Enumeration completed Dec 13 08:56:15.371162 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 13 08:56:15.371815 systemd[1]: Reached target network.target - Network. Dec 13 08:56:15.373564 systemd-networkd[778]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 08:56:15.373568 systemd-networkd[778]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 08:56:15.374638 systemd-networkd[778]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 08:56:15.374641 systemd-networkd[778]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 08:56:15.375258 systemd-networkd[778]: eth0: Link UP Dec 13 08:56:15.375261 systemd-networkd[778]: eth0: Gained carrier Dec 13 08:56:15.375268 systemd-networkd[778]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Dec 13 08:56:15.380324 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Dec 13 08:56:15.381094 systemd-networkd[778]: eth1: Link UP Dec 13 08:56:15.381098 systemd-networkd[778]: eth1: Gained carrier Dec 13 08:56:15.381109 systemd-networkd[778]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 08:56:15.394722 ignition[781]: Ignition 2.19.0 Dec 13 08:56:15.394735 ignition[781]: Stage: fetch Dec 13 08:56:15.394931 ignition[781]: no configs at "/usr/lib/ignition/base.d" Dec 13 08:56:15.394941 ignition[781]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Dec 13 08:56:15.395074 ignition[781]: parsed url from cmdline: "" Dec 13 08:56:15.395078 ignition[781]: no config URL provided Dec 13 08:56:15.395083 ignition[781]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 08:56:15.395090 ignition[781]: no config at "/usr/lib/ignition/user.ign" Dec 13 08:56:15.395110 ignition[781]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1 Dec 13 08:56:15.395806 ignition[781]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable Dec 13 08:56:15.415226 systemd-networkd[778]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1 Dec 13 08:56:15.433147 systemd-networkd[778]: eth0: DHCPv4 address 138.199.144.99/32, gateway 172.31.1.1 acquired from 172.31.1.1 Dec 13 08:56:15.596068 ignition[781]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2 Dec 13 08:56:15.602779 ignition[781]: GET result: OK Dec 13 08:56:15.603120 ignition[781]: parsing config with SHA512: e92be6f9bf0d78e80d82099c4f889d74a075063b106f945c6cecc19a9059a419225a41bcefa5d230c806926983249b64f4d7c499eedfdd31ce579eb9a4e46536 Dec 13 08:56:15.610384 unknown[781]: fetched base config from "system" Dec 13 08:56:15.610399 unknown[781]: fetched base config from "system" Dec 13 08:56:15.611045 ignition[781]: fetch: fetch complete Dec 13 08:56:15.610406 unknown[781]: fetched user config from "hetzner" Dec 13 08:56:15.611053 ignition[781]: fetch: fetch passed Dec 13 08:56:15.611110 ignition[781]: Ignition finished successfully Dec 13 08:56:15.613405 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Dec 13 08:56:15.620372 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Dec 13 08:56:15.635986 ignition[788]: Ignition 2.19.0 Dec 13 08:56:15.636036 ignition[788]: Stage: kargs Dec 13 08:56:15.636335 ignition[788]: no configs at "/usr/lib/ignition/base.d" Dec 13 08:56:15.636351 ignition[788]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Dec 13 08:56:15.637631 ignition[788]: kargs: kargs passed Dec 13 08:56:15.639154 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Dec 13 08:56:15.637738 ignition[788]: Ignition finished successfully Dec 13 08:56:15.645299 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Dec 13 08:56:15.660587 ignition[795]: Ignition 2.19.0 Dec 13 08:56:15.660598 ignition[795]: Stage: disks Dec 13 08:56:15.660774 ignition[795]: no configs at "/usr/lib/ignition/base.d" Dec 13 08:56:15.660784 ignition[795]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Dec 13 08:56:15.661773 ignition[795]: disks: disks passed Dec 13 08:56:15.661822 ignition[795]: Ignition finished successfully Dec 13 08:56:15.664935 systemd[1]: Finished ignition-disks.service - Ignition (disks). 
Dec 13 08:56:15.666106 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Dec 13 08:56:15.666874 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Dec 13 08:56:15.668564 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 13 08:56:15.670262 systemd[1]: Reached target sysinit.target - System Initialization. Dec 13 08:56:15.671911 systemd[1]: Reached target basic.target - Basic System. Dec 13 08:56:15.677262 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Dec 13 08:56:15.700085 systemd-fsck[804]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Dec 13 08:56:15.704010 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Dec 13 08:56:15.711357 systemd[1]: Mounting sysroot.mount - /sysroot... Dec 13 08:56:15.770102 kernel: EXT4-fs (sda9): mounted filesystem 32632247-db8d-4541-89c0-6f68c7fa7ee3 r/w with ordered data mode. Quota mode: none. Dec 13 08:56:15.774086 systemd[1]: Mounted sysroot.mount - /sysroot. Dec 13 08:56:15.775428 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Dec 13 08:56:15.784223 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 13 08:56:15.787156 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Dec 13 08:56:15.789493 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Dec 13 08:56:15.793166 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Dec 13 08:56:15.794300 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Dec 13 08:56:15.799701 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Dec 13 08:56:15.804322 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by mount (812) Dec 13 08:56:15.808147 kernel: BTRFS info (device sda6): first mount of filesystem dbef6a22-a801-4c1e-a0cd-3fc525f899dd Dec 13 08:56:15.808217 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Dec 13 08:56:15.811075 kernel: BTRFS info (device sda6): using free space tree Dec 13 08:56:15.813297 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Dec 13 08:56:15.820257 kernel: BTRFS info (device sda6): enabling ssd optimizations Dec 13 08:56:15.820298 kernel: BTRFS info (device sda6): auto enabling async discard Dec 13 08:56:15.819603 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Dec 13 08:56:15.868831 coreos-metadata[814]: Dec 13 08:56:15.868 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1 Dec 13 08:56:15.870822 coreos-metadata[814]: Dec 13 08:56:15.870 INFO Fetch successful Dec 13 08:56:15.873262 coreos-metadata[814]: Dec 13 08:56:15.871 INFO wrote hostname ci-4081-2-1-0-c10bd8c210 to /sysroot/etc/hostname Dec 13 08:56:15.875074 initrd-setup-root[839]: cut: /sysroot/etc/passwd: No such file or directory Dec 13 08:56:15.877366 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. 
Dec 13 08:56:15.882216 initrd-setup-root[847]: cut: /sysroot/etc/group: No such file or directory Dec 13 08:56:15.888367 initrd-setup-root[854]: cut: /sysroot/etc/shadow: No such file or directory Dec 13 08:56:15.894314 initrd-setup-root[861]: cut: /sysroot/etc/gshadow: No such file or directory Dec 13 08:56:16.033169 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Dec 13 08:56:16.043290 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Dec 13 08:56:16.047942 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Dec 13 08:56:16.058075 kernel: BTRFS info (device sda6): last unmount of filesystem dbef6a22-a801-4c1e-a0cd-3fc525f899dd Dec 13 08:56:16.080616 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Dec 13 08:56:16.085552 ignition[928]: INFO : Ignition 2.19.0 Dec 13 08:56:16.087601 ignition[928]: INFO : Stage: mount Dec 13 08:56:16.087601 ignition[928]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 08:56:16.087601 ignition[928]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Dec 13 08:56:16.087601 ignition[928]: INFO : mount: mount passed Dec 13 08:56:16.087601 ignition[928]: INFO : Ignition finished successfully Dec 13 08:56:16.089320 systemd[1]: Finished ignition-mount.service - Ignition (mount). Dec 13 08:56:16.096218 systemd[1]: Starting ignition-files.service - Ignition (files)... Dec 13 08:56:16.176590 systemd[1]: sysroot-oem.mount: Deactivated successfully. Dec 13 08:56:16.192516 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 13 08:56:16.211090 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (940) Dec 13 08:56:16.215252 kernel: BTRFS info (device sda6): first mount of filesystem dbef6a22-a801-4c1e-a0cd-3fc525f899dd Dec 13 08:56:16.215336 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Dec 13 08:56:16.215361 kernel: BTRFS info (device sda6): using free space tree Dec 13 08:56:16.221235 kernel: BTRFS info (device sda6): enabling ssd optimizations Dec 13 08:56:16.221322 kernel: BTRFS info (device sda6): auto enabling async discard Dec 13 08:56:16.225537 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Dec 13 08:56:16.247966 ignition[957]: INFO : Ignition 2.19.0 Dec 13 08:56:16.247966 ignition[957]: INFO : Stage: files Dec 13 08:56:16.249062 ignition[957]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 08:56:16.249062 ignition[957]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Dec 13 08:56:16.252095 ignition[957]: DEBUG : files: compiled without relabeling support, skipping Dec 13 08:56:16.252095 ignition[957]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 13 08:56:16.252095 ignition[957]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 13 08:56:16.254560 ignition[957]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 13 08:56:16.254560 ignition[957]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 13 08:56:16.256411 ignition[957]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 13 08:56:16.254873 unknown[957]: wrote ssh authorized keys file for user: core Dec 13 08:56:16.258189 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Dec 13 08:56:16.258189 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Dec 13 08:56:16.258189 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Dec 13 08:56:16.258189 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Dec 13 08:56:16.345318 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Dec 13 08:56:17.185533 systemd-networkd[778]: eth0: Gained IPv6LL Dec 13 08:56:17.185842 systemd-networkd[778]: eth1: Gained IPv6LL Dec 13 08:56:19.178305 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Dec 13 08:56:19.181594 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Dec 13 08:56:19.181594 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Dec 13 08:56:19.181594 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Dec 13 08:56:19.181594 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Dec 13 08:56:19.181594 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 08:56:19.181594 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 08:56:19.181594 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 08:56:19.181594 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 08:56:19.181594 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 08:56:19.181594 ignition[957]: INFO : files: 
createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 08:56:19.181594 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Dec 13 08:56:19.181594 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Dec 13 08:56:19.181594 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Dec 13 08:56:19.181594 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-arm64.raw: attempt #1 Dec 13 08:56:19.834632 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Dec 13 08:56:21.030348 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Dec 13 08:56:21.030348 ignition[957]: INFO : files: op(c): [started] processing unit "containerd.service" Dec 13 08:56:21.032770 ignition[957]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Dec 13 08:56:21.032770 ignition[957]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Dec 13 08:56:21.032770 ignition[957]: INFO : files: op(c): [finished] processing unit "containerd.service" Dec 13 08:56:21.032770 ignition[957]: INFO : files: op(e): [started] processing unit "prepare-helm.service" Dec 13 08:56:21.032770 ignition[957]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 08:56:21.032770 ignition[957]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 08:56:21.032770 ignition[957]: INFO : files: op(e): [finished] processing unit "prepare-helm.service" Dec 13 08:56:21.032770 ignition[957]: INFO : files: op(10): [started] processing unit "coreos-metadata.service" Dec 13 08:56:21.032770 ignition[957]: INFO : files: op(10): op(11): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Dec 13 08:56:21.032770 ignition[957]: INFO : files: op(10): op(11): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Dec 13 08:56:21.032770 ignition[957]: INFO : files: op(10): [finished] processing unit "coreos-metadata.service" Dec 13 08:56:21.032770 ignition[957]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Dec 13 08:56:21.032770 ignition[957]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Dec 13 08:56:21.053973 ignition[957]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 13 08:56:21.053973 ignition[957]: INFO : files: createResultFile: createFiles: op(13): [finished] writing 
file "/sysroot/etc/.ignition-result.json" Dec 13 08:56:21.053973 ignition[957]: INFO : files: files passed Dec 13 08:56:21.053973 ignition[957]: INFO : Ignition finished successfully Dec 13 08:56:21.035673 systemd[1]: Finished ignition-files.service - Ignition (files). Dec 13 08:56:21.045869 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Dec 13 08:56:21.049511 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Dec 13 08:56:21.058264 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 13 08:56:21.058372 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Dec 13 08:56:21.072693 initrd-setup-root-after-ignition[985]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 08:56:21.072693 initrd-setup-root-after-ignition[985]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Dec 13 08:56:21.074760 initrd-setup-root-after-ignition[989]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 08:56:21.077491 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 13 08:56:21.078645 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Dec 13 08:56:21.086404 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Dec 13 08:56:21.118462 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 13 08:56:21.118613 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Dec 13 08:56:21.120741 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Dec 13 08:56:21.121788 systemd[1]: Reached target initrd.target - Initrd Default Target. Dec 13 08:56:21.122504 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Dec 13 08:56:21.131311 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Dec 13 08:56:21.147516 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 13 08:56:21.156470 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Dec 13 08:56:21.173749 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Dec 13 08:56:21.174598 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 08:56:21.175641 systemd[1]: Stopped target timers.target - Timer Units. Dec 13 08:56:21.176546 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 13 08:56:21.176679 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 13 08:56:21.179501 systemd[1]: Stopped target initrd.target - Initrd Default Target. Dec 13 08:56:21.180594 systemd[1]: Stopped target basic.target - Basic System. Dec 13 08:56:21.181470 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Dec 13 08:56:21.182452 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Dec 13 08:56:21.183424 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Dec 13 08:56:21.184424 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Dec 13 08:56:21.185347 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Dec 13 08:56:21.186402 systemd[1]: Stopped target sysinit.target - System Initialization. 
Dec 13 08:56:21.187368 systemd[1]: Stopped target local-fs.target - Local File Systems. Dec 13 08:56:21.188302 systemd[1]: Stopped target swap.target - Swaps. Dec 13 08:56:21.189032 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 13 08:56:21.189221 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Dec 13 08:56:21.190350 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Dec 13 08:56:21.191331 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 08:56:21.192280 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Dec 13 08:56:21.193285 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 08:56:21.193962 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 13 08:56:21.194165 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Dec 13 08:56:21.195481 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 13 08:56:21.195652 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 13 08:56:21.196637 systemd[1]: ignition-files.service: Deactivated successfully. Dec 13 08:56:21.196774 systemd[1]: Stopped ignition-files.service - Ignition (files). Dec 13 08:56:21.197648 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Dec 13 08:56:21.197782 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Dec 13 08:56:21.208442 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Dec 13 08:56:21.209072 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 13 08:56:21.209328 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 08:56:21.215327 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Dec 13 08:56:21.215801 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 13 08:56:21.215925 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 08:56:21.216638 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 13 08:56:21.216733 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Dec 13 08:56:21.227555 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 13 08:56:21.227661 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Dec 13 08:56:21.236007 ignition[1009]: INFO : Ignition 2.19.0 Dec 13 08:56:21.236007 ignition[1009]: INFO : Stage: umount Dec 13 08:56:21.236007 ignition[1009]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 08:56:21.236007 ignition[1009]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Dec 13 08:56:21.236007 ignition[1009]: INFO : umount: umount passed Dec 13 08:56:21.236007 ignition[1009]: INFO : Ignition finished successfully Dec 13 08:56:21.237182 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 13 08:56:21.237293 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Dec 13 08:56:21.239991 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 13 08:56:21.241837 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 13 08:56:21.241928 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Dec 13 08:56:21.245917 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 13 08:56:21.245989 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). 
Dec 13 08:56:21.247071 systemd[1]: ignition-fetch.service: Deactivated successfully. Dec 13 08:56:21.247153 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Dec 13 08:56:21.247927 systemd[1]: Stopped target network.target - Network. Dec 13 08:56:21.248705 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 13 08:56:21.248755 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Dec 13 08:56:21.249759 systemd[1]: Stopped target paths.target - Path Units. Dec 13 08:56:21.250488 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 13 08:56:21.254103 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 08:56:21.254701 systemd[1]: Stopped target slices.target - Slice Units. Dec 13 08:56:21.255723 systemd[1]: Stopped target sockets.target - Socket Units. Dec 13 08:56:21.256711 systemd[1]: iscsid.socket: Deactivated successfully. Dec 13 08:56:21.256779 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Dec 13 08:56:21.257945 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 13 08:56:21.258004 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 13 08:56:21.259268 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 13 08:56:21.259349 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Dec 13 08:56:21.260518 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Dec 13 08:56:21.260561 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Dec 13 08:56:21.261739 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Dec 13 08:56:21.262725 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Dec 13 08:56:21.263770 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 13 08:56:21.263860 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Dec 13 08:56:21.264840 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 13 08:56:21.264921 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Dec 13 08:56:21.267121 systemd-networkd[778]: eth1: DHCPv6 lease lost Dec 13 08:56:21.271240 systemd-networkd[778]: eth0: DHCPv6 lease lost Dec 13 08:56:21.273842 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 13 08:56:21.273968 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Dec 13 08:56:21.277656 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 13 08:56:21.277790 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Dec 13 08:56:21.280393 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 13 08:56:21.280464 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Dec 13 08:56:21.290385 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Dec 13 08:56:21.291387 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 13 08:56:21.291496 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 13 08:56:21.294533 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 08:56:21.294634 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Dec 13 08:56:21.300204 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 13 08:56:21.300296 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. 
Dec 13 08:56:21.301444 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Dec 13 08:56:21.301496 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 08:56:21.302333 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 08:56:21.312833 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 13 08:56:21.313010 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 08:56:21.314764 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 13 08:56:21.314806 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Dec 13 08:56:21.316519 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 13 08:56:21.316564 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 08:56:21.317538 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 13 08:56:21.317588 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Dec 13 08:56:21.318983 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 13 08:56:21.319053 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Dec 13 08:56:21.320201 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 08:56:21.320250 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 08:56:21.327254 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Dec 13 08:56:21.328702 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Dec 13 08:56:21.329408 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 08:56:21.331287 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 08:56:21.331352 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 08:56:21.333740 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 13 08:56:21.333881 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Dec 13 08:56:21.335260 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 13 08:56:21.335356 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Dec 13 08:56:21.337358 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Dec 13 08:56:21.350445 systemd[1]: Starting initrd-switch-root.service - Switch Root... Dec 13 08:56:21.364776 systemd[1]: Switching root. Dec 13 08:56:21.402627 systemd-journald[235]: Journal stopped Dec 13 08:56:22.504738 systemd-journald[235]: Received SIGTERM from PID 1 (systemd). Dec 13 08:56:22.504849 kernel: SELinux: policy capability network_peer_controls=1 Dec 13 08:56:22.504865 kernel: SELinux: policy capability open_perms=1 Dec 13 08:56:22.504875 kernel: SELinux: policy capability extended_socket_class=1 Dec 13 08:56:22.504885 kernel: SELinux: policy capability always_check_network=0 Dec 13 08:56:22.504899 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 13 08:56:22.504909 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 13 08:56:22.504920 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 13 08:56:22.504930 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 13 08:56:22.504948 kernel: audit: type=1403 audit(1734080181.714:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Dec 13 08:56:22.504960 systemd[1]: Successfully loaded SELinux policy in 34.262ms. 
Dec 13 08:56:22.504980 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 11.230ms. Dec 13 08:56:22.504991 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Dec 13 08:56:22.505001 systemd[1]: Detected virtualization kvm. Dec 13 08:56:22.509135 systemd[1]: Detected architecture arm64. Dec 13 08:56:22.509187 systemd[1]: Detected first boot. Dec 13 08:56:22.509213 systemd[1]: Hostname set to <ci-4081-2-1-0-c10bd8c210>. Dec 13 08:56:22.509229 systemd[1]: Initializing machine ID from VM UUID. Dec 13 08:56:22.509245 zram_generator::config[1069]: No configuration found. Dec 13 08:56:22.509258 systemd[1]: Populated /etc with preset unit settings. Dec 13 08:56:22.509271 systemd[1]: Queued start job for default target multi-user.target. Dec 13 08:56:22.509284 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Dec 13 08:56:22.509297 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Dec 13 08:56:22.509310 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Dec 13 08:56:22.509323 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Dec 13 08:56:22.509336 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Dec 13 08:56:22.509347 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Dec 13 08:56:22.509360 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Dec 13 08:56:22.509372 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Dec 13 08:56:22.509384 systemd[1]: Created slice user.slice - User and Session Slice. Dec 13 08:56:22.509397 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 08:56:22.509409 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 08:56:22.509422 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Dec 13 08:56:22.509436 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Dec 13 08:56:22.509453 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Dec 13 08:56:22.509466 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 13 08:56:22.509479 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Dec 13 08:56:22.509491 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 08:56:22.509502 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Dec 13 08:56:22.509514 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 08:56:22.509532 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 13 08:56:22.509544 systemd[1]: Reached target slices.target - Slice Units. Dec 13 08:56:22.509556 systemd[1]: Reached target swap.target - Swaps. Dec 13 08:56:22.509568 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Dec 13 08:56:22.509580 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Dec 13 08:56:22.509592 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Dec 13 08:56:22.509604 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Dec 13 08:56:22.509615 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 13 08:56:22.509631 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 13 08:56:22.509643 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 08:56:22.509655 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Dec 13 08:56:22.509666 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Dec 13 08:56:22.509678 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Dec 13 08:56:22.509689 systemd[1]: Mounting media.mount - External Media Directory... Dec 13 08:56:22.509701 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Dec 13 08:56:22.509713 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Dec 13 08:56:22.509724 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Dec 13 08:56:22.509738 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Dec 13 08:56:22.509750 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 08:56:22.509765 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 13 08:56:22.509780 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Dec 13 08:56:22.509792 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 08:56:22.509803 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 13 08:56:22.509817 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 08:56:22.509829 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Dec 13 08:56:22.509841 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 08:56:22.509853 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 13 08:56:22.509865 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Dec 13 08:56:22.509878 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) Dec 13 08:56:22.509889 kernel: fuse: init (API version 7.39) Dec 13 08:56:22.509902 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 13 08:56:22.509916 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 13 08:56:22.509927 kernel: ACPI: bus type drm_connector registered Dec 13 08:56:22.509938 kernel: loop: module loaded Dec 13 08:56:22.509950 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 13 08:56:22.509962 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Dec 13 08:56:22.509974 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 13 08:56:22.509987 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Dec 13 08:56:22.509998 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. 
Dec 13 08:56:22.510010 systemd[1]: Mounted media.mount - External Media Directory. Dec 13 08:56:22.510104 systemd-journald[1158]: Collecting audit messages is disabled. Dec 13 08:56:22.510136 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Dec 13 08:56:22.510149 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Dec 13 08:56:22.510162 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Dec 13 08:56:22.510175 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Dec 13 08:56:22.510187 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 08:56:22.510199 systemd-journald[1158]: Journal started Dec 13 08:56:22.510226 systemd-journald[1158]: Runtime Journal (/run/log/journal/dfae17f040f2450e8f4b26ea7723b48b) is 8.0M, max 76.5M, 68.5M free. Dec 13 08:56:22.515198 systemd[1]: Started systemd-journald.service - Journal Service. Dec 13 08:56:22.513569 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 13 08:56:22.513741 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Dec 13 08:56:22.514856 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 08:56:22.514992 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 08:56:22.523697 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 08:56:22.524095 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 13 08:56:22.526789 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 08:56:22.526953 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 08:56:22.528204 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 13 08:56:22.528444 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Dec 13 08:56:22.529371 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 08:56:22.529645 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 08:56:22.530807 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 13 08:56:22.532199 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 13 08:56:22.533960 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Dec 13 08:56:22.545845 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 13 08:56:22.553298 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Dec 13 08:56:22.557300 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Dec 13 08:56:22.561192 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 13 08:56:22.566339 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Dec 13 08:56:22.575337 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Dec 13 08:56:22.576170 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 08:56:22.587267 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Dec 13 08:56:22.589279 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. 
Dec 13 08:56:22.597257 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 13 08:56:22.598969 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 13 08:56:22.607469 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Dec 13 08:56:22.608190 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Dec 13 08:56:22.619920 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Dec 13 08:56:22.624401 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Dec 13 08:56:22.629187 systemd-journald[1158]: Time spent on flushing to /var/log/journal/dfae17f040f2450e8f4b26ea7723b48b is 31.050ms for 1114 entries. Dec 13 08:56:22.629187 systemd-journald[1158]: System Journal (/var/log/journal/dfae17f040f2450e8f4b26ea7723b48b) is 8.0M, max 584.8M, 576.8M free. Dec 13 08:56:22.669514 systemd-journald[1158]: Received client request to flush runtime journal. Dec 13 08:56:22.668479 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 08:56:22.671667 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 13 08:56:22.677640 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Dec 13 08:56:22.687854 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Dec 13 08:56:22.693112 systemd-tmpfiles[1205]: ACLs are not supported, ignoring. Dec 13 08:56:22.693130 systemd-tmpfiles[1205]: ACLs are not supported, ignoring. Dec 13 08:56:22.699160 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 13 08:56:22.704861 systemd[1]: Starting systemd-sysusers.service - Create System Users... Dec 13 08:56:22.710657 udevadm[1220]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Dec 13 08:56:22.756632 systemd[1]: Finished systemd-sysusers.service - Create System Users. Dec 13 08:56:22.764273 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 13 08:56:22.785190 systemd-tmpfiles[1227]: ACLs are not supported, ignoring. Dec 13 08:56:22.785591 systemd-tmpfiles[1227]: ACLs are not supported, ignoring. Dec 13 08:56:22.791365 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 08:56:23.151927 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Dec 13 08:56:23.159276 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 08:56:23.183589 systemd-udevd[1233]: Using default interface naming scheme 'v255'. Dec 13 08:56:23.205889 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 08:56:23.218335 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 13 08:56:23.234427 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Dec 13 08:56:23.287300 systemd[1]: Found device dev-ttyAMA0.device - /dev/ttyAMA0. 
Dec 13 08:56:23.292888 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1244) Dec 13 08:56:23.293112 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1244) Dec 13 08:56:23.300929 systemd[1]: Started systemd-userdbd.service - User Database Manager. Dec 13 08:56:23.394075 kernel: mousedev: PS/2 mouse device common for all mice Dec 13 08:56:23.395753 systemd-networkd[1243]: lo: Link UP Dec 13 08:56:23.396393 systemd-networkd[1243]: lo: Gained carrier Dec 13 08:56:23.398859 systemd-networkd[1243]: Enumeration completed Dec 13 08:56:23.399613 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 13 08:56:23.401260 systemd-networkd[1243]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 08:56:23.401266 systemd-networkd[1243]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 08:56:23.404369 systemd-networkd[1243]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 08:56:23.404452 systemd-networkd[1243]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 08:56:23.407899 systemd-networkd[1243]: eth0: Link UP Dec 13 08:56:23.407908 systemd-networkd[1243]: eth0: Gained carrier Dec 13 08:56:23.407927 systemd-networkd[1243]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 08:56:23.408441 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Dec 13 08:56:23.415085 systemd-networkd[1243]: eth1: Link UP Dec 13 08:56:23.415095 systemd-networkd[1243]: eth1: Gained carrier Dec 13 08:56:23.415159 systemd-networkd[1243]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 08:56:23.418974 systemd-networkd[1243]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 08:56:23.433375 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 46 scanned by (udev-worker) (1242) Dec 13 08:56:23.447170 systemd-networkd[1243]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1 Dec 13 08:56:23.478547 systemd[1]: Condition check resulted in dev-vport2p1.device - /dev/vport2p1 being skipped. Dec 13 08:56:23.478570 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped. Dec 13 08:56:23.483475 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 08:56:23.493240 systemd-networkd[1243]: eth0: DHCPv4 address 138.199.144.99/32, gateway 172.31.1.1 acquired from 172.31.1.1 Dec 13 08:56:23.496783 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 08:56:23.503205 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 08:56:23.508693 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 08:56:23.509875 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). 
Dec 13 08:56:23.509916 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 13 08:56:23.510299 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 08:56:23.510470 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 08:56:23.533468 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 08:56:23.533653 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 08:56:23.535523 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 08:56:23.543822 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 08:56:23.547278 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 08:56:23.548405 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 13 08:56:23.561252 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Dec 13 08:56:23.565534 kernel: [drm] pci: virtio-gpu-pci detected at 0000:00:01.0 Dec 13 08:56:23.565606 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Dec 13 08:56:23.565619 kernel: [drm] features: -context_init Dec 13 08:56:23.567002 kernel: [drm] number of scanouts: 1 Dec 13 08:56:23.567111 kernel: [drm] number of cap sets: 0 Dec 13 08:56:23.567124 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0 Dec 13 08:56:23.575177 kernel: Console: switching to colour frame buffer device 160x50 Dec 13 08:56:23.583066 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device Dec 13 08:56:23.589533 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 08:56:23.655775 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 08:56:23.725857 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Dec 13 08:56:23.735320 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Dec 13 08:56:23.750040 lvm[1302]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 08:56:23.778885 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Dec 13 08:56:23.780925 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 13 08:56:23.792350 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Dec 13 08:56:23.799640 lvm[1305]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 08:56:23.826916 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Dec 13 08:56:23.829326 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Dec 13 08:56:23.830414 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 13 08:56:23.830465 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 13 08:56:23.831825 systemd[1]: Reached target machines.target - Containers. Dec 13 08:56:23.834340 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). 
Dec 13 08:56:23.843127 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Dec 13 08:56:23.850266 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Dec 13 08:56:23.852375 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 08:56:23.854305 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Dec 13 08:56:23.859299 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Dec 13 08:56:23.863274 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Dec 13 08:56:23.866930 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Dec 13 08:56:23.886287 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Dec 13 08:56:23.898432 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 13 08:56:23.899669 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Dec 13 08:56:23.902640 kernel: loop0: detected capacity change from 0 to 114432 Dec 13 08:56:23.927598 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 13 08:56:23.949041 kernel: loop1: detected capacity change from 0 to 114328 Dec 13 08:56:23.983042 kernel: loop2: detected capacity change from 0 to 194512 Dec 13 08:56:24.018140 kernel: loop3: detected capacity change from 0 to 8 Dec 13 08:56:24.038130 kernel: loop4: detected capacity change from 0 to 114432 Dec 13 08:56:24.050035 kernel: loop5: detected capacity change from 0 to 114328 Dec 13 08:56:24.062067 kernel: loop6: detected capacity change from 0 to 194512 Dec 13 08:56:24.080045 kernel: loop7: detected capacity change from 0 to 8 Dec 13 08:56:24.080343 (sd-merge)[1327]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'. Dec 13 08:56:24.080800 (sd-merge)[1327]: Merged extensions into '/usr'. Dec 13 08:56:24.085989 systemd[1]: Reloading requested from client PID 1313 ('systemd-sysext') (unit systemd-sysext.service)... Dec 13 08:56:24.086007 systemd[1]: Reloading... Dec 13 08:56:24.163632 zram_generator::config[1353]: No configuration found. Dec 13 08:56:24.297340 ldconfig[1310]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 13 08:56:24.296939 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 08:56:24.353863 systemd[1]: Reloading finished in 267 ms. Dec 13 08:56:24.372460 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Dec 13 08:56:24.375464 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Dec 13 08:56:24.383395 systemd[1]: Starting ensure-sysext.service... Dec 13 08:56:24.388388 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 13 08:56:24.396351 systemd[1]: Reloading requested from client PID 1399 ('systemctl') (unit ensure-sysext.service)... Dec 13 08:56:24.396366 systemd[1]: Reloading... Dec 13 08:56:24.421146 systemd-tmpfiles[1400]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. 
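(sd-merge) reports four system extensions being overlaid onto /usr, which is why the service manager reload follows immediately. The merged state can be inspected and rebuilt at runtime; a sketch assuming the standard systemd-sysext tool:

    systemd-sysext status    # which hierarchies are merged, and from which images
    systemd-sysext refresh   # unmerge and re-merge after adding or removing an extension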
Dec 13 08:56:24.421855 systemd-tmpfiles[1400]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Dec 13 08:56:24.422713 systemd-tmpfiles[1400]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Dec 13 08:56:24.423131 systemd-tmpfiles[1400]: ACLs are not supported, ignoring. Dec 13 08:56:24.423263 systemd-tmpfiles[1400]: ACLs are not supported, ignoring. Dec 13 08:56:24.426423 systemd-tmpfiles[1400]: Detected autofs mount point /boot during canonicalization of boot. Dec 13 08:56:24.426719 systemd-tmpfiles[1400]: Skipping /boot Dec 13 08:56:24.436511 systemd-tmpfiles[1400]: Detected autofs mount point /boot during canonicalization of boot. Dec 13 08:56:24.436523 systemd-tmpfiles[1400]: Skipping /boot Dec 13 08:56:24.482097 zram_generator::config[1435]: No configuration found. Dec 13 08:56:24.595933 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 08:56:24.652123 systemd[1]: Reloading finished in 255 ms. Dec 13 08:56:24.671898 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 08:56:24.673304 systemd-networkd[1243]: eth0: Gained IPv6LL Dec 13 08:56:24.681768 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Dec 13 08:56:24.704619 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Dec 13 08:56:24.711788 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Dec 13 08:56:24.715491 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Dec 13 08:56:24.724165 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 13 08:56:24.739953 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Dec 13 08:56:24.747822 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 08:56:24.749822 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 08:56:24.756307 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 08:56:24.772703 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 08:56:24.775365 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 08:56:24.776267 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Dec 13 08:56:24.789498 systemd[1]: Starting systemd-update-done.service - Update is Completed... Dec 13 08:56:24.790501 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 08:56:24.790674 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 08:56:24.794575 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 08:56:24.794752 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 08:56:24.798536 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 08:56:24.802954 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. 
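systemd-tmpfiles flags a handful of duplicate path lines and skips /boot because it sits behind an autofs mount point; none of this is fatal. To see the fully merged tmpfiles configuration and which fragment wins for each path, assuming a standard systemd install:

    systemd-tmpfiles --cat-config | less   # merged view of every tmpfiles.d fragment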
Dec 13 08:56:24.811342 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 08:56:24.811542 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 08:56:24.815747 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 08:56:24.823371 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 08:56:24.826331 augenrules[1508]: No rules Dec 13 08:56:24.827139 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 13 08:56:24.832300 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 08:56:24.834168 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 08:56:24.835717 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Dec 13 08:56:24.849863 systemd[1]: Finished ensure-sysext.service. Dec 13 08:56:24.854400 systemd[1]: Finished systemd-update-done.service - Update is Completed. Dec 13 08:56:24.855990 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 08:56:24.856388 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 08:56:24.864083 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 08:56:24.864258 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 08:56:24.869931 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 08:56:24.871486 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 13 08:56:24.876598 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 08:56:24.876702 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 13 08:56:24.889562 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Dec 13 08:56:24.890556 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Dec 13 08:56:24.892928 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 08:56:24.902799 systemd-resolved[1485]: Positive Trust Anchors: Dec 13 08:56:24.903164 systemd-resolved[1485]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 08:56:24.903200 systemd-resolved[1485]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 13 08:56:24.908367 systemd-resolved[1485]: Using system hostname 'ci-4081-2-1-0-c10bd8c210'. Dec 13 08:56:24.910687 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 13 08:56:24.911429 systemd[1]: Reached target network.target - Network. 
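systemd-resolved starts with the root DNSSEC trust anchor (. IN DS 20326 ...) plus the usual negative trust anchors for private and reverse zones, and derives the hostname ci-4081-2-1-0-c10bd8c210. The equivalent interactive checks, assuming resolved is the active stub resolver:

    resolvectl status             # per-link DNS servers and DNSSEC settings
    resolvectl query example.com  # resolve through the local stub resolver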
Dec 13 08:56:24.911868 systemd[1]: Reached target network-online.target - Network is Online. Dec 13 08:56:24.913172 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 13 08:56:24.945194 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Dec 13 08:56:24.947436 systemd[1]: Reached target sysinit.target - System Initialization. Dec 13 08:56:24.948152 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Dec 13 08:56:24.948764 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Dec 13 08:56:24.949655 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Dec 13 08:56:24.950351 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 13 08:56:24.950388 systemd[1]: Reached target paths.target - Path Units. Dec 13 08:56:24.950877 systemd[1]: Reached target time-set.target - System Time Set. Dec 13 08:56:24.951626 systemd[1]: Started logrotate.timer - Daily rotation of log files. Dec 13 08:56:24.952337 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Dec 13 08:56:24.952957 systemd[1]: Reached target timers.target - Timer Units. Dec 13 08:56:24.955133 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Dec 13 08:56:24.957453 systemd[1]: Starting docker.socket - Docker Socket for the API... Dec 13 08:56:24.959264 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Dec 13 08:56:24.963868 systemd[1]: Listening on docker.socket - Docker Socket for the API. Dec 13 08:56:24.964675 systemd[1]: Reached target sockets.target - Socket Units. Dec 13 08:56:24.965541 systemd[1]: Reached target basic.target - Basic System. Dec 13 08:56:24.966730 systemd[1]: System is tainted: cgroupsv1 Dec 13 08:56:24.966807 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Dec 13 08:56:24.966847 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Dec 13 08:56:24.969236 systemd[1]: Starting containerd.service - containerd container runtime... Dec 13 08:56:24.975320 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Dec 13 08:56:24.980107 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Dec 13 08:56:24.989334 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Dec 13 08:56:24.999244 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Dec 13 08:56:25.000218 jq[1542]: false Dec 13 08:56:24.999785 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Dec 13 08:56:25.002478 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
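dbus.socket, docker.socket and sshd.socket are all set up ahead of their services, so each daemon can be socket-activated on first use; the "System is tainted: cgroupsv1" line merely records that the host boots with the legacy cgroup hierarchy. To list what is being held open this way, assuming a standard systemd:

    systemctl list-sockets   # socket units and the services they would activate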
Dec 13 08:56:25.017216 coreos-metadata[1539]: Dec 13 08:56:25.017 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1 Dec 13 08:56:25.020283 coreos-metadata[1539]: Dec 13 08:56:25.018 INFO Fetch successful Dec 13 08:56:25.020283 coreos-metadata[1539]: Dec 13 08:56:25.018 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1 Dec 13 08:56:25.020283 coreos-metadata[1539]: Dec 13 08:56:25.019 INFO Fetch successful Dec 13 08:56:25.019887 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Dec 13 08:56:25.030379 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Dec 13 08:56:25.034735 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Dec 13 08:56:25.042407 dbus-daemon[1541]: [system] SELinux support is enabled Dec 13 08:56:25.046814 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent. Dec 13 08:56:25.052731 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Dec 13 08:56:25.061278 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Dec 13 08:56:25.065478 extend-filesystems[1545]: Found loop4 Dec 13 08:56:25.065478 extend-filesystems[1545]: Found loop5 Dec 13 08:56:25.065478 extend-filesystems[1545]: Found loop6 Dec 13 08:56:25.065478 extend-filesystems[1545]: Found loop7 Dec 13 08:56:25.065478 extend-filesystems[1545]: Found sda Dec 13 08:56:25.065478 extend-filesystems[1545]: Found sda1 Dec 13 08:56:25.065478 extend-filesystems[1545]: Found sda2 Dec 13 08:56:25.065478 extend-filesystems[1545]: Found sda3 Dec 13 08:56:25.065478 extend-filesystems[1545]: Found usr Dec 13 08:56:25.065478 extend-filesystems[1545]: Found sda4 Dec 13 08:56:25.065478 extend-filesystems[1545]: Found sda6 Dec 13 08:56:25.065478 extend-filesystems[1545]: Found sda7 Dec 13 08:56:25.065478 extend-filesystems[1545]: Found sda9 Dec 13 08:56:25.065478 extend-filesystems[1545]: Checking size of /dev/sda9 Dec 13 08:56:25.080812 systemd[1]: Starting systemd-logind.service - User Login Management... Dec 13 08:56:25.082493 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 13 08:56:25.090302 systemd[1]: Starting update-engine.service - Update Engine... Dec 13 08:56:25.095277 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Dec 13 08:56:25.096457 systemd[1]: Started dbus.service - D-Bus System Message Bus. Dec 13 08:56:25.116891 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 13 08:56:25.117245 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Dec 13 08:56:25.119125 systemd[1]: motdgen.service: Deactivated successfully. Dec 13 08:56:25.119380 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Dec 13 08:56:25.123080 extend-filesystems[1545]: Resized partition /dev/sda9 Dec 13 08:56:25.135806 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Dec 13 08:56:25.139550 extend-filesystems[1586]: resize2fs 1.47.1 (20-May-2024) Dec 13 08:56:25.147971 jq[1573]: true Dec 13 08:56:25.151459 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 13 08:56:25.151703 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
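coreos-metadata fetches the instance metadata from Hetzner's link-local endpoint on the first attempt. The same documents can be pulled by hand for debugging; both URLs are taken straight from the log above:

    curl -s http://169.254.169.254/hetzner/v1/metadata
    curl -s http://169.254.169.254/hetzner/v1/metadata/private-networks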
Dec 13 08:56:25.160099 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks Dec 13 08:56:25.197420 (ntainerd)[1593]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Dec 13 08:56:25.200311 jq[1592]: true Dec 13 08:56:25.238489 tar[1588]: linux-arm64/helm Dec 13 08:56:25.237354 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 13 08:56:25.237466 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Dec 13 08:56:25.239135 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 13 08:56:25.239161 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Dec 13 08:56:25.261454 update_engine[1570]: I20241213 08:56:25.260877 1570 main.cc:92] Flatcar Update Engine starting Dec 13 08:56:25.265407 update_engine[1570]: I20241213 08:56:25.265339 1570 update_check_scheduler.cc:74] Next update check in 10m59s Dec 13 08:56:25.266078 systemd[1]: Started update-engine.service - Update Engine. Dec 13 08:56:25.285131 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Dec 13 08:56:25.290777 systemd[1]: Started locksmithd.service - Cluster reboot manager. Dec 13 08:56:25.297068 kernel: EXT4-fs (sda9): resized filesystem to 9393147 Dec 13 08:56:25.300734 systemd-logind[1564]: New seat seat0. Dec 13 08:56:25.327399 extend-filesystems[1586]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Dec 13 08:56:25.327399 extend-filesystems[1586]: old_desc_blocks = 1, new_desc_blocks = 5 Dec 13 08:56:25.327399 extend-filesystems[1586]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long. Dec 13 08:56:25.318195 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Dec 13 08:56:25.343247 bash[1633]: Updated "/home/core/.ssh/authorized_keys" Dec 13 08:56:25.343617 extend-filesystems[1545]: Resized filesystem in /dev/sda9 Dec 13 08:56:25.343617 extend-filesystems[1545]: Found sr0 Dec 13 08:56:25.333094 systemd-logind[1564]: Watching system buttons on /dev/input/event0 (Power Button) Dec 13 08:56:25.333110 systemd-logind[1564]: Watching system buttons on /dev/input/event2 (QEMU QEMU USB Keyboard) Dec 13 08:56:25.364033 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 46 scanned by (udev-worker) (1234) Dec 13 08:56:25.377232 systemd-networkd[1243]: eth1: Gained IPv6LL Dec 13 08:56:25.404822 systemd[1]: Started systemd-logind.service - User Login Management. Dec 13 08:56:25.408141 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 13 08:56:25.408445 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Dec 13 08:56:25.409581 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Dec 13 08:56:25.439882 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Dec 13 08:56:25.456388 systemd[1]: Starting sshkeys.service... 
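extend-filesystems grows the root ext4 filesystem online from 1617920 to 9393147 4k blocks (roughly 6.2 GiB to 35.8 GiB). The equivalent manual sequence, assuming the cloud-utils growpart tool is available (it does not appear in this log):

    growpart /dev/sda 9   # grow partition 9 to fill the disk
    resize2fs /dev/sda9   # online-grow the mounted ext4 filesystem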
Dec 13 08:56:25.499032 containerd[1593]: time="2024-12-13T08:56:25.497863680Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Dec 13 08:56:25.503453 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Dec 13 08:56:25.515378 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Dec 13 08:56:25.539447 systemd-timesyncd[1532]: Contacted time server 129.250.35.250:123 (0.flatcar.pool.ntp.org). Dec 13 08:56:25.539806 systemd-timesyncd[1532]: Initial clock synchronization to Fri 2024-12-13 08:56:25.324171 UTC. Dec 13 08:56:25.573391 coreos-metadata[1646]: Dec 13 08:56:25.573 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1 Dec 13 08:56:25.576480 coreos-metadata[1646]: Dec 13 08:56:25.575 INFO Fetch successful Dec 13 08:56:25.578651 unknown[1646]: wrote ssh authorized keys file for user: core Dec 13 08:56:25.586071 containerd[1593]: time="2024-12-13T08:56:25.584376640Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Dec 13 08:56:25.594921 containerd[1593]: time="2024-12-13T08:56:25.594813480Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.65-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Dec 13 08:56:25.594921 containerd[1593]: time="2024-12-13T08:56:25.594854240Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Dec 13 08:56:25.594921 containerd[1593]: time="2024-12-13T08:56:25.594871200Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Dec 13 08:56:25.595193 containerd[1593]: time="2024-12-13T08:56:25.595070440Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Dec 13 08:56:25.595193 containerd[1593]: time="2024-12-13T08:56:25.595097080Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Dec 13 08:56:25.595193 containerd[1593]: time="2024-12-13T08:56:25.595169000Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 08:56:25.595193 containerd[1593]: time="2024-12-13T08:56:25.595182120Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Dec 13 08:56:25.595425 containerd[1593]: time="2024-12-13T08:56:25.595394040Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 08:56:25.595425 containerd[1593]: time="2024-12-13T08:56:25.595418440Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Dec 13 08:56:25.595480 containerd[1593]: time="2024-12-13T08:56:25.595435040Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 08:56:25.595480 containerd[1593]: time="2024-12-13T08:56:25.595445040Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Dec 13 08:56:25.595535 containerd[1593]: time="2024-12-13T08:56:25.595517200Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Dec 13 08:56:25.596116 containerd[1593]: time="2024-12-13T08:56:25.595713560Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Dec 13 08:56:25.596116 containerd[1593]: time="2024-12-13T08:56:25.595847360Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 08:56:25.596116 containerd[1593]: time="2024-12-13T08:56:25.595862680Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Dec 13 08:56:25.596116 containerd[1593]: time="2024-12-13T08:56:25.595933760Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Dec 13 08:56:25.596116 containerd[1593]: time="2024-12-13T08:56:25.595971200Z" level=info msg="metadata content store policy set" policy=shared Dec 13 08:56:25.604482 containerd[1593]: time="2024-12-13T08:56:25.603847520Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Dec 13 08:56:25.604482 containerd[1593]: time="2024-12-13T08:56:25.604289720Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Dec 13 08:56:25.604482 containerd[1593]: time="2024-12-13T08:56:25.604310880Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Dec 13 08:56:25.604612 containerd[1593]: time="2024-12-13T08:56:25.604327040Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Dec 13 08:56:25.604612 containerd[1593]: time="2024-12-13T08:56:25.604557240Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Dec 13 08:56:25.604748 containerd[1593]: time="2024-12-13T08:56:25.604720000Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Dec 13 08:56:25.607590 containerd[1593]: time="2024-12-13T08:56:25.607179640Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Dec 13 08:56:25.610914 containerd[1593]: time="2024-12-13T08:56:25.610578640Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Dec 13 08:56:25.611550 containerd[1593]: time="2024-12-13T08:56:25.611503480Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Dec 13 08:56:25.612330 containerd[1593]: time="2024-12-13T08:56:25.611548560Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Dec 13 08:56:25.612376 containerd[1593]: time="2024-12-13T08:56:25.612353560Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." 
type=io.containerd.service.v1 Dec 13 08:56:25.612396 containerd[1593]: time="2024-12-13T08:56:25.612377800Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Dec 13 08:56:25.612396 containerd[1593]: time="2024-12-13T08:56:25.612392160Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Dec 13 08:56:25.615119 containerd[1593]: time="2024-12-13T08:56:25.612408680Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Dec 13 08:56:25.615119 containerd[1593]: time="2024-12-13T08:56:25.612424960Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Dec 13 08:56:25.615119 containerd[1593]: time="2024-12-13T08:56:25.612438160Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Dec 13 08:56:25.615119 containerd[1593]: time="2024-12-13T08:56:25.612453920Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Dec 13 08:56:25.615119 containerd[1593]: time="2024-12-13T08:56:25.612465760Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Dec 13 08:56:25.615119 containerd[1593]: time="2024-12-13T08:56:25.612488720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Dec 13 08:56:25.615119 containerd[1593]: time="2024-12-13T08:56:25.612502680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Dec 13 08:56:25.615119 containerd[1593]: time="2024-12-13T08:56:25.612522160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Dec 13 08:56:25.615119 containerd[1593]: time="2024-12-13T08:56:25.612538680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Dec 13 08:56:25.615119 containerd[1593]: time="2024-12-13T08:56:25.612555040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Dec 13 08:56:25.615119 containerd[1593]: time="2024-12-13T08:56:25.612571360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Dec 13 08:56:25.615119 containerd[1593]: time="2024-12-13T08:56:25.612583840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Dec 13 08:56:25.615119 containerd[1593]: time="2024-12-13T08:56:25.612598080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Dec 13 08:56:25.615119 containerd[1593]: time="2024-12-13T08:56:25.612613800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Dec 13 08:56:25.614733 locksmithd[1621]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 13 08:56:25.615624 containerd[1593]: time="2024-12-13T08:56:25.612628880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Dec 13 08:56:25.615624 containerd[1593]: time="2024-12-13T08:56:25.612642800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." 
type=io.containerd.grpc.v1 Dec 13 08:56:25.615624 containerd[1593]: time="2024-12-13T08:56:25.612655440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Dec 13 08:56:25.615624 containerd[1593]: time="2024-12-13T08:56:25.612669000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Dec 13 08:56:25.615624 containerd[1593]: time="2024-12-13T08:56:25.612685720Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Dec 13 08:56:25.615624 containerd[1593]: time="2024-12-13T08:56:25.612710280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Dec 13 08:56:25.615624 containerd[1593]: time="2024-12-13T08:56:25.612723760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Dec 13 08:56:25.615624 containerd[1593]: time="2024-12-13T08:56:25.612734840Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Dec 13 08:56:25.615624 containerd[1593]: time="2024-12-13T08:56:25.612840880Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Dec 13 08:56:25.615624 containerd[1593]: time="2024-12-13T08:56:25.612858920Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Dec 13 08:56:25.615624 containerd[1593]: time="2024-12-13T08:56:25.612871080Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Dec 13 08:56:25.615624 containerd[1593]: time="2024-12-13T08:56:25.612883160Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Dec 13 08:56:25.615624 containerd[1593]: time="2024-12-13T08:56:25.612893880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Dec 13 08:56:25.615899 containerd[1593]: time="2024-12-13T08:56:25.612906960Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Dec 13 08:56:25.615899 containerd[1593]: time="2024-12-13T08:56:25.612918960Z" level=info msg="NRI interface is disabled by configuration." Dec 13 08:56:25.615899 containerd[1593]: time="2024-12-13T08:56:25.612929560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Dec 13 08:56:25.615954 containerd[1593]: time="2024-12-13T08:56:25.613455560Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Dec 13 08:56:25.615954 containerd[1593]: time="2024-12-13T08:56:25.613522120Z" level=info msg="Connect containerd service" Dec 13 08:56:25.615954 containerd[1593]: time="2024-12-13T08:56:25.613621560Z" level=info msg="using legacy CRI server" Dec 13 08:56:25.615954 containerd[1593]: time="2024-12-13T08:56:25.613628840Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Dec 13 08:56:25.615954 containerd[1593]: time="2024-12-13T08:56:25.613721000Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Dec 13 08:56:25.622468 containerd[1593]: time="2024-12-13T08:56:25.617176240Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 
08:56:25.619500 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Dec 13 08:56:25.623187 update-ssh-keys[1658]: Updated "/home/core/.ssh/authorized_keys" Dec 13 08:56:25.628159 systemd[1]: Finished sshkeys.service. Dec 13 08:56:25.629988 containerd[1593]: time="2024-12-13T08:56:25.629210120Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 13 08:56:25.633428 containerd[1593]: time="2024-12-13T08:56:25.625988800Z" level=info msg="Start subscribing containerd event" Dec 13 08:56:25.633428 containerd[1593]: time="2024-12-13T08:56:25.631228320Z" level=info msg="Start recovering state" Dec 13 08:56:25.633428 containerd[1593]: time="2024-12-13T08:56:25.631317360Z" level=info msg="Start event monitor" Dec 13 08:56:25.633428 containerd[1593]: time="2024-12-13T08:56:25.631334040Z" level=info msg="Start snapshots syncer" Dec 13 08:56:25.633428 containerd[1593]: time="2024-12-13T08:56:25.631345880Z" level=info msg="Start cni network conf syncer for default" Dec 13 08:56:25.633428 containerd[1593]: time="2024-12-13T08:56:25.631357440Z" level=info msg="Start streaming server" Dec 13 08:56:25.633428 containerd[1593]: time="2024-12-13T08:56:25.631557800Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 13 08:56:25.633428 containerd[1593]: time="2024-12-13T08:56:25.631615960Z" level=info msg="containerd successfully booted in 0.137179s" Dec 13 08:56:25.633319 systemd[1]: Started containerd.service - containerd container runtime. Dec 13 08:56:26.010081 tar[1588]: linux-arm64/LICENSE Dec 13 08:56:26.010081 tar[1588]: linux-arm64/README.md Dec 13 08:56:26.029279 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Dec 13 08:56:26.196429 sshd_keygen[1589]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 13 08:56:26.221085 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Dec 13 08:56:26.226460 systemd[1]: Starting issuegen.service - Generate /run/issue... Dec 13 08:56:26.253664 systemd[1]: issuegen.service: Deactivated successfully. Dec 13 08:56:26.253897 systemd[1]: Finished issuegen.service - Generate /run/issue. Dec 13 08:56:26.271046 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Dec 13 08:56:26.285276 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 08:56:26.286206 (kubelet)[1693]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 08:56:26.288541 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Dec 13 08:56:26.299634 systemd[1]: Started getty@tty1.service - Getty on tty1. Dec 13 08:56:26.307854 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Dec 13 08:56:26.310266 systemd[1]: Reached target getty.target - Login Prompts. Dec 13 08:56:26.311885 systemd[1]: Reached target multi-user.target - Multi-User System. Dec 13 08:56:26.313967 systemd[1]: Startup finished in 9.773s (kernel) + 4.633s (userspace) = 14.407s. 
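With containerd serving on /run/containerd/containerd.sock, boot reaches the multi-user target in 14.407s overall. That figure can be broken down per unit after the fact, assuming systemd's analysis tools are present:

    systemd-analyze blame            # units sorted by time spent starting
    systemd-analyze critical-chain   # the dependency chain gating multi-user.target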
Dec 13 08:56:26.921490 kubelet[1693]: E1213 08:56:26.921376 1693 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 08:56:26.926501 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 08:56:26.926882 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 08:56:37.079223 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 13 08:56:37.086393 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 08:56:37.199573 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 08:56:37.204566 (kubelet)[1723]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 08:56:37.271491 kubelet[1723]: E1213 08:56:37.271413 1723 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 08:56:37.277971 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 08:56:37.278328 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 08:56:47.329416 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Dec 13 08:56:47.338390 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 08:56:47.454267 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 08:56:47.459533 (kubelet)[1745]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 08:56:47.520765 kubelet[1745]: E1213 08:56:47.520690 1745 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 08:56:47.524215 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 08:56:47.524463 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 08:56:57.579711 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Dec 13 08:56:57.589379 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 08:56:57.717296 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
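kubelet exits, and keeps exiting below, because /var/lib/kubelet/config.yaml does not exist yet; on a kubeadm-provisioned node that file is written by kubeadm init or kubeadm join, so the restart loop is expected until the node joins a cluster. Purely as a sketch of the file format (the values are illustrative, not this cluster's):

    mkdir -p /var/lib/kubelet
    cat >/var/lib/kubelet/config.yaml <<'EOF'
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    # kubeadm normally generates this file; illustrative default only
    cgroupDriver: systemd
    EOF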
Dec 13 08:56:57.730548 (kubelet)[1766]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 08:56:57.780241 kubelet[1766]: E1213 08:56:57.780172 1766 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 08:56:57.783137 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 08:56:57.783330 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 08:57:07.829392 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Dec 13 08:57:07.838401 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 08:57:07.950271 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 08:57:07.953864 (kubelet)[1788]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 08:57:08.011080 kubelet[1788]: E1213 08:57:08.010978 1788 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 08:57:08.014294 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 08:57:08.014491 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 08:57:10.953156 update_engine[1570]: I20241213 08:57:10.952100 1570 update_attempter.cc:509] Updating boot flags... Dec 13 08:57:11.012045 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 46 scanned by (udev-worker) (1806) Dec 13 08:57:11.065664 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 46 scanned by (udev-worker) (1810) Dec 13 08:57:18.078882 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Dec 13 08:57:18.086358 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 08:57:18.213366 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 08:57:18.225863 (kubelet)[1827]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 08:57:18.277628 kubelet[1827]: E1213 08:57:18.277565 1827 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 08:57:18.280637 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 08:57:18.280976 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 08:57:28.328949 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Dec 13 08:57:28.338286 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 08:57:28.459220 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
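Between the kubelet restarts, update_engine updates the boot flags, i.e. marks the currently booted A/B partition as good. On Flatcar the updater can be queried from a shell, assuming the stock client is installed:

    update_engine_client -status   # current update state and progress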
Dec 13 08:57:28.463363 (kubelet)[1848]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 08:57:28.510906 kubelet[1848]: E1213 08:57:28.510851 1848 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 08:57:28.514268 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 08:57:28.514568 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 08:57:38.579612 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. Dec 13 08:57:38.590873 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 08:57:38.731343 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 08:57:38.736737 (kubelet)[1870]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 08:57:38.794150 kubelet[1870]: E1213 08:57:38.794080 1870 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 08:57:38.796916 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 08:57:38.797130 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 08:57:48.828887 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8. Dec 13 08:57:48.834361 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 08:57:48.947366 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 08:57:48.951523 (kubelet)[1892]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 08:57:49.008144 kubelet[1892]: E1213 08:57:49.008079 1892 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 08:57:49.010587 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 08:57:49.010726 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 08:57:59.079008 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9. Dec 13 08:57:59.084295 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 08:57:59.216894 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Dec 13 08:57:59.228595 (kubelet)[1913]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 08:57:59.283560 kubelet[1913]: E1213 08:57:59.283497 1913 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 08:57:59.287253 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 08:57:59.287424 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 08:58:05.464903 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Dec 13 08:58:05.473631 systemd[1]: Started sshd@0-138.199.144.99:22-139.178.89.65:53198.service - OpenSSH per-connection server daemon (139.178.89.65:53198). Dec 13 08:58:06.454149 sshd[1922]: Accepted publickey for core from 139.178.89.65 port 53198 ssh2: RSA SHA256:ptrNtAh5Wl7NWCXBdmMvlbP8mw8o0befcYpQmXzhrMU Dec 13 08:58:06.456581 sshd[1922]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 08:58:06.469302 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Dec 13 08:58:06.480568 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Dec 13 08:58:06.484991 systemd-logind[1564]: New session 1 of user core. Dec 13 08:58:06.497042 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Dec 13 08:58:06.507380 systemd[1]: Starting user@500.service - User Manager for UID 500... Dec 13 08:58:06.511737 (systemd)[1928]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 13 08:58:06.617965 systemd[1928]: Queued start job for default target default.target. Dec 13 08:58:06.618425 systemd[1928]: Created slice app.slice - User Application Slice. Dec 13 08:58:06.618443 systemd[1928]: Reached target paths.target - Paths. Dec 13 08:58:06.618454 systemd[1928]: Reached target timers.target - Timers. Dec 13 08:58:06.624174 systemd[1928]: Starting dbus.socket - D-Bus User Message Bus Socket... Dec 13 08:58:06.634780 systemd[1928]: Listening on dbus.socket - D-Bus User Message Bus Socket. Dec 13 08:58:06.634863 systemd[1928]: Reached target sockets.target - Sockets. Dec 13 08:58:06.634878 systemd[1928]: Reached target basic.target - Basic System. Dec 13 08:58:06.635519 systemd[1]: Started user@500.service - User Manager for UID 500. Dec 13 08:58:06.636726 systemd[1928]: Reached target default.target - Main User Target. Dec 13 08:58:06.636793 systemd[1928]: Startup finished in 118ms. Dec 13 08:58:06.652646 systemd[1]: Started session-1.scope - Session 1 of User core. Dec 13 08:58:07.351886 systemd[1]: Started sshd@1-138.199.144.99:22-139.178.89.65:53212.service - OpenSSH per-connection server daemon (139.178.89.65:53212). Dec 13 08:58:08.325866 sshd[1940]: Accepted publickey for core from 139.178.89.65 port 53212 ssh2: RSA SHA256:ptrNtAh5Wl7NWCXBdmMvlbP8mw8o0befcYpQmXzhrMU Dec 13 08:58:08.328299 sshd[1940]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 08:58:08.334160 systemd-logind[1564]: New session 2 of user core. Dec 13 08:58:08.340539 systemd[1]: Started session-2.scope - Session 2 of User core. 
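sshd runs socket-activated here, with one sshd@... per-connection instance per client, and logind tracks each login as a session scope under user-500.slice with a per-user service manager. The same state can be viewed interactively, assuming default systemd-logind behaviour:

    loginctl list-sessions              # active sessions with user and TTY
    systemctl status user@500.service   # the per-user service manager for UID 500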
Dec 13 08:58:09.003330 sshd[1940]: pam_unix(sshd:session): session closed for user core Dec 13 08:58:09.009498 systemd-logind[1564]: Session 2 logged out. Waiting for processes to exit. Dec 13 08:58:09.010511 systemd[1]: sshd@1-138.199.144.99:22-139.178.89.65:53212.service: Deactivated successfully. Dec 13 08:58:09.014391 systemd[1]: session-2.scope: Deactivated successfully. Dec 13 08:58:09.015758 systemd-logind[1564]: Removed session 2. Dec 13 08:58:09.171736 systemd[1]: Started sshd@2-138.199.144.99:22-139.178.89.65:51398.service - OpenSSH per-connection server daemon (139.178.89.65:51398). Dec 13 08:58:09.328973 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 10. Dec 13 08:58:09.339259 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 08:58:09.473361 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 08:58:09.488797 (kubelet)[1962]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 08:58:09.537087 kubelet[1962]: E1213 08:58:09.537003 1962 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 08:58:09.540792 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 08:58:09.541120 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 08:58:10.145726 sshd[1948]: Accepted publickey for core from 139.178.89.65 port 51398 ssh2: RSA SHA256:ptrNtAh5Wl7NWCXBdmMvlbP8mw8o0befcYpQmXzhrMU Dec 13 08:58:10.147594 sshd[1948]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 08:58:10.154767 systemd-logind[1564]: New session 3 of user core. Dec 13 08:58:10.164509 systemd[1]: Started session-3.scope - Session 3 of User core. Dec 13 08:58:10.822413 sshd[1948]: pam_unix(sshd:session): session closed for user core Dec 13 08:58:10.828293 systemd[1]: sshd@2-138.199.144.99:22-139.178.89.65:51398.service: Deactivated successfully. Dec 13 08:58:10.830008 systemd-logind[1564]: Session 3 logged out. Waiting for processes to exit. Dec 13 08:58:10.832351 systemd[1]: session-3.scope: Deactivated successfully. Dec 13 08:58:10.833593 systemd-logind[1564]: Removed session 3. Dec 13 08:58:10.985347 systemd[1]: Started sshd@3-138.199.144.99:22-139.178.89.65:51400.service - OpenSSH per-connection server daemon (139.178.89.65:51400). Dec 13 08:58:11.962709 sshd[1977]: Accepted publickey for core from 139.178.89.65 port 51400 ssh2: RSA SHA256:ptrNtAh5Wl7NWCXBdmMvlbP8mw8o0befcYpQmXzhrMU Dec 13 08:58:11.965179 sshd[1977]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 08:58:11.972473 systemd-logind[1564]: New session 4 of user core. Dec 13 08:58:11.984570 systemd[1]: Started session-4.scope - Session 4 of User core. Dec 13 08:58:12.640468 sshd[1977]: pam_unix(sshd:session): session closed for user core Dec 13 08:58:12.645507 systemd[1]: sshd@3-138.199.144.99:22-139.178.89.65:51400.service: Deactivated successfully. Dec 13 08:58:12.649568 systemd[1]: session-4.scope: Deactivated successfully. Dec 13 08:58:12.651515 systemd-logind[1564]: Session 4 logged out. Waiting for processes to exit. Dec 13 08:58:12.652865 systemd-logind[1564]: Removed session 4. 
Dec 13 08:58:12.807444 systemd[1]: Started sshd@4-138.199.144.99:22-139.178.89.65:51416.service - OpenSSH per-connection server daemon (139.178.89.65:51416). Dec 13 08:58:13.795820 sshd[1985]: Accepted publickey for core from 139.178.89.65 port 51416 ssh2: RSA SHA256:ptrNtAh5Wl7NWCXBdmMvlbP8mw8o0befcYpQmXzhrMU Dec 13 08:58:13.797868 sshd[1985]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 08:58:13.803869 systemd-logind[1564]: New session 5 of user core. Dec 13 08:58:13.816544 systemd[1]: Started session-5.scope - Session 5 of User core. Dec 13 08:58:14.328714 sudo[1989]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Dec 13 08:58:14.328999 sudo[1989]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 08:58:14.344723 sudo[1989]: pam_unix(sudo:session): session closed for user root Dec 13 08:58:14.505400 sshd[1985]: pam_unix(sshd:session): session closed for user core Dec 13 08:58:14.511343 systemd[1]: sshd@4-138.199.144.99:22-139.178.89.65:51416.service: Deactivated successfully. Dec 13 08:58:14.515224 systemd[1]: session-5.scope: Deactivated successfully. Dec 13 08:58:14.516332 systemd-logind[1564]: Session 5 logged out. Waiting for processes to exit. Dec 13 08:58:14.517676 systemd-logind[1564]: Removed session 5. Dec 13 08:58:14.679452 systemd[1]: Started sshd@5-138.199.144.99:22-139.178.89.65:51432.service - OpenSSH per-connection server daemon (139.178.89.65:51432). Dec 13 08:58:15.659411 sshd[1994]: Accepted publickey for core from 139.178.89.65 port 51432 ssh2: RSA SHA256:ptrNtAh5Wl7NWCXBdmMvlbP8mw8o0befcYpQmXzhrMU Dec 13 08:58:15.663823 sshd[1994]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 08:58:15.672600 systemd-logind[1564]: New session 6 of user core. Dec 13 08:58:15.678561 systemd[1]: Started session-6.scope - Session 6 of User core. Dec 13 08:58:16.183236 sudo[1999]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Dec 13 08:58:16.183532 sudo[1999]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 08:58:16.189048 sudo[1999]: pam_unix(sudo:session): session closed for user root Dec 13 08:58:16.197405 sudo[1998]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Dec 13 08:58:16.197710 sudo[1998]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 08:58:16.214298 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Dec 13 08:58:16.217684 auditctl[2002]: No rules Dec 13 08:58:16.218059 systemd[1]: audit-rules.service: Deactivated successfully. Dec 13 08:58:16.218329 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Dec 13 08:58:16.225597 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Dec 13 08:58:16.255734 augenrules[2021]: No rules Dec 13 08:58:16.258901 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Dec 13 08:58:16.260560 sudo[1998]: pam_unix(sudo:session): session closed for user root Dec 13 08:58:16.421317 sshd[1994]: pam_unix(sshd:session): session closed for user core Dec 13 08:58:16.424160 systemd[1]: sshd@5-138.199.144.99:22-139.178.89.65:51432.service: Deactivated successfully. Dec 13 08:58:16.428555 systemd[1]: session-6.scope: Deactivated successfully. Dec 13 08:58:16.429213 systemd-logind[1564]: Session 6 logged out. Waiting for processes to exit. 
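The sudo commands above remove the SELinux audit rule fragments and restart audit-rules, after which both auditctl and augenrules report an empty rule set. That can be verified directly, assuming the standard audit userspace tools:

    auditctl -l   # list loaded audit rules; prints "No rules" when empty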
Dec 13 08:58:16.430868 systemd-logind[1564]: Removed session 6. Dec 13 08:58:16.586891 systemd[1]: Started sshd@6-138.199.144.99:22-139.178.89.65:51434.service - OpenSSH per-connection server daemon (139.178.89.65:51434). Dec 13 08:58:17.574260 sshd[2030]: Accepted publickey for core from 139.178.89.65 port 51434 ssh2: RSA SHA256:ptrNtAh5Wl7NWCXBdmMvlbP8mw8o0befcYpQmXzhrMU Dec 13 08:58:17.576557 sshd[2030]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 08:58:17.581942 systemd-logind[1564]: New session 7 of user core. Dec 13 08:58:17.593597 systemd[1]: Started session-7.scope - Session 7 of User core. Dec 13 08:58:18.092889 sudo[2034]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 13 08:58:18.093235 sudo[2034]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 08:58:18.395641 systemd[1]: Starting docker.service - Docker Application Container Engine... Dec 13 08:58:18.397192 (dockerd)[2049]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Dec 13 08:58:18.635854 dockerd[2049]: time="2024-12-13T08:58:18.635772689Z" level=info msg="Starting up" Dec 13 08:58:18.724836 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport3295838871-merged.mount: Deactivated successfully. Dec 13 08:58:18.746529 dockerd[2049]: time="2024-12-13T08:58:18.746491875Z" level=info msg="Loading containers: start." Dec 13 08:58:18.848063 kernel: Initializing XFRM netlink socket Dec 13 08:58:18.933085 systemd-networkd[1243]: docker0: Link UP Dec 13 08:58:18.951571 dockerd[2049]: time="2024-12-13T08:58:18.951226529Z" level=info msg="Loading containers: done." Dec 13 08:58:18.969123 dockerd[2049]: time="2024-12-13T08:58:18.968608531Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 13 08:58:18.969123 dockerd[2049]: time="2024-12-13T08:58:18.968731291Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Dec 13 08:58:18.969123 dockerd[2049]: time="2024-12-13T08:58:18.968852091Z" level=info msg="Daemon has completed initialization" Dec 13 08:58:19.015104 dockerd[2049]: time="2024-12-13T08:58:19.014819323Z" level=info msg="API listen on /run/docker.sock" Dec 13 08:58:19.016608 systemd[1]: Started docker.service - Docker Application Container Engine. Dec 13 08:58:19.578872 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 11. Dec 13 08:58:19.586282 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 08:58:19.717411 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
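
dockerd comes up above with the overlay2 storage driver and warns that native diff is disabled because the kernel enables CONFIG_OVERLAY_FS_REDIRECT_DIR; that costs some performance when building images but is otherwise harmless. A quick check of what the daemon settled on (illustrative, assuming the default socket):

    # Storage driver and server version, matching the daemon log above
    docker info --format '{{.Driver}} {{.ServerVersion}}'

    # Ping the API socket the daemon reports listening on
    curl --unix-socket /run/docker.sock http://localhost/_ping
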
Dec 13 08:58:19.727550 (kubelet)[2200]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 08:58:19.792701 kubelet[2200]: E1213 08:58:19.792573 2200 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 08:58:19.795046 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 08:58:19.795206 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 08:58:20.157071 containerd[1593]: time="2024-12-13T08:58:20.157009160Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\"" Dec 13 08:58:20.806465 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4253856965.mount: Deactivated successfully. Dec 13 08:58:21.752822 containerd[1593]: time="2024-12-13T08:58:21.752731776Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 08:58:21.754719 containerd[1593]: time="2024-12-13T08:58:21.754167500Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.12: active requests=0, bytes read=32201342" Dec 13 08:58:21.757239 containerd[1593]: time="2024-12-13T08:58:21.757194228Z" level=info msg="ImageCreate event name:\"sha256:50c86b7f73fdd28bacd4abf45260c9d3abc3b57eb038fa61fc45b5d0f2763e6f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 08:58:21.761252 containerd[1593]: time="2024-12-13T08:58:21.761183238Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 08:58:21.762740 containerd[1593]: time="2024-12-13T08:58:21.762494282Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.12\" with image id \"sha256:50c86b7f73fdd28bacd4abf45260c9d3abc3b57eb038fa61fc45b5d0f2763e6f\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26\", size \"32198050\" in 1.605417562s" Dec 13 08:58:21.762740 containerd[1593]: time="2024-12-13T08:58:21.762532762Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\" returns image reference \"sha256:50c86b7f73fdd28bacd4abf45260c9d3abc3b57eb038fa61fc45b5d0f2763e6f\"" Dec 13 08:58:21.782796 containerd[1593]: time="2024-12-13T08:58:21.782761534Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\"" Dec 13 08:58:23.068208 containerd[1593]: time="2024-12-13T08:58:23.068132302Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 08:58:23.070007 containerd[1593]: time="2024-12-13T08:58:23.069960987Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.12: active requests=0, bytes read=29381317" Dec 13 08:58:23.073069 containerd[1593]: time="2024-12-13T08:58:23.071801592Z" level=info msg="ImageCreate event name:\"sha256:2d47abaa6ccc533f84ef74fff6d509de10bb040317351b45afe95a8021a1ddf7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 08:58:23.076334 
containerd[1593]: time="2024-12-13T08:58:23.076286084Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.12\" with image id \"sha256:2d47abaa6ccc533f84ef74fff6d509de10bb040317351b45afe95a8021a1ddf7\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412\", size \"30783618\" in 1.293345949s" Dec 13 08:58:23.076552 containerd[1593]: time="2024-12-13T08:58:23.076532285Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\" returns image reference \"sha256:2d47abaa6ccc533f84ef74fff6d509de10bb040317351b45afe95a8021a1ddf7\"" Dec 13 08:58:23.076744 containerd[1593]: time="2024-12-13T08:58:23.076716445Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 08:58:23.104799 containerd[1593]: time="2024-12-13T08:58:23.104751521Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\"" Dec 13 08:58:23.989502 containerd[1593]: time="2024-12-13T08:58:23.989437443Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 08:58:23.991576 containerd[1593]: time="2024-12-13T08:58:23.991501449Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.12: active requests=0, bytes read=15765660" Dec 13 08:58:23.992714 containerd[1593]: time="2024-12-13T08:58:23.991785890Z" level=info msg="ImageCreate event name:\"sha256:ae633c52a23907b58f7a7867d2cccf3d3f5ebd8977beb6788e20fbecd3f446db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 08:58:23.996916 containerd[1593]: time="2024-12-13T08:58:23.996842263Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 08:58:23.998876 containerd[1593]: time="2024-12-13T08:58:23.998534108Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.12\" with image id \"sha256:ae633c52a23907b58f7a7867d2cccf3d3f5ebd8977beb6788e20fbecd3f446db\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c\", size \"17167979\" in 893.438906ms" Dec 13 08:58:23.998876 containerd[1593]: time="2024-12-13T08:58:23.998592148Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\" returns image reference \"sha256:ae633c52a23907b58f7a7867d2cccf3d3f5ebd8977beb6788e20fbecd3f446db\"" Dec 13 08:58:24.020945 containerd[1593]: time="2024-12-13T08:58:24.020880690Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\"" Dec 13 08:58:25.119383 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1706452666.mount: Deactivated successfully. 
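
The PullImage/ImageCreate pairs above are containerd's CRI plugin fetching the control-plane images; they land in containerd's k8s.io namespace rather than in Docker's image store. To list what has been pulled so far (illustrative, assuming default socket paths):

    # Images in the CRI namespace the kubelet uses
    ctr -n k8s.io images ls | grep registry.k8s.io

    # The same view through the CRI, if crictl is pointed at containerd
    crictl images
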
Dec 13 08:58:25.439125 containerd[1593]: time="2024-12-13T08:58:25.438860882Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 08:58:25.440627 containerd[1593]: time="2024-12-13T08:58:25.440526647Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.12: active requests=0, bytes read=25274003" Dec 13 08:58:25.442571 containerd[1593]: time="2024-12-13T08:58:25.442510652Z" level=info msg="ImageCreate event name:\"sha256:768ee8cfd9311233d038d18430c18136e1ae4dd2e6de40fcf1c670bba2da6d06\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 08:58:25.444100 containerd[1593]: time="2024-12-13T08:58:25.444058017Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 08:58:25.445433 containerd[1593]: time="2024-12-13T08:58:25.445297780Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.12\" with image id \"sha256:768ee8cfd9311233d038d18430c18136e1ae4dd2e6de40fcf1c670bba2da6d06\", repo tag \"registry.k8s.io/kube-proxy:v1.29.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\", size \"25272996\" in 1.42436809s" Dec 13 08:58:25.445433 containerd[1593]: time="2024-12-13T08:58:25.445338100Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\" returns image reference \"sha256:768ee8cfd9311233d038d18430c18136e1ae4dd2e6de40fcf1c670bba2da6d06\"" Dec 13 08:58:25.468835 containerd[1593]: time="2024-12-13T08:58:25.468479885Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Dec 13 08:58:26.064810 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4189253738.mount: Deactivated successfully. 
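
Each pull resolves its tag to the repo digest shown in the log; the tag can be re-pointed later, the digest cannot, so the digest is the image's immutable identity. Re-pulling by digest is a cheap way to verify the mapping (illustrative; the digest is copied from the kube-proxy entry above):

    # Re-fetch the manifest by digest and verify the exact content logged above
    ctr -n k8s.io images pull \
        registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39
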
Dec 13 08:58:26.670841 containerd[1593]: time="2024-12-13T08:58:26.670778836Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 08:58:26.673415 containerd[1593]: time="2024-12-13T08:58:26.673366843Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485461" Dec 13 08:58:26.674238 containerd[1593]: time="2024-12-13T08:58:26.674193126Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 08:58:26.679600 containerd[1593]: time="2024-12-13T08:58:26.679528421Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 08:58:26.680970 containerd[1593]: time="2024-12-13T08:58:26.680826745Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.212293099s" Dec 13 08:58:26.680970 containerd[1593]: time="2024-12-13T08:58:26.680868305Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Dec 13 08:58:26.702789 containerd[1593]: time="2024-12-13T08:58:26.702707728Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Dec 13 08:58:27.225813 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3887913160.mount: Deactivated successfully. 
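
The pause image pulled next is the pod sandbox ("infra") container: one instance per pod holds the pod's namespaces open, which is why the RunPodSandbox calls later in this log depend on it (further down, the pause:3.8 entries even carry an io.cri-containerd.pinned label so image garbage collection leaves it in place). Inspecting it (illustrative):

    # Show pause images and the labels CRI-containerd attached to them
    ctr -n k8s.io images ls 'name~=pause'
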
Dec 13 08:58:27.232330 containerd[1593]: time="2024-12-13T08:58:27.232222062Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 08:58:27.233203 containerd[1593]: time="2024-12-13T08:58:27.233150825Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268841" Dec 13 08:58:27.234510 containerd[1593]: time="2024-12-13T08:58:27.234341988Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 08:58:27.238533 containerd[1593]: time="2024-12-13T08:58:27.238495360Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 08:58:27.240191 containerd[1593]: time="2024-12-13T08:58:27.239620524Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 536.876155ms" Dec 13 08:58:27.240191 containerd[1593]: time="2024-12-13T08:58:27.239660044Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Dec 13 08:58:27.264332 containerd[1593]: time="2024-12-13T08:58:27.264286636Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Dec 13 08:58:27.882110 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3991194265.mount: Deactivated successfully. Dec 13 08:58:29.351429 containerd[1593]: time="2024-12-13T08:58:29.351348626Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 08:58:29.354076 containerd[1593]: time="2024-12-13T08:58:29.352973751Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=65200866" Dec 13 08:58:29.354969 containerd[1593]: time="2024-12-13T08:58:29.354898516Z" level=info msg="ImageCreate event name:\"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 08:58:29.359990 containerd[1593]: time="2024-12-13T08:58:29.359916372Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 08:58:29.361517 containerd[1593]: time="2024-12-13T08:58:29.361264936Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"65198393\" in 2.096677019s" Dec 13 08:58:29.361517 containerd[1593]: time="2024-12-13T08:58:29.361310456Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\"" Dec 13 08:58:29.829148 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 12. 
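
kubelet.service reaches restart counter 12 above because the unit restarts automatically and the config file still is not there; kubeadm-style deployments typically ship Restart=always with RestartSec=10. While debugging, the loop can be slowed with a drop-in (a sketch; the drop-in filename is made up):

    # Widen the restart interval, then tell systemd to re-read unit files
    sudo mkdir -p /etc/systemd/system/kubelet.service.d
    printf '[Service]\nRestartSec=30\n' | \
        sudo tee /etc/systemd/system/kubelet.service.d/90-restartsec.conf
    sudo systemctl daemon-reload
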
Dec 13 08:58:29.840324 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 08:58:29.964290 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 08:58:29.968838 (kubelet)[2424]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 08:58:30.017518 kubelet[2424]: E1213 08:58:30.017450 2424 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 08:58:30.019951 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 08:58:30.020138 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 08:58:34.844450 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 08:58:34.851328 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 08:58:34.876528 systemd[1]: Reloading requested from client PID 2487 ('systemctl') (unit session-7.scope)... Dec 13 08:58:34.876543 systemd[1]: Reloading... Dec 13 08:58:35.006038 zram_generator::config[2534]: No configuration found. Dec 13 08:58:35.104791 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 08:58:35.169059 systemd[1]: Reloading finished in 292 ms. Dec 13 08:58:35.220151 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Dec 13 08:58:35.220413 systemd[1]: kubelet.service: Failed with result 'signal'. Dec 13 08:58:35.220790 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 08:58:35.228426 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 08:58:35.344286 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 08:58:35.348410 (kubelet)[2587]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 13 08:58:35.396897 kubelet[2587]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 08:58:35.397306 kubelet[2587]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 08:58:35.397348 kubelet[2587]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
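
After the systemd reload the kubelet finally comes up with a real configuration; what remains are deprecation warnings for flags that should migrate into the file passed via --config. A minimal sketch of that migration for --container-runtime-endpoint (containerRuntimeEndpoint is a KubeletConfiguration v1beta1 field; the socket path here is an assumption, not read from this log):

    # Move the runtime endpoint from the command line into the config file
    # (the path below is assumed, not taken from this log)
    printf 'containerRuntimeEndpoint: unix:///run/containerd/containerd.sock\n' \
        >> /var/lib/kubelet/config.yaml
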
Dec 13 08:58:35.397519 kubelet[2587]: I1213 08:58:35.397481 2587 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 08:58:36.025281 kubelet[2587]: I1213 08:58:36.025245 2587 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Dec 13 08:58:36.025281 kubelet[2587]: I1213 08:58:36.025318 2587 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 08:58:36.025281 kubelet[2587]: I1213 08:58:36.025631 2587 server.go:919] "Client rotation is on, will bootstrap in background" Dec 13 08:58:36.052193 kubelet[2587]: I1213 08:58:36.052149 2587 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 08:58:36.052759 kubelet[2587]: E1213 08:58:36.052740 2587 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://138.199.144.99:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 138.199.144.99:6443: connect: connection refused Dec 13 08:58:36.062585 kubelet[2587]: I1213 08:58:36.062552 2587 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Dec 13 08:58:36.064764 kubelet[2587]: I1213 08:58:36.064729 2587 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 08:58:36.065192 kubelet[2587]: I1213 08:58:36.065171 2587 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 08:58:36.065328 kubelet[2587]: I1213 08:58:36.065316 2587 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 08:58:36.065398 kubelet[2587]: I1213 08:58:36.065389 2587 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 08:58:36.066941 kubelet[2587]: I1213 08:58:36.066919 2587 state_mem.go:36] "Initialized new in-memory state store" Dec 13 08:58:36.069843 kubelet[2587]: I1213 08:58:36.069815 2587 kubelet.go:396] "Attempting to sync node with API server" Dec 13 08:58:36.069943 
kubelet[2587]: I1213 08:58:36.069935 2587 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 08:58:36.070124 kubelet[2587]: I1213 08:58:36.070110 2587 kubelet.go:312] "Adding apiserver pod source" Dec 13 08:58:36.070211 kubelet[2587]: I1213 08:58:36.070198 2587 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 08:58:36.072155 kubelet[2587]: W1213 08:58:36.072077 2587 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://138.199.144.99:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-2-1-0-c10bd8c210&limit=500&resourceVersion=0": dial tcp 138.199.144.99:6443: connect: connection refused Dec 13 08:58:36.072232 kubelet[2587]: E1213 08:58:36.072186 2587 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://138.199.144.99:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-2-1-0-c10bd8c210&limit=500&resourceVersion=0": dial tcp 138.199.144.99:6443: connect: connection refused Dec 13 08:58:36.073235 kubelet[2587]: W1213 08:58:36.073197 2587 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://138.199.144.99:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 138.199.144.99:6443: connect: connection refused Dec 13 08:58:36.073347 kubelet[2587]: E1213 08:58:36.073335 2587 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://138.199.144.99:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 138.199.144.99:6443: connect: connection refused Dec 13 08:58:36.074453 kubelet[2587]: I1213 08:58:36.074432 2587 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Dec 13 08:58:36.075035 kubelet[2587]: I1213 08:58:36.074997 2587 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 08:58:36.075868 kubelet[2587]: W1213 08:58:36.075843 2587 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
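
The repeated "connection refused" reflector errors against 138.199.144.99:6443 are the normal bootstrap ordering problem: this kubelet starts before the kube-apiserver it talks to, and the apiserver will only exist once the kubelet launches it from the static pod path /etc/kubernetes/manifests registered above. Watching for the endpoint to come up (illustrative):

    # Refused until the static kube-apiserver pod below is running
    curl -k https://138.199.144.99:6443/healthz

    # The static pod manifests the kubelet launches on its own
    ls /etc/kubernetes/manifests
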
Dec 13 08:58:36.076921 kubelet[2587]: I1213 08:58:36.076890 2587 server.go:1256] "Started kubelet" Dec 13 08:58:36.080526 kubelet[2587]: I1213 08:58:36.080490 2587 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 08:58:36.081281 kubelet[2587]: I1213 08:58:36.081256 2587 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 08:58:36.081676 kubelet[2587]: I1213 08:58:36.081658 2587 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 08:58:36.082042 kubelet[2587]: I1213 08:58:36.081313 2587 server.go:461] "Adding debug handlers to kubelet server" Dec 13 08:58:36.084695 kubelet[2587]: I1213 08:58:36.084638 2587 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 08:58:36.085843 kubelet[2587]: E1213 08:58:36.085605 2587 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://138.199.144.99:6443/api/v1/namespaces/default/events\": dial tcp 138.199.144.99:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-2-1-0-c10bd8c210.1810b0dc23c21f97 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-2-1-0-c10bd8c210,UID:ci-4081-2-1-0-c10bd8c210,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-2-1-0-c10bd8c210,},FirstTimestamp:2024-12-13 08:58:36.076859287 +0000 UTC m=+0.723574455,LastTimestamp:2024-12-13 08:58:36.076859287 +0000 UTC m=+0.723574455,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-2-1-0-c10bd8c210,}" Dec 13 08:58:36.089149 kubelet[2587]: E1213 08:58:36.089124 2587 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081-2-1-0-c10bd8c210\" not found" Dec 13 08:58:36.090062 kubelet[2587]: I1213 08:58:36.089565 2587 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 08:58:36.090062 kubelet[2587]: I1213 08:58:36.089681 2587 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Dec 13 08:58:36.090062 kubelet[2587]: I1213 08:58:36.089745 2587 reconciler_new.go:29] "Reconciler: start to sync state" Dec 13 08:58:36.091484 kubelet[2587]: W1213 08:58:36.091429 2587 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://138.199.144.99:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 138.199.144.99:6443: connect: connection refused Dec 13 08:58:36.091616 kubelet[2587]: E1213 08:58:36.091604 2587 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://138.199.144.99:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 138.199.144.99:6443: connect: connection refused Dec 13 08:58:36.092790 kubelet[2587]: E1213 08:58:36.092767 2587 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://138.199.144.99:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-2-1-0-c10bd8c210?timeout=10s\": dial tcp 138.199.144.99:6443: connect: connection refused" interval="200ms" Dec 13 08:58:36.092996 kubelet[2587]: E1213 08:58:36.092964 2587 kubelet.go:1462] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 08:58:36.095846 kubelet[2587]: I1213 08:58:36.095801 2587 factory.go:221] Registration of the containerd container factory successfully Dec 13 08:58:36.095846 kubelet[2587]: I1213 08:58:36.095827 2587 factory.go:221] Registration of the systemd container factory successfully Dec 13 08:58:36.095998 kubelet[2587]: I1213 08:58:36.095919 2587 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 08:58:36.116189 kubelet[2587]: I1213 08:58:36.116159 2587 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 08:58:36.119154 kubelet[2587]: I1213 08:58:36.119130 2587 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 13 08:58:36.119286 kubelet[2587]: I1213 08:58:36.119277 2587 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 08:58:36.119349 kubelet[2587]: I1213 08:58:36.119342 2587 kubelet.go:2329] "Starting kubelet main sync loop" Dec 13 08:58:36.119442 kubelet[2587]: E1213 08:58:36.119433 2587 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 08:58:36.126530 kubelet[2587]: I1213 08:58:36.126411 2587 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 08:58:36.126530 kubelet[2587]: I1213 08:58:36.126434 2587 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 08:58:36.126530 kubelet[2587]: I1213 08:58:36.126450 2587 state_mem.go:36] "Initialized new in-memory state store" Dec 13 08:58:36.129726 kubelet[2587]: W1213 08:58:36.129541 2587 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://138.199.144.99:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 138.199.144.99:6443: connect: connection refused Dec 13 08:58:36.129726 kubelet[2587]: E1213 08:58:36.129579 2587 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://138.199.144.99:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 138.199.144.99:6443: connect: connection refused Dec 13 08:58:36.130296 kubelet[2587]: I1213 08:58:36.130262 2587 policy_none.go:49] "None policy: Start" Dec 13 08:58:36.131169 kubelet[2587]: I1213 08:58:36.131145 2587 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 08:58:36.131684 kubelet[2587]: I1213 08:58:36.131364 2587 state_mem.go:35] "Initializing new in-memory state store" Dec 13 08:58:36.137062 kubelet[2587]: I1213 08:58:36.136866 2587 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 08:58:36.137470 kubelet[2587]: I1213 08:58:36.137449 2587 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 08:58:36.140565 kubelet[2587]: E1213 08:58:36.140545 2587 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-2-1-0-c10bd8c210\" not found" Dec 13 08:58:36.192547 kubelet[2587]: I1213 08:58:36.192501 2587 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-2-1-0-c10bd8c210" Dec 13 08:58:36.193211 kubelet[2587]: E1213 08:58:36.193189 2587 
kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://138.199.144.99:6443/api/v1/nodes\": dial tcp 138.199.144.99:6443: connect: connection refused" node="ci-4081-2-1-0-c10bd8c210" Dec 13 08:58:36.220713 kubelet[2587]: I1213 08:58:36.220359 2587 topology_manager.go:215] "Topology Admit Handler" podUID="0ba06dcebb93e548c2aed916ad9e7d67" podNamespace="kube-system" podName="kube-apiserver-ci-4081-2-1-0-c10bd8c210" Dec 13 08:58:36.222924 kubelet[2587]: I1213 08:58:36.222558 2587 topology_manager.go:215] "Topology Admit Handler" podUID="2e39d3493de3bfc523d97b1bf0d1033a" podNamespace="kube-system" podName="kube-controller-manager-ci-4081-2-1-0-c10bd8c210" Dec 13 08:58:36.225314 kubelet[2587]: I1213 08:58:36.225059 2587 topology_manager.go:215] "Topology Admit Handler" podUID="3e56978806b821155aee68497f231402" podNamespace="kube-system" podName="kube-scheduler-ci-4081-2-1-0-c10bd8c210" Dec 13 08:58:36.294592 kubelet[2587]: E1213 08:58:36.294412 2587 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://138.199.144.99:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-2-1-0-c10bd8c210?timeout=10s\": dial tcp 138.199.144.99:6443: connect: connection refused" interval="400ms" Dec 13 08:58:36.391216 kubelet[2587]: I1213 08:58:36.391103 2587 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2e39d3493de3bfc523d97b1bf0d1033a-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-2-1-0-c10bd8c210\" (UID: \"2e39d3493de3bfc523d97b1bf0d1033a\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-0-c10bd8c210" Dec 13 08:58:36.391216 kubelet[2587]: I1213 08:58:36.391208 2587 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0ba06dcebb93e548c2aed916ad9e7d67-k8s-certs\") pod \"kube-apiserver-ci-4081-2-1-0-c10bd8c210\" (UID: \"0ba06dcebb93e548c2aed916ad9e7d67\") " pod="kube-system/kube-apiserver-ci-4081-2-1-0-c10bd8c210" Dec 13 08:58:36.391471 kubelet[2587]: I1213 08:58:36.391265 2587 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2e39d3493de3bfc523d97b1bf0d1033a-ca-certs\") pod \"kube-controller-manager-ci-4081-2-1-0-c10bd8c210\" (UID: \"2e39d3493de3bfc523d97b1bf0d1033a\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-0-c10bd8c210" Dec 13 08:58:36.391471 kubelet[2587]: I1213 08:58:36.391318 2587 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2e39d3493de3bfc523d97b1bf0d1033a-k8s-certs\") pod \"kube-controller-manager-ci-4081-2-1-0-c10bd8c210\" (UID: \"2e39d3493de3bfc523d97b1bf0d1033a\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-0-c10bd8c210" Dec 13 08:58:36.391471 kubelet[2587]: I1213 08:58:36.391383 2587 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2e39d3493de3bfc523d97b1bf0d1033a-kubeconfig\") pod \"kube-controller-manager-ci-4081-2-1-0-c10bd8c210\" (UID: \"2e39d3493de3bfc523d97b1bf0d1033a\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-0-c10bd8c210" Dec 13 08:58:36.391471 kubelet[2587]: I1213 08:58:36.391436 2587 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3e56978806b821155aee68497f231402-kubeconfig\") pod \"kube-scheduler-ci-4081-2-1-0-c10bd8c210\" (UID: \"3e56978806b821155aee68497f231402\") " pod="kube-system/kube-scheduler-ci-4081-2-1-0-c10bd8c210" Dec 13 08:58:36.391693 kubelet[2587]: I1213 08:58:36.391487 2587 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0ba06dcebb93e548c2aed916ad9e7d67-ca-certs\") pod \"kube-apiserver-ci-4081-2-1-0-c10bd8c210\" (UID: \"0ba06dcebb93e548c2aed916ad9e7d67\") " pod="kube-system/kube-apiserver-ci-4081-2-1-0-c10bd8c210" Dec 13 08:58:36.391693 kubelet[2587]: I1213 08:58:36.391541 2587 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0ba06dcebb93e548c2aed916ad9e7d67-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-2-1-0-c10bd8c210\" (UID: \"0ba06dcebb93e548c2aed916ad9e7d67\") " pod="kube-system/kube-apiserver-ci-4081-2-1-0-c10bd8c210" Dec 13 08:58:36.391693 kubelet[2587]: I1213 08:58:36.391594 2587 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/2e39d3493de3bfc523d97b1bf0d1033a-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-2-1-0-c10bd8c210\" (UID: \"2e39d3493de3bfc523d97b1bf0d1033a\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-0-c10bd8c210" Dec 13 08:58:36.396667 kubelet[2587]: I1213 08:58:36.396619 2587 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-2-1-0-c10bd8c210" Dec 13 08:58:36.397512 kubelet[2587]: E1213 08:58:36.397479 2587 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://138.199.144.99:6443/api/v1/nodes\": dial tcp 138.199.144.99:6443: connect: connection refused" node="ci-4081-2-1-0-c10bd8c210" Dec 13 08:58:36.529450 containerd[1593]: time="2024-12-13T08:58:36.529396547Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-2-1-0-c10bd8c210,Uid:0ba06dcebb93e548c2aed916ad9e7d67,Namespace:kube-system,Attempt:0,}" Dec 13 08:58:36.534355 containerd[1593]: time="2024-12-13T08:58:36.534039802Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-2-1-0-c10bd8c210,Uid:2e39d3493de3bfc523d97b1bf0d1033a,Namespace:kube-system,Attempt:0,}" Dec 13 08:58:36.535569 containerd[1593]: time="2024-12-13T08:58:36.535524207Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-2-1-0-c10bd8c210,Uid:3e56978806b821155aee68497f231402,Namespace:kube-system,Attempt:0,}" Dec 13 08:58:36.695424 kubelet[2587]: E1213 08:58:36.695366 2587 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://138.199.144.99:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-2-1-0-c10bd8c210?timeout=10s\": dial tcp 138.199.144.99:6443: connect: connection refused" interval="800ms" Dec 13 08:58:36.801102 kubelet[2587]: I1213 08:58:36.801054 2587 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-2-1-0-c10bd8c210" Dec 13 08:58:36.801645 kubelet[2587]: E1213 08:58:36.801562 2587 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://138.199.144.99:6443/api/v1/nodes\": dial tcp 138.199.144.99:6443: connect: connection refused" 
node="ci-4081-2-1-0-c10bd8c210" Dec 13 08:58:36.926520 kubelet[2587]: W1213 08:58:36.925987 2587 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://138.199.144.99:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 138.199.144.99:6443: connect: connection refused Dec 13 08:58:36.926520 kubelet[2587]: E1213 08:58:36.926088 2587 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://138.199.144.99:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 138.199.144.99:6443: connect: connection refused Dec 13 08:58:36.933930 kubelet[2587]: W1213 08:58:36.933836 2587 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://138.199.144.99:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-2-1-0-c10bd8c210&limit=500&resourceVersion=0": dial tcp 138.199.144.99:6443: connect: connection refused Dec 13 08:58:36.934308 kubelet[2587]: E1213 08:58:36.934274 2587 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://138.199.144.99:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-2-1-0-c10bd8c210&limit=500&resourceVersion=0": dial tcp 138.199.144.99:6443: connect: connection refused Dec 13 08:58:37.023332 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2520849135.mount: Deactivated successfully. Dec 13 08:58:37.029341 containerd[1593]: time="2024-12-13T08:58:37.028217561Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 08:58:37.030482 containerd[1593]: time="2024-12-13T08:58:37.030446488Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269193" Dec 13 08:58:37.033207 containerd[1593]: time="2024-12-13T08:58:37.033145937Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 08:58:37.034777 containerd[1593]: time="2024-12-13T08:58:37.034741582Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 08:58:37.035914 containerd[1593]: time="2024-12-13T08:58:37.035885586Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Dec 13 08:58:37.037503 containerd[1593]: time="2024-12-13T08:58:37.037436631Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 08:58:37.038162 containerd[1593]: time="2024-12-13T08:58:37.038136874Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Dec 13 08:58:37.042485 containerd[1593]: time="2024-12-13T08:58:37.042442128Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 08:58:37.044696 containerd[1593]: 
time="2024-12-13T08:58:37.044422255Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 508.806088ms" Dec 13 08:58:37.045228 containerd[1593]: time="2024-12-13T08:58:37.045195657Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 515.68287ms" Dec 13 08:58:37.047367 containerd[1593]: time="2024-12-13T08:58:37.047317025Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 513.188902ms" Dec 13 08:58:37.173352 containerd[1593]: time="2024-12-13T08:58:37.173244806Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 08:58:37.173533 containerd[1593]: time="2024-12-13T08:58:37.173336247Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 08:58:37.173715 containerd[1593]: time="2024-12-13T08:58:37.173404567Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 08:58:37.173715 containerd[1593]: time="2024-12-13T08:58:37.173446087Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 08:58:37.173715 containerd[1593]: time="2024-12-13T08:58:37.173542847Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 08:58:37.173916 containerd[1593]: time="2024-12-13T08:58:37.173640568Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 08:58:37.174107 containerd[1593]: time="2024-12-13T08:58:37.174043209Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 08:58:37.175244 containerd[1593]: time="2024-12-13T08:58:37.175120773Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 08:58:37.175432 containerd[1593]: time="2024-12-13T08:58:37.175230573Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 08:58:37.175732 containerd[1593]: time="2024-12-13T08:58:37.175599494Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 08:58:37.175976 containerd[1593]: time="2024-12-13T08:58:37.175920615Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 08:58:37.177943 containerd[1593]: time="2024-12-13T08:58:37.177777782Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 08:58:37.257268 containerd[1593]: time="2024-12-13T08:58:37.257172728Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-2-1-0-c10bd8c210,Uid:2e39d3493de3bfc523d97b1bf0d1033a,Namespace:kube-system,Attempt:0,} returns sandbox id \"7d301830883c8e3f3ad7f902aece18cc8f9a17e77faf916284548c4c7d4fba35\"" Dec 13 08:58:37.262185 containerd[1593]: time="2024-12-13T08:58:37.261929704Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-2-1-0-c10bd8c210,Uid:3e56978806b821155aee68497f231402,Namespace:kube-system,Attempt:0,} returns sandbox id \"22c095cf2dff1f9a96033cbda0f2e33dbc07ba39eed5ad637e845e647795aa3d\"" Dec 13 08:58:37.265356 containerd[1593]: time="2024-12-13T08:58:37.265132114Z" level=info msg="CreateContainer within sandbox \"22c095cf2dff1f9a96033cbda0f2e33dbc07ba39eed5ad637e845e647795aa3d\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 13 08:58:37.265356 containerd[1593]: time="2024-12-13T08:58:37.265232555Z" level=info msg="CreateContainer within sandbox \"7d301830883c8e3f3ad7f902aece18cc8f9a17e77faf916284548c4c7d4fba35\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 13 08:58:37.275717 containerd[1593]: time="2024-12-13T08:58:37.275200988Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-2-1-0-c10bd8c210,Uid:0ba06dcebb93e548c2aed916ad9e7d67,Namespace:kube-system,Attempt:0,} returns sandbox id \"63fdc6ea43f5019f9ed6a6d1679c4f00d47364906f894b0e4ecaf8383d028e84\"" Dec 13 08:58:37.281816 containerd[1593]: time="2024-12-13T08:58:37.281772010Z" level=info msg="CreateContainer within sandbox \"63fdc6ea43f5019f9ed6a6d1679c4f00d47364906f894b0e4ecaf8383d028e84\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 13 08:58:37.293173 containerd[1593]: time="2024-12-13T08:58:37.293113928Z" level=info msg="CreateContainer within sandbox \"22c095cf2dff1f9a96033cbda0f2e33dbc07ba39eed5ad637e845e647795aa3d\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"60502ddd364b437ad71b6121485d7e9046a17dbe6d1b928b767b008d6c89b62a\"" Dec 13 08:58:37.294536 containerd[1593]: time="2024-12-13T08:58:37.294322972Z" level=info msg="StartContainer for \"60502ddd364b437ad71b6121485d7e9046a17dbe6d1b928b767b008d6c89b62a\"" Dec 13 08:58:37.296137 containerd[1593]: time="2024-12-13T08:58:37.295943498Z" level=info msg="CreateContainer within sandbox \"7d301830883c8e3f3ad7f902aece18cc8f9a17e77faf916284548c4c7d4fba35\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"481cef768d0ffdb730b1b1a5ea9cd142fd178af5729b009a49720b13847c98c9\"" Dec 13 08:58:37.296835 containerd[1593]: time="2024-12-13T08:58:37.296745100Z" level=info msg="StartContainer for \"481cef768d0ffdb730b1b1a5ea9cd142fd178af5729b009a49720b13847c98c9\"" Dec 13 08:58:37.315318 containerd[1593]: time="2024-12-13T08:58:37.315245722Z" level=info msg="CreateContainer within sandbox \"63fdc6ea43f5019f9ed6a6d1679c4f00d47364906f894b0e4ecaf8383d028e84\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"5e0f9c964c042fde65152d137d1bf61540104bfb787635700dd814f8311c8b52\"" Dec 13 08:58:37.316911 containerd[1593]: 
time="2024-12-13T08:58:37.316647087Z" level=info msg="StartContainer for \"5e0f9c964c042fde65152d137d1bf61540104bfb787635700dd814f8311c8b52\"" Dec 13 08:58:37.389892 kubelet[2587]: W1213 08:58:37.389814 2587 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://138.199.144.99:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 138.199.144.99:6443: connect: connection refused Dec 13 08:58:37.389892 kubelet[2587]: E1213 08:58:37.389886 2587 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://138.199.144.99:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 138.199.144.99:6443: connect: connection refused Dec 13 08:58:37.399898 containerd[1593]: time="2024-12-13T08:58:37.399599045Z" level=info msg="StartContainer for \"481cef768d0ffdb730b1b1a5ea9cd142fd178af5729b009a49720b13847c98c9\" returns successfully" Dec 13 08:58:37.400665 containerd[1593]: time="2024-12-13T08:58:37.400467808Z" level=info msg="StartContainer for \"60502ddd364b437ad71b6121485d7e9046a17dbe6d1b928b767b008d6c89b62a\" returns successfully" Dec 13 08:58:37.451508 containerd[1593]: time="2024-12-13T08:58:37.451320058Z" level=info msg="StartContainer for \"5e0f9c964c042fde65152d137d1bf61540104bfb787635700dd814f8311c8b52\" returns successfully" Dec 13 08:58:37.496944 kubelet[2587]: E1213 08:58:37.495994 2587 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://138.199.144.99:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-2-1-0-c10bd8c210?timeout=10s\": dial tcp 138.199.144.99:6443: connect: connection refused" interval="1.6s" Dec 13 08:58:37.607967 kubelet[2587]: I1213 08:58:37.605867 2587 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-2-1-0-c10bd8c210" Dec 13 08:58:39.986032 kubelet[2587]: I1213 08:58:39.984200 2587 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081-2-1-0-c10bd8c210" Dec 13 08:58:40.059923 kubelet[2587]: E1213 08:58:40.059320 2587 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-node-lease\" not found" interval="3.2s" Dec 13 08:58:40.077030 kubelet[2587]: I1213 08:58:40.075548 2587 apiserver.go:52] "Watching apiserver" Dec 13 08:58:40.090099 kubelet[2587]: I1213 08:58:40.090070 2587 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Dec 13 08:58:43.070371 systemd[1]: Reloading requested from client PID 2854 ('systemctl') (unit session-7.scope)... Dec 13 08:58:43.070390 systemd[1]: Reloading... Dec 13 08:58:43.181066 zram_generator::config[2897]: No configuration found. Dec 13 08:58:43.288468 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 08:58:43.361790 systemd[1]: Reloading finished in 291 ms. Dec 13 08:58:43.394722 kubelet[2587]: I1213 08:58:43.394329 2587 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 08:58:43.394407 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 08:58:43.412690 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 08:58:43.413553 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
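
With the static apiserver container started, registration eventually succeeds ("Successfully registered node" above) and the lease and reflector errors stop. From that point the node object is queryable with the bootstrap admin credentials (illustrative; /etc/kubernetes/admin.conf is the conventional kubeadm path):

    # List nodes through the freshly started apiserver
    kubectl --kubeconfig /etc/kubernetes/admin.conf get nodes -o wide
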
Dec 13 08:58:43.421459 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 08:58:43.560339 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 08:58:43.571478 (kubelet)[2949]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 13 08:58:43.652447 kubelet[2949]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 08:58:43.652447 kubelet[2949]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 08:58:43.652447 kubelet[2949]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 08:58:43.652844 kubelet[2949]: I1213 08:58:43.652437 2949 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 08:58:43.660120 kubelet[2949]: I1213 08:58:43.660087 2949 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Dec 13 08:58:43.660120 kubelet[2949]: I1213 08:58:43.660117 2949 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 08:58:43.660357 kubelet[2949]: I1213 08:58:43.660341 2949 server.go:919] "Client rotation is on, will bootstrap in background" Dec 13 08:58:43.663901 kubelet[2949]: I1213 08:58:43.663330 2949 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Dec 13 08:58:43.668766 kubelet[2949]: I1213 08:58:43.668632 2949 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 08:58:43.678597 kubelet[2949]: I1213 08:58:43.678562 2949 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 13 08:58:43.679268 kubelet[2949]: I1213 08:58:43.679133 2949 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 08:58:43.679370 kubelet[2949]: I1213 08:58:43.679305 2949 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 08:58:43.679370 kubelet[2949]: I1213 08:58:43.679326 2949 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 08:58:43.679370 kubelet[2949]: I1213 08:58:43.679334 2949 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 08:58:43.679370 kubelet[2949]: I1213 08:58:43.679365 2949 state_mem.go:36] "Initialized new in-memory state store" Dec 13 08:58:43.679532 kubelet[2949]: I1213 08:58:43.679512 2949 kubelet.go:396] "Attempting to sync node with API server" Dec 13 08:58:43.679532 kubelet[2949]: I1213 08:58:43.679526 2949 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 08:58:43.679571 kubelet[2949]: I1213 08:58:43.679547 2949 kubelet.go:312] "Adding apiserver pod source" Dec 13 08:58:43.681267 kubelet[2949]: I1213 08:58:43.681228 2949 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 08:58:43.698716 kubelet[2949]: I1213 08:58:43.698613 2949 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Dec 13 08:58:43.700590 kubelet[2949]: I1213 08:58:43.700044 2949 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 08:58:43.700590 kubelet[2949]: I1213 08:58:43.700453 2949 server.go:1256] "Started kubelet" Dec 13 08:58:43.703559 kubelet[2949]: I1213 08:58:43.703456 2949 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 08:58:43.708819 kubelet[2949]: I1213 08:58:43.708793 2949 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 08:58:43.709823 kubelet[2949]: I1213 08:58:43.709800 2949 server.go:461] "Adding debug handlers to kubelet server" Dec 13 08:58:43.712463 kubelet[2949]: I1213 08:58:43.711669 2949 ratelimit.go:55] "Setting rate limiting for endpoint" 
service="podresources" qps=100 burstTokens=10 Dec 13 08:58:43.712655 kubelet[2949]: I1213 08:58:43.711835 2949 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 08:58:43.717914 kubelet[2949]: I1213 08:58:43.714422 2949 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 08:58:43.717914 kubelet[2949]: I1213 08:58:43.714512 2949 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Dec 13 08:58:43.717914 kubelet[2949]: I1213 08:58:43.714639 2949 reconciler_new.go:29] "Reconciler: start to sync state" Dec 13 08:58:43.724683 kubelet[2949]: I1213 08:58:43.724649 2949 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 08:58:43.727355 kubelet[2949]: I1213 08:58:43.727312 2949 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 08:58:43.730486 kubelet[2949]: I1213 08:58:43.729713 2949 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 13 08:58:43.730486 kubelet[2949]: I1213 08:58:43.729743 2949 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 08:58:43.730486 kubelet[2949]: I1213 08:58:43.729765 2949 kubelet.go:2329] "Starting kubelet main sync loop" Dec 13 08:58:43.730486 kubelet[2949]: E1213 08:58:43.729812 2949 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 08:58:43.732402 kubelet[2949]: I1213 08:58:43.732079 2949 factory.go:221] Registration of the containerd container factory successfully Dec 13 08:58:43.732608 kubelet[2949]: I1213 08:58:43.732594 2949 factory.go:221] Registration of the systemd container factory successfully Dec 13 08:58:43.810771 kubelet[2949]: I1213 08:58:43.810270 2949 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 08:58:43.810771 kubelet[2949]: I1213 08:58:43.810292 2949 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 08:58:43.810771 kubelet[2949]: I1213 08:58:43.810318 2949 state_mem.go:36] "Initialized new in-memory state store" Dec 13 08:58:43.810771 kubelet[2949]: I1213 08:58:43.810497 2949 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 13 08:58:43.810771 kubelet[2949]: I1213 08:58:43.810520 2949 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 13 08:58:43.810771 kubelet[2949]: I1213 08:58:43.810526 2949 policy_none.go:49] "None policy: Start" Dec 13 08:58:43.812309 kubelet[2949]: I1213 08:58:43.812006 2949 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 08:58:43.812309 kubelet[2949]: I1213 08:58:43.812053 2949 state_mem.go:35] "Initializing new in-memory state store" Dec 13 08:58:43.812309 kubelet[2949]: I1213 08:58:43.812291 2949 state_mem.go:75] "Updated machine memory state" Dec 13 08:58:43.813556 kubelet[2949]: I1213 08:58:43.813531 2949 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 08:58:43.813976 kubelet[2949]: I1213 08:58:43.813770 2949 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 08:58:43.825066 kubelet[2949]: I1213 08:58:43.825032 2949 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-2-1-0-c10bd8c210" Dec 13 08:58:43.830412 kubelet[2949]: I1213 08:58:43.830110 2949 topology_manager.go:215] "Topology 
Admit Handler" podUID="0ba06dcebb93e548c2aed916ad9e7d67" podNamespace="kube-system" podName="kube-apiserver-ci-4081-2-1-0-c10bd8c210" Dec 13 08:58:43.830412 kubelet[2949]: I1213 08:58:43.830230 2949 topology_manager.go:215] "Topology Admit Handler" podUID="2e39d3493de3bfc523d97b1bf0d1033a" podNamespace="kube-system" podName="kube-controller-manager-ci-4081-2-1-0-c10bd8c210" Dec 13 08:58:43.830412 kubelet[2949]: I1213 08:58:43.830299 2949 topology_manager.go:215] "Topology Admit Handler" podUID="3e56978806b821155aee68497f231402" podNamespace="kube-system" podName="kube-scheduler-ci-4081-2-1-0-c10bd8c210" Dec 13 08:58:43.848371 kubelet[2949]: E1213 08:58:43.848325 2949 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-4081-2-1-0-c10bd8c210\" already exists" pod="kube-system/kube-scheduler-ci-4081-2-1-0-c10bd8c210" Dec 13 08:58:43.848583 kubelet[2949]: E1213 08:58:43.848455 2949 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4081-2-1-0-c10bd8c210\" already exists" pod="kube-system/kube-apiserver-ci-4081-2-1-0-c10bd8c210" Dec 13 08:58:43.853600 kubelet[2949]: I1213 08:58:43.853535 2949 kubelet_node_status.go:112] "Node was previously registered" node="ci-4081-2-1-0-c10bd8c210" Dec 13 08:58:43.856040 kubelet[2949]: I1213 08:58:43.854269 2949 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081-2-1-0-c10bd8c210" Dec 13 08:58:43.917601 kubelet[2949]: I1213 08:58:43.917483 2949 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2e39d3493de3bfc523d97b1bf0d1033a-k8s-certs\") pod \"kube-controller-manager-ci-4081-2-1-0-c10bd8c210\" (UID: \"2e39d3493de3bfc523d97b1bf0d1033a\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-0-c10bd8c210" Dec 13 08:58:43.919227 kubelet[2949]: I1213 08:58:43.919204 2949 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2e39d3493de3bfc523d97b1bf0d1033a-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-2-1-0-c10bd8c210\" (UID: \"2e39d3493de3bfc523d97b1bf0d1033a\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-0-c10bd8c210" Dec 13 08:58:43.919397 kubelet[2949]: I1213 08:58:43.919384 2949 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0ba06dcebb93e548c2aed916ad9e7d67-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-2-1-0-c10bd8c210\" (UID: \"0ba06dcebb93e548c2aed916ad9e7d67\") " pod="kube-system/kube-apiserver-ci-4081-2-1-0-c10bd8c210" Dec 13 08:58:43.919482 kubelet[2949]: I1213 08:58:43.919473 2949 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2e39d3493de3bfc523d97b1bf0d1033a-ca-certs\") pod \"kube-controller-manager-ci-4081-2-1-0-c10bd8c210\" (UID: \"2e39d3493de3bfc523d97b1bf0d1033a\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-0-c10bd8c210" Dec 13 08:58:43.919553 kubelet[2949]: I1213 08:58:43.919545 2949 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/2e39d3493de3bfc523d97b1bf0d1033a-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-2-1-0-c10bd8c210\" (UID: \"2e39d3493de3bfc523d97b1bf0d1033a\") " 
pod="kube-system/kube-controller-manager-ci-4081-2-1-0-c10bd8c210" Dec 13 08:58:43.919655 kubelet[2949]: I1213 08:58:43.919636 2949 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2e39d3493de3bfc523d97b1bf0d1033a-kubeconfig\") pod \"kube-controller-manager-ci-4081-2-1-0-c10bd8c210\" (UID: \"2e39d3493de3bfc523d97b1bf0d1033a\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-0-c10bd8c210" Dec 13 08:58:43.919788 kubelet[2949]: I1213 08:58:43.919779 2949 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3e56978806b821155aee68497f231402-kubeconfig\") pod \"kube-scheduler-ci-4081-2-1-0-c10bd8c210\" (UID: \"3e56978806b821155aee68497f231402\") " pod="kube-system/kube-scheduler-ci-4081-2-1-0-c10bd8c210" Dec 13 08:58:43.919903 kubelet[2949]: I1213 08:58:43.919893 2949 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0ba06dcebb93e548c2aed916ad9e7d67-ca-certs\") pod \"kube-apiserver-ci-4081-2-1-0-c10bd8c210\" (UID: \"0ba06dcebb93e548c2aed916ad9e7d67\") " pod="kube-system/kube-apiserver-ci-4081-2-1-0-c10bd8c210" Dec 13 08:58:43.920098 kubelet[2949]: I1213 08:58:43.920085 2949 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0ba06dcebb93e548c2aed916ad9e7d67-k8s-certs\") pod \"kube-apiserver-ci-4081-2-1-0-c10bd8c210\" (UID: \"0ba06dcebb93e548c2aed916ad9e7d67\") " pod="kube-system/kube-apiserver-ci-4081-2-1-0-c10bd8c210" Dec 13 08:58:44.683309 kubelet[2949]: I1213 08:58:44.682097 2949 apiserver.go:52] "Watching apiserver" Dec 13 08:58:44.715899 kubelet[2949]: I1213 08:58:44.715836 2949 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Dec 13 08:58:44.811072 kubelet[2949]: I1213 08:58:44.811029 2949 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-2-1-0-c10bd8c210" podStartSLOduration=4.810970689 podStartE2EDuration="4.810970689s" podCreationTimestamp="2024-12-13 08:58:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 08:58:44.807941878 +0000 UTC m=+1.229252585" watchObservedRunningTime="2024-12-13 08:58:44.810970689 +0000 UTC m=+1.232281356" Dec 13 08:58:44.880569 kubelet[2949]: I1213 08:58:44.880528 2949 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081-2-1-0-c10bd8c210" podStartSLOduration=1.8804774979999999 podStartE2EDuration="1.880477498s" podCreationTimestamp="2024-12-13 08:58:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 08:58:44.83070712 +0000 UTC m=+1.252017827" watchObservedRunningTime="2024-12-13 08:58:44.880477498 +0000 UTC m=+1.301788205" Dec 13 08:58:44.927793 kubelet[2949]: I1213 08:58:44.927757 2949 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081-2-1-0-c10bd8c210" podStartSLOduration=2.927714067 podStartE2EDuration="2.927714067s" podCreationTimestamp="2024-12-13 08:58:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 
00:00:00 +0000 UTC" observedRunningTime="2024-12-13 08:58:44.881936743 +0000 UTC m=+1.303247450" watchObservedRunningTime="2024-12-13 08:58:44.927714067 +0000 UTC m=+1.349024774" Dec 13 08:58:49.121373 sudo[2034]: pam_unix(sudo:session): session closed for user root Dec 13 08:58:49.281251 sshd[2030]: pam_unix(sshd:session): session closed for user core Dec 13 08:58:49.288696 systemd[1]: sshd@6-138.199.144.99:22-139.178.89.65:51434.service: Deactivated successfully. Dec 13 08:58:49.295465 systemd[1]: session-7.scope: Deactivated successfully. Dec 13 08:58:49.297669 systemd-logind[1564]: Session 7 logged out. Waiting for processes to exit. Dec 13 08:58:49.299629 systemd-logind[1564]: Removed session 7. Dec 13 08:58:57.855509 kubelet[2949]: I1213 08:58:57.855291 2949 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 13 08:58:57.857463 containerd[1593]: time="2024-12-13T08:58:57.857409720Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Dec 13 08:58:57.859251 kubelet[2949]: I1213 08:58:57.859217 2949 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 13 08:58:58.210239 kubelet[2949]: I1213 08:58:58.210036 2949 topology_manager.go:215] "Topology Admit Handler" podUID="52aed96e-5cf4-41db-90aa-1eb824fe0f22" podNamespace="kube-system" podName="kube-proxy-xpmqz" Dec 13 08:58:58.224588 kubelet[2949]: I1213 08:58:58.224318 2949 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-szkwj\" (UniqueName: \"kubernetes.io/projected/52aed96e-5cf4-41db-90aa-1eb824fe0f22-kube-api-access-szkwj\") pod \"kube-proxy-xpmqz\" (UID: \"52aed96e-5cf4-41db-90aa-1eb824fe0f22\") " pod="kube-system/kube-proxy-xpmqz" Dec 13 08:58:58.225351 kubelet[2949]: I1213 08:58:58.225293 2949 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/52aed96e-5cf4-41db-90aa-1eb824fe0f22-lib-modules\") pod \"kube-proxy-xpmqz\" (UID: \"52aed96e-5cf4-41db-90aa-1eb824fe0f22\") " pod="kube-system/kube-proxy-xpmqz" Dec 13 08:58:58.225795 kubelet[2949]: I1213 08:58:58.225637 2949 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/52aed96e-5cf4-41db-90aa-1eb824fe0f22-kube-proxy\") pod \"kube-proxy-xpmqz\" (UID: \"52aed96e-5cf4-41db-90aa-1eb824fe0f22\") " pod="kube-system/kube-proxy-xpmqz" Dec 13 08:58:58.226495 kubelet[2949]: I1213 08:58:58.226236 2949 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/52aed96e-5cf4-41db-90aa-1eb824fe0f22-xtables-lock\") pod \"kube-proxy-xpmqz\" (UID: \"52aed96e-5cf4-41db-90aa-1eb824fe0f22\") " pod="kube-system/kube-proxy-xpmqz" Dec 13 08:58:58.287350 kubelet[2949]: I1213 08:58:58.287302 2949 topology_manager.go:215] "Topology Admit Handler" podUID="3f8fedc0-98bc-4b9c-8b1d-615ae9580d9b" podNamespace="tigera-operator" podName="tigera-operator-c7ccbd65-g89d8" Dec 13 08:58:58.329126 kubelet[2949]: I1213 08:58:58.327396 2949 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zpw8h\" (UniqueName: \"kubernetes.io/projected/3f8fedc0-98bc-4b9c-8b1d-615ae9580d9b-kube-api-access-zpw8h\") pod \"tigera-operator-c7ccbd65-g89d8\" (UID: 
\"3f8fedc0-98bc-4b9c-8b1d-615ae9580d9b\") " pod="tigera-operator/tigera-operator-c7ccbd65-g89d8" Dec 13 08:58:58.329126 kubelet[2949]: I1213 08:58:58.327552 2949 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/3f8fedc0-98bc-4b9c-8b1d-615ae9580d9b-var-lib-calico\") pod \"tigera-operator-c7ccbd65-g89d8\" (UID: \"3f8fedc0-98bc-4b9c-8b1d-615ae9580d9b\") " pod="tigera-operator/tigera-operator-c7ccbd65-g89d8" Dec 13 08:58:58.519099 containerd[1593]: time="2024-12-13T08:58:58.518927779Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-xpmqz,Uid:52aed96e-5cf4-41db-90aa-1eb824fe0f22,Namespace:kube-system,Attempt:0,}" Dec 13 08:58:58.549459 containerd[1593]: time="2024-12-13T08:58:58.549184418Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 08:58:58.549459 containerd[1593]: time="2024-12-13T08:58:58.549241178Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 08:58:58.549459 containerd[1593]: time="2024-12-13T08:58:58.549256498Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 08:58:58.549459 containerd[1593]: time="2024-12-13T08:58:58.549348298Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 08:58:58.590307 containerd[1593]: time="2024-12-13T08:58:58.590265338Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-xpmqz,Uid:52aed96e-5cf4-41db-90aa-1eb824fe0f22,Namespace:kube-system,Attempt:0,} returns sandbox id \"916fc8faab8782b3dbf55d62a77fadbfe20f4c7a28dda0041951a53846c7d4c4\"" Dec 13 08:58:58.594840 containerd[1593]: time="2024-12-13T08:58:58.594790156Z" level=info msg="CreateContainer within sandbox \"916fc8faab8782b3dbf55d62a77fadbfe20f4c7a28dda0041951a53846c7d4c4\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 13 08:58:58.599839 containerd[1593]: time="2024-12-13T08:58:58.599780575Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-c7ccbd65-g89d8,Uid:3f8fedc0-98bc-4b9c-8b1d-615ae9580d9b,Namespace:tigera-operator,Attempt:0,}" Dec 13 08:58:58.618673 containerd[1593]: time="2024-12-13T08:58:58.618299167Z" level=info msg="CreateContainer within sandbox \"916fc8faab8782b3dbf55d62a77fadbfe20f4c7a28dda0041951a53846c7d4c4\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"e84857c3af7ea8330dea332de5fda50676d4985800b2c1e81c67f886158fd0a1\"" Dec 13 08:58:58.620767 containerd[1593]: time="2024-12-13T08:58:58.620572176Z" level=info msg="StartContainer for \"e84857c3af7ea8330dea332de5fda50676d4985800b2c1e81c67f886158fd0a1\"" Dec 13 08:58:58.632864 containerd[1593]: time="2024-12-13T08:58:58.632746104Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 08:58:58.632864 containerd[1593]: time="2024-12-13T08:58:58.632842504Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 08:58:58.633161 containerd[1593]: time="2024-12-13T08:58:58.633049425Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 08:58:58.633437 containerd[1593]: time="2024-12-13T08:58:58.633389426Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 08:58:58.690526 containerd[1593]: time="2024-12-13T08:58:58.690485529Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-c7ccbd65-g89d8,Uid:3f8fedc0-98bc-4b9c-8b1d-615ae9580d9b,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"bd322001336a9b74259a6f2e974bba386fa8206ce5285b0de0af412a4a2e5b47\"" Dec 13 08:58:58.694801 containerd[1593]: time="2024-12-13T08:58:58.694759426Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\"" Dec 13 08:58:58.708212 containerd[1593]: time="2024-12-13T08:58:58.707658396Z" level=info msg="StartContainer for \"e84857c3af7ea8330dea332de5fda50676d4985800b2c1e81c67f886158fd0a1\" returns successfully" Dec 13 08:59:00.838467 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1184759947.mount: Deactivated successfully. Dec 13 08:59:01.176039 containerd[1593]: time="2024-12-13T08:59:01.175153731Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 08:59:01.177511 containerd[1593]: time="2024-12-13T08:59:01.177397699Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=19125988" Dec 13 08:59:01.178641 containerd[1593]: time="2024-12-13T08:59:01.178589864Z" level=info msg="ImageCreate event name:\"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 08:59:01.182157 containerd[1593]: time="2024-12-13T08:59:01.182094318Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 08:59:01.183423 containerd[1593]: time="2024-12-13T08:59:01.182752041Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"19120155\" in 2.487949895s" Dec 13 08:59:01.183423 containerd[1593]: time="2024-12-13T08:59:01.182790241Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\"" Dec 13 08:59:01.186136 containerd[1593]: time="2024-12-13T08:59:01.186103254Z" level=info msg="CreateContainer within sandbox \"bd322001336a9b74259a6f2e974bba386fa8206ce5285b0de0af412a4a2e5b47\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Dec 13 08:59:01.199135 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2116000426.mount: Deactivated successfully. 
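
The PullImage and "Pulled image ... in 2.487949895s" lines above are the kubelet driving containerd through the CRI image service: the kubelet asks for a tag and containerd answers with a digest-pinned reference once the pull completes. Below is a minimal Go sketch of that call; the containerd socket path and the use of the v1 CRI API are assumptions, and only the image reference itself is taken from the log.

```go
// Minimal sketch of a CRI image pull, assuming containerd's default
// socket path and the v1 CRI API. Only the image name comes from the log.
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Dial containerd's CRI endpoint over the local unix socket.
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
	defer cancel()

	// The same pull the kubelet requested for the tigera-operator pod.
	resp, err := runtimeapi.NewImageServiceClient(conn).PullImage(ctx,
		&runtimeapi.PullImageRequest{
			Image: &runtimeapi.ImageSpec{Image: "quay.io/tigera/operator:v1.36.2"},
		})
	if err != nil {
		log.Fatal(err)
	}
	// containerd resolves the tag to the digest-pinned reference that
	// the "Pulled image" log line reports.
	fmt.Println("image ref:", resp.ImageRef)
}
```

From the command line, `crictl pull quay.io/tigera/operator:v1.36.2` exercises the same RPC.
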
Dec 13 08:59:01.204495 containerd[1593]: time="2024-12-13T08:59:01.204200166Z" level=info msg="CreateContainer within sandbox \"bd322001336a9b74259a6f2e974bba386fa8206ce5285b0de0af412a4a2e5b47\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"80d524fbd3d1c87e94f0e50bc0bd45d003cefa335a11d724cdc3621a34adc885\"" Dec 13 08:59:01.207094 containerd[1593]: time="2024-12-13T08:59:01.206934896Z" level=info msg="StartContainer for \"80d524fbd3d1c87e94f0e50bc0bd45d003cefa335a11d724cdc3621a34adc885\"" Dec 13 08:59:01.260780 containerd[1593]: time="2024-12-13T08:59:01.260732389Z" level=info msg="StartContainer for \"80d524fbd3d1c87e94f0e50bc0bd45d003cefa335a11d724cdc3621a34adc885\" returns successfully" Dec 13 08:59:01.850481 kubelet[2949]: I1213 08:59:01.850233 2949 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-xpmqz" podStartSLOduration=3.850150241 podStartE2EDuration="3.850150241s" podCreationTimestamp="2024-12-13 08:58:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 08:58:58.842282482 +0000 UTC m=+15.263593229" watchObservedRunningTime="2024-12-13 08:59:01.850150241 +0000 UTC m=+18.271460988" Dec 13 08:59:01.853832 kubelet[2949]: I1213 08:59:01.853626 2949 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-c7ccbd65-g89d8" podStartSLOduration=1.362582748 podStartE2EDuration="3.853046852s" podCreationTimestamp="2024-12-13 08:58:58 +0000 UTC" firstStartedPulling="2024-12-13 08:58:58.692644218 +0000 UTC m=+15.113954925" lastFinishedPulling="2024-12-13 08:59:01.183108322 +0000 UTC m=+17.604419029" observedRunningTime="2024-12-13 08:59:01.848947476 +0000 UTC m=+18.270258223" watchObservedRunningTime="2024-12-13 08:59:01.853046852 +0000 UTC m=+18.274357599" Dec 13 08:59:05.438885 kubelet[2949]: I1213 08:59:05.438843 2949 topology_manager.go:215] "Topology Admit Handler" podUID="1cec0fb8-380e-4cf8-8ea5-15281bb0e819" podNamespace="calico-system" podName="calico-typha-54779dbbd6-kjrcr" Dec 13 08:59:05.479877 kubelet[2949]: I1213 08:59:05.478671 2949 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1cec0fb8-380e-4cf8-8ea5-15281bb0e819-tigera-ca-bundle\") pod \"calico-typha-54779dbbd6-kjrcr\" (UID: \"1cec0fb8-380e-4cf8-8ea5-15281bb0e819\") " pod="calico-system/calico-typha-54779dbbd6-kjrcr" Dec 13 08:59:05.479877 kubelet[2949]: I1213 08:59:05.478762 2949 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rdgcc\" (UniqueName: \"kubernetes.io/projected/1cec0fb8-380e-4cf8-8ea5-15281bb0e819-kube-api-access-rdgcc\") pod \"calico-typha-54779dbbd6-kjrcr\" (UID: \"1cec0fb8-380e-4cf8-8ea5-15281bb0e819\") " pod="calico-system/calico-typha-54779dbbd6-kjrcr" Dec 13 08:59:05.479877 kubelet[2949]: I1213 08:59:05.478852 2949 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/1cec0fb8-380e-4cf8-8ea5-15281bb0e819-typha-certs\") pod \"calico-typha-54779dbbd6-kjrcr\" (UID: \"1cec0fb8-380e-4cf8-8ea5-15281bb0e819\") " pod="calico-system/calico-typha-54779dbbd6-kjrcr" Dec 13 08:59:05.564404 kubelet[2949]: I1213 08:59:05.564342 2949 topology_manager.go:215] "Topology Admit Handler" podUID="623cf83f-643f-46af-93dc-c1aed6823c10" 
podNamespace="calico-system" podName="calico-node-htdcw" Dec 13 08:59:05.680731 kubelet[2949]: I1213 08:59:05.680690 2949 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/623cf83f-643f-46af-93dc-c1aed6823c10-var-lib-calico\") pod \"calico-node-htdcw\" (UID: \"623cf83f-643f-46af-93dc-c1aed6823c10\") " pod="calico-system/calico-node-htdcw" Dec 13 08:59:05.680903 kubelet[2949]: I1213 08:59:05.680748 2949 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/623cf83f-643f-46af-93dc-c1aed6823c10-lib-modules\") pod \"calico-node-htdcw\" (UID: \"623cf83f-643f-46af-93dc-c1aed6823c10\") " pod="calico-system/calico-node-htdcw" Dec 13 08:59:05.680903 kubelet[2949]: I1213 08:59:05.680780 2949 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ch9mq\" (UniqueName: \"kubernetes.io/projected/623cf83f-643f-46af-93dc-c1aed6823c10-kube-api-access-ch9mq\") pod \"calico-node-htdcw\" (UID: \"623cf83f-643f-46af-93dc-c1aed6823c10\") " pod="calico-system/calico-node-htdcw" Dec 13 08:59:05.680903 kubelet[2949]: I1213 08:59:05.680826 2949 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/623cf83f-643f-46af-93dc-c1aed6823c10-var-run-calico\") pod \"calico-node-htdcw\" (UID: \"623cf83f-643f-46af-93dc-c1aed6823c10\") " pod="calico-system/calico-node-htdcw" Dec 13 08:59:05.680903 kubelet[2949]: I1213 08:59:05.680853 2949 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/623cf83f-643f-46af-93dc-c1aed6823c10-cni-bin-dir\") pod \"calico-node-htdcw\" (UID: \"623cf83f-643f-46af-93dc-c1aed6823c10\") " pod="calico-system/calico-node-htdcw" Dec 13 08:59:05.680903 kubelet[2949]: I1213 08:59:05.680880 2949 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/623cf83f-643f-46af-93dc-c1aed6823c10-cni-net-dir\") pod \"calico-node-htdcw\" (UID: \"623cf83f-643f-46af-93dc-c1aed6823c10\") " pod="calico-system/calico-node-htdcw" Dec 13 08:59:05.681122 kubelet[2949]: I1213 08:59:05.680931 2949 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/623cf83f-643f-46af-93dc-c1aed6823c10-tigera-ca-bundle\") pod \"calico-node-htdcw\" (UID: \"623cf83f-643f-46af-93dc-c1aed6823c10\") " pod="calico-system/calico-node-htdcw" Dec 13 08:59:05.681122 kubelet[2949]: I1213 08:59:05.680960 2949 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/623cf83f-643f-46af-93dc-c1aed6823c10-node-certs\") pod \"calico-node-htdcw\" (UID: \"623cf83f-643f-46af-93dc-c1aed6823c10\") " pod="calico-system/calico-node-htdcw" Dec 13 08:59:05.681122 kubelet[2949]: I1213 08:59:05.680980 2949 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/623cf83f-643f-46af-93dc-c1aed6823c10-cni-log-dir\") pod \"calico-node-htdcw\" (UID: \"623cf83f-643f-46af-93dc-c1aed6823c10\") " pod="calico-system/calico-node-htdcw" Dec 13 08:59:05.681122 
kubelet[2949]: I1213 08:59:05.681106 2949 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/623cf83f-643f-46af-93dc-c1aed6823c10-flexvol-driver-host\") pod \"calico-node-htdcw\" (UID: \"623cf83f-643f-46af-93dc-c1aed6823c10\") " pod="calico-system/calico-node-htdcw" Dec 13 08:59:05.681226 kubelet[2949]: I1213 08:59:05.681134 2949 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/623cf83f-643f-46af-93dc-c1aed6823c10-xtables-lock\") pod \"calico-node-htdcw\" (UID: \"623cf83f-643f-46af-93dc-c1aed6823c10\") " pod="calico-system/calico-node-htdcw" Dec 13 08:59:05.681226 kubelet[2949]: I1213 08:59:05.681158 2949 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/623cf83f-643f-46af-93dc-c1aed6823c10-policysync\") pod \"calico-node-htdcw\" (UID: \"623cf83f-643f-46af-93dc-c1aed6823c10\") " pod="calico-system/calico-node-htdcw" Dec 13 08:59:05.700140 kubelet[2949]: I1213 08:59:05.698664 2949 topology_manager.go:215] "Topology Admit Handler" podUID="68d5070c-72c1-493f-9630-8955fb2d5362" podNamespace="calico-system" podName="csi-node-driver-4blkf" Dec 13 08:59:05.700140 kubelet[2949]: E1213 08:59:05.698943 2949 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4blkf" podUID="68d5070c-72c1-493f-9630-8955fb2d5362" Dec 13 08:59:05.753697 containerd[1593]: time="2024-12-13T08:59:05.753556348Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-54779dbbd6-kjrcr,Uid:1cec0fb8-380e-4cf8-8ea5-15281bb0e819,Namespace:calico-system,Attempt:0,}" Dec 13 08:59:05.783058 kubelet[2949]: I1213 08:59:05.782436 2949 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/68d5070c-72c1-493f-9630-8955fb2d5362-registration-dir\") pod \"csi-node-driver-4blkf\" (UID: \"68d5070c-72c1-493f-9630-8955fb2d5362\") " pod="calico-system/csi-node-driver-4blkf" Dec 13 08:59:05.783058 kubelet[2949]: I1213 08:59:05.782487 2949 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6h6vk\" (UniqueName: \"kubernetes.io/projected/68d5070c-72c1-493f-9630-8955fb2d5362-kube-api-access-6h6vk\") pod \"csi-node-driver-4blkf\" (UID: \"68d5070c-72c1-493f-9630-8955fb2d5362\") " pod="calico-system/csi-node-driver-4blkf" Dec 13 08:59:05.783058 kubelet[2949]: I1213 08:59:05.782527 2949 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/68d5070c-72c1-493f-9630-8955fb2d5362-kubelet-dir\") pod \"csi-node-driver-4blkf\" (UID: \"68d5070c-72c1-493f-9630-8955fb2d5362\") " pod="calico-system/csi-node-driver-4blkf" Dec 13 08:59:05.783058 kubelet[2949]: I1213 08:59:05.782698 2949 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/68d5070c-72c1-493f-9630-8955fb2d5362-varrun\") pod \"csi-node-driver-4blkf\" (UID: \"68d5070c-72c1-493f-9630-8955fb2d5362\") " 
pod="calico-system/csi-node-driver-4blkf" Dec 13 08:59:05.783058 kubelet[2949]: I1213 08:59:05.782721 2949 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/68d5070c-72c1-493f-9630-8955fb2d5362-socket-dir\") pod \"csi-node-driver-4blkf\" (UID: \"68d5070c-72c1-493f-9630-8955fb2d5362\") " pod="calico-system/csi-node-driver-4blkf" Dec 13 08:59:05.792459 kubelet[2949]: E1213 08:59:05.789178 2949 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 08:59:05.792459 kubelet[2949]: W1213 08:59:05.789217 2949 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 08:59:05.792459 kubelet[2949]: E1213 08:59:05.789250 2949 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 08:59:05.792459 kubelet[2949]: E1213 08:59:05.789736 2949 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 08:59:05.792459 kubelet[2949]: W1213 08:59:05.789749 2949 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 08:59:05.792459 kubelet[2949]: E1213 08:59:05.789871 2949 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 08:59:05.792459 kubelet[2949]: E1213 08:59:05.790925 2949 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 08:59:05.792459 kubelet[2949]: W1213 08:59:05.790940 2949 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 08:59:05.792459 kubelet[2949]: E1213 08:59:05.790956 2949 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 08:59:05.792459 kubelet[2949]: E1213 08:59:05.792391 2949 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 08:59:05.792851 kubelet[2949]: W1213 08:59:05.792404 2949 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 08:59:05.796193 kubelet[2949]: E1213 08:59:05.793102 2949 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 08:59:05.805431 kubelet[2949]: E1213 08:59:05.805396 2949 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 08:59:05.806129 kubelet[2949]: W1213 08:59:05.806105 2949 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 08:59:05.806486 kubelet[2949]: E1213 08:59:05.806472 2949 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 08:59:05.835547 kubelet[2949]: E1213 08:59:05.835507 2949 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 08:59:05.835547 kubelet[2949]: W1213 08:59:05.835537 2949 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 08:59:05.835783 kubelet[2949]: E1213 08:59:05.835562 2949 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 08:59:05.838252 containerd[1593]: time="2024-12-13T08:59:05.837504366Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 08:59:05.838252 containerd[1593]: time="2024-12-13T08:59:05.837566086Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 08:59:05.838252 containerd[1593]: time="2024-12-13T08:59:05.837582366Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 08:59:05.839599 containerd[1593]: time="2024-12-13T08:59:05.839115972Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 08:59:05.876897 containerd[1593]: time="2024-12-13T08:59:05.876854124Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-htdcw,Uid:623cf83f-643f-46af-93dc-c1aed6823c10,Namespace:calico-system,Attempt:0,}" Dec 13 08:59:05.885322 kubelet[2949]: E1213 08:59:05.885145 2949 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 08:59:05.885322 kubelet[2949]: W1213 08:59:05.885184 2949 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 08:59:05.885322 kubelet[2949]: E1213 08:59:05.885207 2949 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 08:59:05.908475 kubelet[2949]: E1213 08:59:05.908458 2949 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 08:59:05.908669 kubelet[2949]: W1213 08:59:05.908565 2949 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 08:59:05.908669 kubelet[2949]: E1213 08:59:05.908591 2949 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 08:59:05.910398 containerd[1593]: time="2024-12-13T08:59:05.910187538Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-54779dbbd6-kjrcr,Uid:1cec0fb8-380e-4cf8-8ea5-15281bb0e819,Namespace:calico-system,Attempt:0,} returns sandbox id \"fb01c31134ba808da2e5f9084ff505c6b859daa38b8457d235e408660aaee922\"" Dec 13 08:59:05.913495 containerd[1593]: time="2024-12-13T08:59:05.913235270Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\"" Dec 13 08:59:05.927976 kubelet[2949]: E1213 08:59:05.927866 2949 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 08:59:05.927976 kubelet[2949]: W1213 08:59:05.927895 2949 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 08:59:05.927976 kubelet[2949]: E1213 08:59:05.927919 2949 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 08:59:05.939375 containerd[1593]: time="2024-12-13T08:59:05.939078814Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 08:59:05.939375 containerd[1593]: time="2024-12-13T08:59:05.939138134Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 08:59:05.939375 containerd[1593]: time="2024-12-13T08:59:05.939149454Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 08:59:05.939375 containerd[1593]: time="2024-12-13T08:59:05.939242575Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 08:59:06.006756 containerd[1593]: time="2024-12-13T08:59:06.006557325Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-htdcw,Uid:623cf83f-643f-46af-93dc-c1aed6823c10,Namespace:calico-system,Attempt:0,} returns sandbox id \"3f37e21a489cc5adc5018fdfbdc6e2189a7e0b6b38845c9358a6284b46de1f98\"" Dec 13 08:59:07.299139 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount487679196.mount: Deactivated successfully. 
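
The RunPodSandbox entries above show the other half of the CRI: the kubelet sends a PodSandboxConfig and containerd returns the sandbox id that later CreateContainer calls reference (fb01c311... for calico-typha, 3f37e21a... for calico-node). Here is a sketch of the same request, stripped down to the metadata the log prints; the metadata fields are copied from the calico-node entry, the connection setup is assumed as in the previous sketch, and a real kubelet request also carries the log directory, DNS config, labels, and cgroup settings, all omitted here.

```go
// Sketch of a CRI RunPodSandbox call, reduced to the PodSandboxMetadata
// fields that containerd echoes in the log. Not a complete kubelet request.
package main

import (
	"context"
	"fmt"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	resp, err := runtimeapi.NewRuntimeServiceClient(conn).RunPodSandbox(
		context.Background(), &runtimeapi.RunPodSandboxRequest{
			Config: &runtimeapi.PodSandboxConfig{
				Metadata: &runtimeapi.PodSandboxMetadata{
					Name:      "calico-node-htdcw",
					Namespace: "calico-system",
					Uid:       "623cf83f-643f-46af-93dc-c1aed6823c10",
					Attempt:   0,
				},
			},
		})
	if err != nil {
		log.Fatal(err)
	}
	// The returned id is the handle later CreateContainer calls use
	// (3f37e21a... in the log above).
	fmt.Println("sandbox id:", resp.PodSandboxId)
}
```
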
Dec 13 08:59:07.685393 containerd[1593]: time="2024-12-13T08:59:07.685324507Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 08:59:07.686994 containerd[1593]: time="2024-12-13T08:59:07.686901594Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=29231308" Dec 13 08:59:07.688228 containerd[1593]: time="2024-12-13T08:59:07.688176079Z" level=info msg="ImageCreate event name:\"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 08:59:07.691755 containerd[1593]: time="2024-12-13T08:59:07.691339252Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 08:59:07.693080 containerd[1593]: time="2024-12-13T08:59:07.692977458Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"29231162\" in 1.779692388s" Dec 13 08:59:07.693181 containerd[1593]: time="2024-12-13T08:59:07.693062899Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\"" Dec 13 08:59:07.694181 containerd[1593]: time="2024-12-13T08:59:07.693911382Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Dec 13 08:59:07.717872 containerd[1593]: time="2024-12-13T08:59:07.717828559Z" level=info msg="CreateContainer within sandbox \"fb01c31134ba808da2e5f9084ff505c6b859daa38b8457d235e408660aaee922\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Dec 13 08:59:07.735819 kubelet[2949]: E1213 08:59:07.734144 2949 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4blkf" podUID="68d5070c-72c1-493f-9630-8955fb2d5362" Dec 13 08:59:07.743275 containerd[1593]: time="2024-12-13T08:59:07.743225902Z" level=info msg="CreateContainer within sandbox \"fb01c31134ba808da2e5f9084ff505c6b859daa38b8457d235e408660aaee922\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"ab6f2b136ded480a9b16bac1778dfafa1106b590050eaaac0d5b5366c4881d21\"" Dec 13 08:59:07.744799 containerd[1593]: time="2024-12-13T08:59:07.744748708Z" level=info msg="StartContainer for \"ab6f2b136ded480a9b16bac1778dfafa1106b590050eaaac0d5b5366c4881d21\"" Dec 13 08:59:07.818557 containerd[1593]: time="2024-12-13T08:59:07.818456166Z" level=info msg="StartContainer for \"ab6f2b136ded480a9b16bac1778dfafa1106b590050eaaac0d5b5366c4881d21\" returns successfully" Dec 13 08:59:07.874203 kubelet[2949]: E1213 08:59:07.874077 2949 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 08:59:07.874203 kubelet[2949]: W1213 08:59:07.874098 2949 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: 
[init], error: executable file not found in $PATH, output: "" Dec 13 08:59:07.874203 kubelet[2949]: E1213 08:59:07.874120 2949 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[The identical three-entry FlexVolume failure (driver-call.go:262 "Failed to unmarshal output for command: init", driver-call.go:149 "executable file not found in $PATH", plugins.go:730 "Error dynamically probing plugins") repeats through Dec 13 08:59:07.913, differing only in timestamps.]
Dec 13 08:59:08.871847 kubelet[2949]: I1213 08:59:08.871812 2949 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
[The same FlexVolume failure sequence repeats from Dec 13 08:59:08.889 through Dec 13 08:59:08.921, again differing only in timestamps.]
Dec 13 08:59:09.604303 containerd[1593]: time="2024-12-13T08:59:09.604244705Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 08:59:09.605316 containerd[1593]: time="2024-12-13T08:59:09.605278749Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=5117811" Dec 13 08:59:09.606407 containerd[1593]: time="2024-12-13T08:59:09.606040993Z" level=info msg="ImageCreate event name:\"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 08:59:09.610613 containerd[1593]: time="2024-12-13T08:59:09.610572011Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 08:59:09.611301 containerd[1593]: time="2024-12-13T08:59:09.611252374Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6487425\" in 1.917294472s" Dec 13 08:59:09.611301 containerd[1593]: time="2024-12-13T08:59:09.611298054Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\"" Dec 13 08:59:09.615816 containerd[1593]: time="2024-12-13T08:59:09.615759592Z" level=info msg="CreateContainer within sandbox \"3f37e21a489cc5adc5018fdfbdc6e2189a7e0b6b38845c9358a6284b46de1f98\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Dec 13 08:59:09.635467 containerd[1593]: time="2024-12-13T08:59:09.635407432Z" level=info msg="CreateContainer within sandbox \"3f37e21a489cc5adc5018fdfbdc6e2189a7e0b6b38845c9358a6284b46de1f98\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"9939f73c09e6754bc7fb8a39be1cb3aace81ad81dfe5e9a8e470a606d4410daf\"" Dec 13 08:59:09.637443 containerd[1593]: time="2024-12-13T08:59:09.637385080Z" level=info msg="StartContainer for \"9939f73c09e6754bc7fb8a39be1cb3aace81ad81dfe5e9a8e470a606d4410daf\"" Dec 13 08:59:09.681913 systemd[1]: run-containerd-runc-k8s.io-9939f73c09e6754bc7fb8a39be1cb3aace81ad81dfe5e9a8e470a606d4410daf-runc.xsjDh2.mount: Deactivated successfully.
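[Editor's note] The pod2daemon-flexvol image has now been pulled and the flexvol-driver init container starts. Its job is to drop the uds driver binary into kubelet's FlexVolume plugin directory, the exact path probed in the failures above. A rough sketch of that installation step; the source path /usr/local/bin/flexvol is an assumption for illustration, only the destination directory comes from the log:

    // Rough sketch of the flexvol-driver init container's job: copy the uds
    // driver into kubelet's FlexVolume plugin directory and mark it
    // executable. The source path below is an assumption for illustration.
    package main

    import (
        "io"
        "log"
        "os"
        "path/filepath"
    )

    func installDriver(src, dir string) error {
        if err := os.MkdirAll(dir, 0o755); err != nil {
            return err
        }
        in, err := os.Open(src)
        if err != nil {
            return err
        }
        defer in.Close()
        // 0o755 so kubelet can exec the binary it probes with "init".
        out, err := os.OpenFile(filepath.Join(dir, "uds"), os.O_CREATE|os.O_TRUNC|os.O_WRONLY, 0o755)
        if err != nil {
            return err
        }
        defer out.Close()
        _, err = io.Copy(out, in)
        return err
    }

    func main() {
        if err := installDriver("/usr/local/bin/flexvol", // hypothetical source inside the image
            "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds"); err != nil {
            log.Fatal(err)
        }
    }

As an init container it runs to completion and exits, which is why the log below shows its task rootfs being unmounted and its shim disconnecting rather than a long-running process.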
Dec 13 08:59:09.714262 containerd[1593]: time="2024-12-13T08:59:09.713713871Z" level=info msg="StartContainer for \"9939f73c09e6754bc7fb8a39be1cb3aace81ad81dfe5e9a8e470a606d4410daf\" returns successfully" Dec 13 08:59:09.731239 kubelet[2949]: E1213 08:59:09.731196 2949 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4blkf" podUID="68d5070c-72c1-493f-9630-8955fb2d5362" Dec 13 08:59:09.779419 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9939f73c09e6754bc7fb8a39be1cb3aace81ad81dfe5e9a8e470a606d4410daf-rootfs.mount: Deactivated successfully. Dec 13 08:59:09.903083 kubelet[2949]: I1213 08:59:09.897587 2949 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-54779dbbd6-kjrcr" podStartSLOduration=3.116702268 podStartE2EDuration="4.89753906s" podCreationTimestamp="2024-12-13 08:59:05 +0000 UTC" firstStartedPulling="2024-12-13 08:59:05.912635668 +0000 UTC m=+22.333946375" lastFinishedPulling="2024-12-13 08:59:07.69347238 +0000 UTC m=+24.114783167" observedRunningTime="2024-12-13 08:59:07.895193717 +0000 UTC m=+24.316504424" watchObservedRunningTime="2024-12-13 08:59:09.89753906 +0000 UTC m=+26.318849767" Dec 13 08:59:09.912614 containerd[1593]: time="2024-12-13T08:59:09.912485241Z" level=info msg="shim disconnected" id=9939f73c09e6754bc7fb8a39be1cb3aace81ad81dfe5e9a8e470a606d4410daf namespace=k8s.io Dec 13 08:59:09.912614 containerd[1593]: time="2024-12-13T08:59:09.912580002Z" level=warning msg="cleaning up after shim disconnected" id=9939f73c09e6754bc7fb8a39be1cb3aace81ad81dfe5e9a8e470a606d4410daf namespace=k8s.io Dec 13 08:59:09.912614 containerd[1593]: time="2024-12-13T08:59:09.912605322Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 08:59:10.885749 containerd[1593]: time="2024-12-13T08:59:10.885424777Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Dec 13 08:59:11.731371 kubelet[2949]: E1213 08:59:11.730880 2949 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4blkf" podUID="68d5070c-72c1-493f-9630-8955fb2d5362" Dec 13 08:59:13.443995 containerd[1593]: time="2024-12-13T08:59:13.443070326Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 08:59:13.444541 containerd[1593]: time="2024-12-13T08:59:13.444507292Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=89703123" Dec 13 08:59:13.445415 containerd[1593]: time="2024-12-13T08:59:13.445379895Z" level=info msg="ImageCreate event name:\"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 08:59:13.451336 containerd[1593]: time="2024-12-13T08:59:13.451275400Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 08:59:13.453585 containerd[1593]: time="2024-12-13T08:59:13.453472849Z" level=info msg="Pulled image 
\"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"91072777\" in 2.567995871s" Dec 13 08:59:13.453956 containerd[1593]: time="2024-12-13T08:59:13.453757490Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\"" Dec 13 08:59:13.457077 containerd[1593]: time="2024-12-13T08:59:13.457034543Z" level=info msg="CreateContainer within sandbox \"3f37e21a489cc5adc5018fdfbdc6e2189a7e0b6b38845c9358a6284b46de1f98\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Dec 13 08:59:13.482043 containerd[1593]: time="2024-12-13T08:59:13.481379884Z" level=info msg="CreateContainer within sandbox \"3f37e21a489cc5adc5018fdfbdc6e2189a7e0b6b38845c9358a6284b46de1f98\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"6e776bbe5294e2137684e492929345bb7858023d9632994b1730c0f5785643f1\"" Dec 13 08:59:13.486176 containerd[1593]: time="2024-12-13T08:59:13.483263052Z" level=info msg="StartContainer for \"6e776bbe5294e2137684e492929345bb7858023d9632994b1730c0f5785643f1\"" Dec 13 08:59:13.594724 containerd[1593]: time="2024-12-13T08:59:13.594652991Z" level=info msg="StartContainer for \"6e776bbe5294e2137684e492929345bb7858023d9632994b1730c0f5785643f1\" returns successfully" Dec 13 08:59:13.732138 kubelet[2949]: E1213 08:59:13.731621 2949 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4blkf" podUID="68d5070c-72c1-493f-9630-8955fb2d5362" Dec 13 08:59:14.239542 containerd[1593]: time="2024-12-13T08:59:14.239112651Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 08:59:14.259653 kubelet[2949]: I1213 08:59:14.259334 2949 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Dec 13 08:59:14.271799 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6e776bbe5294e2137684e492929345bb7858023d9632994b1730c0f5785643f1-rootfs.mount: Deactivated successfully. 
Dec 13 08:59:14.305055 kubelet[2949]: I1213 08:59:14.302967 2949 topology_manager.go:215] "Topology Admit Handler" podUID="9e0d6dbd-04fe-48c2-bc18-98799d274260" podNamespace="kube-system" podName="coredns-76f75df574-6sr77" Dec 13 08:59:14.308126 kubelet[2949]: I1213 08:59:14.307223 2949 topology_manager.go:215] "Topology Admit Handler" podUID="194f2772-a0aa-4063-84c0-dbb7d890f78d" podNamespace="kube-system" podName="coredns-76f75df574-ljb78" Dec 13 08:59:14.323370 kubelet[2949]: I1213 08:59:14.319067 2949 topology_manager.go:215] "Topology Admit Handler" podUID="ebf58ab1-e4cb-4792-853d-f90274331666" podNamespace="calico-apiserver" podName="calico-apiserver-55775f8f-2kt75" Dec 13 08:59:14.335516 kubelet[2949]: I1213 08:59:14.335471 2949 topology_manager.go:215] "Topology Admit Handler" podUID="ebb43a46-8408-46f0-b3a8-196003d5a5b9" podNamespace="calico-apiserver" podName="calico-apiserver-55775f8f-ncv49" Dec 13 08:59:14.335979 kubelet[2949]: I1213 08:59:14.335961 2949 topology_manager.go:215] "Topology Admit Handler" podUID="7dd748b5-18dc-47c1-b24f-5ff405335976" podNamespace="calico-system" podName="calico-kube-controllers-645f8cf8f-sbxh2" Dec 13 08:59:14.357794 kubelet[2949]: I1213 08:59:14.357752 2949 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-krzkc\" (UniqueName: \"kubernetes.io/projected/ebf58ab1-e4cb-4792-853d-f90274331666-kube-api-access-krzkc\") pod \"calico-apiserver-55775f8f-2kt75\" (UID: \"ebf58ab1-e4cb-4792-853d-f90274331666\") " pod="calico-apiserver/calico-apiserver-55775f8f-2kt75" Dec 13 08:59:14.358405 kubelet[2949]: I1213 08:59:14.358004 2949 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bx7ml\" (UniqueName: \"kubernetes.io/projected/9e0d6dbd-04fe-48c2-bc18-98799d274260-kube-api-access-bx7ml\") pod \"coredns-76f75df574-6sr77\" (UID: \"9e0d6dbd-04fe-48c2-bc18-98799d274260\") " pod="kube-system/coredns-76f75df574-6sr77" Dec 13 08:59:14.358405 kubelet[2949]: I1213 08:59:14.358107 2949 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-whmvh\" (UniqueName: \"kubernetes.io/projected/7dd748b5-18dc-47c1-b24f-5ff405335976-kube-api-access-whmvh\") pod \"calico-kube-controllers-645f8cf8f-sbxh2\" (UID: \"7dd748b5-18dc-47c1-b24f-5ff405335976\") " pod="calico-system/calico-kube-controllers-645f8cf8f-sbxh2" Dec 13 08:59:14.358405 kubelet[2949]: I1213 08:59:14.358138 2949 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5jccw\" (UniqueName: \"kubernetes.io/projected/ebb43a46-8408-46f0-b3a8-196003d5a5b9-kube-api-access-5jccw\") pod \"calico-apiserver-55775f8f-ncv49\" (UID: \"ebb43a46-8408-46f0-b3a8-196003d5a5b9\") " pod="calico-apiserver/calico-apiserver-55775f8f-ncv49" Dec 13 08:59:14.358405 kubelet[2949]: I1213 08:59:14.358162 2949 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/ebb43a46-8408-46f0-b3a8-196003d5a5b9-calico-apiserver-certs\") pod \"calico-apiserver-55775f8f-ncv49\" (UID: \"ebb43a46-8408-46f0-b3a8-196003d5a5b9\") " pod="calico-apiserver/calico-apiserver-55775f8f-ncv49" Dec 13 08:59:14.358405 kubelet[2949]: I1213 08:59:14.358183 2949 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/9e0d6dbd-04fe-48c2-bc18-98799d274260-config-volume\") pod \"coredns-76f75df574-6sr77\" (UID: \"9e0d6dbd-04fe-48c2-bc18-98799d274260\") " pod="kube-system/coredns-76f75df574-6sr77" Dec 13 08:59:14.358686 kubelet[2949]: I1213 08:59:14.358208 2949 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7dd748b5-18dc-47c1-b24f-5ff405335976-tigera-ca-bundle\") pod \"calico-kube-controllers-645f8cf8f-sbxh2\" (UID: \"7dd748b5-18dc-47c1-b24f-5ff405335976\") " pod="calico-system/calico-kube-controllers-645f8cf8f-sbxh2" Dec 13 08:59:14.358686 kubelet[2949]: I1213 08:59:14.358232 2949 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zn69g\" (UniqueName: \"kubernetes.io/projected/194f2772-a0aa-4063-84c0-dbb7d890f78d-kube-api-access-zn69g\") pod \"coredns-76f75df574-ljb78\" (UID: \"194f2772-a0aa-4063-84c0-dbb7d890f78d\") " pod="kube-system/coredns-76f75df574-ljb78" Dec 13 08:59:14.358686 kubelet[2949]: I1213 08:59:14.358255 2949 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/194f2772-a0aa-4063-84c0-dbb7d890f78d-config-volume\") pod \"coredns-76f75df574-ljb78\" (UID: \"194f2772-a0aa-4063-84c0-dbb7d890f78d\") " pod="kube-system/coredns-76f75df574-ljb78" Dec 13 08:59:14.358686 kubelet[2949]: I1213 08:59:14.358276 2949 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/ebf58ab1-e4cb-4792-853d-f90274331666-calico-apiserver-certs\") pod \"calico-apiserver-55775f8f-2kt75\" (UID: \"ebf58ab1-e4cb-4792-853d-f90274331666\") " pod="calico-apiserver/calico-apiserver-55775f8f-2kt75" Dec 13 08:59:14.428405 containerd[1593]: time="2024-12-13T08:59:14.428337194Z" level=info msg="shim disconnected" id=6e776bbe5294e2137684e492929345bb7858023d9632994b1730c0f5785643f1 namespace=k8s.io Dec 13 08:59:14.428906 containerd[1593]: time="2024-12-13T08:59:14.428646995Z" level=warning msg="cleaning up after shim disconnected" id=6e776bbe5294e2137684e492929345bb7858023d9632994b1730c0f5785643f1 namespace=k8s.io Dec 13 08:59:14.428906 containerd[1593]: time="2024-12-13T08:59:14.428670995Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 08:59:14.441976 containerd[1593]: time="2024-12-13T08:59:14.441063567Z" level=warning msg="cleanup warnings time=\"2024-12-13T08:59:14Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Dec 13 08:59:14.635299 containerd[1593]: time="2024-12-13T08:59:14.635173409Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-ljb78,Uid:194f2772-a0aa-4063-84c0-dbb7d890f78d,Namespace:kube-system,Attempt:0,}" Dec 13 08:59:14.646555 containerd[1593]: time="2024-12-13T08:59:14.646321615Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-55775f8f-2kt75,Uid:ebf58ab1-e4cb-4792-853d-f90274331666,Namespace:calico-apiserver,Attempt:0,}" Dec 13 08:59:14.648293 containerd[1593]: time="2024-12-13T08:59:14.648051063Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-55775f8f-ncv49,Uid:ebb43a46-8408-46f0-b3a8-196003d5a5b9,Namespace:calico-apiserver,Attempt:0,}" Dec 13 08:59:14.664088 containerd[1593]: 
time="2024-12-13T08:59:14.664034969Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-645f8cf8f-sbxh2,Uid:7dd748b5-18dc-47c1-b24f-5ff405335976,Namespace:calico-system,Attempt:0,}" Dec 13 08:59:14.665602 containerd[1593]: time="2024-12-13T08:59:14.664846892Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-6sr77,Uid:9e0d6dbd-04fe-48c2-bc18-98799d274260,Namespace:kube-system,Attempt:0,}" Dec 13 08:59:14.807732 containerd[1593]: time="2024-12-13T08:59:14.807670163Z" level=error msg="Failed to destroy network for sandbox \"cf5028c1d1e8e01592bc37054413482de572d625ecd85ef54f13ca1d5f0245a2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 08:59:14.810260 containerd[1593]: time="2024-12-13T08:59:14.810195053Z" level=error msg="encountered an error cleaning up failed sandbox \"cf5028c1d1e8e01592bc37054413482de572d625ecd85ef54f13ca1d5f0245a2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 08:59:14.810657 containerd[1593]: time="2024-12-13T08:59:14.810616295Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-ljb78,Uid:194f2772-a0aa-4063-84c0-dbb7d890f78d,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"cf5028c1d1e8e01592bc37054413482de572d625ecd85ef54f13ca1d5f0245a2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 08:59:14.811323 kubelet[2949]: E1213 08:59:14.810902 2949 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cf5028c1d1e8e01592bc37054413482de572d625ecd85ef54f13ca1d5f0245a2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 08:59:14.811323 kubelet[2949]: E1213 08:59:14.810965 2949 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cf5028c1d1e8e01592bc37054413482de572d625ecd85ef54f13ca1d5f0245a2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-ljb78" Dec 13 08:59:14.811323 kubelet[2949]: E1213 08:59:14.810988 2949 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cf5028c1d1e8e01592bc37054413482de572d625ecd85ef54f13ca1d5f0245a2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-ljb78" Dec 13 08:59:14.812794 kubelet[2949]: E1213 08:59:14.811082 2949 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-ljb78_kube-system(194f2772-a0aa-4063-84c0-dbb7d890f78d)\" with CreatePodSandboxError: \"Failed to create 
sandbox for pod \\\"coredns-76f75df574-ljb78_kube-system(194f2772-a0aa-4063-84c0-dbb7d890f78d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cf5028c1d1e8e01592bc37054413482de572d625ecd85ef54f13ca1d5f0245a2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-ljb78" podUID="194f2772-a0aa-4063-84c0-dbb7d890f78d" Dec 13 08:59:14.845040 containerd[1593]: time="2024-12-13T08:59:14.844436475Z" level=error msg="Failed to destroy network for sandbox \"0f218e6c1a541400c97c31881ba6086a8322366010567263854d19f126daf6ac\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 08:59:14.845270 containerd[1593]: time="2024-12-13T08:59:14.845220518Z" level=error msg="encountered an error cleaning up failed sandbox \"0f218e6c1a541400c97c31881ba6086a8322366010567263854d19f126daf6ac\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 08:59:14.845412 containerd[1593]: time="2024-12-13T08:59:14.845377839Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-6sr77,Uid:9e0d6dbd-04fe-48c2-bc18-98799d274260,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0f218e6c1a541400c97c31881ba6086a8322366010567263854d19f126daf6ac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 08:59:14.846283 kubelet[2949]: E1213 08:59:14.846249 2949 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0f218e6c1a541400c97c31881ba6086a8322366010567263854d19f126daf6ac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 08:59:14.846513 kubelet[2949]: E1213 08:59:14.846320 2949 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0f218e6c1a541400c97c31881ba6086a8322366010567263854d19f126daf6ac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-6sr77" Dec 13 08:59:14.846513 kubelet[2949]: E1213 08:59:14.846341 2949 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0f218e6c1a541400c97c31881ba6086a8322366010567263854d19f126daf6ac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-6sr77" Dec 13 08:59:14.846513 kubelet[2949]: E1213 08:59:14.846403 2949 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"coredns-76f75df574-6sr77_kube-system(9e0d6dbd-04fe-48c2-bc18-98799d274260)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-6sr77_kube-system(9e0d6dbd-04fe-48c2-bc18-98799d274260)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0f218e6c1a541400c97c31881ba6086a8322366010567263854d19f126daf6ac\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-6sr77" podUID="9e0d6dbd-04fe-48c2-bc18-98799d274260" Dec 13 08:59:14.851091 containerd[1593]: time="2024-12-13T08:59:14.849266775Z" level=error msg="Failed to destroy network for sandbox \"0e6afc49edbf356127d65bfb8889e1aecee79f2be8071e1176dde2d3660d8aa0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 08:59:14.852164 containerd[1593]: time="2024-12-13T08:59:14.851891506Z" level=error msg="encountered an error cleaning up failed sandbox \"0e6afc49edbf356127d65bfb8889e1aecee79f2be8071e1176dde2d3660d8aa0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 08:59:14.852164 containerd[1593]: time="2024-12-13T08:59:14.851973146Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-55775f8f-2kt75,Uid:ebf58ab1-e4cb-4792-853d-f90274331666,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0e6afc49edbf356127d65bfb8889e1aecee79f2be8071e1176dde2d3660d8aa0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 08:59:14.852715 kubelet[2949]: E1213 08:59:14.852519 2949 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0e6afc49edbf356127d65bfb8889e1aecee79f2be8071e1176dde2d3660d8aa0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 08:59:14.852715 kubelet[2949]: E1213 08:59:14.852578 2949 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0e6afc49edbf356127d65bfb8889e1aecee79f2be8071e1176dde2d3660d8aa0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-55775f8f-2kt75" Dec 13 08:59:14.852715 kubelet[2949]: E1213 08:59:14.852600 2949 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0e6afc49edbf356127d65bfb8889e1aecee79f2be8071e1176dde2d3660d8aa0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-55775f8f-2kt75" Dec 13 08:59:14.852940 kubelet[2949]: E1213 08:59:14.852656 
2949 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-55775f8f-2kt75_calico-apiserver(ebf58ab1-e4cb-4792-853d-f90274331666)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-55775f8f-2kt75_calico-apiserver(ebf58ab1-e4cb-4792-853d-f90274331666)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0e6afc49edbf356127d65bfb8889e1aecee79f2be8071e1176dde2d3660d8aa0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-55775f8f-2kt75" podUID="ebf58ab1-e4cb-4792-853d-f90274331666" Dec 13 08:59:14.865515 containerd[1593]: time="2024-12-13T08:59:14.865456802Z" level=error msg="Failed to destroy network for sandbox \"c01e2fab1976e1c4f3ac367fa2c0800ce67b9c8c563305fd692c747dd0fddc0a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 08:59:14.866595 containerd[1593]: time="2024-12-13T08:59:14.866393326Z" level=error msg="encountered an error cleaning up failed sandbox \"c01e2fab1976e1c4f3ac367fa2c0800ce67b9c8c563305fd692c747dd0fddc0a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 08:59:14.866595 containerd[1593]: time="2024-12-13T08:59:14.866465926Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-645f8cf8f-sbxh2,Uid:7dd748b5-18dc-47c1-b24f-5ff405335976,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c01e2fab1976e1c4f3ac367fa2c0800ce67b9c8c563305fd692c747dd0fddc0a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 08:59:14.866831 kubelet[2949]: E1213 08:59:14.866806 2949 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c01e2fab1976e1c4f3ac367fa2c0800ce67b9c8c563305fd692c747dd0fddc0a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 08:59:14.866890 kubelet[2949]: E1213 08:59:14.866859 2949 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c01e2fab1976e1c4f3ac367fa2c0800ce67b9c8c563305fd692c747dd0fddc0a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-645f8cf8f-sbxh2" Dec 13 08:59:14.866890 kubelet[2949]: E1213 08:59:14.866884 2949 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c01e2fab1976e1c4f3ac367fa2c0800ce67b9c8c563305fd692c747dd0fddc0a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-645f8cf8f-sbxh2" Dec 13 08:59:14.868099 kubelet[2949]: E1213 08:59:14.866938 2949 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-645f8cf8f-sbxh2_calico-system(7dd748b5-18dc-47c1-b24f-5ff405335976)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-645f8cf8f-sbxh2_calico-system(7dd748b5-18dc-47c1-b24f-5ff405335976)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c01e2fab1976e1c4f3ac367fa2c0800ce67b9c8c563305fd692c747dd0fddc0a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-645f8cf8f-sbxh2" podUID="7dd748b5-18dc-47c1-b24f-5ff405335976" Dec 13 08:59:14.872110 containerd[1593]: time="2024-12-13T08:59:14.872059469Z" level=error msg="Failed to destroy network for sandbox \"8ebe8a031831ef20a796b2db9e54f4bfcf115ccec4d1a3330a11545ac1712fdf\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 08:59:14.872590 containerd[1593]: time="2024-12-13T08:59:14.872561351Z" level=error msg="encountered an error cleaning up failed sandbox \"8ebe8a031831ef20a796b2db9e54f4bfcf115ccec4d1a3330a11545ac1712fdf\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 08:59:14.872864 containerd[1593]: time="2024-12-13T08:59:14.872764792Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-55775f8f-ncv49,Uid:ebb43a46-8408-46f0-b3a8-196003d5a5b9,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8ebe8a031831ef20a796b2db9e54f4bfcf115ccec4d1a3330a11545ac1712fdf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 08:59:14.873390 kubelet[2949]: E1213 08:59:14.873151 2949 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8ebe8a031831ef20a796b2db9e54f4bfcf115ccec4d1a3330a11545ac1712fdf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 08:59:14.873390 kubelet[2949]: E1213 08:59:14.873211 2949 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8ebe8a031831ef20a796b2db9e54f4bfcf115ccec4d1a3330a11545ac1712fdf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-55775f8f-ncv49" Dec 13 08:59:14.873390 kubelet[2949]: E1213 08:59:14.873230 2949 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8ebe8a031831ef20a796b2db9e54f4bfcf115ccec4d1a3330a11545ac1712fdf\": 
plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-55775f8f-ncv49" Dec 13 08:59:14.873577 kubelet[2949]: E1213 08:59:14.873283 2949 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-55775f8f-ncv49_calico-apiserver(ebb43a46-8408-46f0-b3a8-196003d5a5b9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-55775f8f-ncv49_calico-apiserver(ebb43a46-8408-46f0-b3a8-196003d5a5b9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8ebe8a031831ef20a796b2db9e54f4bfcf115ccec4d1a3330a11545ac1712fdf\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-55775f8f-ncv49" podUID="ebb43a46-8408-46f0-b3a8-196003d5a5b9" Dec 13 08:59:14.900235 kubelet[2949]: I1213 08:59:14.900091 2949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8ebe8a031831ef20a796b2db9e54f4bfcf115ccec4d1a3330a11545ac1712fdf" Dec 13 08:59:14.905544 kubelet[2949]: I1213 08:59:14.905231 2949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cf5028c1d1e8e01592bc37054413482de572d625ecd85ef54f13ca1d5f0245a2" Dec 13 08:59:14.909830 containerd[1593]: time="2024-12-13T08:59:14.907314255Z" level=info msg="StopPodSandbox for \"cf5028c1d1e8e01592bc37054413482de572d625ecd85ef54f13ca1d5f0245a2\"" Dec 13 08:59:14.909830 containerd[1593]: time="2024-12-13T08:59:14.907424655Z" level=info msg="StopPodSandbox for \"8ebe8a031831ef20a796b2db9e54f4bfcf115ccec4d1a3330a11545ac1712fdf\"" Dec 13 08:59:14.909830 containerd[1593]: time="2024-12-13T08:59:14.907551896Z" level=info msg="Ensure that sandbox cf5028c1d1e8e01592bc37054413482de572d625ecd85ef54f13ca1d5f0245a2 in task-service has been cleanup successfully" Dec 13 08:59:14.909830 containerd[1593]: time="2024-12-13T08:59:14.908332619Z" level=info msg="Ensure that sandbox 8ebe8a031831ef20a796b2db9e54f4bfcf115ccec4d1a3330a11545ac1712fdf in task-service has been cleanup successfully" Dec 13 08:59:14.921221 kubelet[2949]: I1213 08:59:14.921190 2949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0e6afc49edbf356127d65bfb8889e1aecee79f2be8071e1176dde2d3660d8aa0" Dec 13 08:59:14.922073 containerd[1593]: time="2024-12-13T08:59:14.922010316Z" level=info msg="StopPodSandbox for \"0e6afc49edbf356127d65bfb8889e1aecee79f2be8071e1176dde2d3660d8aa0\"" Dec 13 08:59:14.922385 containerd[1593]: time="2024-12-13T08:59:14.922362797Z" level=info msg="Ensure that sandbox 0e6afc49edbf356127d65bfb8889e1aecee79f2be8071e1176dde2d3660d8aa0 in task-service has been cleanup successfully" Dec 13 08:59:14.922952 containerd[1593]: time="2024-12-13T08:59:14.922923719Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Dec 13 08:59:14.928063 kubelet[2949]: I1213 08:59:14.928003 2949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0f218e6c1a541400c97c31881ba6086a8322366010567263854d19f126daf6ac" Dec 13 08:59:14.934723 containerd[1593]: time="2024-12-13T08:59:14.934679088Z" level=info msg="StopPodSandbox for \"0f218e6c1a541400c97c31881ba6086a8322366010567263854d19f126daf6ac\"" Dec 13 08:59:14.936527 containerd[1593]: 
time="2024-12-13T08:59:14.936486615Z" level=info msg="Ensure that sandbox 0f218e6c1a541400c97c31881ba6086a8322366010567263854d19f126daf6ac in task-service has been cleanup successfully" Dec 13 08:59:14.942485 kubelet[2949]: I1213 08:59:14.942445 2949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c01e2fab1976e1c4f3ac367fa2c0800ce67b9c8c563305fd692c747dd0fddc0a" Dec 13 08:59:14.947719 containerd[1593]: time="2024-12-13T08:59:14.947668742Z" level=info msg="StopPodSandbox for \"c01e2fab1976e1c4f3ac367fa2c0800ce67b9c8c563305fd692c747dd0fddc0a\"" Dec 13 08:59:14.947962 containerd[1593]: time="2024-12-13T08:59:14.947933983Z" level=info msg="Ensure that sandbox c01e2fab1976e1c4f3ac367fa2c0800ce67b9c8c563305fd692c747dd0fddc0a in task-service has been cleanup successfully" Dec 13 08:59:15.023675 containerd[1593]: time="2024-12-13T08:59:15.023624576Z" level=error msg="StopPodSandbox for \"cf5028c1d1e8e01592bc37054413482de572d625ecd85ef54f13ca1d5f0245a2\" failed" error="failed to destroy network for sandbox \"cf5028c1d1e8e01592bc37054413482de572d625ecd85ef54f13ca1d5f0245a2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 08:59:15.024348 kubelet[2949]: E1213 08:59:15.024321 2949 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"cf5028c1d1e8e01592bc37054413482de572d625ecd85ef54f13ca1d5f0245a2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="cf5028c1d1e8e01592bc37054413482de572d625ecd85ef54f13ca1d5f0245a2" Dec 13 08:59:15.024459 kubelet[2949]: E1213 08:59:15.024407 2949 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"cf5028c1d1e8e01592bc37054413482de572d625ecd85ef54f13ca1d5f0245a2"} Dec 13 08:59:15.024459 kubelet[2949]: E1213 08:59:15.024445 2949 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"194f2772-a0aa-4063-84c0-dbb7d890f78d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cf5028c1d1e8e01592bc37054413482de572d625ecd85ef54f13ca1d5f0245a2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 08:59:15.024807 kubelet[2949]: E1213 08:59:15.024749 2949 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"194f2772-a0aa-4063-84c0-dbb7d890f78d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cf5028c1d1e8e01592bc37054413482de572d625ecd85ef54f13ca1d5f0245a2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-ljb78" podUID="194f2772-a0aa-4063-84c0-dbb7d890f78d" Dec 13 08:59:15.039152 containerd[1593]: time="2024-12-13T08:59:15.038987160Z" level=error msg="StopPodSandbox for \"8ebe8a031831ef20a796b2db9e54f4bfcf115ccec4d1a3330a11545ac1712fdf\" failed" error="failed to destroy network for sandbox 
\"8ebe8a031831ef20a796b2db9e54f4bfcf115ccec4d1a3330a11545ac1712fdf\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 08:59:15.040616 kubelet[2949]: E1213 08:59:15.039458 2949 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"8ebe8a031831ef20a796b2db9e54f4bfcf115ccec4d1a3330a11545ac1712fdf\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="8ebe8a031831ef20a796b2db9e54f4bfcf115ccec4d1a3330a11545ac1712fdf" Dec 13 08:59:15.040616 kubelet[2949]: E1213 08:59:15.039506 2949 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"8ebe8a031831ef20a796b2db9e54f4bfcf115ccec4d1a3330a11545ac1712fdf"} Dec 13 08:59:15.040616 kubelet[2949]: E1213 08:59:15.039540 2949 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ebb43a46-8408-46f0-b3a8-196003d5a5b9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8ebe8a031831ef20a796b2db9e54f4bfcf115ccec4d1a3330a11545ac1712fdf\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 08:59:15.040616 kubelet[2949]: E1213 08:59:15.039575 2949 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ebb43a46-8408-46f0-b3a8-196003d5a5b9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8ebe8a031831ef20a796b2db9e54f4bfcf115ccec4d1a3330a11545ac1712fdf\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-55775f8f-ncv49" podUID="ebb43a46-8408-46f0-b3a8-196003d5a5b9" Dec 13 08:59:15.042248 containerd[1593]: time="2024-12-13T08:59:15.042178613Z" level=error msg="StopPodSandbox for \"0e6afc49edbf356127d65bfb8889e1aecee79f2be8071e1176dde2d3660d8aa0\" failed" error="failed to destroy network for sandbox \"0e6afc49edbf356127d65bfb8889e1aecee79f2be8071e1176dde2d3660d8aa0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 08:59:15.042689 kubelet[2949]: E1213 08:59:15.042662 2949 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0e6afc49edbf356127d65bfb8889e1aecee79f2be8071e1176dde2d3660d8aa0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0e6afc49edbf356127d65bfb8889e1aecee79f2be8071e1176dde2d3660d8aa0" Dec 13 08:59:15.042804 kubelet[2949]: E1213 08:59:15.042711 2949 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"0e6afc49edbf356127d65bfb8889e1aecee79f2be8071e1176dde2d3660d8aa0"} Dec 13 08:59:15.042804 kubelet[2949]: E1213 08:59:15.042766 2949 
kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ebf58ab1-e4cb-4792-853d-f90274331666\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0e6afc49edbf356127d65bfb8889e1aecee79f2be8071e1176dde2d3660d8aa0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 08:59:15.042804 kubelet[2949]: E1213 08:59:15.042800 2949 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ebf58ab1-e4cb-4792-853d-f90274331666\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0e6afc49edbf356127d65bfb8889e1aecee79f2be8071e1176dde2d3660d8aa0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-55775f8f-2kt75" podUID="ebf58ab1-e4cb-4792-853d-f90274331666" Dec 13 08:59:15.043849 containerd[1593]: time="2024-12-13T08:59:15.043806780Z" level=error msg="StopPodSandbox for \"c01e2fab1976e1c4f3ac367fa2c0800ce67b9c8c563305fd692c747dd0fddc0a\" failed" error="failed to destroy network for sandbox \"c01e2fab1976e1c4f3ac367fa2c0800ce67b9c8c563305fd692c747dd0fddc0a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 08:59:15.044297 kubelet[2949]: E1213 08:59:15.044253 2949 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c01e2fab1976e1c4f3ac367fa2c0800ce67b9c8c563305fd692c747dd0fddc0a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c01e2fab1976e1c4f3ac367fa2c0800ce67b9c8c563305fd692c747dd0fddc0a" Dec 13 08:59:15.044297 kubelet[2949]: E1213 08:59:15.044302 2949 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c01e2fab1976e1c4f3ac367fa2c0800ce67b9c8c563305fd692c747dd0fddc0a"} Dec 13 08:59:15.044414 kubelet[2949]: E1213 08:59:15.044362 2949 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7dd748b5-18dc-47c1-b24f-5ff405335976\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c01e2fab1976e1c4f3ac367fa2c0800ce67b9c8c563305fd692c747dd0fddc0a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 08:59:15.044414 kubelet[2949]: E1213 08:59:15.044393 2949 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7dd748b5-18dc-47c1-b24f-5ff405335976\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c01e2fab1976e1c4f3ac367fa2c0800ce67b9c8c563305fd692c747dd0fddc0a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-system/calico-kube-controllers-645f8cf8f-sbxh2" podUID="7dd748b5-18dc-47c1-b24f-5ff405335976" Dec 13 08:59:15.049210 containerd[1593]: time="2024-12-13T08:59:15.049162362Z" level=error msg="StopPodSandbox for \"0f218e6c1a541400c97c31881ba6086a8322366010567263854d19f126daf6ac\" failed" error="failed to destroy network for sandbox \"0f218e6c1a541400c97c31881ba6086a8322366010567263854d19f126daf6ac\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 08:59:15.049715 kubelet[2949]: E1213 08:59:15.049583 2949 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0f218e6c1a541400c97c31881ba6086a8322366010567263854d19f126daf6ac\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0f218e6c1a541400c97c31881ba6086a8322366010567263854d19f126daf6ac" Dec 13 08:59:15.049715 kubelet[2949]: E1213 08:59:15.049625 2949 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"0f218e6c1a541400c97c31881ba6086a8322366010567263854d19f126daf6ac"} Dec 13 08:59:15.049715 kubelet[2949]: E1213 08:59:15.049660 2949 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9e0d6dbd-04fe-48c2-bc18-98799d274260\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0f218e6c1a541400c97c31881ba6086a8322366010567263854d19f126daf6ac\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 08:59:15.049715 kubelet[2949]: E1213 08:59:15.049689 2949 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9e0d6dbd-04fe-48c2-bc18-98799d274260\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0f218e6c1a541400c97c31881ba6086a8322366010567263854d19f126daf6ac\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-6sr77" podUID="9e0d6dbd-04fe-48c2-bc18-98799d274260" Dec 13 08:59:15.736700 containerd[1593]: time="2024-12-13T08:59:15.736227411Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4blkf,Uid:68d5070c-72c1-493f-9630-8955fb2d5362,Namespace:calico-system,Attempt:0,}" Dec 13 08:59:15.808329 containerd[1593]: time="2024-12-13T08:59:15.808183869Z" level=error msg="Failed to destroy network for sandbox \"b9f1a22ddecf61e3046f3304749a8cd58c67b7e6f67cc666a7648c226f508de6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 08:59:15.809140 containerd[1593]: time="2024-12-13T08:59:15.808655831Z" level=error msg="encountered an error cleaning up failed sandbox \"b9f1a22ddecf61e3046f3304749a8cd58c67b7e6f67cc666a7648c226f508de6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 08:59:15.809500 containerd[1593]: time="2024-12-13T08:59:15.808715271Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4blkf,Uid:68d5070c-72c1-493f-9630-8955fb2d5362,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b9f1a22ddecf61e3046f3304749a8cd58c67b7e6f67cc666a7648c226f508de6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 08:59:15.811163 kubelet[2949]: E1213 08:59:15.809609 2949 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b9f1a22ddecf61e3046f3304749a8cd58c67b7e6f67cc666a7648c226f508de6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 08:59:15.811163 kubelet[2949]: E1213 08:59:15.809660 2949 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b9f1a22ddecf61e3046f3304749a8cd58c67b7e6f67cc666a7648c226f508de6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-4blkf" Dec 13 08:59:15.811163 kubelet[2949]: E1213 08:59:15.809684 2949 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b9f1a22ddecf61e3046f3304749a8cd58c67b7e6f67cc666a7648c226f508de6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-4blkf" Dec 13 08:59:15.811275 kubelet[2949]: E1213 08:59:15.809748 2949 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-4blkf_calico-system(68d5070c-72c1-493f-9630-8955fb2d5362)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-4blkf_calico-system(68d5070c-72c1-493f-9630-8955fb2d5362)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b9f1a22ddecf61e3046f3304749a8cd58c67b7e6f67cc666a7648c226f508de6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-4blkf" podUID="68d5070c-72c1-493f-9630-8955fb2d5362" Dec 13 08:59:15.811941 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b9f1a22ddecf61e3046f3304749a8cd58c67b7e6f67cc666a7648c226f508de6-shm.mount: Deactivated successfully. 
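Every add and delete above fails with the same stat error because Calico's CNI plugin gates all operations on the nodename file that the calico-node container writes once it is up. A stand-alone version of that gate follows; it is an assumption-laden sketch (the real logic lives inside the calico CNI binary), reproducing only the hint text that appears in each error.

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

const nodenameFile = "/var/lib/calico/nodename"

// determineNodename fails with the exact hint seen above until calico-node
// has started and written the node name into /var/lib/calico/.
func determineNodename() (string, error) {
	if _, err := os.Stat(nodenameFile); err != nil {
		return "", fmt.Errorf("%w: check that the calico/node container is running and has mounted /var/lib/calico/", err)
	}
	data, err := os.ReadFile(nodenameFile)
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(data)), nil
}

func main() {
	name, err := determineNodename()
	if err != nil {
		// Prints: stat /var/lib/calico/nodename: no such file or directory: check that ...
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("nodename:", name)
}
```

Once calico-node starts (see the StartContainer for 57dcd16c... below), the file exists and the same sandboxes tear down and set up cleanly.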
Dec 13 08:59:15.946263 kubelet[2949]: I1213 08:59:15.946235 2949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b9f1a22ddecf61e3046f3304749a8cd58c67b7e6f67cc666a7648c226f508de6" Dec 13 08:59:15.948646 containerd[1593]: time="2024-12-13T08:59:15.948611211Z" level=info msg="StopPodSandbox for \"b9f1a22ddecf61e3046f3304749a8cd58c67b7e6f67cc666a7648c226f508de6\"" Dec 13 08:59:15.948860 containerd[1593]: time="2024-12-13T08:59:15.948838412Z" level=info msg="Ensure that sandbox b9f1a22ddecf61e3046f3304749a8cd58c67b7e6f67cc666a7648c226f508de6 in task-service has been cleanup successfully" Dec 13 08:59:15.975866 containerd[1593]: time="2024-12-13T08:59:15.975705564Z" level=error msg="StopPodSandbox for \"b9f1a22ddecf61e3046f3304749a8cd58c67b7e6f67cc666a7648c226f508de6\" failed" error="failed to destroy network for sandbox \"b9f1a22ddecf61e3046f3304749a8cd58c67b7e6f67cc666a7648c226f508de6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 08:59:15.976152 kubelet[2949]: E1213 08:59:15.976022 2949 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b9f1a22ddecf61e3046f3304749a8cd58c67b7e6f67cc666a7648c226f508de6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b9f1a22ddecf61e3046f3304749a8cd58c67b7e6f67cc666a7648c226f508de6" Dec 13 08:59:15.976152 kubelet[2949]: E1213 08:59:15.976068 2949 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b9f1a22ddecf61e3046f3304749a8cd58c67b7e6f67cc666a7648c226f508de6"} Dec 13 08:59:15.976152 kubelet[2949]: E1213 08:59:15.976103 2949 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"68d5070c-72c1-493f-9630-8955fb2d5362\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b9f1a22ddecf61e3046f3304749a8cd58c67b7e6f67cc666a7648c226f508de6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 08:59:15.976152 kubelet[2949]: E1213 08:59:15.976131 2949 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"68d5070c-72c1-493f-9630-8955fb2d5362\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b9f1a22ddecf61e3046f3304749a8cd58c67b7e6f67cc666a7648c226f508de6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-4blkf" podUID="68d5070c-72c1-493f-9630-8955fb2d5362" Dec 13 08:59:19.631221 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount938435670.mount: Deactivated successfully. 
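All of these failures reach the kubelet as "rpc error: code = Unknown desc = ..." because the CRI runtime returns plain Go errors from its RunPodSandbox/StopPodSandbox handlers and grpc-go maps any non-status error to codes.Unknown on the wire. A small illustrative sketch of that mapping (not containerd's code; the sandbox ID is shortened here):

```go
package main

import (
	"errors"
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

func main() {
	// Stand-in for the CNI delete that fails while calico-node is absent.
	err := errors.New(`failed to destroy network for sandbox "b9f1a2...": plugin type="calico" failed (delete)`)

	// status.Convert turns any non-status error into codes.Unknown,
	// which is the shape the kubelet logs above.
	st := status.Convert(err)
	fmt.Println(st.Code() == codes.Unknown) // true
	fmt.Printf("rpc error: code = %s desc = %s\n", st.Code(), st.Message())
}
```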
Dec 13 08:59:19.669291 containerd[1593]: time="2024-12-13T08:59:19.669105287Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 08:59:19.672172 containerd[1593]: time="2024-12-13T08:59:19.671564538Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=137671762" Dec 13 08:59:19.675121 containerd[1593]: time="2024-12-13T08:59:19.674283669Z" level=info msg="ImageCreate event name:\"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 08:59:19.677249 containerd[1593]: time="2024-12-13T08:59:19.677179921Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 08:59:19.678127 containerd[1593]: time="2024-12-13T08:59:19.677926764Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"137671624\" in 4.754295282s" Dec 13 08:59:19.678127 containerd[1593]: time="2024-12-13T08:59:19.677968764Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\"" Dec 13 08:59:19.698724 containerd[1593]: time="2024-12-13T08:59:19.698633171Z" level=info msg="CreateContainer within sandbox \"3f37e21a489cc5adc5018fdfbdc6e2189a7e0b6b38845c9358a6284b46de1f98\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Dec 13 08:59:19.724699 containerd[1593]: time="2024-12-13T08:59:19.724143678Z" level=info msg="CreateContainer within sandbox \"3f37e21a489cc5adc5018fdfbdc6e2189a7e0b6b38845c9358a6284b46de1f98\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"57dcd16c001825c817755dc1c277bb709895e6fce803150804d93adf04f9afc3\"" Dec 13 08:59:19.725593 containerd[1593]: time="2024-12-13T08:59:19.725211682Z" level=info msg="StartContainer for \"57dcd16c001825c817755dc1c277bb709895e6fce803150804d93adf04f9afc3\"" Dec 13 08:59:19.830273 containerd[1593]: time="2024-12-13T08:59:19.829191878Z" level=info msg="StartContainer for \"57dcd16c001825c817755dc1c277bb709895e6fce803150804d93adf04f9afc3\" returns successfully" Dec 13 08:59:19.947734 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Dec 13 08:59:19.947867 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
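The pull record above carries both the byte count ("bytes read=137671762") and the wall time ("in 4.754295282s"), so the effective registry throughput for the calico/node image can be read straight off the log; a quick check of the arithmetic:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Both numbers come verbatim from the containerd log lines above.
	const bytesRead = 137671762.0
	d, _ := time.ParseDuration("4.754295282s")
	mib := bytesRead / (1 << 20)
	fmt.Printf("%.1f MiB in %s => %.1f MiB/s\n", mib, d, mib/d.Seconds())
	// Output: 131.3 MiB in 4.754295282s => 27.6 MiB/s
}
```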
Dec 13 08:59:19.981975 kubelet[2949]: I1213 08:59:19.981863 2949 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-htdcw" podStartSLOduration=1.312852888 podStartE2EDuration="14.981811837s" podCreationTimestamp="2024-12-13 08:59:05 +0000 UTC" firstStartedPulling="2024-12-13 08:59:06.009258896 +0000 UTC m=+22.430569603" lastFinishedPulling="2024-12-13 08:59:19.678217885 +0000 UTC m=+36.099528552" observedRunningTime="2024-12-13 08:59:19.979431587 +0000 UTC m=+36.400742294" watchObservedRunningTime="2024-12-13 08:59:19.981811837 +0000 UTC m=+36.403122544" Dec 13 08:59:26.028275 kubelet[2949]: I1213 08:59:26.028171 2949 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 08:59:26.732211 containerd[1593]: time="2024-12-13T08:59:26.731816568Z" level=info msg="StopPodSandbox for \"cf5028c1d1e8e01592bc37054413482de572d625ecd85ef54f13ca1d5f0245a2\"" Dec 13 08:59:26.896177 containerd[1593]: 2024-12-13 08:59:26.826 [INFO][4301] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="cf5028c1d1e8e01592bc37054413482de572d625ecd85ef54f13ca1d5f0245a2" Dec 13 08:59:26.896177 containerd[1593]: 2024-12-13 08:59:26.828 [INFO][4301] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="cf5028c1d1e8e01592bc37054413482de572d625ecd85ef54f13ca1d5f0245a2" iface="eth0" netns="/var/run/netns/cni-ae3399a8-76c0-544d-1a24-88ea0e3111ae" Dec 13 08:59:26.896177 containerd[1593]: 2024-12-13 08:59:26.831 [INFO][4301] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="cf5028c1d1e8e01592bc37054413482de572d625ecd85ef54f13ca1d5f0245a2" iface="eth0" netns="/var/run/netns/cni-ae3399a8-76c0-544d-1a24-88ea0e3111ae" Dec 13 08:59:26.896177 containerd[1593]: 2024-12-13 08:59:26.831 [INFO][4301] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="cf5028c1d1e8e01592bc37054413482de572d625ecd85ef54f13ca1d5f0245a2" iface="eth0" netns="/var/run/netns/cni-ae3399a8-76c0-544d-1a24-88ea0e3111ae" Dec 13 08:59:26.896177 containerd[1593]: 2024-12-13 08:59:26.831 [INFO][4301] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="cf5028c1d1e8e01592bc37054413482de572d625ecd85ef54f13ca1d5f0245a2" Dec 13 08:59:26.896177 containerd[1593]: 2024-12-13 08:59:26.831 [INFO][4301] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cf5028c1d1e8e01592bc37054413482de572d625ecd85ef54f13ca1d5f0245a2" Dec 13 08:59:26.896177 containerd[1593]: 2024-12-13 08:59:26.876 [INFO][4307] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cf5028c1d1e8e01592bc37054413482de572d625ecd85ef54f13ca1d5f0245a2" HandleID="k8s-pod-network.cf5028c1d1e8e01592bc37054413482de572d625ecd85ef54f13ca1d5f0245a2" Workload="ci--4081--2--1--0--c10bd8c210-k8s-coredns--76f75df574--ljb78-eth0" Dec 13 08:59:26.896177 containerd[1593]: 2024-12-13 08:59:26.876 [INFO][4307] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 08:59:26.896177 containerd[1593]: 2024-12-13 08:59:26.876 [INFO][4307] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 08:59:26.896177 containerd[1593]: 2024-12-13 08:59:26.887 [WARNING][4307] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="cf5028c1d1e8e01592bc37054413482de572d625ecd85ef54f13ca1d5f0245a2" HandleID="k8s-pod-network.cf5028c1d1e8e01592bc37054413482de572d625ecd85ef54f13ca1d5f0245a2" Workload="ci--4081--2--1--0--c10bd8c210-k8s-coredns--76f75df574--ljb78-eth0" Dec 13 08:59:26.896177 containerd[1593]: 2024-12-13 08:59:26.887 [INFO][4307] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cf5028c1d1e8e01592bc37054413482de572d625ecd85ef54f13ca1d5f0245a2" HandleID="k8s-pod-network.cf5028c1d1e8e01592bc37054413482de572d625ecd85ef54f13ca1d5f0245a2" Workload="ci--4081--2--1--0--c10bd8c210-k8s-coredns--76f75df574--ljb78-eth0" Dec 13 08:59:26.896177 containerd[1593]: 2024-12-13 08:59:26.890 [INFO][4307] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 08:59:26.896177 containerd[1593]: 2024-12-13 08:59:26.894 [INFO][4301] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="cf5028c1d1e8e01592bc37054413482de572d625ecd85ef54f13ca1d5f0245a2" Dec 13 08:59:26.899261 containerd[1593]: time="2024-12-13T08:59:26.898991438Z" level=info msg="TearDown network for sandbox \"cf5028c1d1e8e01592bc37054413482de572d625ecd85ef54f13ca1d5f0245a2\" successfully" Dec 13 08:59:26.899261 containerd[1593]: time="2024-12-13T08:59:26.899094239Z" level=info msg="StopPodSandbox for \"cf5028c1d1e8e01592bc37054413482de572d625ecd85ef54f13ca1d5f0245a2\" returns successfully" Dec 13 08:59:26.899223 systemd[1]: run-netns-cni\x2dae3399a8\x2d76c0\x2d544d\x2d1a24\x2d88ea0e3111ae.mount: Deactivated successfully. Dec 13 08:59:26.900542 containerd[1593]: time="2024-12-13T08:59:26.900506325Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-ljb78,Uid:194f2772-a0aa-4063-84c0-dbb7d890f78d,Namespace:kube-system,Attempt:1,}" Dec 13 08:59:27.156850 systemd-networkd[1243]: cali1f64888e142: Link UP Dec 13 08:59:27.158848 systemd-networkd[1243]: cali1f64888e142: Gained carrier Dec 13 08:59:27.188287 containerd[1593]: 2024-12-13 08:59:26.955 [INFO][4315] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 13 08:59:27.188287 containerd[1593]: 2024-12-13 08:59:26.977 [INFO][4315] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--2--1--0--c10bd8c210-k8s-coredns--76f75df574--ljb78-eth0 coredns-76f75df574- kube-system 194f2772-a0aa-4063-84c0-dbb7d890f78d 779 0 2024-12-13 08:58:58 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-2-1-0-c10bd8c210 coredns-76f75df574-ljb78 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali1f64888e142 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="59213204266974d9a98cba46be936f1b3d9b3081d557b98171cb9c7b354d6cf8" Namespace="kube-system" Pod="coredns-76f75df574-ljb78" WorkloadEndpoint="ci--4081--2--1--0--c10bd8c210-k8s-coredns--76f75df574--ljb78-" Dec 13 08:59:27.188287 containerd[1593]: 2024-12-13 08:59:26.977 [INFO][4315] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="59213204266974d9a98cba46be936f1b3d9b3081d557b98171cb9c7b354d6cf8" Namespace="kube-system" Pod="coredns-76f75df574-ljb78" WorkloadEndpoint="ci--4081--2--1--0--c10bd8c210-k8s-coredns--76f75df574--ljb78-eth0" Dec 13 08:59:27.188287 containerd[1593]: 2024-12-13 08:59:27.061 [INFO][4338] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="59213204266974d9a98cba46be936f1b3d9b3081d557b98171cb9c7b354d6cf8" HandleID="k8s-pod-network.59213204266974d9a98cba46be936f1b3d9b3081d557b98171cb9c7b354d6cf8" Workload="ci--4081--2--1--0--c10bd8c210-k8s-coredns--76f75df574--ljb78-eth0" Dec 13 08:59:27.188287 containerd[1593]: 2024-12-13 08:59:27.091 [INFO][4338] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="59213204266974d9a98cba46be936f1b3d9b3081d557b98171cb9c7b354d6cf8" HandleID="k8s-pod-network.59213204266974d9a98cba46be936f1b3d9b3081d557b98171cb9c7b354d6cf8" Workload="ci--4081--2--1--0--c10bd8c210-k8s-coredns--76f75df574--ljb78-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000102670), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-2-1-0-c10bd8c210", "pod":"coredns-76f75df574-ljb78", "timestamp":"2024-12-13 08:59:27.061106967 +0000 UTC"}, Hostname:"ci-4081-2-1-0-c10bd8c210", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 08:59:27.188287 containerd[1593]: 2024-12-13 08:59:27.091 [INFO][4338] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 08:59:27.188287 containerd[1593]: 2024-12-13 08:59:27.091 [INFO][4338] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 08:59:27.188287 containerd[1593]: 2024-12-13 08:59:27.091 [INFO][4338] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-2-1-0-c10bd8c210' Dec 13 08:59:27.188287 containerd[1593]: 2024-12-13 08:59:27.095 [INFO][4338] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.59213204266974d9a98cba46be936f1b3d9b3081d557b98171cb9c7b354d6cf8" host="ci-4081-2-1-0-c10bd8c210" Dec 13 08:59:27.188287 containerd[1593]: 2024-12-13 08:59:27.103 [INFO][4338] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-2-1-0-c10bd8c210" Dec 13 08:59:27.188287 containerd[1593]: 2024-12-13 08:59:27.110 [INFO][4338] ipam/ipam.go 489: Trying affinity for 192.168.75.128/26 host="ci-4081-2-1-0-c10bd8c210" Dec 13 08:59:27.188287 containerd[1593]: 2024-12-13 08:59:27.113 [INFO][4338] ipam/ipam.go 155: Attempting to load block cidr=192.168.75.128/26 host="ci-4081-2-1-0-c10bd8c210" Dec 13 08:59:27.188287 containerd[1593]: 2024-12-13 08:59:27.119 [INFO][4338] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.75.128/26 host="ci-4081-2-1-0-c10bd8c210" Dec 13 08:59:27.188287 containerd[1593]: 2024-12-13 08:59:27.119 [INFO][4338] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.75.128/26 handle="k8s-pod-network.59213204266974d9a98cba46be936f1b3d9b3081d557b98171cb9c7b354d6cf8" host="ci-4081-2-1-0-c10bd8c210" Dec 13 08:59:27.188287 containerd[1593]: 2024-12-13 08:59:27.121 [INFO][4338] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.59213204266974d9a98cba46be936f1b3d9b3081d557b98171cb9c7b354d6cf8 Dec 13 08:59:27.188287 containerd[1593]: 2024-12-13 08:59:27.128 [INFO][4338] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.75.128/26 handle="k8s-pod-network.59213204266974d9a98cba46be936f1b3d9b3081d557b98171cb9c7b354d6cf8" host="ci-4081-2-1-0-c10bd8c210" Dec 13 08:59:27.188287 containerd[1593]: 2024-12-13 08:59:27.135 [INFO][4338] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.75.129/26] block=192.168.75.128/26 handle="k8s-pod-network.59213204266974d9a98cba46be936f1b3d9b3081d557b98171cb9c7b354d6cf8" 
host="ci-4081-2-1-0-c10bd8c210" Dec 13 08:59:27.188287 containerd[1593]: 2024-12-13 08:59:27.135 [INFO][4338] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.75.129/26] handle="k8s-pod-network.59213204266974d9a98cba46be936f1b3d9b3081d557b98171cb9c7b354d6cf8" host="ci-4081-2-1-0-c10bd8c210" Dec 13 08:59:27.188287 containerd[1593]: 2024-12-13 08:59:27.135 [INFO][4338] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 08:59:27.188287 containerd[1593]: 2024-12-13 08:59:27.135 [INFO][4338] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.75.129/26] IPv6=[] ContainerID="59213204266974d9a98cba46be936f1b3d9b3081d557b98171cb9c7b354d6cf8" HandleID="k8s-pod-network.59213204266974d9a98cba46be936f1b3d9b3081d557b98171cb9c7b354d6cf8" Workload="ci--4081--2--1--0--c10bd8c210-k8s-coredns--76f75df574--ljb78-eth0" Dec 13 08:59:27.193322 containerd[1593]: 2024-12-13 08:59:27.139 [INFO][4315] cni-plugin/k8s.go 386: Populated endpoint ContainerID="59213204266974d9a98cba46be936f1b3d9b3081d557b98171cb9c7b354d6cf8" Namespace="kube-system" Pod="coredns-76f75df574-ljb78" WorkloadEndpoint="ci--4081--2--1--0--c10bd8c210-k8s-coredns--76f75df574--ljb78-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--0--c10bd8c210-k8s-coredns--76f75df574--ljb78-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"194f2772-a0aa-4063-84c0-dbb7d890f78d", ResourceVersion:"779", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 8, 58, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-0-c10bd8c210", ContainerID:"", Pod:"coredns-76f75df574-ljb78", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.75.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1f64888e142", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 08:59:27.193322 containerd[1593]: 2024-12-13 08:59:27.139 [INFO][4315] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.75.129/32] ContainerID="59213204266974d9a98cba46be936f1b3d9b3081d557b98171cb9c7b354d6cf8" Namespace="kube-system" Pod="coredns-76f75df574-ljb78" WorkloadEndpoint="ci--4081--2--1--0--c10bd8c210-k8s-coredns--76f75df574--ljb78-eth0" Dec 13 08:59:27.193322 containerd[1593]: 2024-12-13 08:59:27.139 [INFO][4315] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1f64888e142 
ContainerID="59213204266974d9a98cba46be936f1b3d9b3081d557b98171cb9c7b354d6cf8" Namespace="kube-system" Pod="coredns-76f75df574-ljb78" WorkloadEndpoint="ci--4081--2--1--0--c10bd8c210-k8s-coredns--76f75df574--ljb78-eth0" Dec 13 08:59:27.193322 containerd[1593]: 2024-12-13 08:59:27.157 [INFO][4315] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="59213204266974d9a98cba46be936f1b3d9b3081d557b98171cb9c7b354d6cf8" Namespace="kube-system" Pod="coredns-76f75df574-ljb78" WorkloadEndpoint="ci--4081--2--1--0--c10bd8c210-k8s-coredns--76f75df574--ljb78-eth0" Dec 13 08:59:27.193322 containerd[1593]: 2024-12-13 08:59:27.159 [INFO][4315] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="59213204266974d9a98cba46be936f1b3d9b3081d557b98171cb9c7b354d6cf8" Namespace="kube-system" Pod="coredns-76f75df574-ljb78" WorkloadEndpoint="ci--4081--2--1--0--c10bd8c210-k8s-coredns--76f75df574--ljb78-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--0--c10bd8c210-k8s-coredns--76f75df574--ljb78-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"194f2772-a0aa-4063-84c0-dbb7d890f78d", ResourceVersion:"779", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 8, 58, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-0-c10bd8c210", ContainerID:"59213204266974d9a98cba46be936f1b3d9b3081d557b98171cb9c7b354d6cf8", Pod:"coredns-76f75df574-ljb78", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.75.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1f64888e142", MAC:"2a:0c:f6:9a:18:34", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 08:59:27.193528 containerd[1593]: 2024-12-13 08:59:27.176 [INFO][4315] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="59213204266974d9a98cba46be936f1b3d9b3081d557b98171cb9c7b354d6cf8" Namespace="kube-system" Pod="coredns-76f75df574-ljb78" WorkloadEndpoint="ci--4081--2--1--0--c10bd8c210-k8s-coredns--76f75df574--ljb78-eth0" Dec 13 08:59:27.250803 containerd[1593]: time="2024-12-13T08:59:27.248589005Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 08:59:27.250803 containerd[1593]: time="2024-12-13T08:59:27.250584093Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 08:59:27.250803 containerd[1593]: time="2024-12-13T08:59:27.250603333Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 08:59:27.252082 containerd[1593]: time="2024-12-13T08:59:27.251167975Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 08:59:27.343189 containerd[1593]: time="2024-12-13T08:59:27.341845641Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-ljb78,Uid:194f2772-a0aa-4063-84c0-dbb7d890f78d,Namespace:kube-system,Attempt:1,} returns sandbox id \"59213204266974d9a98cba46be936f1b3d9b3081d557b98171cb9c7b354d6cf8\"" Dec 13 08:59:27.381031 containerd[1593]: time="2024-12-13T08:59:27.379308721Z" level=info msg="CreateContainer within sandbox \"59213204266974d9a98cba46be936f1b3d9b3081d557b98171cb9c7b354d6cf8\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 08:59:27.405259 containerd[1593]: time="2024-12-13T08:59:27.405198351Z" level=info msg="CreateContainer within sandbox \"59213204266974d9a98cba46be936f1b3d9b3081d557b98171cb9c7b354d6cf8\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"aa44d2e9dcc85578258a18a08fa25daa8074ea24db3be6f0fce76cea5d9debf0\"" Dec 13 08:59:27.406924 containerd[1593]: time="2024-12-13T08:59:27.406881958Z" level=info msg="StartContainer for \"aa44d2e9dcc85578258a18a08fa25daa8074ea24db3be6f0fce76cea5d9debf0\"" Dec 13 08:59:27.486144 containerd[1593]: time="2024-12-13T08:59:27.485983894Z" level=info msg="StartContainer for \"aa44d2e9dcc85578258a18a08fa25daa8074ea24db3be6f0fce76cea5d9debf0\" returns successfully" Dec 13 08:59:27.735614 containerd[1593]: time="2024-12-13T08:59:27.735490556Z" level=info msg="StopPodSandbox for \"c01e2fab1976e1c4f3ac367fa2c0800ce67b9c8c563305fd692c747dd0fddc0a\"" Dec 13 08:59:27.737812 containerd[1593]: time="2024-12-13T08:59:27.735652676Z" level=info msg="StopPodSandbox for \"b9f1a22ddecf61e3046f3304749a8cd58c67b7e6f67cc666a7648c226f508de6\"" Dec 13 08:59:27.964836 containerd[1593]: 2024-12-13 08:59:27.874 [INFO][4488] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="b9f1a22ddecf61e3046f3304749a8cd58c67b7e6f67cc666a7648c226f508de6" Dec 13 08:59:27.964836 containerd[1593]: 2024-12-13 08:59:27.874 [INFO][4488] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b9f1a22ddecf61e3046f3304749a8cd58c67b7e6f67cc666a7648c226f508de6" iface="eth0" netns="/var/run/netns/cni-919c41f5-2210-f5d9-7acd-9a61b2bf31d1" Dec 13 08:59:27.964836 containerd[1593]: 2024-12-13 08:59:27.875 [INFO][4488] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b9f1a22ddecf61e3046f3304749a8cd58c67b7e6f67cc666a7648c226f508de6" iface="eth0" netns="/var/run/netns/cni-919c41f5-2210-f5d9-7acd-9a61b2bf31d1" Dec 13 08:59:27.964836 containerd[1593]: 2024-12-13 08:59:27.876 [INFO][4488] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="b9f1a22ddecf61e3046f3304749a8cd58c67b7e6f67cc666a7648c226f508de6" iface="eth0" netns="/var/run/netns/cni-919c41f5-2210-f5d9-7acd-9a61b2bf31d1" Dec 13 08:59:27.964836 containerd[1593]: 2024-12-13 08:59:27.876 [INFO][4488] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b9f1a22ddecf61e3046f3304749a8cd58c67b7e6f67cc666a7648c226f508de6" Dec 13 08:59:27.964836 containerd[1593]: 2024-12-13 08:59:27.876 [INFO][4488] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b9f1a22ddecf61e3046f3304749a8cd58c67b7e6f67cc666a7648c226f508de6" Dec 13 08:59:27.964836 containerd[1593]: 2024-12-13 08:59:27.937 [INFO][4509] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b9f1a22ddecf61e3046f3304749a8cd58c67b7e6f67cc666a7648c226f508de6" HandleID="k8s-pod-network.b9f1a22ddecf61e3046f3304749a8cd58c67b7e6f67cc666a7648c226f508de6" Workload="ci--4081--2--1--0--c10bd8c210-k8s-csi--node--driver--4blkf-eth0" Dec 13 08:59:27.964836 containerd[1593]: 2024-12-13 08:59:27.938 [INFO][4509] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 08:59:27.964836 containerd[1593]: 2024-12-13 08:59:27.938 [INFO][4509] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 08:59:27.964836 containerd[1593]: 2024-12-13 08:59:27.951 [WARNING][4509] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="b9f1a22ddecf61e3046f3304749a8cd58c67b7e6f67cc666a7648c226f508de6" HandleID="k8s-pod-network.b9f1a22ddecf61e3046f3304749a8cd58c67b7e6f67cc666a7648c226f508de6" Workload="ci--4081--2--1--0--c10bd8c210-k8s-csi--node--driver--4blkf-eth0" Dec 13 08:59:27.964836 containerd[1593]: 2024-12-13 08:59:27.951 [INFO][4509] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b9f1a22ddecf61e3046f3304749a8cd58c67b7e6f67cc666a7648c226f508de6" HandleID="k8s-pod-network.b9f1a22ddecf61e3046f3304749a8cd58c67b7e6f67cc666a7648c226f508de6" Workload="ci--4081--2--1--0--c10bd8c210-k8s-csi--node--driver--4blkf-eth0" Dec 13 08:59:27.964836 containerd[1593]: 2024-12-13 08:59:27.955 [INFO][4509] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 08:59:27.964836 containerd[1593]: 2024-12-13 08:59:27.959 [INFO][4488] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="b9f1a22ddecf61e3046f3304749a8cd58c67b7e6f67cc666a7648c226f508de6" Dec 13 08:59:27.967759 containerd[1593]: time="2024-12-13T08:59:27.965091333Z" level=info msg="TearDown network for sandbox \"b9f1a22ddecf61e3046f3304749a8cd58c67b7e6f67cc666a7648c226f508de6\" successfully" Dec 13 08:59:27.967759 containerd[1593]: time="2024-12-13T08:59:27.967495823Z" level=info msg="StopPodSandbox for \"b9f1a22ddecf61e3046f3304749a8cd58c67b7e6f67cc666a7648c226f508de6\" returns successfully" Dec 13 08:59:27.969203 containerd[1593]: time="2024-12-13T08:59:27.969096870Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4blkf,Uid:68d5070c-72c1-493f-9630-8955fb2d5362,Namespace:calico-system,Attempt:1,}" Dec 13 08:59:27.971124 systemd[1]: run-netns-cni\x2d919c41f5\x2d2210\x2df5d9\x2d7acd\x2d9a61b2bf31d1.mount: Deactivated successfully. Dec 13 08:59:27.986566 containerd[1593]: 2024-12-13 08:59:27.889 [INFO][4496] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c01e2fab1976e1c4f3ac367fa2c0800ce67b9c8c563305fd692c747dd0fddc0a" Dec 13 08:59:27.986566 containerd[1593]: 2024-12-13 08:59:27.889 [INFO][4496] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="c01e2fab1976e1c4f3ac367fa2c0800ce67b9c8c563305fd692c747dd0fddc0a" iface="eth0" netns="/var/run/netns/cni-6260526a-13f5-d04c-2580-acc930e58033" Dec 13 08:59:27.986566 containerd[1593]: 2024-12-13 08:59:27.890 [INFO][4496] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c01e2fab1976e1c4f3ac367fa2c0800ce67b9c8c563305fd692c747dd0fddc0a" iface="eth0" netns="/var/run/netns/cni-6260526a-13f5-d04c-2580-acc930e58033" Dec 13 08:59:27.986566 containerd[1593]: 2024-12-13 08:59:27.893 [INFO][4496] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="c01e2fab1976e1c4f3ac367fa2c0800ce67b9c8c563305fd692c747dd0fddc0a" iface="eth0" netns="/var/run/netns/cni-6260526a-13f5-d04c-2580-acc930e58033" Dec 13 08:59:27.986566 containerd[1593]: 2024-12-13 08:59:27.894 [INFO][4496] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c01e2fab1976e1c4f3ac367fa2c0800ce67b9c8c563305fd692c747dd0fddc0a" Dec 13 08:59:27.986566 containerd[1593]: 2024-12-13 08:59:27.894 [INFO][4496] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c01e2fab1976e1c4f3ac367fa2c0800ce67b9c8c563305fd692c747dd0fddc0a" Dec 13 08:59:27.986566 containerd[1593]: 2024-12-13 08:59:27.942 [INFO][4513] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c01e2fab1976e1c4f3ac367fa2c0800ce67b9c8c563305fd692c747dd0fddc0a" HandleID="k8s-pod-network.c01e2fab1976e1c4f3ac367fa2c0800ce67b9c8c563305fd692c747dd0fddc0a" Workload="ci--4081--2--1--0--c10bd8c210-k8s-calico--kube--controllers--645f8cf8f--sbxh2-eth0" Dec 13 08:59:27.986566 containerd[1593]: 2024-12-13 08:59:27.943 [INFO][4513] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 08:59:27.986566 containerd[1593]: 2024-12-13 08:59:27.955 [INFO][4513] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 08:59:27.986566 containerd[1593]: 2024-12-13 08:59:27.976 [WARNING][4513] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c01e2fab1976e1c4f3ac367fa2c0800ce67b9c8c563305fd692c747dd0fddc0a" HandleID="k8s-pod-network.c01e2fab1976e1c4f3ac367fa2c0800ce67b9c8c563305fd692c747dd0fddc0a" Workload="ci--4081--2--1--0--c10bd8c210-k8s-calico--kube--controllers--645f8cf8f--sbxh2-eth0" Dec 13 08:59:27.986566 containerd[1593]: 2024-12-13 08:59:27.976 [INFO][4513] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c01e2fab1976e1c4f3ac367fa2c0800ce67b9c8c563305fd692c747dd0fddc0a" HandleID="k8s-pod-network.c01e2fab1976e1c4f3ac367fa2c0800ce67b9c8c563305fd692c747dd0fddc0a" Workload="ci--4081--2--1--0--c10bd8c210-k8s-calico--kube--controllers--645f8cf8f--sbxh2-eth0" Dec 13 08:59:27.986566 containerd[1593]: 2024-12-13 08:59:27.980 [INFO][4513] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 08:59:27.986566 containerd[1593]: 2024-12-13 08:59:27.982 [INFO][4496] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="c01e2fab1976e1c4f3ac367fa2c0800ce67b9c8c563305fd692c747dd0fddc0a" Dec 13 08:59:27.992154 containerd[1593]: time="2024-12-13T08:59:27.988082870Z" level=info msg="TearDown network for sandbox \"c01e2fab1976e1c4f3ac367fa2c0800ce67b9c8c563305fd692c747dd0fddc0a\" successfully" Dec 13 08:59:27.992154 containerd[1593]: time="2024-12-13T08:59:27.988122511Z" level=info msg="StopPodSandbox for \"c01e2fab1976e1c4f3ac367fa2c0800ce67b9c8c563305fd692c747dd0fddc0a\" returns successfully" Dec 13 08:59:27.989936 systemd[1]: run-netns-cni\x2d6260526a\x2d13f5\x2dd04c\x2d2580\x2dacc930e58033.mount: Deactivated successfully. Dec 13 08:59:27.995418 containerd[1593]: time="2024-12-13T08:59:27.995362261Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-645f8cf8f-sbxh2,Uid:7dd748b5-18dc-47c1-b24f-5ff405335976,Namespace:calico-system,Attempt:1,}" Dec 13 08:59:28.061950 kubelet[2949]: I1213 08:59:28.060895 2949 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-ljb78" podStartSLOduration=30.0608483 podStartE2EDuration="30.0608483s" podCreationTimestamp="2024-12-13 08:58:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 08:59:28.059521255 +0000 UTC m=+44.480831962" watchObservedRunningTime="2024-12-13 08:59:28.0608483 +0000 UTC m=+44.482159007" Dec 13 08:59:28.167472 kernel: bpftool[4555]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Dec 13 08:59:28.299877 systemd-networkd[1243]: calid7df9695e01: Link UP Dec 13 08:59:28.304948 systemd-networkd[1243]: calid7df9695e01: Gained carrier Dec 13 08:59:28.340198 containerd[1593]: 2024-12-13 08:59:28.151 [INFO][4523] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--2--1--0--c10bd8c210-k8s-csi--node--driver--4blkf-eth0 csi-node-driver- calico-system 68d5070c-72c1-493f-9630-8955fb2d5362 796 0 2024-12-13 08:59:05 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:55b695c467 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081-2-1-0-c10bd8c210 csi-node-driver-4blkf eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calid7df9695e01 [] []}} ContainerID="dfe4f401295671b62f63ba712f02bf7b0393ae78edef93420d2dd494ae3a6967" Namespace="calico-system" Pod="csi-node-driver-4blkf" WorkloadEndpoint="ci--4081--2--1--0--c10bd8c210-k8s-csi--node--driver--4blkf-" Dec 13 08:59:28.340198 containerd[1593]: 2024-12-13 08:59:28.153 [INFO][4523] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="dfe4f401295671b62f63ba712f02bf7b0393ae78edef93420d2dd494ae3a6967" Namespace="calico-system" Pod="csi-node-driver-4blkf" WorkloadEndpoint="ci--4081--2--1--0--c10bd8c210-k8s-csi--node--driver--4blkf-eth0" Dec 13 08:59:28.340198 containerd[1593]: 2024-12-13 08:59:28.216 [INFO][4560] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="dfe4f401295671b62f63ba712f02bf7b0393ae78edef93420d2dd494ae3a6967" HandleID="k8s-pod-network.dfe4f401295671b62f63ba712f02bf7b0393ae78edef93420d2dd494ae3a6967" Workload="ci--4081--2--1--0--c10bd8c210-k8s-csi--node--driver--4blkf-eth0" Dec 13 08:59:28.340198 containerd[1593]: 2024-12-13 08:59:28.235 [INFO][4560] ipam/ipam_plugin.go 
265: Auto assigning IP ContainerID="dfe4f401295671b62f63ba712f02bf7b0393ae78edef93420d2dd494ae3a6967" HandleID="k8s-pod-network.dfe4f401295671b62f63ba712f02bf7b0393ae78edef93420d2dd494ae3a6967" Workload="ci--4081--2--1--0--c10bd8c210-k8s-csi--node--driver--4blkf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004cd40), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-2-1-0-c10bd8c210", "pod":"csi-node-driver-4blkf", "timestamp":"2024-12-13 08:59:28.216642164 +0000 UTC"}, Hostname:"ci-4081-2-1-0-c10bd8c210", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 08:59:28.340198 containerd[1593]: 2024-12-13 08:59:28.235 [INFO][4560] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 08:59:28.340198 containerd[1593]: 2024-12-13 08:59:28.235 [INFO][4560] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 08:59:28.340198 containerd[1593]: 2024-12-13 08:59:28.235 [INFO][4560] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-2-1-0-c10bd8c210' Dec 13 08:59:28.340198 containerd[1593]: 2024-12-13 08:59:28.238 [INFO][4560] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.dfe4f401295671b62f63ba712f02bf7b0393ae78edef93420d2dd494ae3a6967" host="ci-4081-2-1-0-c10bd8c210" Dec 13 08:59:28.340198 containerd[1593]: 2024-12-13 08:59:28.247 [INFO][4560] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-2-1-0-c10bd8c210" Dec 13 08:59:28.340198 containerd[1593]: 2024-12-13 08:59:28.257 [INFO][4560] ipam/ipam.go 489: Trying affinity for 192.168.75.128/26 host="ci-4081-2-1-0-c10bd8c210" Dec 13 08:59:28.340198 containerd[1593]: 2024-12-13 08:59:28.260 [INFO][4560] ipam/ipam.go 155: Attempting to load block cidr=192.168.75.128/26 host="ci-4081-2-1-0-c10bd8c210" Dec 13 08:59:28.340198 containerd[1593]: 2024-12-13 08:59:28.264 [INFO][4560] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.75.128/26 host="ci-4081-2-1-0-c10bd8c210" Dec 13 08:59:28.340198 containerd[1593]: 2024-12-13 08:59:28.264 [INFO][4560] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.75.128/26 handle="k8s-pod-network.dfe4f401295671b62f63ba712f02bf7b0393ae78edef93420d2dd494ae3a6967" host="ci-4081-2-1-0-c10bd8c210" Dec 13 08:59:28.340198 containerd[1593]: 2024-12-13 08:59:28.266 [INFO][4560] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.dfe4f401295671b62f63ba712f02bf7b0393ae78edef93420d2dd494ae3a6967 Dec 13 08:59:28.340198 containerd[1593]: 2024-12-13 08:59:28.274 [INFO][4560] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.75.128/26 handle="k8s-pod-network.dfe4f401295671b62f63ba712f02bf7b0393ae78edef93420d2dd494ae3a6967" host="ci-4081-2-1-0-c10bd8c210" Dec 13 08:59:28.340198 containerd[1593]: 2024-12-13 08:59:28.287 [INFO][4560] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.75.130/26] block=192.168.75.128/26 handle="k8s-pod-network.dfe4f401295671b62f63ba712f02bf7b0393ae78edef93420d2dd494ae3a6967" host="ci-4081-2-1-0-c10bd8c210" Dec 13 08:59:28.340198 containerd[1593]: 2024-12-13 08:59:28.287 [INFO][4560] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.75.130/26] handle="k8s-pod-network.dfe4f401295671b62f63ba712f02bf7b0393ae78edef93420d2dd494ae3a6967" host="ci-4081-2-1-0-c10bd8c210" Dec 13 08:59:28.340198 containerd[1593]: 2024-12-13 
08:59:28.287 [INFO][4560] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 08:59:28.340198 containerd[1593]: 2024-12-13 08:59:28.287 [INFO][4560] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.75.130/26] IPv6=[] ContainerID="dfe4f401295671b62f63ba712f02bf7b0393ae78edef93420d2dd494ae3a6967" HandleID="k8s-pod-network.dfe4f401295671b62f63ba712f02bf7b0393ae78edef93420d2dd494ae3a6967" Workload="ci--4081--2--1--0--c10bd8c210-k8s-csi--node--driver--4blkf-eth0" Dec 13 08:59:28.340825 containerd[1593]: 2024-12-13 08:59:28.292 [INFO][4523] cni-plugin/k8s.go 386: Populated endpoint ContainerID="dfe4f401295671b62f63ba712f02bf7b0393ae78edef93420d2dd494ae3a6967" Namespace="calico-system" Pod="csi-node-driver-4blkf" WorkloadEndpoint="ci--4081--2--1--0--c10bd8c210-k8s-csi--node--driver--4blkf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--0--c10bd8c210-k8s-csi--node--driver--4blkf-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"68d5070c-72c1-493f-9630-8955fb2d5362", ResourceVersion:"796", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 8, 59, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-0-c10bd8c210", ContainerID:"", Pod:"csi-node-driver-4blkf", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.75.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calid7df9695e01", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 08:59:28.340825 containerd[1593]: 2024-12-13 08:59:28.293 [INFO][4523] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.75.130/32] ContainerID="dfe4f401295671b62f63ba712f02bf7b0393ae78edef93420d2dd494ae3a6967" Namespace="calico-system" Pod="csi-node-driver-4blkf" WorkloadEndpoint="ci--4081--2--1--0--c10bd8c210-k8s-csi--node--driver--4blkf-eth0" Dec 13 08:59:28.340825 containerd[1593]: 2024-12-13 08:59:28.293 [INFO][4523] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid7df9695e01 ContainerID="dfe4f401295671b62f63ba712f02bf7b0393ae78edef93420d2dd494ae3a6967" Namespace="calico-system" Pod="csi-node-driver-4blkf" WorkloadEndpoint="ci--4081--2--1--0--c10bd8c210-k8s-csi--node--driver--4blkf-eth0" Dec 13 08:59:28.340825 containerd[1593]: 2024-12-13 08:59:28.304 [INFO][4523] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="dfe4f401295671b62f63ba712f02bf7b0393ae78edef93420d2dd494ae3a6967" Namespace="calico-system" Pod="csi-node-driver-4blkf" WorkloadEndpoint="ci--4081--2--1--0--c10bd8c210-k8s-csi--node--driver--4blkf-eth0" Dec 13 08:59:28.340825 containerd[1593]: 2024-12-13 08:59:28.306 [INFO][4523] cni-plugin/k8s.go 414: Added Mac, interface name, and active 
container ID to endpoint ContainerID="dfe4f401295671b62f63ba712f02bf7b0393ae78edef93420d2dd494ae3a6967" Namespace="calico-system" Pod="csi-node-driver-4blkf" WorkloadEndpoint="ci--4081--2--1--0--c10bd8c210-k8s-csi--node--driver--4blkf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--0--c10bd8c210-k8s-csi--node--driver--4blkf-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"68d5070c-72c1-493f-9630-8955fb2d5362", ResourceVersion:"796", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 8, 59, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-0-c10bd8c210", ContainerID:"dfe4f401295671b62f63ba712f02bf7b0393ae78edef93420d2dd494ae3a6967", Pod:"csi-node-driver-4blkf", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.75.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calid7df9695e01", MAC:"26:92:eb:a4:cd:e1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 08:59:28.340825 containerd[1593]: 2024-12-13 08:59:28.335 [INFO][4523] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="dfe4f401295671b62f63ba712f02bf7b0393ae78edef93420d2dd494ae3a6967" Namespace="calico-system" Pod="csi-node-driver-4blkf" WorkloadEndpoint="ci--4081--2--1--0--c10bd8c210-k8s-csi--node--driver--4blkf-eth0" Dec 13 08:59:28.381383 containerd[1593]: time="2024-12-13T08:59:28.378941816Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 08:59:28.381383 containerd[1593]: time="2024-12-13T08:59:28.379027216Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 08:59:28.381383 containerd[1593]: time="2024-12-13T08:59:28.379043416Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 08:59:28.381383 containerd[1593]: time="2024-12-13T08:59:28.379146137Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 08:59:28.410779 systemd-networkd[1243]: calia3ffc869158: Link UP Dec 13 08:59:28.411121 systemd-networkd[1243]: calia3ffc869158: Gained carrier Dec 13 08:59:28.486979 systemd-networkd[1243]: vxlan.calico: Link UP Dec 13 08:59:28.486988 systemd-networkd[1243]: vxlan.calico: Gained carrier Dec 13 08:59:28.500954 containerd[1593]: 2024-12-13 08:59:28.152 [INFO][4532] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--2--1--0--c10bd8c210-k8s-calico--kube--controllers--645f8cf8f--sbxh2-eth0 calico-kube-controllers-645f8cf8f- calico-system 7dd748b5-18dc-47c1-b24f-5ff405335976 797 0 2024-12-13 08:59:05 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:645f8cf8f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081-2-1-0-c10bd8c210 calico-kube-controllers-645f8cf8f-sbxh2 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calia3ffc869158 [] []}} ContainerID="4175583aae43f4eadb6ac289dfc4da8978e7fd2cdbcae477dabcb8907aa5c29c" Namespace="calico-system" Pod="calico-kube-controllers-645f8cf8f-sbxh2" WorkloadEndpoint="ci--4081--2--1--0--c10bd8c210-k8s-calico--kube--controllers--645f8cf8f--sbxh2-" Dec 13 08:59:28.500954 containerd[1593]: 2024-12-13 08:59:28.153 [INFO][4532] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="4175583aae43f4eadb6ac289dfc4da8978e7fd2cdbcae477dabcb8907aa5c29c" Namespace="calico-system" Pod="calico-kube-controllers-645f8cf8f-sbxh2" WorkloadEndpoint="ci--4081--2--1--0--c10bd8c210-k8s-calico--kube--controllers--645f8cf8f--sbxh2-eth0" Dec 13 08:59:28.500954 containerd[1593]: 2024-12-13 08:59:28.211 [INFO][4556] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4175583aae43f4eadb6ac289dfc4da8978e7fd2cdbcae477dabcb8907aa5c29c" HandleID="k8s-pod-network.4175583aae43f4eadb6ac289dfc4da8978e7fd2cdbcae477dabcb8907aa5c29c" Workload="ci--4081--2--1--0--c10bd8c210-k8s-calico--kube--controllers--645f8cf8f--sbxh2-eth0" Dec 13 08:59:28.500954 containerd[1593]: 2024-12-13 08:59:28.236 [INFO][4556] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4175583aae43f4eadb6ac289dfc4da8978e7fd2cdbcae477dabcb8907aa5c29c" HandleID="k8s-pod-network.4175583aae43f4eadb6ac289dfc4da8978e7fd2cdbcae477dabcb8907aa5c29c" Workload="ci--4081--2--1--0--c10bd8c210-k8s-calico--kube--controllers--645f8cf8f--sbxh2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000316e10), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-2-1-0-c10bd8c210", "pod":"calico-kube-controllers-645f8cf8f-sbxh2", "timestamp":"2024-12-13 08:59:28.2109425 +0000 UTC"}, Hostname:"ci-4081-2-1-0-c10bd8c210", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 08:59:28.500954 containerd[1593]: 2024-12-13 08:59:28.238 [INFO][4556] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 08:59:28.500954 containerd[1593]: 2024-12-13 08:59:28.287 [INFO][4556] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 08:59:28.500954 containerd[1593]: 2024-12-13 08:59:28.288 [INFO][4556] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-2-1-0-c10bd8c210' Dec 13 08:59:28.500954 containerd[1593]: 2024-12-13 08:59:28.295 [INFO][4556] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.4175583aae43f4eadb6ac289dfc4da8978e7fd2cdbcae477dabcb8907aa5c29c" host="ci-4081-2-1-0-c10bd8c210" Dec 13 08:59:28.500954 containerd[1593]: 2024-12-13 08:59:28.326 [INFO][4556] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-2-1-0-c10bd8c210" Dec 13 08:59:28.500954 containerd[1593]: 2024-12-13 08:59:28.352 [INFO][4556] ipam/ipam.go 489: Trying affinity for 192.168.75.128/26 host="ci-4081-2-1-0-c10bd8c210" Dec 13 08:59:28.500954 containerd[1593]: 2024-12-13 08:59:28.356 [INFO][4556] ipam/ipam.go 155: Attempting to load block cidr=192.168.75.128/26 host="ci-4081-2-1-0-c10bd8c210" Dec 13 08:59:28.500954 containerd[1593]: 2024-12-13 08:59:28.361 [INFO][4556] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.75.128/26 host="ci-4081-2-1-0-c10bd8c210" Dec 13 08:59:28.500954 containerd[1593]: 2024-12-13 08:59:28.361 [INFO][4556] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.75.128/26 handle="k8s-pod-network.4175583aae43f4eadb6ac289dfc4da8978e7fd2cdbcae477dabcb8907aa5c29c" host="ci-4081-2-1-0-c10bd8c210" Dec 13 08:59:28.500954 containerd[1593]: 2024-12-13 08:59:28.365 [INFO][4556] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.4175583aae43f4eadb6ac289dfc4da8978e7fd2cdbcae477dabcb8907aa5c29c Dec 13 08:59:28.500954 containerd[1593]: 2024-12-13 08:59:28.379 [INFO][4556] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.75.128/26 handle="k8s-pod-network.4175583aae43f4eadb6ac289dfc4da8978e7fd2cdbcae477dabcb8907aa5c29c" host="ci-4081-2-1-0-c10bd8c210" Dec 13 08:59:28.500954 containerd[1593]: 2024-12-13 08:59:28.399 [INFO][4556] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.75.131/26] block=192.168.75.128/26 handle="k8s-pod-network.4175583aae43f4eadb6ac289dfc4da8978e7fd2cdbcae477dabcb8907aa5c29c" host="ci-4081-2-1-0-c10bd8c210" Dec 13 08:59:28.500954 containerd[1593]: 2024-12-13 08:59:28.399 [INFO][4556] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.75.131/26] handle="k8s-pod-network.4175583aae43f4eadb6ac289dfc4da8978e7fd2cdbcae477dabcb8907aa5c29c" host="ci-4081-2-1-0-c10bd8c210" Dec 13 08:59:28.500954 containerd[1593]: 2024-12-13 08:59:28.400 [INFO][4556] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Dec 13 08:59:28.500954 containerd[1593]: 2024-12-13 08:59:28.400 [INFO][4556] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.75.131/26] IPv6=[] ContainerID="4175583aae43f4eadb6ac289dfc4da8978e7fd2cdbcae477dabcb8907aa5c29c" HandleID="k8s-pod-network.4175583aae43f4eadb6ac289dfc4da8978e7fd2cdbcae477dabcb8907aa5c29c" Workload="ci--4081--2--1--0--c10bd8c210-k8s-calico--kube--controllers--645f8cf8f--sbxh2-eth0" Dec 13 08:59:28.502582 containerd[1593]: 2024-12-13 08:59:28.404 [INFO][4532] cni-plugin/k8s.go 386: Populated endpoint ContainerID="4175583aae43f4eadb6ac289dfc4da8978e7fd2cdbcae477dabcb8907aa5c29c" Namespace="calico-system" Pod="calico-kube-controllers-645f8cf8f-sbxh2" WorkloadEndpoint="ci--4081--2--1--0--c10bd8c210-k8s-calico--kube--controllers--645f8cf8f--sbxh2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--0--c10bd8c210-k8s-calico--kube--controllers--645f8cf8f--sbxh2-eth0", GenerateName:"calico-kube-controllers-645f8cf8f-", Namespace:"calico-system", SelfLink:"", UID:"7dd748b5-18dc-47c1-b24f-5ff405335976", ResourceVersion:"797", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 8, 59, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"645f8cf8f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-0-c10bd8c210", ContainerID:"", Pod:"calico-kube-controllers-645f8cf8f-sbxh2", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.75.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia3ffc869158", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 08:59:28.502582 containerd[1593]: 2024-12-13 08:59:28.404 [INFO][4532] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.75.131/32] ContainerID="4175583aae43f4eadb6ac289dfc4da8978e7fd2cdbcae477dabcb8907aa5c29c" Namespace="calico-system" Pod="calico-kube-controllers-645f8cf8f-sbxh2" WorkloadEndpoint="ci--4081--2--1--0--c10bd8c210-k8s-calico--kube--controllers--645f8cf8f--sbxh2-eth0" Dec 13 08:59:28.502582 containerd[1593]: 2024-12-13 08:59:28.404 [INFO][4532] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia3ffc869158 ContainerID="4175583aae43f4eadb6ac289dfc4da8978e7fd2cdbcae477dabcb8907aa5c29c" Namespace="calico-system" Pod="calico-kube-controllers-645f8cf8f-sbxh2" WorkloadEndpoint="ci--4081--2--1--0--c10bd8c210-k8s-calico--kube--controllers--645f8cf8f--sbxh2-eth0" Dec 13 08:59:28.502582 containerd[1593]: 2024-12-13 08:59:28.407 [INFO][4532] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4175583aae43f4eadb6ac289dfc4da8978e7fd2cdbcae477dabcb8907aa5c29c" Namespace="calico-system" Pod="calico-kube-controllers-645f8cf8f-sbxh2" WorkloadEndpoint="ci--4081--2--1--0--c10bd8c210-k8s-calico--kube--controllers--645f8cf8f--sbxh2-eth0" Dec 13 08:59:28.502582 
containerd[1593]: 2024-12-13 08:59:28.407 [INFO][4532] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="4175583aae43f4eadb6ac289dfc4da8978e7fd2cdbcae477dabcb8907aa5c29c" Namespace="calico-system" Pod="calico-kube-controllers-645f8cf8f-sbxh2" WorkloadEndpoint="ci--4081--2--1--0--c10bd8c210-k8s-calico--kube--controllers--645f8cf8f--sbxh2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--0--c10bd8c210-k8s-calico--kube--controllers--645f8cf8f--sbxh2-eth0", GenerateName:"calico-kube-controllers-645f8cf8f-", Namespace:"calico-system", SelfLink:"", UID:"7dd748b5-18dc-47c1-b24f-5ff405335976", ResourceVersion:"797", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 8, 59, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"645f8cf8f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-0-c10bd8c210", ContainerID:"4175583aae43f4eadb6ac289dfc4da8978e7fd2cdbcae477dabcb8907aa5c29c", Pod:"calico-kube-controllers-645f8cf8f-sbxh2", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.75.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia3ffc869158", MAC:"fe:44:cd:01:7d:0c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 08:59:28.502582 containerd[1593]: 2024-12-13 08:59:28.432 [INFO][4532] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="4175583aae43f4eadb6ac289dfc4da8978e7fd2cdbcae477dabcb8907aa5c29c" Namespace="calico-system" Pod="calico-kube-controllers-645f8cf8f-sbxh2" WorkloadEndpoint="ci--4081--2--1--0--c10bd8c210-k8s-calico--kube--controllers--645f8cf8f--sbxh2-eth0" Dec 13 08:59:28.536959 containerd[1593]: time="2024-12-13T08:59:28.536193366Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4blkf,Uid:68d5070c-72c1-493f-9630-8955fb2d5362,Namespace:calico-system,Attempt:1,} returns sandbox id \"dfe4f401295671b62f63ba712f02bf7b0393ae78edef93420d2dd494ae3a6967\"" Dec 13 08:59:28.566093 containerd[1593]: time="2024-12-13T08:59:28.565876652Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Dec 13 08:59:28.576950 containerd[1593]: time="2024-12-13T08:59:28.576383897Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 08:59:28.577685 containerd[1593]: time="2024-12-13T08:59:28.577394421Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 08:59:28.577685 containerd[1593]: time="2024-12-13T08:59:28.577422622Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 08:59:28.577685 containerd[1593]: time="2024-12-13T08:59:28.577535462Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 08:59:28.642466 containerd[1593]: time="2024-12-13T08:59:28.642388298Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-645f8cf8f-sbxh2,Uid:7dd748b5-18dc-47c1-b24f-5ff405335976,Namespace:calico-system,Attempt:1,} returns sandbox id \"4175583aae43f4eadb6ac289dfc4da8978e7fd2cdbcae477dabcb8907aa5c29c\"" Dec 13 08:59:28.731570 containerd[1593]: time="2024-12-13T08:59:28.731197957Z" level=info msg="StopPodSandbox for \"0e6afc49edbf356127d65bfb8889e1aecee79f2be8071e1176dde2d3660d8aa0\"" Dec 13 08:59:28.731985 containerd[1593]: time="2024-12-13T08:59:28.731847360Z" level=info msg="StopPodSandbox for \"8ebe8a031831ef20a796b2db9e54f4bfcf115ccec4d1a3330a11545ac1712fdf\"" Dec 13 08:59:28.738912 containerd[1593]: time="2024-12-13T08:59:28.733802448Z" level=info msg="StopPodSandbox for \"0f218e6c1a541400c97c31881ba6086a8322366010567263854d19f126daf6ac\"" Dec 13 08:59:28.931493 systemd-networkd[1243]: cali1f64888e142: Gained IPv6LL Dec 13 08:59:28.974853 containerd[1593]: 2024-12-13 08:59:28.879 [INFO][4763] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="8ebe8a031831ef20a796b2db9e54f4bfcf115ccec4d1a3330a11545ac1712fdf" Dec 13 08:59:28.974853 containerd[1593]: 2024-12-13 08:59:28.879 [INFO][4763] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="8ebe8a031831ef20a796b2db9e54f4bfcf115ccec4d1a3330a11545ac1712fdf" iface="eth0" netns="/var/run/netns/cni-8623b698-16d7-75c0-1f93-279f2986835c" Dec 13 08:59:28.974853 containerd[1593]: 2024-12-13 08:59:28.880 [INFO][4763] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="8ebe8a031831ef20a796b2db9e54f4bfcf115ccec4d1a3330a11545ac1712fdf" iface="eth0" netns="/var/run/netns/cni-8623b698-16d7-75c0-1f93-279f2986835c" Dec 13 08:59:28.974853 containerd[1593]: 2024-12-13 08:59:28.880 [INFO][4763] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="8ebe8a031831ef20a796b2db9e54f4bfcf115ccec4d1a3330a11545ac1712fdf" iface="eth0" netns="/var/run/netns/cni-8623b698-16d7-75c0-1f93-279f2986835c" Dec 13 08:59:28.974853 containerd[1593]: 2024-12-13 08:59:28.880 [INFO][4763] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="8ebe8a031831ef20a796b2db9e54f4bfcf115ccec4d1a3330a11545ac1712fdf" Dec 13 08:59:28.974853 containerd[1593]: 2024-12-13 08:59:28.880 [INFO][4763] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8ebe8a031831ef20a796b2db9e54f4bfcf115ccec4d1a3330a11545ac1712fdf" Dec 13 08:59:28.974853 containerd[1593]: 2024-12-13 08:59:28.933 [INFO][4798] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8ebe8a031831ef20a796b2db9e54f4bfcf115ccec4d1a3330a11545ac1712fdf" HandleID="k8s-pod-network.8ebe8a031831ef20a796b2db9e54f4bfcf115ccec4d1a3330a11545ac1712fdf" Workload="ci--4081--2--1--0--c10bd8c210-k8s-calico--apiserver--55775f8f--ncv49-eth0" Dec 13 08:59:28.974853 containerd[1593]: 2024-12-13 08:59:28.942 [INFO][4798] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 08:59:28.974853 containerd[1593]: 2024-12-13 08:59:28.943 [INFO][4798] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 08:59:28.974853 containerd[1593]: 2024-12-13 08:59:28.966 [WARNING][4798] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="8ebe8a031831ef20a796b2db9e54f4bfcf115ccec4d1a3330a11545ac1712fdf" HandleID="k8s-pod-network.8ebe8a031831ef20a796b2db9e54f4bfcf115ccec4d1a3330a11545ac1712fdf" Workload="ci--4081--2--1--0--c10bd8c210-k8s-calico--apiserver--55775f8f--ncv49-eth0" Dec 13 08:59:28.974853 containerd[1593]: 2024-12-13 08:59:28.966 [INFO][4798] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8ebe8a031831ef20a796b2db9e54f4bfcf115ccec4d1a3330a11545ac1712fdf" HandleID="k8s-pod-network.8ebe8a031831ef20a796b2db9e54f4bfcf115ccec4d1a3330a11545ac1712fdf" Workload="ci--4081--2--1--0--c10bd8c210-k8s-calico--apiserver--55775f8f--ncv49-eth0" Dec 13 08:59:28.974853 containerd[1593]: 2024-12-13 08:59:28.969 [INFO][4798] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 08:59:28.974853 containerd[1593]: 2024-12-13 08:59:28.972 [INFO][4763] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="8ebe8a031831ef20a796b2db9e54f4bfcf115ccec4d1a3330a11545ac1712fdf" Dec 13 08:59:28.979696 systemd[1]: run-netns-cni\x2d8623b698\x2d16d7\x2d75c0\x2d1f93\x2d279f2986835c.mount: Deactivated successfully. Dec 13 08:59:28.981498 containerd[1593]: time="2024-12-13T08:59:28.977138885Z" level=info msg="TearDown network for sandbox \"8ebe8a031831ef20a796b2db9e54f4bfcf115ccec4d1a3330a11545ac1712fdf\" successfully" Dec 13 08:59:28.981498 containerd[1593]: time="2024-12-13T08:59:28.980085337Z" level=info msg="StopPodSandbox for \"8ebe8a031831ef20a796b2db9e54f4bfcf115ccec4d1a3330a11545ac1712fdf\" returns successfully" Dec 13 08:59:28.981498 containerd[1593]: time="2024-12-13T08:59:28.980793340Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-55775f8f-ncv49,Uid:ebb43a46-8408-46f0-b3a8-196003d5a5b9,Namespace:calico-apiserver,Attempt:1,}" Dec 13 08:59:29.033582 containerd[1593]: 2024-12-13 08:59:28.937 [INFO][4768] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="0e6afc49edbf356127d65bfb8889e1aecee79f2be8071e1176dde2d3660d8aa0" Dec 13 08:59:29.033582 containerd[1593]: 2024-12-13 08:59:28.940 [INFO][4768] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="0e6afc49edbf356127d65bfb8889e1aecee79f2be8071e1176dde2d3660d8aa0" iface="eth0" netns="/var/run/netns/cni-58624bcb-4d9f-9123-d42f-e8ba1e8d24e4" Dec 13 08:59:29.033582 containerd[1593]: 2024-12-13 08:59:28.942 [INFO][4768] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="0e6afc49edbf356127d65bfb8889e1aecee79f2be8071e1176dde2d3660d8aa0" iface="eth0" netns="/var/run/netns/cni-58624bcb-4d9f-9123-d42f-e8ba1e8d24e4" Dec 13 08:59:29.033582 containerd[1593]: 2024-12-13 08:59:28.942 [INFO][4768] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="0e6afc49edbf356127d65bfb8889e1aecee79f2be8071e1176dde2d3660d8aa0" iface="eth0" netns="/var/run/netns/cni-58624bcb-4d9f-9123-d42f-e8ba1e8d24e4" Dec 13 08:59:29.033582 containerd[1593]: 2024-12-13 08:59:28.943 [INFO][4768] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="0e6afc49edbf356127d65bfb8889e1aecee79f2be8071e1176dde2d3660d8aa0" Dec 13 08:59:29.033582 containerd[1593]: 2024-12-13 08:59:28.943 [INFO][4768] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0e6afc49edbf356127d65bfb8889e1aecee79f2be8071e1176dde2d3660d8aa0" Dec 13 08:59:29.033582 containerd[1593]: 2024-12-13 08:59:28.999 [INFO][4807] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0e6afc49edbf356127d65bfb8889e1aecee79f2be8071e1176dde2d3660d8aa0" HandleID="k8s-pod-network.0e6afc49edbf356127d65bfb8889e1aecee79f2be8071e1176dde2d3660d8aa0" Workload="ci--4081--2--1--0--c10bd8c210-k8s-calico--apiserver--55775f8f--2kt75-eth0" Dec 13 08:59:29.033582 containerd[1593]: 2024-12-13 08:59:29.000 [INFO][4807] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 08:59:29.033582 containerd[1593]: 2024-12-13 08:59:29.000 [INFO][4807] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 08:59:29.033582 containerd[1593]: 2024-12-13 08:59:29.020 [WARNING][4807] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="0e6afc49edbf356127d65bfb8889e1aecee79f2be8071e1176dde2d3660d8aa0" HandleID="k8s-pod-network.0e6afc49edbf356127d65bfb8889e1aecee79f2be8071e1176dde2d3660d8aa0" Workload="ci--4081--2--1--0--c10bd8c210-k8s-calico--apiserver--55775f8f--2kt75-eth0" Dec 13 08:59:29.033582 containerd[1593]: 2024-12-13 08:59:29.020 [INFO][4807] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0e6afc49edbf356127d65bfb8889e1aecee79f2be8071e1176dde2d3660d8aa0" HandleID="k8s-pod-network.0e6afc49edbf356127d65bfb8889e1aecee79f2be8071e1176dde2d3660d8aa0" Workload="ci--4081--2--1--0--c10bd8c210-k8s-calico--apiserver--55775f8f--2kt75-eth0" Dec 13 08:59:29.033582 containerd[1593]: 2024-12-13 08:59:29.024 [INFO][4807] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 08:59:29.033582 containerd[1593]: 2024-12-13 08:59:29.028 [INFO][4768] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="0e6afc49edbf356127d65bfb8889e1aecee79f2be8071e1176dde2d3660d8aa0" Dec 13 08:59:29.038568 containerd[1593]: time="2024-12-13T08:59:29.037980624Z" level=info msg="TearDown network for sandbox \"0e6afc49edbf356127d65bfb8889e1aecee79f2be8071e1176dde2d3660d8aa0\" successfully" Dec 13 08:59:29.038568 containerd[1593]: time="2024-12-13T08:59:29.038047745Z" level=info msg="StopPodSandbox for \"0e6afc49edbf356127d65bfb8889e1aecee79f2be8071e1176dde2d3660d8aa0\" returns successfully" Dec 13 08:59:29.042580 systemd[1]: run-netns-cni\x2d58624bcb\x2d4d9f\x2d9123\x2dd42f\x2de8ba1e8d24e4.mount: Deactivated successfully. Dec 13 08:59:29.045604 containerd[1593]: time="2024-12-13T08:59:29.045381896Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-55775f8f-2kt75,Uid:ebf58ab1-e4cb-4792-853d-f90274331666,Namespace:calico-apiserver,Attempt:1,}" Dec 13 08:59:29.062862 containerd[1593]: 2024-12-13 08:59:28.943 [INFO][4773] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="0f218e6c1a541400c97c31881ba6086a8322366010567263854d19f126daf6ac" Dec 13 08:59:29.062862 containerd[1593]: 2024-12-13 08:59:28.943 [INFO][4773] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="0f218e6c1a541400c97c31881ba6086a8322366010567263854d19f126daf6ac" iface="eth0" netns="/var/run/netns/cni-d4f3a219-0348-7289-0c24-efd8d2c35582" Dec 13 08:59:29.062862 containerd[1593]: 2024-12-13 08:59:28.944 [INFO][4773] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="0f218e6c1a541400c97c31881ba6086a8322366010567263854d19f126daf6ac" iface="eth0" netns="/var/run/netns/cni-d4f3a219-0348-7289-0c24-efd8d2c35582" Dec 13 08:59:29.062862 containerd[1593]: 2024-12-13 08:59:28.944 [INFO][4773] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="0f218e6c1a541400c97c31881ba6086a8322366010567263854d19f126daf6ac" iface="eth0" netns="/var/run/netns/cni-d4f3a219-0348-7289-0c24-efd8d2c35582" Dec 13 08:59:29.062862 containerd[1593]: 2024-12-13 08:59:28.944 [INFO][4773] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="0f218e6c1a541400c97c31881ba6086a8322366010567263854d19f126daf6ac" Dec 13 08:59:29.062862 containerd[1593]: 2024-12-13 08:59:28.944 [INFO][4773] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0f218e6c1a541400c97c31881ba6086a8322366010567263854d19f126daf6ac" Dec 13 08:59:29.062862 containerd[1593]: 2024-12-13 08:59:29.009 [INFO][4808] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0f218e6c1a541400c97c31881ba6086a8322366010567263854d19f126daf6ac" HandleID="k8s-pod-network.0f218e6c1a541400c97c31881ba6086a8322366010567263854d19f126daf6ac" Workload="ci--4081--2--1--0--c10bd8c210-k8s-coredns--76f75df574--6sr77-eth0" Dec 13 08:59:29.062862 containerd[1593]: 2024-12-13 08:59:29.009 [INFO][4808] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 08:59:29.062862 containerd[1593]: 2024-12-13 08:59:29.024 [INFO][4808] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 08:59:29.062862 containerd[1593]: 2024-12-13 08:59:29.049 [WARNING][4808] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="0f218e6c1a541400c97c31881ba6086a8322366010567263854d19f126daf6ac" HandleID="k8s-pod-network.0f218e6c1a541400c97c31881ba6086a8322366010567263854d19f126daf6ac" Workload="ci--4081--2--1--0--c10bd8c210-k8s-coredns--76f75df574--6sr77-eth0" Dec 13 08:59:29.062862 containerd[1593]: 2024-12-13 08:59:29.049 [INFO][4808] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0f218e6c1a541400c97c31881ba6086a8322366010567263854d19f126daf6ac" HandleID="k8s-pod-network.0f218e6c1a541400c97c31881ba6086a8322366010567263854d19f126daf6ac" Workload="ci--4081--2--1--0--c10bd8c210-k8s-coredns--76f75df574--6sr77-eth0" Dec 13 08:59:29.062862 containerd[1593]: 2024-12-13 08:59:29.054 [INFO][4808] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 08:59:29.062862 containerd[1593]: 2024-12-13 08:59:29.058 [INFO][4773] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="0f218e6c1a541400c97c31881ba6086a8322366010567263854d19f126daf6ac" Dec 13 08:59:29.064501 containerd[1593]: time="2024-12-13T08:59:29.064303057Z" level=info msg="TearDown network for sandbox \"0f218e6c1a541400c97c31881ba6086a8322366010567263854d19f126daf6ac\" successfully" Dec 13 08:59:29.064562 containerd[1593]: time="2024-12-13T08:59:29.064504818Z" level=info msg="StopPodSandbox for \"0f218e6c1a541400c97c31881ba6086a8322366010567263854d19f126daf6ac\" returns successfully" Dec 13 08:59:29.066240 containerd[1593]: time="2024-12-13T08:59:29.065639422Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-6sr77,Uid:9e0d6dbd-04fe-48c2-bc18-98799d274260,Namespace:kube-system,Attempt:1,}" Dec 13 08:59:29.295158 systemd-networkd[1243]: cali6dd09b9c85f: Link UP Dec 13 08:59:29.295771 systemd-networkd[1243]: cali6dd09b9c85f: Gained carrier Dec 13 08:59:29.329569 containerd[1593]: 2024-12-13 08:59:29.117 [INFO][4819] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--2--1--0--c10bd8c210-k8s-calico--apiserver--55775f8f--ncv49-eth0 calico-apiserver-55775f8f- calico-apiserver ebb43a46-8408-46f0-b3a8-196003d5a5b9 818 0 2024-12-13 08:59:04 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:55775f8f projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-2-1-0-c10bd8c210 calico-apiserver-55775f8f-ncv49 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali6dd09b9c85f [] []}} ContainerID="b482e61994cb96197be1bae51b25bfcad7781b4dca808de6e5da2a10e9cd6cc0" Namespace="calico-apiserver" Pod="calico-apiserver-55775f8f-ncv49" WorkloadEndpoint="ci--4081--2--1--0--c10bd8c210-k8s-calico--apiserver--55775f8f--ncv49-" Dec 13 08:59:29.329569 containerd[1593]: 2024-12-13 08:59:29.117 [INFO][4819] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="b482e61994cb96197be1bae51b25bfcad7781b4dca808de6e5da2a10e9cd6cc0" Namespace="calico-apiserver" Pod="calico-apiserver-55775f8f-ncv49" WorkloadEndpoint="ci--4081--2--1--0--c10bd8c210-k8s-calico--apiserver--55775f8f--ncv49-eth0" Dec 13 08:59:29.329569 containerd[1593]: 2024-12-13 08:59:29.187 [INFO][4851] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b482e61994cb96197be1bae51b25bfcad7781b4dca808de6e5da2a10e9cd6cc0" HandleID="k8s-pod-network.b482e61994cb96197be1bae51b25bfcad7781b4dca808de6e5da2a10e9cd6cc0" Workload="ci--4081--2--1--0--c10bd8c210-k8s-calico--apiserver--55775f8f--ncv49-eth0" Dec 13 08:59:29.329569 containerd[1593]: 2024-12-13 08:59:29.218 [INFO][4851] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b482e61994cb96197be1bae51b25bfcad7781b4dca808de6e5da2a10e9cd6cc0" HandleID="k8s-pod-network.b482e61994cb96197be1bae51b25bfcad7781b4dca808de6e5da2a10e9cd6cc0" Workload="ci--4081--2--1--0--c10bd8c210-k8s-calico--apiserver--55775f8f--ncv49-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003bcb40), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-2-1-0-c10bd8c210", "pod":"calico-apiserver-55775f8f-ncv49", "timestamp":"2024-12-13 08:59:29.187876264 +0000 UTC"}, Hostname:"ci-4081-2-1-0-c10bd8c210", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 08:59:29.329569 containerd[1593]: 2024-12-13 08:59:29.219 [INFO][4851] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 08:59:29.329569 containerd[1593]: 2024-12-13 08:59:29.219 [INFO][4851] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 08:59:29.329569 containerd[1593]: 2024-12-13 08:59:29.219 [INFO][4851] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-2-1-0-c10bd8c210' Dec 13 08:59:29.329569 containerd[1593]: 2024-12-13 08:59:29.224 [INFO][4851] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.b482e61994cb96197be1bae51b25bfcad7781b4dca808de6e5da2a10e9cd6cc0" host="ci-4081-2-1-0-c10bd8c210" Dec 13 08:59:29.329569 containerd[1593]: 2024-12-13 08:59:29.236 [INFO][4851] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-2-1-0-c10bd8c210" Dec 13 08:59:29.329569 containerd[1593]: 2024-12-13 08:59:29.247 [INFO][4851] ipam/ipam.go 489: Trying affinity for 192.168.75.128/26 host="ci-4081-2-1-0-c10bd8c210" Dec 13 08:59:29.329569 containerd[1593]: 2024-12-13 08:59:29.251 [INFO][4851] ipam/ipam.go 155: Attempting to load block cidr=192.168.75.128/26 host="ci-4081-2-1-0-c10bd8c210" Dec 13 08:59:29.329569 containerd[1593]: 2024-12-13 08:59:29.256 [INFO][4851] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.75.128/26 host="ci-4081-2-1-0-c10bd8c210" Dec 13 08:59:29.329569 containerd[1593]: 2024-12-13 08:59:29.256 [INFO][4851] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.75.128/26 handle="k8s-pod-network.b482e61994cb96197be1bae51b25bfcad7781b4dca808de6e5da2a10e9cd6cc0" host="ci-4081-2-1-0-c10bd8c210" Dec 13 08:59:29.329569 containerd[1593]: 2024-12-13 08:59:29.259 [INFO][4851] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.b482e61994cb96197be1bae51b25bfcad7781b4dca808de6e5da2a10e9cd6cc0 Dec 13 08:59:29.329569 containerd[1593]: 2024-12-13 08:59:29.265 [INFO][4851] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.75.128/26 handle="k8s-pod-network.b482e61994cb96197be1bae51b25bfcad7781b4dca808de6e5da2a10e9cd6cc0" host="ci-4081-2-1-0-c10bd8c210" Dec 13 08:59:29.329569 containerd[1593]: 2024-12-13 08:59:29.283 [INFO][4851] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.75.132/26] block=192.168.75.128/26 handle="k8s-pod-network.b482e61994cb96197be1bae51b25bfcad7781b4dca808de6e5da2a10e9cd6cc0" host="ci-4081-2-1-0-c10bd8c210" Dec 13 08:59:29.329569 containerd[1593]: 2024-12-13 08:59:29.283 [INFO][4851] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.75.132/26] handle="k8s-pod-network.b482e61994cb96197be1bae51b25bfcad7781b4dca808de6e5da2a10e9cd6cc0" host="ci-4081-2-1-0-c10bd8c210" Dec 13 08:59:29.329569 containerd[1593]: 2024-12-13 08:59:29.283 [INFO][4851] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Dec 13 08:59:29.329569 containerd[1593]: 2024-12-13 08:59:29.283 [INFO][4851] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.75.132/26] IPv6=[] ContainerID="b482e61994cb96197be1bae51b25bfcad7781b4dca808de6e5da2a10e9cd6cc0" HandleID="k8s-pod-network.b482e61994cb96197be1bae51b25bfcad7781b4dca808de6e5da2a10e9cd6cc0" Workload="ci--4081--2--1--0--c10bd8c210-k8s-calico--apiserver--55775f8f--ncv49-eth0" Dec 13 08:59:29.330265 containerd[1593]: 2024-12-13 08:59:29.285 [INFO][4819] cni-plugin/k8s.go 386: Populated endpoint ContainerID="b482e61994cb96197be1bae51b25bfcad7781b4dca808de6e5da2a10e9cd6cc0" Namespace="calico-apiserver" Pod="calico-apiserver-55775f8f-ncv49" WorkloadEndpoint="ci--4081--2--1--0--c10bd8c210-k8s-calico--apiserver--55775f8f--ncv49-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--0--c10bd8c210-k8s-calico--apiserver--55775f8f--ncv49-eth0", GenerateName:"calico-apiserver-55775f8f-", Namespace:"calico-apiserver", SelfLink:"", UID:"ebb43a46-8408-46f0-b3a8-196003d5a5b9", ResourceVersion:"818", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 8, 59, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"55775f8f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-0-c10bd8c210", ContainerID:"", Pod:"calico-apiserver-55775f8f-ncv49", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.75.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6dd09b9c85f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 08:59:29.330265 containerd[1593]: 2024-12-13 08:59:29.285 [INFO][4819] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.75.132/32] ContainerID="b482e61994cb96197be1bae51b25bfcad7781b4dca808de6e5da2a10e9cd6cc0" Namespace="calico-apiserver" Pod="calico-apiserver-55775f8f-ncv49" WorkloadEndpoint="ci--4081--2--1--0--c10bd8c210-k8s-calico--apiserver--55775f8f--ncv49-eth0" Dec 13 08:59:29.330265 containerd[1593]: 2024-12-13 08:59:29.285 [INFO][4819] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6dd09b9c85f ContainerID="b482e61994cb96197be1bae51b25bfcad7781b4dca808de6e5da2a10e9cd6cc0" Namespace="calico-apiserver" Pod="calico-apiserver-55775f8f-ncv49" WorkloadEndpoint="ci--4081--2--1--0--c10bd8c210-k8s-calico--apiserver--55775f8f--ncv49-eth0" Dec 13 08:59:29.330265 containerd[1593]: 2024-12-13 08:59:29.296 [INFO][4819] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b482e61994cb96197be1bae51b25bfcad7781b4dca808de6e5da2a10e9cd6cc0" Namespace="calico-apiserver" Pod="calico-apiserver-55775f8f-ncv49" WorkloadEndpoint="ci--4081--2--1--0--c10bd8c210-k8s-calico--apiserver--55775f8f--ncv49-eth0" Dec 13 08:59:29.330265 containerd[1593]: 2024-12-13 08:59:29.298 [INFO][4819] cni-plugin/k8s.go 414: Added Mac, interface 
name, and active container ID to endpoint ContainerID="b482e61994cb96197be1bae51b25bfcad7781b4dca808de6e5da2a10e9cd6cc0" Namespace="calico-apiserver" Pod="calico-apiserver-55775f8f-ncv49" WorkloadEndpoint="ci--4081--2--1--0--c10bd8c210-k8s-calico--apiserver--55775f8f--ncv49-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--0--c10bd8c210-k8s-calico--apiserver--55775f8f--ncv49-eth0", GenerateName:"calico-apiserver-55775f8f-", Namespace:"calico-apiserver", SelfLink:"", UID:"ebb43a46-8408-46f0-b3a8-196003d5a5b9", ResourceVersion:"818", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 8, 59, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"55775f8f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-0-c10bd8c210", ContainerID:"b482e61994cb96197be1bae51b25bfcad7781b4dca808de6e5da2a10e9cd6cc0", Pod:"calico-apiserver-55775f8f-ncv49", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.75.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6dd09b9c85f", MAC:"0e:f0:a1:88:3b:24", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 08:59:29.330265 containerd[1593]: 2024-12-13 08:59:29.324 [INFO][4819] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="b482e61994cb96197be1bae51b25bfcad7781b4dca808de6e5da2a10e9cd6cc0" Namespace="calico-apiserver" Pod="calico-apiserver-55775f8f-ncv49" WorkloadEndpoint="ci--4081--2--1--0--c10bd8c210-k8s-calico--apiserver--55775f8f--ncv49-eth0" Dec 13 08:59:29.375120 systemd-networkd[1243]: cali21604a22406: Link UP Dec 13 08:59:29.378207 systemd-networkd[1243]: cali21604a22406: Gained carrier Dec 13 08:59:29.378454 systemd-networkd[1243]: calid7df9695e01: Gained IPv6LL Dec 13 08:59:29.384511 containerd[1593]: time="2024-12-13T08:59:29.380715727Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 08:59:29.384511 containerd[1593]: time="2024-12-13T08:59:29.380781208Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 08:59:29.384511 containerd[1593]: time="2024-12-13T08:59:29.380795688Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 08:59:29.384511 containerd[1593]: time="2024-12-13T08:59:29.380905288Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 08:59:29.422498 containerd[1593]: 2024-12-13 08:59:29.152 [INFO][4841] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--2--1--0--c10bd8c210-k8s-coredns--76f75df574--6sr77-eth0 coredns-76f75df574- kube-system 9e0d6dbd-04fe-48c2-bc18-98799d274260 820 0 2024-12-13 08:58:58 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-2-1-0-c10bd8c210 coredns-76f75df574-6sr77 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali21604a22406 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="ae76143635a7750dbbcfccfe9ee97f84c3b14b60d7ee0fbb50e18ff20f05b73a" Namespace="kube-system" Pod="coredns-76f75df574-6sr77" WorkloadEndpoint="ci--4081--2--1--0--c10bd8c210-k8s-coredns--76f75df574--6sr77-" Dec 13 08:59:29.422498 containerd[1593]: 2024-12-13 08:59:29.153 [INFO][4841] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="ae76143635a7750dbbcfccfe9ee97f84c3b14b60d7ee0fbb50e18ff20f05b73a" Namespace="kube-system" Pod="coredns-76f75df574-6sr77" WorkloadEndpoint="ci--4081--2--1--0--c10bd8c210-k8s-coredns--76f75df574--6sr77-eth0" Dec 13 08:59:29.422498 containerd[1593]: 2024-12-13 08:59:29.203 [INFO][4862] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ae76143635a7750dbbcfccfe9ee97f84c3b14b60d7ee0fbb50e18ff20f05b73a" HandleID="k8s-pod-network.ae76143635a7750dbbcfccfe9ee97f84c3b14b60d7ee0fbb50e18ff20f05b73a" Workload="ci--4081--2--1--0--c10bd8c210-k8s-coredns--76f75df574--6sr77-eth0" Dec 13 08:59:29.422498 containerd[1593]: 2024-12-13 08:59:29.228 [INFO][4862] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ae76143635a7750dbbcfccfe9ee97f84c3b14b60d7ee0fbb50e18ff20f05b73a" HandleID="k8s-pod-network.ae76143635a7750dbbcfccfe9ee97f84c3b14b60d7ee0fbb50e18ff20f05b73a" Workload="ci--4081--2--1--0--c10bd8c210-k8s-coredns--76f75df574--6sr77-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004cd30), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-2-1-0-c10bd8c210", "pod":"coredns-76f75df574-6sr77", "timestamp":"2024-12-13 08:59:29.20322257 +0000 UTC"}, Hostname:"ci-4081-2-1-0-c10bd8c210", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 08:59:29.422498 containerd[1593]: 2024-12-13 08:59:29.228 [INFO][4862] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 08:59:29.422498 containerd[1593]: 2024-12-13 08:59:29.283 [INFO][4862] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 08:59:29.422498 containerd[1593]: 2024-12-13 08:59:29.283 [INFO][4862] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-2-1-0-c10bd8c210' Dec 13 08:59:29.422498 containerd[1593]: 2024-12-13 08:59:29.287 [INFO][4862] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.ae76143635a7750dbbcfccfe9ee97f84c3b14b60d7ee0fbb50e18ff20f05b73a" host="ci-4081-2-1-0-c10bd8c210" Dec 13 08:59:29.422498 containerd[1593]: 2024-12-13 08:59:29.306 [INFO][4862] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-2-1-0-c10bd8c210" Dec 13 08:59:29.422498 containerd[1593]: 2024-12-13 08:59:29.325 [INFO][4862] ipam/ipam.go 489: Trying affinity for 192.168.75.128/26 host="ci-4081-2-1-0-c10bd8c210" Dec 13 08:59:29.422498 containerd[1593]: 2024-12-13 08:59:29.331 [INFO][4862] ipam/ipam.go 155: Attempting to load block cidr=192.168.75.128/26 host="ci-4081-2-1-0-c10bd8c210" Dec 13 08:59:29.422498 containerd[1593]: 2024-12-13 08:59:29.336 [INFO][4862] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.75.128/26 host="ci-4081-2-1-0-c10bd8c210" Dec 13 08:59:29.422498 containerd[1593]: 2024-12-13 08:59:29.336 [INFO][4862] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.75.128/26 handle="k8s-pod-network.ae76143635a7750dbbcfccfe9ee97f84c3b14b60d7ee0fbb50e18ff20f05b73a" host="ci-4081-2-1-0-c10bd8c210" Dec 13 08:59:29.422498 containerd[1593]: 2024-12-13 08:59:29.340 [INFO][4862] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.ae76143635a7750dbbcfccfe9ee97f84c3b14b60d7ee0fbb50e18ff20f05b73a Dec 13 08:59:29.422498 containerd[1593]: 2024-12-13 08:59:29.348 [INFO][4862] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.75.128/26 handle="k8s-pod-network.ae76143635a7750dbbcfccfe9ee97f84c3b14b60d7ee0fbb50e18ff20f05b73a" host="ci-4081-2-1-0-c10bd8c210" Dec 13 08:59:29.422498 containerd[1593]: 2024-12-13 08:59:29.361 [INFO][4862] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.75.133/26] block=192.168.75.128/26 handle="k8s-pod-network.ae76143635a7750dbbcfccfe9ee97f84c3b14b60d7ee0fbb50e18ff20f05b73a" host="ci-4081-2-1-0-c10bd8c210" Dec 13 08:59:29.422498 containerd[1593]: 2024-12-13 08:59:29.361 [INFO][4862] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.75.133/26] handle="k8s-pod-network.ae76143635a7750dbbcfccfe9ee97f84c3b14b60d7ee0fbb50e18ff20f05b73a" host="ci-4081-2-1-0-c10bd8c210" Dec 13 08:59:29.422498 containerd[1593]: 2024-12-13 08:59:29.361 [INFO][4862] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Dec 13 08:59:29.422498 containerd[1593]: 2024-12-13 08:59:29.361 [INFO][4862] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.75.133/26] IPv6=[] ContainerID="ae76143635a7750dbbcfccfe9ee97f84c3b14b60d7ee0fbb50e18ff20f05b73a" HandleID="k8s-pod-network.ae76143635a7750dbbcfccfe9ee97f84c3b14b60d7ee0fbb50e18ff20f05b73a" Workload="ci--4081--2--1--0--c10bd8c210-k8s-coredns--76f75df574--6sr77-eth0" Dec 13 08:59:29.423551 containerd[1593]: 2024-12-13 08:59:29.364 [INFO][4841] cni-plugin/k8s.go 386: Populated endpoint ContainerID="ae76143635a7750dbbcfccfe9ee97f84c3b14b60d7ee0fbb50e18ff20f05b73a" Namespace="kube-system" Pod="coredns-76f75df574-6sr77" WorkloadEndpoint="ci--4081--2--1--0--c10bd8c210-k8s-coredns--76f75df574--6sr77-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--0--c10bd8c210-k8s-coredns--76f75df574--6sr77-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"9e0d6dbd-04fe-48c2-bc18-98799d274260", ResourceVersion:"820", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 8, 58, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-0-c10bd8c210", ContainerID:"", Pod:"coredns-76f75df574-6sr77", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.75.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali21604a22406", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 08:59:29.423551 containerd[1593]: 2024-12-13 08:59:29.365 [INFO][4841] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.75.133/32] ContainerID="ae76143635a7750dbbcfccfe9ee97f84c3b14b60d7ee0fbb50e18ff20f05b73a" Namespace="kube-system" Pod="coredns-76f75df574-6sr77" WorkloadEndpoint="ci--4081--2--1--0--c10bd8c210-k8s-coredns--76f75df574--6sr77-eth0" Dec 13 08:59:29.423551 containerd[1593]: 2024-12-13 08:59:29.365 [INFO][4841] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali21604a22406 ContainerID="ae76143635a7750dbbcfccfe9ee97f84c3b14b60d7ee0fbb50e18ff20f05b73a" Namespace="kube-system" Pod="coredns-76f75df574-6sr77" WorkloadEndpoint="ci--4081--2--1--0--c10bd8c210-k8s-coredns--76f75df574--6sr77-eth0" Dec 13 08:59:29.423551 containerd[1593]: 2024-12-13 08:59:29.383 [INFO][4841] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ae76143635a7750dbbcfccfe9ee97f84c3b14b60d7ee0fbb50e18ff20f05b73a" Namespace="kube-system" Pod="coredns-76f75df574-6sr77" 
WorkloadEndpoint="ci--4081--2--1--0--c10bd8c210-k8s-coredns--76f75df574--6sr77-eth0" Dec 13 08:59:29.423551 containerd[1593]: 2024-12-13 08:59:29.386 [INFO][4841] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="ae76143635a7750dbbcfccfe9ee97f84c3b14b60d7ee0fbb50e18ff20f05b73a" Namespace="kube-system" Pod="coredns-76f75df574-6sr77" WorkloadEndpoint="ci--4081--2--1--0--c10bd8c210-k8s-coredns--76f75df574--6sr77-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--0--c10bd8c210-k8s-coredns--76f75df574--6sr77-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"9e0d6dbd-04fe-48c2-bc18-98799d274260", ResourceVersion:"820", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 8, 58, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-0-c10bd8c210", ContainerID:"ae76143635a7750dbbcfccfe9ee97f84c3b14b60d7ee0fbb50e18ff20f05b73a", Pod:"coredns-76f75df574-6sr77", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.75.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali21604a22406", MAC:"b6:c8:fd:77:31:92", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 08:59:29.423903 containerd[1593]: 2024-12-13 08:59:29.413 [INFO][4841] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="ae76143635a7750dbbcfccfe9ee97f84c3b14b60d7ee0fbb50e18ff20f05b73a" Namespace="kube-system" Pod="coredns-76f75df574-6sr77" WorkloadEndpoint="ci--4081--2--1--0--c10bd8c210-k8s-coredns--76f75df574--6sr77-eth0" Dec 13 08:59:29.460537 systemd-networkd[1243]: calif3ed80bd617: Link UP Dec 13 08:59:29.464194 systemd-networkd[1243]: calif3ed80bd617: Gained carrier Dec 13 08:59:29.488642 containerd[1593]: time="2024-12-13T08:59:29.488436987Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 08:59:29.488981 containerd[1593]: time="2024-12-13T08:59:29.488570948Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 08:59:29.488981 containerd[1593]: time="2024-12-13T08:59:29.488585588Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 08:59:29.489605 containerd[1593]: time="2024-12-13T08:59:29.489244271Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 08:59:29.491289 containerd[1593]: 2024-12-13 08:59:29.167 [INFO][4830] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--2--1--0--c10bd8c210-k8s-calico--apiserver--55775f8f--2kt75-eth0 calico-apiserver-55775f8f- calico-apiserver ebf58ab1-e4cb-4792-853d-f90274331666 819 0 2024-12-13 08:59:05 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:55775f8f projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-2-1-0-c10bd8c210 calico-apiserver-55775f8f-2kt75 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calif3ed80bd617 [] []}} ContainerID="b2acccec2df907ec38fab10b5952c175d0a4c58549fd16bd52bd7ac2c1bc5556" Namespace="calico-apiserver" Pod="calico-apiserver-55775f8f-2kt75" WorkloadEndpoint="ci--4081--2--1--0--c10bd8c210-k8s-calico--apiserver--55775f8f--2kt75-" Dec 13 08:59:29.491289 containerd[1593]: 2024-12-13 08:59:29.168 [INFO][4830] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="b2acccec2df907ec38fab10b5952c175d0a4c58549fd16bd52bd7ac2c1bc5556" Namespace="calico-apiserver" Pod="calico-apiserver-55775f8f-2kt75" WorkloadEndpoint="ci--4081--2--1--0--c10bd8c210-k8s-calico--apiserver--55775f8f--2kt75-eth0" Dec 13 08:59:29.491289 containerd[1593]: 2024-12-13 08:59:29.253 [INFO][4871] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b2acccec2df907ec38fab10b5952c175d0a4c58549fd16bd52bd7ac2c1bc5556" HandleID="k8s-pod-network.b2acccec2df907ec38fab10b5952c175d0a4c58549fd16bd52bd7ac2c1bc5556" Workload="ci--4081--2--1--0--c10bd8c210-k8s-calico--apiserver--55775f8f--2kt75-eth0" Dec 13 08:59:29.491289 containerd[1593]: 2024-12-13 08:59:29.281 [INFO][4871] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b2acccec2df907ec38fab10b5952c175d0a4c58549fd16bd52bd7ac2c1bc5556" HandleID="k8s-pod-network.b2acccec2df907ec38fab10b5952c175d0a4c58549fd16bd52bd7ac2c1bc5556" Workload="ci--4081--2--1--0--c10bd8c210-k8s-calico--apiserver--55775f8f--2kt75-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400028c810), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-2-1-0-c10bd8c210", "pod":"calico-apiserver-55775f8f-2kt75", "timestamp":"2024-12-13 08:59:29.253368624 +0000 UTC"}, Hostname:"ci-4081-2-1-0-c10bd8c210", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 08:59:29.491289 containerd[1593]: 2024-12-13 08:59:29.282 [INFO][4871] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 08:59:29.491289 containerd[1593]: 2024-12-13 08:59:29.362 [INFO][4871] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 08:59:29.491289 containerd[1593]: 2024-12-13 08:59:29.362 [INFO][4871] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-2-1-0-c10bd8c210' Dec 13 08:59:29.491289 containerd[1593]: 2024-12-13 08:59:29.366 [INFO][4871] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.b2acccec2df907ec38fab10b5952c175d0a4c58549fd16bd52bd7ac2c1bc5556" host="ci-4081-2-1-0-c10bd8c210" Dec 13 08:59:29.491289 containerd[1593]: 2024-12-13 08:59:29.377 [INFO][4871] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-2-1-0-c10bd8c210" Dec 13 08:59:29.491289 containerd[1593]: 2024-12-13 08:59:29.398 [INFO][4871] ipam/ipam.go 489: Trying affinity for 192.168.75.128/26 host="ci-4081-2-1-0-c10bd8c210" Dec 13 08:59:29.491289 containerd[1593]: 2024-12-13 08:59:29.403 [INFO][4871] ipam/ipam.go 155: Attempting to load block cidr=192.168.75.128/26 host="ci-4081-2-1-0-c10bd8c210" Dec 13 08:59:29.491289 containerd[1593]: 2024-12-13 08:59:29.416 [INFO][4871] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.75.128/26 host="ci-4081-2-1-0-c10bd8c210" Dec 13 08:59:29.491289 containerd[1593]: 2024-12-13 08:59:29.416 [INFO][4871] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.75.128/26 handle="k8s-pod-network.b2acccec2df907ec38fab10b5952c175d0a4c58549fd16bd52bd7ac2c1bc5556" host="ci-4081-2-1-0-c10bd8c210" Dec 13 08:59:29.491289 containerd[1593]: 2024-12-13 08:59:29.420 [INFO][4871] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.b2acccec2df907ec38fab10b5952c175d0a4c58549fd16bd52bd7ac2c1bc5556 Dec 13 08:59:29.491289 containerd[1593]: 2024-12-13 08:59:29.429 [INFO][4871] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.75.128/26 handle="k8s-pod-network.b2acccec2df907ec38fab10b5952c175d0a4c58549fd16bd52bd7ac2c1bc5556" host="ci-4081-2-1-0-c10bd8c210" Dec 13 08:59:29.491289 containerd[1593]: 2024-12-13 08:59:29.446 [INFO][4871] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.75.134/26] block=192.168.75.128/26 handle="k8s-pod-network.b2acccec2df907ec38fab10b5952c175d0a4c58549fd16bd52bd7ac2c1bc5556" host="ci-4081-2-1-0-c10bd8c210" Dec 13 08:59:29.491289 containerd[1593]: 2024-12-13 08:59:29.446 [INFO][4871] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.75.134/26] handle="k8s-pod-network.b2acccec2df907ec38fab10b5952c175d0a4c58549fd16bd52bd7ac2c1bc5556" host="ci-4081-2-1-0-c10bd8c210" Dec 13 08:59:29.491289 containerd[1593]: 2024-12-13 08:59:29.446 [INFO][4871] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Dec 13 08:59:29.491289 containerd[1593]: 2024-12-13 08:59:29.446 [INFO][4871] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.75.134/26] IPv6=[] ContainerID="b2acccec2df907ec38fab10b5952c175d0a4c58549fd16bd52bd7ac2c1bc5556" HandleID="k8s-pod-network.b2acccec2df907ec38fab10b5952c175d0a4c58549fd16bd52bd7ac2c1bc5556" Workload="ci--4081--2--1--0--c10bd8c210-k8s-calico--apiserver--55775f8f--2kt75-eth0" Dec 13 08:59:29.492057 containerd[1593]: 2024-12-13 08:59:29.455 [INFO][4830] cni-plugin/k8s.go 386: Populated endpoint ContainerID="b2acccec2df907ec38fab10b5952c175d0a4c58549fd16bd52bd7ac2c1bc5556" Namespace="calico-apiserver" Pod="calico-apiserver-55775f8f-2kt75" WorkloadEndpoint="ci--4081--2--1--0--c10bd8c210-k8s-calico--apiserver--55775f8f--2kt75-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--0--c10bd8c210-k8s-calico--apiserver--55775f8f--2kt75-eth0", GenerateName:"calico-apiserver-55775f8f-", Namespace:"calico-apiserver", SelfLink:"", UID:"ebf58ab1-e4cb-4792-853d-f90274331666", ResourceVersion:"819", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 8, 59, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"55775f8f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-0-c10bd8c210", ContainerID:"", Pod:"calico-apiserver-55775f8f-2kt75", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.75.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif3ed80bd617", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 08:59:29.492057 containerd[1593]: 2024-12-13 08:59:29.455 [INFO][4830] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.75.134/32] ContainerID="b2acccec2df907ec38fab10b5952c175d0a4c58549fd16bd52bd7ac2c1bc5556" Namespace="calico-apiserver" Pod="calico-apiserver-55775f8f-2kt75" WorkloadEndpoint="ci--4081--2--1--0--c10bd8c210-k8s-calico--apiserver--55775f8f--2kt75-eth0" Dec 13 08:59:29.492057 containerd[1593]: 2024-12-13 08:59:29.455 [INFO][4830] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif3ed80bd617 ContainerID="b2acccec2df907ec38fab10b5952c175d0a4c58549fd16bd52bd7ac2c1bc5556" Namespace="calico-apiserver" Pod="calico-apiserver-55775f8f-2kt75" WorkloadEndpoint="ci--4081--2--1--0--c10bd8c210-k8s-calico--apiserver--55775f8f--2kt75-eth0" Dec 13 08:59:29.492057 containerd[1593]: 2024-12-13 08:59:29.462 [INFO][4830] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b2acccec2df907ec38fab10b5952c175d0a4c58549fd16bd52bd7ac2c1bc5556" Namespace="calico-apiserver" Pod="calico-apiserver-55775f8f-2kt75" WorkloadEndpoint="ci--4081--2--1--0--c10bd8c210-k8s-calico--apiserver--55775f8f--2kt75-eth0" Dec 13 08:59:29.492057 containerd[1593]: 2024-12-13 08:59:29.463 [INFO][4830] cni-plugin/k8s.go 414: Added Mac, interface 
name, and active container ID to endpoint ContainerID="b2acccec2df907ec38fab10b5952c175d0a4c58549fd16bd52bd7ac2c1bc5556" Namespace="calico-apiserver" Pod="calico-apiserver-55775f8f-2kt75" WorkloadEndpoint="ci--4081--2--1--0--c10bd8c210-k8s-calico--apiserver--55775f8f--2kt75-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--0--c10bd8c210-k8s-calico--apiserver--55775f8f--2kt75-eth0", GenerateName:"calico-apiserver-55775f8f-", Namespace:"calico-apiserver", SelfLink:"", UID:"ebf58ab1-e4cb-4792-853d-f90274331666", ResourceVersion:"819", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 8, 59, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"55775f8f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-0-c10bd8c210", ContainerID:"b2acccec2df907ec38fab10b5952c175d0a4c58549fd16bd52bd7ac2c1bc5556", Pod:"calico-apiserver-55775f8f-2kt75", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.75.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif3ed80bd617", MAC:"02:35:02:ca:e4:67", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 08:59:29.492057 containerd[1593]: 2024-12-13 08:59:29.482 [INFO][4830] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="b2acccec2df907ec38fab10b5952c175d0a4c58549fd16bd52bd7ac2c1bc5556" Namespace="calico-apiserver" Pod="calico-apiserver-55775f8f-2kt75" WorkloadEndpoint="ci--4081--2--1--0--c10bd8c210-k8s-calico--apiserver--55775f8f--2kt75-eth0" Dec 13 08:59:29.513095 containerd[1593]: time="2024-12-13T08:59:29.510125880Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-55775f8f-ncv49,Uid:ebb43a46-8408-46f0-b3a8-196003d5a5b9,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"b482e61994cb96197be1bae51b25bfcad7781b4dca808de6e5da2a10e9cd6cc0\"" Dec 13 08:59:29.535707 containerd[1593]: time="2024-12-13T08:59:29.534733065Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 08:59:29.537115 containerd[1593]: time="2024-12-13T08:59:29.537006034Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 08:59:29.537397 containerd[1593]: time="2024-12-13T08:59:29.537311916Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 08:59:29.537874 containerd[1593]: time="2024-12-13T08:59:29.537842198Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 08:59:29.567289 containerd[1593]: time="2024-12-13T08:59:29.567168483Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-6sr77,Uid:9e0d6dbd-04fe-48c2-bc18-98799d274260,Namespace:kube-system,Attempt:1,} returns sandbox id \"ae76143635a7750dbbcfccfe9ee97f84c3b14b60d7ee0fbb50e18ff20f05b73a\"" Dec 13 08:59:29.574200 containerd[1593]: time="2024-12-13T08:59:29.574157313Z" level=info msg="CreateContainer within sandbox \"ae76143635a7750dbbcfccfe9ee97f84c3b14b60d7ee0fbb50e18ff20f05b73a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 08:59:29.593945 containerd[1593]: time="2024-12-13T08:59:29.593799037Z" level=info msg="CreateContainer within sandbox \"ae76143635a7750dbbcfccfe9ee97f84c3b14b60d7ee0fbb50e18ff20f05b73a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9e4c62ab42da0e666164dddc5584f2969a2708e73ede42afc06c49b1d71d02dc\"" Dec 13 08:59:29.596150 containerd[1593]: time="2024-12-13T08:59:29.596074887Z" level=info msg="StartContainer for \"9e4c62ab42da0e666164dddc5584f2969a2708e73ede42afc06c49b1d71d02dc\"" Dec 13 08:59:29.614410 containerd[1593]: time="2024-12-13T08:59:29.614311564Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-55775f8f-2kt75,Uid:ebf58ab1-e4cb-4792-853d-f90274331666,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"b2acccec2df907ec38fab10b5952c175d0a4c58549fd16bd52bd7ac2c1bc5556\"" Dec 13 08:59:29.633311 systemd-networkd[1243]: calia3ffc869158: Gained IPv6LL Dec 13 08:59:29.672027 containerd[1593]: time="2024-12-13T08:59:29.671970010Z" level=info msg="StartContainer for \"9e4c62ab42da0e666164dddc5584f2969a2708e73ede42afc06c49b1d71d02dc\" returns successfully" Dec 13 08:59:29.922285 systemd[1]: run-netns-cni\x2dd4f3a219\x2d0348\x2d7289\x2d0c24\x2defd8d2c35582.mount: Deactivated successfully. 
Dec 13 08:59:29.981626 containerd[1593]: time="2024-12-13T08:59:29.981565252Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 08:59:29.982721 containerd[1593]: time="2024-12-13T08:59:29.982680697Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7464730" Dec 13 08:59:29.983561 containerd[1593]: time="2024-12-13T08:59:29.983267779Z" level=info msg="ImageCreate event name:\"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 08:59:29.986001 containerd[1593]: time="2024-12-13T08:59:29.985950151Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 08:59:29.986957 containerd[1593]: time="2024-12-13T08:59:29.986795994Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"8834384\" in 1.420870741s" Dec 13 08:59:29.986957 containerd[1593]: time="2024-12-13T08:59:29.986839154Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\"" Dec 13 08:59:29.988368 containerd[1593]: time="2024-12-13T08:59:29.988325041Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Dec 13 08:59:29.990418 containerd[1593]: time="2024-12-13T08:59:29.990380850Z" level=info msg="CreateContainer within sandbox \"dfe4f401295671b62f63ba712f02bf7b0393ae78edef93420d2dd494ae3a6967\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Dec 13 08:59:30.012345 containerd[1593]: time="2024-12-13T08:59:30.012302703Z" level=info msg="CreateContainer within sandbox \"dfe4f401295671b62f63ba712f02bf7b0393ae78edef93420d2dd494ae3a6967\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"040bab4918fdf746cca25b8d5c1c864390770b93de06359044ec7f76cff3bbfe\"" Dec 13 08:59:30.013298 containerd[1593]: time="2024-12-13T08:59:30.013254267Z" level=info msg="StartContainer for \"040bab4918fdf746cca25b8d5c1c864390770b93de06359044ec7f76cff3bbfe\"" Dec 13 08:59:30.099056 kubelet[2949]: I1213 08:59:30.096541 2949 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-6sr77" podStartSLOduration=32.096491863 podStartE2EDuration="32.096491863s" podCreationTimestamp="2024-12-13 08:58:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 08:59:30.090927119 +0000 UTC m=+46.512237826" watchObservedRunningTime="2024-12-13 08:59:30.096491863 +0000 UTC m=+46.517802570" Dec 13 08:59:30.111611 containerd[1593]: time="2024-12-13T08:59:30.108236913Z" level=info msg="StartContainer for \"040bab4918fdf746cca25b8d5c1c864390770b93de06359044ec7f76cff3bbfe\" returns successfully" Dec 13 08:59:30.466786 systemd-networkd[1243]: vxlan.calico: Gained IPv6LL Dec 13 08:59:30.914113 systemd-networkd[1243]: cali6dd09b9c85f: Gained IPv6LL Dec 13 08:59:30.977462 systemd-networkd[1243]: calif3ed80bd617: Gained 
IPv6LL Dec 13 08:59:31.427221 systemd-networkd[1243]: cali21604a22406: Gained IPv6LL Dec 13 08:59:31.921061 containerd[1593]: time="2024-12-13T08:59:31.920983669Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 08:59:31.922664 containerd[1593]: time="2024-12-13T08:59:31.922467675Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=31953828" Dec 13 08:59:31.924476 containerd[1593]: time="2024-12-13T08:59:31.923570080Z" level=info msg="ImageCreate event name:\"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 08:59:31.926479 containerd[1593]: time="2024-12-13T08:59:31.926430212Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 08:59:31.927159 containerd[1593]: time="2024-12-13T08:59:31.927077895Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"33323450\" in 1.938230892s" Dec 13 08:59:31.927159 containerd[1593]: time="2024-12-13T08:59:31.927124255Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\"" Dec 13 08:59:31.927779 containerd[1593]: time="2024-12-13T08:59:31.927741978Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Dec 13 08:59:31.945373 containerd[1593]: time="2024-12-13T08:59:31.944262529Z" level=info msg="CreateContainer within sandbox \"4175583aae43f4eadb6ac289dfc4da8978e7fd2cdbcae477dabcb8907aa5c29c\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Dec 13 08:59:31.964468 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2046740024.mount: Deactivated successfully. 
Dec 13 08:59:31.972820 containerd[1593]: time="2024-12-13T08:59:31.972562330Z" level=info msg="CreateContainer within sandbox \"4175583aae43f4eadb6ac289dfc4da8978e7fd2cdbcae477dabcb8907aa5c29c\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"1af1af68183eae29669ff6732e91b5d2256e036cfc935b776271265cbdb75727\"" Dec 13 08:59:31.973727 containerd[1593]: time="2024-12-13T08:59:31.973609334Z" level=info msg="StartContainer for \"1af1af68183eae29669ff6732e91b5d2256e036cfc935b776271265cbdb75727\"" Dec 13 08:59:32.048506 containerd[1593]: time="2024-12-13T08:59:32.048425295Z" level=info msg="StartContainer for \"1af1af68183eae29669ff6732e91b5d2256e036cfc935b776271265cbdb75727\" returns successfully" Dec 13 08:59:32.108034 kubelet[2949]: I1213 08:59:32.107771 2949 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-645f8cf8f-sbxh2" podStartSLOduration=23.827818173 podStartE2EDuration="27.107605029s" podCreationTimestamp="2024-12-13 08:59:05 +0000 UTC" firstStartedPulling="2024-12-13 08:59:28.648185683 +0000 UTC m=+45.069496390" lastFinishedPulling="2024-12-13 08:59:31.927972459 +0000 UTC m=+48.349283246" observedRunningTime="2024-12-13 08:59:32.106627265 +0000 UTC m=+48.527938012" watchObservedRunningTime="2024-12-13 08:59:32.107605029 +0000 UTC m=+48.528915736" Dec 13 08:59:33.917441 containerd[1593]: time="2024-12-13T08:59:33.915772668Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 08:59:33.917441 containerd[1593]: time="2024-12-13T08:59:33.917270954Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=39298409" Dec 13 08:59:33.920145 containerd[1593]: time="2024-12-13T08:59:33.920100727Z" level=info msg="ImageCreate event name:\"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 08:59:33.923388 containerd[1593]: time="2024-12-13T08:59:33.923345101Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 08:59:33.924105 containerd[1593]: time="2024-12-13T08:59:33.924064184Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"40668079\" in 1.996279966s" Dec 13 08:59:33.924179 containerd[1593]: time="2024-12-13T08:59:33.924106104Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\"" Dec 13 08:59:33.924947 containerd[1593]: time="2024-12-13T08:59:33.924800867Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Dec 13 08:59:33.928527 containerd[1593]: time="2024-12-13T08:59:33.928416562Z" level=info msg="CreateContainer within sandbox \"b482e61994cb96197be1bae51b25bfcad7781b4dca808de6e5da2a10e9cd6cc0\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Dec 13 08:59:33.953308 containerd[1593]: time="2024-12-13T08:59:33.953251909Z" level=info 
msg="CreateContainer within sandbox \"b482e61994cb96197be1bae51b25bfcad7781b4dca808de6e5da2a10e9cd6cc0\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"fd3765b550aed28710b79f4dc76d33b91987610431df74a314f065476a228d46\"" Dec 13 08:59:33.954409 containerd[1593]: time="2024-12-13T08:59:33.954359834Z" level=info msg="StartContainer for \"fd3765b550aed28710b79f4dc76d33b91987610431df74a314f065476a228d46\"" Dec 13 08:59:34.053375 containerd[1593]: time="2024-12-13T08:59:34.053331939Z" level=info msg="StartContainer for \"fd3765b550aed28710b79f4dc76d33b91987610431df74a314f065476a228d46\" returns successfully" Dec 13 08:59:34.313280 containerd[1593]: time="2024-12-13T08:59:34.313106456Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 08:59:34.314284 containerd[1593]: time="2024-12-13T08:59:34.314237541Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77" Dec 13 08:59:34.317323 containerd[1593]: time="2024-12-13T08:59:34.317277074Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"40668079\" in 392.427927ms" Dec 13 08:59:34.317587 containerd[1593]: time="2024-12-13T08:59:34.317484475Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\"" Dec 13 08:59:34.318403 containerd[1593]: time="2024-12-13T08:59:34.318257078Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Dec 13 08:59:34.322822 containerd[1593]: time="2024-12-13T08:59:34.322775298Z" level=info msg="CreateContainer within sandbox \"b2acccec2df907ec38fab10b5952c175d0a4c58549fd16bd52bd7ac2c1bc5556\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Dec 13 08:59:34.340941 containerd[1593]: time="2024-12-13T08:59:34.340873216Z" level=info msg="CreateContainer within sandbox \"b2acccec2df907ec38fab10b5952c175d0a4c58549fd16bd52bd7ac2c1bc5556\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"efa42bba020b6254ed89502a2fe7ae45dd6c235adeb01243e0dc682c3c1151c7\"" Dec 13 08:59:34.347064 containerd[1593]: time="2024-12-13T08:59:34.345298315Z" level=info msg="StartContainer for \"efa42bba020b6254ed89502a2fe7ae45dd6c235adeb01243e0dc682c3c1151c7\"" Dec 13 08:59:34.424504 containerd[1593]: time="2024-12-13T08:59:34.424422535Z" level=info msg="StartContainer for \"efa42bba020b6254ed89502a2fe7ae45dd6c235adeb01243e0dc682c3c1151c7\" returns successfully" Dec 13 08:59:35.110483 kubelet[2949]: I1213 08:59:35.108532 2949 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 08:59:35.138926 kubelet[2949]: I1213 08:59:35.138879 2949 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-55775f8f-ncv49" podStartSLOduration=26.72860836 podStartE2EDuration="31.138131005s" podCreationTimestamp="2024-12-13 08:59:04 +0000 UTC" firstStartedPulling="2024-12-13 08:59:29.51490846 +0000 UTC m=+45.936219167" lastFinishedPulling="2024-12-13 08:59:33.924431105 +0000 UTC m=+50.345741812" observedRunningTime="2024-12-13 
08:59:34.125211288 +0000 UTC m=+50.546522075" watchObservedRunningTime="2024-12-13 08:59:35.138131005 +0000 UTC m=+51.559441712" Dec 13 08:59:36.111809 kubelet[2949]: I1213 08:59:36.111432 2949 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 08:59:36.253502 containerd[1593]: time="2024-12-13T08:59:36.252110603Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=9883368" Dec 13 08:59:36.263170 containerd[1593]: time="2024-12-13T08:59:36.263121011Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 08:59:36.266083 containerd[1593]: time="2024-12-13T08:59:36.265781422Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11252974\" in 1.947474504s" Dec 13 08:59:36.266083 containerd[1593]: time="2024-12-13T08:59:36.266087663Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\"" Dec 13 08:59:36.272905 containerd[1593]: time="2024-12-13T08:59:36.266860427Z" level=info msg="ImageCreate event name:\"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 08:59:36.272905 containerd[1593]: time="2024-12-13T08:59:36.267550390Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 08:59:36.277149 containerd[1593]: time="2024-12-13T08:59:36.277097831Z" level=info msg="CreateContainer within sandbox \"dfe4f401295671b62f63ba712f02bf7b0393ae78edef93420d2dd494ae3a6967\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Dec 13 08:59:36.361007 containerd[1593]: time="2024-12-13T08:59:36.360092389Z" level=info msg="CreateContainer within sandbox \"dfe4f401295671b62f63ba712f02bf7b0393ae78edef93420d2dd494ae3a6967\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"c7fc4feebd1a41cdc307b5504b5fbfd009f9376e0c00685ba8dcc6113e192af1\"" Dec 13 08:59:36.364776 containerd[1593]: time="2024-12-13T08:59:36.364610168Z" level=info msg="StartContainer for \"c7fc4feebd1a41cdc307b5504b5fbfd009f9376e0c00685ba8dcc6113e192af1\"" Dec 13 08:59:36.475463 containerd[1593]: time="2024-12-13T08:59:36.475333686Z" level=info msg="StartContainer for \"c7fc4feebd1a41cdc307b5504b5fbfd009f9376e0c00685ba8dcc6113e192af1\" returns successfully" Dec 13 08:59:36.508289 kubelet[2949]: I1213 08:59:36.508251 2949 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 08:59:36.543277 kubelet[2949]: I1213 08:59:36.543226 2949 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-55775f8f-2kt75" podStartSLOduration=26.841548914 podStartE2EDuration="31.543171138s" podCreationTimestamp="2024-12-13 08:59:05 +0000 UTC" firstStartedPulling="2024-12-13 08:59:29.616298173 +0000 UTC 
m=+46.037608880" lastFinishedPulling="2024-12-13 08:59:34.317920397 +0000 UTC m=+50.739231104" observedRunningTime="2024-12-13 08:59:35.136642558 +0000 UTC m=+51.557953265" watchObservedRunningTime="2024-12-13 08:59:36.543171138 +0000 UTC m=+52.964481845" Dec 13 08:59:36.864139 kubelet[2949]: I1213 08:59:36.864071 2949 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Dec 13 08:59:36.864139 kubelet[2949]: I1213 08:59:36.864128 2949 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Dec 13 08:59:37.140755 kubelet[2949]: I1213 08:59:37.139455 2949 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-4blkf" podStartSLOduration=24.407495603 podStartE2EDuration="32.13940763s" podCreationTimestamp="2024-12-13 08:59:05 +0000 UTC" firstStartedPulling="2024-12-13 08:59:28.540257583 +0000 UTC m=+44.961568250" lastFinishedPulling="2024-12-13 08:59:36.27216957 +0000 UTC m=+52.693480277" observedRunningTime="2024-12-13 08:59:37.137566262 +0000 UTC m=+53.558877009" watchObservedRunningTime="2024-12-13 08:59:37.13940763 +0000 UTC m=+53.560718337" Dec 13 08:59:43.731439 containerd[1593]: time="2024-12-13T08:59:43.731339745Z" level=info msg="StopPodSandbox for \"8ebe8a031831ef20a796b2db9e54f4bfcf115ccec4d1a3330a11545ac1712fdf\"" Dec 13 08:59:43.833551 containerd[1593]: 2024-12-13 08:59:43.784 [WARNING][5340] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="8ebe8a031831ef20a796b2db9e54f4bfcf115ccec4d1a3330a11545ac1712fdf" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--0--c10bd8c210-k8s-calico--apiserver--55775f8f--ncv49-eth0", GenerateName:"calico-apiserver-55775f8f-", Namespace:"calico-apiserver", SelfLink:"", UID:"ebb43a46-8408-46f0-b3a8-196003d5a5b9", ResourceVersion:"886", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 8, 59, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"55775f8f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-0-c10bd8c210", ContainerID:"b482e61994cb96197be1bae51b25bfcad7781b4dca808de6e5da2a10e9cd6cc0", Pod:"calico-apiserver-55775f8f-ncv49", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.75.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6dd09b9c85f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 08:59:43.833551 containerd[1593]: 2024-12-13 08:59:43.784 [INFO][5340] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="8ebe8a031831ef20a796b2db9e54f4bfcf115ccec4d1a3330a11545ac1712fdf" Dec 13 08:59:43.833551 containerd[1593]: 
2024-12-13 08:59:43.784 [INFO][5340] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8ebe8a031831ef20a796b2db9e54f4bfcf115ccec4d1a3330a11545ac1712fdf" iface="eth0" netns="" Dec 13 08:59:43.833551 containerd[1593]: 2024-12-13 08:59:43.784 [INFO][5340] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="8ebe8a031831ef20a796b2db9e54f4bfcf115ccec4d1a3330a11545ac1712fdf" Dec 13 08:59:43.833551 containerd[1593]: 2024-12-13 08:59:43.784 [INFO][5340] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8ebe8a031831ef20a796b2db9e54f4bfcf115ccec4d1a3330a11545ac1712fdf" Dec 13 08:59:43.833551 containerd[1593]: 2024-12-13 08:59:43.807 [INFO][5348] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8ebe8a031831ef20a796b2db9e54f4bfcf115ccec4d1a3330a11545ac1712fdf" HandleID="k8s-pod-network.8ebe8a031831ef20a796b2db9e54f4bfcf115ccec4d1a3330a11545ac1712fdf" Workload="ci--4081--2--1--0--c10bd8c210-k8s-calico--apiserver--55775f8f--ncv49-eth0" Dec 13 08:59:43.833551 containerd[1593]: 2024-12-13 08:59:43.807 [INFO][5348] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 08:59:43.833551 containerd[1593]: 2024-12-13 08:59:43.807 [INFO][5348] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 08:59:43.833551 containerd[1593]: 2024-12-13 08:59:43.822 [WARNING][5348] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="8ebe8a031831ef20a796b2db9e54f4bfcf115ccec4d1a3330a11545ac1712fdf" HandleID="k8s-pod-network.8ebe8a031831ef20a796b2db9e54f4bfcf115ccec4d1a3330a11545ac1712fdf" Workload="ci--4081--2--1--0--c10bd8c210-k8s-calico--apiserver--55775f8f--ncv49-eth0" Dec 13 08:59:43.833551 containerd[1593]: 2024-12-13 08:59:43.822 [INFO][5348] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8ebe8a031831ef20a796b2db9e54f4bfcf115ccec4d1a3330a11545ac1712fdf" HandleID="k8s-pod-network.8ebe8a031831ef20a796b2db9e54f4bfcf115ccec4d1a3330a11545ac1712fdf" Workload="ci--4081--2--1--0--c10bd8c210-k8s-calico--apiserver--55775f8f--ncv49-eth0" Dec 13 08:59:43.833551 containerd[1593]: 2024-12-13 08:59:43.825 [INFO][5348] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 08:59:43.833551 containerd[1593]: 2024-12-13 08:59:43.829 [INFO][5340] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="8ebe8a031831ef20a796b2db9e54f4bfcf115ccec4d1a3330a11545ac1712fdf" Dec 13 08:59:43.833551 containerd[1593]: time="2024-12-13T08:59:43.833423429Z" level=info msg="TearDown network for sandbox \"8ebe8a031831ef20a796b2db9e54f4bfcf115ccec4d1a3330a11545ac1712fdf\" successfully" Dec 13 08:59:43.833551 containerd[1593]: time="2024-12-13T08:59:43.833450549Z" level=info msg="StopPodSandbox for \"8ebe8a031831ef20a796b2db9e54f4bfcf115ccec4d1a3330a11545ac1712fdf\" returns successfully" Dec 13 08:59:43.834923 containerd[1593]: time="2024-12-13T08:59:43.834446874Z" level=info msg="RemovePodSandbox for \"8ebe8a031831ef20a796b2db9e54f4bfcf115ccec4d1a3330a11545ac1712fdf\"" Dec 13 08:59:43.834923 containerd[1593]: time="2024-12-13T08:59:43.834482634Z" level=info msg="Forcibly stopping sandbox \"8ebe8a031831ef20a796b2db9e54f4bfcf115ccec4d1a3330a11545ac1712fdf\"" Dec 13 08:59:43.939519 containerd[1593]: 2024-12-13 08:59:43.888 [WARNING][5366] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8ebe8a031831ef20a796b2db9e54f4bfcf115ccec4d1a3330a11545ac1712fdf" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--0--c10bd8c210-k8s-calico--apiserver--55775f8f--ncv49-eth0", GenerateName:"calico-apiserver-55775f8f-", Namespace:"calico-apiserver", SelfLink:"", UID:"ebb43a46-8408-46f0-b3a8-196003d5a5b9", ResourceVersion:"886", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 8, 59, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"55775f8f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-0-c10bd8c210", ContainerID:"b482e61994cb96197be1bae51b25bfcad7781b4dca808de6e5da2a10e9cd6cc0", Pod:"calico-apiserver-55775f8f-ncv49", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.75.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6dd09b9c85f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 08:59:43.939519 containerd[1593]: 2024-12-13 08:59:43.888 [INFO][5366] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="8ebe8a031831ef20a796b2db9e54f4bfcf115ccec4d1a3330a11545ac1712fdf" Dec 13 08:59:43.939519 containerd[1593]: 2024-12-13 08:59:43.888 [INFO][5366] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8ebe8a031831ef20a796b2db9e54f4bfcf115ccec4d1a3330a11545ac1712fdf" iface="eth0" netns="" Dec 13 08:59:43.939519 containerd[1593]: 2024-12-13 08:59:43.888 [INFO][5366] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="8ebe8a031831ef20a796b2db9e54f4bfcf115ccec4d1a3330a11545ac1712fdf" Dec 13 08:59:43.939519 containerd[1593]: 2024-12-13 08:59:43.888 [INFO][5366] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8ebe8a031831ef20a796b2db9e54f4bfcf115ccec4d1a3330a11545ac1712fdf" Dec 13 08:59:43.939519 containerd[1593]: 2024-12-13 08:59:43.915 [INFO][5372] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8ebe8a031831ef20a796b2db9e54f4bfcf115ccec4d1a3330a11545ac1712fdf" HandleID="k8s-pod-network.8ebe8a031831ef20a796b2db9e54f4bfcf115ccec4d1a3330a11545ac1712fdf" Workload="ci--4081--2--1--0--c10bd8c210-k8s-calico--apiserver--55775f8f--ncv49-eth0" Dec 13 08:59:43.939519 containerd[1593]: 2024-12-13 08:59:43.915 [INFO][5372] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 08:59:43.939519 containerd[1593]: 2024-12-13 08:59:43.915 [INFO][5372] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 08:59:43.939519 containerd[1593]: 2024-12-13 08:59:43.931 [WARNING][5372] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8ebe8a031831ef20a796b2db9e54f4bfcf115ccec4d1a3330a11545ac1712fdf" HandleID="k8s-pod-network.8ebe8a031831ef20a796b2db9e54f4bfcf115ccec4d1a3330a11545ac1712fdf" Workload="ci--4081--2--1--0--c10bd8c210-k8s-calico--apiserver--55775f8f--ncv49-eth0" Dec 13 08:59:43.939519 containerd[1593]: 2024-12-13 08:59:43.931 [INFO][5372] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8ebe8a031831ef20a796b2db9e54f4bfcf115ccec4d1a3330a11545ac1712fdf" HandleID="k8s-pod-network.8ebe8a031831ef20a796b2db9e54f4bfcf115ccec4d1a3330a11545ac1712fdf" Workload="ci--4081--2--1--0--c10bd8c210-k8s-calico--apiserver--55775f8f--ncv49-eth0" Dec 13 08:59:43.939519 containerd[1593]: 2024-12-13 08:59:43.934 [INFO][5372] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 08:59:43.939519 containerd[1593]: 2024-12-13 08:59:43.937 [INFO][5366] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="8ebe8a031831ef20a796b2db9e54f4bfcf115ccec4d1a3330a11545ac1712fdf" Dec 13 08:59:43.940730 containerd[1593]: time="2024-12-13T08:59:43.939779291Z" level=info msg="TearDown network for sandbox \"8ebe8a031831ef20a796b2db9e54f4bfcf115ccec4d1a3330a11545ac1712fdf\" successfully" Dec 13 08:59:43.947042 containerd[1593]: time="2024-12-13T08:59:43.946797442Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8ebe8a031831ef20a796b2db9e54f4bfcf115ccec4d1a3330a11545ac1712fdf\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 08:59:43.947042 containerd[1593]: time="2024-12-13T08:59:43.946901922Z" level=info msg="RemovePodSandbox \"8ebe8a031831ef20a796b2db9e54f4bfcf115ccec4d1a3330a11545ac1712fdf\" returns successfully" Dec 13 08:59:43.947821 containerd[1593]: time="2024-12-13T08:59:43.947529445Z" level=info msg="StopPodSandbox for \"0e6afc49edbf356127d65bfb8889e1aecee79f2be8071e1176dde2d3660d8aa0\"" Dec 13 08:59:44.047086 containerd[1593]: 2024-12-13 08:59:43.997 [WARNING][5390] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0e6afc49edbf356127d65bfb8889e1aecee79f2be8071e1176dde2d3660d8aa0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--0--c10bd8c210-k8s-calico--apiserver--55775f8f--2kt75-eth0", GenerateName:"calico-apiserver-55775f8f-", Namespace:"calico-apiserver", SelfLink:"", UID:"ebf58ab1-e4cb-4792-853d-f90274331666", ResourceVersion:"877", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 8, 59, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"55775f8f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-0-c10bd8c210", ContainerID:"b2acccec2df907ec38fab10b5952c175d0a4c58549fd16bd52bd7ac2c1bc5556", Pod:"calico-apiserver-55775f8f-2kt75", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.75.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif3ed80bd617", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 08:59:44.047086 containerd[1593]: 2024-12-13 08:59:43.997 [INFO][5390] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="0e6afc49edbf356127d65bfb8889e1aecee79f2be8071e1176dde2d3660d8aa0" Dec 13 08:59:44.047086 containerd[1593]: 2024-12-13 08:59:43.997 [INFO][5390] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0e6afc49edbf356127d65bfb8889e1aecee79f2be8071e1176dde2d3660d8aa0" iface="eth0" netns="" Dec 13 08:59:44.047086 containerd[1593]: 2024-12-13 08:59:43.997 [INFO][5390] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="0e6afc49edbf356127d65bfb8889e1aecee79f2be8071e1176dde2d3660d8aa0" Dec 13 08:59:44.047086 containerd[1593]: 2024-12-13 08:59:43.997 [INFO][5390] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0e6afc49edbf356127d65bfb8889e1aecee79f2be8071e1176dde2d3660d8aa0" Dec 13 08:59:44.047086 containerd[1593]: 2024-12-13 08:59:44.027 [INFO][5397] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0e6afc49edbf356127d65bfb8889e1aecee79f2be8071e1176dde2d3660d8aa0" HandleID="k8s-pod-network.0e6afc49edbf356127d65bfb8889e1aecee79f2be8071e1176dde2d3660d8aa0" Workload="ci--4081--2--1--0--c10bd8c210-k8s-calico--apiserver--55775f8f--2kt75-eth0" Dec 13 08:59:44.047086 containerd[1593]: 2024-12-13 08:59:44.027 [INFO][5397] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 08:59:44.047086 containerd[1593]: 2024-12-13 08:59:44.027 [INFO][5397] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 08:59:44.047086 containerd[1593]: 2024-12-13 08:59:44.038 [WARNING][5397] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0e6afc49edbf356127d65bfb8889e1aecee79f2be8071e1176dde2d3660d8aa0" HandleID="k8s-pod-network.0e6afc49edbf356127d65bfb8889e1aecee79f2be8071e1176dde2d3660d8aa0" Workload="ci--4081--2--1--0--c10bd8c210-k8s-calico--apiserver--55775f8f--2kt75-eth0" Dec 13 08:59:44.047086 containerd[1593]: 2024-12-13 08:59:44.038 [INFO][5397] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0e6afc49edbf356127d65bfb8889e1aecee79f2be8071e1176dde2d3660d8aa0" HandleID="k8s-pod-network.0e6afc49edbf356127d65bfb8889e1aecee79f2be8071e1176dde2d3660d8aa0" Workload="ci--4081--2--1--0--c10bd8c210-k8s-calico--apiserver--55775f8f--2kt75-eth0" Dec 13 08:59:44.047086 containerd[1593]: 2024-12-13 08:59:44.042 [INFO][5397] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 08:59:44.047086 containerd[1593]: 2024-12-13 08:59:44.045 [INFO][5390] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="0e6afc49edbf356127d65bfb8889e1aecee79f2be8071e1176dde2d3660d8aa0" Dec 13 08:59:44.048691 containerd[1593]: time="2024-12-13T08:59:44.047702321Z" level=info msg="TearDown network for sandbox \"0e6afc49edbf356127d65bfb8889e1aecee79f2be8071e1176dde2d3660d8aa0\" successfully" Dec 13 08:59:44.048691 containerd[1593]: time="2024-12-13T08:59:44.047752441Z" level=info msg="StopPodSandbox for \"0e6afc49edbf356127d65bfb8889e1aecee79f2be8071e1176dde2d3660d8aa0\" returns successfully" Dec 13 08:59:44.049271 containerd[1593]: time="2024-12-13T08:59:44.048889766Z" level=info msg="RemovePodSandbox for \"0e6afc49edbf356127d65bfb8889e1aecee79f2be8071e1176dde2d3660d8aa0\"" Dec 13 08:59:44.049271 containerd[1593]: time="2024-12-13T08:59:44.048956246Z" level=info msg="Forcibly stopping sandbox \"0e6afc49edbf356127d65bfb8889e1aecee79f2be8071e1176dde2d3660d8aa0\"" Dec 13 08:59:44.156069 containerd[1593]: 2024-12-13 08:59:44.092 [WARNING][5415] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0e6afc49edbf356127d65bfb8889e1aecee79f2be8071e1176dde2d3660d8aa0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--0--c10bd8c210-k8s-calico--apiserver--55775f8f--2kt75-eth0", GenerateName:"calico-apiserver-55775f8f-", Namespace:"calico-apiserver", SelfLink:"", UID:"ebf58ab1-e4cb-4792-853d-f90274331666", ResourceVersion:"877", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 8, 59, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"55775f8f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-0-c10bd8c210", ContainerID:"b2acccec2df907ec38fab10b5952c175d0a4c58549fd16bd52bd7ac2c1bc5556", Pod:"calico-apiserver-55775f8f-2kt75", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.75.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif3ed80bd617", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 08:59:44.156069 containerd[1593]: 2024-12-13 08:59:44.093 [INFO][5415] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="0e6afc49edbf356127d65bfb8889e1aecee79f2be8071e1176dde2d3660d8aa0" Dec 13 08:59:44.156069 containerd[1593]: 2024-12-13 08:59:44.093 [INFO][5415] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0e6afc49edbf356127d65bfb8889e1aecee79f2be8071e1176dde2d3660d8aa0" iface="eth0" netns="" Dec 13 08:59:44.156069 containerd[1593]: 2024-12-13 08:59:44.093 [INFO][5415] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="0e6afc49edbf356127d65bfb8889e1aecee79f2be8071e1176dde2d3660d8aa0" Dec 13 08:59:44.156069 containerd[1593]: 2024-12-13 08:59:44.093 [INFO][5415] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0e6afc49edbf356127d65bfb8889e1aecee79f2be8071e1176dde2d3660d8aa0" Dec 13 08:59:44.156069 containerd[1593]: 2024-12-13 08:59:44.136 [INFO][5422] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0e6afc49edbf356127d65bfb8889e1aecee79f2be8071e1176dde2d3660d8aa0" HandleID="k8s-pod-network.0e6afc49edbf356127d65bfb8889e1aecee79f2be8071e1176dde2d3660d8aa0" Workload="ci--4081--2--1--0--c10bd8c210-k8s-calico--apiserver--55775f8f--2kt75-eth0" Dec 13 08:59:44.156069 containerd[1593]: 2024-12-13 08:59:44.137 [INFO][5422] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 08:59:44.156069 containerd[1593]: 2024-12-13 08:59:44.137 [INFO][5422] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 08:59:44.156069 containerd[1593]: 2024-12-13 08:59:44.148 [WARNING][5422] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0e6afc49edbf356127d65bfb8889e1aecee79f2be8071e1176dde2d3660d8aa0" HandleID="k8s-pod-network.0e6afc49edbf356127d65bfb8889e1aecee79f2be8071e1176dde2d3660d8aa0" Workload="ci--4081--2--1--0--c10bd8c210-k8s-calico--apiserver--55775f8f--2kt75-eth0" Dec 13 08:59:44.156069 containerd[1593]: 2024-12-13 08:59:44.148 [INFO][5422] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0e6afc49edbf356127d65bfb8889e1aecee79f2be8071e1176dde2d3660d8aa0" HandleID="k8s-pod-network.0e6afc49edbf356127d65bfb8889e1aecee79f2be8071e1176dde2d3660d8aa0" Workload="ci--4081--2--1--0--c10bd8c210-k8s-calico--apiserver--55775f8f--2kt75-eth0" Dec 13 08:59:44.156069 containerd[1593]: 2024-12-13 08:59:44.151 [INFO][5422] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 08:59:44.156069 containerd[1593]: 2024-12-13 08:59:44.154 [INFO][5415] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="0e6afc49edbf356127d65bfb8889e1aecee79f2be8071e1176dde2d3660d8aa0" Dec 13 08:59:44.157976 containerd[1593]: time="2024-12-13T08:59:44.156896916Z" level=info msg="TearDown network for sandbox \"0e6afc49edbf356127d65bfb8889e1aecee79f2be8071e1176dde2d3660d8aa0\" successfully" Dec 13 08:59:44.161870 containerd[1593]: time="2024-12-13T08:59:44.161824137Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0e6afc49edbf356127d65bfb8889e1aecee79f2be8071e1176dde2d3660d8aa0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 08:59:44.162097 containerd[1593]: time="2024-12-13T08:59:44.162076298Z" level=info msg="RemovePodSandbox \"0e6afc49edbf356127d65bfb8889e1aecee79f2be8071e1176dde2d3660d8aa0\" returns successfully" Dec 13 08:59:44.162930 containerd[1593]: time="2024-12-13T08:59:44.162890302Z" level=info msg="StopPodSandbox for \"b9f1a22ddecf61e3046f3304749a8cd58c67b7e6f67cc666a7648c226f508de6\"" Dec 13 08:59:44.260974 containerd[1593]: 2024-12-13 08:59:44.214 [WARNING][5440] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b9f1a22ddecf61e3046f3304749a8cd58c67b7e6f67cc666a7648c226f508de6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--0--c10bd8c210-k8s-csi--node--driver--4blkf-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"68d5070c-72c1-493f-9630-8955fb2d5362", ResourceVersion:"896", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 8, 59, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-0-c10bd8c210", ContainerID:"dfe4f401295671b62f63ba712f02bf7b0393ae78edef93420d2dd494ae3a6967", Pod:"csi-node-driver-4blkf", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.75.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calid7df9695e01", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 08:59:44.260974 containerd[1593]: 2024-12-13 08:59:44.214 [INFO][5440] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="b9f1a22ddecf61e3046f3304749a8cd58c67b7e6f67cc666a7648c226f508de6" Dec 13 08:59:44.260974 containerd[1593]: 2024-12-13 08:59:44.214 [INFO][5440] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b9f1a22ddecf61e3046f3304749a8cd58c67b7e6f67cc666a7648c226f508de6" iface="eth0" netns="" Dec 13 08:59:44.260974 containerd[1593]: 2024-12-13 08:59:44.214 [INFO][5440] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b9f1a22ddecf61e3046f3304749a8cd58c67b7e6f67cc666a7648c226f508de6" Dec 13 08:59:44.260974 containerd[1593]: 2024-12-13 08:59:44.214 [INFO][5440] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b9f1a22ddecf61e3046f3304749a8cd58c67b7e6f67cc666a7648c226f508de6" Dec 13 08:59:44.260974 containerd[1593]: 2024-12-13 08:59:44.241 [INFO][5447] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b9f1a22ddecf61e3046f3304749a8cd58c67b7e6f67cc666a7648c226f508de6" HandleID="k8s-pod-network.b9f1a22ddecf61e3046f3304749a8cd58c67b7e6f67cc666a7648c226f508de6" Workload="ci--4081--2--1--0--c10bd8c210-k8s-csi--node--driver--4blkf-eth0" Dec 13 08:59:44.260974 containerd[1593]: 2024-12-13 08:59:44.241 [INFO][5447] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 08:59:44.260974 containerd[1593]: 2024-12-13 08:59:44.241 [INFO][5447] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 08:59:44.260974 containerd[1593]: 2024-12-13 08:59:44.253 [WARNING][5447] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b9f1a22ddecf61e3046f3304749a8cd58c67b7e6f67cc666a7648c226f508de6" HandleID="k8s-pod-network.b9f1a22ddecf61e3046f3304749a8cd58c67b7e6f67cc666a7648c226f508de6" Workload="ci--4081--2--1--0--c10bd8c210-k8s-csi--node--driver--4blkf-eth0" Dec 13 08:59:44.260974 containerd[1593]: 2024-12-13 08:59:44.253 [INFO][5447] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b9f1a22ddecf61e3046f3304749a8cd58c67b7e6f67cc666a7648c226f508de6" HandleID="k8s-pod-network.b9f1a22ddecf61e3046f3304749a8cd58c67b7e6f67cc666a7648c226f508de6" Workload="ci--4081--2--1--0--c10bd8c210-k8s-csi--node--driver--4blkf-eth0" Dec 13 08:59:44.260974 containerd[1593]: 2024-12-13 08:59:44.256 [INFO][5447] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 08:59:44.260974 containerd[1593]: 2024-12-13 08:59:44.258 [INFO][5440] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="b9f1a22ddecf61e3046f3304749a8cd58c67b7e6f67cc666a7648c226f508de6" Dec 13 08:59:44.261869 containerd[1593]: time="2024-12-13T08:59:44.261142369Z" level=info msg="TearDown network for sandbox \"b9f1a22ddecf61e3046f3304749a8cd58c67b7e6f67cc666a7648c226f508de6\" successfully" Dec 13 08:59:44.261869 containerd[1593]: time="2024-12-13T08:59:44.261182810Z" level=info msg="StopPodSandbox for \"b9f1a22ddecf61e3046f3304749a8cd58c67b7e6f67cc666a7648c226f508de6\" returns successfully" Dec 13 08:59:44.262276 containerd[1593]: time="2024-12-13T08:59:44.262179494Z" level=info msg="RemovePodSandbox for \"b9f1a22ddecf61e3046f3304749a8cd58c67b7e6f67cc666a7648c226f508de6\"" Dec 13 08:59:44.262276 containerd[1593]: time="2024-12-13T08:59:44.262220374Z" level=info msg="Forcibly stopping sandbox \"b9f1a22ddecf61e3046f3304749a8cd58c67b7e6f67cc666a7648c226f508de6\"" Dec 13 08:59:44.349197 containerd[1593]: 2024-12-13 08:59:44.306 [WARNING][5465] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b9f1a22ddecf61e3046f3304749a8cd58c67b7e6f67cc666a7648c226f508de6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--0--c10bd8c210-k8s-csi--node--driver--4blkf-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"68d5070c-72c1-493f-9630-8955fb2d5362", ResourceVersion:"896", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 8, 59, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-0-c10bd8c210", ContainerID:"dfe4f401295671b62f63ba712f02bf7b0393ae78edef93420d2dd494ae3a6967", Pod:"csi-node-driver-4blkf", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.75.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calid7df9695e01", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 08:59:44.349197 containerd[1593]: 2024-12-13 08:59:44.306 [INFO][5465] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="b9f1a22ddecf61e3046f3304749a8cd58c67b7e6f67cc666a7648c226f508de6" Dec 13 08:59:44.349197 containerd[1593]: 2024-12-13 08:59:44.306 [INFO][5465] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b9f1a22ddecf61e3046f3304749a8cd58c67b7e6f67cc666a7648c226f508de6" iface="eth0" netns="" Dec 13 08:59:44.349197 containerd[1593]: 2024-12-13 08:59:44.306 [INFO][5465] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b9f1a22ddecf61e3046f3304749a8cd58c67b7e6f67cc666a7648c226f508de6" Dec 13 08:59:44.349197 containerd[1593]: 2024-12-13 08:59:44.306 [INFO][5465] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b9f1a22ddecf61e3046f3304749a8cd58c67b7e6f67cc666a7648c226f508de6" Dec 13 08:59:44.349197 containerd[1593]: 2024-12-13 08:59:44.331 [INFO][5471] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b9f1a22ddecf61e3046f3304749a8cd58c67b7e6f67cc666a7648c226f508de6" HandleID="k8s-pod-network.b9f1a22ddecf61e3046f3304749a8cd58c67b7e6f67cc666a7648c226f508de6" Workload="ci--4081--2--1--0--c10bd8c210-k8s-csi--node--driver--4blkf-eth0" Dec 13 08:59:44.349197 containerd[1593]: 2024-12-13 08:59:44.331 [INFO][5471] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 08:59:44.349197 containerd[1593]: 2024-12-13 08:59:44.331 [INFO][5471] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 08:59:44.349197 containerd[1593]: 2024-12-13 08:59:44.342 [WARNING][5471] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b9f1a22ddecf61e3046f3304749a8cd58c67b7e6f67cc666a7648c226f508de6" HandleID="k8s-pod-network.b9f1a22ddecf61e3046f3304749a8cd58c67b7e6f67cc666a7648c226f508de6" Workload="ci--4081--2--1--0--c10bd8c210-k8s-csi--node--driver--4blkf-eth0" Dec 13 08:59:44.349197 containerd[1593]: 2024-12-13 08:59:44.342 [INFO][5471] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b9f1a22ddecf61e3046f3304749a8cd58c67b7e6f67cc666a7648c226f508de6" HandleID="k8s-pod-network.b9f1a22ddecf61e3046f3304749a8cd58c67b7e6f67cc666a7648c226f508de6" Workload="ci--4081--2--1--0--c10bd8c210-k8s-csi--node--driver--4blkf-eth0" Dec 13 08:59:44.349197 containerd[1593]: 2024-12-13 08:59:44.344 [INFO][5471] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 08:59:44.349197 containerd[1593]: 2024-12-13 08:59:44.346 [INFO][5465] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="b9f1a22ddecf61e3046f3304749a8cd58c67b7e6f67cc666a7648c226f508de6" Dec 13 08:59:44.350540 containerd[1593]: time="2024-12-13T08:59:44.349315913Z" level=info msg="TearDown network for sandbox \"b9f1a22ddecf61e3046f3304749a8cd58c67b7e6f67cc666a7648c226f508de6\" successfully" Dec 13 08:59:44.354448 containerd[1593]: time="2024-12-13T08:59:44.354145214Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b9f1a22ddecf61e3046f3304749a8cd58c67b7e6f67cc666a7648c226f508de6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 08:59:44.354448 containerd[1593]: time="2024-12-13T08:59:44.354331855Z" level=info msg="RemovePodSandbox \"b9f1a22ddecf61e3046f3304749a8cd58c67b7e6f67cc666a7648c226f508de6\" returns successfully" Dec 13 08:59:44.355647 containerd[1593]: time="2024-12-13T08:59:44.355334819Z" level=info msg="StopPodSandbox for \"c01e2fab1976e1c4f3ac367fa2c0800ce67b9c8c563305fd692c747dd0fddc0a\"" Dec 13 08:59:44.447747 containerd[1593]: 2024-12-13 08:59:44.403 [WARNING][5489] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c01e2fab1976e1c4f3ac367fa2c0800ce67b9c8c563305fd692c747dd0fddc0a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--0--c10bd8c210-k8s-calico--kube--controllers--645f8cf8f--sbxh2-eth0", GenerateName:"calico-kube-controllers-645f8cf8f-", Namespace:"calico-system", SelfLink:"", UID:"7dd748b5-18dc-47c1-b24f-5ff405335976", ResourceVersion:"861", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 8, 59, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"645f8cf8f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-0-c10bd8c210", ContainerID:"4175583aae43f4eadb6ac289dfc4da8978e7fd2cdbcae477dabcb8907aa5c29c", Pod:"calico-kube-controllers-645f8cf8f-sbxh2", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.75.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia3ffc869158", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 08:59:44.447747 containerd[1593]: 2024-12-13 08:59:44.403 [INFO][5489] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c01e2fab1976e1c4f3ac367fa2c0800ce67b9c8c563305fd692c747dd0fddc0a" Dec 13 08:59:44.447747 containerd[1593]: 2024-12-13 08:59:44.403 [INFO][5489] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c01e2fab1976e1c4f3ac367fa2c0800ce67b9c8c563305fd692c747dd0fddc0a" iface="eth0" netns="" Dec 13 08:59:44.447747 containerd[1593]: 2024-12-13 08:59:44.403 [INFO][5489] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c01e2fab1976e1c4f3ac367fa2c0800ce67b9c8c563305fd692c747dd0fddc0a" Dec 13 08:59:44.447747 containerd[1593]: 2024-12-13 08:59:44.403 [INFO][5489] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c01e2fab1976e1c4f3ac367fa2c0800ce67b9c8c563305fd692c747dd0fddc0a" Dec 13 08:59:44.447747 containerd[1593]: 2024-12-13 08:59:44.427 [INFO][5495] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c01e2fab1976e1c4f3ac367fa2c0800ce67b9c8c563305fd692c747dd0fddc0a" HandleID="k8s-pod-network.c01e2fab1976e1c4f3ac367fa2c0800ce67b9c8c563305fd692c747dd0fddc0a" Workload="ci--4081--2--1--0--c10bd8c210-k8s-calico--kube--controllers--645f8cf8f--sbxh2-eth0" Dec 13 08:59:44.447747 containerd[1593]: 2024-12-13 08:59:44.428 [INFO][5495] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 08:59:44.447747 containerd[1593]: 2024-12-13 08:59:44.428 [INFO][5495] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 08:59:44.447747 containerd[1593]: 2024-12-13 08:59:44.441 [WARNING][5495] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c01e2fab1976e1c4f3ac367fa2c0800ce67b9c8c563305fd692c747dd0fddc0a" HandleID="k8s-pod-network.c01e2fab1976e1c4f3ac367fa2c0800ce67b9c8c563305fd692c747dd0fddc0a" Workload="ci--4081--2--1--0--c10bd8c210-k8s-calico--kube--controllers--645f8cf8f--sbxh2-eth0" Dec 13 08:59:44.447747 containerd[1593]: 2024-12-13 08:59:44.441 [INFO][5495] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c01e2fab1976e1c4f3ac367fa2c0800ce67b9c8c563305fd692c747dd0fddc0a" HandleID="k8s-pod-network.c01e2fab1976e1c4f3ac367fa2c0800ce67b9c8c563305fd692c747dd0fddc0a" Workload="ci--4081--2--1--0--c10bd8c210-k8s-calico--kube--controllers--645f8cf8f--sbxh2-eth0" Dec 13 08:59:44.447747 containerd[1593]: 2024-12-13 08:59:44.444 [INFO][5495] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 08:59:44.447747 containerd[1593]: 2024-12-13 08:59:44.445 [INFO][5489] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="c01e2fab1976e1c4f3ac367fa2c0800ce67b9c8c563305fd692c747dd0fddc0a" Dec 13 08:59:44.448628 containerd[1593]: time="2024-12-13T08:59:44.448336904Z" level=info msg="TearDown network for sandbox \"c01e2fab1976e1c4f3ac367fa2c0800ce67b9c8c563305fd692c747dd0fddc0a\" successfully" Dec 13 08:59:44.448628 containerd[1593]: time="2024-12-13T08:59:44.448369584Z" level=info msg="StopPodSandbox for \"c01e2fab1976e1c4f3ac367fa2c0800ce67b9c8c563305fd692c747dd0fddc0a\" returns successfully" Dec 13 08:59:44.449359 containerd[1593]: time="2024-12-13T08:59:44.449219748Z" level=info msg="RemovePodSandbox for \"c01e2fab1976e1c4f3ac367fa2c0800ce67b9c8c563305fd692c747dd0fddc0a\"" Dec 13 08:59:44.449359 containerd[1593]: time="2024-12-13T08:59:44.449258868Z" level=info msg="Forcibly stopping sandbox \"c01e2fab1976e1c4f3ac367fa2c0800ce67b9c8c563305fd692c747dd0fddc0a\"" Dec 13 08:59:44.536650 containerd[1593]: 2024-12-13 08:59:44.495 [WARNING][5513] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c01e2fab1976e1c4f3ac367fa2c0800ce67b9c8c563305fd692c747dd0fddc0a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--0--c10bd8c210-k8s-calico--kube--controllers--645f8cf8f--sbxh2-eth0", GenerateName:"calico-kube-controllers-645f8cf8f-", Namespace:"calico-system", SelfLink:"", UID:"7dd748b5-18dc-47c1-b24f-5ff405335976", ResourceVersion:"861", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 8, 59, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"645f8cf8f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-0-c10bd8c210", ContainerID:"4175583aae43f4eadb6ac289dfc4da8978e7fd2cdbcae477dabcb8907aa5c29c", Pod:"calico-kube-controllers-645f8cf8f-sbxh2", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.75.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia3ffc869158", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 08:59:44.536650 containerd[1593]: 2024-12-13 08:59:44.496 [INFO][5513] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c01e2fab1976e1c4f3ac367fa2c0800ce67b9c8c563305fd692c747dd0fddc0a" Dec 13 08:59:44.536650 containerd[1593]: 2024-12-13 08:59:44.496 [INFO][5513] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c01e2fab1976e1c4f3ac367fa2c0800ce67b9c8c563305fd692c747dd0fddc0a" iface="eth0" netns="" Dec 13 08:59:44.536650 containerd[1593]: 2024-12-13 08:59:44.496 [INFO][5513] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c01e2fab1976e1c4f3ac367fa2c0800ce67b9c8c563305fd692c747dd0fddc0a" Dec 13 08:59:44.536650 containerd[1593]: 2024-12-13 08:59:44.496 [INFO][5513] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c01e2fab1976e1c4f3ac367fa2c0800ce67b9c8c563305fd692c747dd0fddc0a" Dec 13 08:59:44.536650 containerd[1593]: 2024-12-13 08:59:44.519 [INFO][5520] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c01e2fab1976e1c4f3ac367fa2c0800ce67b9c8c563305fd692c747dd0fddc0a" HandleID="k8s-pod-network.c01e2fab1976e1c4f3ac367fa2c0800ce67b9c8c563305fd692c747dd0fddc0a" Workload="ci--4081--2--1--0--c10bd8c210-k8s-calico--kube--controllers--645f8cf8f--sbxh2-eth0" Dec 13 08:59:44.536650 containerd[1593]: 2024-12-13 08:59:44.519 [INFO][5520] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 08:59:44.536650 containerd[1593]: 2024-12-13 08:59:44.519 [INFO][5520] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 08:59:44.536650 containerd[1593]: 2024-12-13 08:59:44.529 [WARNING][5520] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c01e2fab1976e1c4f3ac367fa2c0800ce67b9c8c563305fd692c747dd0fddc0a" HandleID="k8s-pod-network.c01e2fab1976e1c4f3ac367fa2c0800ce67b9c8c563305fd692c747dd0fddc0a" Workload="ci--4081--2--1--0--c10bd8c210-k8s-calico--kube--controllers--645f8cf8f--sbxh2-eth0" Dec 13 08:59:44.536650 containerd[1593]: 2024-12-13 08:59:44.529 [INFO][5520] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c01e2fab1976e1c4f3ac367fa2c0800ce67b9c8c563305fd692c747dd0fddc0a" HandleID="k8s-pod-network.c01e2fab1976e1c4f3ac367fa2c0800ce67b9c8c563305fd692c747dd0fddc0a" Workload="ci--4081--2--1--0--c10bd8c210-k8s-calico--kube--controllers--645f8cf8f--sbxh2-eth0" Dec 13 08:59:44.536650 containerd[1593]: 2024-12-13 08:59:44.532 [INFO][5520] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 08:59:44.536650 containerd[1593]: 2024-12-13 08:59:44.534 [INFO][5513] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="c01e2fab1976e1c4f3ac367fa2c0800ce67b9c8c563305fd692c747dd0fddc0a" Dec 13 08:59:44.537800 containerd[1593]: time="2024-12-13T08:59:44.537203571Z" level=info msg="TearDown network for sandbox \"c01e2fab1976e1c4f3ac367fa2c0800ce67b9c8c563305fd692c747dd0fddc0a\" successfully" Dec 13 08:59:44.543723 containerd[1593]: time="2024-12-13T08:59:44.543557358Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c01e2fab1976e1c4f3ac367fa2c0800ce67b9c8c563305fd692c747dd0fddc0a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 08:59:44.543723 containerd[1593]: time="2024-12-13T08:59:44.543678159Z" level=info msg="RemovePodSandbox \"c01e2fab1976e1c4f3ac367fa2c0800ce67b9c8c563305fd692c747dd0fddc0a\" returns successfully" Dec 13 08:59:44.544616 containerd[1593]: time="2024-12-13T08:59:44.544491442Z" level=info msg="StopPodSandbox for \"cf5028c1d1e8e01592bc37054413482de572d625ecd85ef54f13ca1d5f0245a2\"" Dec 13 08:59:44.639761 containerd[1593]: 2024-12-13 08:59:44.590 [WARNING][5538] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="cf5028c1d1e8e01592bc37054413482de572d625ecd85ef54f13ca1d5f0245a2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--0--c10bd8c210-k8s-coredns--76f75df574--ljb78-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"194f2772-a0aa-4063-84c0-dbb7d890f78d", ResourceVersion:"801", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 8, 58, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-0-c10bd8c210", ContainerID:"59213204266974d9a98cba46be936f1b3d9b3081d557b98171cb9c7b354d6cf8", Pod:"coredns-76f75df574-ljb78", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.75.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1f64888e142", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 08:59:44.639761 containerd[1593]: 2024-12-13 08:59:44.591 [INFO][5538] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="cf5028c1d1e8e01592bc37054413482de572d625ecd85ef54f13ca1d5f0245a2" Dec 13 08:59:44.639761 containerd[1593]: 2024-12-13 08:59:44.591 [INFO][5538] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="cf5028c1d1e8e01592bc37054413482de572d625ecd85ef54f13ca1d5f0245a2" iface="eth0" netns="" Dec 13 08:59:44.639761 containerd[1593]: 2024-12-13 08:59:44.591 [INFO][5538] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="cf5028c1d1e8e01592bc37054413482de572d625ecd85ef54f13ca1d5f0245a2" Dec 13 08:59:44.639761 containerd[1593]: 2024-12-13 08:59:44.591 [INFO][5538] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cf5028c1d1e8e01592bc37054413482de572d625ecd85ef54f13ca1d5f0245a2" Dec 13 08:59:44.639761 containerd[1593]: 2024-12-13 08:59:44.616 [INFO][5544] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cf5028c1d1e8e01592bc37054413482de572d625ecd85ef54f13ca1d5f0245a2" HandleID="k8s-pod-network.cf5028c1d1e8e01592bc37054413482de572d625ecd85ef54f13ca1d5f0245a2" Workload="ci--4081--2--1--0--c10bd8c210-k8s-coredns--76f75df574--ljb78-eth0" Dec 13 08:59:44.639761 containerd[1593]: 2024-12-13 08:59:44.616 [INFO][5544] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 08:59:44.639761 containerd[1593]: 2024-12-13 08:59:44.616 [INFO][5544] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 08:59:44.639761 containerd[1593]: 2024-12-13 08:59:44.629 [WARNING][5544] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="cf5028c1d1e8e01592bc37054413482de572d625ecd85ef54f13ca1d5f0245a2" HandleID="k8s-pod-network.cf5028c1d1e8e01592bc37054413482de572d625ecd85ef54f13ca1d5f0245a2" Workload="ci--4081--2--1--0--c10bd8c210-k8s-coredns--76f75df574--ljb78-eth0" Dec 13 08:59:44.639761 containerd[1593]: 2024-12-13 08:59:44.629 [INFO][5544] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cf5028c1d1e8e01592bc37054413482de572d625ecd85ef54f13ca1d5f0245a2" HandleID="k8s-pod-network.cf5028c1d1e8e01592bc37054413482de572d625ecd85ef54f13ca1d5f0245a2" Workload="ci--4081--2--1--0--c10bd8c210-k8s-coredns--76f75df574--ljb78-eth0" Dec 13 08:59:44.639761 containerd[1593]: 2024-12-13 08:59:44.632 [INFO][5544] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 08:59:44.639761 containerd[1593]: 2024-12-13 08:59:44.637 [INFO][5538] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="cf5028c1d1e8e01592bc37054413482de572d625ecd85ef54f13ca1d5f0245a2" Dec 13 08:59:44.641760 containerd[1593]: time="2024-12-13T08:59:44.640223779Z" level=info msg="TearDown network for sandbox \"cf5028c1d1e8e01592bc37054413482de572d625ecd85ef54f13ca1d5f0245a2\" successfully" Dec 13 08:59:44.641760 containerd[1593]: time="2024-12-13T08:59:44.640289179Z" level=info msg="StopPodSandbox for \"cf5028c1d1e8e01592bc37054413482de572d625ecd85ef54f13ca1d5f0245a2\" returns successfully" Dec 13 08:59:44.642729 containerd[1593]: time="2024-12-13T08:59:44.642486989Z" level=info msg="RemovePodSandbox for \"cf5028c1d1e8e01592bc37054413482de572d625ecd85ef54f13ca1d5f0245a2\"" Dec 13 08:59:44.642729 containerd[1593]: time="2024-12-13T08:59:44.642555269Z" level=info msg="Forcibly stopping sandbox \"cf5028c1d1e8e01592bc37054413482de572d625ecd85ef54f13ca1d5f0245a2\"" Dec 13 08:59:44.763494 containerd[1593]: 2024-12-13 08:59:44.712 [WARNING][5562] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="cf5028c1d1e8e01592bc37054413482de572d625ecd85ef54f13ca1d5f0245a2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--0--c10bd8c210-k8s-coredns--76f75df574--ljb78-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"194f2772-a0aa-4063-84c0-dbb7d890f78d", ResourceVersion:"801", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 8, 58, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-0-c10bd8c210", ContainerID:"59213204266974d9a98cba46be936f1b3d9b3081d557b98171cb9c7b354d6cf8", Pod:"coredns-76f75df574-ljb78", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.75.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1f64888e142", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 08:59:44.763494 containerd[1593]: 2024-12-13 08:59:44.713 [INFO][5562] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="cf5028c1d1e8e01592bc37054413482de572d625ecd85ef54f13ca1d5f0245a2" Dec 13 08:59:44.763494 containerd[1593]: 2024-12-13 08:59:44.713 [INFO][5562] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="cf5028c1d1e8e01592bc37054413482de572d625ecd85ef54f13ca1d5f0245a2" iface="eth0" netns="" Dec 13 08:59:44.763494 containerd[1593]: 2024-12-13 08:59:44.713 [INFO][5562] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="cf5028c1d1e8e01592bc37054413482de572d625ecd85ef54f13ca1d5f0245a2" Dec 13 08:59:44.763494 containerd[1593]: 2024-12-13 08:59:44.713 [INFO][5562] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cf5028c1d1e8e01592bc37054413482de572d625ecd85ef54f13ca1d5f0245a2" Dec 13 08:59:44.763494 containerd[1593]: 2024-12-13 08:59:44.743 [INFO][5588] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cf5028c1d1e8e01592bc37054413482de572d625ecd85ef54f13ca1d5f0245a2" HandleID="k8s-pod-network.cf5028c1d1e8e01592bc37054413482de572d625ecd85ef54f13ca1d5f0245a2" Workload="ci--4081--2--1--0--c10bd8c210-k8s-coredns--76f75df574--ljb78-eth0" Dec 13 08:59:44.763494 containerd[1593]: 2024-12-13 08:59:44.743 [INFO][5588] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 08:59:44.763494 containerd[1593]: 2024-12-13 08:59:44.743 [INFO][5588] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 08:59:44.763494 containerd[1593]: 2024-12-13 08:59:44.756 [WARNING][5588] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="cf5028c1d1e8e01592bc37054413482de572d625ecd85ef54f13ca1d5f0245a2" HandleID="k8s-pod-network.cf5028c1d1e8e01592bc37054413482de572d625ecd85ef54f13ca1d5f0245a2" Workload="ci--4081--2--1--0--c10bd8c210-k8s-coredns--76f75df574--ljb78-eth0" Dec 13 08:59:44.763494 containerd[1593]: 2024-12-13 08:59:44.756 [INFO][5588] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cf5028c1d1e8e01592bc37054413482de572d625ecd85ef54f13ca1d5f0245a2" HandleID="k8s-pod-network.cf5028c1d1e8e01592bc37054413482de572d625ecd85ef54f13ca1d5f0245a2" Workload="ci--4081--2--1--0--c10bd8c210-k8s-coredns--76f75df574--ljb78-eth0" Dec 13 08:59:44.763494 containerd[1593]: 2024-12-13 08:59:44.758 [INFO][5588] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 08:59:44.763494 containerd[1593]: 2024-12-13 08:59:44.761 [INFO][5562] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="cf5028c1d1e8e01592bc37054413482de572d625ecd85ef54f13ca1d5f0245a2" Dec 13 08:59:44.765806 containerd[1593]: time="2024-12-13T08:59:44.763829637Z" level=info msg="TearDown network for sandbox \"cf5028c1d1e8e01592bc37054413482de572d625ecd85ef54f13ca1d5f0245a2\" successfully" Dec 13 08:59:44.776818 containerd[1593]: time="2024-12-13T08:59:44.776433331Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"cf5028c1d1e8e01592bc37054413482de572d625ecd85ef54f13ca1d5f0245a2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 08:59:44.776818 containerd[1593]: time="2024-12-13T08:59:44.776556572Z" level=info msg="RemovePodSandbox \"cf5028c1d1e8e01592bc37054413482de572d625ecd85ef54f13ca1d5f0245a2\" returns successfully" Dec 13 08:59:44.778135 containerd[1593]: time="2024-12-13T08:59:44.777689097Z" level=info msg="StopPodSandbox for \"0f218e6c1a541400c97c31881ba6086a8322366010567263854d19f126daf6ac\"" Dec 13 08:59:44.881651 containerd[1593]: 2024-12-13 08:59:44.829 [WARNING][5607] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0f218e6c1a541400c97c31881ba6086a8322366010567263854d19f126daf6ac" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--0--c10bd8c210-k8s-coredns--76f75df574--6sr77-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"9e0d6dbd-04fe-48c2-bc18-98799d274260", ResourceVersion:"844", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 8, 58, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-0-c10bd8c210", ContainerID:"ae76143635a7750dbbcfccfe9ee97f84c3b14b60d7ee0fbb50e18ff20f05b73a", Pod:"coredns-76f75df574-6sr77", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.75.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali21604a22406", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 08:59:44.881651 containerd[1593]: 2024-12-13 08:59:44.830 [INFO][5607] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="0f218e6c1a541400c97c31881ba6086a8322366010567263854d19f126daf6ac" Dec 13 08:59:44.881651 containerd[1593]: 2024-12-13 08:59:44.830 [INFO][5607] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0f218e6c1a541400c97c31881ba6086a8322366010567263854d19f126daf6ac" iface="eth0" netns="" Dec 13 08:59:44.881651 containerd[1593]: 2024-12-13 08:59:44.830 [INFO][5607] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="0f218e6c1a541400c97c31881ba6086a8322366010567263854d19f126daf6ac" Dec 13 08:59:44.881651 containerd[1593]: 2024-12-13 08:59:44.830 [INFO][5607] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0f218e6c1a541400c97c31881ba6086a8322366010567263854d19f126daf6ac" Dec 13 08:59:44.881651 containerd[1593]: 2024-12-13 08:59:44.853 [INFO][5613] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0f218e6c1a541400c97c31881ba6086a8322366010567263854d19f126daf6ac" HandleID="k8s-pod-network.0f218e6c1a541400c97c31881ba6086a8322366010567263854d19f126daf6ac" Workload="ci--4081--2--1--0--c10bd8c210-k8s-coredns--76f75df574--6sr77-eth0" Dec 13 08:59:44.881651 containerd[1593]: 2024-12-13 08:59:44.853 [INFO][5613] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 08:59:44.881651 containerd[1593]: 2024-12-13 08:59:44.853 [INFO][5613] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 08:59:44.881651 containerd[1593]: 2024-12-13 08:59:44.870 [WARNING][5613] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="0f218e6c1a541400c97c31881ba6086a8322366010567263854d19f126daf6ac" HandleID="k8s-pod-network.0f218e6c1a541400c97c31881ba6086a8322366010567263854d19f126daf6ac" Workload="ci--4081--2--1--0--c10bd8c210-k8s-coredns--76f75df574--6sr77-eth0" Dec 13 08:59:44.881651 containerd[1593]: 2024-12-13 08:59:44.871 [INFO][5613] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0f218e6c1a541400c97c31881ba6086a8322366010567263854d19f126daf6ac" HandleID="k8s-pod-network.0f218e6c1a541400c97c31881ba6086a8322366010567263854d19f126daf6ac" Workload="ci--4081--2--1--0--c10bd8c210-k8s-coredns--76f75df574--6sr77-eth0" Dec 13 08:59:44.881651 containerd[1593]: 2024-12-13 08:59:44.877 [INFO][5613] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 08:59:44.881651 containerd[1593]: 2024-12-13 08:59:44.879 [INFO][5607] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="0f218e6c1a541400c97c31881ba6086a8322366010567263854d19f126daf6ac" Dec 13 08:59:44.882722 containerd[1593]: time="2024-12-13T08:59:44.882407472Z" level=info msg="TearDown network for sandbox \"0f218e6c1a541400c97c31881ba6086a8322366010567263854d19f126daf6ac\" successfully" Dec 13 08:59:44.882722 containerd[1593]: time="2024-12-13T08:59:44.882445913Z" level=info msg="StopPodSandbox for \"0f218e6c1a541400c97c31881ba6086a8322366010567263854d19f126daf6ac\" returns successfully" Dec 13 08:59:44.883106 containerd[1593]: time="2024-12-13T08:59:44.883000115Z" level=info msg="RemovePodSandbox for \"0f218e6c1a541400c97c31881ba6086a8322366010567263854d19f126daf6ac\"" Dec 13 08:59:44.883178 containerd[1593]: time="2024-12-13T08:59:44.883115676Z" level=info msg="Forcibly stopping sandbox \"0f218e6c1a541400c97c31881ba6086a8322366010567263854d19f126daf6ac\"" Dec 13 08:59:44.977320 containerd[1593]: 2024-12-13 08:59:44.931 [WARNING][5631] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0f218e6c1a541400c97c31881ba6086a8322366010567263854d19f126daf6ac" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--1--0--c10bd8c210-k8s-coredns--76f75df574--6sr77-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"9e0d6dbd-04fe-48c2-bc18-98799d274260", ResourceVersion:"844", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 8, 58, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-1-0-c10bd8c210", ContainerID:"ae76143635a7750dbbcfccfe9ee97f84c3b14b60d7ee0fbb50e18ff20f05b73a", Pod:"coredns-76f75df574-6sr77", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.75.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali21604a22406", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 08:59:44.977320 containerd[1593]: 2024-12-13 08:59:44.931 [INFO][5631] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="0f218e6c1a541400c97c31881ba6086a8322366010567263854d19f126daf6ac" Dec 13 08:59:44.977320 containerd[1593]: 2024-12-13 08:59:44.932 [INFO][5631] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0f218e6c1a541400c97c31881ba6086a8322366010567263854d19f126daf6ac" iface="eth0" netns="" Dec 13 08:59:44.977320 containerd[1593]: 2024-12-13 08:59:44.932 [INFO][5631] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="0f218e6c1a541400c97c31881ba6086a8322366010567263854d19f126daf6ac" Dec 13 08:59:44.977320 containerd[1593]: 2024-12-13 08:59:44.932 [INFO][5631] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0f218e6c1a541400c97c31881ba6086a8322366010567263854d19f126daf6ac" Dec 13 08:59:44.977320 containerd[1593]: 2024-12-13 08:59:44.956 [INFO][5638] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0f218e6c1a541400c97c31881ba6086a8322366010567263854d19f126daf6ac" HandleID="k8s-pod-network.0f218e6c1a541400c97c31881ba6086a8322366010567263854d19f126daf6ac" Workload="ci--4081--2--1--0--c10bd8c210-k8s-coredns--76f75df574--6sr77-eth0" Dec 13 08:59:44.977320 containerd[1593]: 2024-12-13 08:59:44.956 [INFO][5638] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 08:59:44.977320 containerd[1593]: 2024-12-13 08:59:44.956 [INFO][5638] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 08:59:44.977320 containerd[1593]: 2024-12-13 08:59:44.970 [WARNING][5638] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="0f218e6c1a541400c97c31881ba6086a8322366010567263854d19f126daf6ac" HandleID="k8s-pod-network.0f218e6c1a541400c97c31881ba6086a8322366010567263854d19f126daf6ac" Workload="ci--4081--2--1--0--c10bd8c210-k8s-coredns--76f75df574--6sr77-eth0"
Dec 13 08:59:44.977320 containerd[1593]: 2024-12-13 08:59:44.970 [INFO][5638] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0f218e6c1a541400c97c31881ba6086a8322366010567263854d19f126daf6ac" HandleID="k8s-pod-network.0f218e6c1a541400c97c31881ba6086a8322366010567263854d19f126daf6ac" Workload="ci--4081--2--1--0--c10bd8c210-k8s-coredns--76f75df574--6sr77-eth0"
Dec 13 08:59:44.977320 containerd[1593]: 2024-12-13 08:59:44.973 [INFO][5638] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Dec 13 08:59:44.977320 containerd[1593]: 2024-12-13 08:59:44.975 [INFO][5631] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="0f218e6c1a541400c97c31881ba6086a8322366010567263854d19f126daf6ac"
Dec 13 08:59:44.980203 containerd[1593]: time="2024-12-13T08:59:44.977281205Z" level=info msg="TearDown network for sandbox \"0f218e6c1a541400c97c31881ba6086a8322366010567263854d19f126daf6ac\" successfully"
Dec 13 08:59:44.986218 containerd[1593]: time="2024-12-13T08:59:44.986152924Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0f218e6c1a541400c97c31881ba6086a8322366010567263854d19f126daf6ac\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Dec 13 08:59:44.986633 containerd[1593]: time="2024-12-13T08:59:44.986491285Z" level=info msg="RemovePodSandbox \"0f218e6c1a541400c97c31881ba6086a8322366010567263854d19f126daf6ac\" returns successfully"
Dec 13 08:59:57.003221 kubelet[2949]: I1213 08:59:57.003182 2949 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Dec 13 09:00:45.539000 systemd[1]: run-containerd-runc-k8s.io-57dcd16c001825c817755dc1c277bb709895e6fce803150804d93adf04f9afc3-runc.MXzCEo.mount: Deactivated successfully.
Dec 13 09:02:04.832704 systemd[1]: run-containerd-runc-k8s.io-1af1af68183eae29669ff6732e91b5d2256e036cfc935b776271265cbdb75727-runc.au026I.mount: Deactivated successfully.
Dec 13 09:03:04.817880 systemd[1]: run-containerd-runc-k8s.io-1af1af68183eae29669ff6732e91b5d2256e036cfc935b776271265cbdb75727-runc.J3161h.mount: Deactivated successfully.
Dec 13 09:03:35.815378 systemd[1]: Started sshd@7-138.199.144.99:22-139.178.89.65:34048.service - OpenSSH per-connection server daemon (139.178.89.65:34048).
Dec 13 09:03:36.817599 sshd[6130]: Accepted publickey for core from 139.178.89.65 port 34048 ssh2: RSA SHA256:ptrNtAh5Wl7NWCXBdmMvlbP8mw8o0befcYpQmXzhrMU
Dec 13 09:03:36.819814 sshd[6130]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 09:03:36.834372 systemd-logind[1564]: New session 8 of user core.
Dec 13 09:03:36.840204 systemd[1]: Started session-8.scope - Session 8 of User core.
Dec 13 09:03:37.594381 sshd[6130]: pam_unix(sshd:session): session closed for user core
Dec 13 09:03:37.603094 systemd[1]: sshd@7-138.199.144.99:22-139.178.89.65:34048.service: Deactivated successfully.
Dec 13 09:03:37.611479 systemd[1]: session-8.scope: Deactivated successfully.
Dec 13 09:03:37.613723 systemd-logind[1564]: Session 8 logged out. Waiting for processes to exit.
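The Calico entries above trace one complete CNI teardown for sandbox 0f218e6c...: clean up the netns, release the IP address(es), take the host-wide IPAM lock, attempt the handleID and workloadID releases (both already gone, hence the WARNING), drop the lock, and report "Teardown processing complete." A minimal sketch of how such entries could be grouped per ContainerID to reconstruct that sequence; Python, the regexes, and the input filename are assumptions based only on the line format shown above, not tooling from this host:

    import re
    from collections import defaultdict

    # Matches the "[LEVEL][pid] file.go line: message" entries that the Calico
    # cni-plugin and ipam-plugin emit through containerd in the log above.
    ENTRY = re.compile(
        r'containerd\[\d+\]: \S+ \S+ '
        r'\[(?P<level>INFO|WARNING)\]\[\d+\] (?P<src>\S+ \d+): (?P<msg>.*)$'
    )
    CID = re.compile(r'ContainerID="([0-9a-f]{64})"')

    def trace_teardowns(lines):
        """Group Calico CNI/IPAM phases by the sandbox ContainerID they mention."""
        phases = defaultdict(list)
        for line in lines:
            m = ENTRY.search(line)
            if not m:
                continue
            cid = CID.search(m.group('msg'))
            if cid:
                # Keep the phase text, trimming the trailing ContainerID=... blob.
                phases[cid.group(1)].append((m.group('level'), m.group('src'),
                                             m.group('msg').split(' ContainerID=')[0]))
        return phases

    if __name__ == '__main__':
        with open('containerd.log') as f:  # hypothetical capture of the lines above
            for cid, steps in trace_teardowns(f).items():
                print(cid[:12])
                for level, src, msg in steps:
                    print(f'  [{level}] {src}: {msg}')

Run against a capture of the lines above, this would print the ipam/ and cni-plugin/ phases for the 0f218e6c... sandbox in log order, making the lock-acquire/release bracketing easy to see.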
Dec 13 09:03:37.615501 systemd-logind[1564]: Removed session 8.
Dec 13 09:03:42.762399 systemd[1]: Started sshd@8-138.199.144.99:22-139.178.89.65:43424.service - OpenSSH per-connection server daemon (139.178.89.65:43424).
Dec 13 09:03:43.750415 sshd[6145]: Accepted publickey for core from 139.178.89.65 port 43424 ssh2: RSA SHA256:ptrNtAh5Wl7NWCXBdmMvlbP8mw8o0befcYpQmXzhrMU
Dec 13 09:03:43.754476 sshd[6145]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 09:03:43.777119 systemd-logind[1564]: New session 9 of user core.
Dec 13 09:03:43.783449 systemd[1]: Started session-9.scope - Session 9 of User core.
Dec 13 09:03:44.508682 sshd[6145]: pam_unix(sshd:session): session closed for user core
Dec 13 09:03:44.512936 systemd[1]: sshd@8-138.199.144.99:22-139.178.89.65:43424.service: Deactivated successfully.
Dec 13 09:03:44.518542 systemd[1]: session-9.scope: Deactivated successfully.
Dec 13 09:03:44.520059 systemd-logind[1564]: Session 9 logged out. Waiting for processes to exit.
Dec 13 09:03:44.522695 systemd-logind[1564]: Removed session 9.
Dec 13 09:03:44.683279 systemd[1]: Started sshd@9-138.199.144.99:22-139.178.89.65:43440.service - OpenSSH per-connection server daemon (139.178.89.65:43440).
Dec 13 09:03:45.535819 systemd[1]: run-containerd-runc-k8s.io-57dcd16c001825c817755dc1c277bb709895e6fce803150804d93adf04f9afc3-runc.tQaBMV.mount: Deactivated successfully.
Dec 13 09:03:45.685336 sshd[6179]: Accepted publickey for core from 139.178.89.65 port 43440 ssh2: RSA SHA256:ptrNtAh5Wl7NWCXBdmMvlbP8mw8o0befcYpQmXzhrMU
Dec 13 09:03:45.688996 sshd[6179]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 09:03:45.695055 systemd-logind[1564]: New session 10 of user core.
Dec 13 09:03:45.702565 systemd[1]: Started session-10.scope - Session 10 of User core.
Dec 13 09:03:46.489445 sshd[6179]: pam_unix(sshd:session): session closed for user core
Dec 13 09:03:46.494385 systemd[1]: sshd@9-138.199.144.99:22-139.178.89.65:43440.service: Deactivated successfully.
Dec 13 09:03:46.500352 systemd-logind[1564]: Session 10 logged out. Waiting for processes to exit.
Dec 13 09:03:46.500441 systemd[1]: session-10.scope: Deactivated successfully.
Dec 13 09:03:46.502971 systemd-logind[1564]: Removed session 10.
Dec 13 09:03:46.654595 systemd[1]: Started sshd@10-138.199.144.99:22-139.178.89.65:43450.service - OpenSSH per-connection server daemon (139.178.89.65:43450).
Dec 13 09:03:47.645892 sshd[6215]: Accepted publickey for core from 139.178.89.65 port 43450 ssh2: RSA SHA256:ptrNtAh5Wl7NWCXBdmMvlbP8mw8o0befcYpQmXzhrMU
Dec 13 09:03:47.648010 sshd[6215]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 09:03:47.656204 systemd-logind[1564]: New session 11 of user core.
Dec 13 09:03:47.661661 systemd[1]: Started session-11.scope - Session 11 of User core.
Dec 13 09:03:48.405062 sshd[6215]: pam_unix(sshd:session): session closed for user core
Dec 13 09:03:48.408898 systemd[1]: sshd@10-138.199.144.99:22-139.178.89.65:43450.service: Deactivated successfully.
Dec 13 09:03:48.416240 systemd[1]: session-11.scope: Deactivated successfully.
Dec 13 09:03:48.416449 systemd-logind[1564]: Session 11 logged out. Waiting for processes to exit.
Dec 13 09:03:48.419034 systemd-logind[1564]: Removed session 11.
Dec 13 09:03:53.575661 systemd[1]: Started sshd@11-138.199.144.99:22-139.178.89.65:51114.service - OpenSSH per-connection server daemon (139.178.89.65:51114).
Dec 13 09:03:54.580934 sshd[6233]: Accepted publickey for core from 139.178.89.65 port 51114 ssh2: RSA SHA256:ptrNtAh5Wl7NWCXBdmMvlbP8mw8o0befcYpQmXzhrMU
Dec 13 09:03:54.583944 sshd[6233]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 09:03:54.590390 systemd-logind[1564]: New session 12 of user core.
Dec 13 09:03:54.597529 systemd[1]: Started session-12.scope - Session 12 of User core.
Dec 13 09:03:55.347845 sshd[6233]: pam_unix(sshd:session): session closed for user core
Dec 13 09:03:55.355784 systemd[1]: sshd@11-138.199.144.99:22-139.178.89.65:51114.service: Deactivated successfully.
Dec 13 09:03:55.356259 systemd-logind[1564]: Session 12 logged out. Waiting for processes to exit.
Dec 13 09:03:55.361848 systemd[1]: session-12.scope: Deactivated successfully.
Dec 13 09:03:55.363674 systemd-logind[1564]: Removed session 12.
Dec 13 09:04:00.516730 systemd[1]: Started sshd@12-138.199.144.99:22-139.178.89.65:44260.service - OpenSSH per-connection server daemon (139.178.89.65:44260).
Dec 13 09:04:01.496354 sshd[6249]: Accepted publickey for core from 139.178.89.65 port 44260 ssh2: RSA SHA256:ptrNtAh5Wl7NWCXBdmMvlbP8mw8o0befcYpQmXzhrMU
Dec 13 09:04:01.498795 sshd[6249]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 09:04:01.505964 systemd-logind[1564]: New session 13 of user core.
Dec 13 09:04:01.511414 systemd[1]: Started session-13.scope - Session 13 of User core.
Dec 13 09:04:02.274336 sshd[6249]: pam_unix(sshd:session): session closed for user core
Dec 13 09:04:02.280032 systemd[1]: sshd@12-138.199.144.99:22-139.178.89.65:44260.service: Deactivated successfully.
Dec 13 09:04:02.286734 systemd[1]: session-13.scope: Deactivated successfully.
Dec 13 09:04:02.288361 systemd-logind[1564]: Session 13 logged out. Waiting for processes to exit.
Dec 13 09:04:02.289699 systemd-logind[1564]: Removed session 13.
Dec 13 09:04:04.831355 systemd[1]: run-containerd-runc-k8s.io-1af1af68183eae29669ff6732e91b5d2256e036cfc935b776271265cbdb75727-runc.NYKIJC.mount: Deactivated successfully.
Dec 13 09:04:07.442376 systemd[1]: Started sshd@13-138.199.144.99:22-139.178.89.65:44270.service - OpenSSH per-connection server daemon (139.178.89.65:44270).
Dec 13 09:04:08.029597 systemd[1]: Started sshd@14-138.199.144.99:22-116.120.97.94:39122.service - OpenSSH per-connection server daemon (116.120.97.94:39122).
Dec 13 09:04:08.440786 sshd[6282]: Accepted publickey for core from 139.178.89.65 port 44270 ssh2: RSA SHA256:ptrNtAh5Wl7NWCXBdmMvlbP8mw8o0befcYpQmXzhrMU
Dec 13 09:04:08.442953 sshd[6282]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 09:04:08.449596 systemd-logind[1564]: New session 14 of user core.
Dec 13 09:04:08.453352 systemd[1]: Started session-14.scope - Session 14 of User core.
Dec 13 09:04:09.240987 sshd[6282]: pam_unix(sshd:session): session closed for user core
Dec 13 09:04:09.248275 systemd[1]: sshd@13-138.199.144.99:22-139.178.89.65:44270.service: Deactivated successfully.
Dec 13 09:04:09.249377 systemd-logind[1564]: Session 14 logged out. Waiting for processes to exit.
Dec 13 09:04:09.252077 systemd[1]: session-14.scope: Deactivated successfully.
Dec 13 09:04:09.253679 systemd-logind[1564]: Removed session 14.
Dec 13 09:04:09.409477 systemd[1]: Started sshd@15-138.199.144.99:22-139.178.89.65:34904.service - OpenSSH per-connection server daemon (139.178.89.65:34904).
Dec 13 09:04:10.401298 sshd[6310]: Accepted publickey for core from 139.178.89.65 port 34904 ssh2: RSA SHA256:ptrNtAh5Wl7NWCXBdmMvlbP8mw8o0befcYpQmXzhrMU
Dec 13 09:04:10.404111 sshd[6310]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 09:04:10.411501 systemd-logind[1564]: New session 15 of user core.
Dec 13 09:04:10.414350 systemd[1]: Started session-15.scope - Session 15 of User core.
Dec 13 09:04:11.295391 sshd[6310]: pam_unix(sshd:session): session closed for user core
Dec 13 09:04:11.302135 systemd[1]: sshd@15-138.199.144.99:22-139.178.89.65:34904.service: Deactivated successfully.
Dec 13 09:04:11.306819 systemd[1]: session-15.scope: Deactivated successfully.
Dec 13 09:04:11.308772 systemd-logind[1564]: Session 15 logged out. Waiting for processes to exit.
Dec 13 09:04:11.310042 systemd-logind[1564]: Removed session 15.
Dec 13 09:04:11.459530 systemd[1]: Started sshd@16-138.199.144.99:22-139.178.89.65:34906.service - OpenSSH per-connection server daemon (139.178.89.65:34906).
Dec 13 09:04:12.440681 sshd[6323]: Accepted publickey for core from 139.178.89.65 port 34906 ssh2: RSA SHA256:ptrNtAh5Wl7NWCXBdmMvlbP8mw8o0befcYpQmXzhrMU
Dec 13 09:04:12.442691 sshd[6323]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 09:04:12.451453 systemd-logind[1564]: New session 16 of user core.
Dec 13 09:04:12.457353 systemd[1]: Started session-16.scope - Session 16 of User core.
Dec 13 09:04:13.961237 sshd[6284]: maximum authentication attempts exceeded for root from 116.120.97.94 port 39122 ssh2 [preauth]
Dec 13 09:04:13.961237 sshd[6284]: Disconnecting authenticating user root 116.120.97.94 port 39122: Too many authentication failures [preauth]
Dec 13 09:04:13.964640 systemd[1]: sshd@14-138.199.144.99:22-116.120.97.94:39122.service: Deactivated successfully.
Dec 13 09:04:14.433403 systemd[1]: Started sshd@17-138.199.144.99:22-116.120.97.94:39302.service - OpenSSH per-connection server daemon (116.120.97.94:39302).
Dec 13 09:04:15.063191 sshd[6323]: pam_unix(sshd:session): session closed for user core
Dec 13 09:04:15.069424 systemd-logind[1564]: Session 16 logged out. Waiting for processes to exit.
Dec 13 09:04:15.070356 systemd[1]: sshd@16-138.199.144.99:22-139.178.89.65:34906.service: Deactivated successfully.
Dec 13 09:04:15.074605 systemd[1]: session-16.scope: Deactivated successfully.
Dec 13 09:04:15.076646 systemd-logind[1564]: Removed session 16.
Dec 13 09:04:15.230492 systemd[1]: Started sshd@18-138.199.144.99:22-139.178.89.65:34922.service - OpenSSH per-connection server daemon (139.178.89.65:34922).
Dec 13 09:04:16.216917 sshd[6368]: Accepted publickey for core from 139.178.89.65 port 34922 ssh2: RSA SHA256:ptrNtAh5Wl7NWCXBdmMvlbP8mw8o0befcYpQmXzhrMU
Dec 13 09:04:16.219009 sshd[6368]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 09:04:16.225500 systemd-logind[1564]: New session 17 of user core.
Dec 13 09:04:16.229727 systemd[1]: Started session-17.scope - Session 17 of User core.
Dec 13 09:04:17.122625 sshd[6368]: pam_unix(sshd:session): session closed for user core
Dec 13 09:04:17.127641 systemd-logind[1564]: Session 17 logged out. Waiting for processes to exit.
Dec 13 09:04:17.127931 systemd[1]: sshd@18-138.199.144.99:22-139.178.89.65:34922.service: Deactivated successfully.
Dec 13 09:04:17.134244 systemd[1]: session-17.scope: Deactivated successfully.
Dec 13 09:04:17.136495 systemd-logind[1564]: Removed session 17.
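Each connection above gets its own socket-activated unit (sshd@N-<local>:22-<peer>:<port>.service) and, once pam_unix opens the session, a logind session-N.scope; both are deactivated when the session closes. Meanwhile 116.120.97.94 starts appearing with [preauth] failures interleaved between the legitimate publickey logins from 139.178.89.65. A small sketch of tallying the two per peer address; Python and the input filename are assumptions, not tooling from this host:

    import re
    from collections import Counter

    # Patterns taken from the two kinds of sshd lines visible above.
    ACCEPTED = re.compile(r'sshd\[\d+\]: Accepted \S+ for (\S+) from (\S+) port \d+')
    PREAUTH = re.compile(r'sshd\[\d+\]: .*?(\d{1,3}(?:\.\d{1,3}){3}) port \d+.*\[preauth\]')

    accepted = Counter()  # peer IP -> successful logins
    failed = Counter()    # peer IP -> [preauth] failure lines

    with open('journal.txt') as f:  # hypothetical capture of the lines above
        for line in f:
            m = ACCEPTED.search(line)
            if m:
                accepted[m.group(2)] += 1
                continue
            m = PREAUTH.search(line)
            if m:
                failed[m.group(1)] += 1

    for ip in sorted(set(accepted) | set(failed)):
        print(f'{ip:>16}  accepted={accepted[ip]}  preauth_lines={failed[ip]}')

On the lines above this would show 139.178.89.65 with only accepted publickey sessions and 116.120.97.94 with only [preauth] failures, which is the brute-force pattern that plays out below.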
Dec 13 09:04:17.289562 systemd[1]: Started sshd@19-138.199.144.99:22-139.178.89.65:34936.service - OpenSSH per-connection server daemon (139.178.89.65:34936).
Dec 13 09:04:18.269095 sshd[6405]: Accepted publickey for core from 139.178.89.65 port 34936 ssh2: RSA SHA256:ptrNtAh5Wl7NWCXBdmMvlbP8mw8o0befcYpQmXzhrMU
Dec 13 09:04:18.270814 sshd[6405]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 09:04:18.276694 systemd-logind[1564]: New session 18 of user core.
Dec 13 09:04:18.282737 systemd[1]: Started session-18.scope - Session 18 of User core.
Dec 13 09:04:19.085333 sshd[6405]: pam_unix(sshd:session): session closed for user core
Dec 13 09:04:19.091283 systemd[1]: sshd@19-138.199.144.99:22-139.178.89.65:34936.service: Deactivated successfully.
Dec 13 09:04:19.097250 systemd-logind[1564]: Session 18 logged out. Waiting for processes to exit.
Dec 13 09:04:19.097928 systemd[1]: session-18.scope: Deactivated successfully.
Dec 13 09:04:19.101453 systemd-logind[1564]: Removed session 18.
Dec 13 09:04:20.652332 sshd[6339]: maximum authentication attempts exceeded for root from 116.120.97.94 port 39302 ssh2 [preauth]
Dec 13 09:04:20.652332 sshd[6339]: Disconnecting authenticating user root 116.120.97.94 port 39302: Too many authentication failures [preauth]
Dec 13 09:04:20.656530 systemd[1]: sshd@17-138.199.144.99:22-116.120.97.94:39302.service: Deactivated successfully.
Dec 13 09:04:21.297746 systemd[1]: Started sshd@20-138.199.144.99:22-116.120.97.94:39460.service - OpenSSH per-connection server daemon (116.120.97.94:39460).
Dec 13 09:04:24.248382 systemd[1]: Started sshd@21-138.199.144.99:22-139.178.89.65:41868.service - OpenSSH per-connection server daemon (139.178.89.65:41868).
Dec 13 09:04:25.237447 sshd[6427]: Accepted publickey for core from 139.178.89.65 port 41868 ssh2: RSA SHA256:ptrNtAh5Wl7NWCXBdmMvlbP8mw8o0befcYpQmXzhrMU
Dec 13 09:04:25.238720 sshd[6427]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 09:04:25.243795 systemd-logind[1564]: New session 19 of user core.
Dec 13 09:04:25.248452 systemd[1]: Started session-19.scope - Session 19 of User core.
Dec 13 09:04:25.991282 sshd[6427]: pam_unix(sshd:session): session closed for user core
Dec 13 09:04:25.998905 systemd[1]: sshd@21-138.199.144.99:22-139.178.89.65:41868.service: Deactivated successfully.
Dec 13 09:04:26.007206 systemd[1]: session-19.scope: Deactivated successfully.
Dec 13 09:04:26.008951 systemd-logind[1564]: Session 19 logged out. Waiting for processes to exit.
Dec 13 09:04:26.010347 systemd-logind[1564]: Removed session 19.
Dec 13 09:04:27.555425 sshd[6425]: maximum authentication attempts exceeded for root from 116.120.97.94 port 39460 ssh2 [preauth]
Dec 13 09:04:27.555425 sshd[6425]: Disconnecting authenticating user root 116.120.97.94 port 39460: Too many authentication failures [preauth]
Dec 13 09:04:27.560597 systemd[1]: sshd@20-138.199.144.99:22-116.120.97.94:39460.service: Deactivated successfully.
Dec 13 09:04:28.265591 systemd[1]: Started sshd@22-138.199.144.99:22-116.120.97.94:39622.service - OpenSSH per-connection server daemon (116.120.97.94:39622).
Dec 13 09:04:31.158358 systemd[1]: Started sshd@23-138.199.144.99:22-139.178.89.65:35842.service - OpenSSH per-connection server daemon (139.178.89.65:35842).
Dec 13 09:04:32.151435 sshd[6448]: Accepted publickey for core from 139.178.89.65 port 35842 ssh2: RSA SHA256:ptrNtAh5Wl7NWCXBdmMvlbP8mw8o0befcYpQmXzhrMU
Dec 13 09:04:32.153686 sshd[6448]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 09:04:32.160004 systemd-logind[1564]: New session 20 of user core.
Dec 13 09:04:32.164455 systemd[1]: Started session-20.scope - Session 20 of User core.
Dec 13 09:04:32.909551 sshd[6448]: pam_unix(sshd:session): session closed for user core
Dec 13 09:04:32.915678 systemd[1]: sshd@23-138.199.144.99:22-139.178.89.65:35842.service: Deactivated successfully.
Dec 13 09:04:32.923474 systemd[1]: session-20.scope: Deactivated successfully.
Dec 13 09:04:32.926163 systemd-logind[1564]: Session 20 logged out. Waiting for processes to exit.
Dec 13 09:04:32.928337 systemd-logind[1564]: Removed session 20.
Dec 13 09:04:32.930882 sshd[6444]: Received disconnect from 116.120.97.94 port 39622:11: disconnected by user [preauth]
Dec 13 09:04:32.930882 sshd[6444]: Disconnected from authenticating user root 116.120.97.94 port 39622 [preauth]
Dec 13 09:04:32.935985 systemd[1]: sshd@22-138.199.144.99:22-116.120.97.94:39622.service: Deactivated successfully.
Dec 13 09:04:33.265387 systemd[1]: Started sshd@24-138.199.144.99:22-116.120.97.94:39728.service - OpenSSH per-connection server daemon (116.120.97.94:39728).
Dec 13 09:04:37.242892 sshd[6464]: Invalid user admin from 116.120.97.94 port 39728
Dec 13 09:04:38.865097 sshd[6464]: maximum authentication attempts exceeded for invalid user admin from 116.120.97.94 port 39728 ssh2 [preauth]
Dec 13 09:04:38.865097 sshd[6464]: Disconnecting invalid user admin 116.120.97.94 port 39728: Too many authentication failures [preauth]
Dec 13 09:04:38.868287 systemd[1]: sshd@24-138.199.144.99:22-116.120.97.94:39728.service: Deactivated successfully.
Dec 13 09:04:39.694583 systemd[1]: Started sshd@25-138.199.144.99:22-116.120.97.94:39882.service - OpenSSH per-connection server daemon (116.120.97.94:39882).
Dec 13 09:04:44.458254 sshd[6469]: Invalid user admin from 116.120.97.94 port 39882
Dec 13 09:04:46.467420 sshd[6469]: maximum authentication attempts exceeded for invalid user admin from 116.120.97.94 port 39882 ssh2 [preauth]
Dec 13 09:04:46.467420 sshd[6469]: Disconnecting invalid user admin 116.120.97.94 port 39882: Too many authentication failures [preauth]
Dec 13 09:04:46.470803 systemd[1]: sshd@25-138.199.144.99:22-116.120.97.94:39882.service: Deactivated successfully.
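The [preauth] disconnects from 116.120.97.94 are OpenSSH's MaxAuthTries limit cutting off credential-guessing against "root" and "admin" before authentication completes, while every successful login in this log is a publickey session for "core". As a hedged sketch only, standard sshd_config directives consistent with that behaviour; this host's actual configuration is not shown in the log, and the values below are illustrative defaults:

    # Hedged sshd_config sketch -- standard OpenSSH directives, shown for
    # illustration; not taken from this machine's real configuration.
    PermitRootLogin no            # "root" could never authenticate, regardless of credentials
    PasswordAuthentication no     # publickey only, matching the "Accepted publickey" lines
    MaxAuthTries 6                # yields "Too many authentication failures [preauth]" disconnects

A rate-limiting tool such as fail2ban could additionally firewall a source address after repeated [preauth] failures like the runs from 116.120.97.94 above, rather than letting it reconnect for each new attempt.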